1 Legal and enforcement framework
1.1 In broad terms, which legislative and regulatory provisions govern AI in your jurisdiction?
As yet, there is no legislation in Ireland that covers AI specifically. However, this does not mean that there are no laws which apply to it. In the absence of specific legislation or regulation, ‘real-world' legislation and regulation will continue to play a role in determining the complex legal questions which AI continues to pose.
1.2 How is established or ‘background' law evolving to cover AI in your jurisdiction?
This may be less a case of evolution and more of trying to fit a square peg into a round hole. AI is not the first ‘concept' to emerge and make the law look obsolete. Think of click-wrap in the 1990s (and more recently, digital services and the cloud), where we were constantly trying to adapt copyright legislation, for example, which was more suited to old-style licences signed in manuscript.
Until more specific legislation and regulations are in place, the legal rules governing AI are likely to be a combination of existing law and contractual provisions to fill the gaps. So, for example, under the Copyright and Related Rights Act 2000, which governs copyright law in Ireland, copyright can be owned only by a legal person – effectively an individual, company or partnership. It essentially has to be created by an individual, so a machine that creates content cannot be the legal owner. Therefore, it will be necessary to determine any ownership of that content through contractual agreement.
1.3 Is there a general duty in your jurisdiction to take reasonable care (like the tort of negligence in the United Kingdom) when using AI?
Yes, Irish law is very similar to English law in this respect and has gone on the same journey. Irish courts first followed the English cases of Donoghue v Stevenson and Anns v Merton Urban District Council; and they embraced the more expansive view of a duty of care followed by the English courts thereafter, until this was reined in by the decision in Murphy v Brentwood DC.
The Irish Supreme Court judgment in Glencar Exploration plc v Mayo County Council confirmed that the Irish courts had restated their position to be in line with their English counterparts.
For a claim based on negligence to succeed, three elements in a chain must be shown:
- the existence of a duty of care;
- breach of that duty; and
- proof that the breach was the cause of the injury which is the subject of the claim.
There is plenty of precedent to show that a manufacturer, for example, can be liable in negligence for a defective product which causes injury where these three elements can be easily proven. However, this may not be so straightforward for AI products. If the AI product gives choice to a user and the choice exercised causes damage, it may be hard to link that cause back to the manufacturer. If the AI product automatically makes the choice by design, then this may be more straightforward.
1.4 For robots and other mobile AI, is the general law (eg, in the United Kingdom, the torts of nuisance and ‘escape' and (statutory) strict liability for animals) applicable by analogy in your jurisdiction?
Irish common law is very similar to English law when it comes to negligence and tort, and includes the torts of nuisance, escape and strict liability for animals' behaviour in certain circumstances. The English case of Rylands v Fletcher has also been the starting point of Irish jurisprudence in this area.
In addition to common law, a number of legislative acts impose strict liability for injury and damage caused by animals, such as the Control of Dogs Act 1986 and the Animals Act 1985.
Many commentators draw an analogy between animals on the highway and robots or autonomous cars on the streets. As these develop and become more sophisticated, it will be interesting to see whether common law principles of strict liability will be applied to their operation, or whether specific legislation similar to that for animals will be introduced to cover it.
Law, as we know, tends to lag behind technological change, so we will have to wait and see how existing Irish negligence law will be used to deal with emerging AI technologies.
1.5 Do any special regimes apply in specific areas?
While much jurisprudence and case law have developed in specific areas of negligence (eg, medical negligence) there are no specific regimes, other than indirectly in certain legislation, such as health and safety at work legislation.
1.6 Do any bilateral or multilateral instruments have relevance in the AI context?
Again, there is nothing specific in this area. However, as a member state of the European Union, Ireland is subject to all directives and regulations issued by its institutions.
A relevant example is the EU Product Liability Directive (85/374/EEC), which was transposed into Irish law by the Liability for Defective Products Act 1991. The act adds to, rather than replaces, the existing remedies available in both tort and contract, and provides for strict liability in circumstances where damage is caused wholly or partly by a defect in a manufacturer's product. In the absence of specific legislation, this act may well have a significant impact on dealing with claims relating to AI products in the future.
However, while a strict liability regime might give some certainty (and also allow suppliers to price in the risks of their products), this particular legislation is more geared towards moveable products and would therefore be difficult to apply to cloud-based systems.
1.7 Which bodies are responsible for enforcing the applicable laws and regulations? What powers do they have?
Ireland has four sources of law: the Constitution, legislation, case law and EU law. While the state has certain statutory powers to enforce certain legislation, the primary forum where commercial law is enforced is the Irish courts.
The courts have the power to settle disputes between parties and, in certain cases, to determine that legislation is invalid or in breach of the Constitution.
1.8 What is the general regulatory approach to AI in your jurisdiction?
At present, there is no specific regulation of AI in Ireland. However, EU law is the highest-ranking source of law in Ireland in respect of its areas of competence. It is most likely that any new regulation of AI in the short to medium term will come from that source.
2 AI market
2.1 Which AI applications have become most embedded in your jurisdiction?
We cannot yet point to standout applications, such as driverless cars or the prolific rollout of robots to carry out tasks previously done by humans.
However, the use of AI in Ireland is becoming more widespread and in many cases we are unaware that it is working in the background. There are no particular applications that one could turn to and say, "I couldn't live without that piece of AI in my life!" However, without the inclusion of AI in certain products or services, our user experience could be very different.
For example, would online shopping be the same without AI applications that help us to search for the product we want by brand, size or colour? For the retailer, this can help to provide valuable data about a shopper's preferences. In HR, it is used to gather and assess information from potential recruits to determine whether they are suitable for the advertised position before proceeding to an interview. On a day-to-day basis, it is used to eliminate repetitive, low-value HR tasks, thus increasing the focus on more strategic work. Siri on our phones and Alexa in our homes both include elements of AI that we don't even think about.
In the current COVID-19 pandemic, AI applications are used to model and predict likely hospitalisations and intensive care unit requirements.
2.2 What AI-based products and services are primarily offered?
Ireland has often been referred to as the ‘Silicon Valley of Europe', given the large number of multinational and indigenous technology companies operating in the country. Many global technology firms have their Europe, Middle East and Africa headquarters or manufacturing plants in Ireland. Many successful Irish firms also continue to flourish or have been acquired by global brands. These include companies engaged in the research and development (R&D) of AI products and services.
In respect of international companies, Accenture has its largest AI R&D hub in Ireland, which focuses on machine learning, natural language understanding, knowledge representation and computer vision. IBM Research Europe, based in Dublin, is part of the IBM Watson AI programme, and researches and produces AI products for the Internet of Things, security, privacy, healthcare and the cloud.
Irish AI success stories include:
- Nuritas, which uses AI to discover bioactive peptides from food sources;
- Movidius, which makes computer vision chips and which was snapped up by Intel for circa €300 million; and
- Boxever, which uses AI to develop and operate a customer personalisation marketing platform that is widely used in the travel industry.
2.3 How are AI companies generally structured?
Leaving aside global companies such as IBM, Ireland is a breeding ground for technology start-ups and AI start-ups are the latest incarnation of this rich tradition.
It is relatively cheap to start up a company in Ireland (this can be done for as little as €300 plus value added tax), and most new tech and AI companies start off as a private company limited by shares (LTD – previously a limited liability company) with an authorised share capital (of say €100,000). Company types are set out in the Companies Act 2014.
An LTD is a separate legal entity from its owners. It can trade, own assets and incur liabilities in its own right. Under the Companies Act, it restricts the right to transfer its shares and limits the number of shareholders to 149; and there can be no invitations to the public to subscribe for any shares or debentures of the company.
The advantage of an LTD is, of course, that its shareholders are not held responsible for any debts of the company, so it is a relatively risk-free way to set up a business. The shareholders' liability, should the company fail, is limited to the amount, if any, remaining unpaid on the shares held by them. An LTD requires just one European Economic Area resident director (though at least two would be advisable) and a company secretary.
2.4 How are AI companies generally financed?
Leaving aside funds available from the state or state bodies (see question 2.5), typical financing for a start-up AI company looks something like this.
In the first instance, it is typical for the founders to look to friends and family for early financing (the so-called ‘friends and family round'), as well as putting in some of their own funds. For further funds, companies might consider angel investors – wealthy individuals who provide funding in exchange for a share in the business.
Once that seed funding has been put in place and the company is up and running with a solid business plan, it will most commonly look to private equity firms or venture capitalists (VCs) to provide the next round of funding (commonly referred to as ‘Series A funding') to finance the next growth stage. There are a range of VCs, both international and domestic, that are very keen to get into start-up or early growth Irish AI companies. VCs can bring significant funds and expertise, but this obviously comes at a price, as they will want a share of the equity and very often a seat on the board. The company will then return to these investors and approach new ones to fund further growth, R&D and the like.
Other sources available include bank loans, which can be expensive in the long term, but may be fine to bridge a gap in financing. Incubator funds and accelerator funds are also popular. Crowdfunding is another option, but this is not very common in Ireland.
2.5 To what extent is the state involved in the uptake and development of AI?
The state is largely involved through a number of state agencies that make funds and grants available to help start-ups.
First, there are 31 local enterprise offices dotted around Ireland mandated to help new companies which employ up to 10 people. They provide grants for a variety of activities, including feasibility studies, capital investment, innovation and marketing and business expansion.
Enterprise Ireland (EI) is the state agency responsible for supporting indigenous companies and promoting their products overseas. It provides funding and support at each stage of a company's growth. These include grants for market research, innovation and mentoring at an early stage, as well as job expansion and development funds.
The Industrial Development Authority (IDA) is the state agency responsible for securing foreign direct investment into Ireland. As part of its programme, it promotes Ireland's AI companies and skills to overseas investors and potential employers.
EI and the IDA have collaborated on an initiative known as the Technology Centre Programme, which encourages Irish companies and multinationals to collaborate with research institutions. This has led to the establishment of CeADAR, the National Centre for Applied Analytics and Machine Intelligence. Its focus is to fund research in AI and create demonstrable working models. EI is providing €12 million of funding to CeADAR to assist with its programme.
The Department of Public Expenditure and Reform has also funded AI projects connected with the delivery of public services by providing grants directly to AI companies.
3 Sectoral perspectives
3.1 How is AI currently treated in the following sectors from a regulatory perspective in your jurisdiction and what specific legal issues are associated with each: (a) Healthcare; (b) Security and defence; (c) Autonomous vehicles; (d) Manufacturing; (e) Agriculture; (f) Professional services; (g) Public sector; and (h) Other?
(a) Healthcare
AI is broadly used in healthcare in Ireland. Applications range from the use of cognitive technology to better understand health data to assistance with the diagnosis of illnesses such as cancer. AI applications for pattern recognition help to identify potential sufferers earlier than could previously have been achieved.
AI plays an increasingly important role in medical training, with the ability to simulate real-life situations; and of course, robotics has played a part in surgery for many years.
As well as those applications you would expect, we are seeing a proliferation of wearable devices (eg, for fitness and weight loss), AI and Internet of Things (IoT) devices (eg, to monitor patients at home), as well as the coming together of AI and medical devices. A growing number of apps combine the two (eg, digital therapeutic apps based on machine learning).
A discussion of all the legislation and case law surrounding medical negligence would be too detailed for this Q&A. However, from an AI perspective, we would expect much of the risk from critical systems, for example, to be covered contractually in agreements with suppliers. That said, increasingly complex legal liability issues are likely to arise when an application or machine makes life-critical decisions. The European Union is already looking at this in terms of legislation, and it is most likely that it will decide that any liability will lie with the deployer of those systems.
For AI that can be classed as products, the Liability for Defective Products Act 1991, which implemented Directive 85/374/EEC, may also be relevant.
In respect of medical devices, applications such as therapeutics and wearables will be subject, where relevant, to the Medical Devices Regulation (2017/745), which will be fully in force from May 2021.
(b) Security and defence
Ireland is a very small country. Its army has fewer than 9,000 personnel and it has no sophisticated defence systems. Neither is Ireland a manufacturer of defence products on any scale.
However, AI does have a role in cybersecurity generally. The technology is being used for real-time detection in an attempt to beat the hackers and AI security applications can scan vast amounts of data very quickly to aid protection. They can recognise patterns in data, so that security systems learn from previous incidents. This seems a worthwhile investment, given that a recent report estimated that companies take 196 days on average to recover from a data breach.
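By way of illustration of the pattern-recognition idea described above, here is a minimal Python sketch that flags readings deviating sharply from a learned baseline. All names, data and thresholds are illustrative assumptions, not any real security product, which would use far richer models at much greater scale.

```python
from statistics import mean, stdev

def is_anomalous(history, reading, threshold=3.0):
    """Return True if `reading` sits more than `threshold` standard
    deviations above the historical baseline -- a toy stand-in for the
    pattern recognition that AI security tools perform at scale."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and (reading - mu) / sigma > threshold

# Hypothetical hourly counts of failed logins on a system.
baseline = [4, 6, 5, 7, 5, 6, 4]
print(is_anomalous(baseline, 250))  # possible brute-force attempt -> True
print(is_anomalous(baseline, 7))    # within normal variation -> False
```

Real products also feed each incident back into the baseline, which is the 'learning from previous incidents' the text describes.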
Ireland also has a rich pedigree of start-ups in the cybersecurity field, many of which embed AI in their applications.
As with the deployment of other cybersecurity products and services, users should be aware of any implications relating to the General Data Protection Regulation (GDPR).
(c) Autonomous vehicles
Under current road traffic legislation in Ireland, there is no scope to use autonomous vehicles (AVs). Any deployment would require amendments to that legislation as a starting point; but it is much more likely that the use of AVs, when suitable for use on the roads, will be governed by EU legislation.
Notably, Ireland did not sign or ratify the Vienna Convention on Road Traffic of 1968, which largely governs the use of AVs in Europe.
(d) Manufacturing
Ireland is not a huge manufacturing economy in terms of heavy engineering. The products manufactured in Ireland are mostly light electrical machinery and apparatus, processed foods, chemical products, clothing and textiles, and beverages.
However, like manufacturers in those industries elsewhere, Irish companies avail of certain AI applications, such as those for design, quality checking, predictive maintenance, forecasting and the like.
In respect of manufacturing, the legal issues most likely to arise would seem to be those under the Liability for Defective Products Act 1991 and questions as to who is responsible for actions – man or machine. Again, as previously highlighted, the company deploying an AI application will most probably be legally responsible for its results.
(e) Agriculture
Ireland was historically an agrarian economy and therefore it is unsurprising that it both develops and uses AI products and services in that industry.
This is an industry where AI and IoT devices easily come together. For example, one Irish company (Cainthus), which uses remote cameras to monitor the feeding of livestock and then an AI cloud service to analyse that data, is having great success in the United States, where cattle are fed in pens. Another (Anuland) uses devices below the soil to feed back data on crop growth for analysis.
Other AI products and services are widely used in the industry to detect disease, control nutrition and manage crop yield and quality.
And of course, blockchain is now widely used in tracing and authenticating the origin of foods.
A myriad of laws applies to the agriculture industry in relation to food safety and traceability, and AI applications should help producers to comply with this.
Farm operators and food producers will want their contracts with AI suppliers to protect their data and IP rights. Those contracts also need to identify who will be responsible if the processing of data leads to wrong decisions affecting the food production chain.
(f) Professional services
The use of AI to automate processes and routine tasks (eg, data analysis) in order to save time and reduce costs by professional service companies is as common in Ireland as elsewhere.
Perhaps the most interesting issue, from a legal perspective, is the use of smart contracts – essentially, computer code that can automatically monitor, execute and enforce a legal agreement.
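To make the concept concrete, here is a minimal Python sketch of the monitor/execute/enforce pattern, modelled as an escrow-style agreement. All names are illustrative assumptions, and real smart contracts typically run as code on a blockchain platform rather than as an ordinary program.

```python
from dataclasses import dataclass

@dataclass
class EscrowContract:
    """Toy model of a smart contract: the terms are encoded as code,
    and performance automatically triggers execution."""
    price: int
    balance: int = 0
    delivered: bool = False
    paid_out: bool = False

    def deposit(self, amount: int) -> None:
        self.balance += amount            # buyer funds the contract

    def confirm_delivery(self) -> None:
        self.delivered = True             # an agreed delivery event

    def settle(self) -> str:
        # The 'enforcement' step: runs without either party's discretion.
        if self.delivered and self.balance >= self.price and not self.paid_out:
            self.paid_out = True
            return "released to seller"
        return "conditions not met"

contract = EscrowContract(price=100)
contract.deposit(100)
contract.confirm_delivery()
print(contract.settle())   # conditions satisfied -> "released to seller"
```

Once the coded conditions are met, settlement happens automatically; that self-execution is what distinguishes a smart contract from a conventional one.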
The UK government is moving to ensure that such contracts can easily be used under English law (in December 2021 the UK Law Commission was mandated to conduct a scoping study on the law surrounding smart contracts and it quickly issued a call for evidence). There has been no similar sense of urgency from the Irish Department of Justice.
While many commentators are of the view that smart contracts can operate and be effective within the parameters of current contract law, this seems a topic which is ripe for examination by Ireland's Law Reform Commission. With the recent proliferation of articles on the topic, hopefully this is something that it will address soon.
(g) Public sector
A recent Microsoft and EY survey reported that more than half of Ireland's public sector bodies have implemented AI solutions in their organisation, while more than 30% view AI as highly important for qualifying decisions and assuring quality.
Perhaps more than anywhere else, transparency will be a key issue surrounding the use of AI in the public sector. This may require publication of information on:
- how any decision to use an algorithm was made;
- the type of algorithm;
- how it is used in the overall decision-making process; and
- steps taken to ensure fair treatment of individuals.
Also, a person's rights under the GDPR to reject automated processing in certain circumstances must always be considered.
A poignant example of how things can go wrong occurred in 2020 as a result of the COVID-19 pandemic. The Department of Education cancelled the Leaving Certificate examination (the final school state exam, which determines third-level offers to students) due to the health situation. Instead, it engaged a company from Canada to devise an algorithm to produce students' final grades, which could differ from the grades predicted by their teachers (predicted grades). The results were published in August 2020. In October 2020 the government admitted that there was a mistake in the algorithm and some of the results were wrong. University places had already been awarded based on the published results.
When it seemed that large-scale (and possibly precedent-setting) litigation was likely by those who had missed out on their third-level places, this was avoided by the department funding additional places for the affected students.
(h) Other
AI is used widely across many services and professions in Ireland.
For example, Ireland has many large data centres used by global IT services providers and AI monitoring systems are used in them to control many processes, such as power consumption, temperature and bandwidth usage.
A common theme of this chapter, however, is that there is virtually no specific regulation in respect of AI in Ireland (or pretty much anywhere else in Europe, for that matter); and as a result, there is a touch of fitting square pegs into round holes when it comes to law and regulation.
4 Data protection and cybersecurity
4.1 What is the applicable data protection regime in your jurisdiction and what specific implications does this have for AI companies and applications?
As in all other EU member states, data protection in Ireland is governed by the General Data Protection Regulation (2016/679) (GDPR). Although directly effective by its nature, the GDPR is supplemented in Irish law, with certain permitted nuances and additions, by the Data Protection Act 2018. A couple of the regulation's articles are particularly relevant to AI applications.
The first is Article 35 of the GDPR. This states that where a type of processing, in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data.
It goes on to state that where decisions in respect of people are being made which have legal effects, and which are based on automatic processing (including profiling), a data protection impact assessment will be required before implementing this.
The second article of particular importance is Article 25. This concerns data protection by design and by default, and requires controllers to ensure that applications – including, most obviously, AI applications – are designed to implement data protection principles from inception in order to meet the requirements of the regulation and protect the rights of data subjects.
Other articles of the GDPR which users of AI applications need to take into account include:
- Article 5 (transparency), which requires that companies be open and honest about the processing being undertaken; and
- Article 22, which gives data subjects certain rights not to be subject to a decision based solely on automated processing, including profiling, which has legal effects for them.
For Irish controllers, very little specific guidance is available as yet from the Data Protection Commission. Irish users of AI applications tend to refer to UK Information Commissioner's Office guidance in its place.
4.2 What is the applicable cybersecurity regime in your jurisdiction and what specific implications does this have for AI companies and applications?
There are no specific cybersecurity laws in Ireland. However, there is legislation dealing with other legal implications (data protection and privacy being the most obvious) where cybersecurity incidents may have particular consequences.
For example, under the GDPR (and the Data Protection Act 2018), controllers are obliged to take "appropriate security measures" to protect personal data against unauthorised access or disclosure. A data breach which exposes personal data due to inadequate protection could ultimately lead to a large fine under the legislation (up to €10 million or 2% of global turnover), depending on its severity.
Under the e-Privacy Regulations 2011, as amended (which transpose the EU privacy directives (2002/58/EC, 2006/24/EC and 2009/136/EC) into Irish law), providers of communications services must take appropriate technical and organisational measures to protect the security of their services. In addition, interception or surveillance of communications is prohibited. The Data Protection Commission can take enforcement action in respect of breaches, and fines of up to €250,000 apply.
Also of relevance is the Second Payment Services Directive (2015/2366/EU), transposed into Irish law by the European Union (Payment Services) Regulations 2018 (SI 6/2018). This imposes certain obligations on payment service providers in respect of customer authentication and notifications of breaches.
The Security of Network and Information Systems Directive (2016/1148/EU) was transposed into Irish law by the European Union (Measures for a High Common Level of Security of Network and Information Systems) Regulations 2018 (NISD). Under the NISD, operators of essential services and digital service providers, as defined in the legislation, must in general take appropriate and proportionate technical and organisational measures to manage the risks posed to the security of the network and information systems they operate or use.
Significant incidents must be reported to the Computer Security Incident Response Team, a unit of the Department of Communications, Climate Action and Environment. Failure to comply can result in fines up to €500,000.
5 Competition
5.1 What specific challenges or concerns does the development and uptake of AI present from a competition perspective? How are these being addressed?
The Competition Act 2002 and the Competition and Consumer Protection Act 2014 are the main pieces of legislation governing competition law in Ireland (as well as EU Treaty provisions and directly effective regulations).
Section 4(1) of the Competition Act outlaws anti-competitive practices.
Applying this provision to the use of AI, it has been widely argued that the use of pricing algorithms, for example, can lead to anti-competitive practices and collusion by default – think of an algorithm which is being used by all major players in the market to review competitors' prices and adjust their own accordingly to fall in line. Of course, where the algorithm is used to review and then undercut, this should not be a problem.
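To make that distinction concrete, here is a minimal Python sketch (all names and figures are illustrative assumptions) of the two behaviours: an algorithm that automatically falls into line with competitors' prices versus one that undercuts them.

```python
def match_price(my_cost, competitor_prices):
    """Align with the lowest observed competitor price (floored at cost).
    If every major player runs this logic, prices converge without any
    agreement -- the 'collusion by default' commentators worry about."""
    return max(min(competitor_prices), my_cost)

def undercut_price(my_cost, competitor_prices, margin=0.01):
    """Price just below the cheapest competitor (floored at cost) --
    ordinary competition rather than alignment."""
    return max(min(competitor_prices) * (1 - margin), my_cost)

competitors = [10.00, 10.50, 11.00]
print(match_price(8.00, competitors))     # follows the market -> 10.0
print(undercut_price(8.00, competitors))  # competes, just under 10
```

The code is identical in structure; only the pricing rule differs, which is why regulators focus on what the algorithm is instructed to do rather than on its mere use.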
Section 5 of the Competition Act prohibits the abuse of a dominant position. There are fears that AI could be used, for example, to collect ‘big data' and then refuse to share it, leading to abusive behaviour.
Irish competition law is based on EU law and therefore it is likely that any specific additions or amendments to deal with the use of AI will come via EU regulations or directives.
An interesting argument regarding AI is who is responsible if the application itself is taking the decisions which lead to anti-competitive behaviour or abuse of a dominant position. When this question was posed to EU Competition Commissioner Margrethe Vestager in September 2018, she made it very clear that she would hold the company running the algorithm responsible and this is likely to be reflected in future legislation.
6 Employment
6.1 What specific challenges or concerns does the development and uptake of AI present from an employment perspective? How are these being addressed?
We have moved on from the scaremongering of, "The robots are coming to take your jobs." In fact, AI is seen as a creator of opportunities for people. From recruitment through on-boarding, training and appraisals to exit, AI is likely to play an increasingly significant role in the workplace.
AI algorithms now do much of the sifting of job applications. In addition, AI is increasingly used for personnel performance monitoring. The question that employers must grapple with is whether discrimination and bias can be eliminated from these systems. Should there always be a human performing the checks and balances? The Employment Equality Acts 1998-2015 prohibit discrimination in certain areas, including access to employment and promotion.
Employers need to be able to demonstrate that they understand the factors built into any algorithm, and that it reflects their views and values. They are encouraged to monitor the results regularly to ensure that any machine learning does not deviate from these.
Where the recruitment process is outsourced, employers need to ensure that their supplier is contractually obliged to reflect their values and should seek to audit the outcomes.
7 Data manipulation and integrity
7.1 What specific challenges or concerns does the development and uptake of AI present with regard to data manipulation and integrity? How are they being addressed?
This could be a particular problem for machine learning applications. If the original data is wrong or contains biases, how can you prevent this being compounded by AI? Of course, often it is not the data itself that is wrong, but rather how it was collected, collated or analysed. So now there are also quality control AI technologies to monitor and fix the collection and documentation of the data end to end, before any machine learning takes place.
Accenture has developed a tool that enables companies to identify and eliminate gender, racial and ethnic bias in their AI software.
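This is not the Accenture tool, but a minimal Python sketch (all names and figures are illustrative assumptions) of one widely used bias check: the demographic-parity gap between group selection rates.

```python
def selection_rates(outcomes):
    """Compute the positive-outcome rate per group.
    `outcomes` maps group label -> list of 0/1 decisions."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def parity_gap(outcomes):
    """Demographic-parity gap: the difference between the highest and
    lowest group selection rates. A large gap is a red flag that the
    model, or the data it was trained on, may be biased."""
    rates = selection_rates(outcomes).values()
    return max(rates) - min(rates)

# Hypothetical hiring-screen decisions per group (1 = shortlisted).
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 6/8 shortlisted
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],   # 3/8 shortlisted
}
print(parity_gap(decisions))   # 0.75 - 0.375 = 0.375
```

Monitoring a metric such as this over time is one practical way to satisfy the audit and documentation expectations discussed elsewhere in this chapter.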
Specific industries are likely to introduce their own rules for transparency. A good example is the financial services industry, where AI is widely used. For example, chatbots are used to interact with customers and can respond to customers' queries and concerns concerning their financial transactions. Algorithms are used to analyse data collected about a person's risk appetite and provide personalised investment advice accordingly. Such applications and their use are likely to attract increasing scrutiny from the Central Bank of Ireland, the industry regulator.
In general, however, we will likely have to wait for something to come out of Brussels. In April 2020 the European Parliament authored a Draft Report on Ethics and AI, which recommended that the European Commission introduce a new legal framework for AI which would include rules on transparency (2020/2012(INL)). It suggested that there be clear principles on safety, transparency and accountability, with safeguards against bias and discrimination.
8 AI best practice
8.1 There is currently a surfeit of ‘best practice' guidance on AI at the national and international level. As a practical matter, are there one or more particular AI best practice approaches that are widely adopted in your jurisdiction? If so, what are they?
Ireland is no different from other countries in respect of best practice and there are no unique circumstances in Ireland which would require a different approach.
Perhaps, though, there is a different emphasis in Ireland. Given the renewed interest in data protection following the entry into effect of the General Data Protection Regulation (GDPR), there is certainly a perceived focus on the rights of the individual.
So, privacy by design is emphasised, as well as the need for transparency. Though AI applications thrive on big data, there is also a focus on data minimisation.
8.2 What are the top seven things that well-crafted AI best practices should address in your jurisdiction?
- The development and adoption of AI applications in organisations is very similar to any other software implementation, but should take into account the particular aspects which are unique to AI. So this should be the starting point, with all of the checks and balances (eg, acceptance testing) that would normally be built into such a project.
- Have a robust contract with your supplier. We know that IP law cannot deal with all aspects of AI. Therefore, appropriate contractual provisions will be required to cover certain aspects of ownership. The same is true of liability: the contract may have to set out who will take responsibility for machine-made decisions or user actions based on machine recommendations.
- Where the application is likely to involve the processing of personal data, conduct a risk versus reward analysis. Consider whether it requires a data protection impact assessment.
- Determine at the very outset how any biases or discrimination can be eradicated in the collection of data. This may have to be in conjunction with the supplier and audits may be required to confirm that this remains the case. It would be advisable to document how this has been achieved.
- For AI applications that involve personal data, ensure that the thought process and risk analysis are well documented for GDPR purposes.
- Transparency is key. Ensure that any outputs or decisions based on AI applications can be easily explained, and that they are fair. And it may not be just the customer that needs an explanation – regulators may have questions as well!
- Ensure that AI systems are safe, secure and reliable. This can only help to build trust.
8.3 As AI becomes ubiquitous, what are your top tips to ensure that AI best practice is practical, manageable, proportionate and followed in the organisation?
While there is great interest in, and concern about, AI solutions at the C-suite level, company boards are probably still trying to work out their exact responsibilities and liabilities in this regard.
A number of recent surveys highlight both users' mistrust of AI and the acknowledgement at board level that gaining such trust is vital in order for those applications to succeed. This will include ultimate trust by users that these systems are secure – in particular, where lives could be at stake, such as in the healthcare setting or in relation to autonomous vehicles.
Corporate governance and responsibility must thus be top of the agenda. Someone at board level must take responsibility for the implementation and use of AI. This person must ensure that issues of bias and discrimination, for example, are eliminated; and that these systems are properly monitored.
There will also be an increasing focus on ethics. Companies and their communications departments should be ready to face related questions and have credible responses available. And don't forget internal communications – they should ensure that all staff buy into this too.
Companies also need to keep an eye on regulatory developments, both generally and specifically for their industries. It seems inevitable that regulation is coming, probably from the European Union; companies should be ready to adapt as necessary.
At board level also, directors must be cognisant of their responsibilities under Section 228 of the Companies Act 2014 to exercise care, skill and diligence in their role; these responsibilities obviously extend to the deployment of AI. And if they delegate any of their tasks to AI, they will need to be able to demonstrate that such activities are adequately monitored.
9 Other legal issues
9.1 What risks does the use of AI present from a contractual perspective? How can these be mitigated?
At first glance, it may seem as though purchasing or licensing AI products or services is not that different from concluding any other software or cloud service agreement. However, the dynamic nature of AI differentiates it and may necessitate some changes from a ‘normal' software or cloud product contract.
For example, when licensing software, one would normally look for a warranty that the software will perform in accordance with its specification. However, in a machine learning environment, the system is constantly evolving as it consumes more data and changes and refines its output. It will thus be necessary to draft around this and perhaps require the supplier to be responsible for outcomes outside certain parameters.
Where relevant, customers will also want to ensure, with appropriate contractual provisions, that the AI application has been developed in such a way that any potential for bias or discrimination has been eliminated from the base product. Ideally, they should also obtain the right to monitor the product to ensure that bias or discrimination cannot creep in during use, with appropriate liability; and to have the product remedied if bias or discrimination nonetheless results.
As explained in question 10, machines cannot (currently) legally own intellectual property. However, this does not mean that they cannot create it. Therefore, contracts will need to address this dichotomy with suitable ownership provisions.
See also question 9.3 in respect of negligence and liability and the increased difficulty in establishing responsibility. Contracts may have to deal directly with this too.
In the near future, we are likely to see more and more AI as a service (AIaaS) solutions. These already exist for services such as facial recognition and language translation, as well as Siri in iPhones and Amazon's Alexa. Elsewhere in this Q&A (see questions 4.1, 7.1, 8.1 and 8.2), we have highlighted the need for companies using AI applications to be transparent and be able to explain the background and operation of algorithms. It may be a challenge to include access to the necessary information in cloud contracts, as AIaaS providers may well see this information as proprietary.
9.2 What risks does the use of AI present from a liability perspective? How can these be mitigated?
Negligence is a well-understood concept in common law. There is a plethora of historical case law to assist in determining whether a duty of care exists and who is responsible for breaching it. The key questions are basically as follows:
- Who owed a duty of care?
- Who breached the duty of care?
- Did the damage flow from that breach?
- Was it foreseeable?
However, in respect of AI, finding out who is to blame for any damage or injury may be a little more complex: it could be the supplier or user of an application, or the application itself. In addition, foreseeability may be a difficult concept to establish where an application or machine is acting without human intervention.
Only a legal person can have responsibility under the law and obviously an AI application is not a legal person. Therefore, in the absence of any regulation on this topic, liability will rest with either the supplier or the user (vicariously liable for the results flowing from the use of the product, much like the responsibility taken for the actions of an employee). But it is by no means clear where the ultimate duty of care may lie and therefore this is something that the parties may need to cover contractually.
The Liability for Defective Products Act 1991 may also apply to AI, as discussed in question 1.2 above.
In 2019 the European Commission published a report entitled Liability for Artificial Intelligence and Other Emerging Digital Technologies. In this report, it analysed existing liability regimes (which are regarded as broadly sufficient to cope with the current level of deployed AI) and made some recommendations for the future. Those recommendations include:
- the expansion of strict liability (eg, for operators of robots or autonomous vehicles in public places);
- the expansion of the duty of care to include specific types; and
- the introduction of joint and several liability where it is difficult to determine who is likely to be at fault.
Interestingly, the report rejects the idea of AI having its own legal personality.
The European Commission also published the Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics. This identified areas in which current legislation, such as the General Product Safety Directive (2001/95/EC), may have to be amended or replaced in the future to deal with AI and new technologies generally.
We will have to see how these reports are reflected in future EU legislation.
9.3 What risks does the use of AI present with regard to potential bias and discrimination? How can these be mitigated?
Discrimination and bias – whether based on gender, colour, religion or other grounds – are illegal in most developed countries as a result of legislation. However, this is not to say that they do not exist. AI is seen as a very effective tool in eliminating personal bias or prejudice in areas such as recruitment, for example; but there is also a danger that it can create or perpetuate bias or discrimination. The authors of AI applications could reflect their own prejudices when writing AI programs, for example.
In February 2020 the European Commission published a white paper on Artificial Intelligence – A European approach to excellence and trust. In this white paper, the European Union proposed that legal measures be introduced to ensure that the use of AI products or applications does not result in discrimination. Again, we will have to wait and see whether the commission proposes legislation and then how that might be covered between parties in contract clauses.
For the moment, customers would be advised to seek comfort from suppliers as to how they have ensured that their applications are devoid of discrimination and bias. This should be backed up with contractual warranties and indemnities.
10 Intellectual property
10.1 How is innovation in the AI space protected in your jurisdiction?
Ireland has a robust IP legal regime, including copyright and patents, with much of the modern law based on EU legislation.
As AI develops, IP law may struggle to keep up; and in the short to medium term, the protection of AI innovation is likely to be based on a combination of IP law and contractual commitments.
Section 17(2) of the Copyright and Related Rights Act 2000 provides copyright protection for literary works, which include software programs. However, the work can only be owned by a legal person – thus excluding, it would seem, any similar output generated by a machine.
The Copyright Act also protects computer-generated software. Section 21(f) attributes ownership to the person by whom "the arrangements necessary for the creation of the work are undertaken". However, some AI applications will have the capacity to produce content without human intervention. In such cases it will be difficult to meet this requirement and identify the relevant person.
This is compounded by the definition of a ‘computer program' in the Copyright Act, which requires it to be the author's own intellectual creation. Fitting the output from machine learning into this definition, for instance, would appear difficult.
Because of this uncertainty, we recommend that ownership matters be clearly dealt with in contracts between suppliers and customers.
In respect of patents, computer programs are not generally eligible for patent protection (Section 9(2)(c) of the Patents Act 1992). Section 15 of that act provides that only legal persons can own a patent. There is no concept similar to that of computer-generated software under the Copyright Act. Therefore, to obtain a patent on the output of an AI application, a legal person would have to prove that it was the inventor of the underlying algorithm.
Again, this is not ideal for machine-generated output and we see this also as something which will need to be covered contractually by the parties.
10.2 How is innovation in the AI space incentivised in your jurisdiction?
Ireland offers some generous incentives and grants to both incoming and indigenous companies.
From a tax perspective, Ireland has a much-envied corporate tax rate of 12.5%. It provides research and development (R&D) tax credits of 25% and a 6.25% preferential tax rate on income derived from qualifying intellectual property (Knowledge Box).
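Purely as an illustration of how these headline rates could interact for a hypothetical company, the sketch below applies them to invented figures. The function name, the figures and the assumption that the reliefs simply net off against the corporate tax bill are all simplifications; actual entitlement to the R&D credit and the Knowledge Box turns on Revenue's qualification rules.

```python
# Illustrative only: the headline Irish rates applied to hypothetical figures.
# Real computations depend on Revenue qualification rules and professional advice.

CORPORATE_RATE = 0.125       # standard corporate tax rate on trading income
RD_CREDIT_RATE = 0.25        # R&D tax credit on qualifying R&D spend
KNOWLEDGE_BOX_RATE = 0.0625  # preferential rate on qualifying IP income

def illustrative_tax(trading_income, qualifying_ip_income, rd_spend):
    """Rough combined effect of the three headline incentives (hypothetical)."""
    standard_tax = trading_income * CORPORATE_RATE
    ip_tax = qualifying_ip_income * KNOWLEDGE_BOX_RATE
    rd_credit = rd_spend * RD_CREDIT_RATE
    return standard_tax + ip_tax - rd_credit

# Eg, EUR 1m ordinary trading income, EUR 400k qualifying IP income and
# EUR 200k qualifying R&D spend:
# 125,000 + 25,000 - 50,000 = EUR 100,000
print(illustrative_tax(1_000_000, 400_000, 200_000))
```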
In respect of grants for foreign companies wanting to set up in Ireland, the Industrial Development Authority is a state agency which is responsible for procuring foreign investment in Ireland. It provides a range of grants, including capital grants, interest subsidies and loan guarantees, and grants for rent reduction, employment, training, R&D and technology acquisition. For further details see www.idaireland.com/how-we-help.
Enterprise Ireland is a state agency which looks after indigenous companies and procures markets for their products abroad. It also provides a number of grants and incentives, and has a high-performance unit for fast-moving companies (see www.enterprise-ireland.com/en/funding-supports/).
See also questions 11.2 and 13.1.
11 Talent acquisition
11.1 What is the applicable employment regime in your jurisdiction and what specific implications does this have for AI companies?
Employment law in Ireland is largely governed by employment contracts between an employer and employee or contractor within the parameters of relevant legislation. The law distinguishes between employees (including part-time employees) and independent contractors, with the majority of employment legislation applying to the former only.
Relevant legislation includes the Terms of Employment (Information) Acts 1994 to 2014 (as amended) and the National Minimum Wage Act 2000. The minimum hourly rate payable in Ireland is currently €10.10. The Industrial Relations (Amendment) Act 2015 provides for collective wage agreements in particular sectors, but these are virtually unheard of in technology companies. The Organisation of Working Time Act 1997 governs working hours and holiday entitlement.
We do not see any particular implications for AI companies in the legislation itself. However, ICT skills in general are in huge demand in Ireland because of the large number of technology companies, and there is now a shortage of home-grown talent with these skills. Many start-up and early growth companies in Ireland – including those involved in AI – outsource much of their development work to third-party contractors, both in Ireland and overseas. Eastern Europe is a particularly popular source of well-trained programmers and developers used by Irish tech companies.
The Department of Enterprise, Trade and Employment operates the Critical Skills Occupations List. This sets out a number of critical job types where there is a shortage of skills and, as a result, workers from non-EU countries are encouraged to apply for working visas. Job types include IT developers, programmers, website designers, business analysts and more.
In 2019, nearly 17,000 critical skills working visas were issued to people from outside the European Union.
In January 2021 the Irish government published its first National Remote Work Strategy. It acknowledges the new reality brought on by the COVID-19 pandemic and that, while some people will work full time from the office or from home, most of the workforce could be blended workers, working sometimes from the office and other times from home, a hub or on the go.
It seeks to:
- mandate that home and remote working be the norm for at least 20% of public servants;
- provide for investment in remote hubs across Ireland;
- change the tax laws to accommodate the new norm;
- introduce a right in law to request remote working; and
- legislate for the right to disconnect.
11.2 How can AI companies attract specialist talent from overseas where necessary?
Aside from the fact that Ireland – and Dublin in particular – is a vibrant location that is popular with young people, there are a number of incentives to entice foreign professionals and workers to come to Ireland.
Many global firms – such as IBM, Apple, Intel, Google, Amazon and Facebook (all involved to some degree with AI projects) – have significant Irish presences and offer very attractive packages to entice workers to join them.
The Special Assignee Relief Programme (SARP) was first introduced in 2012 to encourage key personnel to relocate to Ireland. It allows a foreign employee to exclude 30% of his or her total remuneration from the Pay As You Earn (PAYE) tax charge (other taxes – the universal social charge and pay-related social insurance – will still apply). Employers can also pay for return family trips to the country of origin and school fees without creating additional tax liabilities.
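The effect of the exclusion as described above can be sketched as follows. This is a deliberately simplified, hypothetical illustration: the real relief is subject to income thresholds, caps and conditions that are ignored here, and nothing in it is tax advice.

```python
# Simplified sketch of the SARP 30% exclusion as described in the text.
# Real SARP has qualifying thresholds and caps that are ignored here.

def sarp_paye_base(total_remuneration, exclusion_rate=0.30):
    """Remuneration subject to PAYE after the SARP exclusion.
    USC and PRSI remain chargeable on the full amount."""
    return total_remuneration * (1 - exclusion_rate)

# A hypothetical assignee on EUR 200,000: PAYE is charged on EUR 140,000,
# while USC and PRSI still apply to the full EUR 200,000.
print(sarp_paye_base(200_000))
```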
However, there are disincentives too. By its own admission the government recognises that personal tax – despite incentives under SARP – is still too high. There is also a shortage of affordable housing in Dublin in particular, which is where many of the global tech players are headquartered. Ireland's healthcare system is also regarded as far below best in class, with long waiting lists.
Nonetheless, those issues may be less of a problem for young people and a 2019 survey conducted by Indeed revealed that 22% of searches for Irish tech jobs came from abroad.
In addition, Project Ireland 2040, published by the Irish government, sets out a roadmap of how infrastructure and services can be improved to cater for a population increase of 1 million people (both Irish born and foreign nationals) over the next 20 years.
12 Trends and predictions
12.1 How would you describe the current AI landscape and prevailing trends in your jurisdiction? Are any new developments anticipated in the next 12 months, including any proposed legislative reforms?
In general, considering the research and development under way, the initiatives of global tech companies located in Ireland, the myriad of start-ups in the space and the support of state agencies for AI, the landscape looks extremely healthy.
Ireland is home to many global players that are active in AI, such as IBM and Accenture; and has also spawned its own success stories, such as Nuritas and Movidius. Behind them is a myriad of AI start-ups, all hoping to be the next big thing. These companies are developing solutions across all industries, including recruitment, healthcare, power generation, sport, legal cost analysis, transport, voice recognition and image recognition.
From a corporate user perspective, the most recent comprehensive study was carried out in 2019 by the National Standards Authority of Ireland. It revealed that 40% of Irish companies were already using AI in their businesses and 50% intended to deploy it in the future. Some 55% of the companies surveyed believed that AI would be transformational in developing their products and services in the future. There is thus a healthy appetite among corporates to deploy AI solutions.
However, Ireland is not a major manufacturing economy, so it is unlikely to see wide deployment of robotic engineering. It has no car manufacturing or assembly plants, so is unlikely to see the development of autonomous vehicles (AVs). It is instead very likely that Ireland will embrace AI as a service type of technology.
In respect of state involvement and support, the outlook is also relatively healthy.
Ireland is ranked sixth by the European Commission in the EU Digital Economy and Society Index (DESI). The DESI measures Europe's digital performance and tracks the evolution of EU member states in digital competitiveness.
In question 2.5, we highlighted the role of CeADAR, the National Centre for Applied Analytics and Machine Intelligence, which is supported by the government. Its mission is to act as a market-focused technology centre that drives the accelerated development and adoption of AI technology and innovation, and serves as a bridge between the worlds of applied research in AI and its commercial use.
Science Foundation Ireland – a state-sponsored agency engaged in scientific research – has a dedicated centre which focuses on the development of AI solutions across a wide spectrum, such as smart buildings, mobility and transportation, AVs, public service delivery, manufacturing, enterprise, cybersecurity, climate change and environment, agriculture, marine, food production and natural resources.
However, many commentators bemoan a lack of any strategic plans or input from the state.
The government has been promising to publish a National AI Strategy for some time, in order to provide high-level direction to the development, adoption and implementation of AI in Ireland. The latest mooted date was January 2021; but at the time of writing, there is still no sign of the strategy. Like many other initiatives, this has been side-lined by the concentration on fighting the COVID-19 pandemic.
13 Tips and traps
13.1 What are your top tips for AI companies seeking to enter your jurisdiction and what potential sticking points would you highlight?
There are many reasons why Ireland is attractive to AI companies. It has a long pedigree of attracting foreign direct investment, and substantial grants and aids are made available by the Industrial Development Authority (IDA). It is home to more than 1,200 foreign companies. Nine out of the top 10 global software companies have substantial bases in Ireland.
Ireland also has a young, well-educated workforce – 33% of the population is under 25. The percentage of the population aged 25 to 64 who have successfully completed third-level education is one of the highest in the European Union. It has the highest proportion of science, mathematics and computing graduates in the Organisation for Economic Co-operation and Development.
Following Brexit, Ireland is now the largest English-speaking country in the European Union.
As outlined in question 10.2, Ireland has a corporate tax rate of 12.5%. It provides research and development tax credits of 25% and a 6.25% preferential tax rate on income derived from qualifying intellectual property (Knowledge Box).
For these reasons, AI companies should seriously consider Ireland as a destination and engage early with the IDA to avail of the generous grants available for locating to Ireland.
The one downside is that Ireland has become so popular for tech companies that a large number are now chasing the same Irish graduates and young employees. However, Ireland has not been slow to address this deficit by opening up the market to skilled workers outside the European Union. Ireland's infrastructure and health system also lag behind those of its European neighbours, but this does not seem to be a deterrent in capturing foreign direct investment or the necessary talent to meet demand.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.