1 Legal and enforcement framework

1.1 In broad terms, which legislative and regulatory provisions govern AI in your jurisdiction?

The systematic regulation of AI is still in its early stages in Spain. To date, there are only a few scattered and fragmentary references in Spanish law that could apply to AI systems, which are very far from constituting a complete body of regulation.

Examples include the following:

  • Article 23 of Law 15/2022 on equal treatment and non-discrimination sets out a series of provisions that public administrations must consider when using algorithms for decision making.
  • Book IV of the Royal Decree-Law 24/2021 on the transposition of the EU Copyright Directive establishes some limits to intellectual property that may be relevant in the case of AI, such as text and data mining.
  • Royal Legislative Decree 2/2015, which approves the revised text of the Workers' Statute, provides that works councils have the right to be informed by the employer of the parameters, rules and instructions on which AI algorithms or systems that affect decision making on working conditions or access to and maintenance of employment are based.
  • The EU Digital Services Act (Regulation 2022/2065), which is directly applicable in Spain, establishes a framework in relation to the responsibilities of hosting service providers and online platforms with regard to algorithmic decision making.
  • Organic Law 3/2018 on the protection of personal data and the guarantee of digital rights and the EU General Data Protection Regulation (2016/679) apply to AI systems that use personal data.
  • The Digital Rights Charter emphasises, from a descriptive, prospective and assertive point of view, that AI should:
    • have a 'human-centric approach';
    • pursue the common good; and
    • comply with the principle of non-maleficence.

From a strategic point of view, there are direct references to AI in the National Strategy for AI, which provides a framework for reference and promotion of AI from an interdisciplinary approach. The strategy has the following objectives:

  • to position Spain as a centre of scientific excellence and innovation in AI;
  • to promote the use of the Spanish language in the fields of application of AI;
  • to create qualified employment;
  • to transform and improve the productivity of companies and public administrations;
  • to create an environment of trust in relation to AI;
  • to promote humanistic values in AI; and
  • to promote inclusive and sustainable AI.

Notwithstanding the foregoing, it is expected that national regulation of AI will be shaped mainly by key European regulations and guidelines in the sector that are still pending approval, such as:

  • the proposal for a regulation establishing harmonised rules in the field of AI;
  • the proposal for a directive on the adaptation of the rules on non-contractual civil liability to AI;
  • the proposal to amend the Product Liability Directive; and
  • the future regulation on horizontal cybersecurity requirements for products with digital elements.

The ultimate approval of these European regulations and guidelines will have a significant impact on the Spanish regulatory framework, establishing new guidelines in the field of AI and associated liability.

1.2 How is established or 'background' law evolving to cover AI in your jurisdiction?

In general terms, Spanish law has not been expressly adapted to AI and there is no known case law on this matter. However, in some areas, strategies, guidelines and recommendations are being issued or approved to promote a trustworthy environment regarding AI within the existing regulatory framework. Examples include the following:

  • The Agency for the Protection of Personal Data (AEPD) has prepared:
    • the Guide for Processing Operations that Include Artificial Intelligence, which clarifies the questions raised by AI within the framework of personal data protection; and
    • the Guide on Requirements in Audits of Processing Operations that Include Artificial Intelligence, which provides guidelines and objective criteria that should be taken into account in audits of data processing operations that include components based on AI.
  • Royal Decree 817/2023 establishes a controlled environment for testing compliance with the proposed EU AI regulation. This royal decree creates a sandbox for different projects chosen by public call with the objective of facilitating the implementation of principles that will govern the design, validation and monitoring of AI systems in order to help mitigate risks.

However, as far as basic legislation is concerned, regulatory changes will come mainly from EU regulations and directives, which are still in the pipeline (see question 1.1).

1.3 Is there a general duty in your jurisdiction to take reasonable care (like the tort of negligence in the United Kingdom) when using AI?

Spanish law, like the law of most continental civil law systems, requires individuals and organisations to exercise a duty of care and act responsibly in their acts and omissions. However, this regime differs from the tort of negligence in the United Kingdom in significant ways.

The Civil Code sets out a system of contractual and non-contractual liability, under which all natural and legal persons are liable for their actions. For such liability to arise, the following requirements must be met:

  • a culpable or negligent act or omission in relation to a legal or contractual duty or obligation;
  • actual and quantifiable damage; and
  • a causal relationship or nexus between the conduct (act or omission) and the harmful result.

Notwithstanding the foregoing, case law and legislation in certain areas have gradually introduced so-called 'objective' or 'quasi-objective' liability, whereby the 'mere risk' of the activity carried out, or the benefit that it confers on the owner of a property, matters more than actual fault – albeit without fault disappearing altogether. One example is the system of imputation of civil liability to drivers and owners of motor vehicles. On the other hand, civil liability can be imputed through:

  • direct liability – that is, where a party is held accountable for its own actions; or
  • indirect liability, which refers to cases where a party is held accountable for the actions of others, such as:
    • parents for the actions of minors under their care; and
    • animal owners for damages caused by their animals.

This general system of fault-based liability currently applies to suppliers and acquirers of IT solutions and systems. However, it is foreseeable that, due to the risks inherent in AI – and especially in the case of autonomous (unsupervised) decisions – a system of objective or quasi-objective liability will be adopted for operators of high-risk AI systems.

On the other hand, the General Law for the Defence of Consumers and Users establishes a dual liability regime – for warranties and for defective goods and services – that, under certain circumstances, could apply to AI systems. The first regime deals with the liability of sellers of goods and of digital content or services to consumers with respect to failures to conform to the essential characteristics of the contract. It is therefore a contractual liability regime, aimed mainly at ensuring the delivery of goods and services in conformity with the characteristics of the contract. This regime applies to software made available to consumers, except in the case of free and open-source licences; in principle, therefore, it may also apply to specific AI systems in certain situations.

The second regime:

  • deals with personal and material damage caused by products and services that do not offer the safety that one could legitimately expect; and
  • applies to the producer or importer or the supplier of the service in question.

In the case of product liability, it applies to:

  • any movable property, even if it is attached to or incorporated within other movable or immovable property; and
  • gas and electricity.

Given the consideration of software as an 'incorporeal' good in the Spanish legal system, it may also apply to AI systems. At the European level, the regulations on defective products are being modified to expressly incorporate AI systems within their scope.

1.4 For robots and other mobile AI, is the general law (eg, in the United Kingdom, the torts of nuisance and 'escape' and (statutory) strict liability for animals) applicable by analogy in your jurisdiction?

In Spain, as in almost all legal systems, robots and AI systems are not recognised as autonomous entities with rights and responsibilities, so they cannot be subject to direct liability separate from that of their owners or manufacturers. This debate even seems to have been postponed at the EU level.

With regard to the possible application by analogy of indirect liability to the owners of AI systems under the existing regulatory framework, Spanish law includes cases of indirect liability such as:

  • the liability of employers with respect to their employees;
  • the parental liability of parents and guardians with respect to their children; and
  • the liability of animal owners with respect to their animals.

Although the Spanish courts have not yet ruled on the application of these regimes by analogy, some aspects bear sufficient similarities to the ownership of AI systems or robots that the courts may favour such application by analogy.

1.5 Do any special regimes apply in specific areas?

As yet, Spain has no specific regulations on AI. However, the application of the civil liability regime to AI systems is the subject of study.

It is expected that in the coming months and years, a new liability regime for damage caused by AI will be approved on the following basis:

  • The proposed EU directive on liability for damage caused by defective products establishes that AI-based goods fall within the concept of a 'product' and thus fall within the scope of the directive. Therefore, where defective AI causes damage, the injured party may obtain compensation without proving the fault of the manufacturer, provided that the statutory requirements are met (ie, proof of the defect, the damage and the causal nexus between the two).
  • The proposed EU directive on liability in the field of AI will introduce an AI liability regime that seeks to lighten the burden of proof in favour of the affected person by including presumptions of fault and causal link in certain circumstances.

1.6 Do any bilateral or multilateral instruments have relevance in the AI context?

Bilateral and multilateral instruments that are relevant in Spain in the field of AI include the following:

  • The European Commission's Independent High-Level Expert Group on AI, set up in June 2018, published Ethics Guidelines for Trustworthy AI in April 2019, with the aim of ensuring that, throughout their lifecycle, AI systems are:
    • lawful (ie, they comply with applicable laws and regulations);
    • ethical (ie, they uphold ethical principles and values); and
    • robust, both socially and technically.
  • In addition, other recommendations have been published, such as those on:
    • policy and investment for trustworthy AI (April 2019); and
    • a final assessment checklist for trusted AI (July 2020).
  • As a member of the Organisation for Economic Co-operation and Development (OECD), in May 2019 Spain was one of 42 countries to adopt the OECD Principles on AI – a set of intergovernmental policy guidelines aimed at ensuring the design of robust, secure, unbiased and reliable AI systems.
  • In November 2021, the United Nations Educational, Scientific and Cultural Organization presented the Recommendation on the Ethics of Artificial Intelligence, which addresses AI ethics as a systematic normative reflection, based on a comprehensive, global, multicultural and evolving framework of interdependent values, principles and actions that can guide society in addressing the challenges of AI systems for humans, societies and the environment and ecosystems.
  • The European Declaration on Digital Rights and Principles for the Digital Decade (January 2023) focuses on the need to promote and ensure human-centred, trustworthy and ethical AI systems throughout their development, deployment and use in line with EU fundamental rights and values. Furthermore, it emphasises the importance of ensuring that there is an appropriate level of transparency in the use of AI algorithms and systems.

1.7 Which bodies are responsible for enforcing the applicable laws and regulations? What powers do they have?

The following entities, among others, are responsible for enforcing the regulations that affect AI systems in Spain:

  • The Agency for Supervision of AI is assigned to the secretary of state for digitalisation and AI of the Ministry of Economic Affairs and Digital Transformation. It is a public body with administrative, inspection and sanctioning powers regarding the safe and reliable use of AI systems. Its actions include:
    • increasing awareness of AI;
    • providing training and promoting the responsible, sustainable and reliable use of AI;
    • establishing mechanisms for providing advice and raising awareness among society and other actors in relation to the development and use of AI;
    • collaborating and coordinating with other supervisory authorities, both national and supranational;
    • promoting the use of AI sandboxes; and
    • supervising the implementation, use and commercialisation of systems that include AI, especially those that may pose significant risks to health, safety and fundamental rights.

The agency's statute was approved on 22 August 2023.

The AEPD is responsible for monitoring compliance with the EU General Data Protection Regulation (GDPR) and Organic Law 3/2018 on the protection of personal data and the guarantee of digital rights. To this end, it has investigatory, corrective and sanctioning powers.

Other bodies with powers that may relate directly or indirectly to AI include the following:

  • The position of secretary of state for digitalisation and AI was established by the Council of Ministers in 2020 with the aim of promoting the digital transformation in Spain to ensure secure, reliable, integrated growth that places citizens front and centre.
  • The Data Office, which reports to the secretary of state for digitalisation and AI, is the competent body for data governance and sharing across different sectors of the Spanish economy and society. Its functions include:
    • designing strategies and frameworks for data management;
    • creating spaces for data sharing between companies, citizens and public administrations in a secure manner and with proper governance;
    • designing governance policies and standards for data management and analysis by the general state administration; and
    • coordinating the data initiatives of the various ministerial departments and public administrations within the framework of the strategies and programmes of the European Union.
  • The AI Advisory Council is a government advisory body comprising about 20 internationally renowned experts from various geographical areas and specialties, who provide independent advice and recommendations to ensure the safe and ethical use of AI.

1.8 What is the general regulatory approach to AI in your jurisdiction?

Spain has no specific regulations that address AI systems. However, efforts are being made to establish the basis for future regulation, which will be aligned with EU legislative initiatives. Given that the European Union is working on a regulatory framework for AI through various regulations and directives, it is expected that in the coming months, a clearer legal framework for AI systems will be put in place to facilitate the design, development, implementation and use of ethical, safe and reliable AI systems. In Spain, in October 2023, a proposal was made for an organic law to regulate simulations of the images and voices of people generated through AI.

2 AI market

2.1 Which AI applications have become most embedded in your jurisdiction?

Spanish companies are adopting various AI applications to improve their operations and processes. While take-up has been limited thus far, it is now beginning to increase.

In general terms, AI systems are primarily used as tools that help companies to:

  • analyse large amounts of data;
  • extract relevant information; and
  • make more informed decisions.

Common areas in which these technologies are being used include:

  • data mining;
  • speech recognition; and
  • natural language processing.

Additionally, a 2023 study conducted by a National Observatory of Technology and Society working group on the use of AI and big data in Spanish companies revealed that AI is being used to:

  • automate workflows;
  • streamline repetitive tasks; and
  • free up time for employees to focus on higher-value activities.

This includes the development of service robots and chatbots that can:

  • provide automated assistance to customers; and
  • enhance the user experience.

As AI continues to evolve, it is likely that:

  • these tools will increasingly be used for functions such as business decision support; and
  • Spanish companies will increasingly avail themselves of the capacity of AI to:
    • analyse data;
    • identify patterns; and
    • provide relevant information.

Taking all this into account, it can be said that there is a growing trend of incorporating AI into business operations.

2.2 What AI-based products and services are primarily offered?

The types of AI technology that are most commonly offered include:

  • machine learning through big data;
  • service robots and virtual assistants, or chatbots, for customer service; and
  • natural language processing.

More recently, as a result of the explosion in generative AI, generative AI solutions are being offered – for example, for:

  • the generation of legal documents; and
  • the preparation of commercial proposals.

2.3 How are AI companies generally structured?

In Spain, AI companies tend to be structured similarly to other technology and software companies. The organisational structure may vary depending on the size of the company.

AI companies in Spain are usually established as:

  • limited liability companies (SLs); or
  • public limited companies (SAs).

The choice between these types of companies depends on various factors, such as:

  • the company's size;
  • the corporate objectives; and
  • the nature of the business.

SLs are more common among startups and smaller companies, as they offer greater operational flexibility. SAs, on the other hand, are used by larger companies or companies seeking access to financial markets.

Law 18/2022 on the creation and growth of companies has introduced changes to corporate regulation in Spain:

  • allowing for the establishment of an SL with a minimum share capital of €1; and
  • introducing procedures for the incorporation of companies in a quick and agile manner through telematic means.

2.4 How are AI companies generally financed?

In recent years, there has been an increase in venture capital investment in AI companies in Spain. There are venture capital investment funds and accelerators that specialise in disruptive technologies such as AI, which seek to identify and support promising startups and ventures in these fields.

Another alternative for financing AI companies in Spain is through collaboration between the public and private sector to boost the development of AI. Such collaborations can include:

  • joint research and development projects; and
  • the creation of financial support and guidance programmes for AI companies.

There are also certain incentives for research and development, as well as specific mechanisms through which third-party investors with sufficient tax capacity can benefit from deductions that would otherwise be lost in newly created companies that have not yet made a profit. This encourages structures in the form of economic interest groupings, which allow investors to take advantage of the development company's negative tax base and the aforementioned tax incentives.

2.5 To what extent is the state involved in the uptake and development of AI?

The Spanish government plans to allocate around €600 million to the implementation of the National AI Strategy between 2021 and 2023. This strategy comprises six pillars:

  • promoting scientific research, technological development and innovation in AI;
  • fostering the development of digital capabilities, enhancing national talent and attracting global talent in AI;
  • developing data platforms and technological infrastructures to support AI;
  • integrating AI into value chains to transform the economy;
  • enhancing the use of AI in public administrations and national strategic missions; and
  • establishing an ethical and regulatory framework that reinforces the protection of individual and collective rights in order to ensure social inclusion and welfare.

Calls for grants for aid to finance AI projects are gaining traction in Spain – for example, the call for grants for aid to finance projects as part of the R&D Missions in Artificial Intelligence 2021 Programme within the framework of the Spanish Digital Agenda 2025 and the National AI Strategy.

3 Sectoral perspectives

3.1 How is AI currently treated in the following sectors from a regulatory perspective in your jurisdiction and what specific legal issues are associated with each: (a) Healthcare; (b) Security and defence; (c) Autonomous vehicles; (d) Manufacturing; (e) Agriculture; (f) Professional services; (g) Public sector; and (h) Other?

(a) Healthcare

To date, there is no specific regulation of AI in the healthcare sector. However, in 2021, the Ministry of Health published the Digital Health Strategy, which highlights the need to:

  • develop data platforms and technological infrastructure that support AI;
  • create strategic missions in health;
  • promote the definition of an ethical and regulatory framework that strengthens the protection of individual and collective rights; and
  • ensure inclusion and social wellbeing.

One example of the implementation of the objectives of this strategy is the creation of a healthcare data pool through the Digital Health Commission of the Interterritorial Council of the National Health System in accordance with the Recovery, Transformation and Resilience Plan.

(b) Security and defence

There is no specific regulation of AI in the security and defence sector. However, the secretary of state for defence, through Resolution 1197/2023, has approved a strategy for the development, implementation and use of AI in the Ministry of Defence, with the aim of increasing the efficiency of the ministry's missions and tasks.

(c) Autonomous vehicles

The Law on Traffic, Circulation of Motor Vehicles and Road Safety provides that 'drivers' are persons who are in command of a vehicle. In the case of vehicles operated by a learner driver, the person behind the additional controls is considered to be the driver.

Article 11bis of the law provides that the owner of an automated driving system must communicate to the Vehicle Registry of the Central Traffic Department the capabilities or functionalities and operational design of the automated driving system, both:

  • at the time of registration; and
  • subsequently whenever there is any update of the system throughout the useful life of the vehicle.

In Instruction VEH 2022/07, the Directorate General of Traffic:

  • defines an 'automated vehicle' as a "motor vehicle designed and built to move autonomously for certain periods of time without continuous supervision by the driver but for which the driver's intervention is still expected or needed";
  • sets out the procedure and requirements for the authorisation of tests or research trials carried out with automated vehicles on roads open to traffic in general; and
  • sets out the requirements to apply for such authorisation and the procedure for the designation of an authorised technological recognition centre for the purposes of the instruction.

(d) Manufacturing

There is no specific regulation of AI in the manufacturing sector. At the national level, some projects are underway which are promoted by the General State Administration. One example is the Gaia-X National Hub for the development of open and secure data infrastructure, which has seen the establishment of:

  • several working groups focused on specific sectors such as:
    • health;
    • industry 4.0;
    • engineering and construction;
    • enabling technologies;
    • finance and public administration; and
  • four other cross-cutting working groups focused on:
    • legal;
    • technical;
    • projects; and
    • ethics.

(e) Agriculture

There is no specific regulation of AI in the agricultural sector. At the national level, the Gaia-X National Hub for the development of open and secure data infrastructure has a working group dedicated to the agri-food sector. The digitalisation strategy for the agri-food and rural sector also includes important measures for this sector. It covers matters such as:

  • open data;
  • training and advice on digital skills;
  • the generation of information; and
  • funding for digital entrepreneurship.

(f) Professional services

There is no specific regulation of AI in the professional services sector. This notwithstanding, the measures envisaged in the National AI Strategy include:

  • developing digital capabilities;
  • enhancing national talent; and
  • attracting global talent.

These are considered essential to enhance technical AI skills among the active population in order to:

  • facilitate access to quality new jobs; and
  • address challenges in the future job market.

(g) Public sector

Within the framework of the National AI Strategy, the Charter of Digital Rights and European initiatives regarding AI, Article 23 of Law 15/2022 on equal treatment and non-discrimination stipulates that public administrations must:

  • encourage the implementation of mechanisms to ensure that algorithms involved in decision-making processes seek to minimise bias and enhance transparency and accountability whenever technically feasible. These mechanisms should include the design and training data and address the potential discriminatory impact of the algorithms. To this end, impact evaluations should be conducted to determine potential discriminatory bias;
  • prioritise transparency in the design, implementation and interpretability of decisions made by algorithms involved in decision-making processes;
  • promote the use of ethical and reliable AI that respects fundamental rights; and
  • promote quality certification for algorithms.

At the regional level, for example, Decree-Law 2/2023 on urgent measures to boost AI in Extremadura establishes an essential framework for measures aimed at supporting, promoting and developing AI systems in the autonomous community of Extremadura.

4 Data protection and cybersecurity

4.1 What is the applicable data protection regime in your jurisdiction and what specific implications does this have for AI companies and applications?

The Spanish personal data protection regime comprises:

  • the General Data Protection Regulation (GDPR); and
  • Organic Law 3/2018 on the protection of personal data and the guarantee of digital rights, which is the national law that complements the GDPR.

The Agency for the Protection of Personal Data (AEPD) has also published:

  • the Guide for the Adaptation to the GDPR of Processing Operations that Incorporate AI, which lays the foundations for adapting products and services that incorporate AI components to the GDPR; and
  • the Guide on Requirements for Audits of Processing Operations that Include AI, which contains a series of guidelines and criteria that should be considered when carrying out audits of personal data protection processing operations that include AI-based components.

In view of the above, and by way of illustration only, companies and their applications must:

  • be transparent about the use of AI; and
  • clearly explain how personal data is collected and processed.

The processing of personal data through AI systems will generally require a data protection impact assessment where it is likely to result in a high risk to the rights and freedoms of data subjects. The data controller must further give users the right to:

  • access data;
  • rectify data;
  • erase data;
  • restrict the processing of data;
  • object to the processing of data; and
  • not be subject to a decision based solely on automated processing.

In addition, appropriate organisational and technical measures must be implemented to protect personal data from unauthorised access, loss or alteration.

The principle set out in Article 25 of the GDPR – privacy by design and by default – plays a fundamental role in the development and use of AI systems. In this sense, AI systems must integrate privacy and security measures by design to ensure:

  • data minimisation;
  • access control;
  • transparency; and
  • data accuracy.

Similarly, AI systems should be configured by default in favour of privacy and personal data protection.

4.2 What is the applicable cybersecurity regime in your jurisdiction and what specific implications does this have for AI companies and applications?

Public sector entities must comply with Royal Decree 311/2022, which regulates the National Security Scheme (ENS). The ENS also applies to private sector companies that render services or provide solutions to public sector entities for the exercise by the latter of their competence and administrative powers.

Critical infrastructure operators must comply with:

  • Law 8/2011, which establishes measures for the protection of critical infrastructure; and
  • Royal Decree 704/2011, which approves the regulation for the protection of critical infrastructure.

On the other hand, certain companies must comply with Royal Decree-Law 12/2018 on the security of networks and information systems. Meanwhile, financial institutions must comply, once it becomes applicable, with EU Regulation 2022/2554 on digital operational resilience for the financial sector.

With regard to AI systems involving the processing of personal data, companies must adopt, in accordance with the GDPR, all necessary measures to ensure the security of the processing, taking into account:

  • the state of the art;
  • the costs of implementation;
  • the nature, scope, context and purposes of the processing; and
  • the rights and legitimate interests of the data subjects.

To this end, they must implement measures such as pseudonymisation, data encryption and other appropriate measures to ensure the confidentiality, integrity and availability of personal data.

5 Competition

5.1 What specific challenges or concerns does the development and uptake of AI present from a competition perspective? How are these being addressed?

In 2020, the National Commission of Markets and Competition, together with the Catalan Competition Authority, published a joint contribution which revealed that AI will change the functioning of economic markets and outlined the challenges that this presents for the defence of competition. Noteworthy points include the following:

  • The exclusionary or predatory practices of some companies that hinder competitors' access to data may create barriers to accessing data that is fundamental for the training of AI systems.
  • It is difficult to evaluate concentrations between operators with a significant volume of data (data mergers), because – unlike physical mergers, where the relevant market is relatively apparent – if a concentration is driven by an interest in obtaining data, it becomes more complicated to anticipate the markets that may be affected.
  • There is a risk of algorithmic collusion through the use of algorithms to set prices. The increasing use of these algorithms may encourage collusion in several ways, such as to monitor a previously agreed pricing strategy.

6 Employment

6.1 What specific challenges or concerns does the development and uptake of AI present from an employment perspective? How are these being addressed?

Companies' use of AI systems to make decisions that affect or may affect their employees may present risks to the fundamental rights of workers, such as infringement of the fundamental right to:

  • privacy (Article 18.1 of the Constitution);
  • the protection of personal data (Article 18.4 of the Constitution);
  • equality and non-discrimination (Article 14 of the Constitution); and
  • occupational health and safety (Article 15 of the Constitution).

In this context, the Spanish legal system recognises a series of rights and obligations regarding algorithmic information, from both:

  • an individual law perspective through the General Data Protection Regulation (GDPR); and
  • a collective perspective through Royal Legislative Decree 2/2015 approving the revised text of the Workers' Statute.

Thus, under Article 22 of the GDPR (read with Articles 13.2(f), 14.2(g) and 15.1(h) of the GDPR), workers have:

  • the right to be informed of the existence of solely automated decision making, including profiling; and
  • the right to obtain meaningful human intervention in, and review of, such decisions.

Under Article 64 of the Workers' Statute, works councils have the right to be informed by the employer of the parameters, rules and instructions on which AI algorithms or systems that affect decision making on working conditions or access to and maintenance of employment are based.

7 Data manipulation and integrity

7.1 What specific challenges or concerns does the development and uptake of AI present with regard to data manipulation and integrity? How are they being addressed?

The integrity and non-manipulation of data in AI systems are fundamental from a security perspective – that is, in terms of both:

  • the security and reliability of the internal functioning of AI solutions, to avoid compromising their environment; and
  • the ability of AI to resist threats and external vulnerabilities.

There are several risks that derive from the development and adoption of AI due to manipulation and lack of data integrity, including:

  • the risk of bias in AI systems, from which discrimination problems may arise; and
  • the risk of malicious attacks where the manipulation or alteration of data causes negative and/or undesired results.

The measures that can be employed to improve data integrity and prevent manipulation in order to reduce AI risks such as bias include:

  • the use of metrics and debugging and traceability techniques to ensure the fidelity and integrity of the dataset;
  • the classification and debugging of training data;
  • algorithm impact analysis techniques aimed at identifying the risk of bias in the algorithms; and
  • audits of the AI system logic to ensure data integrity and accuracy.

These and other measures should be considered from a risk-based perspective during the ideation, development and implementation of the AI system.
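As a purely illustrative sketch of the algorithm impact analysis techniques mentioned above, a basic bias check might compare favourable-outcome rates across demographic groups (a 'demographic parity' metric). The function and sample data below are hypothetical and do not correspond to any legally mandated method:

```python
from collections import Counter

def demographic_parity_gap(outcomes):
    """Largest difference in favourable-outcome rates between groups.

    outcomes: list of (group, decision) pairs, where decision is
    1 (favourable) or 0 (unfavourable). A gap near 0 suggests the
    system treats groups similarly on this metric; a large gap is
    a signal for further human review.
    """
    totals, positives = Counter(), Counter()
    for group, decision in outcomes:
        totals[group] += 1
        positives[group] += decision
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical decisions: group A is favoured 2/3 of the time,
# group B only 1/3 of the time.
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap = demographic_parity_gap(decisions)  # 2/3 - 1/3 = 1/3
```

A single metric of this kind is never conclusive on its own; it is one input into the broader risk-based assessment described above.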

8 AI best practice

8.1 There is currently a surfeit of 'best practice' guidance on AI at the national and international level. As a practical matter, are there one or more particular AI best practice approaches that are widely adopted in your jurisdiction? If so, what are they?

Efforts are being made both nationally and internationally to establish AI best practice guidelines from different perspectives, such as ethics, security and personal data protection.

From an ethical standpoint, Spain, as a member of the Organisation for Economic Co-operation and Development (OECD), adopted the OECD Principles on Artificial Intelligence in May 2019. These principles set out intergovernmental guidelines aimed at promoting the development of robust, safe, secure, unbiased and trustworthy AI systems.

From the perspective of personal data protection, the Agency for the Protection of Personal Data has published:

  • the Guide for the Adaptation to the GDPR of Processing Operations that Incorporate AI, which lays the foundations for adapting products and services that incorporate AI components to the GDPR; and
  • the Guide on Requirements for Audits of Processing Operations that Include AI, which contains a series of guidelines and criteria that should be considered when carrying out audits of personal data protection processing operations that include AI-based components.

From an international and risk management perspective, ISO/IEC 23894, "Information technology: Artificial intelligence – Guidance on risk management" sets out guidelines on risk management in the context of AI for organisations that develop, produce, implement or use AI-based products, systems and services.

8.2 What are the top seven things that well-crafted AI best practices should address in your jurisdiction?

Well-crafted AI best practices should encompass the following, among other things:

  • Human oversight and explainability: Ensure auditability, explainability, traceability, human oversight, governance and reliability throughout the entire lifecycle of the AI system.
  • Transparency: Document the decision-making process, the data and algorithms used and all other relevant aspects during the operation of the AI system.
  • Privacy and data governance: Protect privacy and conduct a rights impact assessment in the design of algorithms in the case of automated or semi-automated decision making.
  • Technical robustness and security: Design and ensure throughout the lifecycle of an AI system that a cybersecurity approach is applied by design and by default.
  • Non-discrimination and equity: Design, implement and audit AI systems that ensure the right to non-discrimination in relation to AI-based decisions, data use and processes.
  • Accountability: Ensure accountability for AI system decision making and outcomes throughout the lifecycle.
  • Regulatory compliance: Ensure that the AI system complies with applicable regulations such as the Constitution and privacy regulations.

8.3 As AI becomes ubiquitous, what are your top tips to ensure that AI best practice is practical, manageable, proportionate and followed in the organisation?

Measures to help companies ensure the ethical and regulatory compliance of AI systems throughout the AI system lifecycle include the following:

  • Design and implement a clear corporate AI strategy that reflects the company's priorities, objectives and position with respect to the adoption or development of AI systems.
  • Draft and implement policies and procedures regarding the use of AI systems in the company, describing, for example:
    • permitted and prohibited activities;
    • AI systems training processes and decision-making processes; and
    • mechanisms to address potential bias, discrimination and unintended consequences.
  • Implement and continuously improve internal controls and processes to ensure the auditability, explainability, traceability, human oversight, governance and reliability of the organisation's AI systems.
  • Rigorously and periodically audit the AI systems to ensure that they are robust and secure.
  • Identify and implement the requirements derived from:
    • the new regulatory frameworks;
    • international standards;
    • codes of conduct; and
    • industry best practices.
  • Conduct impact assessments on the rights of individuals of AI systems and regularly identify and assess the risks of AI systems with regard to:
    • fundamental rights and freedom of individuals; and
    • cybersecurity.
  • Conduct ongoing and effective training to develop an ethical, safe and reliable AI culture within the company.
  • Develop and/or implement AI systems with a focus on privacy and security by design and by default.
  • Develop and implement a sustainable approach to AI in a way that promotes the use of AI to address sustainability and environmental challenges.
  • Audit the chosen AI system periodically.

9 Other legal issues

9.1 What risks does the use of AI present from a contractual perspective? How can these be mitigated?

The use of AI presents significant risks that must be managed contractually. These risks vary depending on the type of AI product or service in question: a generative AI service that produces text, documents or images differs from an AI service that makes and executes decisions directly and without human supervision, where the risks can be far higher. The particularities of AI give rise to specific risks in contracts for the sale of AI-based solutions and services that set these apart from other contracts. These include the following:

  • Risks relating to the attribution of non-compliance: These derive from:
    • the opacity of AI;
    • the complexity of algorithms and their operation;
    • unpredictability;
    • data dependence; and
    • cybersecurity vulnerability.
    These factors may make it difficult to prove a contractual breach and the resulting liabilities.
  • Risks relating to proof of non-compliance and damage: Regardless of the obligation or responsibility that is involved, the corresponding non-compliance must be proven in accordance with:
    • the requirements of the contract itself, in terms of controls and preventive measures; and
    • the requisite standards and certifications which, had they been complied with, would have avoided or minimised the damage.
  • Damage limitation risks: Under the legal principle of foreseeability, other than in cases of bad faith or wilful misconduct, a party is liable only for damages foreseen or foreseeable at the time the obligation was created. This means that certain damages may not be covered. For suppliers, conversely, an agreed limitation of damages may not compensate for the margin or profit expected from the sale of the AI solution.
  • Plurality of actors: The different participants in the development of the AI system must establish clear criteria for responsibility and provide mechanisms to buyers that prevent this plurality from becoming an obstacle to legal action.
  • Regulatory risks in data protection: Another risk derives from the failure to consider privacy by design and by default, which can lead to non-compliance with personal data protection regulations. To mitigate this risk, it is advisable to implement privacy protection measures from the initial stages of the design and development of AI systems. This involves, among other things:
    • incorporating privacy measures and controls into the design of algorithms;
    • implementing anonymisation and data minimisation mechanisms; and
    • establishing adequate controls to guarantee information security and conducting periodic audits to ensure compliance with current regulations.
  • Third-party rights: The dependence on data and the need in certain cases for supervised or unsupervised AI training creates a risk of infringing the IP rights of third parties by using copyrighted content (eg, artworks, books, music) and generating new content without the prior authorisation of the author. It is essential to ensure that contracts and agreements clearly establish compliance with the IP rights involved in the use of AI, including:
    • licences;
    • usage rights; and
    • possible compensation for infringements.

9.2 What risks does the use of AI present from a liability perspective? How can these be mitigated?

There is no ad hoc liability regime for damages derived from an AI system. Therefore, two major risks arising from the use of AI can be identified from a liability perspective:

  • The existing regulations do not include AI systems within the definition of a 'defective product', which leaves personal injury and property damage caused by a defective AI system in a regulatory limbo. Mitigation of this risk is foreseen in the proposed revision of the EU Product Liability Directive, which extends the notion of 'product' to software, including AI systems.
  • Another risk arises from the difficulty for the person affected by an AI system to prove the fault of the person liable, the damage and the causality between fault and damage in order to be entitled to compensation. Mitigation of this risk is foreseen in the proposed directive on AI liability through a rebuttable presumption of causation between the breach of the duty of care and the damage caused by the AI.

Measures are also expected to be directed towards establishing joint liability among all participants of a 'business and technological unit' through solidarity mechanisms that allow claims to be brought against any of them, without prejudice to subsequent recourse among them, for either:

  • the distribution of liability; or
  • the complete transfer thereof.

In view of the above, it is expected that in the coming months and years, regulations will set out the regime for liability from the use of AI systems.

9.3 What risks does the use of AI present with regard to potential bias and discrimination? How can these be mitigated?

The use of AI systems can involve risks of bias and/or discrimination. Examples include the following:

  • Limitation of access to essential private services: This may occur where AI systems are used to assess the creditworthiness of individuals or to establish their credit rating and applicants are denied opportunities without justification as a result.
  • Restriction of access to and enjoyment of essential public services and assistance: This may occur where public authorities use AI systems to assess individuals' eligibility for essential public services and assistance, or to grant, reduce, withdraw or recover such services and assistance.
  • Obstruction of access to employment: This may arise where AI systems are used in the recruitment environment to select personnel and may thus limit opportunities for employees without justification.
  • Discrimination in the judicial sphere: This may arise where AI systems are used by law enforcement authorities to assess the risk of an individual committing an offence or reoffending.

Further to the guidelines foreseen in the proposed EU AI regulation, the above systems are examples of high-risk AI systems. Concrete measures should be implemented in such cases to ensure respect for the fundamental rights and values of the European Union. Such measures include:

  • implementing risk management systems;
  • having up-to-date technical documentation;
  • ensuring transparency in the use and operation of the system; and
  • enabling users to understand and properly use the system and to exercise human oversight.

10 Innovation

10.1 How is innovation in the AI space protected in your jurisdiction?

The protection of innovation in Spain depends on:

  • whether the innovation constitutes a work or an invention; and
  • the circumstances of the case and the nature of the unauthorised use.

The following means of protection may be distinguished:

  • IP laws apply to works which are subject to copyright or related rights, such as software and databases;
  • industrial property laws apply to inventions, utility models and industrial designs (also called drawings and models); and
  • trade secrets apply where specific valuable knowledge from a business perspective, which is not generally known in the circles in which that type of knowledge is used (ie, which is secret), can be protected against unauthorised acquisition or access.

In other cases, certain types of market conduct can be claimed based on the Unfair Competition Law where there is:

  • illegitimate exploitation of innovation which is not protected by the previous laws; or
  • certain acts of systematic imitation.

However, in such cases, intentional behaviour by the infringer – for example, the intent to exclude a competitor – is usually required.

Finally, algorithms – understood as a finite set of well-defined and ordered steps or instructions designed to perform a task or solve a specific problem – may fall outside the copyright protection of computer programs: in accordance with Recital 11 of Directive 2009/24, copyright protects only the expression of a computer program, not the underlying ideas and principles. Although this subject is beyond the scope of this Q&A, whether algorithms can be protected as an invention also remains unclear; so protection as a trade secret is always advised at a minimum.

10.2 How is innovation in the AI space incentivised in your jurisdiction?

As in other sectors, innovation in the AI space is incentivised through different mechanisms, which are mainly of a fiscal nature. The most important include the following:

  • Tax deductions are available for research and development (R&D) and for technological innovation. These allow a company to deduct from its total corporate income tax liability:
    • 25% of the related expenses incurred in the relevant tax period, in the case of R&D; or
    • 12% of the related expenses incurred in the relevant tax period, in the case of technological innovation.
  • Startups are incentivised through various mechanisms, including the adoption of tax measures aimed at retaining and promoting talent in Spain. The Startups Law (28/2022) incentivises innovation through tax measures such as the following:
    • Companies which qualify as 'emerging companies' are subject to a reduced corporate income tax rate of 15% in the year in which they acquire this status and the following three years.
    • The tax exemption for stock option plans has been increased from €12,000 to €50,000 annually in the case of delivery of shares or participations derived from the exercise of call options.
    • The maximum deduction base for investment in newly or recently created companies has been increased from €60,000 to €100,000 per year and the deduction rate has been increased from 30% to 50%.
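The mechanics of the R&D and technological innovation deductions described above can be illustrated with a simplified calculation. The figures are hypothetical, and the actual Spanish rules include bases, caps and incremental rates not reflected in this sketch:

```python
# Illustrative only: simplified computation of the corporate income
# tax deductions described above (hypothetical figures; actual rules
# include incremental rates, deduction bases and caps not shown).
RD_RATE = 0.25               # research and development expenses
TECH_INNOVATION_RATE = 0.12  # technological innovation expenses

def tax_deduction(rd_expenses: float, innovation_expenses: float) -> float:
    """Total deduction against corporate income tax liability."""
    return rd_expenses * RD_RATE + innovation_expenses * TECH_INNOVATION_RATE

# E.g. €100,000 of R&D spend and €50,000 of technological innovation
# spend: 100,000 * 0.25 + 50,000 * 0.12 = €31,000 deductible.
deduction = tax_deduction(rd_expenses=100_000, innovation_expenses=50_000)
```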

11 Talent acquisition

11.1 What is the applicable employment regime in your jurisdiction and what specific implications does this have for AI companies?

The most relevant legal instrument in the Spanish labour system is the Workers' Statute, approved by Royal Legislative Decree 2/2015, together with the regulations thereunder.

The statute has mandatory application, although the rights it enshrines may be enhanced by:

  • the provisions of applicable collective agreements; and
  • ultimately, the will of the parties as expressed in the employment contract.

Collective agreements are concluded between the representatives of employees and employers to establish the working conditions for a specific sector, territory, company, group of companies or similar. These agreements:

  • may have the nature of rules and erga omnes character for the subjects included within their personal, functional and territorial scope; and
  • may not provide for less favourable rights than those established in the Workers' Statute, except where this is expressly permitted by law.

11.2 How can AI companies attract specialist talent from overseas where necessary?

The Startups Law aims to attract and retain national and international talent by incorporating measures that:

  • encourage graduates to seek employment in Spain; and
  • establish a special visa for digital nomads who are self-employed or who work remotely for employers anywhere in the world.

The SpAIn Talent Hub is an initiative in collaboration with Invest in Spain that serves as an information point for attracting and retaining academic and professional talent in the field of AI.

The Startups Law also aims to attract and retain talent through tax measures such as:

  • a reduced rate of corporate income tax and non-resident income tax; and
  • an increase in the tax exemption for stock options.

12 Trends and predictions

12.1 How would you describe the current AI landscape and prevailing trends in your jurisdiction? Are any new developments anticipated in the next 12 months, including any proposed legislative reforms?

The evolution of AI law in Spain is and will continue to be shaped by the objective of establishing a trustworthy environment for AI at:

  • the technological development level;
  • the regulatory level; and
  • the social level.

In general terms, it is expected that national regulations on AI will be influenced by the approval of key European instruments in the sector, such as:

  • the EU AI regulation;
  • the directive on AI-related liability;
  • the revised directive on the liability for defective products; and
  • the future cyber resilience regulation.

These instruments will have a significant impact on the Spanish regulatory framework, establishing new guidelines and recommendations in the field of AI and associated liability.

In relation to the protection of privacy and the personal rights of individuals, the following trends are expected:

  • an increase in projects focused on the legal review and analysis of privacy by design and by default regarding the processing of personal data through AI systems; and
  • enhanced awareness of the need to conduct impact assessments regarding the design of algorithms and AI systems.

It is also anticipated that companies will ramp up training in order to develop an ethical, safe and reliable AI culture.

In addition, resources and efforts are being allocated towards the consolidation of an open data regulatory framework that regulates the strategy for publishing and accessing data of public administrations, with the aim of:

  • facilitating the use and sharing of multilingual data among administrations and other private actors; and
  • ensuring the correct and secure use of data.

Adjustments to the terms and conditions of online platforms may be envisaged, as well as the implementation of other regulatory and technical measures aimed at:

  • ensuring due and diligent algorithmic transparency; and
  • limiting data mining.

Finally, an increase in AI innovation is expected as the different cohorts of the Spanish AI regulatory sandbox are put into operation and begin to produce strategies and guidelines to improve practices in the implementation of AI systems.

13 Tips and traps

13.1 What are your top tips for AI companies seeking to enter your jurisdiction and what potential sticking points would you highlight?

Companies that wish to develop or implement AI systems in Spain should bear the following best practices in mind:

  • Adopt a regulatory and ethical compliance approach that covers the entire lifecycle of the system.
  • Assume proactive responsibility in order to identify, implement and review all necessary measures, controls and mechanisms to ensure that the AI system in question:
    • is ethical, safe and reliable; and
    • guarantees the rights of individuals.
  • Design and implement a clear corporate AI strategy that reflects the company's priorities, objectives and position with respect to the development of AI systems.
  • Establish a transparent framework of policies, procedures and controls on the use of AI systems to ensure the auditability, explainability, traceability, human oversight, governance and reliability of those systems.
  • Identify and comply with regulatory requirements derived from:
    • new regulatory frameworks;
    • international standards;
    • codes of conduct; and
    • industry best practices.
  • Provide continuous and effective awareness-raising and training to ensure the development of an ethical, safe and reliable AI culture.
  • Encourage a sustainable approach to AI across the organisation in a way that:
    • promotes the use of AI to address sustainability and environmental challenges; and
    • ensures the periodic review of the impact of AI systems in terms of sustainability.
  • Ultimately, do everything in their power – taking into account the state of the art – to implement an AI compliance system that ensures, by design and by default, that the AI system:
    • is ethical, safe and reliable; and
    • guarantees the rights of individuals.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.