1 Legal and enforcement framework
1.1 In broad terms, which legislative and regulatory provisions govern AI in your jurisdiction?
Hungary has no legislation that is specifically dedicated to AI. However, there are numerous laws which deal with the use of algorithms or AI (explicitly or implicitly), such as the following:
- Data protection: As the data protection laws are technology neutral and AI operation, by definition, is based on data, these laws have a significant impact on AI.
- Copyright: This includes the rights that parties can claim in relation to:
- the components of AI (eg, databases or raw materials used to train AI models); and
- elements generated with the help of AI (eg, software, creations).
- Data economy: The legislative framework on national data assets regulates the use of public data (including in relation to the potential use of AI).
- E-administration: The legislative framework on e-administration contains explicit rules on electronic administration tasks and activities conducted with the help of AI.
- Contract law: Parties may agree on rights and obligations in relation to AI projects.
- Tort: Various forms of liability (eg, liability for hazardous activity, liability based on fault, product liability) could apply to AI by analogy; however, this has not yet been tested before the Hungarian courts.
- Other sectoral laws: As there is no dedicated AI legislation, various sector-specific rules (eg, on pharmaceuticals, financial services, health services) may apply to AI depending on the sector in which it is used.
In line with the EU Digital Strategy, there are numerous new legislative developments at the EU level – including the AI Act and the Data Act – which will significantly change the Hungarian landscape. On 30 September 2024, the government called on the minister of national economy to prepare a proposal, with the involvement of relevant stakeholders, on the necessary legislation and related measures to implement the AI Act in Hungary. It is expected that Parliament will table this legislation in May 2025.
1.2 How is established or ‘background' law evolving to cover AI in your jurisdiction?
Hungary has not yet established a dedicated legal framework for AI. However, the National AI Strategy 2020–2030 sets out a framework for future legislation (see question 1.8).
1.3 Is there a general duty in your jurisdiction to take reasonable care (like the tort of negligence in the United Kingdom) when using AI?
There is a general duty in Hungary to take reasonable care in relation to all acts by any person; however, there are no specific requirements in relation to AI. Under the principle of the duty of care, one should act with the care that may generally be expected from a reasonable person in the relevant circumstances. This duty of care forms the basis of fault-based liability.
This principle equally applies to operators of AI technology. For example, an operator of AI technology may be liable if it does not adhere to its duty of care when:
- choosing the right AI systems; and/or
- monitoring or maintaining AI systems.
The principle of the general duty of care in the context of AI has not yet been tested before the Hungarian courts.
1.4 For robots and other mobile AI, is the general law (eg, in the United Kingdom, the torts of nuisance and ‘escape' and (statutory) strict liability for animals) applicable by analogy in your jurisdiction?
Liability for robots and other mobile AI has not yet been tested before the Hungarian courts. Nevertheless, the general law may apply to mobile AI by analogy in Hungary, especially the following:
- Strict liability for hazardous activities: This may apply by analogy to mobile AI, meaning that an operator of mobile AI (as the party which is in control of the risks connected with the operation of mobile AI) may be liable. This means that an operator of mobile AI:
- may be liable even in the absence of fault; and
- may be exempted from liability only if it can prove that the damage occurred in the context of an unavoidable event that was beyond its control.
- Product liability: Hungary has implemented the old EU Product Liability Directive (85/374), and as such, a manufacturer of mobile AI products may be subject to product liability. This is also a form of strict (no-fault) liability. However, at a practical level, the application of product liability in this context is very challenging, for reasons such as the following:
- Whereas product liability traditionally focuses on the point at which a product is put into circulation, AI products are continually evolving;
- Given the interconnectivity of AI products/systems, it is difficult to capture what exactly constitutes a ‘defect'; and
- The ‘black-box' effect of AI makes it difficult for victims to prove the defect.
- These challenging issues will be partially addressed by the new EU Product Liability Directive (2024/2853), which will replace Directive 85/374. The new EU Product Liability Directive extends to digital products, including AI systems. It also:
- extends liability even after a product has been placed on the market (eg, through software updates); and
- imposes disclosure obligations on defendant companies to hand over evidence and introduces rebuttable presumptions to alleviate the victim's evidentiary difficulties.
- It is not yet known when Hungary will implement the new EU Product Liability Directive.
1.5 Do any special regimes apply in specific areas?
Hungary has no dedicated regime for AI; but various sector-specific laws (eg, data protection, consumer protection, competition, telecommunications) may apply, depending on the context.
In terms of liability, various forms of liability (eg, strict liability, fault-based liability, product liability) could apply to AI by analogy, depending on the context. However, this has not yet been tested before the Hungarian courts and no case law has developed as yet.
1.6 Do any bilateral or multilateral instruments have relevance in the AI context?
As a member of the United Nations, the Organisation for Economic Co-operation and Development (OECD) and the European Union, Hungary has adopted various multilateral instruments such as:
- the Hiroshima AI Process Comprehensive Policy Framework;
- the OECD AI Principles;
- the United Nations Educational, Scientific and Cultural Organization Recommendation on the Ethics of AI; and
- the Council of Europe Framework Convention on Artificial Intelligence.
1.7 Which bodies are responsible for enforcing the applicable laws and regulations? What powers do they have?
To date, there is no designated body that enforces AI-related rules or requirements in Hungary. However, a new regulatory enforcement body will soon be formed under the supervision of the Ministry of National Economy with the task of:
- exercising local supervision of the AI Act; and
- operating the regulatory sandbox.
In parallel, the Hungarian AI Council will soon be established to issue guidelines and opinions on the implementation of the AI Act.
In addition, existing horizontal and sector-specific regulators exercise their powers within their sphere of competence. These include:
- the domestic courts, in civil, labour or criminal disputes;
- the consumer protection authorities and the Hungarian Competition Authority (HCA), in cases concerning consumer protection or unfair commercial practices involving AI;
- the HCA and the European Commission in the competition law sphere;
- the Hungarian Data Protection Authority (DPA) in the data protection sphere;
- the Supervisory Authority for Regulated Activities, in cybersecurity matters under the Second Network and Information Systems Directive (NIS2);
- the Central Bank of Hungary (CBH), in the financial sector; and
- the Hungarian National Media and Infocommunications Authority, with regard to the application of the Digital Services Act and infocommunications (AI constitutes part of the infocommunications network).
1.8 What is the general regulatory approach to AI in your jurisdiction?
In 2020, the Hungarian government published the country's AI Strategy 2020–2030, which sets out its regulatory approach towards AI.
The strategy confirms that the regulation of AI is necessary at the national level, in conformity with the applicable EU legislative instruments. Among the main areas for regulation, it highlights:
- the framework for regulating data assets;
- the creation of a comprehensive AI regulatory environment (including AI registries, an AI-related legal entity, liability and industry-specific rules); and
- the adoption of industry ethical standards.
The DPA was the first regulatory body to deal explicitly with a case involving the use of AI. In that particular case, the DPA imposed a fine of HUF 250 million, suggesting that it has adopted a relatively strict regulatory approach when it comes to assessing the operation of AI in compliance with the data protection regulations.
Other regulatory bodies have also started exercising their powers in this regard. For example, in 2024, the HCA investigated the impact of AI on competition and consumers' transactional decisions. The HCA also recently launched an investigation against Microsoft for possible unfair commercial practices in relation to certain new AI-integrated features of the Bing search engine that answer questions through its search interface.
Meanwhile, the CBH recently conducted a thematic study on the IT, privacy and other risks of AI in the banking and insurance sectors. The CBH focused in particular on:
- the detection of ‘data poisoning'; and
- how to choose the right patterns to avoid algorithmic discrimination.
Additionally, the CBH has established a regulatory sandbox for fintech companies in order to provide a safe harbour for testing and impact assessment.
2 AI market
2.1 Which AI applications have become most embedded in your jurisdiction?
Several initiatives are using AI or paving the way for its future use. The main technologies typically include:
- chatbot-based customer services;
- precision agriculture applications;
- predictive maintenance systems;
- fleet route optimisation programs;
- inventory forecasting; and
- medical diagnostics (in particular, cancer screening).
Surgical robots are also in operation in several hospitals.
There are frameworks in place that should help to underpin the future deployment of AI. Examples include:
- a test track for autonomous vehicles;
- an integrated health dataset; and
- a central identification service for public administrations.
Facial recognition systems and machine vision/image analysis solutions are also applied. In the logistics systems of some factories, AI continuously monitors order and stock levels, including shipments that are still in transit. These systems detect problems in the supply chain and suggest alternative routes or rescheduled deliveries. AI also controls the automated process of loading trucks to optimise the use of loading space. Workstations equipped with visual image processing supported by AI also handle quality assurance tasks during production.
However, the percentage of companies currently using AI in Hungary remains extremely low, at only 3%. This figure is higher – above 20% – if only Internet of Things applications are taken into account (according to Eurostat).
The use of AI in legal services is less common. In the services arena, the financial and insurance sectors are making strong use of AI-based solutions.
2.2 What AI-based products and services are primarily offered?
Primarily, the following AI-based products and services are offered in Hungary:
- The AI products of Big Tech multinationals are also available on the Hungarian market.
- The government-initiated, state-funded Artificial Intelligence Coalition is using the Machine Intelligence Designer platform to help industrial engineers to develop deep learning solutions for machine vision and time series analysis problems. Marketed products include:
- AI-enabled communication assistants;
- voice imaging services; and
- voice transcription services.
- In the banking sector, AI-based software helps to:
- detect fraud;
- ensure compliance with anti-money laundering legislation; and
- manage risk.
- AI-based technology for dermatology has been launched, making it possible for patients to have skin diseases diagnosed from the comfort of their own homes. This not only simplifies patients' lives but also allows a doctor to treat up to 40 cases per hour, increasing the efficiency of care. The technology is also regarded as the first public digital hospital in the European Union.
- Development activity is closely linked to the ambitions of industrial companies (eg, self-driving vehicles).
- In the labour market, AI is used to evaluate job applications.
2.3 How are AI companies generally structured?
From a legal perspective, AI companies have no specific peculiarities. As regards the ownership structure, AI companies are usually startups, established with a well-defined goal of developing and marketing AI-driven products and services. The typical structure is a company with a small number of shareholders, a few of whom contribute to the professional output of the company, while the others provide the necessary funding to finance research and development (R&D).
2.4 How are AI companies generally financed?
AI companies are typically startups whose shareholders are generally investment companies of financial institutions or other entrepreneurs that provide the necessary financing for R&D. The state also provides grants for the development of AI technology which may be used in partnerships between industrial players and research institutes (universities).
2.5 To what extent is the state involved in the uptake and development of AI?
The Hungarian state is playing an active role in the development of AI. The Hungarian government announced its AI Strategy 2020–2030 in 2020. The role of the state is that of regulator rather than investor. The state allocates budgetary resources to subsidise the development of AI technology, but not as a market investor. It is also attempting to deploy AI in public administration and other state activities as far as possible.
The Hungarian state considers the development and application of AI as a competitive advantage. It has therefore launched a broad programme of:
- data economy development;
- application deployment; and
- technology building.
The government is also:
- introducing the use of AI technology into services provided by the state; and
- establishing a framework for the responsible development and use of AI.
In this context, it aims to:
- develop and promote responsible data asset management;
- modernise its own processes; and
- prepare for data and AI governance, with a particular focus on:
- health sector developments; and
- maintaining security.
The government is also developing regulatory frameworks to safeguard the rights of data subjects and end users in both data use and technology development, while enhancing transparency and security in law enforcement.
The government is further promoting the development of AI-based business advisory services (chatbots) that can expand and support the pool of digitally advanced businesses in Hungary. The aim is that these tools will be integrated into a government voice-based AI platform.
A further priority for the government is to promote the use of AI by small and medium-sized enterprises (see question 10.2).
3 Sectoral perspectives
3.1 How is AI currently treated in the following sectors from a regulatory perspective in your jurisdiction and what specific legal issues are associated with each: (a) Healthcare; (b) Security and defence; (c) Autonomous vehicles; (d) Manufacturing; (e) Agriculture; (f) Professional services; (g) Public sector; and (h) Other?
(a) Healthcare
The state is focusing on regulatory issues and on increasing the accessibility of health data. Making health data assets available and exploitable through modern infrastructure is an explicit regulatory objective, which requires both the necessary supporting infrastructure and an appropriate regulatory environment. The regulatory environment is intended to facilitate the secondary use of health data.
(b) Security and defence
In the areas of security and defence, the regulatory and public administrative framework designed for automated systems will apply.
Specific security-related areas of focus include:
- the development of border control systems and complex identification systems;
- data-driven law enforcement and crime prevention using complex analysis;
- the introduction of existing AI technologies into the investigative process; and
- AI-based mapping of offender contact networks.
Specific defence-related areas of focus include:
- the automation of big data processing, information operations and decision-making systems;
- the implementation and development of predictive supply systems;
- the development of autonomous systems in all relevant operational domains (airspace, surface, space, cyberspace);
- the development of human-machine interaction on both sides;
- protection against AI-supported systems in all relevant operational spaces (including modelling and simulation); and
- developments aimed at protecting and analysing the defence-related elements of national data assets.
(c) Autonomous vehicles
The Hungarian regulatory environment allows for flexibility in the conduct of test operations for self-driving vehicles. The government has made the promotion of such developments an explicit public objective by providing appropriate test tracks and an innovation-friendly legal environment.
(d) Manufacturing
In the manufacturing sector, an explicit aim is to create a test environment for the analysis of manufacturing data, in order to:
- facilitate manufacturing-related data management;
- develop cybersecurity and data protection in manufacturing;
- implement data standardisation protocols to facilitate data analytics in manufacturing; and
- introduce manufacturing data to the data marketplace.
To increase the efficiency of manufacturing and promote the development of new manufacturing processes, it is necessary to:
- centralise research and match it to industrial needs; and
- set up an innovation ecosystem (to be undertaken by the future AI National Laboratory).
Short-term areas of focus: These include:
- parameter control of production processes;
- manufacturing decision support;
- quality control with AI tools;
- online product testing;
- layout and process simulation;
- factory optimisation;
- predictive maintenance;
- high-accuracy indoor and outdoor positioning systems with 5G and AI;
- robot control support with AI solutions;
- artificial vision manufacturing applications;
- open production IT architecture; and
- manufacturing in the city.
Medium-term areas of focus: These include:
- AI use in 6G networks;
- after-sales product tracking;
- AI-based data processing;
- service demand estimation and forecasting;
- drone management in the industrial domain (sample factory, sample area);
- automated management of critical machine-to-machine communication;
- extensive use of Internet of Things devices and private communication devices in the industrial domain (sample area);
- supply chains;
- product tracking;
- optimisation of manufacturing logistics;
- optimisation of manufacturing energy management; and
- manufacturing cybersecurity.
With regard to the small and medium-sized enterprise (SME) sector, which is a key engine of the Hungarian economy, there is a need to implement digital transformation projects to ensure that manufacturing SMEs can remain competitive.
(e) Agriculture
The aim is to implement and disseminate AI technologies in line with the digital transformation of the agricultural sector. Agriculture-related focus areas include:
- the development of the Agro-Data Framework by creating a cloud-based data information platform that allows producer (farm-level) and government data related to agriculture to be recorded, processed and stored in a uniform, structured way;
- the establishment of a Digital Agro-Innovation Centre to develop a digital innovation ecosystem and incubate startups using AI technology. This will include the creation of a testing ground for innovation and testing of robots based on the use of AI technology;
- revision of the regulations on the use of drones and autonomous machines in the agriculture sector; and
- the development of a crop forecasting service.
(f) Professional services
Professional services as such are not currently the focus of legislation. In highly regulated sectors such as financial services and insurance, the supervisory authority plays a key role in promoting, implementing and controlling AI-driven technologies. The same applies to the implementation of AI-driven technologies in the public administration. As for other professional services, there is no relevant specific national regulation and none is expected in the near future.
(g) Public sector
With regard to both public administration and case management, the focus is on developing automated decision making and automating processes as far as possible.
(h) Other
Other AI-related developments include the following:
- Energy: The aim is to utilise data assets in the energy sector in the best possible way and to develop personalised services as a result. Among other things, developments include:
- the rollout of smart meters;
- smart grid development;
- the development of data-driven energy market models;
- predictive maintenance;
- autonomous operation; and
- the development of smart energy supply and optimisation systems.
- Banking/insurance: Many AI-related projects have already been implemented in these sectors, such as:
- automatic email responses using language processing;
- support of credit analysis;
- identification by analysing transaction patterns;
- preliminary processing of incoming claims; and
- modelling of possible damage events.
- Telecommunications: Several AI-related projects have already been implemented in this sector, including:
- automated customer service with phonebots/chatbots;
- forecasting of failures in the network infrastructure; and
- calibration of network coverage by applying self-learning antennae.
4 Data protection and cybersecurity
4.1 What is the applicable data protection regime in your jurisdiction and what specific implications does this have for AI companies and applications?
The main laws in the data protection field are:
- the EU General Data Protection Regulation (GDPR); and
- Act CXII of 2011 on the Right of Informational Self-Determination and on Freedom of Information (‘Data Protection Act').
In order to implement the GDPR and the EU Law Enforcement Directive (2016/680), the Data Protection Act was completely amended in July 2018. It now contains three groups of provisions:
- additional procedural and substantive rules on data processing which falls under the scope of the GDPR;
- rules on data processing which does not fall under the scope of the GDPR; and
- rules on data processing for law enforcement, national security and national defence purposes.
As both the GDPR and the Data Protection Act are technology neutral, their provisions apply equally to AI companies. Further, both legislative instruments contain dedicated provisions on automated decision making and profiling, which typically involve large datasets and algorithmic AI software.
In general, the GDPR provides that data subjects have the right not to be subject to a decision based solely on automated processing which produces legal effects concerning them or similarly significantly affects them. AI companies may also be required to conduct data protection impact assessments.
AI companies should plan well in advance to implement data protection principles (eg, data minimisation, purpose limitation, transparency and accuracy) before rolling out new AI solutions, in line with privacy by design (rather than treating this as an afterthought).
AI companies should also be mindful of the European Data Protection Board opinion published in December 2024 which considers:
- when and how AI models can be considered anonymous; and
- whether and how legitimate interest can be used as a legal basis for the development or use of AI models.
4.2 What is the applicable cybersecurity regime in your jurisdiction and what specific implications does this have for AI companies and applications?
On 1 January 2025, the new Cybersecurity Act entered into force with the aim of:
- fully transposing the Second Network and Information Systems Directive (NIS2); and
- consolidating the scattered Hungarian cybersecurity legislation into a single statute.
The Cybersecurity Act provides a detailed cybersecurity framework, including:
- security risk classification of information systems;
- risk management;
- implementation of controls; and
- roles and responsibilities.
Further detailed rules will be set out in decrees issued by:
- the government;
- ministers; and
- the cybersecurity regulator.
The legislature has not yet specified the quantum of fines for non-compliance, but based on NIS2, these may range up to the following (see the illustrative calculation after this list):
- the higher of €7 million or 1.4% of a company's total worldwide annual turnover in the previous financial year, in the case of important organisations; and
- the higher of €10 million or 2% of a company's total worldwide annual turnover in the previous financial year, in the case of essential organisations.
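The ‘higher of' cap cited above is simply the maximum of a fixed amount and a percentage of worldwide annual turnover. The following minimal Python sketch is purely illustrative – the function name, the example turnover figure and the use of the NIS2 thresholds quoted above are assumptions pending the Hungarian implementing decrees:

```python
# Illustrative sketch of the NIS2-style fine cap described above:
# the higher of a fixed amount or a percentage of worldwide annual turnover.
# The thresholds mirror the figures quoted in the text and are assumptions
# pending the Hungarian implementing decrees.

def fine_cap_eur(annual_turnover_eur: float, essential: bool) -> float:
    """Return the maximum possible fine (in EUR) for an organisation."""
    if essential:
        fixed, pct = 10_000_000, 0.02   # essential organisations
    else:
        fixed, pct = 7_000_000, 0.014   # important organisations
    return max(fixed, pct * annual_turnover_eur)

# Example: an essential organisation with EUR 600 million worldwide turnover
# -> max(EUR 10 million, 2% x EUR 600 million = EUR 12 million) = EUR 12 million
print(fine_cap_eur(600_000_000, essential=True))
```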
Companies that fall under the scope of the Cybersecurity Act will be audited by cybersecurity auditors every two years; thus, ensuring NIS2 compliance is highly recommended.
Apart from compliance with the Cybersecurity Act, there are no specific implications for AI companies; but in line with the principle of proportionality, AI companies should establish proper mechanisms to resist state-of-the-art attacks, including:
- membership inference from the AI model;
- model inversion;
- regurgitation of training data; and
- reconstruction attacks.
The Data Protection Authority (DPA) has also established cybersecurity and data breach management as a General Data Protection Regulation (GDPR) enforcement priority in recent years. For example, in May 2020, the DPA imposed a GDPR fine of HUF 100 million on a Hungarian telecommunications company after an ethical hacker reported a security vulnerability to the company. Cybersecurity risks are particularly high at AI companies that manage large datasets, potentially including personal data, so attention should be devoted to this issue – especially in light of the DPA's enforcement practice.
5 Competition
5.1 What specific challenges or concerns does the development and uptake of AI present from a competition perspective? How are these being addressed?
The application of AI as a driving force of industry presents numerous competition law related challenges/concerns, such as the following:
- the close vertical interrelationships between companies dealing with AI;
- robust barriers to entry;
- difficulties in accessing resources for AI (eg, large datasets);
- challenges relating to interoperability and data portability;
- the high level of business secrecy; and
- the cost and deployment of licensing regimes.
In short, the barrier to entry to the AI market is very high.
Hungary has no national legislation addressing this issue, as competition law is highly harmonised across the European Union. The Digital Markets Act covers core platform services – including virtual assistants, cloud computing and online intermediation services – that largely depend on algorithms and AI systems. Among the main obligations regarding data management, core platform service providers designated as gatekeepers:
- may not use any data provided by business users to adjust their own AI offerings; and
- must provide business users with access to the data generated by their activities on the gatekeeper's platform.
6 Employment
6.1 What specific challenges or concerns does the development and uptake of AI present from an employment perspective? How are these being addressed?
There are several employment-specific challenges, including in relation to:
- transparency;
- bias/discrimination;
- delegation of employer tasks to AI;
- autonomous decision making; and
- liability.
Hungarian law does not specifically address these challenges. However, much can be concluded from the general principles of Hungarian employment law and data protection laws. Also, the Platform Work Directive aims to regulate employment relationships where many of the employee's obligations and conditions are set by algorithms.
From the employment perspective, some of the key takeaways include the following:
- An employer must notify an employee or job applicant about its use of AI tools.
- An employer may implement only those AI tools which are:
- absolutely necessary for the given employment purpose; and
- proportionate to the limitation of employees' personality rights.
- An employer must carry out a risk assessment for HR-related AI systems, ensuring:
- transparency for low-risk systems; and
- comprehensive compliance with the AI Act for high-risk systems.
- If an employer uses any AI systems that are prohibited under the AI Act, it must stop using such systems by February 2025.
- An employer that utilises AI systems must ensure a sufficient level of AI literacy among its staff by February 2025.
- Human oversight must be ensured: a person must be responsible for the final HR decision.
- An employer may be required to carry out a data protection impact assessment.
- An employer must implement proper policy/measures to prevent employee bias/discrimination (eg, seeking data points outside the existing organisation; ensuring that sensitive characteristics are not the decisive factor).
- Employees have the right not to be subject to a decision based solely on automated processing which produces legal effects concerning them.
- The works council must be notified prior to the implementation of AI tools.
- An employer must state in its internal policy:
- whether the use of AI tools is banned or permitted for the completion of employees' tasks; and
- who is responsible for the results of such tools.
7 Data manipulation and integrity
7.1 What specific challenges or concerns does the development and uptake of AI present with regard to data manipulation and integrity? How are they being addressed?
AI presents a risk of potential bias and discrimination if its use is based on manipulated or outdated training datasets. Likewise, if the dataset is not sufficiently representative, the AI will be prone to reproduce the same logic in different situations, potentially leading to discriminatory decisions.
AI companies must ensure that AI solutions are built in a robust way, to avoid manipulation or inconsistency in their predictions. Attention should be paid to this in both design and deployment, while continuously monitoring the model across its entire lifecycle. The other side of the coin involves the cybersecurity resilience of the AI system, as outlined in question 4.2.
In general, data manipulation and integrity in the context of AI are not specifically addressed in Hungarian law, apart from the general principle of integrity enshrined in the Hungarian data protection and cybersecurity legislation.
In this context, the Hungarian AI Strategy 2020–2030 aims to create a ‘data marketplace' with the possibility of certification to confirm the trustworthiness of AI companies' datasets.
8 AI best practice
8.1 There is currently a surfeit of ‘best practice' guidance on AI at the national and international level. As a practical matter, are there one or more particular AI best practice approaches that are widely adopted in your jurisdiction? If so, what are they?
Hungary has not yet developed national best practice guidance on AI. However, it is possible that this will be developed in the future, as the Artificial Intelligence Regulation and Ethics Knowledge Centre has been set up with the aim of resolving legal issues and matters of ethics relating to AI regulation.
8.2 What are the top seven things that well-crafted AI best practices should address in your jurisdiction?
There is no ‘one-size-fits-all' solution for all companies. However, to ensure well-crafted AI best practice, a company should:
- conduct risk impact assessments and testing across the entire AI operational lifecycle, from design to implementation;
- ensure that basic principles are embedded in the AI operation (eg, transparency, fairness, non-discrimination, human supervision, robustness);
- ensure proper data governance (including data quality control/certification, registration of datasets);
- develop an independent internal AI ethics committee;
- provide internal AI-related training;
- address the needs of different stakeholders; and
- understand the peculiarities of the local legal and business environment.
8.3 As AI becomes ubiquitous, what are your top tips to ensure that AI best practice is practical, manageable, proportionate and followed in the organisation?
To implement the right internal processes in relation to AI, it is important to:
- devise a tailormade action plan with clear targets;
- establish a dedicated AI team with the right allocation of tasks/responsibilities; and
- set the tone at the top of the organisation to promote a culture of ethical AI operation.
9 Other legal issues
9.1 What risks does the use of AI present from a contractual perspective? How can these be mitigated?
AI can be utilised as part of the contracting process, especially when it comes to the conclusion of a great number of contracts, as in the case of consumer transactions. Sophisticated AI may assist consumers in making choices; but it may also be a tool of pressure and manipulation which is difficult to detect. This primarily involves issues relating to:
- pre-contractual liability;
- the duty of disclosure;
- the duty to cooperate; and
- in consumer contracts in particular, fairness.
In specific sectors, such as financial services and insurance, the application of AI in the contracting process (eg, by credit scoring) is a critical issue which helps significantly in risk assessment but also raises issues of liability.
AI makes it possible for big commercial platforms to:
- monitor consumer feedback and complaints;
- detect performance problems suffered by sellers on the platform;
- promote products through the platform; and
- influence consumer choice through the platform.
This suggests that the position of such commercial platforms is different from that of auction houses, and that they may have greater exposure to liability scenarios.
The contracting party normally has little chance to reduce these risks, because he or she does not know the algorithms that are used by the other party. Especially in monopoly positions or in consumer contracts, even if the algorithm is transparent, this does not help to properly address the unequal bargaining position. However, such risks may be mitigated:
- through mandatory rules in contract law; and
- by shifting the burden of proof or risks to the party that is utilising the AI.
9.2 What risks does the use of AI present from a liability perspective? How can these be mitigated?
The primary risks include:
- the opacity of the causal link resulting in the loss;
- the large number of potential victims or large volume of losses to be compensated; and
- the difficulty of identifying the damaging conduct.
There may also be some liability gaps. Often, the damage that occurred or the interference with protected rights is difficult to detect because it remains hidden. As the loss is often a result of human-machine or machine-machine (software-software) interaction, or lies in the data:
- the risk cannot be predicted by the potential tortfeasor; and
- the actual tortfeasor (or tortfeasors) cannot be identified.
In the context of tort liability, the victim normally has little chance to mitigate the loss. Instead, the risk of loss can be mitigated on the side of the party producing or deploying the AI through:
- compliance with ex ante regulatory measures (if applicable);
- validation by the producer (or the service provider); or
- adherence to ethical standards and best practices (if there are any).
The Hungarian system of liability in tort provides:
- further ways to shift the liability to the producer or to an operator of AI systems instead of the actual wrongdoer; and
- ways to reverse the burden of proof as to fault or the causal link.
Thus, the structure of tort law and the rules of evidence in civil procedure can shift the allocation of loss to those that benefited from the activity which caused the damage.
9.3 What risks does the use of AI present with regard to potential bias and discrimination? How can these be mitigated?
Discrimination is among the biggest social risks associated with AI technology. There may be considerable disagreement in society as to which factors taken into account in the course of decision making could result in discrimination. From a legal and ethical perspective, a choice is discriminatory if it results in social exclusion, even if it is statistically justified.
The issues in Hungary are no different from those in other European societies. Discrimination may result from:
- the biased data used to train the algorithm; or
- the algorithm itself.
These risks can be mitigated by, among other things:
- ensuring human supervision;
- applying datasets which are sufficiently representative;
- testing and validating AI systems (including the datasets) – see the illustrative sketch after this list; and
- keeping adequate records (eg, programming, training methodologies and techniques used to build, test and validate AI systems), so that AI decisions can be traced back.
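As a purely illustrative sketch of what such testing might involve, one common heuristic (not a Hungarian legal standard) is to compare positive outcome rates across a sensitive attribute in a log of AI-assisted decisions; the field names and sample data below are assumptions for the example:

```python
# Illustrative only: compare positive outcome rates across a sensitive
# attribute in a log of AI-assisted decisions. The field names, sample data
# and the idea of flagging large rate gaps are assumptions for this example,
# not a legal test.

from collections import defaultdict

def outcome_rates(records, group_field, outcome_field):
    """Return the share of positive outcomes per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for record in records:
        group = record[group_field]
        totals[group] += 1
        if record[outcome_field]:
            positives[group] += 1
    return {group: positives[group] / totals[group] for group in totals}

decisions = [
    {"group": "A", "positive": True},
    {"group": "A", "positive": True},
    {"group": "A", "positive": False},
    {"group": "B", "positive": True},
    {"group": "B", "positive": False},
    {"group": "B", "positive": False},
]

print(outcome_rates(decisions, "group", "positive"))
# {'A': 0.666..., 'B': 0.333...} – a large gap warrants further investigation
```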
10 Innovation
10.1 How is innovation in the AI space protected in your jurisdiction?
AI innovations are generally protected in Hungary under IP laws, as follows:
- Copyright: AI technologies and their outputs can be protected by copyright if they meet the requirements for a work of authorship (ie, the bottom line remains that only human-made contributions are eligible for copyright protection).
- Sui generis database right: AI databases can be protected under the sui generis database rights of the database producer if it can be demonstrated that the database producer has made substantial financial, material or human efforts in order to prepare and use the contents of the database.
- Patent right: The scope of patent protection in the context of AI remains uncertain, as only humans can be inventors under the Patent Act.
- Trade secrets: AI can also be protected under the rules on trade secrets, as long as the AI technology holder implements appropriate technical and organisational measures to keep it confidential.
10.2 How is innovation in the AI space incentivised in your jurisdiction?
Promoting the use of AI by small and medium-sized enterprises (SMEs) is a priority for the government. Startups are supported by the state through:
- the provision of open datasets to support their development;
- the development of a network of early adopter partners;
- the development of AI-specific accelerators;
- the development of AI-specific investment funds; and
- sector-specific support.
The Hungarian government aims to:
- promote experimentation;
- build AI marketplaces;
- encourage development through AI innovation prizes; and
- support participation in university research projects.
Apart from this general framework, there are several initiatives that aim to promote innovation in the AI space:
- The Artificial Intelligence National Laboratory was established to fund AI-related research with a total of HUF 12 billion available until 2025.
- The Artificial Intelligence Coalition runs an online marketplace for AI providers, helping them to launch new AI projects. It has also established an accelerator centre where small and medium-sized enterprises can apply to receive support in developing their business and communication streams with the help of AI.
- The Central Bank of Hungary has deployed a regulatory sandbox for fintech to provide a safe harbour for testing and impact assessment.
- Hungary is participating in the European Information Technologies Certification Academy programme to recognise AI experts.
11 Talent acquisition
11.1 What is the applicable employment regime in your jurisdiction and what specific implications does this have for AI companies?
As a general principle, all employers must respect the ban on discrimination. AI is often used in the talent recruitment process – especially in sorting applications or filtering applicants according to given criteria. Consequently, AI systems should operate on a non-discriminatory basis, which must be ensured through high-quality raw data; companies must therefore train and test such systems properly before incorporating them into their HR processes.
There are also some other specific implications for AI companies and companies using AI in the talent acquisition process:
- Only those AI tools may be implemented which are:
- absolutely necessary for the given talent acquisition process; and
- proportionate to the limitation of the personality rights of employees/applicants.
- Human oversight must be ensured: a person must be responsible for the final HR decision.
- A data protection impact assessment is usually required before AI tools are utilised in the talent acquisition process.
- Employees have the right not to be subject to a decision based solely on automated processing which produces legal effects concerning them.
- AI tools must:
- collect only data which is relevant to the recruitment/employment purpose; and
- avoid collecting sensitive personal data.
11.2 How can AI companies attract specialist talent from overseas where necessary?
AI companies can attract specialist talent through Hungary's Digital Nomad Programme – a fast-track, simplified route to applying for a Hungarian White Card. This type of residence permit may be issued for up to one year and can be renewed for a further year.
To apply for a White Card, the applicant must pursue a foreign gainful activity – that is, he or she must own his or her own remote foreign business or work for a company located outside Hungary. A continuous stay in Hungary is not a requirement, but leaving Hungary for more than 90 days could result in withdrawal of the White Card. A minimum monthly income of €3,000 is required.
The process for obtaining a White Card is not overly complex. In general, the applicant needs:
- a valid passport;
- a Hungarian lease agreement;
- insurance covering Hungary;
- documents supporting his or her financial background; and
- proof of pursuing foreign gainful activity.
As a general rule, the immigration authority will decide on the application within 30 days of receipt.
If the applicant plans to stay in Hungary for a longer period, he or she can also apply for an intra-company transfer permit (valid for a maximum of three years) or an EU blue card (valid for a maximum of four years) under certain conditions.
12 Trends and predictions
12.1 How would you describe the current AI landscape and prevailing trends in your jurisdiction? Are any new developments anticipated in the next 12 months, including any proposed legislative reforms?
The Hungarian government is making significant investments in AI research and development. In addition, the goals of the Hungarian Artificial Intelligence Strategy include supporting AI startups by developing specific AI accelerators, investment funds and incubators. Since the introduction of ChatGPT in November 2022, attention has turned towards generative AI.
It is expected that the Hungarian legislature will table a law in May 2025 to implement the AI Act in Hungary. It is further anticipated that Hungary's AI Strategy 2020–2030 could be reformed in the next 12 months.
13 Tips and traps
13.1 What are your top tips for AI companies seeking to enter your jurisdiction and what potential sticking points would you highlight?
For AI companies seeking to enter the Hungarian market, the top tip is to devote the necessary attention to ensuring legal compliance. As Hungary is an EU member state, it shares the European Union's general pro-regulatory enthusiasm, in contrast to the more laissez-faire approach found in other jurisdictions. As the regulatory framework for AI products and services is still in a preparatory phase, AI companies should follow the general principles (eg, fairness, transparency, robustness, accountability) of the AI governance framework emphasised in several legal instruments, which will form the basis of future AI regulation.
AI companies should also be mindful of horizontal legislation (eg, data protection, cybersecurity, consumer protection and competition law), which could be a ticking timebomb in light of increased regulatory scrutiny and the new collective redress mechanisms introduced through the implementation of the EU Directive on Representative Actions. Certain markets will also be heavily impacted by sector-specific requirements, such as the expectations of the Central Bank of Hungary in the financial sector.
Finally, it is advisable to calculate the cost implications of compliance, as these may differ from those in other jurisdictions.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.