ARTICLE
28 January 2026

How Businesses Must Adapt To AI

Giambrone & Partners

Contributor

Giambrone & Partners is an international, multi-jurisdictional, multi-lingual law firm with many years' experience providing dynamic, solution-focused international legal advice across a range of jurisdictions. The firm's in-depth understanding of each country's local culture enables its lawyers to have clear insight into clients' expectations and objectives.
As AI encroaches on all aspects of the commercial landscape, it is critical to be aware of the legal developments.

As AI encroaches on all aspects of the commercial landscape, it is critical to be aware of the legal developments. However, this poses a challenge with regard to global regulation, as jurisdictions adopt divergent approaches and there is a belief in some quarters that regulation may compromise international competitiveness. In the current geopolitical landscape, businesses face the light-touch stance of the US as well as the EU's comprehensive AI Act, which came into force in 2024 with its implementation staged over two years. The UK maintains a sector-specific approach.

The government will publish two reports under the Data (Use and Access) Act 2025, as well as the results of a consultation on copyright and AI. There are also plans to balance the rights of AI developers and rights holders and to clarify UK copyright protection for AI-generated works. In addition, the Court of Appeal will hear Getty's secondary copyright infringement appeal in its dispute with Stability AI.

AI is transforming how companies compete, innovate and interact with customers. However, it is also leading to workforce restructuring. A recent survey found that 26% of large UK firms plan to reduce staffing in roles where AI can replicate human work, particularly administrative, clerical and junior professional roles.

However, there is a significant risk with AI that cannot be overlooked. Companies that have embraced AI must be aware that it can produce inaccurate information, known as hallucinations, in which false material is generated in response to a request. The misinformation can be relatively minor and easily spotted, but it is often highly plausible and completely convincing. This frequently happens when accurate information cannot be found in the underlying data.

In the US case Mata v Avianca, a personal injury claim, the injured client's legal representative used AI for his legal research, which produced extensive inaccurate information that he relied on to support the case. The judge found that many of the citations and quotes were non-existent and had been fabricated by the AI. Human fact-checking is therefore essential before relying on AI-generated material to make decisions. AI does not have the inherent understanding that an individual has and can repeat factual errors and misinformation introduced by the sources from which it draws information.

Sergio Filonenko Kibu, an Associate, commented: "Whilst the UK has adopted a comparatively flexible, principles-based approach to AI regulation, businesses should not underestimate their existing obligations under data protection, intellectual property, employment, and consumer protection laws. AI does not operate in a legal vacuum.

"Businesses implementing AI-driven tools should carefully review their contractual arrangements, particularly in relation to liability, intellectual property ownership, data usage, and reliance on third-party providers. These issues are increasingly central to commercial negotiations."

The government has been developing and introducing new laws and regulatory frameworks to ensure businesses can harness AI responsibly while protecting consumers and their workers. Amongst the new laws are:

  • The Data (Use and Access) Act 2025. This Act updates the UK's data protection landscape in relation to AI and copyright. It includes provisions about how AI developers access and use copyrighted material and requires government reporting on the economic impact of these policies on creators, developers and users, including small and medium businesses.
  • The Online Safety Act 2023. While not AI-specific, this Act imposes duties on internet platforms to tackle harmful and illegal online content, including AI-generated material that could facilitate fraud, misinformation or abuse. Platforms must implement risk assessments and content moderation systems, responsibilities that affect any business using or hosting user-generated AI content under UK jurisdiction.

For businesses, this means ensuring that AI-enabled services do not inadvertently facilitate harms such as identity fraud or manipulated media. If they are judged to have done so, they may face regulatory penalties.

  • The AI Regulatory Framework and Emerging Bills. The UK government has been pursuing a pro-innovation approach to AI regulation. Instead of a single AI Act, the existing law is being enhanced and sector regulators are being encouraged to interpret current legal frameworks to address AI risks. Regulators such as the Financial Conduct Authority (FCA) and the Bank of England are being urged to publish AI-specific guidance and conduct stress tests to assess systemic risk in financial markets.
  • Artificial Intelligence (Regulation) Bill. This Bill was reintroduced to Parliament in 2025. Proposals within it include creating an AI Authority to coordinate regulatory approaches, establishing independent sandboxes where businesses can test innovations safely and requiring companies to designate an AI officer responsible for AI compliance and oversight. If enacted, this would represent a significant shift toward a more structured AI regulatory regime.

AI also offers clear competitive advantages. Businesses that effectively adopt AI can personalise services, launch new products faster and make more informed strategic decisions.

Businesses will almost certainly need to hire legal and technical experts and adapt their policies to ensure data protection compliance under the UK GDPR and the Data (Use and Access) Act.

Sergio Filonenko Kibu is an Abogado (qualified Spanish lawyer) and is admitted to practise in England & Wales as a Registered Foreign Lawyer (RFL). He speaks English, Ukrainian, Spanish, Polish and Russian, and is an Associate based in the London office.

Sergio assists commercial clients with a range of contentious and non-contentious matters. He is a highly regarded, astute lawyer advising on challenging cross-border disputes, including debt collection, contractual disputes and breach of contract. He has advised on complex, high-value international financial disputes across several jurisdictions. He also assists businesses with drafting shareholder agreements, share purchases, mergers and acquisitions and security agreements.

Sergio has extensive experience assisting high-net-worth individuals with the acquisition of residential and commercial real estate, in excess of €10 million, across the globe. He also advises clients on matters involving AI as well as a range of inheritance issues.

He leads the Ukrainian division and also the Mercosur countries division in South America.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

