ARTICLE
23 July 2025

Human In The Loop: Making AI Work Without Losing Control

IR Global

Contributor

IR Global is a multi-disciplinary professional services network that provides legal, accountancy and financial advice to both companies and individuals around the world. Our membership consists of the highest-quality boutique and mid-sized firms serving the mid-market: firms that are focused on partner-led, personal service and have extensive cross-border experience.

How can businesses in your jurisdiction adopt AI and automation responsibly, and what guidance are you offering to ensure regulatory compliance?

Implementing AI systems can be a great way to increase productivity and reduce costs and resources. These systems allow businesses to automate tasks and cut down on research and writing time. AI can also produce and interpret large data sets in a fraction of the time it would take a human to do so, freeing businesses to dedicate their valuable time to more critical, high-level functions.

While AI systems are efficient, they are not always accurate. Moreover, one must remember that AI is a machine and cannot always pick up on human nuance, nor can it make decisions considering factors outside the algorithm.

It is critical to have a human presence reviewing every decision and every document produced by AI in order to ensure accuracy. Human oversight is essential to implementing AI systems responsibly.

In Massachusetts, the Attorney General has advised that any use of AI must comply with current consumer protection, anti-discrimination, and data privacy laws. Our firm can help businesses interpret these laws as they apply to specific uses of AI. We are also closely monitoring legislative developments to ensure that if any specific AI regulations pass, we can accurately advise our clients as to how they can best comply.

What are the key risks of implementing AI, from data privacy to ethical concerns, and how can you help businesses in your jurisdiction navigate these complexities?

Data privacy and security risks are particularly high when implementing AI systems, and they threaten both businesses and their clients or consumers. On the one hand, AI systems review and hold a large amount of a business's confidential and sensitive information, such as business models, financials or future plans. On the other, an AI system may hold sensitive and private information on a business's customers. If businesses are not careful, this information could be exposed through data leaks. There is also potential for internal misuse of information if comprehensive data protections are not in place for the AI system.

We work with our clients to perform a risk assessment when implementing AI into any of their systems or practices. This risk assessment allows us to specifically tailor a program to best protect against these risks.

In addition to this assessment, we always ensure that a business's AI practices comply with the strictest data privacy laws, such as the California Consumer Privacy Act and the European Union's General Data Protection Regulation. Even if a business does not operate in these jurisdictions, we want to do the most to protect the privacy of our clients and their customers.

It is also essential to ensure that our clients invest in a closed AI system, particularly if any confidential information is being inputted into the system. Closed AI systems keep their code and data internal, which allows for better data security and confidentiality. They also allow for a business to have greater control over the technology.

Are you seeing any trends in AI-driven disputes or liability concerns? How can firms assist clients in addressing potential AI-related litigation or regulatory scrutiny?

Emerging areas of AI-driven litigation derive from questions such as "who owns the product that the AI is producing?" or "who is responsible for any mistakes or misrepresentations made by AI?".

The growing use of AI in everyday business practices opens the door for intellectual property litigation, particularly when it is used for purposes such as content creation, patent filings, or software development. Businesses must be careful to understand who has the ownership rights to the AI work product before choosing to utilise these systems. A lack of due diligence could open the door to future litigation.

There is also a growing trend in consumer protection litigation relating to the use of AI. As more and more consumer interactions are outsourced to AI systems, businesses should be sure they are making accurate representations to customers. There needs to be human oversight of all communications between AI systems and consumers, as well as transparency with consumers that they are interacting with AI. Without that oversight and transparency, there is a risk that AI systems will make representations or promises to consumers that the business cannot honour or support.

In Massachusetts, AI systems must still comply with Massachusetts General Laws, Chapter 93A, which regulates business practices for consumer protection. Any misrepresentation made by AI to consumers is subject to the statute and may constitute a violation of Chapter 93A, exposing the business to litigation.

Our firm strives to assist our clients in implementing AI systems that provide them with the greatest benefit but with the least amount of legal risk. We assess the legal consequences of different AI practices under the current regulatory framework and anticipate how trends in litigation and regulation will shape the structure of AI systems in the future. Our practical, risk-based approach to AI implementation allows our clients to feel confident that they have chosen an AI system that works best for them and their business's needs.

Key Takeaways

  • While AI increases productivity and enhances data processing, it is not infallible. In Massachusetts, responsible adoption requires robust human oversight to ensure outputs are accurate and reflect human nuance. Businesses must also comply with consumer protection, anti-discrimination, and data privacy laws, even as the regulatory environment evolves.
  • AI systems often process highly sensitive internal and customer data. To mitigate security and privacy risks, firms should conduct tailored risk assessments, adopt closed AI systems where appropriate, and comply with stringent privacy laws, such as the GDPR and CCPA, even when not legally required in those jurisdictions.
  • Emerging disputes revolve around intellectual property rights and liability for AI-generated errors or misstatements. Businesses must ensure clear ownership of AI-generated content and maintain transparency in AI-led consumer interactions to avoid breaching Massachusetts consumer protection laws (Chapter 93A).

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
