ARTICLE
14 April 2025

Protecting Data And Avoiding Pitfalls With AI Assets During M&A


The availability of vast amounts of operational and consumer data, together with advancements in artificial intelligence, presents opportunities for dealmakers to drive growth by enhancing operations and developing new products and services.

But that ocean of data, and AI itself, also presents several pitfalls that dealmakers must navigate.

Data and AI can enable shifts in telemedicine and personalized health care, developments in smart logistics and supply chain management, and improvements in digital marketing and consumer products. And they will be key drivers of deal activity in 2025.

In a 2024 KPMG survey of US-based technology companies and private equity firms, 91% of corporate and 60% of private equity respondents reported that AI in target company selection is either crucial for strategic decision-making or plays a supportive role.

In the same survey, when asked what strategies companies were considering to adjust and thrive in the evolving AI landscape, corporate and private equity respondents broadly supported integrating AI into existing products and services, investing in infrastructure for AI and machine learning technology, and hiring talent with skills in AI.

But acquiring data and AI assets comes with legal risks and costs. Dealmakers will be required to comply with a developing patchwork of US state privacy laws and minimize the risk of data privacy, cybersecurity, and AI enforcement actions.

Antitrust enforcement could present other challenges. Federal Trade Commission Chair Andrew Ferguson and Gail Slater, who heads the Department of Justice's antitrust division, have signaled that both agencies will continue scrutinizing merger and acquisition activity, including deals involving Big Tech.

However, Ferguson has said he won't impede M&A deals unless he's confident the FTC can prove the deals violate antitrust law, because he believes M&A activity is essential for economic growth.

Considering these challenges, businesses exploring such opportunities must develop effective legal strategies that account for the potential legal pitfalls associated with acquiring or selling AI technology. These include:

Analyzing data and AI governance programs. Dealmakers should carefully review target companies' governance practices. Companies with robust AI governance practices that factor in the relevant administrative and technical regulatory measures may provide greater confidence that risks have been discovered and addressed.

When acquiring AI companies, it's crucial to analyze governance practices to determine their appropriateness for the development, deployment, and maintenance of AI systems.

Heeding enforcement trends. Dealmakers acquiring data assets or AI-driven companies should identify whether the target companies engage in data processing that has been the target of enforcement actions.

For example, the California attorney general recently announced an investigative sweep focused on the processing of location data. And the FTC and state attorneys general have brought enforcement actions against companies accused of making deceptive claims about the capabilities of their AI tools. Analyzing prior enforcement actions can be vital to identifying risk and informing strategies to protect against the risk of similar actions.

Confirming rights to process personal data. When acquiring data-rich targets, dealmakers should confirm that the target companies hold the rights necessary to process personal data for their actual use cases. Processing personal data without obtaining such rights may violate privacy laws and, if it results in unauthorized processing, create risks under data breach notification laws.

Additionally, unauthorized processing of personal information in AI models could lead to operational and liability risks. Some regulators have required businesses to delete data allegedly processed without authorization, and AI models trained on such data, to resolve allegations of unlawful data processing.

Planning for integration and post-closing. Compliance remains important post-close. Corporate acquirers should ensure that material data privacy, cybersecurity, and AI risks are all addressed during integration. Integration plans should also account for risks created by the integration itself, for example by testing privacy compliance mechanisms to confirm they operate correctly from a technical perspective, particularly when IT systems are merged.

Private equity firms should consider whether portfolio companies require "clean-up services" to protect their exit value. Portfolio companies with potentially unsophisticated cybersecurity or privacy practices may need training to respond to security incidents or help with remediating privacy risks identified during the transaction.

Businesses that develop strategies to mitigate data privacy, cybersecurity, and AI risks will be better equipped to protect their data and AI assets in a challenging dealmaking environment.

Originally published by Bloomberg Law

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
