AI is a source of excitement and concern for stakeholders in the health care and life sciences industries. AI-enabled solutions are maturing and transforming various areas in these industries — such as drug discovery, disease diagnosis, clinical trials and precision medicine — motivating more life sciences companies to invest in AI tools.

While some life sciences companies are building their own in-house AI capabilities, doing so can be challenging: building and training an AI workforce is costly, and there is no guarantee the effort will produce an AI model that delivers the right solutions.

Therefore, many life sciences companies are partnering with AI technology companies to access AI algorithms, infrastructure, and experienced software scientists and engineers. Partnership, however, can expose both life sciences and AI companies to risks of losing rights in their own intellectual property.

For example, when a life sciences company develops a new drug through an AI model that was developed by an AI company and trained using the life sciences company's own large datasets, many questions can arise:

  • Who gets to claim the ownership of the new drug and its IP?
  • Who gets to own and use the trained AI model to develop other drugs?
  • Where does the liability fall when there is a legal or regulatory concern?

While there are no cookie-cutter solutions to these questions, life sciences and AI companies must carefully consider and negotiate these issues before the collaboration begins to set boundaries and minimize risks.

No One-Size-Fits-All Formula

The partnerships that life sciences companies may form to integrate AI capabilities into their research processes and business activities can take different forms.

First, a life sciences company may acquire an AI company whose technology aligns with its strategic intended uses. For example, BioNTech SE recently acquired British AI startup InstaDeep, with the goal of integrating AI in all aspects of its work, from target and lead discovery to manufacturing and product delivery.

A life sciences company may also form a strategic partnership or a joint venture with an AI company to co-develop drug candidates. In August 2022, Sanofi SA and Atomwise Inc. signed a strategic and exclusive research collaboration agreement to accelerate the discovery of drug targets.

Another approach a life sciences company may take is licensing the AI software or services for targeted uses. Recently, Google LLC's Google Cloud introduced two new cloud-based AI tools, the Target and Lead Identification Suite and the Multiomics Suite, to help life sciences companies accelerate drug discovery and precision medicine. Some early partners of these cloud-based AI tools include Cerevel Therapeutics LLC, Colossal Biosciences and Pfizer Inc.

Thus, there is no one-size-fits-all solution for how life sciences companies may collaborate with AI companies to access and use AI.

Key Components of AI

Many terms and conditions in AI licensing and transaction agreements are like those in traditional software licensing and transaction agreements. Unlike traditional software, however, AI involves multiple key components that may require specific arrangements of rights, obligations and liabilities, depending on the parties, technologies and uses.

The different uses of AI in the life sciences industry also present the parties with unique legal considerations. Thus, a starting point for the parties negotiating an AI licensing agreement may be identifying and addressing the contributions, rights, and risks of the following key components.

Training Data

Training data refers to the large datasets used to train, test and validate AI models that use machine learning or deep learning algorithms to perform specific tasks. High-quality, accurate and adequate training data is critical to the performance of an AI model.

In a partnership between a life sciences company and an AI company, the training data may include health or drug-research data collected by the life sciences company. The AI company may also provide training datasets for the initial training, fine-tuning and conditioning of the AI model.

AI Model

AI models are software programs or algorithms that use training data to learn to perform specific tasks — such as processing and analyzing data, recognizing patterns, making predictions or decisions, or creating new content. AI models come in various types, such as deep neural networks, linear discriminant analysis and support vector machines.

As more data is fed into the AI model, the AI model evolves through training and improves its performance.

AI Output

The term AI output refers to the result that the AI model produces after receiving input data. For example, an AI model may produce a prediction of a drug target, a classification of a physiological condition, an optimized process, or a hypothesis to test through traditional methods.

The life sciences company may use the AI output to develop a product, such as a disease diagnostic platform, a new drug or a personalized health or disease monitoring software application.

AI Know-How

AI know-how refers to the proprietary knowledge and expertise that companies gain through their own research and business activities, such as knowledge related to interpretation of data, methods for preparing training data and input data, and methods for training, testing, and validating the AI model.

Considerations for AI Licensing

Scope of Agreement

Like traditional software licensing agreements, an AI licensing agreement should clearly define the subject technology, its associated platform and infrastructure, the inputs and outputs, and the training datasets.

To this end, the parties may include a definition section to ensure there is a meeting of the minds as to the AI capabilities and intended outcome of the partnership.

Furthermore, the parties should consider defining the scope of the collaboration and use rights. Many AI models may be trained and used for different purposes.

For instance, the same foundational convolutional neural network model may be used for face recognition, animal or subject tracking, and medical image classification.

The purposes for which, and ways in which, the life sciences company or the AI company may use the AI technology, data or output can be restricted based on exclusivity, fields of use, geographical restrictions, legal or ethical prohibitions, and other terms of use as agreed upon by both parties.

In addition, the parties should negotiate a detailed agreement addressing each key component discussed above. The parties should ask who contributes each component, what rights each party has in it, and how it may be used during and after the partnership, keeping the parties' business goals and potential liabilities in mind.

Training Data

Training data may include health or drug-related data typically gathered and owned by the life sciences company. The AI company may also provide training data for fine-tuning and conditioning of the AI algorithm. Training the AI model may additionally require combining the parties' data with data from third parties.

The parties should carefully identify the sources of data and delineate the ownership, use rights and restrictions applicable to each source. The parties should then negotiate detailed terms and conditions to ensure that each party's data is adequately protected and that the use of the data complies with applicable laws and any third-party data provider requirements. The agreement should also specify which data shared in the partnership must be purged or returned at the conclusion of the partnership, and how, and whether the parties retain any rights to that data.

To illustrate, if the AI technology is cloud-based, the AI company may desire to use the training data or the output data from the partnership to improve its cloud-based AI technology to benefit all its partners. But the parties should contemplate whether such uses may be restricted by any legal, policy and business considerations.

AI Model

The AI company typically owns the AI model and grants the life sciences company a license to use it to generate an output. For the life sciences company, successful implementation of the licensed AI is critical to the success of the product development process. AI malfunction — due, for example, to faulty assumptions, improper implementation or maintenance, or an insecure platform — may expose the life sciences company to unintended and costly outcomes, such as inaccurate disease diagnosis or drug target identification, or data privacy breaches. But because AI continues to evolve, and because its development and implementation can involve other participants — third-party data, platform and infrastructure providers — allocating liability risks can be challenging.

The parties should agree on ground rules for the various participants — such as requiring acknowledgements and representations as to their obligations to comply with legal requirements, industry standards or performance assessment frameworks — and contemplate any potential remedial measures.

For the AI company, it is vital to maintain ownership of the AI model as well as the improvements to the AI model from the partnership to protect its key IP and stay competitive. But the life sciences company may also consider negotiating ownership of any custom AI model or features, or restricting use rights of the improved AI model and features.

AI Output

Typically, the life sciences company expects to own the AI output unless the parties negotiate an alternative arrangement. For example, the AI company may desire to have rights to the products developed through the collaboration to generate revenue from royalties.

Also, a customized AI model may itself be the desired output for the life sciences company, such as for developing a software product for diagnosing a specific disease or for identifying drug candidates based on a custom dataset. In such instances, the life sciences company should consider negotiating ownership of, or an exclusive license to, the customized AI model or features.

Data Protection

With respect to AI know-how, the partnership agreement should require the parties to acknowledge and protect each other's confidential information, clearly identify any trade secret, and set forth reasonable safeguards to maintain the trade secret rights and minimize risk of improper disclosure.

Such safeguard arrangements may make the parties more willing to share confidential information with each other, third parties and collaborators.

Other Unique Life Sciences Licensing Issues

The use of AI in the life sciences industry also involves other unique legal challenges that the parties should consider when drafting an agreement. High-quality data is key for generating valuable AI models and outputs. Securing such high-quality data is critical for the companies to gain advantage in the industry.

Competitors may seek this high-quality data to jump start their own AI development. Thus, the life sciences company should insist on highly secure storage, management and maintenance of proprietary training and output data.

Furthermore, despite the recent surge in open-source AI models and platforms that has driven progress in various fields, the parties should carefully consider whether to use open-source code that may come with restrictions on its use.

Some open-source licenses impose an obligation on licensees to disclose or license their modified code, and there may be related copyright infringement concerns as well. Squaring away these issues as early as possible during development will prevent unexpected roadblocks at a stage when modifying the AI model may no longer be possible.

Another industry-specific consideration is the body of regulations concerning the collection and use of sensitive health data. Before partnering with another company, the parties should discuss and agree on data privacy policies.

For example, the parties should negotiate measures to provide and use data in a de-identified format so that the AI model does not use or produce personally identifiable information.

Both parties should acknowledge and represent that data is provided and used in compliance with applicable law, such as the Health Insurance Portability and Accountability Act and U.S. Food and Drug Administration regulations on clinical trials and electronic health records data.

If any participant involved in the partnership is not familiar with the requirements for collecting and handling sensitive health information, they can expose all parties to the partnership to potential liabilities and penalties.

Therefore, the parties should set policies and guidelines on data privacy and negotiate terms for allocating liability appropriately between parties in case of a privacy breach.

AI systems and platforms are also vulnerable to increasingly frequent cybersecurity attacks, which can cause data breaches and system malfunctions that affect downstream products. The parties should identify potential cyber vulnerabilities, build robust systems and agree on best practices to reduce the number and impact of attacks.

Conclusion

AI has the potential to offer immense opportunities for life sciences companies to develop new drugs and treatments with greater efficiency and precision.

When life sciences companies and AI companies form partnerships to stay competitive and foster innovation, the parties should align interests as well as mitigate risks by strategically identifying and addressing the unique AI-related legal issues as early as possible.

The parties should also stay aware of the evolving regulatory landscape for data privacy and AI in the health care and life sciences industries, and be prepared to negotiate agreement addenda to remain compliant with applicable laws and policies.

Originally published by Law360.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.