I attended the ABA's AI and Robotics National Institute earlier this month. The institute was presented by the ABA's Science and Technology Law Section in collaboration with the IP Law Section. Here are some of my key takeaways:
The Time to Begin Learning AI is Now
AI is quickly becoming too ubiquitous to ignore, and the best first step you can take to harness its benefits (and mitigate its risks) is to build your awareness and understanding of it as a tool. Even if you have no intention of using AI yourself, chances are your employees, vendors, or customers do, and their use will inevitably impact you. Begin building your AI awareness and competence now by discussing AI with your team and vendors to identify potential vulnerabilities. This awareness will help you assess risks, develop effective AI policies, and make informed decisions that align with your goals.
Balance Reputation and FOMO
The excitement surrounding AI can make it easy for businesses to experience FOMO ("fear of missing out"), especially if their competitors are adopting AI while they are not. In the rush to keep up, some businesses may overlook common risks associated with AI, such as copyright issues, ownership disputes, or data breaches, in favor of gaining a competitive edge by being a first mover. While some companies may downplay these risks, reputation is critical for everyone. It's therefore extremely important to weigh those potential risks against the benefits of early adoption. There's no right or wrong answer, but the best choice with respect to risk is always the one you make yourself.
Be Cautious with AI Stacks and Indemnity
Many AI service providers now offer tools built on top of other AI tools or "foundational models," an arrangement known as an "AI stack." These stacks can offer greater functionality, but it's crucial to remember that service providers typically can't offer you better terms on matters like confidentiality or indemnity than what they've secured for themselves. If you discover a provider is using an AI stack, carefully review the terms and conditions between your vendor and the foundational model provider, as well as any other agreements your vendor has with the AI tool vendors it relies on. Don't assume their indemnity will protect you; in fact, it's best to assume it won't.
Use Enterprise Models and Stage Implementation to Reduce Risks
If you, your team, or your vendors are using AI tools, keep in mind that with "free" versions, any data entered may become part of the model and could be publicly accessible. In contrast, enterprise-level AI products often provide stronger confidentiality protections, which are better suited for business use. If your AI policy permits employees to use AI tools, ensure they are using an enterprise solution tailored to your company, and don't hesitate to negotiate with providers if their terms don't meet your needs. If you're concerned about which data is processed by the AI, consider staging the implementation so that only specific types of data are input. This approach keeps sensitive information separate from the AI, and the staging plan can be incorporated into your AI policy. While many AI providers claim data is "de-identified" within the model, it's best to avoid inputting any data related to intellectual property, such as trade secrets, into the model at all.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.