As large language models (like ChatGPT) and other types of generative AI grow in popularity, researchers are starting to uncover their vulnerabilities and the ways they can be exploited for nefarious purposes. Earlier this month, researchers at Robust Intelligence published a blog post detailing how they were able to prompt the NVIDIA AI Platform (designed for businesses to customize and deploy their own generative AI models, for example by integrating them with customer service chatbots) into revealing personally identifiable information from a database.

Though NVIDIA has begun to address and resolve the issues the researchers identified, the research indicates that, broadly, AI "guardrails" (the rules, filters, and other mechanisms designed to ensure safe and ethical use of the applicable software) may not be sufficient to protect against undesirable outputs, especially where the AI model is trained on "unsanitized" data.
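
To make the idea of a guardrail concrete, the following is a minimal, purely illustrative Python sketch of a pattern-based output filter; the function names, patterns, and example text are hypothetical and are not drawn from NVIDIA's platform or the Robust Intelligence research. It also illustrates the limitation noted above: a rule-based filter can only catch the kinds of sensitive data its rules anticipate, so personal information absorbed from unsanitized training data can still slip through.

    import re

    # Hypothetical illustration: a simple pattern-based "guardrail" that scans a
    # model's response for common PII formats before it is shown to the user.
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.\w{2,}"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def filter_output(model_response: str) -> str:
        """Redact text matching known PII patterns; everything else passes through."""
        redacted = model_response
        for label, pattern in PII_PATTERNS.items():
            redacted = pattern.sub(f"[REDACTED {label.upper()}]", redacted)
        return redacted

    # A formatted identifier is caught:
    print(filter_output("Contact the customer at jane.doe@example.com."))
    # But personal details that do not match any anticipated pattern (a name, an
    # address, notes memorized from unsanitized training records) pass straight through:
    print(filter_output("The customer, Jane Doe, lives at 12 Elm St and owes $4,120."))
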

Key Takeaways:

  • Even advanced AI systems can be vulnerable to data leaks and other exploits
  • There may be severe legal consequences for organizations that fail to prevent AI models from revealing personally identifiable information
  • Organizations need robust internal guidelines and policies detailing how AI (as well as sensitive data) should and should not be used
  • In addition to those written policies, organizations should maintain meaningful human oversight to regulate the use of AI


This alert provides general coverage of its subject area. We provide it with the understanding that Frankfurt Kurnit Klein & Selz is not engaged herein in rendering legal advice, and shall not be liable for any damages resulting from any error, inaccuracy, or omission. Our attorneys practice law only in jurisdictions in which they are properly authorized to do so. We do not seek to represent clients in other jurisdictions.