ARTICLE
25 October 2023

Recent Assessment Shows Want Of Transparency Of AI Models

Foley & Lardner

Contributor

Foley & Lardner LLP looks beyond the law to focus on the constantly evolving demands facing our clients and their industries. With over 1,100 lawyers in 24 offices across the United States, Mexico, Europe and Asia, Foley approaches client service by first understanding our clients’ priorities, objectives and challenges. We work hard to understand our clients’ issues and forge long-term relationships with them to help achieve successful outcomes and solve their legal issues through practical business advice and cutting-edge legal insight. Our clients view us as trusted business advisors because we understand that great legal service is only valuable if it is relevant, practical and beneficial to their businesses.

A recent transparency index evaluating 10 foundational AI models has sounded a clarion call for increased disclosure in the AI industry. Researchers from Stanford, MIT, and Princeton contend that without greater transparency on the inner workings, training data, and consequences of advanced AI tools, understanding and mitigating the associated risks will remain elusive. Self-regulation has proven inadequate, as leading companies have grown more secretive, citing competitive and safety concerns. The researchers argue that transparency should be a central focus of AI legislation, emphasizing the urgency of making AI development more comprehensible and accountable.

A damning assessment of 10 key AI foundation models in a new transparency index is stoking new pressure on AI developers to share more information about their products, and on legislators and regulators to require such disclosures.

Why it matters: The Stanford, MIT and Princeton researchers who created the index say that unless AI companies are more forthcoming about the inner workings, training data and impacts of their most advanced tools, users will never be able to fully understand the risks associated with AI, and experts will never be able to mitigate them.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
