A new transparency index evaluating 10 AI foundation models has sounded a clarion call for increased disclosure in the AI industry. Researchers from Stanford, MIT and Princeton contend that without greater transparency about the inner workings, training data and impacts of advanced AI tools, understanding and mitigating the associated risks will remain out of reach. Self-regulation has proven inadequate, they say, as leading companies have grown more secretive, citing competitive and safety concerns. The researchers argue that transparency should be a central focus of AI legislation, emphasizing the urgency of making AI development more comprehensible and accountable.
A damning assessment of 10 key AI foundation models in a new transparency index is stoking new pressure on AI developers to share more information about their products, and on legislators and regulators to require such disclosures.

Why it matters: The Stanford, MIT and Princeton researchers who created the index say that unless AI companies are more forthcoming about the inner workings, training data and impacts of their most advanced tools, users will never be able to fully understand the risks associated with AI, and experts will never be able to mitigate them.