“Today’s new BSI AI auditing standard marks a significant milestone, bringing structure to a hugely important area that has, until now, lacked a clear framework. Like GDPR in its early days, it’s a step towards answering vital questions: what does genuine independence in AI auditing look like? Who, exactly, is qualified to carry it out, and what skills are required to do it well? It’s an important start, but it’s not a silver bullet. AI models are inherently unpredictable, and the pace of innovation makes it very difficult to stay ahead of the risks.
“If businesses treat certification as a proxy for safety, they risk creating a false sense of security. As with any security or safety standard, it is no substitute for truly understanding how systems behave in the real world and the risks they create. With that in mind, this standard is the starting line, not the end point. Effective AI assurance goes beyond periodic audits; it requires continuous monitoring, human oversight, and controls built deep into an organisation’s culture - much like the safety checks applied across the entire value chain in aviation or healthcare. Used well, this standard can help build public trust and drive innovation, but real safety lies in how organisations stay compliant over time rather than simply ticking a box.”