It has been a good week for progress towards the safe use of AI.
We have seen the UK Prime Minister, Rishi Sunak, give impetus to London Tech Week with his promise of an AI Taskforce that will engage and collaborate to define a balanced regulatory framework, alongside greater international co-operation, including a planned global AI conference.
Close on the heels of this comes the welcome news that the EU Parliament has finally passed the draft EU Regulation on AI (the EU AI Act), not without taking account of recent well-publicised concerns about generative AI in particular. This proposed principles-based regulatory framework, which focuses on high-risk AI systems and services, now includes added protections covering generative AI, biometric surveillance systems, and the use of copyrighted material in AI training data.
The UK communications regulator, Ofcom, has also been quick to showcase its planned AI strategy this week, announcing that it is working with companies developing and integrating generative AI tools that might fall within the scope of the UK's Online Safety Bill, to understand how they are proactively assessing the safety risks of their products and implementing effective mitigations to protect users from potential harm. AI ethics needs to be considered at every stage of the system lifecycle, from initial design through to commercialisation and beyond.
So the guardrails are coming, albeit more slowly than the pace of technological development. Hopefully they will not derail innovation and investment, as some fear, but will instead provide clear (ideally global) guidance so that AI use is fair, transparent and proportionate. It is perhaps not an overstatement to say that society as we know it may depend on it.
There is not yet widespread understanding of where AI may be used internally within organisations or incorporated into internal systems (users could include employees, suppliers and sub-contractors), so more internal oversight will need to be built in. Going forward, managing the ethics and other risks associated with AI should be a key part of an organisation's internal governance, in the same way that other technology-related risks, such as data privacy and cyber security, have been incorporated.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.