The House of Commons Science, Innovation, and Technology Committee published its interim report following its inquiry into the governance of AI.
AI has experienced rapid change and development in 2023, with AI solutions such as ChatGPT fast becoming household names. Given this unprecedented development, an inquiry was deemed necessary to consider the potential risks and challenges such rapidly evolving technology poses and the issues arising from its governance.
The report, published on 31 August 2023, identifies 12 primary challenges which must be addressed through domestic policy and international engagement. Our Technology Law experts outline these challenges below.
The 12 Challenges
The 12 challenges highlighted in the report include:
- Bias. AI can introduce or perpetuate biases that society may find unacceptable.
- Privacy. AI can allow individuals to be identified and their personal data to be misused.
- Misrepresentation. AI can generate material that deliberately misrepresents someone's character, behaviour or opinions.
- Access to data. AI requires large datasets which are held by few organisations, raising competition concerns.
- Access to compute. AI requires significant computing power, which is limited to a few organisations, again raising competition concerns.
- Black box. Some AI systems cannot explain why they produce particular results, raising concerns about transparency and explainability.
- Open-source. Views differ on whether the software code used to make AI should be publicly available (open-source).
- Intellectual property and copyright. Where AI solutions make use of proprietary materials, the rights holder must be protected.
- Liability. Government policy should consider whether AI developers and providers should be liable where AI causes harm.
- Employment. Government policy should manage the disruption caused to the employment market by AI.
- International coordination. AI is a global technology, and developing governance frameworks to regulate its uses must be an international undertaking.
- Existential. If AI presents a threat to human life, governance should afford protection for national security.
The UK is set to host the first global AI summit this autumn, where key countries, leading tech companies and researchers will meet to agree on the collective measures required to mitigate the risks arising from AI technology. The report recommends that the 12 challenges identified form the basis of the risk analysis conducted at the summit.
What Should Businesses Operating Within The Tech Sector Do Next?
The Government's response to the interim report is due by 31 October 2023. In the meantime, the inquiry continues, with a final report to follow in due course.
AI and other emerging technologies are demonstrating fast-paced development and change, and the Government's report on governance concerns highlights the growing demand for proactive regulation and governance in this area.
Organisations operating within the tech sector should ensure that they remain up to date with any proposed changes to the regulation of AI or other emerging technologies, and with the legal and practical consequences these may have for their business.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.