In a blog post published on 20 July 2021, Alister Pearson, the ICO's Senior Policy Officer - Technology, introduced a new beta version of the ICO's AI and Data Protection Risk Toolkit. Mr Pearson explains that the ICO decided to publish the toolkit because it recognised that assessing compliance with data protection principles can be challenging in the context of AI. He says that the work draws upon the ICO's Guidance on AI and Data Protection, as well as its co-badged guidance with The Alan Turing Institute, "Explaining Decisions Made With AI".
The toolkit contains risk statements to help organisations using AI to process personal data understand the risks to individuals' information rights, Mr Pearson explains. It also provides suggestions on best practice organisational and technical measures that can be used to manage or mitigate the risks and demonstrate compliance with data protection law.
The toolkit reflects the auditing framework developed by the ICO's internal assurance and investigation teams. Mr Pearson says that if an organisation is using AI to process personal data, then by following this toolkit, it can have "high assurance" that it is complying with data protection legislation.
The toolkit is a beta version, following the successful launch of the alpha version in March 2021. The ICO is now beginning the next stage of the toolkit's development, which involves testing it against live examples of AI systems that process personal data to see how practical and useful it is for organisations. The ICO plans to release the final version of the toolkit in December 2021. The full blog post, together with information on how to provide feedback to the ICO, is available on the ICO's website.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.