The ICO has published a blog post reflecting on the first year since publication of the guidance.
The ICO worked with the Alan Turing Institute to produce the guidance, which was published in May 2020. The guidance aimed to "develop a framework for explaining processes, services and decisions delivered by AI, to improve transparency and accountability". It was produced as a best practice document to help organisations of various sizes from different sectors. The ICO consulted with 56 organisations that make decisions about their customers using personal data and AI. The group included SMEs, public sector organisations and established technology organisations.
The ICO says that the feedback was positive. Respondents said that the guidance provided a good foundation for improving awareness and understanding of the need for explanations relating to AI systems, and of how to construct those explanations. They also said that the guidance clearly defined the key elements needed to build explainable AI systems, and that where further detail was given, it was also easy to understand.
However, respondents also felt that the guidance was too long. In response, the ICO has added "at a glance" sections alongside the guidance as a summary document, which gathers the fundamental elements of the guidance in one place and makes them easier to find quickly.
The ICO says that it will add case studies to the guidance, so that organisations can reference practical examples of good practice in action. The consultees indicated that this would be a valuable addition, and the ICO is inviting interested parties to submit examples of good case studies that it might use. The full blog post, including details of how to submit case studies, is available on the ICO website.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.