The Information Commissioner’s Office (ICO) and the Alan Turing Institute have recently released an interim report (Report) outlining their approach to best practices in explaining artificial intelligence (AI) to users. The Report is of particular relevance to operators of AI systems who may be considering their duties under the General Data Protection Regulation (EU) 2016/679 (GDPR). In particular, operators of AI systems should be aware that articles 22 and 35 of the GDPR may be engaged by AI systems that involve automated decision-making.

The research commissioned for the Report identified three key themes:

  • the importance of context in explaining AI-related decisions;
  • the need for improved education and awareness around the use of AI for decision-making; and
  • the challenges in deploying explainable AI, such as cost and the pace of innovation.

Explanation and AI utility

In this area, evidence was gathered from prospective users, who were presented with a scenario designed to establish whether they preferred a more accurate AI system with limited explanation or a less accurate system that was more easily explained. In a healthcare scenario, users preferred the more accurate system even though it was less well explained to them. However, where AI systems are deployed in recruitment or criminal justice, users expressed a stronger preference for an explanation of the decisions those systems make.

As well as prospective users, the research involved a number of key individuals from industry, regulators and academia (Stakeholders). The Stakeholders broadly agreed with the findings from the users, although they emphasised the need for explanations to promote user trust, help eliminate bias in AI systems, and improve on current human practices, which are subject to biases of their own.


Education and awareness

The Report also found that education around AI systems is key to building confidence in the decisions those systems make. This education could take the form of school lessons, TV and radio programming, or public awareness campaigns. The Report identified as important topics to cover how AI systems work, the benefits they offer, and clearing up common misconceptions around AI.

The Stakeholders were more reserved in their approach, although they acknowledged the need to clear up misconceptions around AI. One concern raised was that providing more information might confuse prospective users further.

Key challenges

The key challenge identified by the Report is the cost of complying with any transparency and explanation requirements imposed on operators of AI systems. Concerns were also raised about the potential for information overload if information about AI systems is provided in the same way as terms of use policies are today. The Report highlights the need, as far as possible, for operators of AI systems to translate complex decision-making processes into a form appropriate for a lay audience.


Comment

AI has been a hot topic for regulators recently. Last year, the ICO identified AI as a key priority in its Technology Strategy. AI has also recently been the subject of regulatory focus from the Council of Europe, Centre for Data Ethics and Innovation, European Union Agency for Network and Information Security and European Commission. The Report offers an interesting perspective on some of the concerns regulators will be addressing in the AI space. We expect further updates from the ICO and other regulators in this area and we will continue to keep you up to date on any relevant developments. If you would like to help the ICO in the development of its AI regulatory framework, you can contact the relevant ICO team here.