Artificial intelligence (AI) is a key area of focus for the Information Commissioner’s Office (ICO). The ICO is already working on a related AI project focused on building its Auditing Framework, and one of its stated goals is to increase the public’s trust and confidence in how data is used and made available. In line with this, on 2 December 2019 the ICO published a blog post on explaining decisions made by AI. The ‘Explaining decisions made with AI’ guidance (the Guidance) has been prepared in collaboration with the Alan Turing Institute, the UK’s national institute for data science and artificial intelligence. The Guidance seeks to help organisations explain how AI decisions are made to those affected by them.
We have outlined some of the key takeaways below.
The Guidance is issued in response to the commitment in the government’s AI Sector Deal, a joint venture between the UK government and industry, to push the UK to the forefront of emerging technologies. It is also the result of ICO research showing that over 50 per cent of people are concerned about AI being used to make complex automated decisions about them.
The Guidance consists of three parts:
- Part 1 – The basics of explaining AI: outlines the definitions, the legal basis for explaining AI and the benefits and risks of doing so. This part of the Guidance is relevant to all members of staff involved in the development of AI systems.
- Part 2 – Explaining AI in practice: helps with the practicalities of explaining AI to individuals. It sets out a systematic approach to selecting and delivering explanations. This part of the Guidance is primarily aimed at assisting organisations’ technical teams, as well as their DPO and compliance teams.
- Part 3 – Explaining what AI means for your organisation: outlines the various roles, policies, procedures and documentation that can be put in place to ensure your organisation follows good practice in providing meaningful explanations to affected individuals. This part of the Guidance is aimed at senior management teams.
While both the General Data Protection Regulation (GDPR) and the Data Protection Act 2018 are technology-neutral, the Guidance lays out four key principles, rooted in the GDPR, that organisations should consider when developing AI decision-making systems:
- Transparency: being clear, open and honest about how and why an AI-assisted decision was made about an individual, and about how their personal data was used to train and test an AI system. The ICO here refers organisations to Recital 60 of the GDPR, which states that you should provide any further information necessary to ensure fair and transparent processing of personal data, taking into account the specific circumstances and context in which you process the data.
- Accountability: providing an explanation to the individuals affected that demonstrates that they were treated in a fair and transparent way when an AI-assisted decision was being made about them. The ICO here refers organisations to compliance with the other principles set out in Article 5 of the GDPR, including those of data minimisation and accuracy.
- Context: considering contextual factors (domain, impact, data, urgency and audience) to determine the correct approach in explaining AI-assisted decisions. The importance of context was previously stated in the ICO’s interim report released in June 2019 and remains key in the Guidance.
- Impact: assessing the AI model’s potential impact. This includes the risks of deploying the system, and the risks for the person receiving the AI-assisted decision. The assessment should also ask and answer questions about the ethical purposes and objectives of the AI project.
While the Guidance is not a legally binding instrument, it sets out good practice for explaining decisions made using AI systems to the individuals affected by them. The Guidance will also help organisations adhere to GDPR standards when considering developing or using AI systems.
The ICO is consulting on the draft version of the Guidance until 24 January 2020. It will use the feedback it collects to conduct further research and finalise the Guidance, and we expect to see the final version later in the year. Keep an eye on this blog for news on the finalised Guidance!