Late last year, we reported that the Information Commissioner’s Office (ICO) had published draft guidance to assist organisations with explaining decisions made about individuals using AI. Organisations that process personal data using AI systems are required under the GDPR to provide data subjects with a transparency notice explaining the logic involved, as well as the significance and the envisaged consequences of such processing.

On 20 May 2020, following its open consultation, the ICO finalised the guidance (available here). This is the first guidance issued by the ICO that focuses on the governance, accountability and management of the various risks arising from the use of AI systems to make decisions about individuals.

As with the draft guidance, the final guidance is split into three parts. We have outlined the key takeaways for each part below.

Part 1 – The basics of explaining AI

Part 1 is directed at data protection officers (DPOs) and compliance teams.

This part explains the rationale for giving data subjects thoughtful explanations of AI-assisted decisions. An explanation not only satisfies legal compliance and internal governance requirements, but can also build trust in an organisation and lead to better outcomes for society by enabling individuals to engage meaningfully in the decision-making.

However, the guidance notes that organisations must take care not to provide too much information in explanations, because doing so may divulge commercially sensitive details, such as algorithmic trade secrets. It may also allow individuals to ‘game’ or exploit an organisation’s AI model if they learn enough about the rules that underpin it.

The guidance identifies six main types of explanation that an organisation could use when providing a notice to data subjects, each of which gives further information either about the reasoning behind a decision or about the governance and management of the AI system. The six categories of explanation are:

  1. The rationale for the decision
  2. Who is responsible for the development, management and implementation of the AI system
  3. The data used in the decision and how it is used
  4. Steps in the design and development of the system to ensure fairness in the decision
  5. Steps in the design and development of the system to ensure its safety and reliability
  6. Steps in the design and development of the system to monitor the impact of the decision

The guidance explains the six types of explanation in detail, including what information each should contain and when a particular type of explanation is likely to be useful for a specific organisation or in connection with a particular decision.

Part 2 – Explaining AI in practice

Part 2 is mainly directed at technical teams, but it may also be useful for DPOs and compliance teams.

This part details how an organisation can design and deploy appropriately explainable AI systems and deliver suitable, audience-specific explanations in transparency notices.

This part sets out six detailed tasks for organisations to follow, starting with the inception and design of the AI system and finishing with building an explanation. The guidance recommends that organisations consider the types of explanation that may be needed before starting the design process for, or procurement of, the AI system, and prioritise those that are most important in the context of the proposed system. A detailed case study is provided in this part and at Annexe 1 of the guidance.

Part 3 – What explaining AI means for your organisation

Part 3 is mainly directed at senior management teams, but it may also be useful for DPOs, compliance teams and technical teams.

This part outlines the various individuals who may be involved in drafting an explanation and examines the functions each may perform in the drafting process. The guidance notes that these are generic descriptions and may not fit every specific case.

The guidance also reviews the internal policies and procedures an organisation should have in place to ensure consistency and standardisation in explanations, such as awareness-raising, training and impact assessments. Again, the guidance notes that these suggestions are only a guide, and an organisation may choose to include more or less detail in certain areas.

Comment

While this guidance is not a statutory code of practice, it serves as a useful, practical guide to good practice when explaining AI-assisted decisions, whether made with or without human input. The guidance provides detailed examples of explanations and considers the uses and interpretability of various algorithms, which can help organisations ensure they adhere to GDPR standards.