In March 2019, the Information Commissioner’s Office (ICO) released a Call for Input on developing the ICO’s framework for artificial intelligence (AI). The ICO simultaneously launched its AI Auditing Framework blog to provide updates on the development of the framework and encourage organisations to engage on this topic with the ICO.

On 23 October 2019, the ICO released the latest blog in the series. The blog outlines the key elements that organisations should focus on when carrying out a Data Protection Impact Assessment (DPIA) for AI systems that process personal data.

We have outlined below some of the key takeaways.

DPIAs and AI

Under Article 35(1) of the GDPR, organisations are required to carry out a DPIA where a type of processing is likely to result in a high risk to the rights and freedoms of individuals.

Article 35(3) of the GDPR sets out three types of processing which will always require a DPIA:

  • systematic and extensive evaluation of personal aspects relating to individuals which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the individual or similarly significantly affect the individual;
  • large scale processing of special category data; or
  • systematic monitoring of a publicly accessible area on a large scale.

The ICO notes that the use of AI to process personal data will therefore usually trigger the legal requirement to complete a DPIA. In conducting a DPIA, organisations should consult their data protection officers and, where appropriate, individuals and relevant experts. The ICO must be consulted if a high risk is identified and cannot be mitigated.

What needs to be assessed in a DPIA

The ICO reiterates that a DPIA needs to describe the nature, scope, context and purposes of any processing of personal data. It also needs to detail how and why AI is going to be used to process the data.

A DPIA should be undertaken by organisations at the early stages of development of any project which involves AI and should feature the following:

  1. A systematic description of the processing activity: data flows and the stages at which AI and automated decisions may affect individuals should be outlined. The ICO suggests that, due to the complexity of AI systems, organisations maintain two versions of the DPIA: the first, a thorough technical description for specialist audiences; the second, a high-level description of the processing and an explanation of the logic of how the personal data inputs relate to the outputs affecting individuals. The DPIA should also set out the roles and obligations of data controllers and processors. If the AI system is outsourced to an external provider, the organisation should assess whether the parties are joint controllers and, if so, collaborate in the DPIA process as appropriate.
  2. Assessment of necessity and proportionality: the use of AI for processing personal data needs to be based on a legitimate purpose. Organisations need to demonstrate that the processing of personal data by an AI system is a proportionate activity. Here, organisations should undertake a balancing exercise between the interests of the organisation and the rights and freedoms of individuals. In particular, organisations need to consider any detriment to individuals that could follow from bias or inaccuracy in the algorithms and data sets being used.
  3. Identifying risks to rights and freedoms: organisations should consider other relevant legal frameworks beyond data protection. For example, AI may result in discrimination based upon historical patterns in data, which could fall foul of equalities legislation.
  4. Measures to address the risks: data protection officers and other information governance professionals should be involved in AI projects as early as possible to ensure that risks can be identified and addressed early in the AI system lifecycle. The DPIA should include any safeguards put in place to mitigate the identified risks and it should document the residual levels of risk posed by the processing. These risks must be referred to the ICO for prior consultation if they remain high.
  5. DPIA – a ‘living’ document: while DPIAs must be carried out before the processing of personal data begins, they should be treated as ‘living’ documents, subject to regular review and re-assessment if the nature, scope, context or purpose of the processing changes.


As AI becomes increasingly prevalent, regulators are continuing to perform a balancing act, ensuring compliance with data protection laws without stifling innovation. The interaction between AI and the GDPR engages a number of complex legal issues. It comes as no surprise that the ICO listed AI as one of its three strategic priorities.

The ICO blog provides useful guidance for organisations when conducting a DPIA for projects involving AI. The ICO plans to publish its final consultation paper on the AI Auditing Framework no later than January 2020. Keep an eye on this blog for news on the final consultation paper!