On 15 October 2019, the Information Commissioner’s Office (ICO) released the latest in its series of blogs on developing its framework for auditing artificial intelligence (AI). The blog (here) focuses on AI systems and how data subjects can exercise their rights of access, rectification and erasure in relation to such systems. Below, we summarise some of the key takeaways and our thoughts on the subject.
Rights relating to training data
Organisations need data in order to train machine learning models. While it may be difficult to identify the individual to whom training data relates, it may still be personal data for the purposes of the General Data Protection Regulation (GDPR), and so must still be considered when responding to data subject rights requests. Provided no exception applies and reasonable steps have been taken to verify the identity of the data subject, organisations are obliged to respond to data subject access requests in relation to training data. The right to rectification may also apply but, because a single inaccuracy is less likely to have a direct effect on a data subject who is one record among many in a large data set, organisations should prioritise rectifying personal data that may have a direct effect on the individual.
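By way of illustration only, responding to an access request against a tabular training set might start with something like the sketch below. The CSV layout and the subject_id column are assumptions on our part; in practice, training data is often pseudonymised, so linking records to a verified requester may require look-up tables rather than a simple match.

```python
# Illustrative sketch only: locate a verified data subject's rows in a
# stored training set so they can be disclosed in response to an access
# request. The file name and "subject_id" column are hypothetical.
import pandas as pd

def extract_subject_records(training_csv: str, subject_id: str) -> pd.DataFrame:
    """Return every training row linked to the verified data subject."""
    df = pd.read_csv(training_csv)
    return df[df["subject_id"] == subject_id]

records = extract_subject_records("training_data.csv", "DS-12345")
records.to_csv("dsar_extract_DS-12345.csv", index=False)
```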
Complying with requests from data subjects to erase training data may prove more challenging. If an organisation no longer needs the personal data because the machine learning model has already been trained, the ICO advises that the organisation must fulfil the erasure request. However, organisations may need to retain training data where the machine learning model has not yet been trained. The ICO advises that organisations should consider such requests on a case-by-case basis, but does not offer clarity on the factors they should take into account.
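The branching the ICO describes could, in principle, be encoded directly into an organisation's request-handling workflow. The sketch below is our own illustration: the model_is_trained flag, the subject_id column and the review queue are all hypothetical, and the ICO's blog does not prescribe any particular implementation.

```python
# Illustrative sketch of case-by-case handling of erasure requests against
# training data, following the distinction drawn in the ICO blog. All names
# (subject_id, review_queue) are hypothetical.
import pandas as pd

def handle_erasure_request(df: pd.DataFrame, subject_id: str,
                           model_is_trained: bool, review_queue: list) -> pd.DataFrame:
    if model_is_trained:
        # Training is complete, so the personal data is no longer needed:
        # the ICO indicates the request should be fulfilled.
        return df[df["subject_id"] != subject_id]
    # The model has not yet been trained: the ICO suggests a case-by-case
    # assessment, so escalate for human review rather than erase automatically.
    review_queue.append(subject_id)
    return df
```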
Rights relating to personal data involved in AI systems during deployment
Organisations should be mindful that, in some situations, data subject rights requests concerning personal data processed by AI systems during deployment will need to be handled differently from requests concerning training data. For example, if an AI system's output contains personal data, the accuracy of that data will be important, as the system could make a decision based on inaccurate personal data that adversely affects the data subject. The ICO therefore advises treating requests to rectify personal data in such circumstances with a higher priority than equivalent requests relating to training data.
Rights relating to the model itself
Personal data may be contained within the AI model itself, whether by design or by accident. Where it is there by design, an organisation may be able to fulfil an access request from a data subject without altering the model. However, if a data subject exercises their right to rectification or erasure, it may be necessary to re-train the model on an updated data set. Where personal data has ended up in an AI model by accident, third parties may be able to analyse the way the model behaves and infer personal data about individuals in the training set. It will be difficult for organisations to give effect to data subjects' rights in such a scenario. The ICO therefore recommends that organisations proactively evaluate the possibility that personal data may be inferred, so as to minimise the risk of accidental disclosure.
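One simple way to carry out that kind of evaluation, offered here as our own suggestion rather than anything the ICO prescribes, is a confidence-gap membership test: compare the confidence a model assigns to records it was trained on with records it has never seen. A wide gap suggests the model has memorised its training data in a way an attacker could exploit to work out who was in the training set. The sketch below uses synthetic data and scikit-learn purely for illustration.

```python
# Illustrative confidence-gap membership test on synthetic data. A wide gap
# between member and non-member confidence suggests the model leaks
# information about who was in its training set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Probability the model assigns to the true label for training rows
# (members) versus held-out rows (non-members).
member_conf = model.predict_proba(X_train)[np.arange(len(y_train)), y_train]
nonmember_conf = model.predict_proba(X_test)[np.arange(len(y_test)), y_test]

print(f"mean member confidence:     {member_conf.mean():.3f}")
print(f"mean non-member confidence: {nonmember_conf.mean():.3f}")
```

A wide gap on a real system would be a signal to investigate memorisation before deployment.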
Comment
Organisations developing AI systems should consider how they would deal with data subject rights requests at each stage of a system's development. A case-by-case assessment may be needed, but organisations that have thought the question through and implemented processes to handle such requests will be better placed than those that only confront it once a request arrives.
On 28 October 2019, the ICO drew the consultation period for the ICO Auditing Framework for AI to a close. The ICO will use the feedback it has collected over the last eight months to conduct further research and finalise its draft guidance. The ICO plans to publish its formal consultation paper in January 2020. Keep an eye on this blog for updates.