Artificial Intelligence

Artificial intelligence (AI) is a key area of focus for the Information Commissioner’s Office (ICO). The ICO is already working on a related project focused on building its AI Auditing Framework, and one of its goals is to increase the public’s trust and confidence in how data is used and made available. In line with this, on 2 December 2019, the ICO published a blog on explaining decisions made with AI. The ‘Explaining decisions made with AI’ guidance (Guidance) has been prepared in collaboration with the Alan Turing Institute, the UK’s national institute for data science and artificial intelligence. The Guidance seeks to help organisations explain how AI decisions are made to those affected by them.
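To make concrete what an explanation of an AI-assisted decision can look like, below is a minimal sketch in Python, assuming a hypothetical linear credit-scoring model: each feature’s signed contribution to the score is surfaced so that the most influential factors can be communicated to the affected individual. The weights, feature names and applicant data are invented for illustration and are not drawn from the Guidance.

    # Illustrative only: per-feature contributions for a hypothetical linear
    # credit-scoring model. Real explanation methods (e.g. SHAP) are more
    # involved, but the goal is the same: surface the factors behind a decision.

    weights = {"income": 0.4, "years_at_address": 0.2, "missed_payments": -0.9}
    baseline = 0.5  # intercept: the score before any applicant features apply

    def explain(applicant: dict) -> list[tuple[str, float]]:
        """Return each feature's signed contribution, largest impact first."""
        contributions = [(name, weights[name] * applicant[name]) for name in weights]
        return sorted(contributions, key=lambda c: abs(c[1]), reverse=True)

    applicant = {"income": 0.8, "years_at_address": 0.3, "missed_payments": 1.0}
    score = baseline + sum(value for _, value in explain(applicant))
    for name, value in explain(applicant):
        print(f"{name}: {value:+.2f}")
    print(f"score: {score:.2f}")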


In March 2019, the Information Commissioner’s Office (ICO) released a Call for Input on developing the ICO’s framework for artificial intelligence (AI). The ICO simultaneously launched its AI Auditing Framework blog to provide updates on the development of the framework and encourage organisations to engage on this topic with the ICO.

On 23 October 2019, the ICO released the latest blog in this series, which outlines the key elements that organisations should focus on when carrying out a Data Protection Impact Assessment (DPIA) for AI systems that process personal data.
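For context, Article 35(7) of the GDPR sets out the minimum content of any DPIA, and a DPIA record for an AI system could be structured around those elements. The sketch below is an illustrative Python data structure built on that Article, not an ICO template.

    # Illustrative structure for recording a DPIA, mirroring the minimum
    # content required by Article 35(7) GDPR. Not an ICO-prescribed format.

    from dataclasses import dataclass

    @dataclass
    class DPIARecord:
        processing_description: str      # systematic description of the processing
        necessity_assessment: str        # necessity and proportionality of the processing
        risks_to_individuals: list[str]  # risks to data subjects' rights and freedoms
        mitigating_measures: list[str]   # measures envisaged to address those risks

    dpia = DPIARecord(
        processing_description="Training and deploying a credit-risk model on applicant data",
        necessity_assessment="Automated scoring is proportionate to the lending decision",
        risks_to_individuals=["discriminatory outcomes", "re-identification of training data"],
        mitigating_measures=["bias testing", "data minimisation", "human review of edge cases"],
    )
    print(dpia.risks_to_individuals)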


On 15 October 2019, the Information Commissioner’s Office (ICO) released the latest in its series of blogs on developing its framework for auditing artificial intelligence (AI). The blog focuses on how data subjects can exercise their rights of access, rectification and erasure in relation to AI systems. Below, we summarise some of the key takeaways and our thoughts on the subject.

Rights relating to training data

Organisations need data in order to train machine learning models. While it may be difficult to identify the individual to whom training data relates, it may still be personal data for the purposes of the General Data Protection Regulation (GDPR), and so must still be considered when responding to data subject rights requests under the GDPR. Provided no exemption applies and reasonable steps have been taken to verify the identity of the data subject, organisations are obliged to respond to data subject access requests in relation to training data. The right to rectification may also apply, but because a single inaccuracy is less likely to have a direct effect on a data subject who is part of a large data set, organisations should prioritise rectifying personal data that may have a direct effect on the individual.

Complying with requests from data subjects to erase training data may prove more challenging. If an organisation no longer needs the personal data because the machine learning model has already been trained, the ICO advises that the organisation must fulfil the erasure request. However, organisations may need to retain training data where the model has not yet been trained. The ICO advises that organisations should consider such requests on a case-by-case basis, but it does not provide clarity on the factors organisations should weigh.
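As a rough sketch of what operationalising these rights against a training set might involve, the illustration below locates a data subject’s records for an access request and removes them for an erasure request, flagging the model for retraining review. The record schema and identifiers are hypothetical; the ICO does not prescribe any particular implementation, and real pipelines would also need to reach backups, derived data sets and caches.

    # Hypothetical handling of access and erasure requests against a training
    # set stored as a list of records keyed by a subject identifier.

    from dataclasses import dataclass

    @dataclass
    class Record:
        subject_id: str
        features: dict

    training_set = [
        Record("subj-001", {"age": 34, "postcode": "EC1A"}),
        Record("subj-002", {"age": 51, "postcode": "SW1A"}),
    ]

    def access_request(subject_id: str) -> list[Record]:
        """Right of access: return every training record tied to the subject."""
        return [r for r in training_set if r.subject_id == subject_id]

    def erasure_request(subject_id: str) -> bool:
        """Right to erasure: delete the subject's records, reporting whether
        any were removed so the model can be queued for retraining review."""
        before = len(training_set)
        training_set[:] = [r for r in training_set if r.subject_id != subject_id]
        return len(training_set) < before

    if erasure_request("subj-001"):
        print("Records erased; flag model for retraining review.")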

On 12 September 2019, the Committee of Ministers of the Council of Europe announced that an Ad hoc Committee on Artificial Intelligence (CAHAI) would be set up to consider the feasibility of a legal framework for the development, design and application of artificial intelligence (AI). On the same day, the United Kingdom’s data protection supervisory authority, the Information Commissioner’s Office (ICO), released the latest in its series of blogs on developing its framework for auditing AI, this time focusing on privacy attacks on AI models. With interest in the development of an AI legal framework increasing, what does the ICO consider to be the data security risks associated with AI?
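One well-known class of privacy attack on machine learning models is membership inference, in which an attacker exploits the tendency of models to be more confident on records they were trained on in order to guess whether a given record was in the training set. The sketch below is a deliberately simplified illustration of that idea; the model interface and confidence threshold are assumptions for the example, not the ICO’s analysis.

    # Simplified membership inference test: models are often more confident on
    # records they memorised during training, so unusually high confidence on a
    # target record hints that it was a training set member.

    def looks_like_training_member(model, record, threshold: float = 0.95) -> bool:
        """Guess membership: confidence at or above the threshold suggests the
        record may have been part of the training data."""
        return model(record) >= threshold

    # Toy 'model' that is overconfident on a memorised record.
    memorised = {"id": 42}
    toy_model = lambda rec: 0.99 if rec == memorised else 0.60

    print(looks_like_training_member(toy_model, memorised))  # True
    print(looks_like_training_member(toy_model, {"id": 7}))  # False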

The Information Commissioner’s Office (ICO) and the Alan Turing Institute have recently released an interim report (Report) outlining their approach to best practices in explaining artificial intelligence (AI) to users. The Report is of particular relevance to operators of AI systems who may be considering their duties under the General Data Protection Regulation 2016/679 (GDPR).

The Council of Europe Commissioner for Human Rights has recently published recommendations for improving human rights compliance by parties developing, deploying or implementing artificial intelligence (AI).

The recommendations are addressed to Member States, but the principles concern all stakeholders who significantly influence the development and implementation of an AI system.

The Commissioner has focussed on 10 key areas of action, including:

    1. Human rights impact assessment (HRIA) – Member States should establish a legal framework for carrying out HRIAs. HRIAs should be implemented in a similar way to other impact assessments, such as data protection impact assessments under GDPR. HRIAs should review AI systems in order to discover, measure and/or map human rights impacts and risks. Public bodies should not procure AI systems from providers that do not facilitate the carrying out of or publication of HRIAs.
    2. Member State public consultations – Member States should allow for public consultations at various stages of engaging with an AI system and, at a minimum, at the procurement and HRIA stages. Such consultations would require the publication of key details of AI systems, including details of the operation, function and potential or measured impacts of the AI system.
    3. Human rights standards in the private sector – Member States should clearly set out the expectation that all AI actors should “know and show” their compliance with human rights principles. This includes participating in transparent human rights due diligence processes that may identify the human rights risks of their AI systems.
    4. Information and transparency – Individuals subject to decision making by AI systems should be notified of this and have the option of recourse to a professional without delay. No AI system should be so complex that it does not allow for human review and scrutiny.
    5. Independent oversight – Member States should establish a legislative framework for independent and effective oversight over the human rights compliance of AI systems. Independent bodies should investigate compliance, handle complaints from affected individuals and carry out periodic reviews of the development of AI system capabilities.

The Centre for Data Ethics and Innovation (CDEI) is inviting submissions to help inform its review of online targeting and bias in algorithmic decision making.

Online targeting

Online targeting refers to providing individuals with relevant and engaging content, products, and services. Typically, users experience targeting in the form of online advertising or personalised social media feeds.

Researchers at the Information Commissioner’s Office (ICO) have started a series of blogs discussing the ICO’s work in developing a framework for auditing artificial intelligence (AI). The first blog in the series discusses the degree and quality of human review in AI systems – specifically, the circumstances in which human involvement can be considered meaningful.
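By way of illustration only, one common pattern for keeping human involvement meaningful is a confidence gate: automated decisions are issued only when the model is sufficiently confident, and every other case is routed to a human reviewer. The threshold and decision structure below are hypothetical.

    # Hypothetical confidence gate routing low-confidence predictions to a
    # human reviewer rather than issuing them automatically.

    from typing import NamedTuple

    class Decision(NamedTuple):
        outcome: str
        confidence: float
        decided_by: str

    def decide(outcome: str, confidence: float, threshold: float = 0.9) -> Decision:
        """Issue the model's decision only above the threshold; otherwise
        escalate the case for meaningful human review."""
        if confidence >= threshold:
            return Decision(outcome, confidence, "model")
        return Decision("pending human review", confidence, "human")

    print(decide("approve", 0.97))  # decided by the model
    print(decide("reject", 0.55))   # escalated to a human reviewer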

On April 10, 2019, U.S. lawmakers introduced the Algorithmic Accountability Act (the AAA). The AAA would empower the Federal Trade Commission (FTC) to promulgate regulations requiring covered entities to conduct impact assessments of algorithmic “automated decision systems” (including machine learning and artificial intelligence) to evaluate their “accuracy, fairness, bias, discrimination, privacy and security.”
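The AAA does not prescribe particular metrics, but one simple quantity such an impact assessment might examine is the demographic parity difference: the gap in positive-outcome rates between groups. The decisions and group labels in the sketch below are fabricated for illustration.

    # Illustrative fairness check for an impact assessment: the demographic
    # parity difference is the gap in approval rates between two groups.
    # The decisions and group labels are fabricated; the AAA does not
    # mandate any particular metric.

    decisions = [  # (group, approved)
        ("A", True), ("A", True), ("A", False), ("A", True),
        ("B", True), ("B", False), ("B", False), ("B", False),
    ]

    def approval_rate(group: str) -> float:
        outcomes = [approved for g, approved in decisions if g == group]
        return sum(outcomes) / len(outcomes)

    gap = approval_rate("A") - approval_rate("B")
    print(f"Group A rate: {approval_rate('A'):.2f}")    # 0.75
    print(f"Group B rate: {approval_rate('B'):.2f}")    # 0.25
    print(f"Demographic parity difference: {gap:.2f}")  # 0.50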

The European Union Agency for Network and Information Security (ENISA) recently published its report on ‘Security and privacy considerations in autonomous agents’.

Artificial intelligence (AI) and complex algorithms offer significant opportunities for innovation and interaction, but they also present a number of challenges that future EU-level policy frameworks should address – especially in light of the amount of data now available.

One of the objectives of the study was to provide relevant insights on both security and privacy for future EU policy-shaping initiatives.