The Council of Europe Commissioner for Human Rights has recently published recommendations for improving compliance with human rights regulations by parties developing, deploying or implementing artificial intelligence (AI).
The recommendations are addressed to Member States, but the underlying principles concern all stakeholders who significantly influence the development and implementation of an AI system.
The Commissioner has focussed on 10 key areas of action:
- Human rights impact assessment (HRIA) – Member States should establish a legal framework for carrying out HRIAs. HRIAs should be implemented in a similar way to other impact assessments, such as data protection impact assessments under the GDPR. HRIAs should review AI systems in order to discover, measure and/or map human rights impacts and risks. Public bodies should not procure AI systems from providers that do not facilitate the carrying out or publication of HRIAs.
- Public consultations – Member States should allow for public consultations at various stages of engagement with an AI system, and at a minimum at the procurement and HRIA stages. Such consultations would require the publication of key details of AI systems, including details of the operation, function and potential or measured impacts of the AI system.
- Human rights standards in the private sector – Member States should clearly set out the expectation that all AI actors should “know and show” their compliance with human rights principles. This includes participating in transparent human rights due diligence processes that may identify the human rights risks of their AI systems.
- Information and transparency – Individuals subject to decision-making by AI systems should be notified of this and should have the option of recourse to a professional without delay. No AI system should be so complex that it does not allow for human review and scrutiny.
- Independent oversight – Member States should establish a legislative framework for independent and effective oversight over the human rights compliance of AI systems. Independent bodies should investigate compliance, handle complaints from affected individuals and carry out periodic reviews of the development of AI system capabilities.
- Non-discrimination and equality – AI systems should be subject to the highest level of scrutiny in the context of avoiding discrimination, in particular for groups that face a higher risk of their rights being disproportionately impacted by AI (e.g. children, the elderly and persons with disabilities). This is especially important in the law enforcement context, in order to avoid profiling.
- Data protection and privacy – AI systems should fairly balance the legal bases for processing with the rights of individuals. Member States should impose higher safeguards for the processing of special categories of data by AI systems.
- Freedom of expression, freedom of assembly and association, and the right to work – Member States should prevent technology monopolies from forming, in order to avoid a concentration of AI expertise and power and a consequent negative impact on the free flow of information. Member States should also track the number and types of jobs created and lost as a result of AI developments, and mitigate any job losses.
- Remedies – AI systems must always remain under human control. Responsibility and accountability for human rights violations associated with AI must always lie with a natural or legal person. At a minimum, individuals should be able to obtain human intervention. Effective remedies should be implemented to ensure individuals have redress for any harm suffered as a result of AI systems.
- Promoting ‘AI literacy’ – Member States should consider establishing a consultative body within government to advise on all AI-related matters.
These recommendations set ambitious targets for regulating AI in some respects. It is important to remember that, while they are not binding, these recommendations will likely inform the development of future regulation in this area. The topic of regulating AI has been of particular interest to EU legislators and supervisory bodies recently. See our TLD posts on: i) the draft Ethics Guidelines published by the European Commission, here; ii) the Centre for Data Ethics and Innovation’s call for evidence in relation to its review of algorithmic bias and online targeting, here; and iii) the European Union Agency for Network and Information Security study on AI, here. Keep an eye on the TLD blog for further updates in this area.