On 18 December 2018, the European Commission published draft ethics guidelines for trustworthy AI. The guidelines are voluntary, constitute a working document to be updated over time, and have been opened to a stakeholder consultation process.

The guidelines recognise that there are benefits to be gained from AI, but that humankind can only reap those benefits if the technology can be trusted (in other words, if it constitutes trustworthy AI). An overarching principle of the guidelines is that AI should be human-centric, with the aim of increasing human well-being.

Trustworthy AI is defined as having two components:

  1. respect for fundamental rights, ethical principles and societal values – an “ethical purpose”; and
  2. technical robustness and reliability.

The guidelines set out a framework for implementing and operating trustworthy AI, aimed at stakeholders who develop, deploy or use AI.

Ethical purpose

AI should have an ethical purpose, meaning that it should be developed, deployed and used with respect for fundamental rights, principles and values.

The key fundamental rights to be respected by AI include:

  1. respect for human dignity;
  2. freedom of the individual;
  3. respect for democracy, justice and the rule of law;
  4. equality, non-discrimination and solidarity, including the rights of persons belonging to minorities; and
  5. citizens’ rights.

The guidelines also set out principles and values to be respected, which require AI to improve well-being, do no harm, preserve human autonomy, be just and fair, and operate transparently.

Specific ethical concerns are also highlighted in the guidelines. Input on these concerns is encouraged as part of the stakeholder consultation process.

The realisation of trustworthy AI

The guidelines set out a non-exhaustive list of 10 requirements, derived from the above rights, principles and values, that AI must meet in order to be considered trustworthy:

  1. accountability;
  2. data governance;
  3. design for all;
  4. governance of AI autonomy;
  5. non-discrimination;
  6. respect for and enhancement of human autonomy;
  7. respect for privacy;
  8. robustness;
  9. safety; and
  10. transparency.

As regards robustness, trustworthy AI must be able to cope with errors arising at any phase of the AI system’s life cycle. It must operate accurately, and its accuracy must be confirmable and reproducible in order to prevent unintended discrimination. Trustworthy AI must also be resilient to attack, and must have a fallback plan in case problems arise.
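The guidelines do not prescribe any particular engineering technique for making accuracy confirmable and reproducible. As a minimal illustrative sketch only – the function, model and data below are hypothetical toy stand-ins, not anything taken from the guidelines – the following Python fragment shows one common practice: fixing a random seed so that an evaluation run, and the accuracy figure it reports, can be repeated exactly.

  import numpy as np

  def evaluate_accuracy(seed: int) -> float:
      # A fixed seed makes the whole run deterministic and repeatable.
      rng = np.random.default_rng(seed)
      X = rng.normal(size=(1000, 5))                    # toy feature matrix
      true_w = np.array([0.5, -1.0, 2.0, 0.0, 1.5])     # toy "ground truth"
      y = (X @ true_w > 0).astype(int)                  # toy labels
      noisy_w = true_w + rng.normal(scale=0.1, size=5)  # toy "trained model"
      preds = (X @ noisy_w > 0).astype(int)
      return float((preds == y).mean())

  # Reproducibility check: the same seed must yield the same accuracy.
  assert evaluate_accuracy(42) == evaluate_accuracy(42)
  print(f"accuracy = {evaluate_accuracy(42):.3f} (confirmed on re-run)")

Pinning seeds (and, in real systems, recording data and model versions) is what allows a reported accuracy figure to be independently confirmed, which is the point of the requirement.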

The guidelines go on to recommend detailed technical and non-technical methods for implementing trustworthy AI.

Assessment list

The guidelines expand on the 10 requirements above, suggesting questions to be asked during the design, development and deployment phases in order to ensure that the technology qualifies as trustworthy AI.

Next steps

The guidelines are currently open for consultation, and feedback and comments can be submitted until 18 January 2019.

The final version of the guidelines is expected to be published in March 2019. Once the guidelines are finalised, stakeholders will be able to endorse them formally and sign up to them on a voluntary basis.

Opinion

These guidelines represent a first step by the Commission towards controlling and regulating technology, a topic that was discussed widely throughout 2018, particularly in light of the strengthening of personal data regulation through the introduction of the General Data Protection Regulation.

In the United Kingdom, the consultation will be watched closely by the House of Lords Select Committee on AI, which published a report on the subject in April 2018, and by the UK government’s AI Council.