On 20 March 2019, the UK Centre for Data Ethics and Innovation (CDEI) released its 2019/20 Work Programme and two-year strategy to enhance the benefits of data and Artificial Intelligence (AI) for UK society and the economy.

What’s in scope?

CDEI is an advisory body established by the UK government and led by an independent board of experts. Over the next two years, CDEI plans to shape a policy, regulatory and cultural environment in the UK that promotes constructive and ethical innovation in data- and AI-driven technology. CDEI is well placed to draw on the know-how and expertise of the UK, a country recognised as a global leader in data-enabled technology.

Under its two-year strategy, CDEI’s main objectives are to:

a. Promote policy and governance that enables data-driven technology to improve people’s lives;

b. Ensure the public’s views inform the governance of data-driven technology;

c. Ensure the governance of data-driven technology can safely support its rapid development (this means not only addressing issues from recent years but also continuing to be alert to emerging problems); and

d. Foster effective partnerships between civil society, government, research organisations and industry players.

On 18 December 2018, the European Commission published draft ethics guidelines for trustworthy AI. The guidelines are voluntary and constitute a working document that will be updated over time; they have been opened to a stakeholder consultation process.

The guidelines recognise that there are benefits to be gained from AI, but that humankind can only reap those benefits if we can trust the technology (in other words, if the AI itself is trustworthy). An overarching principle in the guidelines is that AI should be human-centric, with the aim of increasing human well-being.

Trustworthy AI is defined as having two components:

  1. it should respect fundamental rights, ethical principles and societal values – an “ethical purpose”; and
  2. it should be technically robust and reliable.

The guidelines set out a framework for implementing and operating trustworthy AI, aimed at stakeholders who develop, deploy or use AI.

Last month, the European Commission (Commission) announced plans to bolster the future of artificial intelligence (AI) across the bloc. In a paper on ‘Artificial Intelligence for Europe’, the Commission proposed a three-pronged approach to: (i) increase public and private investment in AI; (ii) prepare for socio-economic changes; and (iii) ensure an appropriate ethical and legal framework for AI. This post looks at what AI is and at the Commission’s proposed strategy.

What is AI?

The Commission defines AI as “systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals”. AI can be software-based, in the virtual world (such as image-analysis software, search engines or recognition systems) or embedded in hardware (for example, self-driving cars, Internet of Things applications, and advanced robots).

AI is increasingly prominent in our society and used on a near-daily basis by most people. Many AI technologies use data to improve their performance and guide automated decision-making. The number of technological and commercial AI applications continues to increase, enabling AI to have a transformative effect on society as a whole.

Earlier this year the UK Department for Digital, Culture, Media & Sport published its new Digital Charter. This short document outlines a rolling programme of work designed to make the UK a friendly environment in which to start and grow digital businesses, and a safe place to be online. The charter will be updated as the government’s programme of work changes in response to technological advancements.

The goal of the charter is to establish rules and norms for the online world that can be put into practice.

Digital Charter

The principles outlined in the charter, guiding the government’s work, are:

  • the internet should be free, open and accessible;
  • people should understand the rules that apply to them when they are online;
  • personal data should be respected and used appropriately;
  • protections should be in place to help keep people safe online, especially children;
  • the same rights that people have offline must be protected online; and
  • social and economic benefits brought by new technologies should be fairly shared.
