Amidst growing public attention on artificial intelligence (AI), the UK government recently published its white paper detailing its “pro-innovation” approach to AI. Other developments, showing the UK’s continued focus on this area, are also outlined below.

Evidence supporting the white paper

The Department for Science, Innovation and Technology (DSIT) and the Office for Artificial Intelligence have published evidence outlining the regulatory framework options for AI governance. The option put forward in the white paper is to regulate AI through existing regulators, with members of the Digital Regulation Cooperation Forum (DRCF) supporting the regulatory framework and helping regulators to develop appropriate guidance on key issues such as algorithmic bias, safety and privacy. The DRCF comprises the Competition and Markets Authority, the Information Commissioner’s Office, Ofcom and the Financial Conduct Authority.

White paper

In July 2022, the AI Regulation Policy Paper set out plans for a risk-based, adaptable regulatory framework. The white paper has confirmed that the following five cross-sectoral principles will initially form a non-statutory framework; the UK government may later decide to introduce a statutory duty on regulators, if needed.

  1. Appropriate transparency and explainability – parties directly affected by the use of an AI system should be able to access sufficient information to enforce their rights, including information on how the AI system works and how it makes decisions. Given that the logic and decision-making of AI systems cannot always be meaningfully explained, the level of explainability should be appropriate to the context, including the level of risk.
  2. Safety, security and robustness – regulators may need to consider technical standards, for example addressing testing and data quality.
  3. Accountability and governance – regulators will need to determine who is accountable for compliance with existing regulation and the principles, and provide guidance on how to demonstrate accountability.
  4. Contestability and redress – those affected should be able to contest an AI decision or outcome that is harmful or creates material risk of harm.
  5. Fairness – AI systems should not undermine legal rights, discriminate unfairly against individuals, or create unfair market outcomes.

The UK government is inviting responses to the questions set out in the consultation, which will close on 21 June 2023.

International Technology Strategy

The International Technology Strategy published last month sets out the UK’s ambition to be recognised as a leader in science and technology in Europe by 2030. AI is listed as one of the priority technologies. The UK government advocates for AI that is “trustworthy”, with proportionate controls on sensitive technology and with data used responsibly. Rather than attempting a single definition of AI, the UK government refers to two core characteristics, adaptivity and autonomy, which will be used to guide the scope of the regulatory framework.

ICO’s updated guidance on AI and Data Protection

The ICO has updated its guidance on AI and Data Protection following requests to clarify requirements for fairness in AI. The guidance provides a roadmap to data protection compliance for developers and users of generative AI. The updates include new content on fairness, including how solely automated decision-making is linked to fairness, and on how to ensure transparency in AI.

Just last week, the ICO also published a response to the UK’s AI white paper, welcoming the UK government’s intention to form a regulatory group to issue guidance or oversee a joint regulatory sandbox aimed at providing AI developers with legal certainty.

The UK government is taking a light-touch approach to AI regulation and, unlike the EU, does not currently plan to introduce legislation. In the meantime, organisations developing or using AI that processes personal data should continue to take a data protection by design and by default approach, ensuring that privacy and data protection are considered at the design phase and throughout the AI lifecycle.