In March 2025, the Information Commissioner’s Office (‘ICO’) announced a series of measures to support the UK government’s growth agenda while maintaining strong data protection standards. These measures include a commitment to introduce a statutory code of practice for businesses developing or deploying AI with a focus on data protection safeguards.
The above initiative aligns with the government’s broader objectives set out in the AI Opportunities Action Plan by facilitating the responsible use of data while reducing regulatory uncertainty for businesses seeking to explore AI opportunities. Further details on the AI Opportunities Action Plan are outlined below.
What is the AI Opportunities Action Plan?
The UK government published the AI Opportunities Action Plan (the ‘Plan’) on 13 January 2025. The Plan aims to ensure Britain leads the next phase of AI development, ramps up the adoption of AI nationally and effectively seizes the opportunities that AI will make available.
The ambitious Plan includes 50 recommendations that have been divided into three key areas:
a) Investing in the foundations of AI infrastructure;
b) Promoting the adoption of AI across the economy; and
c) Nurturing national champions so the UK benefits economically from AI advancement.
Further details on some of the key features of the Plan have been outlined below.
What infrastructural recommendations does the Plan include?
To lay the foundations for AI growth, the Plan proposes building AI infrastructure by securing a sufficient supply of compute and establishing AI Growth Zones to facilitate the growth of AI data centres. In February 2025, the government invited regional authorities to submit expressions of interest for these AI Growth Zones, an important step in identifying opportunities and informing the next stage of their development.
The Plan also encourages the government to sign international compute partnerships with global partners to increase the compute capability available for research collaborations. Together, these steps aim to ensure a robust physical infrastructure is in place for building out an AI ecosystem.
What does the Plan suggest for ensuring access to high-quality data?
As well as infrastructural recommendations, the Plan emphasises the need for access to high-quality data to facilitate the growth of AI. For example, the Plan outlines the establishment of a National Data Library to provide a repository of resources for startups and researchers to train their AI models on high-quality data that complies with IP and data protection standards.
What does the Plan outline for the responsible use of data for AI projects?
Another important feature of the Plan is the need for the responsible development of AI. In particular, it suggests that the AI Safety Institute be expanded to explore AI safety and regulation. The Plan also identifies the UK text and data mining regime as in need of reform so that it facilitates AI innovation while giving rights holders control over the use of the content they produce.
How does the Plan facilitate AI innovation?
To accelerate innovation, the government is encouraged to adopt a ‘Scan – Pilot – Scale’ framework. This approach will identify opportunities where AI can meet public sector needs, rapidly test pilot projects, and provide funding for successful initiatives. In April 2025, demonstrating its commitment to this initiative, the UK government announced the development of an AI tool to digitise planning data and replace outdated paper systems. Complementing these efforts, a new UK Sovereign AI unit is proposed to foster public-private partnerships, facilitate joint ventures and investment, and maximise the UK’s stake in frontier AI.
What has been the UK’s historic approach towards AI regulation?
Rather than imposing sweeping legislation, the UK has historically adopted a light-touch approach to AI regulation, relying on regulatory principles, sector-specific best practices and pre-existing legislation. Recent examples include the ICO’s consultation series on how data protection law applies to the use of generative AI. The regulator has also issued an audit outcomes report on the use of AI tools in recruitment, highlighting risks such as bias and the excessive collection of personal information. Similarly, Ofcom has issued guidance on the application of the Online Safety Act to generative AI and chatbots, emphasising the need for companies to carry out risk assessments and implement enhanced protections to prevent children from encountering harmful material.