On 25 March 2024, Ofcom called for evidence for the third phase of its online safety regulations. This call for evidence will culminate in Ofcom’s third consultation paper, which will act as guidance for service providers to ensure compliance with the Online Safety Act (“OSA”). 

The third phase of online regulations introduces further guidance on the extra duties that will arise under the OSA for category 1, 2A, and 2B services (explained here), which could include:


Additional duties

  • Content: protect news publisher content, journalistic content and/or content of democratic importance
  • Terms of use: include certain additional terms of use; specify a policy in the terms of use regarding disclosing information to a deceased child’s parents about their child’s use of the service
  • Advertising: prevent fraudulent advertising
  • Transparency: publish a transparency report
  • Additional features: provide user empowerment features; provide user identity verification

Alongside this, Ofcom has published its advice to the UK Government on the thresholds to be used to decide whether a service will fall into Category 1, 2A or 2B.

Through this call for evidence, Ofcom is inviting industry stakeholders, expert groups, and other organisations to provide evidence that will help inform and shape Ofcom’s approach to the OSA regulations. The call for evidence will close on 20 May 2024, after which Ofcom will publish its third consultation paper in 2025.

Preceding this third consultation paper are two consultation papers that have already been finalised and published by Ofcom. The first paper acts as guidance for user-to-user (“U2U”) and search services on how best to approach their new duties under the OSA. The second paper is specific to service providers of pornographic content.

The proposed measures under the first consultation paper vary based on the size and risk profile of the service. A “large service” is any service with more than 7 million average monthly UK users, which is approximately equivalent to 10% of the UK population. Every other service is a “small service”.

Further, when assessing the risk profile, services are expected to conduct the risk assessments themselves and classify their services into one of the following risk profiles:

  1. Low risk: low risk for all kinds of illegal harm
  2. Specific risk: faces a risk of one or more specific kinds of illegal harm
  3. Multi risk: faces significant risks of multiple kinds of illegal harm

Notably, for large companies that have a multi-risk profile, almost all the proposed measures apply, except those recommended for automated content moderation, enhanced user control, and certain reporting obligations. Online safety regulations are expected to affect more than 100,000 service providers, many of which will be small businesses based in the UK and overseas. Ofcom offers a free self-assessment tool to assess whether these regulations will affect your company. If your organisation is large and sophisticated and requires a tailored approach to ensure compliance with these regulations, we can assist.


Utah’s recent passage of updates to its consumer protection law and the Artificial Intelligence Policy Act (Utah AI Policy Act), which comes into effect on May 1, 2024, could mark an important moment in AI regulation. Notably, the updates to state consumer protection law emphasize holding companies that use generative AI (GenAI)—rather than developers—accountable if they knowingly or intentionally use generative AI in connection with a deceptive act or practice. In other words, a company may not have a defense that: “The AI generated the output that was deceptive, so we are not responsible.” For a violation, a court may issue an injunction, order disgorgement of money received in violation of the statute, and impose a fine of up to $2,500, along with any other relief that the court deems reasonable and necessary.

Laws that hold generative AI users, rather than developers, responsible for the accuracy of AI outputs are sure to increase discussion on AI governance teams about employees’ proper use of generative AI and the ongoing quality of AI outputs to reduce the risk from inaccurate or deceptive outputs. 

There are other noteworthy aspects of the recent updates to Utah law.

  • User Notice Requirements Upon Request: The updated consumer protection law requires a company making a GenAI feature available to users to disclose clearly and conspicuously, upon a user’s request, that the user is interacting with GenAI and not a human.
  • Disclosure Requirements for Regulated Occupations: When a company is performing regulated occupations (i.e., those that require a state license or certification), the updated state law requires that the company prominently disclose when the company is using GenAI as part of that service. The disclosure is to be provided “verbally at the start of an oral exchange or conversation; and through electronic messaging before a written exchange.”
  • Additional Provisions: The Utah AI Policy Act establishes the Office of Artificial Intelligence Policy to potentially regulate AI and foster responsible AI innovation.  It also creates the Artificial Intelligence Learning Laboratory Program aimed at analyzing AI’s risks and benefits to inform regulatory activity and encourage development of AI technology in Utah. Additionally, the law permits companies to apply for temporary waivers of certain regulatory requirements during AI pilot testing to facilitate innovation while ensuring regulatory oversight.

Utah’s updates to its consumer protection laws highlight some of the issues companies may face as they adopt GenAI. To reduce risk, companies will want to ensure their AI governance program includes ongoing monitoring of employee use of GenAI and the quality of the outputs from GenAI. While it may not be surprising that companies are responsible for how they use GenAI outputs, the constant innovation in AI technology and the difficulty of ensuring GenAI outputs are appropriate will make compliance a challenge.

In the two years since the Supreme Court’s decision in Dobbs v. Jackson Women’s Health, various state legislatures and courts have tried to define the new post-Roe landscape. This effort includes new laws and amendments to existing privacy laws to protect consumer health data. You can find out more in our blog post from Health Industry Washington Watch.

Additionally, Reed Smith’s San Francisco office will be hosting a comprehensive hybrid CLE event on April 10, where Sarah Bruno, James Hennessey and Monique Bhargava will provide an overview of recent legislation from Washington state and California, as well as what to expect going forward with regard to health data privacy.

Reed Smith will continue to follow developments in health care privacy laws. If you have any questions, please reach out to the authors or to the health care lawyers at Reed Smith.

The German Federal Ministry for Digital and Transport (Bundesministerium für Digitales und Verkehr – BMDV) has drawn up a new draft bill which would introduce:

  • (i) a statutory obligation for providers of number-independent interpersonal communication services (e.g. instant messaging services) to allow their users to use end-to-end encryption (“E2EE”), and (ii) a statutory transparency obligation for such providers to inform their users accordingly; and
  • a statutory transparency obligation for providers of certain cloud services to inform their users about how to use continuous and secure encryption (“Draft Bill”).

The Draft Bill (status 7 February 2024), which does not have any basis in EU law, is available here (German content).

Continue Reading Germany’s government plans to introduce a statutory ‘right to encryption’ for users of messaging and cloud storage services

On 19 December 2023, the Information Commissioner’s Office (ICO) published its updated guide on UK Binding Corporate Rules (BCRs), introducing the UK BCR Addendum for controllers and processors (the Addendum). The Addendum will enable organisations with existing EU BCRs to extend them to cover data transfers from the UK.

Continue Reading Introduction of a UK BCR Addendum

With cybersecurity becoming a board-level issue, compliance officers, lawyers, board members, and business leaders are looking for official guidance or recommendations on cybersecurity measures to protect business, customers, and the wider economy.

Continue Reading Cybersecurity preparedness: What guidance to follow?

On Monday, January 29th, we celebrated Global Data Protection Day by delivering an exciting webinar highlighting the latest data protection laws and bills that might influence your business.

Please see below our webinar recording featuring our data protection specialists, and learn tips and tricks for successfully navigating the evolving landscape of data protection.

Download our presentation slides here.

On 26 November 2023, the US Cybersecurity and Infrastructure Security Agency (CISA), together with the UK’s National Cyber Security Centre (NCSC), published joint ‘Guidelines for Secure AI System Development’ (the Guidelines).

The Guidelines were formulated by CISA and the NCSC, in cooperation with 21 other international agencies and ministries, as well as industry experts.

Continue Reading UK & US cybersecurity agencies release new ‘Guidelines for Secure AI System Development’

On 17 October 2023, the First-Tier Tribunal of the General Regulatory Chamber – Information Rights (the Tribunal) handed down its decision in Clearview AI Inc v The Information Commissioner [2023] UKFTT 819, overturning the £7.5 million fine levied on Clearview AI Inc. (Clearview) by the ICO last year.

Continue Reading Clearview AI Inc. successfully appeals £7.5 million fine from the ICO, but the ICO is fighting back!

On 26 October 2023, the UK adopted the Online Safety Act 2023, which introduces new obligations for online platforms to improve user safety online by ensuring that illegal and harmful content is monitored and removed. We previously compared the Act in its draft form with the EU Digital Services Act here and will be updating the table soon.

Continue Reading The UK Online Harms Bill becomes the Online Safety Act