In a rapidly evolving technological landscape, the National Institute of Standards and Technology (NIST) has released crucial guidance on managing risks associated with generative AI (GenAI). Our latest client alert delves into the newly published GenAI Profile (NIST AI 600-1), which outlines 12 high-level risks and organizes actionable mitigation strategies into four categories: govern, map, measure, and manage. From addressing data privacy concerns to ensuring compliance with future AI laws, the alert provides insights to help organizations navigate the complexities of GenAI responsibly. Read the full alert to understand how aligning with NIST’s recommendations can fortify your AI governance program and prepare you for upcoming regulatory changes.
European Commission plans a public consultation for the new standard contractual clauses for international data transfers coming up in 2025
The European Commission (the “Commission”) announced its plans to open a public consultation on the new Standard Contractual Clauses (“SCCs”) in the fourth quarter of 2024. The new SCCs will address the scenario where the data importer (controller or processor) is based outside of the European Economic Area (“EEA”) but is directly subject to the General Data Protection Regulation (“GDPR”) due to Art. 3(2) – offering goods and services to individuals in the EU or monitoring their behaviour within the EU.
Background
When the Commission adopted the 4 June 2021 SCCs for the transfers of personal data to third countries, the scope of these SCCs was limited to transfers from a data exporter subject to the GDPR to a data importer (controller or processor) who was not subject to the GDPR. The 2021 SCCs were therefore not designed for situations where both the data importer and exporter were directly subject to the GDPR. In its Guidelines 05/2021, the European Data Protection Board (“EDPB”) called for the Commission to prepare another set of SCCs to cover the gap.
Next steps
If the Commission follows the EDPB’s commentary in Guidelines 05/2021, we expect the new SCCs to focus on the risks associated with the data importer being located in a third country, addressing possible conflicting national laws and government access in that country. The obligations under the new SCCs may incorporate the GDPR principles, an information notice to data subjects about the transfers and the risks associated with transfers to a third country, detailed security measures for transfers, notification of data breaches, and provisions governing onward transfers.
What does this mean?
Recent EU supervisory authority decisions imposing significant financial penalties on organisations for failing to use appropriate safeguards when transferring personal data to third countries have shown that regulators are not afraid of rigorously enforcing compliance. Organisations cannot rely on the legal uncertainty that has persisted since the issuance of the 2021 SCCs to explain why they transfer personal data to third countries without adequate protections in place.
Until the new SCCs are published, organisations should ensure they carefully assess their data transfers, put in place appropriate safeguards either under the most-up-to-date version of the SCCs or use other transfer mechanisms that the GDPR provides for, and complete any transfer impact assessments, where necessary.
A recent decision by a data protection regulator confirms that the derogations under Art. 49 GDPR may be relied upon only on an exceptional basis. First, to rely on a derogation, transfers must not be repetitive. Further, when relying on a transfer for the performance of a contract with a data subject (Art. 49(1)(b)) or for the conclusion or performance of a contract concluded in the interest of the data subject (Art. 49(1)(c)), the exporter needs to ensure the necessity requirement is met, i.e. (1) the main purpose of the contract could not be achieved without the transfer, and (2) there are no less intrusive alternatives available.
New Target in Sight: FTC Zeroes in on Algorithmic Pricing Models Based on Personal Data
Witnessing the race to harness the power of Artificial Intelligence (“AI”) by markets and businesses, the Federal Trade Commission (“FTC”) recently issued a warning over the emerging technology and its ever-widening use cases. Citing its authority under Section 6(b) of the FTC Act, the Commissioners voted 5-0 on July 19 in favor of issuing investigative orders to eight companies regarding the use of consumer data to set individualized prices for products and services, which the Commission refers to as “surveillance pricing.” In announcing the move, FTC Chair Lina Khan claimed that “the FTC’s inquiry will shed light on [the] shadowy ecosystem of pricing middlemen.”
The agency’s inquiry focuses on four areas:
- Types of products and services being offered: The types of products and services that each company has produced, developed, or licensed to a third party, as well as details about the technical implementation and current and intended uses of this technology which may facilitate pricing decisions;
- Data collection and inputs: Information on the data sources used for each product or service, including the data collection methods for each data source, the platforms and methods that were used to collect such data, and whether that data is collected by other parties (such as other companies or other third parties);
- Customer and sales information: Information about whom the products and services were offered to and what those customers planned to do with those products or services; and
- Impacts on consumers and prices: Information on the potential impact of these products and services on consumers including the prices they pay.
Price differentiation and “dynamic pricing” – the practice of offering different pricing to different segments of consumers – has been around for a long time. In a blog post accompanying its announcement of the orders, the FTC explains its new inquiry into this practice as a response, in part, to advancements in machine learning technology that make it easier to collect and process large volumes of personal data in service of algorithmic pricing models. Although the orders are framed as requests for information – with none of the companies accused of any wrongdoing – they serve as another reminder of the Commission’s increased focus on artificial intelligence and algorithmic decision making. This focus increasingly occurs at the intersection of the agency’s bureaus of competition and consumer protection. That the Commission vote was unanimous suggests a strong interest in studying the issue among the Commissioners.
In addition to concerns around AI, the announcement may suggest a further intent by the Commission to revisit the law of price differentiation. The FTC’s characterization of the issues could be read to suggest a view that price differentiation is not presumptively legal if it is predicated on businesses gathering information to determine the prices set in a given transaction.
Section 6(b) findings are typically confidential but may culminate in a report of findings and recommendations for policymakers and other stakeholders. Considering the speed at which AI-enabled business practices continue to emerge and evolve – coupled with the Commission’s clear desire to keep up – expect the FTC to prioritize this study going forward.
German Federal Supreme Court rules on “climate neutral” advertising claims
According to the German Federal Supreme Court (Bundesgerichtshof – “BGH”), companies must substantiate “climate neutral” advertising claims: Where such advertising claims lack sufficient substantiation in direct proximity to the claim, they will likely be considered misleading and, therefore, in breach of the statutory requirements of the German Act against Unfair Commercial Practices (Gesetz gegen den unlauteren Wettbewerb – UWG).
Background of the case
A leading German manufacturer of sweets (“Advertiser”) advertised in a magazine that all its products were produced in a “climate neutral” manner. The Advertiser’s manufacturing process was, in fact, not CO2 neutral. To reduce its CO2 footprint, the Advertiser supported climate protection projects carried out by a third party. German competition watchdog Wettbewerbszentrale considered the advertisement misleading and initiated legal action against the Advertiser.
The BGH’s decision
In its third-instance judgment of 27 June 2024, case no. I ZR 98/23, the BGH set strict substantiation standards for “climate neutral” claims. The BGH’s key considerations are summarised in its press release; the fully reasoned judgment has not yet been published. The BGH ruled that the particular advertising claim was misleading within the meaning of section 5(1) UWG and, therefore, prohibited.
In the BGH’s view, the advertising claim “climate neutral” is ambiguous as it can mean (i) reduction of CO2 emissions or (ii) offsetting of CO2 emissions. According to the BGH, reducing CO2 emissions on the one hand and offsetting CO2 emissions on the other cannot be considered equally suitable means for achieving climate neutrality. Rather, reducing CO2 emissions takes precedence over offsetting CO2 emissions. According to the BGH, vague environmental claims such as “climate neutral” may be legally permissible only if the specific meaning of the claim is explained as part of the advertising itself. By contrast, in the BGH’s view, it is not sufficient to refer to information on an external website, including where such external website can be accessed through a QR code displayed in close proximity to the advertising claim.
The reason for this strict view is that environmental claims – as with health claims – entail an increased risk of misleading consumers. Accordingly, there is a greater need to inform the target audience about the specific meaning of the claim.
The BGH’s press release suggests that the judgment does not impose a general ban on “climate neutral” claims. Nor does the BGH categorically prevent advertisers from supporting environmental claims with offsetting measures, such as third-party climate protection projects. However, it follows from the BGH judgment that advertisers must act diligently when making environmental claims. In particular, where the climate-friendly effects of the advertised products are achieved (only) by implementing offsetting measures, sufficient explanatory substantiation must be included “in the advertisement itself”. The press release does not reveal whether and how the judgment provides any guidance on (i) what standards advertisers must meet to comply with the requirement to substantiate their “climate neutral” claim “in the advertisement itself” and (ii) potential exemptions from this strict requirement. These aspects will be of particular relevance where the advertising is made online where easily accessible substantiation can be provided via hyperlinking, overlays and other technical means. Therefore, the fully reasoned judgment must be reviewed once published.
Interplay with upcoming EU legislation
In light of recent developments on the EU level, the BGH’s ruling appears to be relevant only for a transitional period ending 27 March 2026. The background is that Directive (EU) 2024/825 empowering consumers for the green transition through better protection against unfair practices and through better information (“Directive”) needs to be transposed into the national laws of EU member states by this date. The Directive regulates, among other topics, advertising claims, based on the offsetting of greenhouse gas emissions, that a product has a neutral, reduced or positive impact on the environment in terms of greenhouse gas emissions. Such advertising claims will be prohibited under the Directive as misleading in all circumstances. This is a key difference compared to the BGH judgment.
Claims that fall under the Directive include “climate neutral”, “CO2 neutral certified”, “carbon positive”, “climate net zero”, “climate compensated”, “reduced climate impact” and “limited CO2 footprint”. Such claims should only be allowed where they are not based on the offsetting of greenhouse gas emissions outside the product’s value chain but are instead based on the actual lifecycle impact of the product in question, as the former and the latter are not equivalent (Directive, Recital 12). Finally, it needs to be noted that the EU legislator emphasised that “such a prohibition should not prevent companies from advertising their investments in environmental initiatives, including carbon credit projects, as long as they provide such information in a way that is not misleading and that complies with the requirements laid down in Union law” (Directive, Recital 12). Clearly, it will be a challenge in practice to identify what is not misleading in this particular context. In particular, the increased substantiation requirements under the proposed EU Green Claims Directive will need to be taken into account. For further information on the proposed EU Green Claims Directive, please see the Reed Smith in-depth article of 19 April 2023, “Greenwashing – EU proposes strict requirements for environmental claims: key points on the Green Claims Directive”.
Personal Jurisdiction Doctrine Shows Teeth as First Circuit Dismisses Class Action Wiretapping Claim
The First Circuit Court of Appeals has affirmed that specific personal jurisdiction must be based on a defendant’s intentional conduct. In affirming the dismissal of a consumer class action that alleged “wiretapping” claims based on ordinary website activity, the federal appeals court’s decision reflects growing judicial skepticism toward the proliferation of class action claims applying old statutes to ubiquitous Internet technologies.
The Recent Trend of Electronic Wiretapping Lawsuits
Rosenthal v. Bloomingdales.com, LLC exemplifies the flood of statutory wiretapping claims inundating website operators and federal courts these days. A website visitor claimed that when he navigated to the defendant retailer’s website, it recorded his site visit and shared that information with analytics vendors. The plaintiff argued that this activity represents electronic wiretapping—a criminal offense under state and federal law in most jurisdictions. Such statutory claims have gained in popularity in recent years because they authorize plaintiffs to seek liquidated damages without proving measurable economic harm.
This and similar recent claims seek to contort decades-old “wiretapping” statutes and other laws to impose serious criminal liability on thousands of businesses who use ordinary website analytics tools. Website operators have strongly objected to these suits. And in the last two years, skeptical courts have begun to dismiss these claims on a number of grounds.
Background
In this case, the U.S. District Court for the District of Massachusetts dismissed the plaintiff’s lawsuit without addressing the statutory wiretapping issue. Instead, it ruled that it did not have personal jurisdiction in Massachusetts over an out-of-state retail website operator whose related conduct did not target the state. The court held that in order for it to exercise specific jurisdiction over the retailer, the retailer had to voluntarily conduct activities in the state, and the lawsuit had to arise from those activities. Here, the allegedly harmful activities were the retailer’s incorporation of vendor-provided analytics tools into its website—an action that was not intentionally directed at Massachusetts. The district court distinguished between the plaintiff’s intentional actions in accessing the website while in Massachusetts and the retailer’s maintenance of the site and related vendor contracts. Thus, it found no “demonstrable nexus” between the plaintiff’s claims and the retailer’s relevant contacts with Massachusetts. The Court of Appeals affirmed. Even though the retailer did business in the Bay State—both online and in stores—the court observed that such general contacts were unrelated to the plaintiff’s specific claims. The retailer’s website was available nationwide and configured in a way that treated all visitors the same, wherever they were from. It did not intentionally target users in Massachusetts. Had the plaintiff shown otherwise or identified a connection to Massachusetts beyond the plaintiff’s mere browsing of the website, the result might have been different.
Applying Long-Established Jurisdictional Principles to Novel Theories of Liability
The Rosenthal decision comes closely on the heels of the U.S. Court of Appeals for the Ninth Circuit’s recent decision in Briskin v. Shopify, Inc., which affirmed a similar dismissal. Both cases demonstrate that claims premised on fleeting Internet contacts—as opposed to substantial, intentional local contacts—face an uphill climb. Personal jurisdiction doctrine is rooted in basic notions of due process and fairness: where a defendant is not at home, it is unfair to drag it into court in a place at which it did not target specific conduct. By protecting businesses from being sued in states where they did not direct specific, intentional conduct, courts may curtail the class action plaintiffs’ bar’s efforts to funnel cases into courts they expect to be more receptive to their theories.
Online Safety Act – Keeping you updated
On 25 March 2024, Ofcom called for evidence for the third phase of its online safety regulations. This call for evidence will culminate in Ofcom’s third consultation paper, which will act as guidance for service providers to ensure compliance with the Online Safety Act (“OSA”).
The third phase of online regulations introduces further guidance on the extra duties that will arise under the OSA for category 1, 2A, and 2B services (explained here), which could include:
| Additional Duties | |
| --- | --- |
| Content | Protect news publisher content, journalistic content and/or content of democratic importance |
| Terms of use | Include certain additional terms of use |
| | Specify a policy in the terms of use regarding disclosing information to a deceased child’s parents about their child’s use of the service |
| Advertising | Prevent fraudulent advertising |
| Transparency | Publish a transparency report |
| Additional features | Provide user empowerment features |
| | Provide user identity verification |
Along with this, Ofcom has also published its advice to the UK Government on the thresholds to be used to decide whether a service will fall into Category 1, 2A or 2B.
Through this call for evidence, Ofcom is inviting industry stakeholders, expert groups, and other organisations to provide evidence that will help inform and shape Ofcom’s approach to the OSA regulations. The call for evidence will close on 20 May 2024, after which Ofcom will publish its third consultation paper in 2025.
Preceding this third consultation paper are two consultation papers that have already been finalised and published by Ofcom. The first paper acts as guidance for user-to-user (“U2U”) and search services on how best to approach their new duties under the OSA. The second paper is specific to service providers of pornographic content.
The proposed measures under the first consultation paper vary based on the size and risk profile of the service. A “large service” is any service with an average user base greater than 7 million per month in the UK, which is approximately equivalent to 10% of the UK population. Every other service is a “small service”.
Further, when assessing the risk profile, services are expected to conduct the risk assessments themselves, and classify their services into one of the following criteria:
- Low risk: low risk for all kinds of illegal harm
- Specific risk: faces the risk of a specific kind of harm/harms
- Multi risk: faces significant risks of multiple kinds of illegal harm
Notably, for large companies that have a multi-risk profile, almost all the proposed measures apply, except those recommended for automated content moderation, enhanced user control, and certain reporting obligations. Online safety regulations are expected to affect more than 100,000 service providers, many of which will be small businesses based in the UK and overseas. Ofcom offers a free self-assessment tool to assess if these regulations will affect your company. If your organisation is large and sophisticated and requires a tailored approach to ensure compliance with these regulations, we can assist with this.
Utah’s GenAI Law Holds AI Users Accountable for Deceptive Outputs
Utah’s recent passage of updates to its consumer protection law and the Artificial Intelligence Policy Act (Utah AI Policy Act), which comes into effect on May 1, 2024, could mark an important moment in AI regulation. Notably, the updates to state consumer protection law emphasize holding companies that use generative AI (GenAI)—rather than developers—accountable if they knowingly or intentionally use generative AI in connection with a deceptive act or practice. In other words, a company may not have a defense that: “The AI generated the output that was deceptive, so we are not responsible.” For a violation, a court may issue an injunction, order disgorgement of money received in violation of the section, and impose a fine of up to $2,500, along with any other relief that the court deems reasonable and necessary.
Laws that hold generative AI users, rather than developers, responsible for the accuracy of AI outputs are sure to increase discussion on AI governance teams about employees’ proper use of generative AI and the ongoing quality of AI outputs to reduce the risk from inaccurate or deceptive outputs.
There are other noteworthy aspects of the recent updates to Utah law.
- User Notice Requirements Upon Request: The updated consumer protection law requires a company making a GenAI feature available to users to make a clear and conspicuous disclosure to a user upon request that explains the person is interacting with GenAI and not a human.
- Disclosure Requirements for Regulated Occupations: When a company is performing regulated occupations (i.e., those that require a state license or certification), the updated state law requires that the company prominently disclose when the company is using GenAI as part of that service. The disclosure is to be provided “verbally at the start of an oral exchange or conversation; and through electronic messaging before a written exchange.”
- Additional Provisions: The Utah AI Policy Act establishes the Office of Artificial Intelligence Policy to potentially regulate AI and foster responsible AI innovation. It also creates the Artificial Intelligence Learning Laboratory Program aimed at analyzing AI’s risks and benefits to inform regulatory activity and encourage development of AI technology in Utah. Additionally, the law permits companies to apply for temporary waivers of certain regulatory requirements during AI pilot testing to facilitate innovation while ensuring regulatory oversight.
Utah’s updates to its consumer protection laws highlight some of the issues companies may face as they adopt GenAI. To reduce risk, companies will want to ensure their AI governance program includes ongoing monitoring of employee use of GenAI and the quality of the outputs from GenAI. While it may not be surprising that companies are responsible for how they use GenAI outputs, the constant innovation in AI technology and the difficulty in ensuring GenAI outputs are appropriate will be a compliance challenge.
The impact of states’ legislative reaction to Dobbs on consumer health data privacy
Although it has been two years since the Supreme Court’s decision in Dobbs v. Jackson Women’s Health, various state legislatures and courts have tried to define the new post-Roe landscape. This effort includes new laws and amendments to existing privacy laws to protect consumer health data. You can find out more on our blog post from Health Industry Washington Watch.
Additionally, Reed Smith’s San Francisco office will be hosting a comprehensive hybrid-CLE event on April 10, where Sarah Bruno, James Hennessey and Monique Bhargava will provide an overview of recent legislation from Washington state and California as well as what to expect going forward with regard to health data privacy.
Reed Smith will continue to follow developments in health care privacy laws. If you have any questions, please reach out to the authors or to the health care lawyers at Reed Smith.
Germany’s government plans to introduce a statutory ‘right to encryption’ for users of messaging and cloud storage services
The German Federal Ministry for Digital and Transport (Bundesministerium für Digitales und Verkehr – BMDV) has drawn up a new draft bill which would introduce:
- (i) a statutory obligation for providers of number-independent interpersonal communication services (e.g. instant messaging services) to allow their users to use end-to-end encryption (“E2EE”), and (ii) a statutory transparency obligation for such providers to inform their users accordingly; and
- a statutory transparency obligation for providers of certain cloud services to inform their users about how to use continuous and secure encryption (“Draft Bill”).
The Draft Bill (status 7 February 2024), which does not have any basis in EU law, is available here (German content).
Introduction of a UK BCR Addendum
On 19 December 2023, the Information Commissioner’s Office (ICO) published its updated guide on UK Binding Corporate Rules (BCRs), introducing the UK BCR Addendum for controllers and processors (the Addendum). It will enable organisations with existing EU BCRs to include data transfers from the UK.