Artificial intelligence (“AI”) is everywhere, and it continues to be a featured topic at many industry conferences. Sometimes this relatively new topic can feel a bit stale. However, a very interesting AI paper was recently published:  Artificial Intelligence, Scientific Discovery, and Product Innovation by Aidan Toner-Rodgers (November 6, 2024). While several previous papers have shown that AI may be useful in drug discovery (e.g., Merchant 2023), this is perhaps the first paper to provide “real world,” causal evidence that AI improves outcomes for scientific research and development.

The high-level takeaway from the paper is that “AI-assisted researchers discover 44% more materials, resulting in a 39% increase in patent filings and a 17% rise in downstream product innovation. These [discovered] compounds possess more novel chemical structures and lead to more radical inventions.” Id. at Cover Page. Moreover, highlighting the synergy between human skill and AI, the benefit of AI was more pronounced for top researchers at the firm – whose output nearly doubled – while the bottom third of scientists saw little benefit from AI. Id.

This human-AI synergy may be crucial for companies seeking to patent an AI-discovered drug molecule. Indeed, courts have held that “only a natural person can be an inventor, so AI cannot be.”  Thaler v. Vidal, 43 F.4th 1207, 1213 (Fed. Cir. 2022), cert. denied, 143 S. Ct. 1783 (2023). Moreover, according to the Inventorship Guidance for AI-Assisted Inventions issued by the U.S. Patent and Trademark Office, “[t]he patent system is designed to encourage human ingenuity.” See Federal Register, Vol. 89, No. 30, at 10046, available at https://www.federalregister.gov/documents/2024/02/13/2024-02623/inventorship-guidance-for-ai-assisted-inventions. Accordingly, “inventorship analysis should focus on human contributions, as patents function to incentivize and reward human ingenuity. Patent protection may be sought for inventions for which a natural person provided a significant contribution to the invention[.]” Id. at 10044.

But what constitutes a “significant contribution”?  The closest analogy is likely the scenario in which multiple natural persons are listed as inventors on a single patent application. According to the Guidance, courts must evaluate the factors articulated in Pannu v. Iolab Corp., 155 F.3d 1344, 1351 (Fed. Cir. 1998), including whether the individual did “(1) contribute in some significant manner to the conception or reduction to practice of the invention; (2) make a contribution to the claimed invention that is not insignificant in quality, when that contribution is measured against the dimension of the full invention, and (3) do more than merely explain to the real inventors well-known concepts and/or the current state of the art.” Federal Register, Vol. 89, No. 30, at 10047. Failing to meet any one of those factors “precludes that person from being named an inventor.” Id.

Further, according to the Guidance:

[A] natural person must have significantly contributed to each claim in a patent application or patent. In the event of a single person using an AI system to create an invention, that single person must make a significant contribution to every claim in the patent or patent application. Inventorship is improper in any patent or patent application that includes a claim in which at least one natural person did not significantly contribute to the claimed invention.

Id. at 10048.

Accordingly, companies should consider documenting human involvement in all phases of their materials discovery process. The goal, of course, would be to show that a human inventor “significantly” contributed to each claim in a patent application. On that question, the Guidance states that “a person who takes the output of an AI system and makes a significant contribution to the output to create an invention may be a proper inventor.” Id. at 10048. Just as critically, the Guidance provides examples of human contributions that are likely not significant enough to meet the Pannu test, including: (1) recognizing a problem and having a general goal or research plan (which is then presented to an AI tool); (2) reducing an AI-created output to practice; and (3) “overseeing” an AI system that eventually creates an output. Id. at 10048-49.

The full paper is linked for your reference, and the highlights are presented below. If you have questions about this study or AI more generally, please do not hesitate to reach out.

  • Study Design:  The author randomized the use of an AI tool for materials discovery in a cohort of 1,018 scientists at the R&D lab of a large U.S. firm, which specializes in materials science (specifically healthcare, optics, and industrial manufacturing). Toner-Rodgers at p. 1.
  • AI Tool:  The AI tool selected for the study was a set of graph neural networks (GNNs) trained on the “composition and characteristics of existing materials,” which then “generat[ed] ‘recipes’ for novel compounds predicted to possess specified properties.” Id. at 1, 9. The GNN architecture represents materials as multidimensional graphs of atoms and bonds, enabling it to learn physical laws and encode large-scale properties. Id. at 9. From there, scientists evaluated the outputs and synthesized the most promising options. Id. at 1. (A simplified, illustrative sketch of this type of architecture appears after this list.)
  • Materials Discovery:  AI-assisted scientists discovered 44% more materials. These compounds possessed superior properties, indicating that the tool also improved quality. The influx of materials led to a 39% increase in patent filings and, several months later, a 17% rise in product prototypes incorporating the new compounds. Accounting for input costs, the AI tool boosted R&D efficiency by 13-15%. Id. at 1. The effects on materials discovery and patenting emerged after 5-6 months, while the impact on product innovation lagged by more than a year. Id. at 16. No data was presented regarding commercialization of a discovered compound.
  • Quality:  AI increased the quality of R&D, as opposed to merely building out currently understood, low-value outputs. For atomic properties, the tool increased average quality by 13%. Id. at 18. AI also led to statistically significant improvements in average quality (9%) and in the proportion of high-quality materials. Id.
  • Novelty:  The AI tool increased the “novelty of discoveries, leading to more creative patents and more innovative products.”  Id. at 20. In the absence of AI, scientists focused primarily on improvements to existing products, with only 13% of prototypes representing new product lines; the AI tool raised that share to 22% in the treatment group, engendering a shift toward more radical innovation. Id. at 18-20.
  • Synergy with Humans:  The most skilled scientists saw the most benefit from AI. The bottom third of researchers saw minimal gains, but the output of top-decile scientists increased by 81%, which suggests that AI and human expertise are complements in the innovation production function. Id. at 22. Top scientists leverage their expertise to identify promising AI suggestions, enabling them to investigate the most viable candidates first. Id. at 21-22. Moreover, some highly skilled scientists can observe features of the materials design problem not captured by the AI tool. Id. at 33. In contrast, other scientists waste significant resources investigating false positives. Id. at 26.
  • Impact on Human Labor:  AI dramatically changes the discovery process. The tool automates a majority of “idea generation” tasks, reallocating scientists to the new task of evaluating model-suggested candidate compounds. Id. at 38. In the absence of AI, researchers devote nearly half their time to conceptualizing potential materials. This falls to less than 16% after the tool’s introduction. Meanwhile, time spent assessing candidate materials increases by 74%. Id. at 2. Time spent in the experimentation phase also rises. Id. at 27.
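
The paper describes the tool only at a high level, but the following minimal sketch (in Python/PyTorch) illustrates the general idea of a message-passing GNN that scores a material represented as a graph of atoms and bonds. Everything here – the class name, feature dimensions, three rounds of message passing, and the single property score – is an assumption chosen for brevity, not the firm’s actual model.

```python
# Hypothetical sketch: a toy message-passing GNN that maps a material,
# represented as a graph of atoms (nodes) and bonds (edges), to a single
# predicted property score. Illustrative only; not the study's actual model.
import torch
import torch.nn as nn


class ToyMaterialsGNN(nn.Module):
    def __init__(self, num_atom_features: int, hidden_dim: int = 64, rounds: int = 3):
        super().__init__()
        self.rounds = rounds
        self.embed = nn.Linear(num_atom_features, hidden_dim)   # initial atom embedding
        self.message = nn.Linear(hidden_dim, hidden_dim)        # message sent along each bond
        self.update = nn.GRUCell(hidden_dim, hidden_dim)        # per-atom state update
        self.readout = nn.Linear(hidden_dim, 1)                 # graph-level property score

    def forward(self, atom_features: torch.Tensor, bonds: torch.Tensor) -> torch.Tensor:
        # atom_features: [num_atoms, num_atom_features]
        # bonds:         [num_bonds, 2] atom-index pairs, treated as undirected
        h = torch.relu(self.embed(atom_features))
        src = torch.cat([bonds[:, 0], bonds[:, 1]])  # both directions of each bond
        dst = torch.cat([bonds[:, 1], bonds[:, 0]])
        for _ in range(self.rounds):
            # Each atom aggregates messages from its bonded neighbours ...
            messages = torch.zeros_like(h)
            messages.index_add_(0, dst, self.message(h[src]))
            # ... and updates its hidden state.
            h = self.update(messages, h)
        # Pool atom states into one vector per material, then predict the property.
        return self.readout(h.mean(dim=0, keepdim=True)).squeeze()


# Toy usage: a three-atom "material" with two bonds and five per-atom features.
model = ToyMaterialsGNN(num_atom_features=5)
atoms = torch.randn(3, 5)                # e.g., element/charge descriptors per atom
bonds = torch.tensor([[0, 1], [1, 2]])   # bonded atom-index pairs
print(model(atoms, bonds).item())        # predicted property score
```

Production systems use far richer atom and bond features, physics-informed architectures, and training on large databases of known compounds; the sketch is only meant to make the “graphs of atoms and bonds” description concrete.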

In sum, AI is labor-replacing when it comes to identifying new materials, but labor-augmenting in the broader process because of the need for human judgment in evaluating potential compounds. Id. at 28. The author noted a slump in workplace satisfaction among scientists, with 82% reporting reduced satisfaction with their work. Id. at 36. Further research is needed on this potential “brain drain,” as this paper highlights the benefits of combining a company’s top human minds with AI.

On 24 October 2024, the UK Department for Science, Innovation and Technology announced new legislation to modernise the UK’s use of data and boost the UK economy.

The focus of the new Data Use and Access Bill (the “new Bill”) is not just on data protection – it covers wider topics, such as the rules for digital identity verification, minor amendments to the Online Safety Act, the National Underground Asset Registers, technical standards for IT services in health care, and others.

The data protection provisions are covered in Part 5, which contains some of the changes to data protection requirements proposed by the previous government in the Data Protection and Digital Information Bill (the “previous Bill”), which failed to pass before the general election. A table below shows which changes were kept from the previous Bill. The new Bill also introduces new measures, such as the following:

  • Data Portability: The Secretary of State will be empowered to introduce provisions requiring data holders to provide customer data to a customer or third party at the customer’s request, reinforcing the data portability right afforded under the UK GDPR.
  • International transfers: The Agreement on Access to Electronic Data for the Purpose of Countering Serious Crime, between the Government of the United Kingdom of Great Britain and Northern Ireland and the Government of the United States of America, was called out as a basis for transfers made in reliance on international law.
  • Automated decision-making: There are further restrictions on automated decision-making involving sensitive processing. Such processing should be allowed only with explicit consent and the decision-making must be authorised or required by law.

The reforms should not pose any risk to the UK’s adequacy decision from the EU, which is due to be reconsidered in 2025. We expect the reforms to pass through the legislative process quickly. Businesses should monitor the Bill’s progress so that they are ready to comply with the reforms.

In a rapidly evolving technological landscape, the National Institute of Standards and Technology (NIST) has released crucial guidance on managing risks associated with generative AI (GenAI). Our latest client alert delves into the newly published GenAI Profile (NIST AI 600-1), which outlines 12 potential high-level risks and offers actionable mitigation strategies organized into four functions: govern, map, measure, and manage. From addressing data privacy concerns to ensuring compliance with future AI laws, this alert provides insights for organizations seeking to navigate the complexities of GenAI responsibly. Read the full alert to understand how aligning with NIST’s recommendations can fortify your AI governance program and prepare you for upcoming regulatory changes.

The European Commission (the “Commission”) announced its plans to open a public consultation on the new Standard Contractual Clauses (“SCCs”) in the fourth quarter of 2024. The new SCCs will address the scenario where the data importer (controller or processor) is based outside of the European Economic Area (“EEA”) but is directly subject to the General Data Protection Regulation (“GDPR”) due to Art. 3(2) – offering goods and services to individuals in the EU or monitoring their behaviour within the EU.

Background

When the Commission adopted the 4 June 2021 SCCs for transfers of personal data to third countries, the scope of these SCCs was limited to transfers from a data exporter subject to the GDPR to a data importer (controller or processor) that was not subject to the GDPR. The 2021 SCCs were therefore not designed for situations where both the data importer and the data exporter are directly subject to the GDPR. In its Guidelines 05/2021, the European Data Protection Board (“EDPB”) called for the Commission to prepare another set of SCCs to cover the gap.

Next steps

If the Commission follows the EDPB’s commentary in Guidelines 05/2021, we expect the new SCCs to focus on the risks associated with the data importer being located in a third country, in particular possible conflicting national laws and government access in that country. The obligations under the new SCCs may include the GDPR principles, an information notice to data subjects about the transfers and the risks associated with transfers to a third country, detailed security measures for transfers, notification of data breaches, and provisions governing onward transfers.

What does this mean?

Recent EU supervisory authority decisions imposing significant financial penalties on organisations for failing to use appropriate safeguards when transferring personal data to third countries have shown that regulators are not afraid of rigorously enforcing compliance. Organisations cannot rely on the legal uncertainty that has persisted since the issuance of the 2021 SCCs to justify transferring personal data to third countries without adequate protections in place.

Until the new SCCs are published, organisations should carefully assess their data transfers, put in place appropriate safeguards, whether under the most up-to-date version of the SCCs or another transfer mechanism provided for by the GDPR, and complete transfer impact assessments where necessary.

A recent decision by a data protection regulator confirms that the derogations under Art. 49 GDPR may be relied upon only on an exceptional basis. First, to rely on a derogation, transfers must not be repetitive. Further, when relying on a transfer for the performance of a contract with a data subject (Art. 49(1)(b)) or for the conclusion or performance of a contract concluded in the interest of the data subject (Art. 49(1)(c)), the exporter needs to ensure the necessity requirement is met, i.e., that (1) the main purpose of the contract could not be achieved without the transfer, and (2) there are no less intrusive alternatives available.

Witnessing the race by markets and businesses to harness the power of Artificial Intelligence (“AI”), the Federal Trade Commission (“FTC”) recently issued a warning over the emerging technology and its ever-widening use cases. Citing its authority under Section 6(b) of the FTC Act, the Commissioners voted 5-0 on July 19 in favor of issuing investigative orders to eight companies regarding the use of consumer data to set individualized prices for products and services, which the Commission refers to as “surveillance pricing.” In announcing the move, FTC Chair Lina Khan claimed that “the FTC’s inquiry will shed light on [the] shadowy ecosystem of pricing middlemen.”

The agency’s inquiry focuses on four areas:

  • Types of products and services being offered: The types of products and services that each company has produced, developed, or licensed to a third party, as well as details about the technical implementation and current and intended uses of this technology which may facilitate pricing decisions;
  • Data collection and inputs: Information on the data sources used for each product or service, including the data collection methods for each data source, the platforms and methods that were used to collect such data, and whether that data is collected by other parties (such as other companies or other third parties);
  • Customer and sales information: Information about whom the products and services were offered to and what those customers planned to do with those products or services; and
  • Impacts on consumers and prices: Information on the potential impact of these products and services on consumers including the prices they pay.

Price differentiation and “dynamic pricing” – the practice of offering different prices to different segments of consumers – have been around for a long time. In a blog post accompanying its announcement of the orders, the FTC explains its new inquiry into this practice as a response, in part, to advancements in machine learning technology that make it easier to collect and process large volumes of personal data in service of algorithmic pricing models. Although the orders are framed as requests for information – with none of the companies accused of any wrongdoing – they serve as another reminder of the Commission’s increased focus on artificial intelligence and algorithmic decision making. This focus increasingly occurs at the intersection of the agency’s bureaus of competition and consumer protection. That the Commission vote was unanimous suggests a strong interest among the Commissioners in studying the issue.

In addition to concerns around AI, the announcement may suggest a further intent by the Commission to revisit the law of price differentiation. The FTC’s characterization of the issues could be read to suggest a view that price differentiation is not presumptively legal if it is predicated on businesses gathering information to determine the prices set in a given transaction.

Section 6(b) findings are typically confidential but may culminate in a report of findings and recommendations for policymakers and other stakeholders. Considering the speed with which AI-enabled business practices continue to emerge and evolve – coupled with the Commission’s clear desire to keep up – expect the FTC to prioritize this study going forward.

According to the German Federal Supreme Court (Bundesgerichtshof – “BGH”), companies must substantiate “climate neutral” advertising claims: Where such advertising claims lack sufficient substantiation in direct proximity to the claim, they will likely be considered misleading and, therefore, in breach of the statutory requirements of the German Act against Unfair Commercial Practices (Gesetz gegen den unlauteren Wettbewerb – UWG).

Background of the case

A leading German manufacturer of sweets (“Advertiser”) advertised in a magazine that all of its products were produced in a “climate neutral” manner. The Advertiser’s manufacturing process was, in fact, not CO2 neutral. To reduce its CO2 footprint, the Advertiser supported climate protection projects carried out by a third party. The German competition watchdog Wettbewerbszentrale considered the advertisement misleading and initiated legal action against the Advertiser.

The BGH’s decision

In its third-instance judgment of 27 June 2024, case no. I ZR 98/23, the BGH set strict substantiation standards for “climate neutral” claims. The BGH’s key considerations are summarised in its press release; the fully reasoned judgment has not yet been published. The BGH ruled that the particular advertising claim was misleading within the meaning of section 5(1) UWG and, therefore, prohibited.

In the BGH’s view, the advertising claim “climate neutral” is ambiguous, as it can mean (i) reduction of CO2 emissions or (ii) offsetting of CO2 emissions. According to the BGH, reducing CO2 emissions and offsetting CO2 emissions cannot be considered equally suitable means of achieving climate neutrality; rather, reducing CO2 emissions takes precedence over offsetting them. According to the BGH, vague environmental claims such as “climate neutral” may be legally permissible only if the specific meaning of the claim is explained as part of the advertising itself. By contrast, in the BGH’s view, it is not sufficient to refer to information on an external website, including where such website can be accessed through a QR code displayed in close proximity to the advertising claim.

The reason for this strict view is that environmental claims – as with health claims – entail an increased risk of misleading consumers. Accordingly, there is a greater need to inform the target audience about the specific meaning of the claim.

The BGH’s press release suggests that the judgment does not impose a general ban on “climate neutral” claims. Nor does the BGH categorically prevent advertisers from supporting environmental claims with offsetting measures, such as third-party climate protection projects. However, it follows from the BGH judgment that advertisers must act diligently when making environmental claims. In particular, where the climate-friendly effects of the advertised products are achieved (only) by implementing offsetting measures, sufficient explanatory substantiation must be included “in the advertisement itself”. The press release does not reveal whether and how the judgment provides any guidance on (i) what standards advertisers must meet to comply with the requirement to substantiate their “climate neutral” claim “in the advertisement itself” and (ii) potential exemptions from this strict requirement. These aspects will be of particular relevance for online advertising, where easily accessible substantiation can be provided via hyperlinks, overlays and other technical means. The fully reasoned judgment must therefore be reviewed once published.

Interplay with upcoming EU legislation

In light of recent developments at the EU level, the BGH’s ruling appears to be relevant only for a transitional period ending 27 March 2026. The background is that Directive (EU) 2024/825 empowering consumers for the green transition through better protection against unfair practices and through better information (the “Directive”) must be transposed into the national laws of EU member states by this date. The Directive regulates, among other topics, advertising claims, based on the offsetting of greenhouse gas emissions, that a product has a neutral, reduced or positive impact on the environment in terms of greenhouse gas emissions. Such advertising claims will be prohibited under the Directive as misleading in all circumstances. This is a key difference compared to the BGH judgment.

Claims that fall under the Directive include “climate neutral”, “CO2 neutral certified”, “carbon positive”, “climate net zero”, “climate compensated”, “reduced climate impact” and “limited CO2 footprint”. Such claims should only be allowed where they are not based on the offsetting of greenhouse gas emissions outside the product’s value chain but are instead based on the actual lifecycle impact of the product in question, as the former and the latter are not equivalent (Directive, Recital 12). Finally, it should be noted that the EU legislator emphasised that “such a prohibition should not prevent companies from advertising their investments in environmental initiatives, including carbon credit projects, as long as they provide such information in a way that is not misleading and that complies with the requirements laid down in Union law” (Directive, Recital 12). Clearly, it will be a challenge in practice to identify what is not misleading in this particular context. In particular, the increased substantiation requirements under the proposed EU Green Claims Directive will need to be taken into account. For further information on the proposed EU Green Claims Directive, please see the Reed Smith in-depth article of 19 April 2023, “Greenwashing – EU proposes strict requirements for environmental claims: key points on the Green Claims Directive”.

The First Circuit Court of Appeals has affirmed that specific personal jurisdiction must be based on a defendant’s intentional conduct. In affirming the dismissal of a consumer class action alleging “wiretapping” claims based on ordinary website activity, the federal appeals court’s decision reflects growing judicial skepticism toward the proliferation of class action claims applying old statutes to ubiquitous Internet technologies.

The Recent Trend of Electronic Wiretapping Lawsuits

Rosenthal v. Bloomingdales.com, LLC exemplifies the flood of statutory wiretapping claims inundating website operators and federal courts these days. A website visitor claimed that when he navigated to the defendant retailer’s website, it recorded his site visit and shared that information with analytics vendors. The plaintiff argued that this activity represents electronic wiretapping—a criminal offense under state and federal law in most jurisdictions. Such statutory claims have gained in popularity in recent years because they authorize plaintiffs to seek liquidated damages without proving measurable economic harm.

This and similar recent claims seek to contort decades-old “wiretapping” statutes and other laws to impose serious criminal liability on thousands of businesses that use ordinary website analytics tools. Website operators have strongly objected to these suits. And in the last two years, skeptical courts have begun to dismiss these claims on a number of grounds.

Background

In this case, the U.S. District Court for the District of Massachusetts dismissed the plaintiff’s lawsuit without addressing the statutory wiretapping issue. Instead, it ruled that it did not have personal jurisdiction in Massachusetts over an out-of-state retail website operator whose related conduct did not target the state. The court held that, in order for it to exercise specific jurisdiction over the retailer, the retailer had to voluntarily conduct activities in the state, and the lawsuit had to arise from those activities. Here, the allegedly harmful activity was the retailer’s incorporation of vendor-provided analytics tools into its website – an action that was not intentionally directed at Massachusetts. The district court distinguished between the plaintiff’s intentional actions in accessing the website while in Massachusetts and the retailer’s maintenance of the site and related vendor contracts. Thus, it found no “demonstrable nexus” between the plaintiff’s claims and the retailer’s relevant contacts with Massachusetts.

The Court of Appeals affirmed. Even though the retailer did business in the Bay State – both online and in stores – the court observed that such general contacts were unrelated to the plaintiff’s specific claims. The retailer’s website was available nationwide and configured to treat all visitors the same, wherever they were from. It did not intentionally target users in Massachusetts. Had the plaintiff shown otherwise, or identified a connection to Massachusetts beyond his mere browsing of the website, the result might have been different.

Applying Long-Established Jurisdictional Principles to Novel Theories of Liability

The Rosenthal decision comes closely on the heels of the U.S. Court of Appeals for the Ninth Circuit’s recent decision in Briskin v. Shopify, Inc., which affirmed a similar dismissal. Both cases demonstrate that claims premised on fleeting Internet contacts – as opposed to substantial, intentional local contacts – face an uphill climb. Personal jurisdiction doctrine is rooted in basic notions of due process and fairness: where a defendant is not at home, it is unfair to drag it into court in a forum at which it did not direct specific conduct. By protecting businesses from being sued in states where they did not direct specific, intentional conduct, courts may curtail the class action plaintiffs’ bar’s efforts to funnel cases into courts they expect to be more receptive to their theories.

On 25 March 2024, Ofcom called for evidence for the third phase of its online safety regulations. This call for evidence will culminate in Ofcom’s third consultation paper, which will act as guidance for service providers to ensure compliance with the Online Safety Act (“OSA”). 

The third phase of online safety regulations introduces further guidance on the additional duties that will arise under the OSA for Category 1, 2A, and 2B services (explained here), which could include:


  • Content: Protect news publisher content, journalistic content and/or content of democratic importance
  • Terms of use: Include certain additional terms of use; specify a policy in the terms of use regarding disclosing information to a deceased child’s parents about their child’s use of the service
  • Advertising: Prevent fraudulent advertising
  • Transparency: Publish a transparency report
  • Additional features: Provide user empowerment features; provide user identity verification

Along with this, Ofcom has also published its advice to the UK Government on the thresholds to be used to decide whether a service will fall into Category 1, 2A or 2B.

Through this call for evidence, Ofcom is inviting industry stakeholders, expert groups, and other organisations to provide evidence that will help inform and shape Ofcom’s approach to the OSA regulations. The call for evidence will close on 20 May 2024, after which Ofcom will publish its third consultation paper in 2025.

Preceding this third consultation paper are two consultation papers that have already been finalised and published by Ofcom. The first paper acts as guidance for user-to-user (“U2U”) and search services on how best to approach their new duties under the OSA. The second paper is specific to service providers of pornographic content.

The proposed measures under the first consultation paper vary based on the size and risk profile of the service. A “large service” is any service with an average user base greater than 7 million per month in the UK, which is approximately equivalent to 10% of the UK population. Every other service is a “small service”.

Further, when assessing the risk profile, services are expected to conduct the risk assessments themselves and classify their services into one of the following categories:

  1. Low risk: low risk for all kinds of illegal harm
  2. Specific risk: faces the risk of a specific kind of harm/harms
  3. Multi risk: faces significant risks of multiple kinds of illegal harm

Notably, for large companies that have a multi-risk profile, almost all the proposed measures apply, except those recommended for automated content moderation, enhanced user control, and certain reporting obligations. Online safety regulations are expected to affect more than 100,000 service providers, many of which will be small businesses based in the UK and overseas. Ofcom offers a free self-assessment tool to assess whether these regulations will affect your company. If your organisation is large and sophisticated and requires a tailored approach to ensure compliance with these regulations, we can assist with this.


Utah’s recent passage of updates to its consumer protection law and the Artificial Intelligence Policy Act (Utah AI Policy Act), which comes into effect on May 1, 2024, could mark an important moment in AI regulation. Notably, the updates to state consumer protection law emphasize holding companies that use generative AI (GenAI) – rather than developers – accountable if they knowingly or intentionally use generative AI in connection with a deceptive act or practice. In other words, a company may not have a defense that: “The AI generated the output that was deceptive, so we are not responsible.” For a violation, a court may issue an injunction, order disgorgement of money received in violation of the statute, and impose a fine of up to $2,500, along with any other relief that the court deems reasonable and necessary.

Laws that hold generative AI users, rather than developers, responsible for the accuracy of AI outputs are sure to increase discussion on AI governance teams about employees’ proper use of generative AI and the ongoing quality of AI outputs to reduce the risk from inaccurate or deceptive outputs. 

There are other noteworthy aspects of the recent updates to Utah law.

  • User Notice Requirements Upon Request: The updated consumer protection law requires a company making a GenAI feature available to users to provide, upon a user’s request, a clear and conspicuous disclosure that the person is interacting with GenAI and not a human.
  • Disclosure Requirements for Regulated Occupations: When a company is performing regulated occupations (i.e., those that require a state license or certification), the updated state law requires that the company prominently disclose when the company is using GenAI as part of that service. The disclosure is to be provided “verbally at the start of an oral exchange or conversation; and through electronic messaging before a written exchange.”
  • Additional Provisions: The Utah AI Policy Act establishes the Office of Artificial Intelligence Policy to potentially regulate AI and foster responsible AI innovation.  It also creates the Artificial Intelligence Learning Laboratory Program aimed at analyzing AI’s risks and benefits to inform regulatory activity and encourage development of AI technology in Utah. Additionally, the law permits companies to apply for temporary waivers of certain regulatory requirements during AI pilot testing to facilitate innovation while ensuring regulatory oversight.

Utah’s updates to its consumer protection laws highlight some of the issues companies may face as they adopt GenAI. To reduce risk, companies will want to ensure their AI governance program includes ongoing monitoring of employee use of GenAI and the quality of the outputs from GenAI. While it may not be surprising that companies are responsible for how they use GenAI outputs, the constant innovation in AI technology and the difficulty in ensuring GenAI outputs are appropriate will be a compliance challenge.

Although it has been two years since the Supreme Court’s decision in Dobbs v. Jackson Women’s Health Organization, various state legislatures and courts have tried to define the new post-Roe landscape. This effort includes new laws and amendments to existing privacy laws to protect consumer health data. You can find out more in our blog post from Health Industry Washington Watch.

Additionally, Reed Smith’s San Francisco office will be hosting a comprehensive hybrid-CLE event on April 10, where Sarah Bruno, James Hennessey and Monique Bhargava will provide an overview of recent legislation from Washington state and California as well as what to expect going forward with regard to health data privacy.

Reed Smith will continue to follow developments in health care privacy laws. If you have any questions, please reach out to the authors or to the health care lawyers at Reed Smith.