The State Chancellery and the Ministry of Culture of Saxony-Anhalt have announced the establishment of the “Market Surveillance Authority of the States for the Accessibility of Products and Services” (MLBF). Starting June 28, 2025, this new institution in Magdeburg will ensure compliance with national accessibility requirements for various products and services under the Accessibility Strengthening Act (BFSG).

The MLBF will oversee products like smartphones, smart TVs, and e-book readers, as well as electronic commerce and banking services. This initiative, supported by all 16 federal states, aims for efficient and uniform implementation of accessibility standards across Germany. Companies affected by the BFSG should begin preparations to meet these new requirements.


Data protection authorities across Europe have recently imposed significant fines on companies for violations of data protection laws. Below, we highlight decisions concerning breaches of direct marketing and profiling rules.

A telecommunications company fined €50 million by the French Supervisory Authority

On 23 January 2025, the French Supervisory Authority (CNIL) fined a telecommunications operator €50 million for displaying advertisements via email to users without obtaining consent for direct marketing and for placing cookies on user devices despite users' rejection of cookies. CNIL found that these activities violated the following provisions:

  • Article L. 34-5 of the French Post and Electronic Communications Code, which requires the consent of individuals to receive commercial prospecting by electronic means.
  • Article 82 of the French Data Protection Act, which requires that cookies not be read after a user has withdrawn consent.

As a result, CNIL imposed a fine of €50 million and an order for the company to stop reading cookies within three months of a user withdrawing consent, with a penalty of €100,000/day for non-compliance.

KASPR fined €200,000 by French Supervisory Authority

KASPR, a technology company that gives paying customers access to the contact details of LinkedIn professionals, was fined €200,000 by CNIL. The database contains around 160 million contacts, which KASPR customers use to contact individuals for commercial prospecting, recruitment and identity verification.

CNIL investigated several complaints from individuals who had been canvassed by entities that had obtained their contact details via the KASPR extension. The investigation found the following GDPR violations:

  • Article 6 of the GDPR: LinkedIn users can limit the visibility of their profiles to their 1st and 2nd connections. CNIL found that KASPR's indiscriminate collection of contact details, without due consideration of this visibility limitation, constituted a breach.
  • Article 5(1)(e) of the GDPR: KASPR’s retention of user details for five years after the individual had changed their job or employer was found to be disproportionately long and in breach.
  • KASPR delayed informing data subjects about the processing of their personal data for four years. When it finally did, the opt-out link was only available in English. CNIL found that both failures breached the GDPR's transparency requirements.

As a result, CNIL ordered KASPR to do the following:

  • Cease collecting the personal data of individuals who chose to limit the visibility of their contact details and delete all data collected in this manner. If distinguishing the data of such individuals was not possible, KASPR must contact all concerned individuals within three months of processing the data, offering them the chance to object to the processing.
  • Stop the automatic renewal of personal data storage.
  • Inform individuals whose data was collected about the processing activity.
  • Respond to subject access requests from individuals.

KASPR was fined €200,000 and given a six-month window to comply with these measures.

The European Commission (EC) has recently issued guidelines (“Guidelines”) on the definition of an AI system, as mandated by Article 96(1)(f) of the AI Act. The Guidelines aim to help businesses and legal professionals understand the scope and application of the AI Act as they navigate the regulatory landscape of AI technologies. While helpful, the Guidelines, like many recent EU-level guidelines and papers, lack the practical approach and “grip” that would help organizations evaluate their situation precisely and with legal certainty.

Non-binding nature of the Guidelines
It is important to note that the Guidelines are not legally binding. Only the Court of Justice of the European Union (CJEU) can provide an authoritative interpretation of the AI Act. Nevertheless, these guidelines can serve as an initial reference point for applying the AI system definition.  

Case-by-case assessment 
The Guidelines emphasize the necessity for a case-by-case assessment rather than a fixed, exhaustive list of AI systems. This approach ensures that the definition remains adaptable and relevant to the evolving landscape of AI technologies. At the same time, it makes the legal assessment for organizations more difficult and provides less legal certainty.

Key elements of an AI system 
The Guidelines outline, following Art. 3(1) of the AI Act, seven key elements that collectively define an AI system. The definition is lifecycle-based, covering both the pre-deployment (building) phase and the post-deployment (use) phase. Not all elements need to be present in both phases, reflecting the complexity and diversity of AI systems and ensuring that the definition remains adaptable to various types of AI systems, in line with the AI Act's goals. However, all mandatory elements must appear at least once for a system to meet the definition of an AI system. The seven elements are:
 
1. Machine-based system: AI systems are fundamentally machine-based, incorporating both hardware and software components. This includes advanced technologies such as quantum computing systems.

2. Autonomy: Autonomy is a defining characteristic, referring to systems designed to operate with varying levels of independence from human intervention. Systems requiring full manual control are excluded from the AI system definition, while those with some degree of independent action qualify as autonomous. 

3. Adaptiveness: Adaptiveness after deployment, though not mandatory due to the AI Act’s wording in Art. 3(1) using “may”, refers to an AI system’s ability to exhibit self-learning capabilities and change behaviour based on new data or interactions.  

4. Objectives: AI systems are designed to achieve specific objectives, which can be explicitly encoded by developers or implicitly derived from the system’s behaviour and interactions. These objectives may differ from the intended purpose of the AI system. 

5. Inferencing: The ability to infer how to generate outputs using AI techniques is a key distinguishing feature. Inferencing must be interpreted broadly: it applies primarily to the use phase (when the AI system generates outputs) but also to the building phase (when the system derives outputs through AI techniques, enabling inferencing).

The Guidelines explain that AI systems use different techniques that enable inferencing:  

a. Machine learning approaches:  AI systems learn from data to achieve objectives. Examples include: 
i. Supervised learning: AI systems learn from labelled data (e.g., email spam detection, medical diagnostics, image classification). 
ii. Unsupervised learning: AI systems learn patterns from unlabeled data (e.g., drug discovery, anomaly detection). 
iii. Self-supervised learning: AI systems generate their own labels from data (e.g., language models predicting the next word in a sentence). 
iv. Reinforcement learning: AI systems learn through trial and error based on a reward function (e.g., robotic arms, autonomous vehicles). 
v. Deep learning: AI systems use layered architectures (neural networks) for representation learning, allowing them to learn from raw data. 

b. Logic- and knowledge-based approaches: AI systems infer from encoded knowledge or symbolic representation of the task to be solved. These systems rely on predefined rules, facts, and logical reasoning rather than learning from data. 

Per the Guidelines, the following systems are not AI systems within the meaning of the AI Act and, consequently, do not fall within its scope:

a. Systems for improving mathematical optimization: Systems designed to improve mathematical optimization or to accelerate traditional optimization methods (e.g., linear or logistic regression) have the capacity to infer but do not exceed basic data processing.

b. Basic data processing: These systems execute predefined operations without learning, reasoning, or modelling – such systems simply present data in an informative way.   

c. Systems based on classical heuristics: These systems use rule-based approaches, pattern recognition, or trial-and-error strategies rather than data-driven learning.

d. Simple prediction systems: Even if these systems technically use machine learning approaches, they rely only on basic statistical estimation, so their performance does not meet the threshold required to be considered an AI system.

6. Outputs: The capability of AI systems to generate outputs, such as predictions, content, recommendations, or decisions, sets AI systems apart from other software. AI systems outperform traditional software by handling complex relationships in data and generating more nuanced, dynamic, and sophisticated outputs. The Guidelines also provide more details on the different categories of outputs:

a. Predictions: AI systems estimate unknown values based on given inputs. Unlike non-AI software, machine learning models can identify complex patterns and make highly accurate predictions in dynamic environments (e.g., self-driving cars, energy consumption forecasting). 

b. Content: AI systems can create new material, including text, images, and music.  

c. Recommendations: AI systems personalize suggestions for actions, products or services based on user behaviour and large-scale data analysis. Unlike static, rule-based non-AI systems, AI can adapt in real-time and provide more sophisticated recommendations (e.g., hiring suggestions in recruitment software). 

d. Decisions: AI systems autonomously make conclusions or choices, replacing human judgment in certain processes.  

7. Interaction with the environment: AI systems actively interact with and impact their deployment environments, including both tangible physical objects (e.g. robot arms) and virtual spaces (e.g. digital spaces, data flows, and software ecosystems).  

Implications  
While the Guidelines provide a useful starting point, including examples and explanations, they ultimately emphasize that each system must be assessed individually to determine whether it qualifies as an AI system.

On January 24, 2025, a three-judge panel in the U.S. Court of Appeals for the Eleventh Circuit held in Insurance Marketing Coalition v. FCC, No. 24-10277, that the Federal Communications Commission’s (FCC) one-to-one consent requirement rule (the “FCC Rule”) went beyond the FCC’s authority under the Telephone Consumer Protection Act (“TCPA”). The court held that the FCC exceeded its statutory authority, finding that the agency’s “new consent restrictions impermissibly conflict with the ordinary statutory meaning of ‘prior express consent.’”

The FCC’s “one seller at a time” consent rule

The FCC Rule would have required marketers to obtain prior express consent from consumers, given to “one seller at a time,” before the consumer received telemarketing or advertising robocalls. The court characterized the rule at the outset of its decision as “another sweeping rule affecting only telemarketing and advertising robocalls and robotexts,” which the court found fell outside the FCC’s statutory authority to prescribe regulations implementing the TCPA. More specifically, the final one-to-one consent rule required parties making marketing calls using a robocall system to obtain prior express written consent that was both clear and conspicuous and obtained from a consumer “one seller at a time.” Further, any subsequent calls and texts would have to be “logically and topically associated with the initial interaction that prompted the consent.”

In vacating the FCC Rule, the Eleventh Circuit explained that because the TCPA does not define “prior express consent,” the court was bound to evaluate the plain meaning of the term, including how it has been commonly understood in the common law. In analyzing the plain meaning of prior express consent, the court concluded that to receive a robocall, consumers need only clearly and unmistakably give consent. The FCC Rule would have gone beyond the clear-and-unmistakable consent required by the TCPA by requiring that marketers obtain individual one-to-one “prior express consent” from the consumer and limiting any communication to that which was logically and topically associated with an individual’s initial token of consent to the marketer. Because of the inconsistency between the FCC Rule and the common law meaning of “prior express consent,” the court ruled that the FCC had exceeded its statutory authority in promulgating the final FCC Rule.

Many had wondered how this case would shake out, given that the FCC Rule was set to take effect on January 27, 2025. However, the court’s ruling vacated the “one-to-one consent” and “logically and topically related” requirements before they took effect, and the case has been remanded to the FCC for further proceedings. Whether the new administration will appeal the decision or otherwise attempt to revive the rule is not yet known. Notably, a number of agencies during the Biden Administration faced challenges to their regulatory authority and its exercise. Heightened consent requirements in a number of contexts relating to marketing or data usage, for example with location information at the Federal Trade Commission, could face similar scrutiny if challenged in court. It will be interesting to see the course set by the new administration as it works through previous consumer protection policy and enforcement activity.

On 8 January 2025, the European General Court (the Court) ruled on the lawfulness of transferring personal data to countries outside the European Union (EU), in particular the United States (case T‑354/22). The judgment (Judgment) caused a stir among both businesses and data protection experts. This blog post gives you an overview of the most important aspects of the Judgment and answers the question: Is it worth the hype?

A. Factual and legal background

The plaintiff, a German national (who is also the managing director of a company that assists in the mass enforcement of General Data Protection Regulation (GDPR) damage claims in Germany), sued the European Commission (EC) for damages related to the website https://futureu.europa.eu (Website), which used a Content Delivery Network (CDN) operated by an EU-based subsidiary of a United States company. The Website also offered an option to log in using existing social media accounts via the EU Login system. As part of this process, the Website communicated with servers of a social media network in the United States to verify the plaintiff’s authentication and enable the login. The plaintiff visited the Website several times in 2021 and 2022 and used the EU Login system. He claimed that his personal data, including his IP address, was unlawfully transferred to the USA during those visits and his use of the EU Login system. The plaintiff subsequently exercised his right of access and asked for specific information regarding the processing and transfer of his data by the Website and its third-party providers; he claimed that the EC did not provide the requested information within the statutory time limit. Because the EC processed the data, the Judgment is based on Regulation (EU) 2018/1725 rather than the GDPR, but the rules are similar.

B. Analysis of damage claims

The Court dismissed most of the plaintiff’s claims, but ordered the EC to pay 400 EUR in damages for the unlawful transfer of personal data in the context of the EU Login system.

In more detail:

I. Damages for right to access

The Court dismissed the claim for damages related to the allegedly delayed response to the plaintiff’s data subject access request. It emphasized that non-compliance with the statutory time limit for granting access to information, on its own, does not necessarily constitute a qualified breach of Regulation 2018/1725. To succeed in such a claim, the plaintiff must also demonstrate that the EC’s failure to meet the deadline was likely to have caused the alleged damages. In this case, the plaintiff failed to do so.

II. Damages for unlawful data transfers to third countries

The plaintiff alleged that his personal data was transferred to servers in the United States on three different occasions: when visiting the Website on 30 March 2022 and 8 June 2022, and when registering on EU Login on 30 March 2022. The Court rejected the first two claims but upheld the third:

i. Data transfer when visiting the Website on 30 March 2022

The Court found that there was no data transfer to a third country, as the plaintiff’s IP address and browser and device information were transmitted to a server in Munich. The Court also ruled that the mere risk of a data transfer does not constitute a data transfer. The fact that the operator of the CDN was a subsidiary of a United States company did not mean that the personal data was accessible to United States authorities. The Court also pointed out that the arguments relating to the Schrems II judgment were irrelevant, as that judgment dealt with the conditions for data transfers to the United States, not the processing of personal data in the EU by subsidiaries of United States companies.

ii. Data transfer when visiting the Website on 8 June 2022

The Court found that even if the transfer to the USA constituted a breach, it could not have caused any damage. The plaintiff had visited the Website several times that day while his IP address connected to servers in Munich, London, Hillsboro, Newark and Frankfurt. The Court found this was because the plaintiff, although located in Germany, had used technical means (e.g. a VPN) to change his apparent location, pretending online to be someone near Munich, London, Hillsboro, Newark and Frankfurt am Main on the same day. The Court stated that a claim for damages requires direct causality between the breach and the damage: the controller's conduct must be the immediate cause of the asserted non-material damage, here the loss of control over personal data. The Court concluded that the direct and immediate cause of the alleged damage was not any violation by the EC but the plaintiff's own behavior: he had deliberately provoked the transfer to the USA in order to claim damages afterwards, which breaks the chain of causation.

iii. Data transfer when registering on EU Login on 30 March 2022

The Court ordered the EC to pay 400 EUR in non-material damages due to the unlawful transfer of the plaintiff’s personal data to the United States without having an adequate transfer mechanism in place. The EU Login system allowed users to log in to the EC’s websites using their existing social media accounts. When users clicked the “Sign in” button (a hyperlink), they were redirected to an external page of the social media network, during which their IP address was transmitted to the United States. At the time of the transfer, no adequacy decision or alternative legal basis for such a data transfer existed, as this occurred during the transitional period following the invalidation of the Privacy Shield and before appropriate mechanisms under the GDPR were implemented. The Court considered the EC to be responsible as a controller for data protection, as it had created the conditions for the transmission of the plaintiff’s IP address by placing the hyperlink on the Website. The Court also deemed the requested 400 EUR as appropriate, as the data transmission had placed the plaintiff in a situation where he was uncertain how his personal data was being processed.

Takeaways

The Judgment shows that any claim for damages due to a breach of data protection law requires direct causality between the breach and the damage. If the person concerned interferes with the ‘normal’ course of events in such a way that their own actions are a necessary condition for the alleged damage to occur, causality is ruled out from the outset. The Judgment also highlights the importance of ensuring that data transfers, especially to the United States, comply with the law, and of taking responsibility for sign-up links, such as those for third-party authentication services. Companies should clearly inform users about data collection and transfer, and regularly review their data protection processes, especially when using third-party services, to minimize liability risks.

However, the Judgment leaves several important questions unresolved, such as the potential joint controllership between the social network and the EC, and the meaning of “loss of control” over data transfers. Companies should closely monitor these developments and adjust their data protection strategies accordingly.

Update from March 19, 2025:

As announced by the plaintiff on their website, both the European Commission and the plaintiff have filed appeals against the judgment. 

The European Union (EU) is introducing new regulations for online and tech businesses to create a consistent legal framework across various sectors. In 2025, several European and German laws will come into effect. Want to know which ones? Keep reading! This alert provides a quick overview of what these 2025 frameworks are about, whom they may concern and when they will apply.

The EU General Product Safety Regulation

  • What? The EU General Product Safety Regulation (GPSR) replaces the old General Product Safety Directive and includes various safety requirements for products. The regulation covers product safety analysis, labeling requirements, and rules for product recalls.
  • Who? It impacts all economic operators and online marketplaces dealing with products that are intended or likely to be used by consumers. The GPSR also applies if the product is manufactured or sold online from outside the EU, provided the product is intended for consumers in the EU.
  • When? The GPSR has been in effect since December 13, 2024.

The EU DORA Regulation

  • What? The Regulation on Digital Operational Resilience for the Financial Sector (DORA) establishes a harmonized legal framework for managing cybersecurity and ICT risks in financial markets. It aims to ensure resilient operations during major business interruptions that could threaten network and information system security. The Regulation focuses on ICT risk management, reporting requirements, digital resilience testing, and third-party risk management.
  • Who? The DORA covers a wide range of EU financial entities, e.g. credit institutions, investment firms or management companies.
  • When? The DORA applies from January 17, 2025.

The NIS2 Directive

  • What? With the introduction of the Directive on measures for a high common level of cybersecurity across the Union (NIS2), the EU aims to improve cybersecurity in critical sectors in response to growing threats. Affected companies are required to implement risk management measures, registration obligations and incident management.
  • Who? The scope of NIS2 is significantly broader compared to the previous NIS1 Directive. It covers all companies that meet quantitative thresholds and provide or carry out their activities in critical sectors in the EU.
  • When? Member states were required to transpose the directive into national law by October 17, 2024. Germany is behind schedule, but the German implementation law is expected to come into effect in the second quarter of 2025. The timetable depends to a large extent on the composition of the future federal government.

The German Accessibility Strengthening Law

  • What? The German Accessibility Strengthening Act (Barrierefreiheitsstärkungsgesetz – BFSG) implements the European Accessibility Act (EAA). The law aims to ensure the accessibility of products and services, enabling people with disabilities to participate in society.
  • Who? It requires various economic operators to meet specific accessibility requirements for products and services offered in Germany, including e-commerce offerings like webshops and consumer terminals with interactive services.
  • When? The BFSG was enacted on July 16, 2021, and will be applied from June 28, 2025.

The EU AI Act

  • What? The Regulation on Artificial Intelligence (AI Act) is the first legislation to set specific rules for developing and providing artificial intelligence. It classifies AI Systems into different risk categories, each with its own set of requirements.
  • Who? The AI Act primarily applies to operators, providers, importers, and distributors of AI systems. It is sufficient for the Act to apply that the AI system is placed on the market in the European Union.
  • When? While most of the AI Act’s provisions will apply from August 2, 2026, the rules on prohibited AI practices already apply from February 2, 2025. The rules for certain high-risk AI systems will apply from August 2, 2027.

The European Media Freedom Act

  • What? The European Media Freedom Act (EMFA) introduces a new framework to protect media pluralism and independence. It includes various information obligations, such as disclosing the names of beneficial owners.
  • Who? The EMFA applies particularly to media services and media service providers that offer information, entertainment, or education to the public.
  • When? The EMFA will be effective from August 8, 2025. However, some articles will apply earlier, starting from November 8, 2024, February 8, 2025, and May 8, 2025.

The EU Data Act

  • What? The EU Data Act (DA) introduces new rules for the exchange, distribution, and use of data, including non-personal data. The DA also enhances data interoperability and data-sharing mechanisms and services.
  • Who? It targets manufacturers of connected products (IoT), providers of related services, their users, and data holders. The DA applies in particular to products placed on the market in the Union and providers of related services; irrespective of the place of establishment of those manufacturers and providers.
  • When? The DA entered into force on January 11, 2024, and its rules will mainly apply from September 12, 2025.

The EU Product Liability Directive

  • What? The new EU Product Liability Directive (PLD) aims to update European product liability law to address digitalization challenges and business developments in recent years. The liability regime for economic operators is expected to become significantly stricter.
  • Who? The PDL covers all movable or immovable products placed on the market or put into service in the EU. One of the significant changes is that the directive now also covers digital products like software.
  • When? Member states must implement this directive into national law by December 9, 2026. Due to its significant impact, businesses are advised to understand its effects on their business models early on.

What’s next?

The European Union’s regulatory landscape is evolving, with significant projects slated for the year 2025. A key focus will be on regulating product compliance and e-commerce platforms. Although the list of legislative acts is not exhaustive, several important laws are already in the pipeline for the upcoming years. The Cyber Resilience Act, the Machinery Regulation, and the e-Evidence Package are only some examples. Online and tech companies will continue to face new challenges as these regulations come into effect.

EU data strategy: Stay up to date on the Data Act, AI Act, Digital Services Act, NIS2, Cyber Resilience Act, European Health Data Space and others with our blog series.

UK NIS and critical national infrastructure updates

The UK government recently created a page on the new Cyber Security and Resilience Bill, which will update the Network and Information Systems (NIS) Regulations 2018. There is no draft of the Bill available yet, but the government has confirmed that it will cover five sectors (transport, energy, drinking water, health, and digital infrastructure) and digital services (online marketplaces, online search engines, and cloud computing services). It will add obligations on cyber incident reporting and expand coverage to cybersecurity risks in supply chains. At least twelve regulators in the UK will be responsible for implementing the updated NIS Regulations, and they will be given greater powers. The Bill will be introduced to Parliament in 2025.

At present, cybersecurity risks in the supply chain are managed via a government-backed cybersecurity certification scheme called Cyber Essentials (based on self-assessment) and Cyber Essentials Plus (assessed by a third party). It is a voluntary scheme and is not restricted to a specific industry. Cyber Essentials or Cyber Essentials Plus is often a condition for the provision of ICT services to the UK government. ENISA has not included Cyber Essentials in its mapping of NIS2 requirements to international standards and frameworks in its draft NIS2 guidance. The UK government has reported on its ongoing talks with the EU on the cybersecurity legal framework, which may lead to further alignment between the two regimes.

When it comes to critical national infrastructure, the UK government’s approach is reflected in the Resilience Framework. The UK government plans to develop critical infrastructure standards by 2030. In the meantime, it appears to focus on industry-specific cybersecurity requirements; for example, cybersecurity requirements for telecoms entities were set out in the Telecommunications (Security) Act 2021. It has also created the National Protective Security Authority to provide support to critical national infrastructure entities, which at present cover 13 sectors (data centres were added as a sub-group in September 2024).

EU NIS2 and CER updates

In the meantime, EU member states are continuing to implement NIS2, which became enforceable on 18 October 2024. Although progress varies between member states, the majority have either implemented local legislation on NIS2 or published a proposal. In Germany, for example, the current draft is under discussion in committees of the German parliament; the latest draft reflects the changes suggested by the committee for home affairs. Certain regulators in EU member states have published their own guidelines on the NIS2 regulations and provided a platform for self-reporting for organisations that fall within the scope of NIS2 (e.g. the Italian National Cybersecurity Agency opened an online portal on 1 December 2024).

Organisations are still working on ensuring their compliance with NIS2 requirements and should look into the available guidelines in the EU member states they operate in. Given their cross-border operations, there may be more clarity from the European Commission on what is required from service providers in digital infrastructure and managed security services: the European Commission’s implementing act provides details on what their policies should look like and sets thresholds for significant incidents for each type of organisation. ENISA published draft guidelines for such organisations in support of the implementing act, open for consultation until 9 January 2025.

As for the EU Critical Entities Resilience Directive (CER), EU member states were required to adopt and publish local implementing acts by 18 October 2024. The CER covers 11 sectors: energy, transport, banking, financial market infrastructure, health, drinking water, wastewater, digital infrastructure, public administration, space, and the production, processing and distribution of food. Member state competent authorities are to carry out risk assessments by 17 January 2026, using the list of essential services in the CER, and to identify the critical entities to which the CER will apply by 17 July 2026.


Cybersecurity requirements for financial entities in the UK and the EU

As for financial entities, they too are working towards compliance with new cybersecurity rules in both the UK and the EU. In the UK, the operational resilience (usually abbreviated as “opres”) rules become enforceable by 31 March 2025, whereas the EU Digital Operational Resilience Act (DORA), sector-specific cybersecurity legislation, becomes enforceable on 17 January 2025. Although the two regimes pursue the same purpose, they differ in how risk is to be determined and when competent authorities must be notified. DORA’s requirements are clarified in the relevant regulatory and implementing technical standards, which run to nearly 1,000 pages and are legally binding.

The scope of the UK opres rules is limited to banks, building societies, PRA-designated investment firms, insurers, recognised investment exchanges, enhanced scope Senior Managers and Certification Regime firms, and firms authorised or registered under the Payment Services Regulations 2017 or the Electronic Money Regulations 2011, whereas DORA applies to almost all regulated financial entities. Financial entities with a presence in both the UK and the EU will continue looking for efficient ways of complying with both regimes.

Please let us know if you need assistance in identifying whether your services fall within the scope of NIS2 and we can help navigate the plethora of local requirements.

1 Chemicals, civil nuclear, communications (including data centres), defence, emergency services, energy, finance, food, government, health, space, transport, and water.

2 DNS service providers, TLD name registries, cloud computing service providers, data centre service providers, content delivery network providers, managed service providers, managed security service providers, providers of online marketplaces, online search engines and social networking services platforms, and trust service providers.

Artificial intelligence (“AI”) is everywhere, and it continues to be a featured topic at many industry conferences. Sometimes this relatively new topic can feel a bit stale. However, a very interesting AI paper was recently published: Artificial Intelligence, Scientific Discovery, and Product Innovation by Aidan Toner-Rodgers (November 6, 2024). While several previous papers have shown that AI may be useful in drug discovery (e.g., Merchant 2023), this is perhaps the first paper to provide “real world,” causal evidence that AI improves outcomes for scientific research and development.

The high-level takeaway from the paper is that “AI-assisted researchers discover 44% more materials, resulting in a 39% increase in patent filings and a 17% rise in downstream product innovation. These [discovered] compounds possess more novel chemical structures and lead to more radical inventions.” Id. at Cover Page. Moreover, highlighting the synergy between human skill and AI, the benefit of AI was more pronounced for top researchers at the firm – whose output nearly doubled – while the bottom third of scientists saw little benefit from AI. Id.

This human-AI synergy may be crucial for companies seeking to patent an AI-discovered drug molecule. Indeed, courts have held that “only a natural person can be an inventor, so AI cannot be.” Thaler v. Vidal, 43 F.4th 1207, 1213 (Fed. Cir. 2022), cert. denied, 143 S. Ct. 1783 (2023). Moreover, according to the Inventorship Guidance for AI-Assisted Inventions issued by the U.S. Patent and Trademark Office, “[t]he patent system is designed to encourage human ingenuity.” See Federal Register, Vol. 89, No. 30, at 10046, available at https://www.federalregister.gov/documents/2024/02/13/2024-02623/inventorship-guidance-for-ai-assisted-inventions. Accordingly, “inventorship analysis should focus on human contributions, as patents function to incentivize and reward human ingenuity. Patent protection may be sought for inventions for which a natural person provided a significant contribution to the invention[.]” Id. at 10044.

But what constitutes a “significant contribution”? The closest analogy is likely the scenario in which multiple natural persons are listed as inventors on a single patent application. According to the Guidance, courts must evaluate several factors articulated in Pannu v. Iolab Corp., 155 F.3d 1344, 1351 (Fed. Cir. 1998), including whether the individual “(1) contribute[d] in some significant manner to the conception or reduction to practice of the invention; (2) ma[de] a contribution to the claimed invention that is not insignificant in quality, when that contribution is measured against the dimension of the full invention, and (3) [did] more than merely explain to the real inventors well-known concepts and/or the current state of the art.” Federal Register, Vol. 89, No. 30, at 10047. Failing to meet any one of those factors “precludes that person from being named an inventor.” Id.

Further, according to the Guidance:

[A] natural person must have significantly contributed to each claim in a patent application or patent. In the event of a single person using an AI system to create an invention, that single person must make a significant contribution to every claim in the patent or patent application. Inventorship is improper in any patent or patent application that includes a claim in which at least one natural person did not significantly contribute to the claimed invention.

Id. at 10048.

Accordingly, companies should consider documenting human involvement in all phases of their materials discovery process. The goal, of course, is to show that a human inventor “significantly” contributed to each claim in a patent application. On what makes a contribution significant, the Guidance states that “a person who takes the output of an AI system and makes a significant contribution to the output to create an invention may be a proper inventor.” Id. at 10048. Just as critically, the Guidance provides examples of human contributions that are likely not significant enough to meet the Pannu test, including: (1) recognizing a problem and having a general goal or research plan (which is then presented to an AI tool); (2) merely reducing an AI-created output to practice; and (3) “overseeing” an AI system that eventually creates an output. Id. at 10048-49.

The full paper is linked for your reference, and the highlights are presented below. If you have questions about this study or AI more generally, please do not hesitate to reach out.

  • Study Design:  The author randomized the use of an AI tool for materials discovery in a cohort of 1,018 scientists at the R&D lab of a large U.S. firm, which specializes in materials science (specifically healthcare, optics, and industrial manufacturing). Toner-Rodgers at p. 1.
  • AI Tool:  The AI tool selected for the study was a set of graph neural networks (GNNs) trained on the “composition and characteristics of existing materials”, which then “generat[ed] ‘recipes’ for novel compounds predicted to possess specified properties.” Id. at 1, 9. The GNN architecture represents materials as multidimensional graphs of atoms and bonds, enabling it to learn physical laws and encode large-scale properties. Id. at 9. From there, scientists evaluated the outputs and synthesized the most promising options. Id. at 1.
  • Materials Discovery:  AI-assisted scientists discovered 44% more materials. These compounds possessed superior properties, indicating that the model also improves quality. This influx of materials led to a 39% increase in patent filings and, several months later, a 17% rise in product prototypes incorporating the new compounds. Accounting for input costs, the AI tool boosted R&D efficiency by 13-15%. Id. at 1. The effects on materials discovery and patenting emerged after 5-6 months, while the impact on product innovation lagged by more than a year. Id. at 16. No data were presented regarding commercialization of any discovered compound.
  • Quality:  AI increased the quality of R&D output rather than merely building out currently understood, low-value outputs. For atomic properties, the tool increased average quality by 13%. Id. at 18. AI also led to statistically significant improvements in average material quality (9%) and in the proportion of high-quality materials. Id.
  • Novelty:  The AI tool increased the “novelty of discoveries, leading to more creative patents and more innovative products.” Id. at 20. In the absence of AI, scientists focused primarily on improvements to existing products, with only 13% of prototypes representing new product lines; the AI tool raised this share to 22% in the treatment group, engendering a shift toward more radical innovation. Id. at 18-20.
  • Synergy with Humans:  Top scientists saw the most benefit from AI. The bottom third of researchers saw minimal gains, but the output of top-decile scientists increased by 81%, suggesting that AI and human expertise are complements in the innovation production function. Id. at 22. Top scientists leverage their expertise to identify promising AI suggestions, enabling them to investigate the most viable candidates first. Id. at 21-22. Some highly skilled scientists can also observe features of the materials design problem not captured by the AI tool. Id. at 33. In contrast, others waste significant resources investigating false positives. Id. at 26.
  • Impact on Human Labor:  AI dramatically changes the discovery process. The tool automates a majority of “idea generation” tasks, reallocating scientists to the new task of evaluating model-suggested candidate compounds. Id. at 38. In the absence of AI, researchers devoted nearly half their time to conceptualizing potential materials; this fell to less than 16% after the tool’s introduction, while time spent assessing candidate materials increased by 74%. Id. at 2. Time spent in the experimentation phase also rose. Id. at 27.

In sum, AI is labor-replacing when it comes to identifying new materials, but labor-augmenting in the broader process because of the need for human judgment in evaluating potential compounds. Id. at 28. The author also noted a slump in workplace satisfaction, with 82% of scientists reporting reduced satisfaction with their work. Id. at 36. Further research is needed on this potential “brain drain,” as this paper highlights the benefits of pairing a company’s top human minds with AI.

Since this post was published, MIT issued a statement calling into question the validity of Toner-Rodgers’ findings. That said, this post’s discussion of the USPTO’s Guidance remains applicable. The full MIT statement can be found here: https://economics.mit.edu/news/assuring-accurate-research-record.

On 24 October 2024, the UK Department for Science, Innovation and Technology announced new legislation to modernise the UK’s use of data and boost the UK economy.

The focus of the new Data Use and Access Bill (the “new Bill”) is not just on data protection – it covers wider topics, such as the rules for digital identity verification, minor amendments to the Online Safety Act, the National Underground Asset Register, technical standards for IT services in health care, and others.

The data protection provisions are set out in Part 5, which carries over some of the changes to data protection requirements proposed by the previous government in the Data Protection and Digital Information Bill (the “previous Bill”), which failed to pass before the general election. A table below shows which changes were kept from the previous Bill. The new Bill also introduces new additions, such as the following:

  • Data Portability: The Secretary of State will be empowered to introduce provisions requiring data holders to provide customer data to a customer or third party at the customer’s request, reinforcing the data portability right afforded under the UK GDPR.
  • International transfers: the Agreement between the Government of the United Kingdom of Great Britain and Northern Ireland and the Government of the United States of America on Access to Electronic Data for the Purpose of Countering Serious Crime is expressly identified as a basis for transfers in reliance on international law.
  • Automated decision-making: there are further restrictions on automated decision-making involving sensitive processing. Such processing is to be permitted only with explicit consent, or where the decision-making is authorised or required by law.

The reforms should not pose any risk to the UK’s adequacy decision from the EU, which is due to be reconsidered in 2025. We expect the reforms to pass through the legislative process quickly. Businesses should monitor the Bill’s progress so that they are ready to comply with the reforms.

In a rapidly evolving technological landscape, the National Institute of Standards and Technology (NIST) has released crucial guidance on managing risks associated with generative AI (GenAI). Our latest client alert delves into the newly published GenAI Profile (NIST AI 600-1), which outlines 12 potential high-level risks and offers actionable mitigation strategies organised into four categories: govern, map, measure, and manage. From addressing data privacy concerns to ensuring compliance with future AI laws, the alert provides insights to help organizations navigate the complexities of GenAI responsibly. Read the full alert to understand how aligning with NIST’s recommendations can fortify your AI governance program and prepare you for upcoming regulatory changes.