Artificial Intelligence

On 26 November 2023, the US Cybersecurity and Infrastructure Security Agency (CISA), together with the UK’s National Cyber Security Centre (NCSC), published joint ‘Guidelines for Secure AI System Development’ (the Guidelines).

The Guidelines were formulated by CISA and the NCSC, in cooperation with 21 other international agencies and ministries, as well as industry experts.

Continue Reading UK & US cybersecurity agencies release new ‘Guidelines for Secure AI System Development’

The House of Commons Committee on Science, Innovation and Technology (the Committee) embarked on an inquiry in October 2022 to assess the impact of artificial intelligence (AI) on various sectors, AI regulation, and the UK Government’s AI governance proposals. The resulting interim report, published on 31 August 2023, offers valuable insights, particularly from a legal standpoint, into the challenges and approaches related to AI governance in the UK.

Continue Reading AI, a Double-Edged Sword: Recommendations from the Committee’s Interim Report on AI

On 28 September 2022, the European Commission published the proposed AI Liability Directive. The Directive joins the Artificial Intelligence (AI) Act (which we wrote about here) as the latest addition to the EU’s AI focused legislation. Whilst the AI Act proposes rules that seek to reduce risks to safety, the liability rules will apply where such a risk materialises and damage occurs.

In a European enterprise survey, 33% of companies considering adopting AI cited ‘liability for potential damages’ as a major external challenge. The proposed Directive aims to tackle this challenge by establishing EU-wide rules to ensure that consumers obtain the same level of protection as they would when claiming damages caused by any other product.

Continue Reading What happens when AI goes wrong? The proposed EU AI Liability Directive

On 18 October 2021, the European Commission (the Commission) launched a public consultation on adapting the civil liability rules for the digital age, with a specific focus on challenges arising from the adoption of artificial intelligence (AI).

The consultation builds on the Commission’s inception impact assessment roadmap (IIA) on this topic and is part of the Commission’s wider effort to modernise EU regulations for the digital age.

Why the civil liabilities rules need to change

While Product Liability Directive 85/374/EEC (Directive) sets out rules aimed at ensuring that injured parties are compensated for damage caused by defective products, the Commission has previously noted, in a 2018 report and in the IIA, that the Directive is no longer fit for the digital age. Challenges include:

  • Whether and how intangible digital elements such as software can be classified as products
  • The lack of clarity on who should be liable for defects after products are put into circulation
  • Significant obstacles for injured parties to obtain compensation, especially given the difficulties in establishing causal links where the behaviours of AI systems are partially or wholly opaque

Continue Reading Civil liability rules in the digital age: EC launches consultation

AI is a hot topic, particularly in the area of patent law and inventorship.

On Tuesday 21 September 2021, the UK Court of Appeal ruled that artificial intelligence (AI) cannot be listed as an inventor on a patent application (Thaler v Comptroller General of Patents Trade Marks and Designs [2021] EWCA Civ 1374).

Background

The present case related to two patent applications submitted to the UK Intellectual Property Office (IPO) by Dr Stephen Thaler. Both applications listed the inventor as ‘DABUS’, an AI machine built for the purpose of inventing, which Dr Thaler said had come up with the two inventions in question. The UK IPO had refused to process either application (considering them withdrawn) because they failed to comply with the requirement to list an inventor and because Dr Thaler was not entitled to apply for the patents. According to the Patents Act 1977, an inventor must be a ‘person’.

At first instance, Mr Justice Marcus Smith had upheld the IPO’s decision.

Continue Reading UK Court of Appeal rules AI is not an inventor

On April 21, 2021, the European Commission released its proposed regulation on artificial intelligence (AI) (Regulation), following the Commission’s white paper “On Artificial Intelligence – A European approach to excellence and trust”, published in February 2020. The Regulation shows that the European Union is seeking to establish a legal framework for AI by laying

Artificial intelligence, or AI, has the ability to process large sets of data. The term “AI” describes algorithms that can be taught to identify patterns or predict outcomes. If the algorithm is primed with a teaching set of data, it can then evaluate new sets of data against the desired outcome. AI has been
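The workflow described above can be sketched in code: an algorithm is primed with a teaching (training) set of labelled examples, then used to evaluate new, unseen data. The sketch below is a minimal, hypothetical nearest-neighbour classifier; the data, labels, and function names are illustrative and not drawn from any system discussed in this post.

```python
# Minimal supervised-learning sketch: prime a model with a teaching
# (training) set, then use it to evaluate a new, unseen data point.
from math import dist

def predict(teaching_set, new_point):
    """Label a new point with the label of its nearest teaching example."""
    nearest_features, label = min(teaching_set,
                                  key=lambda example: dist(example[0], new_point))
    return label

# Teaching set: (features, desired outcome) pairs — here, hypothetical
# (hours of use, error count) readings labelled by a human reviewer.
teaching_set = [
    ((1.0, 9.0), "faulty"),
    ((2.0, 8.0), "faulty"),
    ((8.0, 1.0), "healthy"),
    ((9.0, 2.0), "healthy"),
]

# Evaluate a new data point the model has never seen before.
print(predict(teaching_set, (7.5, 1.5)))  # prints "healthy"
```

The "teaching" step here is simply storing labelled examples; more sophisticated algorithms instead fit parameters to the teaching set, but the pattern — learn from labelled data, then generalise to new data — is the same one the paragraph above describes.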

Late last year, we reported that the Information Commissioner’s Office (ICO) had published draft guidance to assist organisations with explaining decisions made about individuals using AI. Organisations that process personal data using AI systems are required under the GDPR to provide an explanation of the logic involved, as well as the significance and the envisaged consequences of such processing, in the form of a transparency notice to the data subjects.

On 20 May 2020, following its open consultation, the ICO finalised the guidance (available here). This is the first guidance issued by the ICO that focuses on the governance, accountability and management of the several different risks arising from the use of AI systems when making decisions about individuals.

As with the draft guidance, the final guidance is split into three parts. We have outlined the key takeaways for each part below.

Continue Reading ICO finalises guidance on explaining decisions made with AI

The European Union Blockchain Observatory and Forum, on 21 April, published a report examining how blockchain can be combined with two other important emerging technologies – the Internet of Things (IoT) and artificial intelligence (AI) – to complement each other and build new kinds of platforms, products, and services.

The report first looks at the interplay of blockchain with the IoT, addressing how blockchain can aid its functioning by providing a decentralised alternative to the otherwise centralised approach of the IoT. This centralisation poses a number of challenges in monitoring, controlling, and facilitating communication between the millions of heterogeneous devices. The report highlights how blockchain can provide a more robust, more scalable, and more direct platform to overcome these challenges.

The report similarly delves into the potential relationship between blockchain and AI. It explains some concerns surrounding AI, such as how it is currently concentrated in the hands of a few large companies due to the high cost of gathering, storing, and processing large amounts of data, as well as of engaging AI experts. It then illustrates how blockchain can mitigate such concerns so that access to AI models is more readily available to individuals and small companies.

Continue Reading EU Blockchain Observatory and Forum explores the convergence of blockchain, AI, and the IoT

With the Artificial Intelligence Video Interview Act (effective January 1, 2020), or “AI Video Act,” Illinois has passed a groundbreaking new law regulating the use of artificial intelligence (“AI”) in video recruitment practices.

Background
Employers increasingly seek tech-enabled tools to facilitate the hiring, evaluation, retention and development of their workforces. However, as the implementation of