Artificial intelligence (“AI”) is everywhere, and it continues to be a featured topic at many industry conferences. Sometimes this relatively new topic can feel a bit stale. However, a very interesting AI paper was recently published: Artificial Intelligence, Scientific Discovery, and Product Innovation by Aidan Toner-Rodgers (November 6, 2024). While several previous papers have…
What happens when AI goes wrong? The proposed EU AI Liability Directive
On 28 September 2022, the European Commission published the proposed AI Liability Directive. The Directive joins the Artificial Intelligence (AI) Act (which we wrote about here) as the latest addition to the EU’s AI-focused legislation. Whilst the AI Act proposes rules that seek to reduce risks to safety, the liability rules will apply where such a risk materialises and damage occurs.
In a European enterprise survey, 33% of companies considering adopting AI cited ‘liability for potential damages’ as a major external challenge. The proposed Directive aims to tackle this challenge by establishing EU-wide rules that ensure consumers obtain the same level of protection as they would when claiming damages caused by any other product.
UK regulators publish two discussion papers on algorithmic systems
On 28 April 2022, the UK Digital Regulation Cooperation Forum (DRCF) published two discussion papers: one on the benefits and harms of algorithms, and one on the landscape of algorithmic auditing and the role of regulators.
About DRCF
The DRCF consists of four UK regulators: the Competition and Markets Authority, Ofcom, the Information Commissioner’s Office and the Financial Conduct Authority. It was formed to support regulatory cooperation in digital markets.
UK Court of Appeal rules AI is not an inventor
AI is a hot topic, particularly in the area of patent law and inventorship.
On Tuesday 21 September 2021, the UK Court of Appeal ruled that artificial intelligence (AI) cannot be listed as an inventor on a patent application (Thaler v Comptroller General of Patents Trade Marks and Designs [2021] EWCA Civ 1374).
Background
The present case related to two patent applications submitted to the UK Intellectual Property Office (IPO) by Dr Stephen Thaler. Both applications listed the inventor as ‘DABUS’, an AI machine built for the purpose of inventing, which had successfully come up with two patentable inventions. The UK IPO had refused to process either application (treating them as withdrawn) because they failed to comply with the requirement to name an inventor, who under the Patents Act 1977 must be a ‘person’, and because Dr Thaler was not entitled to apply for the patents.
At first instance, Mr Justice Marcus Smith, sitting in the High Court, had upheld the IPO’s decision.
NICE AI: A health data opportunity
The UK National Institute for Health and Care Excellence (NICE) has partnered with the Care Quality Commission (CQC), the Health Research Authority (HRA) and the Medicines and Healthcare products Regulatory Agency (MHRA) to promote the use of artificial intelligence (AI) in health and care. The agencies are calling this initiative the “Multi-Agency Advisory Service for AI…
ICO finalises guidance on explaining decisions made with AI
Late last year, we reported that the Information Commissioner’s Office (ICO) had published draft guidance to assist organisations with explaining decisions made about individuals using AI. Under the GDPR, organisations that process personal data using AI systems are required to provide data subjects with a transparency notice explaining the logic involved in such processing, as well as its significance and envisaged consequences.
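To make the idea of explaining the ‘logic involved’ more concrete, below is a minimal, purely illustrative sketch of how a per-decision explanation might be generated from a simple linear model. It is our own example rather than anything prescribed by the ICO or the GDPR: the lending scenario, feature names and data are all invented, and the coefficient-times-value attribution is a deliberately crude stand-in for more rigorous explainability techniques.

```python
# Illustrative sketch only: one way an organisation might surface the
# "logic involved" in an automated decision as a plain-English summary.
# The model, features, and data are hypothetical, not from the guidance.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: [income_kGBP, years_at_address, existing_debt_kGBP]
X = np.array([[45, 3, 10], [20, 1, 15], [60, 7, 5], [30, 2, 20],
              [55, 5, 8], [25, 1, 18], [70, 10, 2], [35, 4, 12]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 1])  # 1 = application approved

features = ["income", "years at address", "existing debt"]
model = LogisticRegression().fit(X, y)

def explain_decision(applicant: np.ndarray) -> str:
    """Crude per-decision explanation: each feature's contribution is
    taken as its model coefficient multiplied by the applicant's value."""
    contributions = model.coef_[0] * applicant
    decision = "approved" if model.predict([applicant])[0] == 1 else "declined"
    ranked = sorted(zip(features, contributions), key=lambda fc: -abs(fc[1]))
    reasons = ", ".join(f"{name} ({weight:+.2f})" for name, weight in ranked)
    return f"Application {decision}. Factors by influence: {reasons}"

print(explain_decision(np.array([28, 2, 16])))
```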
On 20 May 2020, following its open consultation, the ICO finalised the guidance (available here). This is the first guidance issued by the ICO that focuses on the governance, accountability and management of the several different risks arising from the use of AI systems when making decisions about individuals.
As with the draft guidance, the final guidance is split into three parts. We have outlined the key takeaways for each part below.
EU Blockchain Observatory and Forum explores the convergence of blockchain, AI, and the IoT
The European Union Blockchain Observatory and Forum, on 21 April 2020, published a report examining how blockchain can be combined with two other important emerging technologies – the Internet of Things (IoT) and artificial intelligence (AI) – so that each complements the others in building new kinds of platforms, products, and services.
The report first looks at the interplay of blockchain with the IoT, addressing how blockchain can aid its functioning by providing a decentralised alternative to the IoT’s otherwise centralised architecture. Centralisation poses a number of challenges in monitoring, controlling, and facilitating communication between millions of heterogeneous devices. The report highlights how blockchain can provide a more robust, more scalable, and more direct platform to overcome these challenges.
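As a purely illustrative aside (our own sketch, not drawn from the report), the tamper-evidence that makes a blockchain-style ledger attractive as a decentralised log for IoT data can be shown in a few lines: each record commits to the hash of its predecessor, so no single party can quietly rewrite history. The device names and readings below are invented.

```python
# Minimal sketch of a hash-chained, tamper-evident log for IoT readings.
# Hypothetical example only; real blockchain platforms add consensus,
# replication, and signatures on top of this basic chaining idea.
import hashlib
import json
import time

def make_block(reading: dict, prev_hash: str) -> dict:
    """Create a block whose hash commits to the reading and to the
    previous block's hash, forming a tamper-evident chain."""
    block = {"reading": reading, "timestamp": time.time(), "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every block's hash and check each link to its predecessor."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Invented sensor readings appended by different devices
chain = [make_block({"device": "sensor-1", "temp_c": 21.4}, prev_hash="genesis")]
chain.append(make_block({"device": "sensor-2", "temp_c": 19.8}, chain[-1]["hash"]))
assert verify_chain(chain)

chain[0]["reading"]["temp_c"] = 99.9  # later tampering breaks the chain
assert not verify_chain(chain)
```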
The report similarly delves into the potential relationship between blockchain and AI. It explains some concerns surrounding AI, such as its current concentration in the hands of a few large companies, owing to the high cost of gathering, storing, and processing large amounts of data and of engaging AI experts. It then illustrates how blockchain can mitigate these concerns so that access to AI models becomes more readily available to individuals and small companies.
ICO publishes draft guidance on explaining decisions made with AI
Artificial intelligence (AI) is a key area of focus for the Information Commissioner’s Office (ICO). The ICO is already working on a related AI project that focuses on building the ICO’s Auditing Framework. One of the goals of the ICO is to increase the public’s trust and confidence in how data is used and made available. In line with this, on 2 December 2019, the ICO published a blog on explaining decisions made by AI (here). The ‘Explaining decisions made with AI’ guidance (Guidance) has been prepared in collaboration with the UK’s national institute for data science and artificial intelligence, the Alan Turing Institute. The Guidance seeks to help organisations explain how AI decisions are made to those affected by them.
We have outlined some of the key takeaways below.
Council of Europe publishes recommendations for the regulation of AI to protect human rights
The Council of Europe Commissioner for Human Rights has recently published recommendations for improving compliance with human rights regulations by parties developing, deploying or implementing artificial intelligence (AI).
The recommendations are addressed to Member States, while the principles concern stakeholders who significantly influence the development and implementation of an AI system.
The Commissioner has focussed on 10 key areas of action:
- Human rights impact assessment (HRIA) – Member States should establish a legal framework for carrying out HRIAs. HRIAs should be implemented in a similar way to other impact assessments, such as data protection impact assessments under the GDPR. HRIAs should review AI systems in order to discover, measure and/or map human rights impacts and risks. Public bodies should not procure AI systems from providers that do not facilitate the carrying out or publication of HRIAs.
- Member State public consultations – Member States should allow for public consultation at various stages of engaging with an AI system, and at a minimum at the procurement and HRIA stages. Such consultations would require the publication of key details of AI systems, including details of the operation, function and potential or measured impacts of the AI system.
- Human rights standards in the private sector – Member States should clearly set out the expectation that all AI actors should “know and show” their compliance with human rights principles. This includes participating in transparent human rights due diligence processes that may identify the human rights risks of their AI systems.
- Information and transparency – Individuals subject to decision-making by AI systems should be notified of this and should have the option of recourse to a professional without delay. No AI system should be so complex that it does not allow for human review and scrutiny.
- Independent oversight – Member States should establish a legislative framework for independent and effective oversight over the human rights compliance of AI systems. Independent bodies should investigate compliance, handle complaints from affected individuals and carry out periodic reviews of the development of AI system capabilities.
ICO blogs on meaningfulness of human involvement in AI systems
Researchers at the Information Commissioner’s Office (ICO) have started a series of blogs discussing the ICO’s work in developing a framework for auditing artificial intelligence (AI). In the first blog of the series, the discussion revolves around the degree and quality of human review in AI systems, specifically, in what circumstances human involvement can be …