On 28 April 2022, the UK Digital Regulation Cooperation Forum (DRCF) published two discussion papers: one on the benefits and harms of algorithms, and one on the landscape of algorithmic auditing and the role of regulators.

About DRCF

The DRCF brings together four UK regulators (the Competition and Markets Authority, Ofcom, the Information Commissioner’s Office and the Financial Conduct Authority) to support regulatory cooperation in digital markets.

The Council of Europe Commissioner for Human Rights has recently published recommendations for improving compliance with human rights regulations by parties developing, deploying or implementing artificial intelligence (AI).

The recommendations are addressed to Member States, while the principles concern stakeholders who significantly influence the development and implementation of an AI system.

The Commissioner has focussed on 10 key areas of action:

    1. Human rights impact assessment (HRIA) – Member States should establish a legal framework for carrying out HRIAs. HRIAs should be implemented in a similar way to other impact assessments, such as data protection impact assessments under GDPR. HRIAs should review AI systems in order to discover, measure and/or map human rights impacts and risks. Public bodies should not procure AI systems from providers that do not facilitate the carrying out of or publication of HRIAs.
    2. Member States public consultations – Member States should allow for public consultations at various stages of engaging with an AI system, and at a minimum at the procurement and HRIA stages. Such consultations would require the publication of key details of AI systems, including details of the operation, function and potential or measured impacts of the AI system.
    3. Human rights standards in the private sector – Member States should clearly set out the expectation that all AI actors should “know and show” their compliance with human rights principles. This includes participating in transparent human rights due diligence processes that may identify the human rights risks of their AI systems.
    4. Information and transparency – Individuals subject to decision making by AI systems should be notified of this and have the option of recourse to a professional without delay. No AI system should be so complex that it does not allow for human review and scrutiny.
    5. Independent oversight – Member States should establish a legislative framework for independent and effective oversight over the human rights compliance of AI systems. Independent bodies should investigate compliance, handle complaints from affected individuals and carry out periodic reviews of the development of AI system capabilities.

The Centre for Data Ethics and Innovation (CDEI) is inviting submissions to help inform its review of online targeting and bias in algorithmic decision making.

Online targeting

Online targeting refers to providing individuals with relevant and engaging content, products, and services. Typically, users experience targeting in the form of online advertising or personalised social media feeds.

The President has made artificial intelligence technology a policy priority. On February 11, 2019, the President issued an Executive Order directing most federal executive agencies to promote and protect American advancements in artificial intelligence while working with private industry. The order recognized that public trust in artificial intelligence is an important factor in the development and use of the technologies, and highlighted the need to “protect civil liberties, privacy, and American values in their application in order to fully realize the potential of AI technologies for the American people.”

Specifically, the President ordered the agencies to consider artificial intelligence as a research and development priority and to:

  • Invest in artificial intelligence (for example, machine learning) research and development.
  • Enhance access to data, models, algorithms, and computing resources to promote artificial intelligence research and development (consistent with obligations to maintain safety, security, privacy, and confidentiality).
  • Reduce barriers to the use of artificial intelligence (for example, machine learning) technologies.
  • Help develop technical standards that minimize vulnerability to attacks and “reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies.”
  • Train a workforce that can develop and take advantage of developments in artificial intelligence.
  • Develop an action plan “to protect the advantage of the United States in AI and technology critical to United States economic and national security interests against strategic competitors and foreign adversaries.”


Companies that employ algorithms, machine learning and artificial intelligence (AI) in their day-to-day business may face increased attention from federal antitrust and consumer protection regulators in the future. On November 13–14, the Federal Trade Commission (FTC) addressed this topic in its hearings on “Competition and Consumer Protection in the 21st Century.” The panelists, an assembly