On 28 April 2022, the UK Digital Regulation Cooperation Forum (DRCF) published two discussion papers on the benefits and harms of algorithms and on the landscape of algorithmic auditing and the role of regulators, respectively.
The DRCF consists of four UK regulators — the Competition and Markets Authority, Ofcom, the Information Commissioner’s Office and the Financial Conduct Authority — and was established to support regulatory cooperation in digital markets.
One of the DRCF’s objectives is to strengthen a shared understanding of, and expertise in, algorithmic systems. In its workplan for 2022/23, the DRCF has set “supporting improvements in algorithmic transparency” as one of its top collaboration priorities, in order to promote benefits of algorithmic systems and mitigate risks to individuals and competition.
The discussion papers
The discussion papers define algorithmic systems as automated systems that process data, including artificial intelligence applications and machine learning techniques.
The DRCF acknowledged that algorithmic systems, particularly modern machine learning approaches, can both boost innovation and pose significant risks if deployed without care. Such harms include amplifying biases, distorting competition and compromising privacy rights.
To tackle such potential risks and harms, the DRCF has identified the following shared focus areas and noted that the regulators will approach future regulatory guidance in these areas in a more joined-up way:
- transparency of algorithmic processing;
- fairness for individuals affected by algorithmic processing;
- access to information, products, services and rights;
- resilience of infrastructure and algorithmic systems;
- individual autonomy for informed decision-making;
- participating in the economy; and
- healthy competition to foster innovation and better outcomes for consumers.
More specifically, the DRCF has highlighted a few areas that call for regulatory coordination and collaboration, including:
- identifying best practice in different areas of algorithmic design, testing and governance, to help industry innovate responsibly;
- supporting the development of algorithmic assessment and auditing practices to help identify inadvertent harms and improve transparency; and
- exploring opportunities to help firms understand the strengths and limitations of incorporating human operators as part of an algorithmic system.
To ensure that the benefits of algorithmic systems are realised and risks are addressed, the DRCF noted that regulators need a way to assess what businesses are doing with algorithms and how these systems operate, for instance through assessments and audits.
The DRCF expects the algorithm audit ecosystem to grow in the coming years, especially to address existing challenges, including a lack of specific rules and standards other than in highly regulated sectors such as health and aviation.
The DRCF acknowledges that industry will likely play an important role in shaping the landscape, but it is also keen to support the healthy development of the auditing ecosystem. Potential measures that some regulators within the forum may be able to adopt include:
- stating when audits should happen;
- establishing standards and best practices;
- acting as an enabler for better audits;
- ensuring action is taken to address harms identified in an audit; and
- identifying and tackling misleading claims about what algorithmic systems can do.
The discussion paper further sets out four hypotheses on the potential role of regulators in the algorithmic audit landscape and has sought industry input on the pros and cons of each.
Next steps and takeaways
Businesses have until 8 June 2022 to submit input on the two papers, which will inform the DRCF on future areas of focus.
It may take several years before the outcomes of these discussion papers are transposed into industry standards, policies and regulations. However, the papers do signal that the creation and deployment of algorithms will continue to receive regulatory scrutiny and be subject to industry standards. Even though the UK is not currently taking the same approach as the EU to regulating AI (see our previous summary of the EU’s draft AI Act), enhancing transparency, explainability and accountability are common themes emphasised by regulators in both the UK and the EU.