The Centre for Data Ethics and Innovation (CDEI) is inviting submissions to help inform its reviews of online targeting and of bias in algorithmic decision making.
Online targeting
Online targeting refers to the practice of tailoring content, products, and services to individuals so that they are relevant and engaging. Users typically experience targeting in the form of online advertising or personalised social media feeds.
CDEI identified online targeting as an area of particular concern because of the complex and opaque flows of data involved, which may undermine data protection rights. The concentration of data in certain organisations could also affect competition in critical markets. CDEI is particularly interested in ensuring that online targeting does not cross the line from legitimate persuasion into illegitimate manipulation.
CDEI intends to investigate online targeting and the undue influencing of users; in particular, the effect of targeting on vulnerable users and the extent to which it undermines user autonomy. First, CDEI will analyse gaps in the governance of online targeting. It will then conduct a public dialogue exercise to gather evidence, analyse that evidence, and issue a report with recommendations for governance. Other outputs of the review will include the results of the public engagement; an analysis of governance frameworks; and recommendations for government, regulators, and industry.
Bias in algorithmic decision making
Machine-learning algorithms typically work by identifying patterns in data and making recommendations accordingly. Although they may support good decision making and prevent human error, problems arise if an algorithm reinforces problematic biases. Such biases can stem from errors in the design of the algorithm or from biases in the underlying data sets on which it is trained, and they have the potential to cause serious harm (a minimal sketch of how data bias can propagate into a model's decisions follows the list below). CDEI wishes to investigate whether this is an issue in four key sectors that involve decision making with a high impact on individuals and in which there is historic evidence of bias:
- Financial services – in particular i) credit and insurance decisions about individuals; and ii) the eradication of bias in technologies employed by financial services companies.
- Crime and justice – in particular, the use of predictive algorithms in decision making by the police and judiciary.
- Recruitment – historic data sets and practices often contain embedded biases, which should be identified and remedied.
- Local government – algorithmic decision making has been used to identify instances of potential child abuse and neglect. Given the sensitivity, technologies in this area must meet the highest ethical standards.
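To make the data-bias point above concrete, the minimal sketch below (ours, not CDEI's; the data, feature names, and model choice are all hypothetical) shows how a model trained on historically biased outcomes can simply reproduce that bias in its recommendations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)          # hypothetical protected attribute (0/1)
skill = rng.normal(0.0, 1.0, n)        # genuinely job-relevant feature

# Historic hiring outcomes are biased: at the same skill level,
# group 1 candidates were hired less often than group 0 candidates.
hired = (skill + 0.8 * (group == 0) + rng.normal(0.0, 0.5, n)) > 0.5

# Train a standard classifier on the biased historical record.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted hire rate = {pred[group == g].mean():.2f}")
# The model reproduces the historical disparity rather than correcting it.
```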
CDEI proposes first to understand current practices by engaging with relevant stakeholders in the identified sectors. It will then produce a variety of outputs, including operational codes of practice for trialling decision-making tools; bias tests to be used by companies to mitigate bias; procurement guidelines to be followed when purchasing algorithms from technology providers; and a final report summarising CDEI's work in each sector.
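CDEI has not yet published what its bias tests will look like, but as an illustration, one simple check of the kind companies already use is a disparate impact ratio: compare the rate of favourable outcomes across groups and flag the model if the ratio falls below a chosen benchmark (the "four-fifths rule" of 0.8 is common). The sketch below is a hypothetical example of such a check, not CDEI's methodology.

```python
import numpy as np

def disparate_impact_ratio(decisions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of favourable-outcome rates between two groups (hypothetical check)."""
    rate_0 = decisions[group == 0].mean()
    rate_1 = decisions[group == 1].mean()
    return min(rate_0, rate_1) / max(rate_0, rate_1)

# Example: approval decisions for ten applicants split across two groups.
decisions = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0, 0])
group     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

ratio = disparate_impact_ratio(decisions, group)
print(f"Disparate impact ratio: {ratio:.2f}")   # 0.25 here, well below the 0.8 benchmark
```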
Comment
These reviews will feed into CDEI's two-year strategy on enhancing the benefits of data and artificial intelligence in the UK. Regulatory interest in these areas is not limited to CDEI, or indeed to the UK; for example, see our previous TLD posts on the UK government's White Paper on tackling online harms here and on the Algorithmic Accountability Act proposed by U.S. lawmakers here. If you would like to submit evidence, more information can be found on the review of i) online targeting here, and ii) algorithmic decision making here. The deadline for the first set of responses is 14 June 2019.