Responding to news reports that journalists were able to purchase Facebook advertising that excluded specific ethnic groups, Facebook announced several changes to the company’s advertising products. The move reflects the heightened scrutiny of advertising practices that has accompanied the increasing use of big data in many aspects of marketing and advertising.
Facebook’s response grew out of a ProPublica report published on October 28, 2016 detailing how journalists were able to purchase ads targeted to house hunters on Facebook, all while excluding specific “Ethnic Affinities,” such as African-American, Asian-American or Hispanic people. The report raised significant ethical and legal questions about how the features that enable advertisers to target their ads can be misused for discriminatory purposes. The potential for interactive computer service providers to violate anti-discrimination laws has drawn attention for several years, especially following the Ninth Circuit Court of Appeals’ Roommates decision, which held that the immunity the Communications Decency Act (CDA) provides to online operators did not apply to an online service whose questionnaires and selection features could facilitate discrimination against protected classes. See Fair Hous. Council v. Roommates.com, LLC, 521 F.3d 1157, 1166 (9th Cir. 2008) (en banc).
Risks for Interactive Computer Service Providers
Like the Roommates case, the news report triggered reactions from news outlets, policymakers and civil rights leaders. Facebook responded by soliciting input from members of Congress and civil liberties organizations while it reviewed its anti-discrimination policies. On November 11, 2016, Facebook’s Chief Privacy Officer and VP of US Public Policy published a blog post on the company website introducing changes to the company’s advertising products. The changes will include tools to detect and disable the use of ethnic affinity marketing for certain types of ads, such as those concerning housing, employment or credit. The company is also updating its advertising policies to take a more proactive stance on prohibiting discriminatory advertising practices. That said, the company will continue to allow advertisers to use such targeting features in other, unspecified contexts.
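The kind of policy check described above can be sketched in a few lines. The sketch below is purely illustrative and assumes nothing about Facebook’s actual systems or API: the function names, category labels and targeting keys are all hypothetical, and a real implementation would also need classifiers to detect restricted ad categories that advertisers fail to self-declare.

```python
# Hypothetical sketch: block sensitive-attribute targeting (including
# exclusion targeting) for ads in restricted categories such as housing,
# employment and credit. All names are illustrative, not a real API.

RESTRICTED_CATEGORIES = {"housing", "employment", "credit"}
SENSITIVE_TARGETING_KEYS = {"ethnic_affinity"}

def validate_ad_targeting(ad_category: str, targeting: dict) -> list[str]:
    """Return a list of policy violations for a proposed ad.

    ad_category: the advertiser-declared (or classifier-detected) category.
    targeting: mapping of targeting dimension -> selected values, e.g.
               {"ethnic_affinity": [...], "age_range": [25, 54]}.
    """
    violations = []
    if ad_category.lower() in RESTRICTED_CATEGORIES:
        for key, values in targeting.items():
            if key in SENSITIVE_TARGETING_KEYS and values:
                violations.append(
                    f"'{key}' targeting is not permitted for {ad_category} ads"
                )
    return violations
```

Under this sketch, a housing ad that includes (or excludes) an ethnic-affinity segment would be flagged, while the same targeting on an unrestricted ad category would pass, mirroring Facebook’s stated approach of disabling the feature only for certain ad types.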
FTC and White House Interests
The potential for data misuse in online advertising has been a growing concern of state and federal authorities. In January 2016, the Federal Trade Commission (“FTC”) issued a report entitled Big Data: A Tool for Inclusion or Exclusion? The report identified a number of legal and ethical risks that companies should consider when handling consumers’ personal information, specifically warning against practices that could violate the Fair Credit Reporting Act (“FCRA”) or any of the equal opportunity laws that prohibit discrimination based on protected characteristics such as race, color, sex or gender, religion, age, disability status, national origin, marital status, and genetic information. The FTC also broadly noted that any unfair or deceptive practices could be pursued under its authority in Section 5 of the FTC Act. The FTC report also highlighted a 2014 White House report that sought to create a set of best practices around information collection and usage and placed a spotlight on the potential dangers of misusing consumer information. The White House published a follow-up report in May 2016.
Inclusion v. Exclusion—Legitimate Advertising Purposes?
At the same time, there are a number of legitimate purposes for which companies or advertisers may want to target specific audiences based on demographic information, such as marketing specific products and services to the individuals most likely to purchase them, or directing political ads to those most likely to be receptive to particular campaign agendas. From a privacy perspective, these uses raise questions of whether users have self-identified, and whether they know, or could know, that this type of information might be used in these contexts. For nearly a decade, leading privacy professionals have emphasized that acting consistently with consumer expectations is an important way to create and maintain trust (see, e.g., C-SPAN AMP Summit).
Therefore, as companies continue to use big data analytics and personal information to facilitate more tailored interactions with individuals, it increasingly makes sense to evaluate how such data can be utilized and to take measures to prevent its misuse. Companies should think in particular about the potential for allegations of discrimination, or similar claims, arising from the use of information that may reflect on the character, reputation or fitness of consumers for a particular good or service. State and municipal laws frequently provide traps for unwary technology enterprises seeking to operate nationally, if not globally. Steps that companies increasingly consider as part of privacy-by-design include:
- Consumer Context. Understand the context in which personal information is collected and will be (or can be) used thinking from the perspective of consumers;
- Notice and Consent. Ensure users are given adequate notice of the company’s data practices, which may evolve over time, and seek affirmative consent where warranted;
- Training and Education. Insist that internal product teams and data owners understand the company’s privacy policies and collaborate with legal/privacy advisors;
- Data Supply Chain Reviews. Assess the potential for third parties to misuse company data, and the risks to the company in providing access—even indirectly—to personal information;
- Focus on “Sensitive” Information Accountability. Take extra precautions when handling sensitive personal information (e.g., race, religion, disability); and
- Be Prepared. Issues will inevitably arise. Preparing for how to respond to criticism and public discussion of a company’s data practices is one of the easiest ways to anticipate and manage privacy-related data risks.