Data portability and other initiatives introduced in Singapore to promote innovation and strengthen accountability

On May 22, 2019, Singapore’s Personal Data Protection Commission introduced three new initiatives:

a)   A public consultation on data portability. The corresponding consultation paper also proposes to introduce data innovation provisions as part of the ongoing review of the Personal Data Protection Act (PDPA). The consultation is open for six weeks and will close on July 3, 2019.

b)   A guide on active enforcement.

c)   An updated guide on managing data breaches.

Public consultation on data portability

The commission is proposing to introduce a data portability obligation, with the aim of giving individuals greater control over their personal data and enhancing innovation to support the growth of the digital economy.

The following impacts were considered:

  • Consumer impact: Where consumers can move their data from one service provider to another, they are empowered to try out new service offerings, which in turn incentivizes organizations to provide more competitive offers.
  • Market impact: Data portability provides a means of reducing barriers to entry, particularly for start-ups or small players in sectors that are heavily reliant on consumer data. The consultation paper cited the Open Banking initiative in the UK, which has enabled the creation of an app that allows consumers to consolidate accounts from multiple banks, and the Data Transfer Project, an industry-led initiative that provides users with the ability to move their data between different online platforms. At the same time, the commission acknowledged that overly burdensome requirements and increased compliance costs could cause a first mover to lose its incentive to innovate, as a follower could simply emulate its business model and acquire consumer data through the portability obligation. Hence, a balance would need to be struck to create the right competitive landscape and reap the most benefits for consumers and the economy.
  • International developments: Data portability has been introduced in the EU, Australia and the Philippines, and other jurisdictions including India, Japan, New Zealand and the United States (California) are also considering introducing it into their domestic laws. It is therefore important that Singapore keep pace with international data protection developments in alignment with other key jurisdictions.

The consultation paper proposed that the scope of the obligation be as follows:

  • Covered organizations: The obligation will not apply to exempted organizations under the PDPA (including individuals acting in a personal or domestic capacity, employees acting in the course of employment and public agencies). It will also not apply to data intermediaries.
  • Receiving organizations: The obligation will only apply if the receiving organization is in Singapore. However, it will not prevent voluntary arrangements by organizations to transmit data to overseas entities with an individual’s consent. Where the data is irrelevant or excessive in relation to a service or product offered to an individual, a receiving organization may choose not to accept the data or retain only a portion of the data.
  • Requesting individual: Any individual can make a data portability request, regardless of whether they are in Singapore.
  • Covered data: The obligation will apply only to data in the possession or control of an organization that is held in electronic form. It applies to two types of data:

(a)   data that is provided by an individual to the organization (“user provided data”); and

(b)   data that is generated by an individual’s activities in using the organization’s products or services (“user activity data”).

The obligation does not, however, apply to “derived data,” which refers to new data elements created through the processing of other data by applying business-specific rules (the sketch after this list illustrates the distinction between these data categories).

The obligation applies to “business contact information” as defined in the PDPA.

It would also apply to the personal data of third parties. The receiving organization would only be allowed to process such personal data where the data is under the control of the requesting individual and used only for their personal or domestic purposes. The receiving organization must obtain fresh consent to use the data for any other purposes.

  • Handling portability requests: The paper sets out details of the key responsibilities of the porting organization in relation to:

(a)   receiving the request;

(b)   verifying the request;

(c)   verifying the data to be ported;

(d)   porting the data, where the following information would need to be provided to the individual:

a.   fees payable by the requesting individual; and

b.   when the data will be ported;

(e)   the format of the ported data;

(f)   informing the individual of a rejection;

(g)   preserving the data; and

(h)   responding to a request withdrawal by the individual.
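
To make the covered-data distinction concrete, below is a minimal sketch of how a portability export might separate user provided data and user activity data while excluding derived data. The consultation paper does not prescribe an export format, so the structure and field names are illustrative assumptions only.

```python
# Hypothetical illustration only: the consultation paper does not prescribe
# an export format, so the structure and field names below are assumptions.
import json

record = {
    # "User provided data": supplied directly by the individual
    "user_provided": {
        "name": "Tan Wei Ming",
        "email": "weiming@example.com",
    },
    # "User activity data": generated by the individual's use of the service
    "user_activity": {
        "login_timestamps": ["2019-05-01T09:12:00+08:00"],
        "purchases": [{"item": "SKU-1042", "date": "2019-04-28"}],
    },
    # "Derived data" (e.g., an internally computed churn score) is created by
    # applying business-specific rules and would fall outside the obligation.
    "derived": {"churn_score": 0.17},
}

def build_portability_export(data: dict) -> str:
    """Return only the portable categories as machine-readable JSON."""
    portable = {k: v for k, v in data.items() if k in ("user_provided", "user_activity")}
    return json.dumps(portable, indent=2)

print(build_portability_export(record))
```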

The data portability obligation is intended to be complementary to the access obligation under the PDPA. Exceptions to the portability obligation will be aligned to exceptions to the access obligation except where access could reveal the personal data of another individual, or reveal the identity of the individual who has provided the personal data and that individual does not consent to the disclosure of their identity.

In terms of enforcement, the commission will have powers to review an organization’s:

  • refusal to port data;
  • failure to port data within a reasonable period of time; and
  • fees for porting data pursuant to an individual’s request.

The commission will also have the power to issue binding codes of practice on data portability to take into account more specific sectoral requirements. Matters that will be addressed in these codes of practice will include:

  • consumer safeguards;
  • counterparty assurance;
  • interoperability; and
  • security of data.

Public consultation on data innovation provisions

The commission is proposing to allow organizations to use personal data for business innovation purposes, which refers to any of the following:

  • operational efficiency and service improvements;
  • product and service development; and
  • knowing customers better.

In relation to the collection or disclosure of such personal data for business innovation purposes, however, organizations must still notify the individual concerned and seek their consent, unless an exception in the PDPA applies. The business innovation purposes provision also does not extend to the use of personal data for direct marketing to consumers.

The commission also proposes to exempt derived personal data, which is new data created through the processing of other data by applying business-specific logic or rules, from the following obligations under the PDPA:

  • the access obligation under section 21;
  • the correction obligation under section 22; and
  • the proposed data portability obligation mentioned above.

Guide to active enforcement

The commission has introduced a new expedited decision-making process to bring investigations on clear-cut breaches to a conclusion quickly. This process can be applied where:

  • the nature of the breach is similar to precedent cases with similar facts; and
  • there is an upfront admission of liability for breaching the PDPA (which would be considered a mitigating factor).

Examples include common forms of breaches such as URL manipulation, poor password management or printing errors resulting in unauthorized disclosures to the wrong recipients.
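
By way of illustration, here is a minimal sketch of the URL manipulation class of breach and its remedy; all data, identifiers and function names are hypothetical. The vulnerable handler trusts a client-supplied record identifier without checking ownership, so anyone can enumerate identifiers and read other customers' records.

```python
# Hypothetical example of an insecure direct object reference, the flaw
# behind many "URL manipulation" breaches (e.g., /invoices/1002 exposing
# another customer's invoice). All records and names are invented.

INVOICES = {
    1001: {"owner": "alice", "amount": 120.50},
    1002: {"owner": "bob", "amount": 88.00},
}

def get_invoice_vulnerable(invoice_id: int) -> dict:
    # Unauthorized disclosure: no check that the requester owns the invoice.
    return INVOICES[invoice_id]

def get_invoice_fixed(invoice_id: int, authenticated_user: str) -> dict:
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != authenticated_user:
        # Deny access without revealing whether the invoice exists.
        raise PermissionError("not authorized to view this invoice")
    return invoice

print(get_invoice_fixed(1001, "alice"))  # permitted: alice owns invoice 1001
# get_invoice_fixed(1002, "alice")       # raises PermissionError
```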

Importantly, an organization can request to make an undertaking to implement a plan to resolve a breach, in place of a full investigation, where the commission assesses that such an undertaking would achieve a similar or better enforcement outcome than a full investigation.

Guide to Managing Data Breaches 2.0

The commission has updated its guide on managing data breaches. It makes recommendations in two main areas:

  • Threshold for notifying the commission and individuals of a data breach: this is now 500 or more affected individuals, or where significant harm to or other impact on individuals is likely; and
  • Timeliness of notification: internal investigation and assessment should take no more than 30 days from the organization becoming aware of a potential breach, and notification should be made no later than 72 hours after the assessment is completed (illustrated below).
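
As a rough illustration of this arithmetic, the following small sketch (the helper and variable names are assumptions, not from the guide) computes the two recommended deadlines:

```python
# Assumed helper for the timelines recommended in the guide: assessment
# within 30 days of awareness; notification within 72 hours of completing
# the assessment.
from datetime import datetime, timedelta

def breach_deadlines(aware_at: datetime, assessment_done_at: datetime) -> dict:
    return {
        "assessment_due": aware_at + timedelta(days=30),
        "notification_due": assessment_done_at + timedelta(hours=72),
    }

aware = datetime(2019, 6, 1, 9, 0)       # organization becomes aware
assessed = datetime(2019, 6, 10, 17, 0)  # assessment completed
print(breach_deadlines(aware, assessed))
```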

Given the potential significance of the proposed data portability obligation and data innovation provisions to businesses in Singapore when these take effect, organizations may wish to consider submitting their feedback on the various issues raised in the consultation paper.

Businesses should also take note of the two guides mentioned above, particularly the one on managing data breaches, as this is a timely precursor to the mandatory breach reporting requirement that will soon be introduced in Singapore.

Reed Smith LLP is licensed to operate as a foreign law practice in Singapore under the name and style, Reed Smith Pte Ltd (hereafter collectively, Reed Smith). Where advice on Singapore law is required, we will refer the matter to and work with Reed Smith’s Formal Law Alliance partner in Singapore, Resource Law LLC, where necessary.

OCR releases new FAQs on use of health apps

The U.S. Department of Health and Human Services Office for Civil Rights (OCR) released a new set of Health Insurance Portability and Accountability Act (HIPAA) FAQs, building upon prior guidance from OCR. The new FAQs discuss the applicability of HIPAA to covered entities and business associates that interact with health apps and explain when HIPAA-regulated entities may be held vicariously liable for data breaches experienced by health app providers.

The new FAQs reiterate that a covered entity will not be liable for a breach of health information if the health app is not provided by or on behalf of the covered entity. Determining whether an app was developed for, or provided for or on behalf of, a HIPAA-regulated entity can be difficult given increasingly complicated business structures in the health care industry and the variety of technology solutions available in the market. For example, it is unclear how customized a technology solution must be for it to be “developed for, or provided for or on behalf of” a HIPAA-regulated entity. For this reason, it is important to fully understand the relationship of the parties and the technology involved to properly analyze potential HIPAA risk exposure from using third-party technology.

To read more on the new HIPAA FAQs and the potential impact on the use of third-party technology solutions, click here.

Is 2019 the year for GDPR certification and codes of conduct?

The UK’s Information Commissioner’s Office (ICO) has published new guidance on certification and codes of conduct for data processing as well as expected timetables for finalising its revised guidelines on these topics.

Certification

Certification is a voluntary mechanism for organisations to validate their compliance with the General Data Protection Regulation 2016/679 (GDPR). Once the submissions process for certification schemes opens, controllers and processors will be encouraged to achieve a data protection certificate for their activities of personal data processing. This will demonstrate their GDPR compliance to regulators, businesses and the public. Certification by an accredited certification body will demonstrate enhanced transparency and accountability. Certification also represents an independent assurance that a business’ specific processing activities can be trusted.

Expected timeline: Summer 2019

As discussed in our recent blog, the European Data Protection Board (EDPB) published revised Guidelines 1/2018 on certification and identifying certification criteria in accordance with Articles 42 and 43 of Regulation 2016/679, as well as Guidelines on the accreditation of certification bodies under Article 43 of the GDPR (2016/679). The EDPB is now considering responses to follow-up consultations and is expected to publish final certification and accreditation guidelines this summer. The ICO will then submit its own additional requirements to the EDPB for its opinion. Following final approval by the EDPB, the ICO will start accepting GDPR certification schemes for approval.

Codes of Conduct

Codes of conduct are created by trade associations or other bodies in consultation with relevant stakeholders in a particular sector, including the public where necessary. Their purpose is to enable sectors to resolve key data protection challenges with assurance from the ICO that the respective code, and its monitoring, are appropriate and comply with GDPR requirements.

Codes of conduct will be voluntary accountability tools for controllers and processors in a particular sector to sign up to. They will demonstrate that organisations apply the GDPR to personal data processing effectively. By adhering to a code of conduct, controllers and processors can ensure that they follow rules to achieve good practice within their sector.

Expected timeline: Autumn 2019

The EDPB has also published the Guidelines 1/2019 on Codes of Conduct and Monitoring Bodies under Regulation 2016/679 for consultation. Responses are currently being considered so that revised EDPB guidelines can be finalised this summer. The ICO will then submit accreditation requirements for monitoring bodies to the EDPB for its opinion. The ICO expects to formally accept codes of conduct starting this autumn.

Comment

Given these timetables, the coming months will bring significant developments on certification and codes of conduct from both the EDPB and the ICO. Make sure you stay up to date by keeping an eye on our blog for upcoming alerts.

Transparency requirements for influencers in Germany, the United Kingdom and the United States

They are the stars of the young generation, brand ambassadors for organizations and leaders on social media: influencers. With their strong presence on social media channels such as Facebook, Instagram or Twitter, influencers have a power that pays off. Thousands of users follow their role models’ day-to-day posts. Influencers are also becoming increasingly important for organizations, developing into indispensable channels for marketing communications. Accordingly, many organizations cooperate with influencers as part of their marketing strategies.

To read more on the legal framework that applies to influencers when posting content with promotional character in Germany, the United Kingdom and the United States, click here.

CDEI calls for evidence to inform its review of online targeting and bias in algorithmic decision making

The Centre for Data Ethics and Innovation (CDEI) is inviting submissions to help inform its review of online targeting and bias in algorithmic decision making.

Online targeting

Online targeting refers to the tailoring of content, products, and services to individuals to make them relevant and engaging. Typically, users experience targeting in the form of online advertising or personalised social media feeds.

CDEI identified online targeting as a particular issue due to the complex and opaque flows of data that are involved, which may undermine data protection rights. The concentration of data in certain organisations could also have an effect on competition in critical markets. CDEI is particularly interested in ensuring that online targeting does not cross the line from legitimate persuasion into illegitimate manipulation.

CDEI intends to investigate the issue of online targeting and undue influencing of users, in particular its effect on vulnerable users and the extent to which it undermines user autonomy. First, CDEI will analyse gaps in the governance of online targeting. It will then conduct a public dialogue exercise to gather evidence, analyse it, and issue a report with recommendations for governance. Other outputs of this review will include the results of public engagement; an analysis of governance frameworks; and recommendations for the government, regulators, and the industry.

Bias in algorithmic decision making

Machine-learning algorithms often work by identifying patterns in data and making recommendations accordingly. Although they may support good decision making and prevent human error, issues can arise if an algorithm reinforces problematic biases. These biases can stem from errors in the design of the algorithm or from biases in the underlying data sets it uses. Such biases have the potential to cause serious harm. CDEI wishes to investigate whether this is an issue in four key sectors that involve high-impact decisions about individuals and in which there is historic evidence of bias:

  • Financial services – especially i) credit and insurance decisions about individuals; and ii) the eradication of bias in technologies employed by financial services companies.
  • Crime and justice – in particular, the use of predictive algorithms in decision making by the police and judiciary.
  • Recruitment – historic data sets and practices often contain embedded biases and these should be identified and remedied.
  • Local government – algorithmic decision making has been used to identify instances of potential child abuse and neglect. Given the sensitivity, technologies in this area must meet the highest ethical standards.

CDEI proposes first to understand current practices by engaging with relevant stakeholders in the identified areas. CDEI will then produce a variety of outputs, including operational codes of practice for trialling decision-making tools; bias tests to be used by companies to mitigate bias; procurement guidelines to be followed when purchasing algorithms from technology providers; and a final report summarising CDEI’s work in each sector.

Comment

These reviews will feed into CDEI’s two-year strategy on enhancing the benefits of data and artificial intelligence in the UK. Regulatory interest in these areas is not limited to CDEI, or indeed the UK; for example, see our previous TLD posts on the UK government’s White Paper on tackling online harms here and the Algorithmic Accountability Act proposed by U.S. lawmakers here. If you would like to submit evidence, more information can be found on the review of i) online targeting here, and ii) algorithmic decision making here. The deadline for the first set of responses is 14 June 2019.

Final guideline for Internet personal information protection published by Chinese Ministry of Public Security

After soliciting public comments since last November, the Chinese Ministry of Public Security (MPS) published the finalized Guideline for Internet Personal Information Security Protection (Guideline) on April 10, 2019. The Guideline applies to Personal Information Holders, defined as entities or individuals that “control and process personal information” through their provision of services using the Internet, private networks, or offline. As China’s primary cybersecurity regulator under China’s Cybersecurity Law (CSL), MPS previously issued regulations specific to network operators’ multi-level protection scheme, as well as procedures for China’s Public Security Bureaus (PSB) to inspect Internet service providers. The voluntary Guideline sets forth MPS-recommended best practices to “protect cybersecurity and individuals’ legitimate interests” that will likely inform PSB cybersecurity inspections. Businesses with interests in China are likely to face continued challenges in complying with the expanding implementation of the CSL (especially with respect to the broadening definitions of Personal Information Holders and data localization requirements).

To read more on the guidelines, click here.

Washington becomes the latest state to amend its data breach notification law

On May 7, 2019, Governor Jay Inslee of Washington signed HB 1071 into law, strengthening the state’s data breach notification law. Washington joins the growing list of states that have recently amended their breach notification laws. Although the law was last amended in 2015, it was initially enacted nearly 14 years ago. This amendment, like those of other states, is designed to better align with the way consumers interact with technology today. As consumers share more information about themselves via the internet, states continue to place the onus on the companies and organizations collecting that information to guard against its loss or misuse.

Washington’s amendment expands upon the breach notification law in the following key ways:

  • First, it shortens the period between the discovery of a breach of consumers’ personal information (as defined by the law) and the time by which notification of the breach must be provided to those consumers, from 45 days to 30 days. This change also applies to notifications to the attorney general, who must now be notified within 30 days after the breach was discovered, also down from 45 days (the requirement to notify the attorney general still applies only if notification must be provided to more than 500 Washington residents).
  • Second, the notification to the attorney general must now also include:
    • A list of the types of personal information implicated in the breach;
    • The timeframe of exposure, if known, including the date of the breach and the date of its discovery;
    • A summary of steps taken to contain the breach; and
    • A sample copy of the breach notification letter without any personally identifiable information.

In the event that more information becomes known as the investigation into the breach progresses, updates must be provided to the attorney general under the amended law.

The highest French administrative court slightly reduces the amount of a penalty imposed by the CNIL: is this the tip of the iceberg?

A few days before the GDPR entered into force, the CNIL imposed a 250,000 euros penalty on the company Optical Center for failure to secure personal data on its website, where a breach allowed access to invoices and purchase orders containing customers’ personal and sensitive data. Further to Optical Center’s appeal, the highest French administrative court (the “Council of State”) confirmed the sanction but reduced the penalty to 200,000 euros in a decision dated 17 April 2019.

Unlike in the United States in particular, sanctions for data breaches in France remain in the hands of the regulator, the CNIL. Because the sanction was imposed before the GDPR entered into force, the CNIL’s sanctioning powers were limited; measured against the standards applicable at the time, the penalty can be seen as severe. Another factor played a role: Optical Center had already been fined 50,000 euros for a similar data breach on 5 November 2015, a penalty the Council of State confirmed on 19 June 2017.


ICO blogs on meaningfulness of human involvement in AI systems

Researchers at the Information Commissioner’s Office (ICO) have started a series of blogs discussing the ICO’s work in developing a framework for auditing artificial intelligence (AI). In the first blog of the series, the discussion revolves around the degree and quality of human review in AI systems, specifically, in what circumstances human involvement can be truly “meaningful” so as to create non-solely automated AI systems.

Risks inherent in complex AI systems

The ICO and European Data Protection Board (EDPB) have both published guidance on automated individual decision-making and profiling. The main takeaways are that human reviewers must actively check a system’s recommendation, consider all available input data, weigh up and interpret the recommendation, consider any additional factors and even use their authority and competence to challenge the recommendation if necessary.

In some circumstances, organisations should also consider additional risk factors that may cause a system to be regarded as solely automated under the GDPR. These risks appear most often in complex AI systems and take two main forms: (1) automation bias and (2) a lack of interpretability.

  • Automation bias occurs in complex AI models because human users often trust the computer-generated output as being an objective, and therefore accurate, result of mathematics and data-crunching. When human users stop using their judgment or stop questioning whether the AI’s result may be wrong, that is when the system could become solely automated.

How to address this concern: Design requirements to reduce automation bias and to support a meaningful human review must be developed during the design and build phase of AI systems. Organisations (in particular, front-end interface developers) need to consider how human reviewers think and behave so as to give them the chance to intervene. It may also be helpful to consult and test options with human reviewers early on.

  • A lack of interpretability may occur when human reviewers, again, stop judging or challenging a system’s recommendation, but this time because of the intrinsic difficulty of interpreting the recommendation of the AI system. This can defeat any human effort to meaningfully review the output, rendering the decision ‘solely automated’.

How to address this concern: The challenge of interpretability should also be considered from the initial design phase. Organisations should define and explain how to measure interpretability in the specific context of their AI system. This could include, for example, an explanation of a specific output rather than the model in general or the use of a confidence score attached to each output, which would indicate to the reviewer that more involvement is needed for a final decision.
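
As one way of picturing the confidence-score idea, here is a minimal sketch in which low-confidence outputs are routed to a human reviewer; the threshold, names and interface are assumptions, not from the ICO blog.

```python
# Hypothetical routing of model outputs by confidence score; the threshold
# would need to be tuned per system and context.
from dataclasses import dataclass

@dataclass
class ModelOutput:
    recommendation: str
    confidence: float  # 0.0 to 1.0

REVIEW_THRESHOLD = 0.8  # assumed value for illustration

def route_output(output: ModelOutput) -> str:
    if output.confidence < REVIEW_THRESHOLD:
        # Flag for meaningful human review: the reviewer should weigh the
        # recommendation against all available input data, not rubber-stamp it.
        return f"HUMAN REVIEW: {output.recommendation} ({output.confidence:.2f})"
    return f"AUTO-ASSIST: {output.recommendation} ({output.confidence:.2f})"

print(route_output(ModelOutput("approve application", 0.62)))
print(route_output(ModelOutput("approve application", 0.93)))
```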

Comment – solely vs non-solely automated AI systems

The ICO recommends that an organisation decide at the outset of its design phase whether its AI application is intended (i) to enhance human decision-making or (ii) to make solely automated decisions. This decision requires management or board members to fully understand the risk implications of choosing one way or the other. Additionally, they need to ensure that accountability and effective risk management policies are in place from the outset.

Other key recommendations to take away include: the training of human reviewers to ensure they understand the mechanisms and limitations of AI systems and how their own expertise enhances the systems; and the monitoring of reviewers’ inclinations to accept or reject the AI’s output and the analysis of such approaches.


Trade Secrets Act now a national law in Germany

On April 26, the Geschäftsgeheimnisgesetz (Trade Secrets Act, “Act”) came into effect. It took Germany over a year past the implementation deadline to transpose the Trade Secrets Directive (“Directive”) into national law. The Act replaces the provisions of the Unfair Competition Act on misappropriation of trade secrets and introduces new procedural rules for trade secret litigation. It applies to any company doing business in Germany, regardless of size or industry sector. The Act came into effect without a transitional period, so companies need to take appropriate steps now; otherwise, their trade secrets will lose protection.

To read more on the new Act, click here.
