How big is the risk of operating Facebook fan pages in Germany?

On 5 June 2018, the Court of Justice of the European Union (CJEU) handed down its long-awaited Facebook fan page judgement (Case C-210/16), holding that the operator of a fan page on Facebook is jointly responsible with Facebook for processing the data of visitors to the page. Only a day later, the Conference of German Data Protection Authorities (German DPAs) released a statement, titled ‘Time is up for not being responsible’ (Statement, available in German here), arguing that organisations do not meet data protection standards when operating a fan page on Facebook. Marketers in Germany and Europe are now uncertain whether they should take down their Facebook fan pages and any other social media presence. In this blog, we provide you with a first interpretation and a ‘first aid kit’.

Background

Wirtschaftsakademie Schleswig-Holstein GmbH (Wirtschaftsakademie) operates a Facebook fan page and was ordered by the Schleswig-Holstein Data Protection Authority to deactivate it. Neither Facebook Ireland Ltd nor Wirtschaftsakademie had informed visitors about the use of cookies and the subsequent processing of their data. Wirtschaftsakademie took the case to court, arguing essentially that it was not responsible for the processing of data by Facebook or for the cookies installed by Facebook.

CJEU decision

The CJEU ruled that the operator of a fan page hosted on a social network must be considered a ‘data controller’.

The court began by noting that the concept of controller must be defined broadly as an entity that alone or jointly with others determines the purposes and means of the processing of personal data. It observed that, within the European Union, Facebook Ireland must be regarded as the controller responsible for the processing of personal data of Facebook users and of persons visiting the fan pages hosted on Facebook.

Next, the CJEU stated that the operator of a fan page hosted on Facebook is also a (co-)controller. The operator contributes to the processing of visitors’ personal data by defining parameters when creating the fan page. In particular, the operator can request the processing of demographic data relating to its target audience (for example, age, sex, and information on lifestyle and interests) and of geographical data, allowing it to better target the information it offers.

The case has now been referred back to the German Federal Administrative Court, which will decide whether the specific use of Facebook fan pages by Wirtschaftsakademie was compliant.

Continue Reading

Data Protection Act 2018 comes into force

On 23 May 2018, the Data Protection Act 2018 (DPA) received royal assent and became UK law. The DPA implements the EU’s General Data Protection Regulation (GDPR), while providing for certain permitted derogations, additions and UK-specific provisions.

The DPA:

  • Repeals and replaces the previous Data Protection Act 1998 (the 1998 Act) as the primary piece of data protection legislation in the UK
  • Is designed to ensure that UK and EU data protection regimes are aligned post-Brexit
  • Implements the EU Law Enforcement Directive, establishing rules on the processing of personal data by law enforcement agencies and intelligence services

This blog looks at key issues of interest in the DPA relating to liability, compliance and enforcement.

DPA offences

Under the GDPR, EU Member States may apply certain exemptions or provide for their own national rules regarding certain types of personal data processing. The DPA creates additional data protection offences and sets out the Information Commissioner’s Office’s (ICO) powers and enforcement capabilities.

UK-specific data protection offences include:

  • Knowingly or recklessly obtaining or disclosing personal data without the consent of the data controller, or procuring such disclosure, or retaining data obtained without consent.
  • Selling, or offering to sell, personal data knowingly or recklessly obtained or disclosed.
  • Where an access or data portability request has been received, obstructing the provision of information that an individual would be entitled to receive.
  • Taking steps, knowingly or recklessly, to re-identify information that has been “de-identified” (although this action can be defended when it is justified in the public interest).
  • Knowingly or recklessly processing personal data that has been re-identified (which is a separate offence), without the consent of the controller responsible for the de-identification.

Continue Reading

Asserting the defense of lack of personal jurisdiction in privacy class actions

In the wake of the U.S. Supreme Court’s decision in Spokeo v. Robins, 136 S. Ct. 1540 (2016), there has been a plethora of litigation in privacy class actions over whether federal courts can exercise subject-matter jurisdiction over the asserted statutory or common law claims. However, in addition to considering whether a court has subject-matter jurisdiction, entities hit with a putative privacy class action should also consider whether the court can exercise personal jurisdiction over the parties and claims.

There are two types of personal jurisdiction: general and specific. Over the course of the last decade, the U.S. Supreme Court has limited the forums in which a court can exercise general – or all purpose – jurisdiction over a defendant. In most cases, those forums will be only an entity’s state of incorporation and principal place of business. The result has been an increased focus on whether courts have specific – or case-linked – jurisdiction. Now, entities – even those that conduct business in all 50 states – may be able to successfully bring a motion to dismiss for lack of personal jurisdiction where the entity’s contacts with the forum did not give rise to the claims against it.

In addition, the Supreme Court’s decision in Bristol-Myers Squibb Co. v. Superior Court of California, San Francisco Cty., 137 S. Ct. 1773 (2017) (Bristol-Myers) opened the door to an additional use of the lack of personal jurisdiction defense in nationwide privacy class actions. Relying on Bristol-Myers, several district courts have permitted entities hit with nationwide class actions to limit the putative class where the absent class members’ claims did not arise from the entity’s contacts with the forum state.

Continue Reading

ICO and NCSC issue guidance on security outcomes under GDPR

The General Data Protection Regulation ((EU) 2016/679) (GDPR) came into effect on 25 May 2018. One of the key principles centres on integrity and confidentiality of personal data. Article 5(1)(f) of the GDPR provides that personal data shall be:

“processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures (emphasis added)”

The GDPR goes a little further than the previous data protection framework (that is, under the EU Data Protection Directive 95/46/EC) and provides some description of the technical and organisational measures expected to achieve a level of security appropriate to the risk associated with the processing of personal data (see Article 32 of the GDPR). Inevitably, however, decisions around security will need to be made by the controller and/or processor – and it will therefore be for them to determine what is “appropriate”.

The National Cyber Security Centre (NCSC) and the Information Commissioner’s Office (ICO) have published ‘security outcomes’ that aim to provide further guidance on the security of processing personal data.

On 18 May 2018, the NCSC and ICO published a set of technical security outcomes considered to represent “appropriate measures” under Article 5(1)(f). This guidance describes an overall set of outcomes that are considered ‘appropriate’ to prevent personal data being accidentally or deliberately compromised.

Continue Reading

Unregistered patent transactions will cost you

A recent UK Patents Court judgment has clarified the extent of the litigation sanctions imposed when a patent transaction is not recorded at the Patent Office. In these circumstances, the UK patents system may deprive a successful plaintiff of its entitlement to recover its full legal costs. The decision also affects trademarks and registered designs, as the relevant statutes contain identical provisions. It will have an impact on potential litigants and on future commercial transactions: parties must ensure that they register their interests and that due diligence in corporate and commercial transactions takes this risk into account. The article by Jonathan Radcliffe, published by Bloomberg, here, examines this little-known trap for the unwary and the UK’s approach to the consequences of failing to record such a transaction.

German authorities: tracking and profiling cookies require opt-in consent

On 26 April 2018, the Conference of German Data Protection Authorities (German DPAs) released a highly criticised position paper on the applicability of the German Telemedia Act (TMA) after 25 May 2018 (Position Paper, available in German here). The Position Paper clearly states that tracking and profiling cookies now require informed prior opt-in consent.

Position Paper

Web tracking is governed by the General Data Protection Regulation (GDPR) as well as by the ePrivacy Directive. The ePrivacy Directive is currently being revised: a new ePrivacy Regulation was supposed to enter into force in tandem with the GDPR on 25 May 2018, but it has been delayed, and we do not expect it to enter into force before the end of 2019. In view of the upcoming ePrivacy Regulation, the German legislator has not updated the TMA.

The Position Paper outlines the German DPAs’ view on the relationship between the GDPR and the TMA and its consequences for the use of cookies. It states that the GDPR takes precedence unless national law prevails because of an opening clause or a conflict-of-law rule. Article 95 of the GDPR is such a conflict-of-law rule. It provides that the GDPR shall not impose additional obligations regarding the processing of data in connection with the provision of publicly available electronic communications services in public communication networks, in relation to matters for which specific obligations with the same objective are set out in the ePrivacy Directive. However, the German DPAs explain that Article 95 of the GDPR does not apply to the provisions of the TMA that govern tracking and reach measurement.
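
For web teams, the practical consequence of the German DPAs’ view is that tracking may only start after an affirmative opt-in. The following is a minimal, hypothetical TypeScript sketch of how a site might gate a third-party tracking script behind such consent; the cookie name, the button id and the loadAnalyticsScript helper are illustrative assumptions on our part, not anything prescribed by the Position Paper or the GDPR.

```typescript
// Illustrative sketch only (not legal advice): tracking code is loaded only
// after the visitor actively opts in. Cookie name, button id and script URL
// are hypothetical placeholders.

const CONSENT_COOKIE = "tracking_consent"; // assumed cookie name

function hasOptInConsent(): boolean {
  // Consent must be an affirmative action; absence of the cookie means no tracking.
  return document.cookie
    .split("; ")
    .some((c) => c === `${CONSENT_COOKIE}=granted`);
}

function recordOptIn(): void {
  // Persist the visitor's explicit choice (example: 12-month lifetime).
  document.cookie =
    `${CONSENT_COOKIE}=granted; max-age=${60 * 60 * 24 * 365}; path=/; SameSite=Lax`;
}

function loadAnalyticsScript(): void {
  // Inject the third-party tracking script only here, never on initial page load.
  const s = document.createElement("script");
  s.src = "https://example.com/analytics.js"; // placeholder URL
  s.async = true;
  document.head.appendChild(s);
}

// Tracking starts only after an explicit click on an opt-in control.
document.getElementById("consent-opt-in")?.addEventListener("click", () => {
  recordOptIn();
  loadAnalyticsScript();
});

// On subsequent visits, honour the previously stored choice.
if (hasOptInConsent()) {
  loadAnalyticsScript();
}
```

The key design choice in this sketch is that nothing tracking-related runs by default: the analytics script is injected only once a stored opt-in is found or the visitor actively clicks the opt-in control.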

Continue Reading

European Parliament publishes a corrigendum to the GDPR

On 25 April 2018, the European Parliament’s Civil Liberties, Justice & Home Affairs Committee published a corrigendum (a list of corrections to errors found in a text after publication) to the European General Data Protection Regulation ((EU) 2016/679) (GDPR).

There are 26 ‘official’ language versions of the GDPR (the 24 official EU languages, plus Norwegian and Icelandic for the other European Economic Area countries). This can create differences in interpretation, with potentially serious ramifications for enforcement and compliance, so harmonising the legislation is a key concern for the EU Parliament. The corrigendum deals mainly with typographical and clerical errors across all language versions of the GDPR. Many of these corrections had previously been requested by Member States for their own language versions.

Continue Reading

Should the President’s tweets create a “public forum”?

You might be aware that the President of the United States has a Twitter account. You might not be aware that each time he uses the account to post information about government business, the President opens a new “public forum” for assembly and debate. According to District Judge Naomi Reice Buchwald’s decision in Knight First Amendment Institute v. Trump, the government controls the “interactive space” associated with the President’s tweets and may not exercise that control so as to exclude other users based on the content of their speech. In other words, the District Court wrote, the First Amendment regulates the President’s conduct on Twitter and prohibits him from blocking other users from replying to his political tweets. Unfortunately, this ruling could backfire, so that a decision intended to promote free speech may instead degrade or limit it.

It works like this: the President or his aides sign in to his account, @realDonaldTrump, and submit content to Twitter – text, photographs and videos. Twitter serves that content to anyone who requests it via a web browser, i.e., it is visible to everyone with Internet access. If another user has signed in to their Twitter account, they may “reply” to the President’s tweets. A third user who clicks on the tweet will see the reply beneath the original tweet, along with all other replies. If the President has “blocked” a user, however, the blocked user cannot see the President’s tweets or reply to them as long as the blocked user is signed in to their account. The blocked user can still reply to other replies to the original tweet, and those “replies to replies” will be visible to other users in the comment thread associated with the tweet. The blocked user can still view the President’s tweets by signing out of their account. And they can still comment on the President’s tweets in connection with their own account or any other user’s account that has not blocked them from replying.

Continue Reading

European Commission proposes draft Whistleblowing Directive

On 23 April 2018, the European Commission published a proposal for a Directive on the protection of whistleblowers reporting on breaches of EU law, accompanied by an explanatory memorandum.

The Directive

The intention behind the proposal is to harmonise the minimum level of protection available to whistleblowers across the EU. It reflects the Commission’s view that whistleblowers can play an important role in exposing breaches of EU law, but that they often resist coming forward for fear of the legal and financial consequences. At present, legal protection for whistleblowers is fragmented and, in the Commission’s view, insufficient. In its explanatory memorandum, the Commission speaks of ‘missed opportunities’ for preventing and detecting breaches of EU law in Member States where protection is currently lacking, and argues that the harmonisation brought about by the draft Directive will contribute towards improving the business environment, increasing fairness in taxation and promoting labour rights.

The draft Directive applies to reports of breaches across a wide range of areas of EU law, including the protection of privacy and personal data, and the security of network and information systems. It obliges entities that meet prescribed thresholds to establish internal channels and procedures for handling reports made by whistleblowers. For private-sector entities, the threshold is 50 or more employees or an annual turnover of EUR 10 million or more, although no minimum threshold applies to businesses offering financial services. Public-sector entities are caught if they are involved in state or regional administration, if they are municipalities with more than 10,000 inhabitants, or if they are otherwise governed by public law.

Continue Reading

European Commission outlines plans to boost artificial intelligence

Last month, the European Commission (Commission) announced plans to bolster the future of artificial intelligence (AI) across the bloc. In a paper on ‘Artificial Intelligence for Europe’, the Commission proposed a three-pronged approach to: (i) increase public and private investment in AI; (ii) prepare for socio-economic changes; and (iii) ensure an appropriate ethical and legal framework for AI. This blog will look at what AI is and the Commission’s proposed strategy.

What is AI?

The Commission defines AI as “systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals”. AI can be software-based, in the virtual world (such as image-analysis software, search engines or recognition systems) or embedded in hardware (for example, self-driving cars, Internet of Things applications, and advanced robots).

AI is increasingly prominent in our society and used on a near daily basis by most people. Many AI technologies utilize data to improve their performance and guide automated decision-making. The number of technological and commercial AI applications continues to increase, enabling AI to have a transformative effect on society as a whole.

Continue Reading