With antitrust investigations looming, Apple reverses course on bans of parental control apps

With congressional probes and greater scrutiny from federal regulators on the horizon, Apple has abruptly reversed course on its bans of parental control apps available in its App Store.

As reported by The New York Times, Apple quietly updated its App Store guidelines to reverse its decision to ban certain parental control apps.

The battle between Apple and certain app developers dates back to last year, when the iPhone maker first put companies on notice that it would cut their access to the App Store if they didn’t make changes to their monitoring technologies.

At the heart of the issue is the use of mobile device management (MDM) technology in the parental control apps that Apple removed from the App Store, the company said in a statement earlier this year.

These device management tools give a third party control over and access to a device’s user location, app use, email accounts, camera permissions and browsing history.

“We started exploring this use of MDM by non-enterprise developers back in early 2017 and updated our guidelines based on that work in mid-2017,” the company said.

Apple acknowledged that the technology has legitimate uses in the context of businesses looking to monitor and manage corporate devices to control proprietary data and hardware, but, the company said, it is “a clear violation of App Store policies — for a private, consumer-focused app business to install MDM control over a customer’s device.”

Last month, developers of these parental monitoring tools banded together to offer a solution. In a joint statement issued by app developers including OurPact, Screen Time, Kidslox, Qustodio, Boomerang, Safe Lagoon and FamilyOrbit, the companies said simply, “Apple should release a public API granting developers access to the same functionalities that Apple’s native ‘Screen Time’ uses.”

By providing access to its Screen Time app, Apple would obviate the need for the kind of controls that developers had put in place to work around Apple’s restrictions.

“The API proposal presented here outlines the functionality required to develop effective screen time management tools. It was developed by a group of leading parental control providers,” the companies said. “It allows developers to create apps that go beyond iOS Screen Time functionality, to address parental concerns about social media use, child privacy, effective content filtering across all browsers and apps and more. This encourages developer innovation and helps Apple to back up their claim that ‘competition makes everything better and results in the best apps for our customers.’ ”
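No such public API exists at the time of writing, so the following is purely an illustrative sketch (every name in it is invented) of the kind of screen-time control surface the developers describe: per-app time limits plus category-level content blocking.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: these types and names are invented for
# illustration and do not correspond to any real Apple API.

@dataclass
class AppLimit:
    bundle_id: str       # e.g. "com.example.socialapp"
    daily_minutes: int   # permitted usage per day

@dataclass
class ScreenTimePolicy:
    limits: list = field(default_factory=list)
    blocked_categories: set = field(default_factory=set)

    def add_limit(self, bundle_id: str, daily_minutes: int) -> None:
        self.limits.append(AppLimit(bundle_id, daily_minutes))

    def is_blocked(self, category: str) -> bool:
        # Category-level filtering, e.g. "social" or "games"
        return category in self.blocked_categories

# A parental control app would build a policy and hand it to the
# operating system to enforce:
policy = ScreenTimePolicy()
policy.add_limit("com.example.socialapp", 30)
policy.blocked_categories.add("social")
print(policy.is_blocked("social"))  # True
```

The substance of the developers’ proposal is that enforcement would happen at the OS level, as it does for Apple’s own Screen Time, rather than through MDM profiles repurposed from enterprise device management.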

Now, Apple has changed its guidelines to indicate that apps using MDM “must request the mobile device management capability, and may only be offered by commercial enterprises, such as business organizations, educational institutions, or government agencies, and, in limited cases, companies utilizing MDM for parental controls. MDM apps may not sell, use, or disclose to third parties any data for any purpose, and must commit to this in their privacy policy.”

Essentially, the change reverses the company’s policy without granting access to Screen Time, as the consortium of companies had suggested.

“It’s been a hellish roller coaster,” Dustin Dailey, a senior product manager at OurPact, told The New York Times. OurPact had been the top parental control app in the App Store before it was pulled in February. The company estimated that Apple’s move cost it around $3 million, a spokeswoman told the Times.

House Democrats release more than 3,500 Russian Facebook ads

Democrats from the House Intelligence Committee have released thousands of ads that were run on Facebook by the Russia-based Internet Research Agency.

The Democrats said they’ve released a total of 3,519 ads today from 2015, 2016 and 2017. This doesn’t include 80,000 pieces of organic content shared on Facebook by the IRA, which the Democrats plan to release later.

What remains unclear is the impact that these ads actually had on public opinion, but the Democrats note that they were seen by more than 11.4 million Americans.

You can find all the ads here, though it’ll take some time just to download them. As has been noted about earlier (smaller) releases of IRA ads, they aren’t all nakedly pro-Trump, but instead express a dizzying array of opinions and arguments, targeted at a wide range of users.

“Russia sought to weaponize social media to drive a wedge between Americans, and in an attempt to sway the 2016 election,” tweeted Adam Schiff, who is the Democrats’ ranking member on the House Intelligence Committee. “They created fake accounts, pages and communities to push divisive online content and videos, and to mobilize real Americans.”

Russia sought to divide us by our race, our country of origin, our religion, and our politics. They attempted to hijack legitimate events meant to do good – teaching self-defense, providing legal aid – as well as those events meant to widen a rift.

Here’s just some examples: pic.twitter.com/YMX2FTgPGU

— Adam Schiff (@RepAdamSchiff) May 10, 2018

He added, “By exposing these Russian-created Facebook advertisements, we hope to better protect legitimate political expression and safeguard Americans from having the information they seek polluted by foreign adversaries. Sunlight is always the best disinfectant.”

In conjunction with this release, Facebook published a post acknowledging that it was “too slow to spot this type of information operations interference” in the 2016 election, and outlining the steps (like creating a public database of political ads) that it’s taking to prevent this in the future.

“This will never be a solved problem because we’re up against determined, creative and well-funded adversaries,” Facebook said. “But we are making steady progress.”

Google is banning Irish abortion referendum ads ahead of vote

Google is suspending adverts related to a referendum in Ireland on whether or not to overturn a constitutional clause banning abortion. The vote is due to take place in a little over two weeks’ time.

“Following our update around election integrity efforts globally, we have decided to pause all ads related to the Irish referendum on the eighth amendment,” a Google spokesperson told us.

The spokesperson said enforcement of the policy — which covers referendum adverts appearing alongside Google search results and on its video-sharing platform YouTube — will begin in the next 24 hours, with the pause remaining in effect through the vote on May 25.

The move follows an announcement by Facebook yesterday that it had stopped accepting referendum-related ads paid for by foreign entities. However, Google is going further and pausing all ads targeting the vote.

Given the sensitivity of the issue, a blanket ban is likely the least controversial option for the company, as well as the simplest to implement — whereas Facebook has said it has been liaising with local groups for some time, and has created a dedicated channel where those groups can report ads that might be breaking its ban on foreign buyers, generating reports that Facebook will need to review and act on quickly.

Given how close the vote now is, both tech giants have been accused of acting too late to prevent foreign interests from using their platforms to exploit a loophole in Irish law — pouring money into unregulated digital advertising to get around a ban on foreign donations to political campaigns.

Speaking to the Guardian, a technology spokesperson for Ireland’s opposition party Fianna Fáil described Google’s decision to ban the adverts as “too late in the day”.

“Fake news has already had a corrosive impact on the referendum debate on social media,” James Lawless TD told the paper, adding that the referendum campaign had made it clear Ireland needs legislation to restrict the activities of Internet companies’ ad products “in the same way that steps were taken in the past to regulate political advertising on traditional forms of print and broadcast media”.

We’ve asked Google why it’s only taken the decision to suspend referendum ad buys now, and why it did not act months earlier — given the Irish government announced its intention to hold a 2018 referendum on repealing the Eighth Amendment in mid-2017 — and will update this post with any response.

In a public policy blog post earlier this month, the company’s policy SVP Kent Walker talked up the steps the company is taking to (as he put it) “support… election integrity through greater advertising transparency”, saying it’s rolling out new policies for U.S. election ads across its platforms, including requiring additional verification for election ad buyers, such as confirmation that an advertiser is a U.S. citizen or lawful permanent resident.

However this U.S.-first focus leaves other regions vulnerable to election fiddlers — hence Google deciding to suspend ad buys around the Irish vote, albeit tardily.

The company has also previously said it will implement a system of disclosures for ad buyers to make it clear to users who paid for the ad, and that it will be publishing a Transparency Report this summer breaking out election ad purchases. It also says it’s building a searchable library for election ads.

It’s not clear, though, when any of these features will be rolled out across all regions where Google ads are served.

Facebook has also announced a raft of similar transparency steps related to political ads in recent years — responding to political pressure and scrutiny following revelations about the extent of Kremlin-backed online disinformation campaigns that had targeted the 2016 US presidential election.

Brexit data transfer gaps a risk for UK startups, MPs told

The uncertainty facing digital businesses as a result of Brexit was front and center during a committee session in the UK parliament today, with experts including the UK’s information commissioner responding to MPs’ questions about how and even whether data will continue to flow between the UK and the European Union once the country has departed the bloc — in just under a year’s time, per the current schedule.

The risks for UK startups vs tech giants were also flagged, with concerns voiced that larger businesses are better placed to weather Brexit-based uncertainty thanks to greater resources at their disposal to plug data transfer gaps resulting from the political upheaval.

Information commissioner Elizabeth Denham emphasized the overriding importance of the UK data protection bill being passed. Though that’s really just the baby step where the Brexit negotiations are concerned.

Parliamentarians have another vote on the bill this afternoon, during its third reading, and the legislative timetable is tight, given that the pan-EU General Data Protection Regulation (GDPR) takes direct effect on May 25 — and many provisions in the UK bill are intended to bring domestic law into line with that regulation, and complete implementation ahead of the EU deadline.

Despite the UK referendum vote to pull the country out of the EU, the government has committed to complying with GDPR — which ministers hope will lay a strong foundation for it to secure a future agreement with the EU that allows data to continue flowing, as is critical for business. Although what exactly that future data regime might be remains to be seen — and various scenarios were discussed during today’s hearing — hence there’s further operational uncertainty for businesses in the years ahead.

“Getting the data policy right is of critical importance both on the commercial side but also on the security and law enforcement side,” said Denham. “We need data to continue to flow and if we’re not part of the unified framework in the EU then we have to make sure that we’re focused and we’re robust about putting in place measures to ensure that data continues to flow appropriately, that it’s safeguarded and also that there is business certainty in advance of our exit from the EU.

“Data underpins everything that we do and it’s critically important.”

Another witness to the committee, James Mullock, a partner at law firm Bird & Bird, warned that the Brexit-shaped threat to UK-EU data flows could result in a situation akin to what happened after the long-standing Safe Harbor arrangement between the EU and the US was struck down in 2015 — leaving thousands of companies scrambling to put in place alternative data transfer mechanisms.

“If we have anything like that it would be extremely disruptive,” warned Mullock. “And it will, I think, be extremely off-putting in terms of businesses looking at where they will headquarter themselves in Europe. And therefore the long term prospects of attracting businesses from many of the sectors that this country supports so well.”

“Essentially what you’re doing is you’re putting the burden on business to find a legal agreement or a legal mechanism to agree data protection standards on an overseas recipient so all UK businesses that receive data from Europe will be having to sign these agreements or put in place these mechanisms to receive data from the European Union which is obviously one of our very major senders of data to this country,” he added of the alternative legal mechanisms fall-back scenario.

Another witness, Giles Derrington, head of Brexit policy for UK technology advocacy organization TechUK, explained how the collapse of Safe Harbor had saddled businesses with a heavy bureaucratic burden — and went on to suggest that a similar scenario befalling the UK as a result of Brexit could put domestic startups at a big disadvantage vs tech giants.

“We had a member company who had to put in place two million Standard Contractual Clauses over the space of a month or so [after Safe Harbor was struck down],” he told the committee. “The amount of cost, time, effort that took was very, very significant. That’s for a very large company.

“The other side of this is the alternatives are highly exclusionary — or could be highly exclusionary to smaller businesses. If you look at India for example, who have been trying to get an adequacy agreement with the EU for about ten years, what you’ve actually found now is a gap between those large multinationals, who can put in place binding corporate rules, standard contractual clauses, have the kind of capital to be able to do that — and it gives them an access to the European market which frankly most smaller businesses don’t have from India.

“We obviously wouldn’t want to see that in a UK tech sector which is an awful lot of startups, scale-ups, and is a key part of the ecosystem which makes the UK a tech hub within Europe.”

Denham made a similar point. “Binding corporate rules… might work for multinational companies [as an alternative data transfer mechanism] that have the ability to invest in that process,” she noted. “Codes of conduct and certification are other transfer mechanisms that could be used but there are very few codes of practice and certification mechanisms in place at this time. So, although that could be a future transfer mechanism… we don’t have codes and certifications that have been approved by authorities at this time.”

“I think it would be easier for multinational companies and large companies, rather than small businesses and certainly microbusinesses, that make up the lion’s share of business in the UK, especially in tech,” she added of the fall-back scenarios.

Giving another example of the scale of the potential bureaucracy nightmare, Stephen Hurley, head of Brexit planning and policy for UK ISP British Telecom, told the committee it has more than 18,000 suppliers. “If we were to put in place Standard Contractual Clauses it would be a subset of those suppliers but we’d have to identify where the flows of data would be coming from — in particular from the EU to the UK — and put in place those contractual clauses,” he said.

“The other problem with the contractual clauses is they’re a set form, they’re a precedent form that the Commission issues. And again that isn’t necessarily designed to deal with the modern ways of doing business — the way flows of data occurs in practice. So it’s quite a cumbersome process. And… [there’s] uncertainty as well, given they are currently under challenge before the European courts, a lot of companies now are already doing a sort of ‘belt and braces’ where even if you rely on Privacy Shield you’ll also put in place an alternative transfer mechanism to allow you to have a fall back in case one gets temporarily removed.”

A better post-Brexit scenario than every UK business having to do the bureaucratic and legal leg-work themselves would be the UK government securing a new data flow arrangement with the EU. Not least because, as Hurley mentioned, Standard Contractual Clauses are subject to a legal challenge, with legal question marks now extended to Privacy Shield too.

But what shape any such future UK-EU data transfer arrangement could take remains to be seen.

The panel of witnesses agreed that personal data flows would be very unlikely to be housed within any future trade treaty between the UK and the EU. Rather, data would need to live within a separate treaty or bespoke agreement — if indeed such a deal can be achieved.

Another possibility is for the UK to receive an adequacy decision from the EC — such as the Commission has granted to other third countries (like the US). But there was consensus on the panel that some form of bespoke data arrangement would be a superior outcome — for legal reasons but also for reciprocity and more.

Mullock’s view is that a treaty would be preferable, as it would be at less risk of legal challenge. “I’m saying a treaty is preferable to a decision but we should take what we can get,” he said. “But a treaty is the ultimate standard to aim for.”

Denham agreed, underlining how an adequacy decision would be much more limiting. “I would say that a bespoke agreement or a treaty is preferable because that implies mutual recognition of each of our data protection frameworks,” she said. “It contains obligations on both sides, it would contain dispute mechanisms. If we look at an adequacy decision by the Commission that is a one-way decision judging the standard of UK law and the framework of UK law to be adequate according to the Commission and according to the Council. So an agreement would be preferable but it would have to be a standalone treaty or a standalone agreement that’s about data — and not integrate it into a trade agreement because of the fundamental rights element of data protection.”

Such a bespoke arrangement could also offer a route for the UK to negotiate and retain some role for her office within EU data protection regulation after Brexit.

Because as it stands, with the UK set to exit the EU next year — and even if an adequacy decision was secured — the ICO will lose its seat at the table at a time when EU privacy laws are setting the new global standard, thanks to GDPR.

“Unless a role for the ICO was negotiated through a bespoke agreement or a treaty there’s no way in law at present that we could participate in the one-stop shop [element of GDPR, which allows for EU DPAs to co-ordinate regulatory actions] — which would bring huge advantages to both sides and also to British businesses,” said Denham.

“At this time when the GDPR is in its infancy, participating in shaping and interpreting the law I think is really important. And the group of regulators that sit around the table at the EU are the most influential blocs of regulators — and if we’re outside of that group and we’re an observer we’re not going to have the kind of effect that we need to have with big tech companies. Because that’s all going to be decided by that group of regulators.”

“The European Data Protection Board will set the weather when it comes to standards for artificial intelligence, for technologies, for regulating big tech. So we will be a less influential regulator, we will continue to regulate the law and protect UK citizens as we do now, but we won’t be at the leading edge of interpreting the GDPR — and we won’t be bringing British values to that table if we’re not at the table,” she added.

Hurley also made the point that if the ICO is not inside the GDPR one-stop shop mechanism then UK companies will have to choose another data protection agency within the EU to act as their lead regulator — describing this as “again another burden which we want to avoid”.

The panel was asked about opportunities for domestic divergence from elements of GDPR once the UK is outside the EU. But no one saw much advantage to be gained by stepping outside a regulatory regime that is now responsible for the de facto global standard for data protection.

“GDPR is by no means perfect and there are a number of issues that we have with it. Having said that because GDPR has global reach it is now effectively being seen as we have to comply with this at an international level by a number of our largest members, who are rolling it out worldwide — not just Europe-wide — so the opportunities for divergence are quite limited,” said Derrington. “Particularly actually in areas like AI. AI requires massive amounts of data sets. So you can’t do that just from a UK only data-set of 60 million people if you took everyone. You need more data than that.

“If you were to use European data, which most of them would, then that will require you to comply with GDPR. So actually even if you could do things which would make it easier for some of the AI processes to happen by doing so you’d be closing off your access to the data-sets — and so most of the companies I’ve spoken to… see GDPR as that’s what we’re going to have to comply with. We’d much rather it be one rule… and to be able to maintain access to [EU] data-sets rather than just applying dual standards when they’re going to have to meet GDPR anyway.”

He also noted that about two-thirds of TechUK members are small and medium sized businesses, adding: “A small business working in AI still needs massive amounts of data.

“From a tech sector perspective, considering where data protection sits in the public consciousness now, [we] actually don’t see there being much opportunity to change GDPR. I don’t think that’s necessarily where the centre of gravity amongst the public is — if you look at the data protection bill, as it went through both houses, most of the amendments to the bill were to go further, to strengthen data protection. So actually we don’t necessarily see this idea that we will significantly walk back GDPR. And bear in mind that any company which is doing any work with the EU would have to comply with GDPR anyway.”

The possibility of legal challenges to any future UK-EU data arrangement was also discussed during the hearing, with Denham saying that scrutiny of the UK’s surveillance regime once it is outside the EU is inevitable — though she suggested the government will be able to win over critics if it can fully articulate its oversight regime.

“Whether the UK proceeds with an adequacy assessment or whether we go down the road of looking at a bespoke agreement or a treaty we know, as we’ve seen with the Privacy Shield, that there will be scrutiny of our intelligence services and the collection, use and retention of data. So we can expect that,” she said, before arguing the UK has a “good story” to tell on that front — having recently reworked its domestic surveillance framework, including accepting the need to make amendments to the law following legal challenges.

“Accountability, transparency and oversight of our intelligence service needs to be explained and discussed to our [EU] colleagues but there is no doubt that it will come under scrutiny — and my office was part of the most recent assessment of the Privacy Shield. And looking at the US regime. So we’re well aware of the kind of questions that are going to be asked — including our arrangement with the Five Eyes, so we have to be ready for that,” she added.

White House will host tech industry for AI summit on Thursday

Artificial intelligence has been a mainstay of the conversation in Silicon Valley these past few years, and now the technology is increasingly being discussed in policy circles in DC. Washington types see opportunities for AI to improve efficiency and increase economic growth, while at the same time, they have growing concerns around job automation and competitive threats from China and other countries.

Now, it appears the White House itself is getting involved in bringing together key American stakeholders to discuss AI and those opportunities and challenges. According to Tony Romm and Drew Harwell of the Washington Post, the White House intends to bring executives from major tech companies and other large corporations together on Thursday to discuss AI and how American companies can cooperate to take advantage of new advances in these technologies.

Among the confirmed guests are Facebook’s Jerome Pesenti, Amazon’s Rohit Prasad and Intel CEO Brian Krzanich. While many tech companies will be present, a total of 38 companies are expected to attend, including United Airlines and Ford.

AI policy has been top-of-mind for many policymakers around the world. French President Emmanuel Macron has announced a comprehensive national AI strategy, as has Canada, which has put together a research fund and a set of programs to attempt to build on the success of notable local AI researchers such as University of Toronto professor Geoffrey Hinton, who is a major figure in deep learning.

But it is China that has increasingly drawn the attention and concern of U.S. policymakers. The country and its venture capitalists are laying out billions of dollars to invest in the AI industry, and it has made leadership in artificial intelligence one of the nation’s top priorities through its Made in China 2025 plan and other state programs. These plans are designed to coordinate various constituencies such as university researchers, scientists, companies, venture capitalists, and anyone else who might be able to assist in building out China’s AI capabilities.

In comparison, the United States has been remarkably uncoordinated when it comes to AI. While the government has released some strategic plans, it has mostly failed to follow through on coordinating more dollars toward artificial intelligence. As the New York Times noted in February, the White House has been remarkably silent on AI, despite the growing discussions around the technology.

That lack of engagement from policymakers has been tenable so far — after all, the United States is the world leader in AI research. But with other nations pouring resources and talent into the space, DC policymakers are worried that the U.S. could suddenly find itself behind the frontier of research, with particular repercussions for the defense industry.

Expect more news on this front in the coming months as DC’s various think tanks and analysts get their policy processes in motion.

Tech watchdogs call on Facebook and Google for transparency around censored content

If a company like Facebook can’t even understand why its moderation tools work the way they do, then its users certainly don’t have a fighting chance. Anyway, that’s the idea behind what a coalition of digital rights groups is calling The Santa Clara Principles (PDF), “a set of minimum standards” aimed at Facebook, Google, Twitter and other tech companies that moderate the content published on their platforms.

The suggested guidelines grew out of a set of events addressing “Content Moderation and Removal at Scale,” the second of which is taking place today in Washington, D.C. The group participating in these conversations shared the goal of coming up with a suggested ruleset for how major tech companies should disclose which content is being censored, why it is being censored and how much speech is censored overall.

“Users deserve more transparency and greater accountability from platforms that play an outsized role — in Myanmar, Australia, Europe, and China, as well as in marginalized communities in the U.S. and elsewhere — in deciding what can be said on the internet,” Electronic Frontier Foundation (EFF) Director for International Freedom of Expression Jillian C. York said.

As the Center for Democracy and Technology explains, The Santa Clara Principles (PDF) ask tech companies to disclose three categories of information:

  • Numbers (of posts removed, accounts suspended);
  • Notice (to users about content removals and account suspensions); and
  • Appeals (for users impacted by content removals or account suspensions).
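As a rough illustration of how those three categories relate, a platform’s moderation log could support all of them from one set of records. This is a hypothetical sketch (field names and methods are invented); the principles prescribe disclosures, not any particular data format.

```python
from collections import Counter

# Hypothetical illustration of the three disclosure categories the
# principles name: aggregate numbers, per-user notice, and an appeals
# trail. All names here are invented for illustration.

class ModerationLog:
    def __init__(self):
        self.events = []  # one record per removal or suspension

    def record_removal(self, user: str, rule: str,
                       notified: bool, appealed: bool = False) -> None:
        self.events.append({"user": user, "rule": rule,
                            "notified": notified, "appealed": appealed})

    def numbers(self) -> Counter:
        # "Numbers": totals broken down by the rule invoked
        return Counter(e["rule"] for e in self.events)

    def notice_rate(self) -> float:
        # "Notice": the share of affected users actually told why
        return sum(e["notified"] for e in self.events) / len(self.events)

    def appeals(self) -> int:
        # "Appeals": how many removals were contested by users
        return sum(e["appealed"] for e in self.events)

log = ModerationLog()
log.record_removal("alice", "hate_speech", notified=True, appealed=True)
log.record_removal("bob", "spam", notified=False)
print(log.numbers()["spam"])  # 1
print(log.notice_rate())      # 0.5
```

The point of the principles is that platforms already hold records like these internally; the ask is simply that the aggregates, notices and appeal outcomes be published.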

“The Santa Clara Principles are the product of years of effort by privacy advocates to push tech companies to provide users with more disclosure and a better understanding of how content policing works,” EFF Senior Staff Attorney Nate Cardozo added.

“Facebook and Google have taken some steps recently to improve transparency, and we applaud that. But it’s not enough. We hope to see the companies embrace The Santa Clara Principles and move the bar on transparency and accountability even higher.”

Participants in drafting The Santa Clara Principles include the ACLU Foundation of Northern California, Center for Democracy and Technology, Electronic Frontier Foundation, New America’s Open Technology Institute and a handful of scholars from departments studying ethics and communications.

Facebook is still falling short on privacy, says German minister

Germany’s justice minister has written to Facebook calling for the platform to implement an internal “control and sanction mechanism” to ensure third-party developers and other external providers are not able to misuse Facebook data — calling for it to both monitor third party compliance with its platform policies and apply “harsh penalties” for any violations.

The letter, which has been published in full in local media, follows the privacy storm that has engulfed the company since mid-March, when fresh revelations were published by the Observer and the New York Times — detailing how Cambridge Analytica had obtained and used personal information on up to 87 million Facebook users for political ad targeting purposes.

Writing to Facebook’s founder and CEO Mark Zuckerberg, justice minister Katarina Barley welcomes some recent changes the company has made around user privacy, describing its decision to limit collaboration with “data dealers” as “a good start”, for example.

However she says the company needs to do more — setting out a series of what she describes as “core requirements” in the area of data and consumer protection (bulleted below). 

She also writes that the Cambridge Analytica scandal confirms long-standing criticisms against Facebook made by data and consumer advocates in Germany and Europe, adding that it suggests various lawsuits filed against the company’s data practices have “good cause”.

“Unfortunately, Facebook has not responded to this criticism in all the years, or only insufficiently,” she continues (translated via Google Translate). “Facebook has rather expanded its data collection and use. This is at the expense of the privacy and self-determination of its users and third parties.”

“What is needed is that Facebook lives up to its corporate responsibility and makes a serious change,” she says at the end of the letter. “In interviews and advertisements, you have stated that the new EU data protection regulations are the standard worldwide for the social network. Whether Facebook consistently implements this view, unfortunately, seems questionable,” she continues, critically flagging Facebook’s decision to switch the data controller status of ~1.5BN international users this month so they will no longer be under the jurisdiction of EU law, before adding: “I will therefore keep a close eye on the further measures taken by Facebook.”

Since revelations about Cambridge Analytica’s use of Facebook data snowballed into a global privacy scandal for the company this spring, the company has revealed a series of changes which it claims are intended to bolster data protection on its platform.

Although, in truth, many of the tweaks Facebook has announced were likely in train already — as it has been working for months (if not years) on its response to the EU’s incoming GDPR framework, which will apply from May 25.

Yet, even so, many of these measures have been roundly criticized by privacy experts, who argue they do not go far enough to comply with GDPR and will trigger legal challenges once the framework applies.

For example, a new consent flow, announced by Facebook last month, has been accused of being intentionally manipulative — and of going against the spirit of the new rules, at very least.

Barley picks up on these criticisms in her letter — calling specifically for Facebook to deliver:

  • More transparency for users
  • Real control of users’ data processing by Facebook
  • Strict compliance with privacy by default and consent in the entire ecosystem of Facebook
  • Objective, neutral, non-discriminatory and manipulation-free algorithms
  • More freedom of choice for users through various settings and uses

On consent, she emphasizes that under GDPR the company will need to obtain consent for each data use — and cannot bundle up uses to try to obtain a ‘lump-sum’ consent, as she puts it.

Yet this is pretty clearly exactly what Facebook is doing when it asks Europeans to opt into its face recognition technology: it suggests the feature could help protect users against strangers using their photos, and could aid visually impaired users on its platform, yet the consent flow offers no specific examples of the commercial uses to which Facebook will undoubtedly put the tech.

The minister also emphasizes that GDPR demands a privacy-by-default approach, and requires data collection to be minimized — saying Facebook will need to adapt all of its data processing operations in order to comply. 

Any data transfers from “friends” should also only take place with explicit consent in individual cases, she continues (consent that was of course entirely lacking in 2014 when Facebook APIs allowed a developer on its platform to harvest data on up to 87 million users — and pass the information to Cambridge Analytica).

Barley also warns explicitly that Facebook must not create shadow profiles, an especially awkward legal issue for Facebook which US lawmakers also questioned Zuckerberg closely about last month.

Facebook’s announcement this week, at its f8 conference, of an incoming Clear History button — which will give users the ability to clear past browsing data the company has gathered about them — merely underscores the discrepancies here, with tracked Facebook non-users not even getting this after-the-fact control, although tracked users also can’t ask Facebook never to track them in the first place.

Nor is it clear what Facebook does with any derivatives it gleans from this tracked personal data — i.e. whether those insights are also dissociated from an individual’s account.

Sure, Facebook might delete a web log of the sites you visited — like a gambling site or a health clinic — when you hit the button but that does not mean it’s going to remove all the inferences it’s gleaned from that data (and added to the unseen profile it holds of you and uses for ad targeting purposes).

Safe to say, the value of the Clear History button looks mostly like PR for Facebook — so the company can point to it and claim it’s offering users another ‘control’ as a strategy to try to deflect lawmakers’ awkward questions (just such disingenuousness was on ample show in Congress last month — and has also been publicly condemned by the UK parliament).

We asked Facebook our own series of questions about how Clear History operates, and why — for example — it is not offering users the ability to block tracking entirely. After multiple emails on this topic, over two days, we’re still waiting for the company to answer anything we asked.

Facebook’s processing of non-users’ data, collected via tracking pixels and social plugins across other popular web services, has already got Facebook into hot water with some European regulators. Under GDPR it will certainly face fresh challenges to any consent-less handling of people’s data — unless it radically rethinks its approach, and does so in less than a month. 
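The tracking pixels mentioned above work by embedding a tiny image in a third-party page whose request URL encodes the page being visited. A minimal sketch of the mechanism — with invented parameter names and tracker domain, not Facebook’s actual pixel API — shows why such requests yield data even on people with no account:

```python
from urllib.parse import urlencode, parse_qs, urlparse

# Illustrative sketch only: the "page"/"uid" parameters and the tracker
# domain are hypothetical, not Facebook's real pixel implementation.
def pixel_url(page_url, user_cookie=None):
    """Build the URL a 1x1 tracking-pixel <img> tag would request."""
    params = {"page": page_url}
    if user_cookie:
        params["uid"] = user_cookie  # only present for recognized users
    return "https://tracker.example/px.gif?" + urlencode(params)

# Even with no user id, the tracker's server log still records which page
# was visited (plus the visitor's IP and headers) -- data on a non-user.
url = pixel_url("https://news.example/article-42")
logged = parse_qs(urlparse(url).query)
print(logged["page"][0])  # page visited is known regardless of login state
```

The point of the sketch is that the browser makes the image request automatically on page load, so no consent interaction ever occurs — which is exactly the consent-less handling GDPR puts under pressure.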

In her letter, Barley also raises concerns around the misuse of Facebook’s platform for political influence and opinion manipulation — saying it must take “all necessary technical and organizational measures to prevent abuse and manipulation possibilities (e.g. via fake accounts and social bots)”, and ensure the algorithms it uses are “objective, neutral and non-discriminatory”.

She says she also wants the company to disclose the actions it takes on this front in order to enable “independent review”.

Facebook’s huge sprawl and size — with its business consisting of multiple popular linked platforms (such as WhatsApp and Instagram), as well as the company deploying its offsite tracking infrastructure across the Internet to massively expand the reach of its ecosystem — “puts a special strain on the privacy and self-determination of German and European users”, she adds.

At the time of writing Facebook had not responded to multiple requests for comment about the letter.

Cambridge Analytica shuts down in light of ‘unfairly negative’ press coverage

Cambridge Analytica is done. In light of the sprawling controversy around its role in improperly obtaining data from Facebook users through a third party, the company will end its U.S. and U.K. operations.

In a press release confirming the decision, the company said that “unfairly negative media coverage” around the Facebook incident has “driven away virtually all of the Company’s customers and suppliers,” making its business no longer financially viable. The same goes for SCL Elections, a CA-affiliated company:

Earlier today, SCL Elections Ltd., as well as certain of its and Cambridge Analytica LLC’s U.K. affiliates (collectively, the “Company” or “Cambridge Analytica”) filed applications to commence insolvency proceedings in the U.K.  The Company is immediately ceasing all operations…

Additionally, parallel bankruptcy proceedings will soon be commenced on behalf of Cambridge Analytica LLC and certain of the Company’s U.S. affiliates in the United States Bankruptcy Court for the Southern District of New York.

On Wednesday, just before the company went public with its news, Gizmodo reported that employees of Cambridge Analytica’s U.S. offices learned that their jobs were being terminated when they were ordered to hand over their company keycards.

Given its already fairly shadowy business practices, it remains to be seen if this is really the end for Cambridge Analytica or just a strategic rebrand while it waits for the “siege” of negative media coverage to cool off.

Probably the latter, since the U.K.-based SCL Group, the mothership in the constellation of associated companies, is not going out of business. Nor are its many other ventures, including a new one, Emerdata, which several former CA leaders have recently moved to.

Facebook denied a stay to Schrems II privacy referral

Facebook’s attempt to block a series of legal questions relating to a long-running EU privacy case from being referred to Europe’s top court has been thrown out by Ireland’s High Court.

Earlier this week the company’s lawyers had asked the Irish High Court to stay the referral to the CJEU of a number of key legal questions pertaining to existing data transfer mechanisms that are being used by thousands of companies (Facebook included) to authorize flows of personal data outside the bloc.

Both the lawfulness of Standard Contractual Clauses and the EU-US Privacy Shield mechanism are now facing questions as a result of this challenge.

However in a ruling today the Irish High Court denied the company’s request for a stay on the CJEU referral — with the judge ordering the referral to be immediately delivered to the Court of Justice, and emphasizing the risk that “millions” of EU data subjects, including privacy campaigner and lawyer Max Schrems whose complaint triggered the court case and subsequent referral, could be having their data processed unlawfully.

“In my opinion very real prejudice is potentially suffered by Mr Schrems and the millions of EU data subjects if the matter is further delayed by a stay as sought in this case,” writes Ms Justice Costello.

She also criticizes Facebook for delaying tactics, and for not making it clear that its appeal against the referral — which Facebook still intends to pursue in the Irish Supreme Court — relates to a time-bound argument that the decision is moot because of an incoming update to EU privacy law (the GDPR).

“The fact that the point is only now being raised gives rise to considerable concern as to the conduct of the case by Facebook and the manner in which it has dealt with the court,” writes the judge in a withering critique.

Irish #HighCourt slaps @Facebook badly in judgment throwing out their application for a stay for the reference to the @EUCourtPress – accusing them of deliberately delaying the procedure to make it moot (which it wouldn’t be under #GDPR)..
FULL JUDGEMENT: https://t.co/CF8MaaMmGR pic.twitter.com/4D7WynuMrb

— Max Schrems (@maxschrems) May 2, 2018

In a statement on the latest developments in the case, a Facebook spokesperson told us: “We are disappointed not to have been granted a stay on the preliminary reference being made to the CJEU. We intend on continuing with seeking leave to appeal the High Court’s decision to the Irish Supreme Court.”

Schrems’ view is there’s no case for Facebook to make that the legal questions involved here are moot under GDPR, just as he says “no such appeal exists in Ireland” for Facebook to try to appeal against a referral to the CJEU via the Irish Supreme Court — even though the company is trying to do both. (But, as the judge has pointed out, it appears to like trying to buy itself time.)

Depending on how quickly the CJEU rules we’ll soon know for sure — perhaps in a little over a year’s time.

Google accused of using GDPR to impose unfair terms on publishers

A group of European and international publishers have accused Google of using an incoming update to the European Union’s data protection framework to try to push “draconian” new terms on them in exchange for continued access to its ad network — which many publishers rely on to monetize their content online.

Google trailed the terms as incoming in late March, while the new EU regulation — GDPR — is due to apply from May 25.

“[W]e find it especially troubling that you would wait until the last-minute before the GDPR comes into force to announce these terms as publishers have now little time to assess the legality or fairness of your proposal and how best to consider its impact on their own GDPR compliance plans which have been underway for a long time,” they write in a letter to the company dated April 30. “Nor do we believe that this meets the test of creating a fair, transparent and predictable business environment of the kind required by the draft Regulation COM (2018) 238 final published 26 April 2018 [an EU proposal which relates to business users of online intermediation services].”

The GDPR privacy framework both tightens consent requirements for processing the personal data of EU users and beefs up enforcement for data protection violations, with fines able to scale as high as four per cent of a company’s global annual turnover — substantially inflating the legal liabilities around the handling of any personal data which falls under its jurisdiction.

And while the law is intended to strengthen EU citizens’ fundamental rights by giving them more control over how their data is used, publishers are accusing Google of attempting to use the incoming framework as an opportunity to enforce an inappropriate “one-size fits all” approach to compliance on its publisher customers and their advertisers.

“Your proposal severely falls short on many levels and seems to lay out a framework more concerned with protecting your existing business model in a manner that would undermine the fundamental purposes of the GDPR and the efforts of publishers to comply with the letter and spirit of the law,” the coalition of publishers write to Google.

One objection they have is that Google is apparently intending to switch its status from that of a data processor of publishers’ data — i.e. the data Google receives from publishers and collects from their sites — to a data controller which they claim will enable it to “make unilateral decisions about how a publisher’s data is used”.

Though for other Google services, such as its web analytics product, the company has faced the opposite accusation: i.e. that it’s claiming it’s merely a data processor — yet giving itself expansive rights to use the data that’s gathered, rather like a data controller…

The publishers also say Google wants them to obtain valid legal consent from users to the processing of their data on its behalf — yet isn’t providing them with information about its intended uses of people’s data, which they would need to know in order to obtain valid consent under GDPR.

“[Y]ou refuse to provide publishers with any specific information about how you will collect, share and use the data. Placing the full burden of obtaining new consent on the publisher is untenable without providing the publisher with the specific information needed to provide sufficient transparency or to obtain the requisite specific, granular, and informed consent under the GDPR,” they write.

“If publishers agree to obtain consent on your behalf, then you must provide the publisher with detailed information for each use of the personal data for which you want publishers to ask for legally valid consent and model language to obtain consent for your activities.”

Nor do individual publishers necessarily want to have to use consent as the legal basis for processing their users’ personal data (other options are available under the law, though a legal basis is always required) — but they argue that Google’s one-size proposal doesn’t allow for alternatives.

“Some publishers may want to rely upon legitimate interest as a legal basis and since the GDPR calls for balancing several factors, it may be appropriate for publishers to process data under this legal basis for some purposes,” they note. “Our members, as providers of the news, have different purposes and interests for participating in the digital advertising ecosystem. Yet, Google’s imposition of an essentially self-prescribed one-size-fits-all approach doesn’t seem to take into account or allow for the different purposes and interests publishers have.”

They are also concerned Google is trying to transfer liability for obtaining consent onto publishers — asserting: “Given that your now-changed terms are incorporated by reference into many contracts under which publishers indemnify Google, these terms could result in publishers indemnifying Google for potentially ruinous fines. We strongly encourage you to revise your proposal to include mutual indemnification provisions and limitations on liability. While the exact allocation of liability should be negotiated by individual publishers, your current proposal represents a ‘take it or leave it’ disproportionate approach.”

They also accuse Google of risking acting in an anti-competitive manner because the proposed terms state that Google may stop serving ads on publisher sites if it deems a publisher’s consent mechanism to be “insufficient”.

“If Google then dictates how that mechanism would look and prescribes the number of companies a publisher can work with, this would limit the choice of companies that any one publisher can gather consent for, or integrate with, to a very small number defined by Google. This gives rise to grave concerns in terms of anti-competitive behavior as Google is in effect dictating to the market which companies any publisher can do business with,” they argue.

They end the letter, which is addressed to Google’s CEO Sundar Pichai, with a series of questions for the company which they say they need answers to — including how and why Google believes its legal relationship to publishers’ data would be a data controller; whether it will seek publisher input ahead of making future changes to its terms for accessing its advertiser services; and how Google’s services could be integrated into an industry-wide consent management platform — should publishers decide to make use of one.

Commenting in a statement, Angela Mills Wade, executive director of the European Publishers Council and one of the signatories to the letter, said: “As usual, Google wants to have its cake and eat it. It wants to be data controller — of data provided by publishers — without any of the legal liability — and with apparently total freedom to do what they like with that data. Publishers have trusted relationships with their readers and advertisers — how can we get consent from them without being in a position to tell them what they are consenting to? And why should we be legally liable for any abuses when we have no control or prior knowledge? By imposing their own standard for regulatory compliance, Google effectively prevents publishers from being able to choose which partners to work with.”

The other publishers signing the letter are Digital Content Next, News Media Alliance and News Media Association.

We put some of their questions to Google — and the company rejected the suggestion that it is seeking additional rights over publishers’ data, sending us the following statement:

Guidance about the GDPR is that consent is required for personalised advertising. We have always asked publishers to get consent for the use of our ad tech on their sites, and now we’re simply updating that requirement in line with the GDPR. Because we make decisions on data processing to help publishers optimize ad revenue, we will operate as a controller across our publisher products in line with GDPR requirements, but this designation does not give us any additional rights to their data. We’re working closely with our publisher partners and are committed to providing a range of tools to help them gather user consent.

A spokesperson for the company also noted that, under GDPR, controller status merely reflects that an involved entity is more than a data processor for a specific service, also pointing out that Google’s contracts define the limits of what can be done with data in such instances.

The spokesperson further emphasized that Google is not asking publishers to obtain consent from Google’s users, but for their own users on their own sites and for the use of ad tech on those sites — noting this could be one of Google’s ad products or someone else’s.

In terms of timing, the Google rep added that the company would have liked to put the new ad policy out earlier, but said that guidance on consent from the EU’s Article 29 Working Party only came out in draft form in December, noting also that it continues to be revised.