Apple’s new ecosystem world order and the privacy economy

Apple’s splashy new product announcements at its annual Worldwide Developers Conference in San Jose also ushered in new rules of the road for its ecosystem partners that force hard turns for app makers around data ownership and control. These changes could fundamentally shift how consumers perceive and value control over the data they generate in using their devices, and that shift could change the landscape for how services are bought, consumed and sold.

A lot of privacy advocates have posited a future wherein we ascribe value to individuals’ data and potentially compensate people directly for its use. But others have rightly pointed out that, in isolation, a single individual’s data is essentially valueless: it’s only in aggregate that this data is worth anything to the companies that currently harvest it to inform their marketing and drive their product decisions.

There are many reasons why it seems unlikely that any of the companies for which user data is a primary source of revenue or a crucial aspect of their business model would shift to a direct compensation model – not the least of which is that it’s probably much cheaper, and definitely much more scalable, to build products that provide them use value in exchange instead. But that doesn’t mean privacy won’t become a crucial lever in the information economy of the next wave of innovation and tech product development.

Perils of per-datum pricing

As mentioned, the mechanics of directly selling your data to a company are problematic at best, and unworkable at worst.

One big issue is that there’s bound to be a scale limit on any subscription-based product. In a world where subscriptions are increasingly the preferred model for media companies, food and packaged goods delivery, and even car ownership alternatives, there’s clearly a cap on how much of their income consumers are willing to commit to these kinds of recurring costs.

Brexit data transfer gaps a risk for UK startups, MPs told

The uncertainty facing digital businesses as a result of Brexit was front and center during a committee session in the UK parliament today, with experts including the UK’s information commissioner responding to MPs’ questions about how and even whether data will continue to flow between the UK and the European Union once the country has departed the bloc — in just under a year’s time, per the current schedule.

The risks for UK startups vs tech giants were also flagged, with concerns voiced that larger businesses are better placed to weather Brexit-based uncertainty thanks to greater resources at their disposal to plug data transfer gaps resulting from the political upheaval.

Information commissioner Elizabeth Denham emphasized the overriding importance of the UK data protection bill being passed. Though that’s really just a baby step where the Brexit negotiations are concerned.

Parliamentarians have another vote on the bill this afternoon, during its third reading, and the legislative timetable is tight, given that the pan-EU General Data Protection Regulation (GDPR) takes direct effect on May 25 — and many provisions in the UK bill are intended to bring domestic law into line with that regulation, and complete implementation ahead of the EU deadline.

Despite the UK referendum vote to pull the country out of the EU, the government has committed to complying with GDPR — which ministers hope will lay a strong foundation for it to secure a future agreement with the EU that allows data to continue flowing, as is critical for business. Although what exactly that future data regime might be remains to be seen — and various scenarios were discussed during today’s hearing — hence there’s further operational uncertainty for businesses in the years ahead.

“Getting the data policy right is of critical importance both on the commercial side but also on the security and law enforcement side,” said Denham. “We need data to continue to flow and if we’re not part of the unified framework in the EU then we have to make sure that we’re focused and we’re robust about putting in place measures to ensure that data continues to flow appropriately, that it’s safeguarded and also that there is business certainty in advance of our exit from the EU.

“Data underpins everything that we do and it’s critically important.”

Another witness to the committee, James Mullock, a partner at law firm Bird & Bird, warned that the Brexit-shaped threat to UK-EU data flows could result in a situation akin to what happened after the long-standing Safe Harbor arrangement between the EU and the US was struck down in 2015 — leaving thousands of companies scrambling to put in place alternative data transfer mechanisms.

“If we have anything like that it would be extremely disruptive,” warned Mullock. “And it will, I think, be extremely off-putting in terms of businesses looking at where they will headquarter themselves in Europe. And therefore the long term prospects of attracting businesses from many of the sectors that this country supports so well.”

“Essentially what you’re doing is you’re putting the burden on business to find a legal agreement or a legal mechanism to agree data protection standards on an overseas recipient so all UK businesses that receive data from Europe will be having to sign these agreements or put in place these mechanisms to receive data from the European Union which is obviously one of our very major senders of data to this country,” he added of the alternative legal mechanisms fall-back scenario.

Another witness, Giles Derrington, head of Brexit policy for UK technology advocacy organization TechUK, explained how the collapse of Safe Harbor had saddled businesses with a heavy bureaucratic burden — and went on to suggest that a similar scenario befalling the UK as a result of Brexit could put domestic startups at a big disadvantage vs tech giants.

“We had a member company who had to put in place two million Standard Contractual Clauses over the space of a month or so [after Safe Harbor was struck down],” he told the committee. “The amount of cost, time, effort that took was very, very significant. That’s for a very large company.

“The other side of this is the alternatives are highly exclusionary — or could be highly exclusionary to smaller businesses. If you look at India for example, who have been trying to get an adequacy agreement with the EU for about ten years, what you’ve actually found now is a gap between those large multinationals, who can put in place binding corporate rules, standard contractual clauses, have the kind of capital to be able to do that — and it gives them an access to the European market which frankly most smaller businesses don’t have from India.

“We obviously wouldn’t want to see that in a UK tech sector which is an awful lot of startups, scale-ups, and is a key part of the ecosystem which makes the UK a tech hub within Europe.”

Denham made a similar point. “Binding corporate rules… might work for multinational companies [as an alternative data transfer mechanism] that have the ability to invest in that process,” she noted. “Codes of conduct and certification are other transfer mechanisms that could be used but there are very few codes of practice and certification mechanisms in place at this time. So, although that could be a future transfer mechanism… we don’t have codes and certifications that have been approved by authorities at this time.”

“I think it would be easier for multinational companies and large companies, rather than small businesses and certainly microbusinesses, that make up the lion’s share of business in the UK, especially in tech,” she added of the fall-back scenarios.

Giving another example of the scale of the potential bureaucracy nightmare, Stephen Hurley, head of Brexit planning and policy for UK ISP British Telecom, told the committee it has more than 18,000 suppliers. “If we were to put in place Standard Contractual Clauses it would be a subset of those suppliers but we’d have to identify where the flows of data would be coming from — in particular from the EU to the UK — and put in place those contractual clauses,” he said.

“The other problem with the contractual clauses is they’re a set form, they’re a precedent form that the Commission issues. And again that isn’t necessarily designed to deal with the modern ways of doing business — the way flows of data occurs in practice. So it’s quite a cumbersome process. And… [there’s] uncertainty as well, given they are currently under challenge before the European courts, a lot of companies now are already doing a sort of ‘belt and braces’ where even if you rely on Privacy Shield you’ll also put in place an alternative transfer mechanism to allow you to have a fall back in case one gets temporarily removed.”

A better post-Brexit scenario than every UK business having to do the bureaucratic and legal leg-work themselves would be the UK government securing a new data flow arrangement with the EU. Not least because, as Hurley mentioned, Standard Contractual Clauses are subject to a legal challenge, with legal question marks now extended to Privacy Shield too.

But what shape any such future UK-EU data transfer arrangement could take remains to be determined.

The panel of witnesses agreed that personal data flows would be very unlikely to be housed within any future trade treaty between the UK and the EU. Rather, data would need to live within a separate treaty or bespoke agreement, if indeed such a deal can be achieved.

Another possibility is for the UK to receive an adequacy decision from the European Commission — such as the Commission has granted to other third countries (like the US, partially, via the Privacy Shield framework). But there was consensus on the panel that some form of bespoke data arrangement would be a superior outcome — for legal reasons but also for reciprocity and more.

Mullock’s view is that a treaty would be preferable, as it would be at lesser risk of legal challenge. “I’m saying a treaty is preferable to a decision but we should take what we can get,” he said. “But a treaty is the ultimate standard to aim for.”

Denham agreed, underlining how an adequacy decision would be much more limiting. “I would say that a bespoke agreement or a treaty is preferable because that implies mutual recognition of each of our data protection frameworks,” she said. “It contains obligations on both sides, it would contain dispute mechanisms. If we look at an adequacy decision by the Commission that is a one-way decision judging the standard of UK law and the framework of UK law to be adequate according to the Commission and according to the Council. So an agreement would be preferable but it would have to be a standalone treaty or a standalone agreement that’s about data — and not integrate it into a trade agreement because of the fundamental rights element of data protection.”

Such a bespoke arrangement could also offer a route for the UK to negotiate and retain some role for her office within EU data protection regulation after Brexit.

Because as it stands, with the UK set to exit the EU next year — and even if an adequacy decision was secured — the ICO will lose its seat at the table at a time when EU privacy laws are setting the new global standard, thanks to GDPR.

“Unless a role for the ICO was negotiated through a bespoke agreement or a treaty there’s no way in law at present that we could participate in the one-stop shop [element of GDPR, which allows for EU DPAs to co-ordinate regulatory actions] — which would bring huge advantages to both sides and also to British businesses,” said Denham.

“At this time when the GDPR is in its infancy, participating in shaping and interpreting the law I think is really important. And the group of regulators that sit around the table at the EU are the most influential blocs of regulators — and if we’re outside of that group and we’re an observer we’re not going to have the kind of effect that we need to have with big tech companies. Because that’s all going to be decided by that group of regulators.”

“The European Data Protection Board will set the weather when it comes to standards for artificial intelligence, for technologies, for regulating big tech. So we will be a less influential regulator, we will continue to regulate the law and protect UK citizens as we do now, but we won’t be at the leading edge of interpreting the GDPR — and we won’t be bringing British values to that table if we’re not at the table,” she added.

Hurley also made the point that if the ICO is not inside the GDPR one-stop shop mechanism then UK companies will have to choose another data protection agency within the EU to act as their lead regulator — describing this as “again another burden which we want to avoid”.

The panel was asked about opportunities for domestic divergence from elements of GDPR once the UK is outside the EU. But no one saw much advantage to be gained by diverging from a regulatory regime that now sets the de facto global standard for data protection.

“GDPR is by no means perfect and there are a number of issues that we have with it. Having said that because GDPR has global reach it is now effectively being seen as we have to comply with this at an international level by a number of our largest members, who are rolling it out worldwide — not just Europe-wide — so the opportunities for divergence are quite limited,” said Derrington. “Particularly actually in areas like AI. AI requires massive amounts of data sets. So you can’t do that just from a UK only data-set of 60 million people if you took everyone. You need more data than that.

“If you were to use European data, which most of them would, then that will require you to comply with GDPR. So actually even if you could do things which would make it easier for some of the AI processes to happen by doing so you’d be closing off your access to the data-sets — and so most of the companies I’ve spoken to… see GDPR as that’s what we’re going to have to comply with. We’d much rather it be one rule… and to be able to maintain access to [EU] data-sets rather than just applying dual standards when they’re going to have to meet GDPR anyway.”

He also noted that about two-thirds of TechUK members are small and medium sized businesses, adding: “A small business working in AI still needs massive amounts of data.

“From a tech sector perspective, considering [where] data protection sits in the public consciousness now, [we] actually don’t see there being much opportunity to change GDPR. I don’t think that’s necessarily where the centre of gravity amongst the public is — if you look at the data protection bill, as it went through both houses, most of the amendments to the bill were to go further, to strengthen data protection. So actually we don’t necessarily see this idea that we will significantly walk back GDPR. And bear in mind that any company [that is] doing any work with the EU would have to comply with GDPR anyway.”

The possibility of legal challenges to any future UK-EU data arrangement was also discussed during the hearing, with Denham saying that scrutiny of the UK’s surveillance regime once it is outside the EU is inevitable — though she suggested the government will be able to win over critics if it can fully articulate its oversight regime.

“Whether the UK proceeds with an adequacy assessment or whether we go down the road of looking at a bespoke agreement or a treaty we know, as we’ve seen with the Privacy Shield, that there will be scrutiny of our intelligence services and the collection, use and retention of data. So we can expect that,” she said, before arguing the UK has a “good story” to tell on that front — having recently reworked its domestic surveillance framework, including accepting the need to amend the law following legal challenges.

“Accountability, transparency and oversight of our intelligence service needs to be explained and discussed to our [EU] colleagues but there is no doubt that it will come under scrutiny — and my office was part of the most recent assessment of the Privacy Shield. And looking at the US regime. So we’re well aware of the kind of questions that are going to be asked — including our arrangement with the Five Eyes, so we have to be ready for that,” she added.

iOS will soon disable USB connection if left locked for a week

In a move seemingly designed specifically to frustrate law enforcement, Apple is adding a security feature to iOS that totally disables data being sent over USB if the device isn’t unlocked for a period of 7 days. This spoils many methods for exploiting that connection to coax information out of the device without the user’s consent.

The feature, called USB Restricted Mode, was first noticed by Elcomsoft researchers looking through the iOS 11.4 code. It disables USB data (it will still charge) if the phone is left locked for a week, re-enabling it if it’s unlocked normally.

Normally when an iPhone is plugged into another device, whether it’s the owner’s computer or another, there is an interchange of data where the phone and computer figure out if they recognize each other, if they’re authorized to send or back up data, and so on. This connection can be taken advantage of if the computer being connected to is attempting to break into the phone.
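Apple hasn’t published how the feature is implemented, but the reported behavior boils down to a simple timer check. Here is a minimal conceptual sketch in Python (all names are illustrative, not Apple’s actual APIs):

```python
from datetime import datetime, timedelta

USB_DATA_WINDOW = timedelta(days=7)  # reported lockout period in iOS 11.4

class Device:
    """Toy model of USB Restricted Mode's reported logic."""

    def __init__(self):
        self.last_unlocked = datetime.now()

    def unlock(self, passcode_ok: bool) -> None:
        # A normal unlock resets the clock, re-enabling USB data.
        if passcode_ok:
            self.last_unlocked = datetime.now()

    def usb_data_allowed(self) -> bool:
        # Charging always works; only the data lines are gated.
        return datetime.now() - self.last_unlocked < USB_DATA_WINDOW
```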

USB Restricted Mode is likely a response to the fact that iPhones seized by law enforcement, or by malicious actors like thieves, will essentially sit and wait patiently for this kind of software exploit to be applied to them. If an officer collects a phone during a case, but there are no known ways to force open the version of iOS it’s running, no problem: just stick it in evidence and wait until some security contractor sells the department a 0-day.

But what if, a week after that phone was taken, it shut down its own Lightning port’s ability to send or receive data or even recognize it’s connected to a computer? That would prevent the law from ever having the opportunity to attempt to break into the device unless they move with a quickness.

On the other hand, had its owner simply left the phone at home while on vacation, they could pick it up, put in their PIN, and it’s like nothing ever happened. Like the very best security measures, it will have adversaries cursing its name while users may not even know it exists. Really, this is one of those security features that seems obvious in retrospect, and I would not be surprised if other phone makers copy it in short order.

Had this feature been in place a couple of years ago, it would have prevented the entire drama with the FBI, which milked its ongoing inability to access a target phone for months, reportedly concealing its own capabilities all the while, likely to make it a political issue and pressure lawmakers into compelling Apple to help. That kind of grandstanding doesn’t work so well on a 7-day deadline.

It’s not a perfect solution, of course, but there are no perfect solutions in security. This may simply force all iPhone-related investigations to get high priority in courts, so that existing exploits can be applied legally within the 7-day limit (and, presumably, every few days thereafter). All the same, it should be a powerful barrier against the kind of eventual, potential access through undocumented exploits from third parties that seems to threaten even the latest models and OS versions.

Twitter has an unlaunched ‘Secret’ encrypted messages feature

Buried inside Twitter’s Android app is a “Secret conversation” option that, if launched, would allow users to send encrypted direct messages. The feature could make Twitter a better home for sensitive communications that often end up on encrypted messaging apps like Signal, Telegram or WhatsApp.

The encrypted DMs option was first spotted inside the Twitter for Android application package (APK) by Jane Manchun Wong. APKs often contain code for unlaunched features that companies are quietly testing or will soon make available. A Twitter spokesperson declined to comment on the record. It’s unclear how long it might be before Twitter officially launches the feature, but at least we know it’s been built.

The appearance of encrypted DMs comes 18 months after whistleblower Edward Snowden asked Twitter CEO Jack Dorsey for the feature, which Dorsey said was “reasonable and something we’ll think about.”

Twitter has gone from “thinking about” the feature to prototyping it. The screenshot above shows the options to learn more about encrypted messaging, start a secret conversation and view both your own and your conversation partner’s encryption keys to verify a secure connection.

reasonable and something we’ll think about

— jack (@jack) December 14, 2016
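Twitter hasn’t said which protocol the prototype uses, but letting both parties view encryption keys "to verify a secure connection" usually means comparing public-key fingerprints out of band, the same pattern behind Signal’s and WhatsApp’s safety numbers. A generic sketch of that idea in Python (the Diffie-Hellman curve and fingerprint format here are assumptions, not Twitter’s confirmed design):

```python
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

def fingerprint(public_key) -> str:
    # A short, human-comparable hash of the raw public key; if what each
    # user sees matches what the other publishes, there is no man-in-the-middle.
    raw = public_key.public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )
    return hashlib.sha256(raw).hexdigest()[:16]

# Each party generates a key pair; ECDH yields the same shared secret
# on both ends, which would then key the message encryption.
alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()
assert alice.exchange(bob.public_key()) == bob.exchange(alice.public_key())

print(fingerprint(alice.public_key()), fingerprint(bob.public_key()))
```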

Twitter’s DMs have become a powerful way for people to contact strangers without needing their phone number or email address. Whether it’s to send a reporter a scoop, warn someone of a problem, discuss business or just “slide into their DMs” to flirt, Twitter has established one of the most open messaging mediums. But without encryption, those messages are subject to snooping by governments, hackers or Twitter itself.

Twitter has long positioned itself as a facilitator of political discourse and even uprisings. But anyone seriously worried about the consequences of political dissidence, whistleblowing or leaking should be using an app like Signal that offers strong end-to-end encryption. Launching encrypted DMs could win back some of those change-makers and protect those still on Twitter.

Toward transitive data privacy and securing the data you don’t share

Anshu Sharma
Contributor

Anshu Sharma is a serial entrepreneur and a former venture partner at Storm Ventures.

We are spending a lot of time discussing what happens to data when you explicitly or implicitly share it. But what about data that you have never ever shared?

Your cousin’s DNA

We all share DNA — after all, it seems we are all descendants of a few tribes. But the more closely related you are, the closer the DNA match. We share about 50 percent of our DNA with siblings and about 12.5 percent with first cousins — and there is still some meaningful match even between distant relatives, shrinking with family tree distance.

In short, if you have never taken a DNA test but one or more of your blood relatives has, and shared that data  —  some of your DNA is effectively now available for a match.
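How much of “your” DNA leaks this way is easy to estimate: the expected overlap falls by a factor of four with each step outward in the family tree. A back-of-the-envelope model in Python (real-world shares vary, because recombination is random):

```python
def expected_shared_dna(up_a: int, up_b: int) -> float:
    """Expected fraction of DNA shared via a pair of common ancestors,
    where up_a/up_b are each person's generations up to those ancestors."""
    return 2 * 0.5 ** (up_a + up_b)

print(expected_shared_dna(1, 1))  # siblings:       0.5     (50%)
print(expected_shared_dna(2, 2))  # first cousins:  0.125   (12.5%)
print(expected_shared_dna(3, 3))  # second cousins: 0.03125 (~3%)
```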

While this may have seemed like a hypothetical a few weeks ago, investigators caught the Golden State Killer suspect using exactly this method.

Cambridge Analytica

A similar thing happened when data was misused by Cambridge Analytica. Even if you never used the quiz app on the Facebook platform, if your friends did, they essentially revealed private information about you without your consent or knowledge.

The number of users who took the quiz was shockingly small — only 300,000 participated. And yet upwards of 50 million (by some counts as many as 87 million) people eventually had their data collected by Cambridge Analytica. That works out to roughly 290 people exposed, on average, for every one person who opted in.

And all of this was done legally and while complying with the platform requirements at that time.

Transitive data privacy

The word transitive simply means that if A is related to B in a certain way, and B to C, then A is related to C. Kinship, for example, behaves roughly transitively: if Alice and Bob are cousins, and Bob and Chamath are cousins, then Alice and Chamath are very likely relatives too.

As private citizens and as corporations, we now have to think about transitive data privacy loss.

The simplest version of this is if your boyfriend or girlfriend forwards your private photo or conversation screenshot to someone else.
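In graph terms, your effective exposure is the transitive closure of the sharing relationship: everyone your data can reach through any chain of shares. A minimal sketch, with a made-up sharing graph:

```python
from collections import deque

def exposure(shares: dict, owner: str) -> set:
    """Breadth-first walk: everyone reachable from the owner's shares."""
    reached, queue = set(), deque([owner])
    while queue:
        person = queue.popleft()
        for recipient in shares.get(person, []):
            if recipient not in reached and recipient != owner:
                reached.add(recipient)
                queue.append(recipient)
    return reached

shares = {"Alice": ["Bob"], "Bob": ["Chamath"], "Chamath": ["DataBrokerCo"]}
print(exposure(shares, "Alice"))  # {'Bob', 'Chamath', 'DataBrokerCo'}
```

Alice only ever shared with Bob, yet her data ends up three hops away.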

Transitive sharing upside

While we have discussed a couple of clear negative examples, there are many ways transitive data relationships help us.

Every time you ask a friend to connect you to someone on LinkedIn for a job or fundraise, you are leveraging the transitive relationship graph.

The DNA databases being created are primarily for social good  —  to help us connect with our roots and family, detect disease early and help medical research.

In fact, you could argue that a lot of the challenges we face today require more data sharing, not less. If your hospital cannot share data with your primary care doctor at the right time, or your clinical trial data cannot be accessed to monitor downstream effects, we cannot take care of our citizens’ health as we should. Organizations like the NIH, the VA and CMS (Medicare) are working hard to make appropriate sharing by healthcare providers easier.

Further, the good news is that there have been significant advances in encryption, hashing and related security techniques that enable companies to protect against these unintended side effects. More research is definitely called for. We can anonymize data, we can perturb data, and we can apply these techniques for protection while still being able to derive value and help customers.
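Two of those techniques in their simplest form, as a toy sketch: pseudonymizing identifiers with a salted hash, and perturbing numeric values with random noise (the Laplace mechanism familiar from differential privacy):

```python
import hashlib
import random

SALT = b"rotate-and-protect-this"  # illustrative only; manage as a secret

def pseudonymize(user_id: str) -> str:
    # Salted hash: stable enough to join records, but hard to
    # reverse without the salt.
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:12]

def perturb(value: float, scale: float = 1.0) -> float:
    # Laplace noise (difference of two exponentials): individual
    # records blur while large aggregates remain accurate.
    return value + random.expovariate(1 / scale) - random.expovariate(1 / scale)

print(pseudonymize("alice@example.com"), perturb(42.0))
```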

We love augmented reality, but let’s fix things that could become big problems

Cyan Banister
Contributor

Cyan Banister is a partner at Founders Fund, where she invests across sectors and stages with a particular interest in augmented reality, fertility, heavily regulated industries and businesses that help people with basic skills find meaningful work.
Alex Hertel
Contributor

Alex Hertel is the co-founder of Xperiel.

Augmented Reality (AR) is still in its infancy and has a very promising youth and adulthood ahead. It has already become one of the most exciting, dynamic, and pervasive technologies ever developed. Every day someone is creating a novel way to reshape the real world with a new digital innovation.

Over the past couple of decades, the Internet and smartphone revolutions have transformed our lives, and AR has the potential to be that big. We’re already seeing AR act as a catalyst for major change, driving advances in everything from industrial machines to consumer electronics. It’s also pushing new frontiers in education, entertainment, and health care.

But as with any new technology, there are inherent risks we should acknowledge, anticipate, and deal with as soon as possible. If we do so, these technologies are likely to continue to thrive. Some industry watchers are forecasting a combined AR/VR market value of $108 billion by 2021, as businesses of all sizes take advantage of AR to change the way their customers interact with the world around them in ways previously only possible in science fiction.

As wonderful as AR is and will continue to be, there are some serious privacy and security pitfalls, including dangers to physical safety, that as an industry we need to collectively avoid. There are also ongoing threats from cyber criminals and nation states bent on political chaos and worse — to say nothing of teenagers who can be easily distracted and fail to exercise judgement — all creating virtual landmines that could slow or even derail the success of AR. We love AR, and that’s why we’re calling out these issues now to raise awareness.

Ready Player One

Without widespread familiarity with the potential pitfalls, as well as robust self-regulation, AR will not only suffer from systemic security issues, it may also be subject to stringent government oversight that slows innovation, or even threatens existing First Amendment rights. In a climate where technology has come under attack from many fronts for unintended consequences and vulnerabilities — including Russian interference with the 2016 election as well as ever-growing incidents of hacking and malware — we should work together to make sure this doesn’t happen.

If anything causes government overreach in this area, it’ll likely be safety and privacy issues. An example of these concerns is shown in this dystopian video, in which a fictional engineer is able to manipulate both his own reality and that of others via retinal AR implants. Because AR by design blurs the divide between the digital and real worlds, threats to physical safety, job security, and digital identity can emerge in ways that were simply inconceivable in a world populated solely by traditional computers.

While far from exhaustive, the lists below present some of the pitfalls, as well as possible remedies for AR. Think of these as a starting point, beginning with pitfalls:

  • AR can cause big identity and property problems: Catching Pokémon on a sidewalk or receiving a Valentine on a coffee cup at Starbucks is really just scratching the surface of AR capabilities. On a fundamental level, we could lose the power to control how people see us. Imagine a virtual, 21st century equivalent of a sticky note with the words “kick me” stuck to some poor victim’s back. What if that note were digital, and the person couldn’t remove it? Even more seriously, AR could be used to create a digital doppelganger of someone doing something compromising or illegal. AR might also be used to add indelible graffiti to a house, business, sign, product, or art exhibit, raising some serious property concerns.
  • AR can threaten our privacy: Remember Google Glass and “Glassholes?” If a woman was physically confronted in a San Francisco dive bar just for wearing Google Glass (reportedly, her ability to capture the happenings at the bar on video was not appreciated by other patrons), imagine what might happen with true AR and privacy. We may soon see the emergence of virtual dressing rooms, which would allow customers to try on clothing before purchasing online. A similar technology could be used to overlay virtual nudity onto someone without their permission. With AR wearables, for example, someone could surreptitiously take pictures of another person and publish them in real time, along with geotagged metadata. There are clear points at which the problem moves from the domain of creepiness to harassment and potentially to a safety concern.
  • AR can cause physical harm: Although hacking bank accounts and IoT devices can wreak havoc, these events don’t often lead to physical harm. With AR, however, the stakes change drastically because it is superimposed on the real world. AR can increase distractions and make travel more hazardous. As it becomes more common, over-reliance on AR navigation will leave consumers vulnerable to buggy or hacked GPS overlays that can manipulate drivers or pilots, making our outside world less safe. For example, if a bus driver’s AR headset or heads-up display starts showing illusory deer on the road, that’s a clear physical danger to pedestrians, passengers, and other drivers.
  • AR could launch disturbing career arms races: As AR advances, it can improve everything from individual productivity to worker data access, significantly impacting job performance. Eventually, workers with AR training and experience might be preferred over those without it. That could lead to an even wider gap between so-called digital elites and those without such digital familiarity. More disturbingly, we might see something of an arms race in which a worker with eye implants, as depicted in the film mentioned above, might perform with higher productivity, thereby creating a competitive advantage over those who haven’t had the surgery. The person in the next cubicle could then feel pressure to do the same just to remain competitive in the job market.

How can we address and resolve these challenges? Here are some initial suggestions and guidelines to help get the conversation started:

  • Industry standards: Establish a sort of AR governing body that would evaluate, debate and then publish standards for developers to follow. Along with this, develop a centralized digital service akin to air traffic control for AR that classifies public, private and commercial spaces as well as establishes public areas as either safe or dangerous for AR use.
  • A comprehensive feedback system: Communities should feel empowered to voice their concerns. When it comes to AR, a strong and responsive mechanism for reporting vendors that don’t comply with AR safety, privacy, and security standards will go a long way toward driving consumer trust in next-gen AR products.
  • Responsible AR development and investment: Entrepreneurs and investors need to care about these issues when developing and backing AR products. They should follow a basic moral compass and not simply chase dollars and market share.
  • Guardrails for real-time AR screenshots: Rather than disallowing real-time AR screenshots entirely, instead control them through mechanisms such as geofencing (see the sketch after this list). For example, an establishment such as a nightclub would need to set and publish its own rules, which are then enforced by hardware or software.
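To make the geofencing suggestion concrete, here is a toy policy check that gates AR capture by distance from registered venues. The venue registry and the default-allow rule are hypothetical design choices, not an existing standard:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def capture_allowed(device, venues):
    # Inside a registered venue, that venue's published rule wins;
    # elsewhere, default-allow (itself a policy choice worth debating).
    for v in venues:
        if haversine_m(device["lat"], device["lon"], v["lat"], v["lon"]) <= v["radius_m"]:
            return v["allows_capture"]
    return True

venues = [{"lat": 37.7749, "lon": -122.4194, "radius_m": 50, "allows_capture": False}]
print(capture_allowed({"lat": 37.7749, "lon": -122.4195}, venues))  # False
```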

While ambitious companies focus on innovation, they must also be vigilant about the potential hazards of those breakthroughs. In the case of AR, working to proactively wrestle with the challenges around identity, privacy and security will help mitigate the biggest hurdles to the success of this exciting new technology.

Recognizing risks to consumer safety and privacy is only the first step to resolving long-term vulnerabilities that rapidly emerging new technologies like AR create. Since AR blurs the line between the real world and the digital one, it’s imperative that we consider the repercussions of this technology alongside its compelling possibilities. As innovators, we have a duty to usher in new technologies responsibly and thoughtfully so that they’re improving society in ways that can’t also be abused — we need to anticipate problems and police ourselves. If we don’t safeguard our breakthroughs and the consumers who use them, someone else will.

UK watchdog orders Cambridge Analytica to give up data in US voter test case

Another big development in the personal data misuse saga attached to the controversial Trump campaign-linked UK-based political consultancy, Cambridge Analytica — which could lead to fresh light being shed on how the company and its multiple affiliates acquired and processed US citizens’ personal data to build profiles on millions of voters for political targeting purposes.

The UK’s data watchdog, the ICO, has today announced that it’s served an enforcement notice on Cambridge Analytica affiliate SCL Elections, under the UK’s 1998 Data Protection Act.

The company has been ordered to give up all the data it holds on one US academic within 30 days — with the ICO warning that: “Failure to do so is a criminal offence, punishable in the courts by an unlimited fine.”

The notice follows a subject access request (SAR) filed in January last year by US-based academic David Carroll, after he became suspicious about how the company was able to build psychographic profiles of US voters. And while Carroll is not a UK citizen, he discovered his personal data had been processed in the UK — so he decided to bring a test case by requesting his personal data under UK law.

Carroll’s complaint, and the ICO’s decision to issue an enforcement notice in support of it, looks to have paved the way for millions of US voters to also ask Cambridge Analytica for their data (the company claimed to have up to 7,000 data points on the entire US electorate, circa 240M people — so just imagine the class action that could be filed here… ).

The Guardian reports that Cambridge Analytica had tried to dismiss Carroll’s argument by claiming he had no more rights “than a member of the Taliban sitting in a cave in the remotest corner of Afghanistan”. The ICO clearly disagrees.

Important development. @ICOnews agrees with our complaint and orders full disclosure to @profcarroll following findings of non-cooperation by Cambridge Analytica / SCL. We look forward to full disclosure within 30 days. Decision here: https://t.co/X5g1FY95j0 https://t.co/ZsonQhPsKQ

— Ravi Naik (@RaviNa1k) May 5, 2018

Cambridge Analytica/SCL Group responded to Carroll’s original SAR in March 2017 but he was unimpressed by the partial data they sent him — which ranked his interests on a selection of topics (including gun rights, immigration, healthcare, education and the environment) yet did not explain how the scores had been calculated.

It also listed his likely partisanship and propensity to vote in the 2016 US election — again without explaining how those predictions had been generated.

So Carroll complained to the UK’s data watchdog in September 2017 — which began sending its own letters to CA/SCL, leading to further unsatisfactory responses.

“The company’s reply refused to address the ICO’s questions and incorrectly stated Prof Carroll had no legal entitlement to it because he wasn’t a UK citizen or based in this country. The ICO reiterated this was not legally correct in a letter to SCL the following month,” the ICO writes today. “In November 2017, the company replied, denying that the ICO had any jurisdiction or that Prof Carroll was legally entitled to his data, adding that SCL did ‘…not expect to be further harassed with this sort of correspondence’.”

In a strongly worded statement, information commissioner Elizabeth Denham further adds:

The company has consistently refused to co-operate with our investigation into this case and has refused to answer our specific enquiries in relation to the complainant’s personal data — what they had, where they got it from and on what legal basis they held it.

The right to request personal data that an organisation holds about you is a cornerstone right in data protection law and it is important that Professor Carroll, and other members of the public, understand what personal data Cambridge Analytica held and how they analysed it.

We are aware of recent media reports concerning Cambridge Analytica’s future but whether or not the people behind the company decide to fold their operation, a continued refusal to engage with the ICO will potentially breach an Enforcement Notice and that then becomes a criminal matter.

Since mid-March this year, Cambridge Analytica’s name (along with the names of various affiliates) has been all over headlines relating to a major Facebook data misuse scandal, after press reports revealed in granular detail how an app developer had used the social media platform’s 2014 API structure to extract and process large amounts of users’ personal data, passing psychometrically modeled scores on US voters to Cambridge Analytica for political targeting.

But Carroll’s curiosity about what data Cambridge Analytica might hold about him predates the scandal blowing up last month. Although journalists had actually raised questions about the company as far back as December 2015 — when the Guardian reported that the company was working for the Ted Cruz campaign, using detailed psychological profiles of voters derived from tens of millions of Facebook users’ data.

Though it was not until last month that Facebook confirmed as many as 87 million users could have had personal data misappropriated.

Carroll, who has studied the Internet ad tech industry as part of his academic work, reckons Facebook is not the sole source of the data in this case, telling the Guardian he expects to find a whole host of other companies are also implicated in this murky data economy where people’s personal information is quietly traded and passed around for highly charged political purposes — bankrolled by billionaires.

“I think we’re going to find that this goes way beyond Facebook and that all sorts of things are being inferred about us and then used for political purposes,” he told the newspaper.

Under mounting political, legal and public pressure, Cambridge Analytica claimed to be shutting down this week — but the move appears more like a rebranding exercise, as parent entity, SCL Group, maintains a sprawling network of companies and linked entities. (Such as one called Emerdata, which was founded in mid-2017 and is listed at the same address as SCL Elections, and has many of the same investors and management as Cambridge Analytica… But presumably hasn’t yet been barred from social media giants’ ad platforms, as its predecessor has.)

Closing one of the entities embroiled in the scandal could also be a tactic to impede ongoing investigations, such as the one by the ICO — as Denham’s statement alludes, by warning that any breach of the enforcement notice could lead to criminal proceedings being brought against the owners and operators of Cambridge Analytica’s parent entity.

In March ICO officials obtained a warrant to enter and search Cambridge Analytica’s London offices, removing documents and computers for examination as part of a wider, year-long investigation into the use of personal data and analytics by political campaigns, parties, social media companies and other commercial actors. And last month the watchdog said 30 organizations — including Facebook — were now part of that investigation.

The Guardian also reports that the ICO has suggested to Cambridge Analytica that if it has difficulties complying with the enforcement notice it should hand over passwords for the servers seized during the March raid on its London office — raising questions about how much data the watchdog has been able to retrieve from the seized servers.

SCL Group’s website contains no obvious contact details beyond a company LinkedIn profile — a link which appears to be defunct. But we reached out to SCL Group’s CEO Nigel Oakes, who has maintained a public LinkedIn presence, to ask if he has any response to the ICO enforcement notice.

Meanwhile Cambridge Analytica continues to use its public Twitter account to distribute a stream of rebuttals and alternative ‘facts’.

Facebook is still falling short on privacy, says German minister

Germany’s justice minister has written to Facebook calling for the platform to implement an internal “control and sanction mechanism” to ensure third-party developers and other external providers are not able to misuse Facebook data — calling for it to both monitor third party compliance with its platform policies and apply “harsh penalties” for any violations.

The letter, which has been published in full in local media, follows the privacy storm that has engulfed the company since mid-March, when fresh revelations were published by the Observer of London and the New York Times — detailing how Cambridge Analytica had obtained and used personal information on up to 87 million Facebook users for political ad targeting purposes.

Writing to Facebook’s founder and CEO Mark Zuckerberg, justice minister Katarina Barley welcomes some recent changes the company has made around user privacy, describing its decision to limit collaboration with “data dealers” as “a good start”, for example.

However she says the company needs to do more — setting out a series of what she describes as “core requirements” in the area of data and consumer protection (bulleted below). 

She also writes that the Cambridge Analytica scandal confirms long-standing criticisms against Facebook made by data and consumer advocates in Germany and Europe, adding that it suggests various lawsuits filed against the company’s data practices have “good cause”.

“Unfortunately, Facebook has not responded to this criticism in all these years, or only insufficiently,” she continues (translated via Google Translate). “Facebook has rather expanded its data collection and use. This is at the expense of the privacy and self-determination of its users and third parties.”

“What is needed is that Facebook lives up to its corporate responsibility and makes a serious change,” she says at the end of the letter. “In interviews and advertisements, you have stated that the new EU data protection regulations are the standard worldwide for the social network. Whether Facebook consistently implements this view, unfortunately, seems questionable,” she continues, critically flagging Facebook’s decision to switch the data controller status of ~1.5BN international users this month so they will no longer be under the jurisdiction of EU law, before adding: “I will therefore keep a close eye on the further measures taken by Facebook.”

Since revelations about Cambridge Analytica’s use of Facebook data snowballed into a global privacy scandal for the company this spring, the company has revealed a series of changes which it claims are intended to bolster data protection on its platform.

Although, in truth, many of the tweaks Facebook has announced were likely in train already — as it has been working for months (if not years) on its response to the EU’s incoming GDPR framework, which will apply from May 25.

Yet, even so, many of these measures have been roundly criticized by privacy experts, who argue they do not go far enough to comply with GDPR and will trigger legal challenges once the framework is being applied.

For example, a new consent flow, announced by Facebook last month, has been accused of being intentionally manipulative — and of going against the spirit of the new rules, at very least.

Barley picks up on these criticisms in her letter — calling specifically for Facebook to deliver:

  • More transparency for users
  • Real control of users’ data processing by Facebook
  • Strict compliance with privacy by default and consent in the entire ecosystem of Facebook
  • Objective, neutral, non-discriminatory and manipulation-free algorithms
  • More freedom of choice for users through various settings and uses

On consent, she emphasizes that under GDPR the company will need to obtain consent for each data use — and cannot bundle up uses to try to obtain a ‘lump-sum’ consent, as she puts it.

Yet this is pretty clearly exactly what Facebook is doing when it asks Europeans to opt into its face recognition technology — suggesting, for example, that the tech could help protect users against strangers using their photos, or aid visually impaired users on its platform — while the consent flow contains no specific examples of the commercial uses to which Facebook will undoubtedly put it.
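What purpose-by-purpose consent looks like in code is a separate, revocable grant per processing purpose, with nothing inferred from a blanket “yes”. A minimal sketch (the purpose names are hypothetical, not Facebook’s actual categories):

```python
from dataclasses import dataclass, field

@dataclass
class ConsentLedger:
    """GDPR-style consent: one explicit grant per processing purpose."""
    grants: dict = field(default_factory=dict)

    def grant(self, purpose: str) -> None:
        self.grants[purpose] = True

    def allowed(self, purpose: str) -> bool:
        # No bundling: a purpose never granted is simply denied.
        return self.grants.get(purpose, False)

ledger = ConsentLedger()
ledger.grant("face_recognition_account_security")       # user opted into this...
print(ledger.allowed("face_recognition_ad_targeting"))  # ...but not this: False
```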

The minister also emphasizes that GDPR demands a privacy-by-default approach, and requires data collection to be minimized — saying Facebook will need to adapt all of its data processing operations in order to comply. 

Any data transfers from “friends” should also only take place with explicit consent in individual cases, she continues (consent that was of course entirely lacking in 2014 when Facebook APIs allowed a developer on its platform to harvest data on up to 87 million users — and pass the information to Cambridge Analytica).

Barley also warns explicitly that Facebook must not create shadow profiles, an especially awkward legal issue for Facebook which US lawmakers also questioned Zuckerberg closely about last month.

Facebook’s announcement this week, at its F8 conference, of an incoming Clear History button — which will give users the ability to clear past browsing data the company has gathered about them — merely underscores the discrepancies here, with tracked Facebook non-users not even getting this after-the-fact control, although tracked users also can’t ask Facebook never to track them in the first place.

Nor is it clear what Facebook does with any derivatives it gleans from this tracked personal data — i.e. whether those insights are also dissociated from an individual’s account.

Sure, Facebook might delete a web log of the sites you visited — like a gambling site or a health clinic — when you hit the button but that does not mean it’s going to remove all the inferences it’s gleaned from that data (and added to the unseen profile it holds of you and uses for ad targeting purposes).
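The gap is easy to see in a toy model (hypothetical, since Facebook hasn’t detailed Clear History’s internals): deleting the raw log does nothing to the profile derived from it unless the derivatives are explicitly deleted too.

```python
profile = {
    "raw_browsing_log": ["gamblingsite.example", "healthclinic.example"],
    "inferred_interests": ["gambling", "health conditions"],  # derived earlier
}

def clear_history(p: dict) -> dict:
    # Clears the log the user can see...
    p["raw_browsing_log"] = []
    # ...but unless a line like the following exists, inferences live on:
    # p["inferred_interests"] = []
    return p

print(clear_history(profile))  # "inferred_interests" survives the clearing
```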

Safe to say, the Clear History button looks mostly like PR for Facebook — something the company can point to, claiming it offers users another ‘control’, as a strategy to deflect lawmakers’ awkward questions (just such disingenuousness was on ample show in Congress last month — and has also been publicly condemned by the UK parliament).

We asked Facebook our own series of questions about how Clear History operates, and why — for example — it is not offering users the ability to block tracking entirely. After multiple emails on this topic, over two days, we’re still waiting for the company to answer anything we asked.

Facebook’s processing of non-users’ data, collected via tracking pixels and social plugins across other popular web services, has already got Facebook into hot water with some European regulators. Under GDPR it will certainly face fresh challenges to any consent-less handling of people’s data — unless it radically rethinks its approach, and does so in less than a month. 

In her letter, Barley also raises concerns around the misuse of Facebook’s platform for political influence and opinion manipulation — saying it must take “all necessary technical and organizational measures to prevent abuse and manipulation possibilities (e.g. via fake accounts and social bots)”, and ensure the algorithms it uses are “objective, neutral and non-discriminatory”.

She says she also wants the company to disclose the actions it takes on this front in order to enable “independent review”.

Facebook’s huge sprawl and size — with its business consisting of multiple popular linked platforms (such as WhatsApp and Instagram), as well as the company deploying its offsite tracking infrastructure across the Internet to massively expand the reach of its ecosystem — “puts a special strain on the privacy and self-determination of German and European users”, she adds.

At the time of writing Facebook had not responded to multiple requests for comment about the letter.

Cambridge Analytica shuts down in light of ‘unfairly negative’ press coverage

Cambridge Analytica is done. In light of the sprawling controversy around its role in improperly obtaining data from Facebook users through a third party, the company will end its U.S. and U.K. operations.

In a press release confirming the decision, the company said that “unfairly negative media coverage” around the Facebook incident has “driven away virtually all of the Company’s customers and suppliers,” making its business no longer financially viable. The same goes for SCL Elections, a CA-affiliated company:

Earlier today, SCL Elections Ltd., as well as certain of its and Cambridge Analytica LLC’s U.K. affiliates (collectively, the “Company” or “Cambridge Analytica”) filed applications to commence insolvency proceedings in the U.K.  The Company is immediately ceasing all operations…

Additionally, parallel bankruptcy proceedings will soon be commenced on behalf of Cambridge Analytica LLC and certain of the Company’s U.S. affiliates in the United States Bankruptcy Court for the Southern District of New York.

On Wednesday, just before the company went public with its news, Gizmodo reported that employees of Cambridge Analytica’s U.S. offices learned that their jobs were being terminated when they were ordered to hand over their company keycards.

Given its already fairly shadowy business practices, it remains to be seen if this is really the end for Cambridge Analytica or just a strategic rebrand while it waits for the “siege” of negative media coverage to cool off.

Probably the latter, since the U.K.-based SCL Group, the mothership in the constellation of associated companies, is not going out of business. Nor are its many other ventures, including a new one, Emerdata, which several former CA leaders have recently moved to.

Facebook engineer and ‘professional stalker’ reportedly fired over creepy Tinder messages

There’s no shortage of Facebook news this week on account of F8, but this creepy Facebook-adjacent episode with a good outcome seems worth noting: an engineer accused of abusing his access to data at the company in Tinder messages has been fired, NBC News reported.

The issue arose over the weekend: Jackie Stokes, founder of Spyglass Security, explained on Twitter that someone she knew had received some rather creepy messages from someone she personally confirmed was a Facebook engineer.

The person (gender unspecified) described themselves as a “professional stalker,” which, however accurate it may be (they attempt to unmask hackers), is probably not the best way to introduce yourself to a potential partner. They then implied that they had been employing their professional acumen in pursuit of identifying their new quarry.

I really, really hope I’m wrong about this. pic.twitter.com/NDkOptx8Hv

— Jackie Stokes 🙋🏽 (@find_evil) April 30, 2018

Note that the above isn’t the whole exchange, just an excerpt.

Facebook employees contacted Stokes for more information and began investigating. Alex Stamos, Facebook’s chief security officer, told NBC News that “we have strict policy controls and technical restrictions so employees only access the data they need to do their jobs – for example to fix bugs, manage customer support issues or respond to valid legal requests. Employees who abuse these controls will be fired.”

And fired he was, Stamos added in another statement. I’ve reached out to the company for confirmation and more details, including what those controls are that should ostensibly have prevented the person from accessing the data of a prospective date.

It’s disturbing that someone in such a privileged position would use it for such tawdry and selfish purposes, but not really surprising. It is, however, also heartening that the person was fired promptly for doing so, and while everyone was busy at a major conference, at that.

I’d like to thank the many Facebook employees who reached out to me personally to find out what they could do to help, and especially their CSO @alexstamos for deft handling of a dicey issue during a time when words and actions matter more than ever.https://t.co/W8Joe2Bc6e

— Jackie Stokes 🙋🏽 (@find_evil) May 2, 2018