Chinese government admits collection of deleted WeChat messages

Chinese authorities revealed over the weekend that they can retrieve deleted messages from the almost universally used WeChat app. The admission doesn’t come as a surprise to many, but it’s rare for this type of questionable data collection tactic to be acknowledged publicly.

As noted by the South China Morning Post, an anti-corruption commission in the city of Hefei posted Saturday to social media that it had “retrieved a series of deleted WeChat conversations from a subject” as part of an investigation.

The post was deleted Sunday, but not before many had seen it and understood the ramifications. Tencent, which operates the WeChat service used by nearly a billion people (including myself), explained in a statement that “WeChat does not store any chat histories — they are only stored on users’ phones and computers.”

The technical details of this storage were not disclosed, but it seems clear from the commission’s post that they are accessible in some way to interested authorities, as many have suspected for years. The app does, of course, comply with other government requirements, such as censoring certain topics.

There are still plenty of questions, the answers to which would help explain user vulnerability: Are messages effectively encrypted at rest? Does retrieval require the user’s password and login, or can it be forced with a “master key” or backdoor? Can users permanently and totally delete messages on the WeChat platform at all?
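For contrast, here is a minimal sketch in Python (using the widely deployed cryptography library) of what genuinely effective encryption at rest would look like. It is a hypothetical design, not a description of WeChat’s actual storage: if chat history were sealed under a key derived from the user’s passphrase, retrieval without that passphrase, or without a separately held master key, would be infeasible.

```python
# Hypothetical sketch only; nothing here reflects WeChat's real storage design.
# If chat history were sealed under a key derived from the user's passphrase,
# retrieving it would require that passphrase (or a vendor-held "master key").
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    """Derive a symmetric key from a user passphrase via PBKDF2."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                     iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

salt = os.urandom(16)
box = Fernet(key_from_passphrase("correct horse battery staple", salt))

token = box.encrypt(b"a message the user later 'deletes'")  # what rests on disk
assert box.decrypt(token) == b"a message the user later 'deletes'"
```

If messages were stored this way, an investigator holding only the device image would have ciphertext; the commission’s claim of retrieval is what makes the questions above so pointed.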

Fears over Chinese government access to data held or handled by Chinese companies have led to a global backlash against those companies, with some countries (among them the U.S.) banning Chinese-made devices and services from sensitive applications or official use altogether.

Twitter also sold data access to Cambridge Analytica researcher

Since it was revealed that Cambridge Analytica improperly accessed the personal data of millions of Facebook users, one question has lingered in the minds of the public: What other data did Dr. Aleksandr Kogan gain access to?

Twitter confirmed to The Telegraph on Saturday that GSR, Kogan’s own commercial enterprise, had purchased one-time API access to a random sample of public tweets from a five-month period between December 2014 and April 2015. Twitter told Bloomberg that, following an internal review, the company did not find any access to private data about people who use Twitter.

Twitter sells API access to large organizations or enterprises for the purposes of surveying sentiment or opinion during various events, or around certain topics or ideas.
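For a sense of what that product looks like in practice, here is a minimal sketch using the tweepy library’s streaming sample endpoint, which serves a small random slice of public tweets. The credentials are placeholders, and GSR’s one-time purchase was of historical data rather than a live stream, but the shape of the access is similar.

```python
# Sketch of sampling public tweets for sentiment analysis, the kind of access
# Twitter sells. Uses the tweepy 3.x API; credentials below are placeholders.
import tweepy

class SampleListener(tweepy.StreamListener):
    def on_status(self, status):
        # A real sentiment survey would feed status.text into a classifier;
        # printing stands in for that step here.
        print(status.created_at, status.text)

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

stream = tweepy.Stream(auth=auth, listener=SampleListener())
stream.sample()  # a small random sample of all public tweets, in real time
```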

Here’s what a Twitter spokesperson said to The Telegraph:

Twitter has also made the policy decision to off-board advertising from all accounts owned and operated by Cambridge Analytica. This decision is based on our determination that Cambridge Analytica operates using a business model that inherently conflicts with acceptable Twitter Ads business practices. Cambridge Analytica may remain an organic user on our platform, in accordance with the Twitter Rules.

Obviously, this doesn’t have the same scope as the data harvested about users on Facebook. Twitter’s data on users is far less personal. Location on the platform is opt-in and generic at that, and users are not forced to use their real name on the platform.

We reached out to Twitter and will update when we hear back.

How Microsoft helped imprison a man for ‘counterfeiting’ software it gives away for free

In a sickening concession to bad copyright law and Microsoft’s bottom line over basic technical truths and common sense, Eric Lundgren will spend 15 months in prison for selling discs that let people reinstall Windows on licensed machines. A federal appeals court this week upheld the sentence handed down in ignorance by a Florida district judge, for a crime the man never committed.

Now, to be clear, Lundgren did commit a crime, and admitted as much — but not the crime he was convicted for, the crime Microsoft alleges he committed, the crime that carries a year-plus prison term. Here’s what happened.

In 2012, federal agents seized a shipment of discs headed to the U.S., where Lundgren planned to sell them to retailers, and determined they were counterfeit copies of Windows. U.S. prosecutors, backed by Microsoft’s experts, put him on the hook for about $8.3 million — the retail price of Windows multiplied by the number of discs seized.

The only problem with that was that these weren’t counterfeit copies of Windows, and they were worth almost nothing. The confusion is understandable — here’s why.

When you buy a computer, baked into the cost of that computer is usually a license for the software on it — for instance, Windows. And included with that computer is often a disc that, should you have to reinstall that OS for whatever reason (virus infection, general slowdown), allows you to do so. This installation only works, of course, if you feed it your license key, which you’ll probably find on a sticker attached to your computer, its “Certificate of Authenticity.”

But what if you lose that disc? Fortunately, for years Microsoft itself provided disc images, files you could use to burn a new copy of the disc at no cost. Look, you can still do it, and you used to be able to get one without a license key. In fact, that’s how many Windows installs were created: buy a license key directly from Microsoft or some reseller, then download and burn the install disc yourself.

Of course, if you don’t have a DVD burner (remember, this was a while back — these days you’d use a USB drive), you’d have to get a disc from a friend who has one, from a licensed refurbisher, or from your manufacturer (for instance, Dell or Lenovo) for a fee.

This option is still available, and very handy — I’ve used it many times.

What Lundgren did was have thousands of these recovery discs printed so that repair and refurbishing shops could sell them for cheap to anyone who can’t make their own. No need to go call Alienware customer service, just go to a computer store and grab a disc for a couple bucks.

Lundgren, by the way, is not some scammer looking to fleece a few people and make a quick buck. He has been a major figure on the e-waste scene, working to minimize the toxic wages of planned obsolescence and running a company of 100 employees that responsibly refurbishes or recycles old computers and other devices.

His actual crime, which he pleaded guilty to, was counterfeiting the packaging to make the discs pass for Dell-branded ones.

But the fundamental idea that this was counterfeit software, with all that implies, is simply wrong.

Software vs. license

The whole thing revolves around the fact that Microsoft — and every other software maker — doesn’t just plain sell software; they sell licenses to that software. Because software can easily be copied from computer to computer, piracy is easy if you make a program that anyone can just install. It’s more effective to distribute the software itself freely, but only unlock it for use with a special one-off code sold to the customer: a license, or product key.

When you buy a “copy” of Windows, you’re really buying a license to use Windows, not the bits and bytes that make up the OS. The company literally provided up-to-date disc images of Windows on its website! You could easily install it using those. But without a license key, the OS won’t work properly; it’ll nag you, remove functionality, and may shut down entirely. No one would confuse this with a licensed copy of the OS.
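A toy sketch makes the economics concrete. This is purely illustrative, not Microsoft’s actual activation mechanism: the point is that the installer bits are freely copyable, while the vendor’s secret makes valid keys scarce.

```python
# Toy license-key scheme (illustrative only, not Microsoft's real mechanism).
# The installer is freely copyable; the vendor's secret makes keys scarce.
import hashlib
import hmac

VENDOR_SECRET = b"known-only-to-the-vendor"  # hypothetical

def issue_key(customer_id: str) -> str:
    """What the customer actually pays for."""
    mac = hmac.new(VENDOR_SECRET, customer_id.encode(), hashlib.sha256)
    return mac.hexdigest()[:20]

def activate(customer_id: str, key: str) -> bool:
    """Runs inside the freely distributed software."""
    return hmac.compare_digest(issue_key(customer_id), key)

# The disc image can be duplicated a million times at zero marginal cost...
key = issue_key("customer-42")
assert activate("customer-42", key)          # ...but this string is the product.
assert not activate("customer-42", "0" * 20)
```

Copying the disc duplicates only the first half of this picture; the scarce, salable half never touches the disc.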

This distinction between software and license is a fine one, but important. It matters not just for overarching discussions of copyright law and where it fails us as technology moves beyond the severely dated DMCA; in this case it’s the difference between a box of Windows recovery discs being worth millions of dollars, as prosecutors originally said they were, and being worth essentially nothing, as an expert witness and advocates countered.

More importantly, it’s the difference between someone getting 15 months in prison for a nonviolent crime harming no one and causing no actual financial loss, and getting a suitable punishment for counterfeiting labels.

A Microsoft representative told me, reasonably enough, that they want customers to be able to trust their software. So going after counterfeiters is a high priority. After all, if you buy a cheap, fake DVD of Windows on eBay and it turns out the disc has been pre-loaded with malware, that’s bad news for the consumer and hurts the Microsoft brand. Makes sense.

It said in an official statement:

We participate in cases like these because counterfeit software exposes our customers to malware and other forms of cybercrime. There are responsible ways to refurbish computers and save waste, but Mr. Lundgren intentionally deceived people about the software they were buying and put their security at risk.

First, it is worth mentioning that the court record is replete with tests showing these discs were perfectly normal copies of software that Microsoft provides for free. Prosecutors went through the entire install process several times and encountered nothing unusual — in fact, their arguments rely on the fact that these were perfect copies, not compromised ones. This may not affect Microsoft’s reasoning for pursuing counterfeiters generally, but it sure has a bearing on this case.

Lundgren certainly deceived people into thinking these were official Dell discs; that’s a crime, and he admitted to it right off the bat. But from what I can tell, the discs were indistinguishable from Dell’s except for inconsistencies in the packaging, and there’s nothing in the record to suggest otherwise. I was told Microsoft declined to look into whether the discs might have had malware because it would have no bearing on the case, which strikes me as ridiculous. It would be trivial to check the integrity and contents of a disc Microsoft itself provides the data for, and malware or the like would provide evidence of criminal intent by Lundgren or his supplier.
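That check really is trivial. Here is a sketch of it, assuming you have the vendor’s official image and a dump of a seized disc on hand (the filenames are hypothetical):

```python
# Compare a seized disc image against the image the vendor distributes for free.
# Matching hashes mean bit-for-bit identical contents, and no room for malware.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

official = sha256_of("official_recovery_image.iso")  # vendor-provided image
seized = sha256_of("seized_disc_dump.iso")           # ripped from a seized disc
print("identical" if official == seized else "contents differ")
```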

If, on the other hand, the discs were identical to those they were meant to imitate, we would expect to hear little about their content except that it is functional, which is exactly what we see in the record: tested by multiple parties, the seized discs produced ordinary Windows installs.

Furthermore, people weren’t buying software, let alone “counterfeit software.” The discs in question are at best “unauthorized” copies of software provided for free by Microsoft, not really a term that carries a lot of legal or even rhetorical weight. I could make a recovery disc, then make another for my friend who doesn’t have a DVD burner. Is that copy authorized or not? How could it be unauthorized if it’s an image made available to users specifically for the purpose of burning recovery discs? How can it be counterfeit if it’s just a copy of that image? And how can it be “pirated” if the business model requires the end user to purchase a license key to activate the product?

If the data on the disc is worth anything at all, why does Microsoft provide it for free? There was in fact no piracy because no license to use the software, which amounts to the entire value of the software, was ever sold.

What damage?

But how, then, could this freely available software produce damage in the millions, as first alleged, and later in the hundreds of thousands?

What Microsoft alleged, once it became clear that the data on the discs was worth precisely nothing without a license key (as evidenced by its own free distribution of that data), was that the discs Lundgren was selling were intended to short-circuit its official refurbishment program.

That’s the official registered refurbisher program where a company might buy old laptops, wipe them, and contact Microsoft saying “Hey, give us 12 Windows 7 Home licenses,” which are then provided for a deep discount — $20-40 each, down from the full retail price of hundreds. It encourages reuse of perfectly good hardware and keeps costs down, both of which are solid goals.

Every disc Lundgren sold to refurbishers, Microsoft argued, caused $20-40 (times .75, the profit ratio) of lost OS sales because it would be used in place of the official licensing process. This was the basis for the $700,000 figure used in part to determine the severity of his crime.

There are several things wrong with this statement, so I’m putting them in bullet points.

  • Lundgren was not necessarily selling these discs to refurbishers for use in refurbishing computers — the discs would be perfectly useful to any Dell owner who walked in and wanted a recovery disc for their own purposes. The government case rests on an assumption that was not demonstrated by any testimony or evidence.
  • The discs are not what Microsoft charges for. As already established, the disc and the data on it are provided for free. Anyone could download a copy and make their own, including refurbishers. Microsoft charges for a license to activate the software on the disc. The discs themselves are just an easy way to move data around. There’s no reason why refurbishers would not buy discs from Lundgren and order licenses from Microsoft.
  • Dell computers (and most computers from dealers) come with a Certificate of Authenticity with a corresponding Windows product key. So if intentions are to be considered, fundamentally these discs were intended for sale to and use by authorized, licensed users of the OS.
  • Furthermore, since many computers come with COAs, if refurbishers decide to skip getting a new license and use a given computer’s COA instead, that is not the fault of Lundgren, and it could easily be accomplished with the free software Microsoft itself provides.
  • That process — using the COA instead of buying a new license — is not permitted by Microsoft and is murky copyright-wise. But in this case the defendants say it was admitted by U.S. prosecutors that the COA “belongs” to the hardware, not the first buyer. The alternative is that, for example, if I sold a computer to a friend with Windows installed, he would be required to buy a new copy of Windows to install over the first, which is absurd.
  • Naturally, no actual damage was done. The damage is entirely theoretical, and incorrectly calculated at that. A copy of Windows cannot be sold, because it is freely provided; only a license key can be sold, and those sales are what Microsoft alleges were affected. But Lundgren neither had nor sold any license keys.

In fact an expert witness, Glenn Weadock, who had previously been involved in a 2001 government antitrust case against Microsoft, appeared in court to argue these very points.

Weadock was asked what the value of the discs is without a license or COA. “Zero or near zero,” he said. The value is a “convenience factor,” he said, in that someone can use a pre-made disc instead of burning their own or having the manufacturer provide it.

Real damage

This fact, the difference between selling a license (which activates a piece of software and provides its real value) and distributing the software itself (again, provided for free to any asker), was completely ignored by the courts.

The government’s expert testified that the lowest amount Microsoft charges buyers in the relevant market—the small registered computer refurbisher market—was $25 per disc. Although the defense expert testified that discs containing the relevant Microsoft OS software had little or no value when unaccompanied by a product key or license, the district court explicitly stated that it did not find that testimony to be credible.

As I’ve already established, the discs are free. $25 is the price of the license accompanying the disc. Again, a fine but very important distinction.

Weadock’s testimony and all arguments along these lines were disregarded by the judges, who decided that the “infringing item” “is or appears to a reasonably informed purchaser to be, identical or substantially equivalent to, the infringed item.”

This is fundamentally wrong.

The “infringing” item is a disc. The “infringed” item is a license. The ones confusing the two aren’t purchasers but the judges in this case, with Microsoft’s help.

“[Defendants] cannot claim that Microsoft suffered minimal pecuniary injury,” wrote the judges in the ruling affirming the previous court’s sentencing. “Microsoft lost the sale of its software as a direct consequence of the defendants’ actions.”

Microsoft does not sell discs. It sells licenses.

Lundgren did not sell licenses. He sold discs.

These are two different things with different values and different circumstances.

I don’t know how I can make this any more clear. Right now a man is going to prison for 15 months because these judges didn’t understand basic concepts of the modern software ecosystem. 15 months! In prison!

What would a reasonable punishment be for counterfeiting labels to put on software anyone can download for free? I couldn’t say; that would be for a court to decide. Going by Lundgren’s suggestion that $4 per disc was a more realistic figure if damages had to be calculated at all, he might still face time. But instead the court has made an ignorant decision based on corporate misinformation, one that will deprive someone of more than a year of his life — not to mention all the time and money that has been spent explaining these things to deaf ears for the last few years.

Microsoft cannot claim that it was merely a victim or bystander here. It has worked with the FBI and prosecutors the whole time pursuing criminal charges for which the defendant could face years in prison. And as you can see, those charges are wildly overstated and produced a sentence far more serious than Lundgren’s actual crime warranted.

The company could at any point have changed its testimony to reflect the facts of the matter. It could have corrected the judges that the infringing and infringed items are strictly speaking completely different things, a fact it knows and understands, since it sells one for hundreds and gives the other away. It could have cautioned the prosecution that copyright law in this case produces a punishment completely out of proportion with the crime, or pursued a civil case on separate lines.

This case has been ongoing for years and Microsoft has supported it from start to finish; it has as good as sentenced Lundgren to prison for a crime he didn’t commit, alongside the credulous judges it convinced of its great “pecuniary loss.” I expect the company to push back against this idea, saying that it only had consumers’ best interests in mind, but the bad-faith arguments we have seen above, and which I have heard directly from Microsoft, suggest it was in fact looking for a strong judgment at any cost to deter others.

If Microsoft was somehow unaware of how bad the optics of this case are, it has been warned over and over as the case has worn on. Now that Lundgren is going to prison, it seems reasonable to say that his imprisonment is as much a Microsoft product as the OS it wrongly accused him of pirating.

Facebook reveals 25 pages of takedown rules for hate speech and more

Until now, Facebook had never made public the guidelines its moderators use to decide whether to remove violence, spam, harassment, self-harm, terrorism, intellectual property theft, and hate speech from the social network. The company hoped to avoid making it easy to game these rules, but that worry has been overridden by the public’s constant calls for clarity and protests about its decisions. Today Facebook published 25 pages of detailed criteria and examples for what is and isn’t allowed.

Facebook is effectively shifting where it will be criticized: to the underlying policy, rather than to individual enforcement mistakes, like when it took down posts of the newsworthy “Napalm Girl” historical photo because it contained child nudity, before eventually restoring them. Some groups will surely find points to take issue with, but Facebook has made some significant improvements. Most notably, it no longer disqualifies minorities from hate speech protection just because an unprotected characteristic like “children” is appended to a protected characteristic like “black.”

Nothing is technically changing about Facebook’s policies. But previously, only leaks, like a copy of an internal rulebook obtained by the Guardian, had given the outside world a look at how Facebook actually enforces those policies. These rules will be translated into over 40 languages for the public. Facebook currently has 7,500 content reviewers, up 40% from a year ago.

Facebook also plans to expand its content removal appeals process. It already lets users request a review of a decision to remove their profile, Page, or Group. Now Facebook will notify users when their nudity, sexual activity, hate speech or graphic violence content is removed and let them hit a button to “Request Review,” which will usually happen within 24 hours. Finally, Facebook will hold Facebook Forums: Community Standards events in Germany, France, the UK, India, Singapore, and the US to give its biggest communities a closer look at how the social network’s policy works.

Fixing the “white people are protected, black children aren’t” policy

Facebook’s VP of Global Product Management Monika Bickert, who has been coordinating the release of the guidelines since September, told reporters at Facebook’s Menlo Park HQ last week that “There’s been a lot of research about how when institutions put their policies out there, people change their behavior, and that’s a good thing.” She admits there’s still the concern that terrorists or hate groups will get better at developing “workarounds” to evade Facebook’s moderators, “but the benefits of being more open about what’s happening behind the scenes outweighs that.”

Content moderator jobs at various social media companies, including Facebook, have been described as hellish in many exposés on what it’s like to fight the spread of child porn, beheading videos, and racism for hours a day. Bickert says Facebook’s moderators are trained to deal with this and have access to counseling and 24/7 resources, including some on-site. They can request not to look at certain kinds of content they’re sensitive to. But Bickert didn’t say whether Facebook caps how much offensive content moderators see per day, the way YouTube recently implemented a four-hour daily limit.

A controversial slide depicting Facebook’s now-defunct policy that disqualified subsets of protected groups from hate speech shielding. Image via ProPublica

The most useful clarification in the newly revealed guidelines explains how Facebook has ditched its poorly received policy that deemed “white people” protected from hate speech but not “black children.” That rule, which left subsets of protected groups exposed to hate speech, was blasted in a ProPublica piece in June 2017, though Facebook said it no longer applied the policy.

Now Bickert says “Black children — that would be protected. White men — that would also be protected. We consider it an attack if it’s against a person, but you can criticize an organization, a religion . . . If someone says ‘this country is evil’, that’s something that we allow. Saying ‘members of this religion are evil’ is not.” She explains that Facebook is becoming more aware of the context around who is being victimized. However, Bickert notes that if someone says “‘I’m going to kill you if you don’t come to my party’, if it’s not a credible threat we don’t want to be removing it.” 

Do community standards = editorial voice?

Being upfront about its policies might give Facebook more to point to when it’s criticized for failing to prevent abuse on its platform. Activist groups say Facebook has allowed fake news and hate speech to run rampant and lead to violence in many developing countries where Facebook hasn’t had enough native speaking moderators. The Sri Lankan government temporarily blocked Facebook in hopes of ceasing calls for violence, and those on the ground say Zuckerberg overstated Facebook improvements to the problem in Myanmar that led to hate crimes against the Rohingya people.

Revealing the guidelines could at least cut down on confusion about whether hateful content is allowed on Facebook. It isn’t. Though the guidelines also raise the question of whether the Facebook value system it codifies means the social network has an editorial voice that would define it as a media company. That could mean the loss of legal immunity for what its users post. Bickert stuck to a rehearsed line that “We are not creating content and we’re not curating content”. Still, some could certainly say all of Facebook’s content filters amount to a curatorial layer.

But whether Facebook is a media company or a tech company, it’s a highly profitable company. It needs to spend some more of the billions it earns each quarter applying the policies evenly and forcefully around the world.

Google confirms some of its own services are now getting blocked in Russia over the Telegram ban

A shower of paper airplanes darted through the skies of Moscow and other Russian towns today, as users answered the call of entrepreneur Pavel Durov to send the blank missives out of their windows at a pre-appointed time in support of Telegram, the messaging app he founded, whose icon is a paper airplane, and which was blocked last week by Russian regulator Roskomnadzor (RKN). RKN believes the service is violating national laws by failing to provide it with encryption keys to access messages on the service (Telegram has refused to comply).

The paper plane send-off was a small, flashmob turn in a “Digital Resistance” — Durov’s preferred term — that has otherwise largely played out online: currently, nearly 18 million IP addresses are blocked in Russia, all in the name of blocking Telegram.

And in the latest development, Google has now confirmed to us that its own services are now also being impacted. From what we understand, Google Search, Gmail and push notifications for Android apps are among the products being affected.

“We are aware of reports that some users in Russia are unable to access some Google products, and are investigating those reports,” said a Google spokesperson in an emailed response. We’d been trying to contact Google all week about the Telegram blockade, and this is the first time that the company has both replied and acknowledged something related to it.

(Amazon has acknowledged our messages but has yet to reply to them.)

Google’s comments come on the heels of RKN itself also announcing today that it had expanded its IP blocks to Google’s services. At its peak, RKN had blocked nearly 19 million IP addresses, with dozens of third-party services that also use Google Cloud and Amazon’s AWS, such as Twitch and Spotify, also getting caught in the crossfire.

Russia is among the countries that have erected a kind of digital firewall, periodically or permanently blocking certain online content. Some turn to VPNs to access that content anyway, but it turns out that Telegram hasn’t needed to rely on that workaround to keep being used.

“RKN is embarrassingly bad at blocking Telegram, so most people keep using it without any intermediaries,” said Ilya Andreev, COO and co-founder of Vee Security, which has been providing a proxy service to bypass the ban. Currently, it is supporting up to 2 million users simultaneously, although this is a relatively small proportion considering Telegram has around 14 million users in the country (and, likely, more considering all the free publicity it’s been getting).

As we described earlier this week, the reason so many IP addresses are getting blocked is that Telegram has been using a technique that allows it to “hop” to a new IP address when the one it’s using is blocked by RKN. It’s a technique that a much smaller app, Zello, had also resorted to for nearly a year after RKN announced a ban on it.
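Mechanically, the client side of this kind of hopping is simple: the app ships (or quietly fetches) a pool of fallback endpoints and walks down the list whenever the current one stops responding. A minimal sketch, with an entirely invented endpoint pool:

```python
# Sketch of client-side "IP hopping": rotate through fallback endpoints until
# one connects. The endpoint pool here is entirely hypothetical.
import socket

ENDPOINT_POOL = [
    "dc1.example.net",  # primary, possibly already blocked
    "dc2.example.net",  # fallbacks on fresh cloud IP ranges
    "dc3.example.net",
]

def connect_with_fallback(port: int = 443, timeout: float = 3.0) -> socket.socket:
    for host in ENDPOINT_POOL:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            continue  # blocked or unreachable: hop to the next endpoint
    raise ConnectionError("every endpoint in the pool is blocked")
```

Blocking each address individually is hopeless against a pool that refreshes faster than the blocklist grows, which is why RKN escalated to blocking entire subnets, catching so many bystander services in the process.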

Zello ceased its activities earlier this year when RKN got wise to Zello’s ways and chose to start blocking entire subnetworks of IP addresses to avoid so many hops, and Amazon’s AWS and Google Cloud kindly asked Zello to stop as other services also started to get blocked. So, when Telegram started the same kind of hopping, RKN, in effect, knew just what to do to turn the screws. (And it also took the heat off Zello, which miraculously got restored.)

So far, Telegram’s cloud partners have held strong and have not taken the same route, although getting its own services blocked could see Google’s resolve tested at a new level.

Some believe that one outcome could be the regulator playing out an elaborate game of chicken with Telegram and the rest of the internet companies that are in some way aiding and abetting it, spurred in part by Russia’s larger profile and how such blocks would appear to international audiences.

“Russia can’t keep blocking random things on the Internet,” Andreev said. “Russia is working hard to make its image more alluring to foreigners in preparation for the World Cup,” which is taking place this June and July. “They can’t have tourists coming and realising Google doesn’t work in Russia.”

We’ll update this post and continue to write on further developments as we learn more.

Russia’s game of Telegram whack-a-mole grows to 19M blocked IPs, hitting Twitch, Spotify and more

As the messaging app Telegram continues to try to evade Russian authorities by switching up its IP addresses, Russia’s regulator Roskomnadzor (RKN) has continued its game of whack-a-mole, trying to lock the app down by knocking out complete swathes of IP addresses. The resulting chase has now ballooned to nearly 19 million blocked IP addresses at the time of writing, as tracked by unofficial RKN observer RKNSHOWTIME (updated on a Telegram channel, with stats accessible on the web via Phil Kulin’s site).

As a result, a number of high-profile services have been knocked out in the crossfire, with people in Russia reporting dozens of sites affected, including Twitch, Slack, SoundCloud, Viber, Spotify, FIFA and Nintendo, as well as Amazon and Google. (A full list of nearly 40 affected services is below.)

What’s notable is that Google and Amazon themselves seem still not to be buckling under pressure. As we reported earlier this week, a similar — but far smaller — instance happened in the case of Zello, which had also devised a technique to hop around IP addresses when its own IP addresses were shut down by Russian regulators.

Zello’s circumventing lasted for nearly a year, until it seemed the regulator started to use a more blanket approach of blocking entire subnets — a move that ultimately led to Google and Amazon asking Zello to cease its activities.

After that, Zello’s main access point for its Russian users was via VPN proxies — one of the key ways that users in one country can effectively appear as if they are in another, allowing them to circumvent geoblocking and geofencing, either by the companies themselves, or those that have been banned by a state.

It’s important to note that the domain fronting that Google is in the process of shutting down is not the same as IP hopping — although, more generally, it will mean that there is now one less route for those globally whose traffic is getting blocked through censorship to wiggle around that. The IP hopping that has led to 19 million addresses getting blocked in Russia is another kind of circumvention. (I’m pointing this out because several people I’ve spoken to assumed they were the same.)
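To make the distinction concrete, here is roughly what domain fronting looks like at the HTTP layer, sketched with Python’s requests library and hypothetical domains: the censor-visible connection goes to a permitted domain on a shared CDN, while the Host header steers the request to the blocked service behind it.

```python
# Domain fronting sketch: the TLS handshake and SNI point at a permitted
# domain on a shared CDN, while the Host header routes the request to the
# blocked service hosted behind the same CDN. Domains are hypothetical.
import requests

response = requests.get(
    "https://allowed-front.example.com/api/messages",  # what a censor sees
    headers={"Host": "blocked-service.example.com"},   # what the CDN routes on
)
print(response.status_code)
```

IP hopping, by contrast, changes nothing about the request itself; the service simply resurfaces at fresh addresses faster than the regulator can block them.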

Pavel Durov, Telegram’s founder and CEO, has posted several times on Telegram and on third-party sites like Twitter to praise how steadfast the big internet companies have been. And others, like the ACLU, have waded into the story to call on Amazon, Apple, Google and Microsoft to hold strong and continue to allow Telegram to IP hop.

But what could happen next?

I’ve contacted Google, Amazon and Telegram several times now to ask this question and for more details on what is going on. As of yet I’ve had no replies. However, Alexey Gavrilov, the CTO and founder of Zello, provided a little more potential insight:

He said that the cloud providers might ultimately ask Telegram to stop — something that might become increasingly hard not to do as more services get affected — and if that doesn’t work, they can suspend Telegram’s account.

“Each cloud provider has provisions, which let them do it if your use interferes with other customers using their service,” Gavrilov notes. “The interpretation of this rule may be not trivial in case when the harm is caused by third party (i.e. RKN in this case) so I think there are some legal risks for Amazon / Google. Plus that would likely cause a PR issue for them.”

Another question is whether there are bigger fish to fry in this story. Some have floated the idea that, just as Zello preceded Telegram, RKN’s battles with the latter might shape how it negotiates with Facebook.

As we have reported before, Facebook notably has never moved to house Russian Facebook data in Russia. Local hosting has been one of the key requirements the regulator has enforced against a number of other companies as part of its “data protection” rules, and over the last couple of years, while some high-profile companies have run afoul of these regulations, others (including Apple and Google) have reportedly complied.

Regardless, there’s been one ironic silver lining in this story. Since RKN shifted its focus to waging a war on Telegram, Gavrilov tells me that Zello service has been restored in Russia. Here’s to weathering the storm. 

We’ll update this post as and when we get responses from the big players. A more complete list of sites that people have reported as affected by the 19 million address block is below, via Telegram channel Нецифровая экономика (“Non-digital economy”). Some of these have been disputed, so take this with a grain of salt:

1. Sberbank (disputed)
2. Alfa Bank (disputed)
3. VTB
4. Mastercard
5. Some Microsoft services
6. Video agency RT Ruptly
7. Games like Fortnite, PUBG, Guild Wars 2, Vainglory, Guns of Boom, World of Warships Blitz, Lineage 2 Mobile and Total War: Arena
8. Twitch
9. Google
10. Amazon
11. Russian food retailer Dixy (disputed)
12. Odnoklassniki (the social network, ok.ru)
13. Viber
14. Volvo dealerships
15. Gett Taxi
16. BattleNet
17. SoundCloud
18. DeviantArt
19. Coursera
20. Realtimeboard
21. Trello
22. Slack
23. Evernote
24. Skyeng (online English language school)
25. Part of the PlayStation Network
26. Ivideon
27. ResearchGate
28. Gitter
29. eLama
30. Behance
31. Nintendo
32. Codecademy
33. Lifehacker
34. Spotify
35. FIFA
36. And, it seems, parts of RKN’s own site

Data experts on Facebook’s GDPR changes: Expect lawsuits

Make no mistake: Fresh battle lines are being drawn in the clash between data-mining tech giants and Internet users over people’s right to control their personal information and protect their privacy.

An update to European Union data protection rules next month — called the General Data Protection Regulation — is the catalyst for this next chapter in the global story of tech vs privacy.

A fairytale ending would remove that ugly ‘vs’ and replace it with an enlightened ‘+’. But there’s no doubt it will be a battle to get there — requiring legal challenges and fresh case law to be set down — as an old guard of dominant tech platforms marshal their extensive resources to try to hold onto the power and wealth gained through years of riding roughshod over data protection law.

Payback is coming though. Balance is being reset. And the implications of not regulating what tech giants can do with people’s data have arguably never been clearer.

The exciting opportunity for startups is to skate to where the puck is going — by thinking beyond exploitative legacy business models that amount to embarrassing blackboxes whose CEOs dare not publicly admit what the systems really do — and come up with new ways of operating and monetizing services that don’t rely on selling the lie that people don’t care about privacy.

More than just small print

Right now the EU’s General Data Protection Regulation can take credit for a whole lot of spilt ink as tech industry small print is reworded en masse. Did you just receive a T&C update notification about a company’s digital service? Chances are it’s related to the incoming standard.

The regulation is generally intended to strengthen Internet users’ control over their personal information, as we’ve explained before. But its focus on transparency — making sure people know how and why data will flow if they choose to click ‘I agree’ — combined with supersized fines for major data violations represents something of an existential threat to ad tech processes that rely on pervasive background harvesting of users’ personal data to be siphoned biofuel for their vast, proprietary microtargeting engines.

This is why Facebook is not going gentle into a data processing goodnight.

Indeed, it’s seizing on GDPR as a PR opportunity — shamelessly stamping its brand on the regulatory changes it lobbied so hard against, including by taking out full page print ads in newspapers…

Here we are. Wow, what a fun thinking about all these years of debates with fb representatives telling me ‘consumers don’t want privacy rights anymore’ and ‘a startup (sic) like facebook shouldn’t be overburdened’. 😘#GDPR #dataprotection #privacy https://t.co/gowYVvKjJf

— Jan Philipp Albrecht (@JanAlbrecht) April 15, 2018

This is of course another high gloss plank in the company’s PR strategy to try to convince users to trust it — and thus to keep giving it their data. Because — and only because — GDPR gives consumers more opportunity to lock down access to their information and close the shutters against countless prying eyes.

But the pressing question for Facebook — and one that will also test the mettle of the new data protection standard — is whether or not the company is doing enough to comply with the new rules.

One important point re: Facebook and GDPR is that the standard applies globally, i.e. for all Facebook users whose data is processed by its international entity, Facebook Ireland (and thus within the EU); but not necessarily universally — with Facebook users in North America not legally falling under the scope of the regulation.

Users in North America will only benefit if Facebook chooses to apply the same standard everywhere. (And on that point the company has stayed exceedingly fuzzy.)

It has claimed it won’t give US and Canadian users second tier status vs the rest of the world where their privacy is concerned — saying they’re getting the same “settings and controls” — but unless or until US lawmakers spill some ink of their own there’s nothing but an embarrassing PR message to regulate what Facebook chooses to do with Americans’ data. It’s the data protection principles, stupid.

Zuckerberg was asked by US lawmakers last week what kind of regulation he would and wouldn’t like to see laid upon Internet companies — and he made a point of arguing for privacy carve outs to avoid falling behind, of all things, competitors in China.

Which is an incredibly chilling response when you consider how few rights — including human rights — Chinese citizens have. And how data-mining digital technologies are being systematically used to expand Chinese state surveillance and control.

The ugly underlying truth of Facebook’s business is that it also relies on surveillance to function. People’s lives are its product.

That’s why Zuckerberg couldn’t tell US lawmakers to hurry up and draft their own GDPR. He’s the CEO saddled with trying to sell an anti-privacy, anti-transparency position — just as policymakers are waking up to what that really means.

Plus ça change?

Facebook has announced a series of updates to its policies and platform in recent months, which it’s said are coming to all users (albeit in ‘phases’). The problem is that most of what it’s proposing to achieve GDPR compliance is simply not adequate.

Coincidentally many of these changes have been announced amid a major data mishandling scandal for Facebook, in which it’s been revealed that data on up to 87M users was passed to a political consultancy without their knowledge or consent.

It’s this scandal that led Zuckerberg to be perched on a booster cushion in full public view for two days last week, dodging awkward questions from US lawmakers about how his advertising business functions.

He could not tell Congress there wouldn’t be other such data misuse skeletons in its closet. Indeed the company has said it expects it will uncover additional leaks as it conducts a historical audit of apps on its platform that had access to “a large amount of data”. (How large is large, one wonders… )

But whether Facebook’s business having enabled — in just one example — the clandestine psychological profiling of millions of Americans for political campaign purposes ends up being the final, final straw that catalyzes US lawmakers to agree their own version of GDPR is still tbc.

Any new law will certainly take time to formulate and pass. In the meanwhile GDPR is it.

The most substantive GDPR-related change announced by Facebook to date is the shuttering of a feature called Partner Categories — in which it allowed the linking of its own information holdings on people with data held by external brokers, including (for example) information about people’s offline activities.

Evidently finding a way to close down the legal liabilities and/or engineer consent from users to that degree of murky privacy intrusion — involving pools of aggregated personal data gathered by goodness knows who, how, where or when — was a bridge too far for the company’s army of legal and policy staffers.

Other notable changes it has so far made public include consolidating settings onto a single screen vs the confusing nightmare Facebook has historically required users to navigate just to control what’s going on with their data (remember the company got a 2011 FTC sanction for “deceptive” privacy practices); rewording its T&Cs to make it more clear what information it’s collecting for what specific purpose; and — most recently — revealing a new consent review process whereby it will be asking all users (starting with EU users) whether they consent to specific uses of their data (such as processing for facial recognition purposes).

As my TC colleague Josh Constine wrote earlier in a critical post dissecting the flaws of Facebook’s approach to consent review, the company is — at the very least — not complying with the spirit of GDPR.

Indeed, Facebook appears pathologically incapable of abandoning its long-standing modus operandi of socially engineering consent from users (doubtless fed via its own self-reinforced A/B testing ad expertise). “It feels obviously designed to get users to breeze through it by offering no resistance to continue, but friction if you want to make changes,” was his summary of the process.

But, as we’ve pointed out before, concealment is not consent.

To get into a few specifics, pre-ticked boxes — which is essentially what Facebook is deploying here, with a big blue “accept and continue” button designed to grab your attention as it’s juxtaposed against an anemic “manage data settings” option (which if you even manage to see it and read it sounds like a lot of tedious hard work) — aren’t going to constitute valid consent under GDPR.

Nor is this what ‘privacy by default’ looks like — another staple principle of the regulation. On the contrary, Facebook is pushing people to do the opposite: Give it more of their personal information — and fuzzing why it’s asking by bundling a range of usage intentions.

The company is risking a lot here.

In simple terms, seeking consent from users in a way that’s not fair because it’s manipulative means consent is not being freely given. Under GDPR, it won’t be consent at all. So Facebook appears to be seeing how close to the wind it can fly to test how regulators will respond.

Safe to say, EU lawmakers and NGOs are watching.

“Yes, they will be taken to court”

“Consent should not be regarded as freely given if the data subject has no genuine or free choice or is unable to refuse or withdraw consent without detriment,” runs one key portion of GDPR.

Now compare that with: “People can choose to not be on Facebook if they want” — which was Facebook’s deputy chief privacy officer, Rob Sherman’s, paper-thin defense to reporters for the lack of an overall opt out for users to its targeted advertising.

Data protection experts who TechCrunch spoke to suggest Facebook is failing to comply with not just the spirit but the letter of the law here. Some were exceedingly blunt on this point.

“I am less impressed,” said law professor Mireille Hildebrandt discussing how Facebook is railroading users into consenting to its targeted advertising. “It seems they have announced that they will still require consent for targeted advertising and refuse the service if one does not agree. This violates [GDPR] art. 7.4 jo recital 43. So, yes, they will be taken to court.”

Facebook says users must accept targeted ads even under new EU law: NO THEY MUST NOT, there are other types of advertising, subscription etc. https://t.co/zrUgsgxtwo

— Mireille Hildebrandt (@mireillemoret) April 18, 2018

“Zuckerberg appears to view the combination of signing up to T&Cs and setting privacy options as ‘consent’,” adds cyber security professor Eerke Boiten. “I doubt this is explicit or granular enough for the personal data processing that FB do. The default settings for the privacy settings certainly do not currently provide for ‘privacy by default’ (GDPR Art 25).

“I also doubt whether FB Custom Audiences work correctly with consent. FB finds out and retains a small bit of personal info through this process (that an email address they know is known to an advertiser), and they aim to shift the data protection legal justification on that to the advertisers. Do they really then not use this info for future profiling?”

That looming tweak to the legal justification of Facebook’s Custom Audiences feature — a product which lets advertisers upload contact lists in a hashed form to find any matches among its own user-base (so those people can be targeted with ads on Facebook’s platform) — also looks problematical.
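For context, the matching mechanics themselves are straightforward. Here is a sketch of hashed list matching as commonly implemented; it is illustrative only, not Facebook’s actual pipeline:

```python
# Hashed contact matching: advertiser and platform both hash normalized emails,
# and only the intersection is used for targeting, so raw addresses are never
# exchanged. Illustrative sketch, not Facebook's actual implementation.
import hashlib

def normalize_and_hash(email: str) -> str:
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

advertiser_list = {normalize_and_hash(e) for e in
                   ["Alice@Example.com", "bob@example.com"]}
platform_users = {normalize_and_hash(e) for e in
                  ["bob@example.com", "carol@example.com"]}

matches = advertiser_list & platform_users  # users who can now be shown ads
```

Note the detail Boiten flags above: in the course of matching, the platform necessarily learns that an email address it knows is also known to an advertiser, a small but real piece of personal data.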

Here the company seems to be intending to try to claim a change in the legal basis, pushed out via new terms in which it instructs advertisers to agree they are the data controller (and it is merely a data processor). And thereby seek to foist a greater share of the responsibility for obtaining consent to processing user data onto its customers.

However such legal determinations are simply not a matter of contract terms. They are based on the fact of who is making decisions about how data is processed. And in this case — as other experts have pointed out — Facebook would be classed as a joint controller with any advertisers that upload personal data. The company can’t use a T&Cs change to opt out of that.

Wishful thinking is not a reliable approach to legal compliance.

Fear and manipulation of highly sensitive data

Over many years of privacy-hostile operation, Facebook has shown it has a major appetite for even very sensitive data. And GDPR does not appear to have blunted that.

Let’s not forget, facial recognition was a platform feature that got turned off in the EU, thanks to regulatory intervention. Yet here Facebook is now trying to use GDPR as a route to process this sensitive biometric data for international users after all — by pushing individual users to consent to it by dangling a few ‘feature perks’ at the moment of consent.

Veteran data protection and privacy consultant, Pat Walshe, is unimpressed.

“The sensitive data tool appears to be another data grab,” he tells us, reviewing Facebook’s latest clutch of ‘GDPR changes’. “Note the subtlety. It merges ‘control of sharing’ such data with FB’s use of the data “to personalise features and products”. From the info available that isn’t sufficient to amount to consent for such sensitive data and nor is it clear folks can understand the broader implications of agreeing.

“Does it mean ads will appear in Instagram? WhatsApp etc? The default is also set to ‘accept’ rather than ‘review and consider’. This is really sensitive data we’re talking about.”

“The face recognition suggestions are woeful,” he continues. “The second image is using an example… to manipulate and stoke fear — ‘we can’t protect you.’”

“Also, the choices and defaults are not compatible with [GDPR] Article 25 on data protection by design and default nor Recital 32… If I say no to facial recognition it’s unclear if other users can continue to tag me.”

Of course it goes without saying that Facebook users will keep uploading group photos, not just selfies. What’s less clear is whether Facebook will be processing the faces of other people in those shots who have not given (and/or never even had the opportunity to give) consent to its facial recognition feature.

People who might not even be users of its product.

But if it does that it will be breaking the law. Yet Facebook does indeed profile non-users — despite Zuckerberg’s claims to Congress not to know about its shadow profiles. So the risk is clear.

It can’t give non-users “settings and controls” not to have their data processed. So it’s already compromised their privacy — because it never gained consent in the first place.

New Mexico Representative Ben Lujan made this point to Zuckerberg’s face last week and ended the exchange with a call to action: “So you’re directing people that don’t even have a Facebook page to sign up for a Facebook page to access their data… We’ve got to change that.”

Facebook co-founder, chairman and CEO Mark Zuckerberg prepares to testify before the House Energy and Commerce Committee on Capitol Hill on April 11, 2018, his second day of testimony before Congress after it was reported that 87 million Facebook users had their personal information harvested by Cambridge Analytica, a British political consulting firm linked to the Trump campaign. (Photo by Chip Somodevilla/Getty Images)

But nothing in the measures Facebook has revealed so far, as its ‘compliance response’ to GDPR, suggest it intends to pro-actively change that.

Walshe also critically flags how — again, at the point of consent — Facebook’s review process deploys examples of the social aspects of its platform (such as how it can use people’s information to “suggest groups or other features or products”) as a tactic for manipulating people to agree to share religious affiliation data, for example.

“The social aspect is not separate to but bound up in advertising,” he notes, adding that the language also suggests Facebook uses the data.

Again, this smells a whole lot more like manipulation than GDPR compliance.

“I don’t believe FB has done enough,” adds Walshe, giving a view on Facebook’s GDPR preparedness ahead of the May 25 deadline for the framework’s application — as Zuckerberg’s Congress briefing notes suggested the company itself believes it has. (Or maybe it just didn’t want to admit to Congress that U.S. Facebook users will get lower privacy standards vs users elsewhere.)

“In fact I know they have not done enough. Their business model is skewed against privacy — privacy gets in the way of advertising and so profit. That’s why Facebook has variously suggested people may have to pay if they want an ad free model & so ‘pay for privacy’.”

“On transparency, there is a long way to go,” adds Boiten. “Friend suggestions, profiling for advertising, use of data gathered from like buttons and web pixels (also completely missing from “all your Facebook data”), and the newsfeed algorithm itself are completely opaque.”

“What matters most is whether FB’s processing decisions will be GDPR compliant, not what exact controls are given to FB members,” he concludes.

US lawmakers also pressed Zuckerberg on how much of the information his company harvests on people who have a Facebook account is revealed to them when they ask for it — via its ‘Download your data’ tool.

His answers on this appeared to intentionally misconstrue what was being asked — presumably in a bid to mask the ugly reality of the true scope and depth of the surveillance apparatus he commands. (Sometimes with a few special ‘CEO privacy privileges’ thrown in — like being able to selectively retract just his own historical Facebook messages from conversations, ahead of bringing the feature to anyone else.)

‘Download your Data’ is clearly partial and self-serving — and thus it also looks very far from being GDPR compliant.

Not even half the story

Facebook is not even complying with the spirit of current EU data protection law on data downloads. Subject Access Requests give individuals the right to request not just the information they have voluntarily uploaded to a service, but also the personal data the company holds about them, including a description of that data, the reasons it is being processed, and whether it will be given to any other organizations or people.

Facebook not only does not include people’s browsing history in the info it provides when you ask to download your data — which, incidentally, its own cookies policy confirms it tracks (via things like social plug-ins and tracking pixels on millions of popular websites) — it also does not include a complete list of advertisers on its platform that have your information.

Instead, after a wait, it serves up an eight-week snapshot. But even this two-month view can still stretch to hundreds of advertisers per individual.

If Facebook gave users a comprehensive list of advertisers’ access to their information the number of third party companies would clearly stretch into the thousands. (In some cases thousands might even be a conservative estimate.)

There’s plenty of other information harvested from users that Facebook also intentionally fails to divulge via ‘Download your data.’ And, to be clear, this isn’t a new problem: the company has a very long history of blocking this type of request.

In the EU it currently invokes an exception in Irish law to circumvent fuller compliance — which, even setting GDPR aside, raises some interesting competition law questions, as Paul-Olivier Dehaye told the UK parliament last month.

“‘All your Facebook data’ isn’t a complete solution,” agrees Boiten. “It misses the info Facebook uses for auto-completing searches; it misses much of the information they use for suggesting friends; and I find it hard to believe that it contains the full profiling information.”

“‘Ads Topics’ looks rather random and undigested, and doesn’t include the clear categories available to advertisers,” he further notes.

Facebook wouldn’t comment publicly about this when we asked. But it maintains its approach towards data downloads is GDPR compliant — and says it has reviewed what it offers with regulators to get feedback.

Earlier this week it also put out a wordy blog post attempting to defuse this line of attack by pointing the finger of blame at the rest of the tech industry — saying, essentially, that a whole bunch of other tech giants are at it too.

Which is not much of a moral defense even if the company believes its lawyers can sway judges with it. (Ultimately I wouldn’t fancy its chances; the EU’s top court has a robust record of defending fundamental rights.)

Think of the children…

What its blog post didn’t say — yet again — was anything about how all the non-users it nonetheless tracks around the web are able to have any kind of control over its surveillance of them.

And remember, some Facebook non-users will be children.

So yes, Facebook is inevitably tracking kids’ data without parental consent. Under GDPR that’s a majorly big no-no.

TC’s Constine had a scathing assessment of even the on-platform system that Facebook has devised in response to GDPR’s requirements on parental consent for processing the data of users who are between the ages of 13 and 15.

“Users merely select one of their Facebook friends or enter an email address, and that person is asked to give consent for their ‘child’ to share sensitive info,” he observed. “But Facebook blindly trusts that they’ve actually selected their parent or guardian… [Facebook’s] Sherman says Facebook is “not seeking to collect additional information” to verify parental consent, so it seems Facebook is happy to let teens easily bypass the checkup.”

So again, the company is being shown doing the minimum possible — in what might be construed as a cynical attempt to check another compliance box and carry on its data-sucking business as usual.

Given that intransigence it really will be up to the courts to bring the enforcement stick. Change, as ever, is a process — and hard won.

Hildebrandt is at least hopeful that a genuine reworking of Internet business models is on the way, albeit not overnight. And not without a fight.

“In the coming years the landscape of all this silly microtargeting will change, business models will be reinvented and this may benefit both the advertisers, consumers and citizens,” she tells us. “It will hopefully stave off the current market failure and the uprooting of democratic processes… Though nobody can predict the future, it will require hard work.”

Facebook, Microsoft and others sign anti-cyberattack pledge

Microsoft, Facebook and Cloudflare are among a group of technology firms that have signed a joint pledge committing publicly not to assist offensive government cyberattacks.

The pledge also commits them to work together to enhance security awareness and the resilience of the global tech ecosystem.

The four top-line principles the firms are agreeing to are [ALL CAPS theirs]:

  • 1. WE WILL PROTECT ALL OF OUR USERS AND CUSTOMERS EVERYWHERE.
  • 2. WE WILL OPPOSE CYBERATTACKS ON INNOCENT CITIZENS AND ENTERPRISES FROM ANYWHERE.
  • 3. WE WILL HELP EMPOWER USERS, CUSTOMERS AND DEVELOPERS TO STRENGTHEN CYBERSECURITY PROTECTION.
  • 4. WE WILL PARTNER WITH EACH OTHER AND WITH LIKEMINDED GROUPS TO ENHANCE CYBERSECURITY.

You can read the full Cybersecurity Tech Accord here.

So far 34 companies have signed up to the initiative, which was announced on the eve of the RSA Conference in San Francisco, including ARM, Cloudflare, Facebook, GitHub, LinkedIn, Microsoft and Telefonica.

In a blog post announcing the initiative, Microsoft’s Brad Smith writes that he’s hopeful more will soon follow.

“Protecting our online environment is in everyone’s interest,” says Smith. “The companies that are part of the Cybersecurity Tech Accord promise to defend and advance technology’s benefits for society. And we commit to act responsibly, to protect and empower our users and customers, and help create a safer and more secure online world.”

Notably not on the list are big tech’s other major guns: Amazon, Apple and Google. Nor, indeed, are most major mobile carriers (TC’s parent Oath’s parent Verizon is not yet a signatory, for example).

And, well, tech giants are often the most visible commercial entities bowing to political pressure to comply with ‘regulations’ that do the opposite of enhancing the security of their users living under certain regimes, merely to ensure continued market access for themselves.

But the accord raises more nuanced questions than who has not (yet) spilt ink on it.

What does ‘protect’ mean in this cybersecurity context? Are the companies which have signed up to the accord committing to protect their users from government mass surveillance programs, for example?

What about the problem of exploits being stockpiled by intelligence agencies — which might later leak and wreak havoc on innocent web users — as was apparently the case with the WannaCrypt malware?

Will the undersigned companies fight against (their own and other) governments doing that — in order to reduce security risks for all Internet users?

“We will strive to protect all our users and customers from cyberattacks — whether an individual, organization or government — irrespective of their technical acumen, culture or location, or the motives of the attacker, whether criminal or geopolitical,” sure sounds great in principle.

In practice this stuff gets very muddy and murky, very fast.

Perhaps the best element here is the commitment between the firms to work together for the greater security cause — including “to improve technical collaboration, coordinated vulnerability disclosure, and threat sharing, as well as to minimize the levels of malicious code being introduced into cyberspace”.

That at least may bear some tangible fruit.

Other security issues are far too tightly bound up with geopolitics for even a number of well-intentioned technology firms to be able to do much to shift the needle.

A flaw-by-flaw guide to Facebook’s new GDPR privacy changes

Facebook is about to start pushing European users to speed through giving consent for its new GDPR privacy law compliance changes. These ask users to review how Facebook uses data from around the web to target them with ads, the sensitive profile info they share, and facial recognition. But with a design that encourages rapidly hitting the “Agree” button, a lack of granular controls, a laughably cheatable parental consent request for teens, and an aesthetic overhaul of Download Your Information that doesn’t make it any easier to switch social networks, Facebook shows it’s still hungry for your data.

The new privacy change and terms of service consent flow will appear starting this week to European users, though they’ll be able to dismiss it for now, at least until the May 25th GDPR compliance deadline Facebook vowed to uphold in Europe. Meanwhile, Facebook says it will roll out the changes and consent flow globally over the coming weeks and months, though with some slight regional differences. And finally, all teens worldwide who share sensitive info will have to go through the weak new parental consent flow.

Facebook brought a group of reporters to the new Building 23 at its Menlo Park headquarters to preview the changes. But feedback was heavily critical as journalists grilled Facebook’s deputy chief privacy officer Rob Sherman. Questions centered on how Facebook makes accepting the updates much easier than reviewing or changing them, but Sherman stuck to talking points about how important it was to give users choice and information.

“Trust is really important and it’s clear that we have a lot of work to do to regain the trust of people on our service,” he said, giving us deja vu about Mark Zuckerberg’s testimonies before Congress. “We know that people won’t be comfortable using Facebook if they don’t feel that their information is protected.”

Trouble At Each Step Of Facebook’s Privacy Consent Flow

There are a ton of small changes, so we’ll lay out each one with our criticisms.

Facebook’s consent flow starts well enough, with an opening screen offering a solid overview of why it’s making changes for GDPR and what you’ll be reviewing. But with just an ‘X’ up top to back out, it’s already training users to speed through by hitting that big blue button at the bottom.

Sensitive Info

First up is control of your sensitive profile information, specifically your sexual preference, religious views, and political views. As you’ll see at each step, you can hit the pretty blue “Accept And Continue” button regardless of whether you’ve scrolled through the information. But if you hit the ugly grey “Manage Settings” button, you have to go through an interstitial where Facebook makes its argument to deter you from removing the info before letting you make and save your choice. It feels obviously designed to get users to breeze through by offering no resistance to continue, but friction if you want to make changes.

Facebook doesn’t let advertisers target you based on this sensitive info, which is good. The only exception is that in the US, your political views, alongside the political Pages and Events you interact with, inform the overarching personality categories that can be targeted with ads. But your only option here is either to remove any info you’ve shared in these categories so friends can’t see it, or to allow Facebook to use it to personalize the site. There’s no option to keep this stuff on your profile but not let Facebook use it.

Facial Recognition

The Face Recognition step won’t actually give users in the European Union a choice, as the feature is banned there. But everyone else will get to choose whether to leave their existing setting, which defaults to on, or turn the feature off. Here the lack of granularity is concerning. Users might want to see warnings about possible impersonators using their face in their profile pics, but not be suggested as someone to tag in their friends’ photos. Unfortunately, it’s all or nothing. While Facebook is right to make it simple to turn on or off completely, granular controls that unfold for those who want them would be much more empowering.

Data Collection Across The Web

A major concern that’s arisen in the wake of Zuckerberg’s testimonies is how Facebook uses data collected about you from around the web to target users with ads and optimize its service. While Facebook deputy chief privacy officer Rob Sherman echoed Zuckerberg in saying that users tell the company they prefer relevant ads, and that this data can help thwart hackers and scrapers, many users are unsettled by the offsite collection practices. Facebook lets you block it from targeting you with ads based on data about your browsing behavior on sites that show its Like and share buttons, conversion pixel, or Audience Network ads. The issue is that there’s no way to stop Facebook from using that data to personalize your News Feed or optimize other parts of its service.
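To make the offsite collection mechanism concrete, here’s a rough sketch of how a generic tracking pixel works. The endpoint, parameter names and helper below are illustrative assumptions, not Facebook’s actual embed code: any page that includes something like this makes the visitor’s browser call home to the tracker’s domain, cookies attached.

```typescript
// Minimal sketch of a generic tracking pixel; the endpoint and parameter
// names here are hypothetical, not Facebook's actual embed code.
function firePixel(trackerId: string): void {
  const params = new URLSearchParams({
    id: trackerId,              // identifies the embedding site or advertiser
    ev: "PageView",             // the event being reported
    dl: document.location.href, // the page the visitor is currently on
    rl: document.referrer,      // the page they arrived from
  });
  // Requesting a 1x1 image from the tracker's domain makes the browser
  // attach any cookies it holds for that domain, which is what lets
  // visits to unrelated sites be tied back to one logged-in profile.
  const img = new Image(1, 1);
  img.src = `https://tracker.example.com/tr?${params.toString()}`;
  document.body.appendChild(img);
}

firePixel("HYPOTHETICAL_PIXEL_ID");
```

The same principle applies whether the embed is a pixel, a Like button iframe or an Audience Network ad slot: the browser request itself is the data collection.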

New Terms Of Service

Facebook recently rewrote its Terms Of Service and Data Use Policy to be more explicit and easy to read. It didn’t make any significant changes other than noting the policy now applies to its subsidiaries like Instagram, WhatsApp, and Oculus. That’s all clearly explained here, which is nice. But the fact that the button to reject the new Terms Of Service isn’t even a button (it’s a tiny ‘see your options’ hyperlink) shows how badly Facebook wants to avoid you closing your account. When Facebook’s product designer for the GDPR flow was asked if she thought this hyperlink was the best way to present the alternative to the big ‘I Accept’ button, she disingenuously said yes, eliciting scoffs from the room of reporters. It seems obvious that Facebook is trying to minimize the visibility of the path to account deletion rather than making it an obvious course of action if you don’t agree to its terms.

I requested Facebook actually show us what was on the other side of that tiny ‘see your options’ link, and this is what we got. First, Facebook doesn’t mention its temporary deactivation option, just the scary permanent delete option. Facebook recommends downloading your data before deleting your account, which you should. But the fact that you’ll have to wait (often a few hours) before you can download your data could push users to delay deletion and perhaps never resume. And only if you keep scrolling do you get to another tiny “I’m ready to delete my account” hyperlink instead of a real button.

Parental Consent

GDPR also implements new rules about how teens are treated, specifically users between the ages of 13 (the minimum age required to sign up for Facebook) and 15. If users in this age range have shared their religious views, political views, or sexual preference, Facebook requires them to either remove it or get parental consent to keep it. But the system for obtaining and verifying that parental consent is a joke.

Users merely select one of their Facebook friends or enter an email address, and that person is asked to give consent for their ‘child’ to share sensitive info. But Facebook blindly trusts that they’ve actually selected their parent or guardian, even though it has a feature for users to designate who their family is, and the kid could put anyone in the email field, including an alternate address they control. Sherman says Facebook is “not seeking to collect additional information” to verify parental consent, so it seems Facebook is happy to let teens easily bypass the checkup.
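To spell out why the checkup is so easy to game, here’s a minimal sketch of the flow as described above; the types and names are hypothetical, not Facebook’s actual implementation.

```typescript
// Hypothetical sketch of the parental consent flow as described; the
// types and names are illustrative, not Facebook's actual code.
interface ConsentRequest {
  teenId: string;
  // Whoever the teen picks: any Facebook friend, or any email address,
  // including an alternate address the teen controls.
  approverContact: string;
}

function recordParentalConsent(req: ConsentRequest, approved: boolean): boolean {
  // Note what's missing: no check that approverContact is a designated
  // family member, and no identity or age verification at all. Whoever
  // receives the request simply clicks yes or no.
  return approved;
}

// A teen can therefore "consent" on their own behalf via a second inbox.
const bypassed = recordParentalConsent(
  { teenId: "teen-123", approverContact: "my.other.address@example.com" },
  true
);
```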

Privacy Shortcuts

To keep all users abreast of their privacy settings, Facebook has redesigned its Privacy Shortcuts in a colorful format that sticks out from the rest of the site. No complaints here.

Download Your Information

Facebook has completely redesigned its Download Your Information tool after keeping it basically the same for the past eight years. You can now view your content and data in different categories without downloading it, which, alongside the new Privacy Shortcuts, is perhaps the only unequivocally positive and unproblematic change amid today’s announcements.

And Facebook now lets you select certain categories of data, date ranges, JSON or HTML format, and image quality to download. That could make things quicker and easier if you just need a copy of a certain type of content but don’t need to export all your photos and videos, for example. Thankfully, Facebook says you’ll now be able to export your media in a higher resolution than the old tool allowed.

But the big problem here was the subject of my feature piece about Facebook’s lack of data portability. The Download Your Information tool is supposed to let you take your data and go to a different social network. But it only exports your social graph, aka your friends, as a text list of names. There are no links, usernames, or other unique identifiers unless friends opt in to letting you export their email or phone number, so good luck finding the right John Smith on another app. The new version of Download Your Information works exactly the same, rather than offering any interoperable format that would let you find your friends elsewhere.
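To see why a names-only export fails for portability, consider these hypothetical data shapes; they’re illustrative, not Facebook’s actual export schema.

```typescript
// Hypothetical shapes, not Facebook's actual export schema.
// What the tool effectively gives you today: just a display name.
interface NameOnlyFriend {
  name: string; // "John Smith": which of the thousands of John Smiths?
}

// What an interoperable export would need: at least one stable,
// consented identifier that another service could match on.
interface PortableFriend {
  name: string;
  profileUrl?: string; // a unique link to the account
  email?: string;      // present only if the friend opted in
  phone?: string;      // likewise opt-in
}

// With NameOnlyFriend, a rival network can only fuzzy-match on display
// names, which is exactly why "finding your friends elsewhere" breaks.
```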

A Higher Standard

Overall, it seems like Facebook is complying with the letter of the GDPR, but with questionable spirit. Sure, privacy is boring to a lot of people. Too little info and they feel confused and scared. Too many choices and screens and they feel overwhelmed and annoyed. Facebook struck the right balance in some places here. But the subtly pushy designs seem intended to steer people away from changing their defaults in ways that could hamper Facebook’s mission and business.

Giving the choices even visual weight, rather than burying the ways to make changes in grayed-out buttons and tiny links, would have been fairer. And it would have shown that Facebook has faith in the value it provides, such that users would stick around and leave features enabled if they truly wanted to.

When questioned about this, Sherman pointed the finger at other tech companies, saying he thought Facebook was more upfront with users. Asked to clarify whether he thought Facebook’s approach was “better,” he said, “I think that’s right.” But Facebook isn’t being judged by the industry standard, because it’s not a standard company. It’s built its purpose and its business on top of our private data, and touted itself as a boon to the world. But when asked to clear a higher bar for privacy, Facebook resorted to design tricks to keep from losing our data.

Electric scooter permits will be required in San Francisco

The San Francisco Board of Supervisors unanimously voted today to approve the ordinance that looks to regulate electric scooters in San Francisco. The ordinance seeks to establish regulation and a permitting process that would enable the San Francisco Municipal Transportation Agency or Department of Public Works to take action against scooters from companies that don’t have an official permit from the city.

“Part of the brouhaha has been really the function of the fact, which was admitted yesterday, was that some of these companies have been a little bit fast and loose with the truth,” Supervisor Aaron Peskin, a sponsor of the ordinance, said today at the Board of Supervisors meeting.

Peskin is referencing the fact that Lime, Spin and Bird deployed their respective scooters without permission from the city. The permitting scheme the city has in mind, Peskin said, is very similar to the one San Francisco has in place around stationless bike-sharing.

“This is a basic permitting scheme to allow the professional staff at SFMTA to permit these with sensible, regulatory frameworks and to be able to confiscate unpermitted vehicles or devices,” Peskin said.

He added that these electric scooters can absolutely provide some benefits to people in San Francisco, but that this does not mean the city should have to sacrifice its sidewalk space. The next step is for the BOS to continue working with the SFMTA to develop this regulation. At a hearing yesterday, the SFMTA said it hopes to open up the permitting process by May 1.

Earlier in the meeting today, the BOS adopted a resolution to develop a working group to inform future legislation around emerging technologies. One of the resolution’s sponsors, Supervisor Norman Yee, noted how he’s heard from seniors and people in wheelchairs who are “being imperiled and inconvenienced because they are having to navigate around scooters and bikes.”

He later added that the purpose of the working group would be to ensure the city is mindful of both the intended and unintended consequences of emerging technologies.

Yesterday, SF City Attorney Dennis Herrera sent cease-and-desist letters to Lime, Bird and Spin, but that doesn’t seem to have made any difference: all three companies’ scooters were found on the streets of San Francisco this morning.

“As it says in the letter, the City Attorney has laid out some recommendations for operation that he would like to see implemented by April 30; he has not requested an immediate stoppage of service,” a Bird spokesperson told TechCrunch. “We are taking his concerns very seriously and reviewing his recommendations for improving Bird in San Francisco.”

I’ve reached out to Lime and Spin about their respective operations in San Francisco. I’ll update this story if I hear back.

An earlier version of this story misattributed Supervisor Aaron Peskin’s quotes to another supervisor.