Alphabet, Apple, Amazon and Facebook are in the crosshairs of the FTC and DOJ

A new deal between the Department of Justice and the Federal Trade Commission will see U.S. regulators divide and conquer as they expand their oversight of Apple, Alphabet, Amazon and Facebook, according to multiple reports.

New details have emerged of an agreement between the two U.S. regulators that would see the Federal Trade Commission take the reins in antitrust investigations of Amazon and Facebook, while the Department of Justice will oversee investigations of Alphabet’s Google business and Apple.

Shares of Apple tumbled on news of the Justice Department’s role in overseeing the company, which Reuters first reported even as the company was in the middle of its developer conference celebrating all things Apple.

Google has been here before. Eight years ago, the Federal Trade Commission began an investigation into Google for antitrust violations, ultimately determining that the company had not violated any antitrust or anti-competitive statutes in its display of search results.

On Friday, The Washington Post reported that the Justice Department was initiating a federal antitrust investigation into Google’s business practices, according to multiple sources.

That report was followed by additional reporting from The Wall Street Journal indicating that Facebook would be investigated by the Federal Trade Commission.

The last time technology companies faced this kind of scrutiny was Google’s antitrust investigation, or the now twenty-one-year-old lawsuit brought by the Justice Department and multiple states against Microsoft.

But times have changed since Google had its hearing before a much friendlier audience of regulators under President Barack Obama.

These days, Republican and Democratic lawmakers are both making the case that big technology companies hold too much power in American political and economic life.

Issues around personal privacy, economic consolidation, misinformation and free speech are on the minds of both Republican and Democratic lawmakers. Candidates vying for the Democratic nomination in next year’s presidential election have made breaking up big technology companies a central component of their policy platforms.

Meanwhile, Republican lawmakers and agencies began stepping up their rhetoric and planning for how to oversee these companies beginning last September, when the Justice Department brought a group of the nation’s top prosecutors together to discuss technology companies’ growing power.

News of the increasing government activity sent technology stocks plummeting. Amazon shares were down $96 per share to $1,680.05 — an over 5% drop on the day. Shares of Alphabet tumbled to $1,031.53, a $74.76 decline, or 6.76%. The decline at Apple was more muted, with shares falling $2.97, or 1.7%, to $172.32, while Facebook slid $14.11, or 7.95%, to $163.36.
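Those percentages are consistent with the dollar moves; a quick back-of-the-envelope check, using the rounded prices as reported above:

```python
# Recompute each stock's percentage decline from the reported dollar move
# and closing price; the prior close is implied by close + decline.
drops = {
    "Amazon":   (96.00, 1680.05),
    "Alphabet": (74.76, 1031.53),
    "Apple":    (2.97, 172.32),
    "Facebook": (14.11, 163.36),
}

for name, (decline, close) in drops.items():
    prior = close + decline
    pct = 100 * decline / prior
    print(f"{name}: -{pct:.2f}% (from ${prior:,.2f})")
# Amazon: -5.41%, Alphabet: -6.76%, Apple: -1.69%, Facebook: -7.95%
```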

In Senate confirmation hearings in January, the new Attorney General William Barr noted that technology companies would face more time under the regulatory microscope during his tenure, according to The Wall Street Journal.

“I don’t think big is necessarily bad, but I think a lot of people wonder how such huge behemoths that now exist in Silicon Valley have taken shape under the nose of the antitrust enforcers,” Barr said. “You can win that place in the marketplace without violating the antitrust laws, but I want to find out more about that dynamic.”

The formula behind San Francisco’s startup success

Why has San Francisco’s startup scene generated so many hugely valuable companies over the past decade?

That’s the question we asked over the past few weeks while analyzing San Francisco startup funding, exit, and unicorn creation data. After all, it’s not as if founders of Uber, Airbnb, Lyft, Dropbox and Twitter had to get office space within a couple of miles of each other.

We hadn’t thought our data-centric approach would yield a clear recipe for success. San Francisco private and newly public unicorns are a diverse bunch, numbering more than 30, in areas ranging from ridesharing to online lending. Surely the path to billion-plus valuations would be equally varied.

But surprisingly, many of their secrets to success seem formulaic. The most valuable San Francisco companies to arise in the era of the smartphone have a number of shared traits, including a willingness and ability to post massive, sustained losses; high-powered investors; and a preponderance of easy-to-explain business models.

No, it’s not a recipe that’s likely replicable without talent, drive, connections and timing. But if you’ve got those ingredients, following the principles below might provide a good shot at unicorn status.

First you conquer, then you earn

Losing money is not a bug. It’s a feature.

First, lose money until you’ve left your rivals in the dust. This is the most important rule. It is the collective glue that holds the narratives of San Francisco startup success stories together. And while companies in other places have thrived with the same practice, arguably San Franciscans do it best.

It’s no secret that a majority of the most valuable internet and technology companies citywide lose gobs of money or post tiny profits relative to valuations. Uber, called the world’s most valuable startup, reportedly lost $4.5 billion last year. Dropbox lost more than $100 million after losing more than $200 million the year before and more than $300 million the year before that. Even Airbnb, whose model of taking a share of homestay revenues sounds like an easy recipe for returns, took nine years to post its first annual profit.

Not making money can be the ultimate competitive advantage, if you can afford it.

Industry stalwarts lose money, too. Salesforce, with a market cap of $88 billion, has posted losses for the vast majority of its operating history. Square, valued at nearly $20 billion, has never been profitable on a GAAP basis. DocuSign, the 15-year-old newly public company that dominates the e-signature space, lost more than $50 million in its last fiscal year (and more than $100 million in each of the two preceding years). Of course, these companies, like their unicorn brethren, invest heavily in growing revenues, attracting investors who value this approach.

We could go on. But the basic takeaway is this: Losing money is not a bug. It’s a feature. One might even argue that entrepreneurs in metro areas with a more fiscally restrained investment culture are missing out.

What’s also noteworthy is the propensity of so many city startups to wreak havoc on existing, profitable industries without generating big profits themselves. Craigslist, a San Francisco nonprofit, may have started the trend in the 1990s by blowing up the newspaper classified business. Today, Uber and Lyft have decimated the value of taxi medallions.

Not making money can be the ultimate competitive advantage, if you can afford it, as it prevents others from entering the space or catching up as your startup gobbles up greater and greater market share. Then, when rivals are out of the picture, it’s possible to raise prices and start focusing on operating in the black.

Raise money from investors who’ve done this before

You can’t lose money on your own. And you can’t lose any old money, either. To succeed as a San Francisco unicorn, it helps to lose money provided by one of a short list of prestigious investors who have previously backed valuable, unprofitable Northern California startups.

It’s not a mysterious list. Most of the names are well-known venture and seed investors who’ve been actively investing in local startups for many years and commonly feature on rankings like the Midas List. We’ve put together a few names here.

You might wonder why it’s so much better to lose money provided by Sequoia Capital than, say, a lower-profile but still wealthy investor. We could speculate that the following factors are at play: a firm’s reputation for selecting winning startups, a willingness of later investors to follow these VCs at higher valuations and these firms’ skill in shepherding portfolio companies through rapid growth cycles to an eventual exit.

Whatever the exact connection, the data speaks for itself. The vast majority of San Francisco’s most valuable private and recently public internet and technology companies have backing from investors on the short list, commonly beginning with early-stage rounds.

Pick a business model that relatives understand

Generally speaking, you don’t need to know a lot about semiconductor technology or networking infrastructure to explain what a high-valuation San Francisco company does. Instead, it’s more along the lines of: “They have an app for getting rides from strangers,” or “They have an app for renting rooms in your house to strangers.” It may sound strange at first, but pretty soon it’s something everyone seems to be doing.

It’s not a recipe that’s likely replicable without talent, drive, connections and timing.

Our list of 32 San Francisco-based unicorns and near-unicorns is populated mostly with companies that have widely understood brands, including Pinterest, Instacart and Slack, along with Uber, Lyft and Airbnb. While there are some lesser-known enterprise software names, they’re not among the largest investment recipients.

Part of the consumer-facing, high brand recognition qualities of San Francisco startups may be tied to the decision to locate in an urban center. If you were planning to manufacture semiconductor components, for instance, you would probably set up headquarters in a less space-constrained suburban setting.

Reading between the lines of red ink

While it can be frustrating to watch a company lurch from quarter to quarter without a profit in sight, there is ample evidence the approach can be wildly successful over time.

Seattle’s Amazon is probably the poster child for this strategy. Jeff Bezos, recently declared the world’s richest man, led the company for nearly a decade before reporting its first annual profit.

These days, San Francisco seems to be the epicenter of this company-building technique. While it’s certainly not necessary to locate here, it does seem to be the single urban location most closely associated with massively scalable, money-losing, consumer-facing startups.

Perhaps it’s just one of those things that after a while becomes status quo. If you want to be a movie star, you go to Hollywood. And if you want to make it on Wall Street, you go to Wall Street. Likewise, if you want to make it by launching an industry-altering business with a good shot at a multi-billion-dollar valuation, all while losing eye-popping sums of money, then you go to San Francisco.

Regulus Cyber launches with a technology to secure autonomous vehicles

Over the next 20 years the autonomous vehicle market is expected to grow into a $700 billion industry as robots take over nearly every aspect of mobility.

One of the key arguments for this shift away from manually operated machines is that they offer greater safety thanks to less risk of human error. But as these autonomous vehicles proliferate, there needs to be a way to ensure that these systems aren’t exposed to the same kinds of hacking threats that have bedeviled the tech industry since its creation.

It’s the rationale behind Regulus Cyber, a new Israeli security technology developer founded by Yonatan Zur and Yoav Zangvil — two longtime professionals from Israel’s aerospace and defense industry.

“We’re building a system that is looking at different sensors, and the first system is GPS,” Zur says. Using a proprietary array of off-the-shelf antennas and software developed internally, the system Regulus has designed can determine whether a GPS signal is legitimate or has been spoofed by a hacker (think of it as a way to defend against the kind of hack used by the bad guys in “Die Hard 2”).
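Regulus hasn’t published how its detection actually works, but to make the idea concrete, here is a hypothetical sketch of one classic array-based heuristic: real GPS satellites sit in many different directions, so their signals reach a multi-antenna array with satellite-specific phase differences, while a spoofer broadcasting every “satellite” from a single transmitter produces nearly identical patterns. The function and threshold below are illustrative assumptions, not Regulus’s algorithm.

```python
import numpy as np

def looks_spoofed(phase_offsets: np.ndarray, threshold: float = 0.05) -> bool:
    """Hypothetical spoofing heuristic for a multi-antenna GPS receiver.

    phase_offsets[i, j] holds the carrier-phase difference of satellite i's
    signal between antenna j and a reference antenna. Signals from real
    satellites arrive from different directions, so rows should differ;
    a single-transmitter spoofer yields nearly identical rows.
    """
    spread = phase_offsets.std(axis=0)        # per-antenna variation across satellites
    return bool(np.all(spread < threshold))   # suspiciously uniform => likely spoofed
```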

Zur first had the idea to launch the company three years ago while he was working with drones at the Israeli technology firm Elbit. At the time, militaries were beginning to develop technologies to combat drone operations, and Zur figured it was only a matter of time before those technologies made their way into the commercial drone market as well.

While the technology works for unmanned aerial vehicles, it also has applications for pretty much any type of autonomous transportation technology.

Backing the company are a clutch of well-known Israeli and American investors, including Sierra Ventures, Canaan Partners Israel, Technion and F2 Capital.

Regulus, which raised $6.3 million in financing before emerging from stealth, said the money will be used to expand its sales and marketing efforts and to continue to develop its technology.

The company’s first two products are a spoofing protection module that integrates with any autonomous vehicle and a communication security manager that protects against hacking and misdirection.

“We are very excited to lead this round of financing. Sensors security for autonomous machines will become as important as processors security. Regulus identified the key vulnerabilities and developed the best-in-class solutions,” said Ben Yu, a managing director of Sierra Ventures, in a statement. “Having been working with the company since seed funding, Sierra invested with strong confidence in the team to build Regulus into the category leader.”

Investing in frontier technology is (and isn’t) cleantech all over again

I entered the world of venture investing a dozen years ago.  Little did I know that I was embarking on a journey to master the art of balancing contradictions: building up experience and pattern recognition to identify outliers, emphasizing what’s possible over what’s actual, generating comfort and consensus around a maverick founder with a non-consensus view, seeking the comfort of proof points in startups that are still very early, and most importantly, knowing that no single lesson learned can ever be applied directly in the future as every future scenario will certainly be different.

I was fortunate to start my venture career at a fund specializing in funding “Frontier” technology companies. Real estate was white-hot, banks were practically giving away money, and VCs were hungry to fund hot startups.

I quickly found myself in the same room as mainstream software investors looking for what’s coming after search, social, ad-tech, and enterprise software. Cleantech was very compelling: an opportunity to make money while saving our planet.  Unfortunately for most, neither happened: they lost their money and did little to save the planet.

Fast forward a decade: after investors scored their wins in online lending, cloud storage, and on-demand, I find myself, again, in the same room with consumer and cloud investors venturing into “Frontier Tech”. They are dazzled by the founders’ presentations, and proud to have a role in turning the seemingly impossible into the possible through science. However, what lessons did they take away from the Cleantech cycle? What should Frontier Tech founders and investors be thinking about to avoid the same fate?

Coming from a predominantly academic background, I was excited to be part of the emerging trend of funding founders leveraging technology to make how we generate, move, and consume our natural resources more efficient and sustainable. I was thrilled to be digging into technologies underpinning new batteries, photovoltaics, wind turbines, superconductors, and power electronics.

To prove out their business models, these companies needed to build out factories, supply chains, and distribution channels. It wasn’t long until the core technology development became a small piece of an otherwise complex, expensive operation. The hot energy startup factory started to look and feel mysteriously like a magnetic hard drive factory down the street. Wait a minute, that’s because much of the equipment and staff did come from factories making components for PCs; but this time they were making products for generating, storing, and moving energy more renewably. So what went wrong?

Whether it was solar, wind, or batteries, the metrics were pretty similar: dollars per megawatt, mass per megawatt, or, multiplying by time, dollars and mass per unit energy, whether for the factories or the systems. Energy is pretty abundant, so the race was on to produce and handle a commodity. Getting started as a real competitive business meant going BIG, as many of the metrics above depended on size and scale. Hundreds of millions of dollars of venture money only went so far.
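To make those commodity metrics concrete, here’s a toy calculation; the plant cost, capacity, and capacity factor are made-up illustrative numbers, and financing and operating costs are ignored:

```python
capex = 200_000_000          # hypothetical cost to build a 100 MW solar plant
capacity_mw = 100            # nameplate capacity
capacity_factor = 0.20       # fraction of nameplate output actually delivered
lifetime_mwh = capacity_mw * 25 * 8760 * capacity_factor  # 25 years of output

print(f"${capex / capacity_mw:,.0f} per MW")    # $2,000,000 per MW
print(f"${capex / lifetime_mwh:,.2f} per MWh")  # ~$45.66 per MWh of lifetime output
```

Every competitor gets scored on the same two numbers, which is what makes the product a commodity and scale the only lever.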

The onus was on banks, private equity, engineering firms, and other entities that do not take technology risk to take a leap of faith and carry a product or factory from 1/10th scale to full scale. The rest is history: most cleantech startups hit a funding valley of death. They needed to raise big money while sitting at high valuations, without the kernel of a real business to attract the investors who write big checks to scale up businesses.

How are Frontier-Tech companies advantaged relative to their Cleantech counterparts? For starters, most aren’t producing a commodity…

Frontier Tech, like Cleantech, can be capital-intensive. Whether it’s satellite communications, driverless cars, AI chips, or quantum computing, relatively large amounts of capital are needed to take these startups to the point where they can demonstrate the kernel of a competitive business. In other words, they typically need at least tens of millions of dollars to show they can sell something and profitably scale that business into a big market. Some money is dedicated to technology development, but, like Cleantech, a disproportionate amount will go into building up an operation to support the business. Here are a couple of examples:

  • Satellite communications: It takes a few million dollars to demonstrate a new radio and spacecraft. It takes tens of millions of dollars to produce the satellites, put them into orbit, build up ground station infrastructure, and develop the software, systems, and operations needed to serve fickle enterprise customers. All of this while facing competition from incumbent or in-house efforts. At what point will the economics of the business attract a conventional growth investor to fund expansion? If Cleantech taught us anything, it’s that the big money would prefer to watch from the sidelines for longer than you’d think.
  • Quantum compute: Moore’s law is improving new computers at a breakneck pace, but the way they get implemented is pretty incremental. Basic compute architectures date back to the dawn of computing, and new devices can take decades to find their way into servers. For example, NAND Flash technology dates back to the ’80s, found its way into devices in the ’90s, and has been slowly penetrating datacenters in the past decade. Same goes for GPUs, even with all the hype around AI. Quantum compute companies can offer a service direct to users, i.e., homomorphic computing, advanced encryption/decryption, or molecular simulations. However, that would be one of the rare occasions where a novel computing machine company has offered computing as a service, as opposed to just selling machines. If I had to guess, building the quantum computers will be relatively quick; building the business will be expensive.
  • Operating systems for driverless cars: Tremendous progress has been made since Google first presented its early work in 2011. Dozens of companies are building software that does some combination of perception, prediction, planning, mapping, and simulation. Every operator of autonomous cars, whether vertical like Zoox or working in partnerships like GM/Cruise, has its own proprietary technology stack. Unlike building an iPhone app, where the tools are abundant and the platform is well understood, integrating a complete software module into an autonomous driving system may take more effort than putting together the original code in the first place.

How are Frontier-Tech companies advantaged relative to their Cleantech counterparts? For starters, most aren’t producing a commodity: it’s easier to build a Frontier-tech company that doesn’t need to raise big dollars before demonstrating the kernel of an interesting business. On rare occasions, if the Frontier tech startup is a pioneer in its field, then it can be acquired for top dollar for the quality of its results and its team.

Recent examples are Salesforce’s acquisition of MetaMind, GM’s acquisition of Cruise, and Intel’s acquisition of Nervana (a Lux investment). However, as more competing companies get to work on a new technology, the sense of urgency to acquire rapidly diminishes as the scarce, emerging technology quickly becomes widely available: there are now scores of AI, autonomous car, and AI chip companies out there. Furthermore, as technology becomes more complex, its cost of integration into a product (think about the driverless car example above) also skyrockets. Knowing this likely liability, acquirers will tend to pay less.

Creative founding teams will find ways to incrementally build interesting businesses as they are building up their technologies. 

I encourage founders and investors to emphasize the businesses they are building through their inventions. I encourage founders to rethink plans that require tens of millions of dollars before being able to sell products, while warning them not to chase revenue for the sake of revenue.

I suggest they look closely at their plans and find creative ways to start penetrating, or building, exciting markets, and hence interesting businesses, with modest amounts of capital. I advise them to work with investors who, regardless of whether they saw how Cleantech unfolded, are convinced that their dollars can take the company to the point where it can engage customers with an interesting product, with a sense for how it can scale into an attractive business.

By keeping its head in the cloud, Microsoft makes it rain on shareholders

Thanks in part to its colossal cloud business, Microsoft earnings are drenching shareholders in dollars.

For the quarter ending March 31, 2018, the tech ringer from Redmond saw its revenue increase 16 percent to $26.8 billion from $23.2 billion, with operating income up 23 percent to $8.3 billion from $6.7 billion.

Net income was a whopping $7.4 billion (up from $5.5 billion), and diluted earnings per share were 95 cents versus analyst expectations of 85 cents per share, according to FactSet.
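Running the arithmetic on those rounded figures bears the percentages out (the small deviations from the reported numbers come from rounding in the underlying, unrounded results):

```python
def growth(new: float, old: float) -> float:
    """Year-over-year growth rate, in percent."""
    return 100 * (new - old) / old

print(f"revenue:          {growth(26.8, 23.2):.1f}%")  # ~15.5%, reported as 16%
print(f"operating income: {growth(8.3, 6.7):.1f}%")    # ~23.9%, reported as 23%
print(f"net income:       {growth(7.4, 5.5):.1f}%")    # ~34.5%
```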

Despite the earnings beat, shares of the company stock fell as much as 1 percent in after-hours trading on the Nasdaq stock exchange.

Floating much of Microsoft’s success for the quarter was the continued strength of the company’s cloud business, which chief executive Satya Nadella singled out in a statement.

“Our results this quarter reflect the trust people and organizations are placing in the Microsoft Cloud,” Nadella said. “We are innovating across key growth categories of infrastructure, AI, productivity and business applications.”

The company also returned $6.3 billion to shareholders in dividends and share repurchases in its fiscal third quarter of 2018, an increase of 37 percent.

The company notched wins across the board. In addition to the growth of its cloud business — led by Azure (which grew 93 percent) — Microsoft also recorded strong growth from LinkedIn, where revenue increased 37 percent to $1.3 billion, while hardware revenue from the Surface line rose 32 percent.

Even the move of Microsoft Office into a hosted business seems to have stanched the bleeding from the company’s former cash cow. The company counts 135 million business users on Office 365, and 30.6 million consumer subscriptions for the service.

The Surface numbers are notable because it’s perhaps the first indication that its hardware successes aren’t necessarily limited to the Xbox (insert Zune joke here).

Leaked iPhone pics show glass back and headphone jack

The headphone jack could still have a future in an iPhone. These leaked pics show an iPhone SE 2 with a glass back and headphone jack. Like the current iPhone SE, the design seems to be a take on the classic iPhone 5. I dig it.

The leak also states the upcoming device sports wireless charging, which puts it in line with the iPhone 8 and iPhone X.

Rumors have long stated that Apple was working on an updated iPhone SE. The original was released in March 2016 and updated a year later with improved specs. With a 4-inch screen, the iPhone SE is the smallest iPhone Apple offers and also the cheapest.

WWDC in early June is the next major Apple event and could play host to the launch of this phone. Last month, around the iPhone SE’s birthday, Apple held a special event in a Chicago school to launch an education-focused iPad. It’s logical that Apple pushed the launch of this new iPhone SE to WWDC to give the iPad event breathing room.

While Apple cut the headphone jack from its flagship devices, the SE looks to retain the connection. It makes sense. The low-cost iPhone is key for Apple in growing markets across the world, where the last two models helped grow iOS’s market penetration. As Apple’s low-cost offering, the SE targets buyers who can’t be expected to also spring for its wireless earbuds.

If released at WWDC or later in the year, the iPhone SE looks to serve consumers who enjoy smaller phones with headphone jacks. That’s me.

Meet the quantum blockchain that works like a time machine

A new – and theoretical – system for blockchain-based data storage could ensure that hackers will not be able to crack cryptocurrencies once we all enter the quantum era. The idea, proposed by researchers at the Victoria University of Wellington in New Zealand, would secure our crypto futures for decades to come using a blockchain technology that is like a time machine.

You can check out their findings here.

To understand what’s going on here we have to define some terms. A blockchain stores every transaction in a system on what amounts to an immutable record of events. The work necessary for maintaining and confirming this immutable record is what is commonly known as mining. But this technology – which the paper’s co-author Del Rajan claims will make up “10 percent of global GDP… by 2027” – will become insecure in an era of quantum computers.
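To ground those terms, here’s a minimal toy sketch of the classical structure the researchers start from: each block commits to the hash of the block before it, which is what makes the record effectively immutable. (This is an illustration, not any real cryptocurrency’s implementation.)

```python
import hashlib, json, time

def make_block(transactions: list, prev_hash: str) -> dict:
    """Build a block that commits to its predecessor's hash. Altering any
    historical block changes its hash, which breaks every later link."""
    block = {"time": time.time(), "tx": transactions, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    block["hash"] = digest
    return block

genesis = make_block(["alice -> bob: 5"], prev_hash="0" * 64)
block2 = make_block(["bob -> carol: 2"], prev_hash=genesis["hash"])
assert block2["prev"] == genesis["hash"]  # the "chain" in blockchain
```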

Therefore the proposed solution for storing a blockchain in the quantum era is a quantum blockchain built from a series of entangled photons. As IEEE Spectrum writes: “Essentially, current records in a quantum blockchain are not merely linked to a record of the past, but rather a record in the past, one that does not exist anymore.”

Yeah, it’s weird.

From the paper intro:

Our method involves encoding the blockchain into a temporal GHZ (Greenberger–Horne–Zeilinger) state of photons that do not simultaneously coexist. It is shown that the entanglement in time, as opposed to an entanglement in space, provides the crucial quantum advantage. All the subcomponents of this system have already been shown to be experimentally realized. Perhaps more shockingly, our encoding procedure can be interpreted as non-classically influencing the past; hence this decentralized quantum blockchain can be viewed as a quantum networked time machine.
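For reference, a conventional (spatially entangled) GHZ state of three photons is the maximally entangled superposition below; the paper’s twist is to build the analogous state out of photons that exist at different times rather than different places:

```latex
|\mathrm{GHZ}\rangle = \frac{1}{\sqrt{2}}\left(|000\rangle + |111\rangle\right)
```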

In short, the quantum blockchain is immutable because the photons that it contains do not exist in the current time but are still extant and readable. This means you can see the entire blockchain but you cannot “touch” it, and the only entry you would be able to try to tamper with is the most recent one. In fact, the researchers write, “In this spatial entanglement case, if an attacker tries to tamper with any photon, the full blockchain would be invalidated immediately.”

Is this really possible? The researchers note that the technology already exists.

“Our novel methodology encodes a blockchain into these temporally entangled states, which can then be integrated into a quantum network for further useful operations. We will also show that entanglement in time, as opposed to entanglement in space, plays the pivotal role for the quantum benefit over a classical blockchain,” the authors write. “As discussed below, all the subsystems of this design have already been shown to be experimentally realized. Furthermore, if such a quantum blockchain were to be constructed, we will show that it could be viewed as a quantum networked time machine.”

Don’t worry about having to update your Bitcoin wallet, though. This process is still very theoretical and not at all available to mere mortals. That said, it’s nice to know someone is looking out for our quantum future, however weird it may be.

Mobile guru Amol Sarva talks about the future of work

Amol Sarva has done some amazing stuff. A co-founder of Virgin Mobile USA, Sarva went on to create the Peek email device back when cheap, ubiquitous mobile devices were nowhere to be found. Now he runs Knotel, a unique workspace company aimed at up-and-coming startups.

In this episode of Technotopia I asked Sarva about his thoughts on work, interaction and the future of offices. In his vision we are all working together remotely using tools that could allow us to all directly interact simply by using our brains. It’s an odd — and cool — idea, and he’s a fun interview subject.

Technotopia is a podcast by John Biggs about a better future. You can subscribe in Stitcher, RSS or iTunes and listen to the MP3 here.

UK report urges action to combat AI bias

The need for diverse development teams and truly representational datasets to avoid biases being baked into AI algorithms is one of the core recommendations in a lengthy Lords committee report looking into the economic, ethical and social implications of artificial intelligence, published today by the upper House of the UK parliament.

“The main ways to address these kinds of biases are to ensure that developers are drawn from diverse gender, ethnic and socio-economic backgrounds, and are aware of, and adhere to, ethical codes of conduct,” the committee writes, chiming with plenty of extant commentary around algorithmic accountability.

“It is essential that ethics take centre stage in AI’s development and use,” adds committee chairman, Lord Clement-Jones, in a statement. “The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences.”

The report also calls for the government to take urgent steps to help foster “the creation of authoritative tools and systems for auditing and testing training datasets to ensure they are representative of diverse populations, and to ensure that when used to train AI systems they are unlikely to lead to prejudicial decisions” — recommending a publicly funded challenge to incentivize the development of technologies that can audit and interrogate AIs.

“The Centre for Data Ethics and Innovation, in consultation with the Alan Turing Institute, the Institute of Electrical and Electronics Engineers, the British Standards Institute and other expert bodies, should produce guidance on the requirement for AI systems to be intelligible,” the committee adds. “The AI development sector should seek to adopt such guidance and to agree upon standards relevant to the sectors within which they work, under the auspices of the AI Council” — the latter being a proposed industry body it wants established to help ensure “transparency in AI”.

The committee is also recommending a cross-sector AI Code to try to steer developments in a positive, societally beneficial direction — though not for this to be codified in law (the suggestion is it could “provide the basis for statutory regulation, if and when this is determined to be necessary”).

Among the five principles they’re suggesting as a starting point for the voluntary code are that AI should be developed for “the common good and benefit of humanity”, and that it should operate on “principles of intelligibility and fairness”.

Though, elsewhere in the report, the committee points out it can be a challenge for humans to understand decisions made by some AI technologies — going on to suggest it may be necessary to refrain from using certain AI techniques for certain types of use-cases, at least until algorithmic accountability can be guaranteed.

“We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take,” it writes in a section discussing ‘intelligible AI’. “In cases such as deep neural networks, where it is not yet possible to generate thorough explanations for the decisions that are made, this may mean delaying their deployment for particular uses until alternative solutions are found.”

A third principle the committee says it would like to see included in the proposed voluntary code is: “AI should not be used to diminish the data rights or privacy of individuals, families or communities”.

Though this is a curiously narrow definition — why not push for AI not to diminish rights, period?

“It’s almost as if ‘follow the law’ is too hard to say,” observes Sam Smith, a coordinator at patient data privacy advocacy group, medConfidential, discussing the report.

Looking at the tech industry as a whole, it’s certainly hard to conclude that self-defined ‘ethics’ appear to offer much of a meaningful check on commercial players’ data processing and AI activities.

Topical case in point: Facebook has continued to claim there was nothing improper about the fact millions of people’s information was shared with professor Aleksandr Kogan. People “knowingly provided their information” is the company’s defensive claim.

Yet the vast majority of people whose personal data was harvested from Facebook by Kogan clearly had no idea what was possible under its platform terms — which, until 2015, allowed one user to ‘consent’ to the sharing of all their Facebook friends’ data. (Hence ~270,000 downloaders of Kogan’s app being able to pass data on up to 87M Facebook users.)
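The amplification implied by those two figures is easy to check:

```python
installers = 270_000    # approximate downloaders of Kogan's app
affected = 87_000_000   # upper bound on users whose data was passed on
print(f"~{affected / installers:.0f} friends exposed per installer")  # ~322
```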

So Facebook’s self-defined ‘ethical code’ has been shown to be worthless — aligning completely with its commercial imperatives, rather than supporting users to protect their privacy. (Just as its T&Cs are intended to cover its own “rear end”, rather than clearly inform people about their rights, as one US congressman memorably put it last week.)

“A week after Facebook were criticized by the US Congress, the only reference to the Rule of Law in this report is about exempting companies from liability for breaking it,” Smith adds in a MedConfidential response statement to the Lords report. “Public bodies are required to follow the rule of law, and any tools sold to them must meet those legal obligations. This standard for the public sector will drive the creation of tools which can be reused by all.”

Health data “should not be shared lightly”

The committee, which took evidence from Google-owned DeepMind as one of a multitude of expert witnesses during more than half a year’s worth of enquiry, touches critically on the AI company’s existing partnerships with UK National Health Service Trusts.

The first of these, dating from 2015 — and involving the sharing of ~1.6 million patients’ medical records with the Google-owned company — ran into trouble with the UK’s data protection regulator. The UK’s information commissioner concluded last summer that the Royal Free NHS Trust’s agreement with DeepMind had not complied with UK data protection law.

Patients’ medical records were used by DeepMind to develop a clinical task management app wrapped around an existing NHS algorithm for detecting a condition known as acute kidney injury. The app, called Streams, has been rolled out for use in the Royal Free’s hospitals — complete with PR fanfare. But it’s still not clear what legal basis exists to share patients’ data.

“Maintaining public trust over the safe and secure use of their data is paramount to the successful widespread deployment of AI and there is no better exemplar of this than personal health data,” the committee warns. “There must be no repeat of the controversy which arose between the Royal Free London NHS Foundation Trust and DeepMind. If there is, the benefits of deploying AI in the NHS will not be adopted or its benefits realised, and innovation could be stifled.”

The report also criticizes the “current piecemeal” approach being taken by NHS Trusts to sharing data with AI developers — saying this risks “the inadvertent under-appreciation of the data” and “NHS Trusts exposing themselves to inadequate data sharing arrangements”.

“The data held by the NHS could be considered a unique source of value for the nation. It should not be shared lightly, but when it is, it should be done in a manner which allows for that value to be recouped,” the committee writes.

A similar point — about not allowing a huge store of potential value which is contained within publicly-funded NHS datasets to be cheaply asset-stripped by external forces — was made by Oxford University’s Sir John Bell in a UK government-commissioned industrial strategy review of the life sciences sector last summer.

Despite similar concerns, the committee also calls for a framework for sharing NHS data to be published by the end of the year, and is pushing for NHS Trusts to digitize their current practices and records — with a target deadline of 2022 — in “consistent formats” so that people’s medical records can be made more accessible to AI developers.

But worryingly, given the general thrust towards making sensitive health data more accessible to third parties, the committee does not seem to have a very fine-grained grasp of data protection in a health context — where, for example, datasets can be extremely difficult to render truly anonymous given the level of detail typically involved.

Although they are at least calling for the relevant data protection and patient data bodies to be involved in provisioning the framework for sharing NHS data, alongside Trusts that have already worked with DeepMind (and in one case received an ICO wrist-slap).

They write:

We recommend that a framework for the sharing of NHS data should be prepared and published by the end of 2018 by NHS England (specifically NHS Digital) and the National Data Guardian for Health and Care, with the support of the ICO [information commissioner’s office] and the clinicians and NHS Trusts which already have experience of such arrangements (such as the Royal Free London and Moorfields Eye Hospital NHS Foundation Trusts), as well as the Caldicott Guardians [the NHS’ patient data advocates]. This framework should set out clearly the considerations needed when sharing patient data in an appropriately anonymised form, the precautions needed when doing so, and an awareness of the value of that data and how it is used. It must also take account of the need to ensure SME access to NHS data, and ensure that patients are made aware of the use of their data and given the option to opt out.

As the Facebook-Cambridge Analytica scandal has clearly illustrated, opt-outs alone cannot safeguard people’s data or their legal rights — which is why incoming EU data protection rules (GDPR) beef up consent requirements to require a clear affirmative. (And it goes without saying that opt-outs are especially concerning in a medical context where the data involved is so sensitive — yet, at least in the case of a DeepMind partnership with Taunton and Somerset NHS Trust, patients do not even appear to have been given the ability to say no to their data being processed.)

Opt-outs (i.e. rather than opt-in systems) for data-sharing and self-defined/voluntary codes of ‘ethics’ demonstrably do very little to protect people’s legal rights where digital data is concerned — even if it’s true, for example, that Facebook holds itself in check vs what it could theoretically do with data, as company execs have suggested (one wonders what kind of stuff they’re voluntarily refraining from, given what they have been caught trying to manipulate).

The wider risk of relying on consumer savvy to regulate commercial data sharing is that an educated, technologically aware few might be able to lock down — or reduce — access to their information; but the mainstream majority will have no clue they need to or even how it’s possible. And data protection for a select elite doesn’t sound very equitable.

Meanwhile, at least where this committee’s attitude to AI is concerned, developers and commercial entities are being treated with favorable encouragement — via the notion of a voluntary (and really pretty basic) code of AI ethics — rather than being robustly reminded they need to follow the law.

Given the scope and scale of current AI-fueled scandals, that risks the committee looking naive.

The government has, though, made AI a strategic priority, and policies to foster and accelerate data-sharing to drive tech developments are a key part of its digital and industrial strategies. So the report needs to be read within that wider context.

The committee does add its voice to questions about whether/how legal liability will mesh with automated decision making — writing that “clarity is required” on whether “new mechanisms for legal liability and redress” are needed or not.

“We recommend that the Law Commission consider the adequacy of existing legislation to address the legal liability issues of AI and, where appropriate, recommend to Government appropriate remedies to ensure that the law is clear in this area,” it says on this. “At the very least, this work should establish clear principles for accountability and intelligibility. This work should be completed as soon as possible.”

But this isn’t exactly cutting edge commentary. Last month the government announced a three-year regulatory review focused on self-driving cars and the law, for instance. And the liability point is already generally well-aired — and in the autonomous cars case, at least, now having its tires extensively kicked in the UK.

What’s less specifically discussed in government circles is how AIs are demonstrably piling pressure on existing laws. And what — if anything — should be done to address those kind of AI-fueled breaking points. (Exceptions: Terrorist content spreading via online platforms has been decried for some years, with government ministers more than happy to make platforms and technologies their scapegoat and toughen laws; more recently hate speech on online platforms has also become a major political target for governments in Europe.)

The committee briefly touches on some of these societal pressure points in a section on AI’s impact on “social and political cohesion”, noting concerns raised to it about issues such as filter bubbles and the risk of AIs being used to manipulate elections. “[T]here is a rapidly growing need for public understanding of, and engagement with, AI to develop alongside the technology itself. The manipulation of data in particular will be a key area for public understanding and discussion in the coming months and years,” it writes here. 

However it has little in the way of gunpowder — merely recommending that research is commissioned into “the possible impact of AI on conventional and social media outlets”, and to investigate “measures which might counteract the use of AI to mislead or distort public opinion as a matter of urgency”.

Elsewhere in the report, it also raises an interesting concern about data monopolies — noting that investments by “large overseas technology companies in the UK economy” are “increasing consolidation of power and influence by a select few”, which it argues risks damaging the UK’s home-grown AI start-up sector.

But again there’s not much of substance in its response. The committee doesn’t seem to have formed its own ideas on how, or even whether, the government needs to address the way data concentrates power in the hands of big tech — beyond calling for “strong” competition frameworks. This lack of conviction is attributed to hearing mixed messages on the topic from its witnesses. (Though it may well also be related to the economic portion of the enquiry’s focus.)

“The monopolisation of data demonstrates the need for strong ethical, data protection and competition frameworks in the UK, and for continued vigilance from the regulators,” it concludes. “We urge the Government, and the Competition and Markets Authority, to review proactively the use and potential monopolisation of data by the big technology companies operating in the UK.”

The report also raises concerns about access to funding for UK AI startups to ensure they can continue scaling domestic businesses — recommending that a chunk of the £2.5BN investment fund at the British Business Bank, which the government announced in the Autumn Budget 2017, is “reserved as an AI growth fund for SMEs with a substantive AI component, and be specifically targeted at enabling such companies to scale up”.

No one who supports the startup cause would argue with trying to make more money available. But if data access has been sealed up by tech giants, all the scale-up funding in the world won’t help domestic AI startups break through that algorithmic ceiling.

Also touched on: The looming impact of Brexit, with the committee calling on the government to “commit to underwriting, and where necessary replacing, funding for European research and innovation programmes, after we have left the European Union”. Which boils down to another whistle in a now very long score of calls for replacement funding after the UK leaves the EU.

Funding for regulators is another concern, with a warning that the ICO must be “adequately and sustainably resourced” — as a result of the additional burden the committee expects AI to put on existing regulators.

This issue is also on the radar of the UK’s digital minister, Matt Hancock, who has said he’s considering what additional resources the ICO might need — such as the power to compel testimony from individuals. (Though the ICO itself has previously raised concerns that the minister and his data protection bill are risking undermining her authority.) For now it remains to be seen how well armed the agency will be to meet the myriad challenges generated and scaled by AI’s data processors.

“Blanket AI-specific regulation, at this stage, would be inappropriate,” the report adds. “We believe that existing sector-specific regulators are best placed to consider the impact on their sectors of any subsequent regulation which may be needed. We welcome that the Data Protection Bill and GDPR appear to address many of the concerns of our witnesses regarding the handling of personal data, which is key to the development of AI. The Government Office for AI, with the Centre for Data Ethics and Innovation, needs to identify the gaps, if any, where existing regulation may not be adequate. The Government Office for AI must also ensure that the existing regulators’ expertise is utilised in informing any potential regulation that may be required in the future.”

The committee’s last two starter principles for their voluntary AI code serve to underline how generously low the ethical bar is really being set here — boiling down to: AI shouldn’t be allowed to kill off free schools for our kids, nor be allowed to kill us — which may itself be another consequence of humans not always being able to clearly determine how AI does what it does or exactly what it might be doing to us.

Is America’s national security Facebook and Google’s problem?

Jamie Metzl
Contributor

Jamie Metzl is a Senior Fellow for Technology and National Security at the Atlantic Council.
Eleonore Pauwels
Contributor

Eleonore Pauwels is Director of the Anticipatory Intelligence Lab at the Wilson Center, an international science policy expert specializing in the governance and democratization of converging technologies, and a former official of the European Commission’s Directorate on Science, Economy and Society.

Outrage that Facebook made the private data of up to 87 million of its users available to the Trump campaign has stoked fears that big US-based technology companies are tracking our every move and misusing our personal data to manipulate us without adequate transparency, oversight, or regulation.

These legitimate concerns about the privacy threat these companies potentially pose must be balanced by an appreciation of the important role data-optimizing companies like these play in promoting our national security.

In his testimony to the combined US Senate Commerce and Judiciary Committees, Facebook CEO Mark Zuckerberg was not wrong to present his company as a last line of defense in an “ongoing arms race” with Russia and others seeking to spread disinformation and manipulate political and economic systems in the US and around the world.

The vast majority of the two billion Facebook users live outside the United States, Zuckerberg argued, and the US should be thinking of Facebook and other American companies competing with foreign rivals in “strategic and competitive” terms. Although the American public and US political leaders are rightly grappling with critical issues of privacy, we will harm ourselves if we don’t recognize the validity of Zuckerberg’s national security argument.

Facebook CEO and founder Mark Zuckerberg testifies during a US House Committee on Energy and Commerce hearing about Facebook on Capitol Hill in Washington, DC, April 11, 2018. (Photo: SAUL LOEB/AFP/Getty Images)

Examples are everywhere of big tech companies increasingly being seen as a threat. US President Trump has been on a rampage against Amazon, and multiple media outlets have called for the company to be broken up as a monopoly. A recent New York Times article, “The Case Against Google,” argued that Google is stifling competition and innovation and suggested it might be broken up as a monopoly. “It’s time to break up Facebook,” Politico argued, calling Facebook “a deeply untransparent, out-of-control company that encroaches on its users’ privacy, resists regulatory oversight and fails to police known bad actors when they abuse its platform.” US Senator Bill Nelson made a similar point when he asserted during the Senate hearings that “if Facebook and other online companies will not or cannot fix the privacy invasions, then we are going to have to. We, the Congress.”

While many concerns like these are valid, seeing big US technology companies solely in the context of fears about privacy misses the point that these companies play a far broader strategic role in America’s growing geopolitical rivalry with foreign adversaries. And while Russia is rising as a threat in cyberspace, China represents a more powerful and strategic rival in the 21st century tech convergence arms race.

Data is to the 21st century what oil was to the 20th: a key asset for driving wealth, power, and competitiveness. Only companies with access to the best algorithms and the biggest and highest quality data sets will be able to glean the insights and develop the models driving innovation forward. As Facebook’s failure to protect its users’ private information shows, these data pools are both extremely powerful and open to abuse. But because countries with the leading AI and pooled data platforms will have the most thriving economies, big technology platforms are playing a more important national security role than ever in our increasingly big data-driven world.

BEIJING, CHINA – 2017/07/08: Robots dance for the audience on the expo. On Jul. 8th, Beijing International Consumer electronics Expo was held in Beijing China National Convention Center. (Photo by Zhang Peng/LightRocket via Getty Images)

China, which has set a goal of becoming “the world’s primary AI innovation center” by 2025, occupying “the commanding heights of AI technology” by 2030, and the “global leader” in “comprehensive national strength and international influence” by 2050, understands this. To build a world-beating AI industry, Beijing has kept American tech giants out of the Chinese market for years and stolen their intellectual property while putting massive resources into developing its own strategic technology sectors in close collaboration with national champion companies like Baidu, Alibaba, and Tencent.

Examples of China’s progress are everywhere.

Close to a billion Chinese people use Tencent’s instant communication and cashless platforms. In October 2017, Alibaba announced a three-year investment of $15 billion for developing and integrating AI and cloud-computing technologies that will power the smart cities and smart hospitals of the future. Beijing is investing $9.2 billion in the golden combination of AI and genomics to lead personalized health research to new heights. More ominously, Alibaba is prototyping a new form of ubiquitous surveillance that deploys millions of cameras equipped with facial recognition within testbed cities and another Chinese company, Cloud Walk, is using facial recognition to track individuals’ behaviors and assess their predisposition to commit a crime.

In all of these areas, China is ensuring that individual privacy protections do not get in the way of bringing together the massive data sets Chinese companies will need to lead the world. As Beijing well understands, training technologists, amassing massive high-quality data sets, and accumulating patents are key to competitive and security advantage in the 21st century.

“In the age of AI, a U.S.-China duopoly is not just inevitable, it has already arrived,” said Kai-Fu Lee, founder and CEO of Beijing-based technology investment firm Sinovation Ventures and a former top executive at Microsoft and Google. The United States should absolutely not follow China’s lead and disregard the privacy protections of our citizens. Instead, we must follow Europe’s lead and do significantly more to enhance them. But we also cannot blind ourselves to the critical importance of amassing big data sets for driving innovation, competitiveness, and national power in the future.

UNITED STATES – SEPTEMBER 24: Aerial view of the Pentagon building photographed on Sept. 24, 2017. (Photo By Bill Clark/CQ Roll Call)

In its 2017 unclassified budget, the Pentagon spent about $7.4 billion on AI, big data and cloud computing, a tiny fraction of America’s overall expenditure on AI. Clearly, winning the future will not be a government activity alone, but there is a big role government can and must play. Even though Google remains the most important AI company in the world, the U.S. still crucially lacks a coordinated national strategy on AI and emerging digital technologies. While the Trump administration has gutted the White House Office of Science and Technology Policy, proposed massive cuts to US science funding, and engaged in a sniping contest with American tech giants, the Chinese government has outlined a “military-civilian integration development strategy” to harness AI to enhance Chinese national power.

FBI Director Christopher Wray correctly pointed out that America has now entered a “whole of society” rivalry with China. If the United States thinks of our technology champions solely within our domestic national framework, we might spur some types of innovation at home while stifling other innovations that big American companies with large teams and big data sets may be better able to realize.

America will be more innovative the more we nurture a healthy ecosystem of big, AI driven companies while also empowering smaller startups and others using blockchain and other technologies to access large and disparate data pools. Because breaking up US technology giants without a sufficient analysis of both the national and international implications of this step could deal a body blow to American prosperity and global power in the 21st century, extreme caution is in order.

America’s largest technology companies cannot and should not be dragooned into participating in America’s growing geopolitical rivalry with China. Based on recent protests by Google employees against the company’s collaboration with the US Defense Department on analyzing military drone footage, perhaps they will not.

But it would be self-defeating for American policymakers to not at least partly consider America’s tech giants in the context of the important role they play in America’s national security. America definitely needs significantly stronger regulation to foster innovation and protect privacy and civil liberties but breaking up America’s tech giants without appreciating the broader role they are serving to strengthen our national competitiveness and security would be a tragic mistake.