Google appeals $1.7BN EU AdSense antitrust fine

Like clockwork, Google has filed a legal appeal against the €1.49 billion ($1.7BN) antitrust penalty the European Commission slapped on its search ad brokering business three months ago.

The Telegraph reported late yesterday that the appeal had been lodged with the General Court of the European Union in Luxembourg.

A Google spokesperson confirmed the appeal has been filed but declined to comment further.

Reached for comment, a Commission spokesperson told us: “The Commission will defend its decision in Court.”

The AdSense antitrust decision is the third fine for Google under the Commission’s current antitrust chief, Margrethe Vestager — who also issued a $5BN penalty last summer for anti-competitive behavior related to Android, following a $2.7BN fine in mid-2017 for Google Shopping antitrust violations.

Google is appealing both earlier penalties but has also changed how it operates Google Shopping and Android in Europe in the meantime, to avoid the risk of further punitive penalties.

In the case of AdSense, the Commission found that between 2006 and 2016 Google included restrictive clauses in its contracts with major websites that use its ad platform, clauses Vestager said could only be seen as intended to keep rivals out of the market.

Restrictions had included exclusivity provisions and premium ad placement requirements that gave Google’s ads priority and plum positioning on “the most visible and most profitable parts of the page”. Another illegal clause put controls on how partner websites could display rival search ads.

The restrictions were only removed by Google when the Commission issued its formal statement of objections in 2016 — signalling the start of serious scrutiny.

As well as going on to fine Google €1.49BN for AdSense antitrust breaches, the Commission’s enforcement decision requires that Google does not include any other restriction “with an equivalent effect” in its contracts, as well as stipulating that it must not reinstate the earlier abusive clauses.

Apple’s new ecosystem world order and the privacy economy

Apple’s splashy new product announcements at its annual Worldwide Developers Conference in San Jose also ushered in new rules of the road for its ecosystem partners that force hard turns for app makers around data ownership and control. These changes could fundamentally shift how consumers perceive and value control over the data they generate in using their devices, and that shift could change the landscape for how services are bought, consumed and sold.

A lot of privacy advocates have posited a future wherein we ascribe value to the data of individuals and potentially compensate people directly for its use. But others have also rightly pointed out that, in isolation, a single individual’s data is essentially valueless, since it’s only in aggregate that this data is worth anything to the companies that currently harvest it to inform their marketing and drive their product decisions.

There are many reasons why it seems unlikely that any of the companies for which user data is a primary source of revenue or a crucial aspect of their business model would shift to a direct compensation model – not least that it’s probably much cheaper, and definitely much more scalable, to build products that offer users value in exchange instead. But that doesn’t mean privacy won’t become a crucial lever in the information economy of the next wave of innovation and tech product development.

Perils of per datum pricing

As mentioned, the mechanics of directly selling your data to a company are problematic at best, and unworkable at worst.

One big issue is that there’s bound to be a scale limit on any paid subscription product. In a world where subscriptions are increasingly the preferred model for media companies, food and packaged-goods delivery, and even car-ownership alternatives, there’s clearly a cap on how much of their income consumers are willing to commit to these kinds of recurring costs.

How Kubernetes came to rule the world

Open source has become the de facto standard for building the software that underpins the complex infrastructure that runs everything from your favorite mobile apps to your company’s barely usable expense tool. Over the course of the last few years, a lot of new software has been deployed on top of Kubernetes, the tool for managing large server clusters running containers that Google open-sourced five years ago.

Today, Kubernetes is the fastest-growing open-source project, and earlier this month, the twice-yearly KubeCon+CloudNativeCon conference attracted almost 8,000 developers to sunny Barcelona, Spain, making the event the largest open-source conference in Europe yet.

To talk about how Kubernetes came to be, I sat down with Craig McLuckie, one of the co-founders of Kubernetes at Google (who then went on to found his own startup, Heptio, which he sold to VMware); Tim Hockin, another Googler who was an early member of the project and also worked on Google’s Borg team; and Gabe Monroy, who co-founded Deis, one of the first successful Kubernetes startups, and later sold it to Microsoft, where he is now the lead PM for Azure Container Compute (and often the public face of Microsoft’s efforts in this area).

Google’s cloud and the rise of containers

To set the stage a bit, it’s worth remembering where Google Cloud and container management were five years ago.

Alphabet, Apple, Amazon and Facebook are in the crosshairs of the FTC and DOJ

A new deal between the Department of Justice and the Federal Trade Commission will see U.S. regulators divide and conquer as they expand their oversight of Apple, Alphabet, Amazon and Facebook, according to multiple reports.

New details have emerged of an agreement between the two U.S. regulators that would see the Federal Trade Commission take the reins in antitrust investigations of Amazon and Facebook, while the Department of Justice will oversee investigations of Alphabet’s Google business and Apple.

Apple’s stock tumbled on news of the Justice Department’s role in overseeing the company, which was first reported by Reuters even as the company was in the middle of its developer conference celebrating all things Apple.

Google has been here before. Eight years ago, the Federal Trade Commission began an investigation into Google for antitrust violations, ultimately determining that the company had not violated any antitrust or anti-competitive statutes in its display of search results.

On Friday, The Washington Post reported that the Justice Department was initiating a federal antitrust investigation into Google’s business practices, according to multiple sources.

That report was followed by additional reporting from The Wall Street Journal indicating that Facebook would be investigated by the Federal Trade Commission.

The last time technology companies faced this kind of scrutiny was Google’s antitrust investigation, or the now twenty-one-year-old lawsuit brought by the Justice Department and multiple states against Microsoft.

But times have changed since Google had its hearing before a much friendlier audience of regulators under President Barack Obama.

These days, Republican and Democratic lawmakers are both making the case that big technology companies hold too much power in American political and economic life.

Issues around personal privacy, economic consolidation, misinformation and free speech are on the minds of both Republican and Democratic lawmakers. Candidates vying for the Democratic nomination in next year’s presidential election have made investigations into the breakup of big technology companies central components of their policy platforms.

Meanwhile, Republican lawmakers and agencies began stepping up their rhetoric and planning for how to oversee these companies beginning last September, when the Justice Department brought a group of the nation’s top prosecutors together to discuss technology companies’ growing power.

News of the increasing government activity sent technology stocks plummeting. Amazon shares were down $96 per-share to $1,680.05 — an over 5% drop on the day. Shares of Alphabet tumbled to $1031.53, a $74.76 decline or 6.76%. Declines at Facebook and Apple were more muted, with Apple falling $2.97, or 1.7%, to $172.32 and Facebook sliding $14.11 (or 7.95%) to $163.36.

In Senate confirmation hearings in January, the new Attorney General William Barr noted that technology companies would face more time under the regulatory microscope during his tenure, according to The Wall Street Journal.

“I don’t think big is necessarily bad, but I think a lot of people wonder how such huge behemoths that now exist in Silicon Valley have taken shape under the nose of the antitrust enforcers,” Barr said. “You can win that place in the marketplace without violating the antitrust laws, but I want to find out more about that dynamic.”

Diving deep into Africa’s blossoming tech scene

Jumia may be the first startup you’ve heard of from Africa. But the e-commerce venture that recently listed on the NYSE is definitely not the first or last word in African tech.

The continent has an expansive digital innovation scene, the components of which are intersecting rapidly across Africa’s 54 countries and 1.2 billion people.

When measured by monetary value, Africa’s tech ecosystem is tiny by Shenzhen or Silicon Valley standards.

But when you look at volumes and year over year expansion in VC, startup formation, and tech hubs, it’s one of the fastest growing tech markets in the world. In 2017, the continent also saw the largest global increase in internet users—20 percent.

If you’re a VC or founder in London, Bangalore, or San Francisco, you’ll likely interact with some part of Africa’s tech landscape for the first time, or more often, in the near future.

That’s why TechCrunch put together this Extra Crunch deep-dive on Africa’s technology sector.

Tech Hubs

A foundation for African tech is the continent’s 442 active hubs, accelerators, and incubators (as tallied by GSMA). These spaces have become focal points for startup formation, digital skills building, events, and IT activity on the continent.

Prominent tech hubs in Africa include CcHub in Nigeria, Pan-African incubator MEST, and Kenya’s iHub, with over 200 resident members. More of these organizations are receiving funds from DFIs, such as the World Bank, and aid agencies, including France’s $76 million African tech fund.

Blue-chip companies such as Google and Microsoft are also providing money and support. In 2018 Facebook opened its own NG_Hub in Lagos with partner CcHub, to foster startups using AI and machine learning.

Huawei bars staff from having technical meetings with US contacts

Reeling from the ongoing U.S.-China trade war, Chinese technology giant Huawei has found itself in yet another dilemma: how to pursue internal communications with its own U.S. employees. For now, the company has ordered its Chinese employees to halt technical meetings with their U.S. contacts, and has sent home the American workers deployed in research and development functions at its Shenzhen headquarters.

Dang Wenshuan, Huawei’s chief strategy architect, told the Financial Times that the company has also limited general communications between its Chinese and U.S. workers. The move comes as the Chinese technology giant scrambles to comply with murky export-control laws while its weeks-long standoff with the U.S. government shows no sign of resolution in the immediate future.

The Chinese giant is also controlling the subjects of interactions that workers on its campus have with overseas visitors. The conversations cannot touch topics related to technology, the FT report said. Dang said the company was just trying to ensure it was on the right side of the law.

It remains unclear exactly how export controls could mandate disruption of internal communications within an organization. Huawei could be using this tack as a bargaining chip, showing the U.S. that its own citizens are being hurt by its policies. A Huawei spokesperson declined to comment on queries sent by TechCrunch.

Earlier this month, Huawei and 68 affiliates were put on an “entity list” by the U.S. Commerce Department over national security concerns, forcing American companies to obtain approval from the government before conducting any business with the Chinese giant. In the aftermath, a range of companies, including chipmakers, Google and Microsoft, have made significant changes to their business agreements with Huawei.

In recent weeks, several Huawei executives have spoken out about the significance of the U.S. government order. In the meantime, the company has also explored ways to fight back against the order. Earlier this week, Huawei filed a legal motion to challenge the U.S. ban on its equipment, calling it “unconstitutional.”

At stake is the future of one of the largest suppliers of smartphones and networking equipment. A significant portion of the company’s business comes from outside of China. For smartphones, one of its core businesses, the company says it is already working on an operating system that does not rely on technologies sourced from U.S. companies. But it has yet to provide any evidence of how — and if — that operating system would function.

The U.S. government earlier this month offered some relief to Huawei by granting a temporary general export license for 90 days, which allows companies such as Google to continue to provide critical support to the Chinese company for three months.

Google Pay’s app adds boarding passes, tickets, p2p payments and more

Google Pay got a big upgrade at Google I/O this week. At a breakout session, Google announced a series of changes to its payments platform, recently rebranded from Android Pay, including support for peer-to-peer payments in the main Google Pay app; online payments support in all browsers; the ability to see all payments in a single place, instead of just those in-store; and support for tickets and boarding passes in Google Pay’s APIs, among several other things.

Some of Google Pay’s expansions were previously announced, like its planned support for more browsers and devices, for example.

However, the company detailed a host of other features at I/O that are now rolling out across the Google Pay platform.

One notable addition is support for peer-to-peer payments, which is being added to the Google Pay app in the U.S. and the U.K.

And that transaction history, along with users’ other payments, will all be consolidated into one place.

“In an upcoming update of the Google Pay app, we’re going to allow you to manage all the payment methods in your Google account – not just the payment methods that you used to pay in-store,” said Gerardo Capiel, Product Management lead at Google Pay, during the session at I/O. “And even better, we’re going to provide you with a holistic view of all your transactions – whether they be on Google apps and services, such as Play and YouTube, whether they be with third-party merchants, such as Walgreens and Uber, or whether they’re transactions you’ve made to friends and families via our peer-to-peer service,” he said.

The company also said it would allow users to send and request money, manage payment info linked to their Google accounts, and see their transaction history on the web with the Google Pay iOS app, too.

And because I/O is a developer conference, many of the new additions came in the form of new and updated APIs.

For starters, Google launched a new API for incorporating Google Pay into third-party apps.

“Via our APIs, we’re going to enable these ready-to-pay users [who already have payment information stored with Google Pay] to also checkout quickly and easily in your own apps and websites,” Capiel said.

The benefit to those developers who add Google Pay support is an increase in conversion rates and faster monetization, he noted.
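To make that concrete, here’s a minimal sketch of the kind of ready-to-pay check Capiel describes, using the standard Google Pay client from Google Play Services on Android. The card-network configuration below is a generic placeholder, not Google’s published sample code.

```kotlin
import android.app.Activity
import com.google.android.gms.wallet.IsReadyToPayRequest
import com.google.android.gms.wallet.PaymentsClient
import com.google.android.gms.wallet.Wallet
import com.google.android.gms.wallet.WalletConstants

// Build a Google Pay client pointed at the sandbox environment.
fun createPaymentsClient(activity: Activity): PaymentsClient =
    Wallet.getPaymentsClient(
        activity,
        Wallet.WalletOptions.Builder()
            .setEnvironment(WalletConstants.ENVIRONMENT_TEST)
            .build()
    )

// Ask whether the user already has a usable payment method on file with Google.
fun checkReadyToPay(activity: Activity) {
    val request = IsReadyToPayRequest.fromJson(
        """
        {
          "apiVersion": 2,
          "apiVersionMinor": 0,
          "allowedPaymentMethods": [{
            "type": "CARD",
            "parameters": {
              "allowedAuthMethods": ["PAN_ONLY", "CRYPTOGRAM_3DS"],
              "allowedCardNetworks": ["VISA", "MASTERCARD"]
            }
          }]
        }
        """.trimIndent()
    )
    createPaymentsClient(activity).isReadyToPay(request)
        .addOnCompleteListener { task ->
            if (task.isSuccessful && task.result == true) {
                // The user is "ready to pay": show the Google Pay button
                // and offer the fast checkout path instead of a card form.
            }
        }
}
```

If the check succeeds, the app shows a Google Pay button and skips manual card entry; that shortcut is where the conversion lift Capiel mentions comes from.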

Plus, Google added support for tickets and boarding passes to the Google Pay APIs, where they joined the existing support for offers and loyalty cards.

This allows companies such as Urban Airship or DotDashPay to help business clients distribute and update their passes and tickets to Google Pay users.
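To give a feel for what distributing a pass involves, here is an illustrative sketch of a boarding-pass payload. The field names are simplified placeholders, not the exact Google Pay passes schema; in the real flow the business (or a partner like Urban Airship or DotDashPay acting on its behalf) embeds this data in a signed JWT that Google Pay uses to save, and later update, the pass.

```kotlin
import org.json.JSONObject

// Illustrative only: a simplified stand-in for a boarding-pass object.
// The production schema and signing requirements live in Google's passes docs.
fun buildBoardingPassPayload(): JSONObject =
    JSONObject()
        .put("passType", "boardingPass")          // hypothetical field
        .put("passengerName", "Jane Doe")         // hypothetical field
        .put("flightNumber", "SQ321")             // hypothetical field
        .put("gate", "B42")                       // updatable after issuance,
        .put("boardingTime", "2019-06-10T09:15Z") // e.g. for gate changes

// A server would wrap this payload in a JWT signed with the issuer's key;
// Google Pay then keeps the saved pass current on the user's device.
```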

“It shows an even stronger commitment on Google Pay’s part to make the digital wallet a priority,” Sean Arietta, founder and CEO of DotDashPay, told TechCrunch, following the presentation. “It also reinforces their focus on partners like DotDashPay to help build connections between consumers and brands. The fact that they are specifically highlighting a complete experience that starts with payments and ends with an NFC tap-to-identify, is really powerful. It makes the Google Pay story now complete,” he added.

Urban Airship was also touting the changes earlier this week, via a press release.

“We help businesses reinvent the customer experience by delivering the right information at the right time on any digital channel, and mobile wallets fill an increasingly critical role in that vision,” Brett Caine, CEO and president of Urban Airship, said in a statement. “Google Pay’s new support for tickets and boarding passes means customers will always have up-to-date information when they need it most – on the go.”

Some of Google’s early access partners on ticketing include Singapore Airlines, Eventbrite, Southwest, and FortressGB, which handles major soccer league tickets in the U.K. and elsewhere.

In terms of transit-related announcements, Google added a few more partners that will soon adopt Google Pay integration, including Vancouver, Canada, and the U.K. bus system, following recent launches in Las Vegas and Portland.

The company also offered an update on Google Pay’s traction, noting the Google Pay app just passed 100 million downloads in the Google Play store, where it’s available to users in 18 markets worldwide.

Soon, Google said, it will bring many of these core features, and the Google Pay app itself, to billions of Google users worldwide.

Duplex shows Google failing at ethical and creative AI design

Google CEO Sundar Pichai milked the woos from a clappy, home-turf developer crowd at its I/O conference in Mountain View this week with a demo of an in-the-works voice assistant feature that will enable the AI to make telephone calls on behalf of its human owner.

The so-called ‘Duplex’ feature of the Google Assistant was shown calling a hair salon to book a woman’s hair cut, and ringing a restaurant to try to book a table — only to be told it did not accept bookings for less than five people.

At which point the AI changed tack and asked about wait times, earning its owner and controller, Google, the reassuring intel that there wouldn’t be a long wait at the selected time. Job done.

The voice system deployed human-sounding vocal cues, such as ‘ums’ and ‘ahs’ — to make the “conversational experience more comfortable”, as Google couches it in a blog about its intentions for the tech.

The voices Google used for the AI in the demos were not synthesized robotic tones but distinctly human-sounding, in both the female and male flavors it showcased.

Indeed, the AI pantomime was apparently realistic enough to convince some of the genuine humans on the other end of the line that they were speaking to people.

At one point the bot’s ‘mm-hmm’ response even drew appreciative laughs from a techie audience that clearly felt in on the ‘joke’.

But while the home crowd cheered enthusiastically at how capable Google had seemingly made its prototype robot caller — with Pichai going on to sketch a grand vision of the AI saving people and businesses time — the episode is worryingly suggestive of a company that views ethics as an after-the-fact consideration.

One it does not allow to trouble the trajectory of its engineering ingenuity.

A consideration which only seems to get a look in years into the AI dev process, at the cusp of a real-world rollout — which Pichai said would be coming shortly.

Deception by design

“Google’s experiments do appear to have been designed to deceive,” agreed Dr Thomas King, a researcher at the Oxford Internet Institute’s Digital Ethics Lab, discussing the Duplex demo. “Because their main hypothesis was ‘can you distinguish this from a real person?’. In this case it’s unclear why their hypothesis was about deception and not the user experience… You don’t necessarily need to deceive someone to give them a better user experience by sounding naturally. And if they had instead tested the hypothesis ‘is this technology better than preceding versions or just as good as a human caller’ they would not have had to deceive people in the experiment.

“As for whether the technology itself is deceptive, I can’t really say what their intention is — but… even if they don’t intend it to deceive you can say they’ve been negligent in not making sure it doesn’t deceive… So I can’t say it’s definitely deceptive, but there should be some kind of mechanism there to let people know what it is they are speaking to.”

“I’m at a university and if you’re going to do something which involves deception you have to really demonstrate there’s a scientific value in doing this,” he added, agreeing that, as a general principle, humans should always be able to know that an AI they’re interacting with is not a person.

Because who — or what — you’re interacting with “shapes how we interact”, as he put it. “And if you start blurring the lines… then this can sow mistrust into all kinds of interactions — where we would become more suspicious as well as needlessly replacing people with meaningless agents.”

No such ethical conversations troubled the I/O stage, however.

Yet Pichai said Google had been working on the Duplex technology for “many years”, and went so far as to claim the AI can “understand the nuances of conversation” — albeit still evidently in very narrow scenarios, such as booking an appointment or reserving a table or asking a business for its opening hours on a specific date.

“It brings together all our investments over the years in natural language understanding, deep learning, text to speech,” he said.

What was yawningly absent from that list, and seemingly also lacking from the design of the tricksy Duplex experiment, was any sense that Google has a deep and nuanced appreciation of the ethical concerns at play around AI technologies that are powerful and capable enough to pass themselves off as human — thereby playing lots of real people in the process.

The Duplex demos were pre-recorded, rather than live phone calls, but Pichai described the calls as “real” — suggesting Google representatives had not in fact called the businesses ahead of time to warn them its robots might be calling in.

“We have many of these examples where the calls don’t quite go as expected, but our assistant understands the context, the nuance… and handled the interaction gracefully,” he added after airing the restaurant unable-to-book example.

So Google appears to have trained Duplex to be robustly deceptive — i.e. to be able to reroute around derailed conversational expectations and still pass itself off as human — a feature Pichai lauded as ‘graceful’.

And even if the AI’s performance was patchier in the wild than Google’s demo suggested, that’s clearly the CEO’s goal for the tech.

While trickster AIs might bring to mind the iconic Turing Test — where chatbot developers compete to develop conversational software capable of convincing human judges it’s not artificial — it should not.

Because the application of the Duplex technology does not sit within the context of a high profile and well understood competition. Nor was there a set of rules that everyone was shown and agreed to beforehand (at least so far as we know — if there were any rules Google wasn’t publicizing them). Rather it seems to have unleashed the AI onto unsuspecting business staff who were just going about their day jobs. Can you see the ethical disconnect?

“The Turing Test has come to be a bellwether of testing whether your AI software is good or not, based on whether you can tell it apart from a human being,” is King’s suggestion on why Google might have chosen a similar trick as an experimental showcase for Duplex.

“It’s very easy to say look how great our software is, people cannot tell it apart from a real human being — and perhaps that’s a much stronger selling point than if you say 90% of users preferred this software to the previous software,” he posits. “Facebook does A/B testing but that’s probably less exciting — it’s not going to wow anyone to say well consumers prefer this slightly deeper shade of blue to a lighter shade of blue.”

Had Duplex been deployed within Turing Test conditions, King also makes the point that it’s rather less likely it would have taken in so many people — because, well, those slightly jarringly timed ums and ahs would soon have been spotted, uncanny valley style.

Ergo, Google’s PR-flavored ‘AI test’ for Duplex is also rigged in its favor — to further supercharge a one-way promotional marketing message around artificial intelligence. So, in other words, say hello to yet another layer of fakery.

How could Google introduce Duplex in a way that would be ethical? King reckons it would need to state up front that it’s a robot and/or use an appropriately synthetic voice so it’s immediately clear to anyone picking up the phone the caller is not human.

“If you were to use a robotic voice there would also be less of a risk that all of your voices that you’re synthesizing only represent a small minority of the population speaking in ‘BBC English’ and so, perhaps in a sense, using a robotic voice would even be less biased as well,” he adds.

And of course, not being up front that Duplex is artificial embeds all sorts of other knock-on risks, as King explained.

“If it’s not obvious that it’s a robot voice there’s a risk that people come to expect that most of these phone calls are not genuine. Now experiments have shown that many people do interact with AI software that is conversational just as they would another person but at the same time there is also evidence showing that some people do the exact opposite — and they become a lot ruder. Sometimes even abusive towards conversational software. So if you’re constantly interacting with these bots you’re not going to be as polite, maybe, as you normally would, and that could potentially have effects for when you get a genuine caller that you do not know is real or not. Or even if you know they’re real perhaps the way you interact with people has changed a bit.”

Safe to say, as autonomous systems get more powerful and capable of performing tasks that we would normally expect a human to be doing, the ethical considerations around those systems scale as exponentially large as the potential applications. We’re really just getting started.

But if the world’s biggest and most powerful AI developers believe it’s totally fine to put ethics on the backburner then risks are going to spiral up and out and things could go very badly indeed.

We’ve seen, for example, how microtargeted advertising platforms have been hijacked at scale by would-be election fiddlers. But the overarching risk where AI and automation technologies are concerned is that humans become second class citizens vs the tools that are being claimed to be here to help us.

Pichai said the first — and still, as he put it, experimental — use of Duplex will be to supplement Google’s search services by filling in information about businesses’ opening times during periods when hours might inconveniently vary, such as public holidays.

Though for a company on a general mission to ‘organize the world’s information and make it universally accessible and useful’, what’s to stop Google from — down the line — deploying a vast phalanx of phone bots to ring and ask humans (and their associated businesses and institutions) for all sorts of expertise which the company can then liberally extract and inject into its multitude of connected services — monetizing the freebie human-augmented intel via our extra-engaged attention and the ads it serves alongside?

During the course of writing this article we reached out to Google’s press line several times to ask to discuss the ethics of Duplex with a relevant company spokesperson. But ironically — or perhaps fittingly enough — our hand-typed emails received only automated responses.

Pichai did emphasize that the technology is still in development, and said Google wants to “work hard to get this right, get the user experience and the expectation right for both businesses and users”.

But that’s still ethics as a tacked on afterthought — not where it should be: Locked in place as the keystone of AI system design.

And this at a time when platform-fueled AI problems, such as algorithmically fenced fake news, have snowballed into huge and ugly global scandals with very far reaching societal implications indeed — be it election interference or ethnic violence.

You really have to wonder what it would take to shake the ‘first break it, later fix it’ ethos of some of the tech industry’s major players…

Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding “ummm” and “aaah” to deceive the human on the other end with the room cheering it… horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing.

— zeynep tufekci (@zeynep) May 9, 2018

Ethical guidance relating to what Google is doing here with the Duplex AI is actually pretty clear if you bother to read it — to the point where even politicians are agreed on foundational basics, such as that AI needs to operate on “principles of intelligibility and fairness”, to borrow phrasing from just one of several political reports that have been published on the topic in recent years.

In short, deception is not cool. Not in humans. And absolutely not in the AIs that are supposed to be helping us.

Transparency as AI standard

The IEEE technical professional association put out a first draft of a framework to guide ethically designed AI systems at the back end of 2016 — which included general principles such as the need to ensure AI respects human rights, operates transparently and that automated decisions are accountable. 

In the same year the UK’s BSI standards body developed a specific standard — BS 8611, a guide to the ethical design and application of robots and robotic systems — which explicitly names identity deception (intentional or unintentional) as a societal risk, and warns that such an approach will eventually erode trust in the technology.

“Avoid deception due to the behaviour and/or appearance of the robot and ensure transparency of robotic nature,” the BSI’s standard advises.

It also warns against anthropomorphization, due to the associated risk of misinterpretation. So Duplex’s ums and ahs don’t just suck because they’re fake; they’re misleading, and therefore deceptive, and they carry the knock-on risk of undermining people’s trust not only in your service but, more widely still, in other people generally.

“Avoid unnecessary anthropomorphization,” is the standard’s general guidance, with the further steer that the technique be reserved “only for well-defined, limited and socially-accepted purposes”. (Tricking workers into remotely conversing with robots probably wasn’t what they were thinking of.)

The standard also urges “clarification of intent to simulate human or not, or intended or expected behaviour”. So, yet again, don’t try and pass your bot off as human; you need to make it really clear it’s a robot.

For Duplex, the transparency that Pichai said Google now intends to think about, at this late stage in the AI development process, would have been trivially easy to achieve: it could simply have programmed the assistant to say up front: ‘Hi, I’m a robot calling on behalf of Google — are you happy to talk to me?’

Instead, Google chose to prioritize a demo ‘wow’ factor — of showing Duplex pulling the wool over busy and trusting humans’ eyes — and by doing so showed itself tone-deaf on the topic of ethical AI design.

Not a good look for Google. Nor indeed a good outlook for the rest of us who are subject to the algorithmic whims of tech giants as they flick the control switches on their society-sized platforms.

“As the development of AI systems grows and more research is carried out, it is important that ethical hazards associated with their use are highlighted and considered as part of the design,” Dan Palmer, head of manufacturing at BSI, told us. “BS 8611 was developed… alongside scientists, academics, ethicists, philosophers and users. It explains that any autonomous system or robot should be accountable, truthful and unprejudiced.

“The standard raises a number of potential ethical hazards that are relevant to Google Duplex; one of these is the risk of AI machines becoming sexist or racist due to a biased data feed. This surfaced prominently when Twitter users influenced Microsoft’s AI chatbot, Tay, to spew out offensive messages.

“Another contentious subject is whether forming an emotional bond with a robot is desirable, especially if the voice assistant interacts with the elderly or children. Other guidelines on new hazards that should be considered include: robot deception, robot addiction and the potential for a learning system to exceed its remit.

“Ultimately, it must always be transparent who is responsible for the behavior of any voice assistant or robot, even if it behaves autonomously.”

Yet despite all the thoughtful ethical guidance and research that’s already been produced, and is out there for the reading, here we are again being shown the same tired tech industry playbook applauding engineering capabilities in a shiny bubble, stripped of human context and societal consideration, and dangled in front of an uncritical audience to see how loud they’ll cheer.

Leaving important questions — over the ethics of Google’s AI experiments and also, more broadly, over the mainstream vision of AI assistance it’s so keenly trying to sell us — to hang and hang.

Questions like how much genuine utility there might be for the sorts of AI applications it’s telling us we’ll all want to use, even as it prepares to push these apps on us, because it can — as a consequence of its great platform power and reach.

A core ‘uncanny valley-ish’ paradox may explain Google’s choice of deception for its Duplex demo: Humans don’t necessarily like speaking to machines. Indeed, oftentimes they prefer to speak to other humans. It’s just more meaningful to have your existence registered by a fellow pulse-carrier. So if an AI reveals itself to be a robot the human who picked up the phone might well just put it straight back down again.

“Going back to the deception, it’s fine if it’s replacing meaningless interactions but not if it’s intending to replace meaningful interactions,” King told us. “So if it’s clear that it’s synthetic and you can’t necessarily use it in a context where people really want a human to do that job. I think that’s the right approach to take.

“It matters not just that your hairdresser appears to be listening to you but that they are actually listening to you and that they are mirroring some of your emotions. And to replace that kind of work with something synthetic — I don’t think it makes much sense.

“But at the same time if you reveal it’s synthetic it’s not likely to replace that kind of work.”

So really Google’s Duplex sleight of hand may be trying to conceal the fact AIs won’t be able to replace as many human tasks as technologists like to think they will. Not unless lots of currently meaningful interactions are rendered meaningless. Which would be a massive human cost that societies would have to — at very least — debate long and hard.

Trying to head off such a debate by pretending there’s nothing ethical to see here is, hopefully, not Google’s designed intention.

King also makes the point that the Duplex system is (at least for now) computationally costly. “Which means that Google cannot and should not just release this as software that anyone can run on their home computers.

“Which means they can also control how it is used, and in what contexts — and they can also guarantee it will only be used with certain safeguards built in. So I think the experiments are maybe not the best of signs but the real test will be how they release it — and will they build the safeguards that people demand into the software,” he adds.

As well as a lack of visible safeguards in the Duplex demo, there’s also — I would argue — a curious lack of imagination on display.

Had Google been bold enough to reveal its robot interlocutor it might have thought more about how it could have designed that experience to be both clearly not human but also fun or even funny. Think of how much life can be injected into animated cartoon characters, for example, which are very clearly not human yet are hugely popular because people find them entertaining and feel they come alive in their own way.

It really makes you wonder whether, at some foundational level, Google lacks trust in both what AI technology can do and in its own creative abilities to breathe new life into these emergent synthetic experiences.

Google used improv rules to deal with a farting Assistant

Listen, it’s late on day two of Google I/O and we’re all getting a bit punchy here. During a late-afternoon panel on design and Assistant, Google Principal Designer Ryan Germick explained that the company used improv skills to figure out how to build out the AI’s personality — and answer some of life’s more difficult questions.

One question Assistant gets “more often than you’d expect”: “did you fart?” For one thing, farts are always funny. For another, what’s the point of having a smart assistant if you can’t blame it for your various bodily odors?

Germick explained that the company went through various iterations of answers to the fart question, starting with something along the lines of “of course I didn’t, I don’t have a body.” That, it turns out, is not a particularly satisfying answer. Instead, the company embraced the “artful dodge,” using what anyone who’s taken an introductory improv class will tell you is known as “yes and-ing.”

So go ahead and ask your Assistant if it farted and you’ll probably hear something along the lines of “you can blame me, if you want,” along with somewhere in the neighborhood of 25 additional answers.

And always remember Isaac Asimov’s unspoken fourth rule of robotics: he who smelt it, dealt it. 

Google previews what’s next for Android Auto

Over the course of the last few days, Google teased a few updates to Android Auto, its platform for bringing its mobile operating system to the car. At its I/O developer conference, the company showed off what the next version of Android Auto will look like and how developers can start preparing their applications for it.

Earlier this week, Google announced that Volvo would build Android Auto directly into its head units, making it one of the first car manufacturers to do so. Typically, Android Auto essentially mirrors your phone, with a special on-screen interface designed for the car. By building Android Auto right into the car, you won’t need a phone; instead, it’ll be a stand-alone experience, and thanks to that, the car manufacturer can also offer a number of custom elements or maybe even support multiple screens.

As the Android Auto team noted during its I/O session, in-car screens are starting to get bigger and are popping up in different sizes and aspect ratios. At the same time, input methods are also evolving, and while Google didn’t say so today, it appears the team is looking at how it can support features like a touchpad in the car.

Unsurprisingly, the team is now looking at how it can evolve the Android Auto UI to better support these different screens. As the team showed in today’s session, that could mean using a wide-screen display in the car to show both the Google Maps interface and a media player side-by-side.

Developers won’t have to do anything to support these new screen sizes and input mechanisms, since the Android Auto platform handles that for them.
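That hands-off behavior falls out of the architecture. Android Auto media apps don’t draw car UI at all; they expose a browse tree through a media browser service, and the platform renders it to fit whatever screen and input hardware the car has. A minimal sketch using androidx’s MediaBrowserServiceCompat (the class name and IDs here are illustrative):

```kotlin
import android.os.Bundle
import android.support.v4.media.MediaBrowserCompat
import androidx.media.MediaBrowserServiceCompat

class CarMediaService : MediaBrowserServiceCompat() {

    override fun onGetRoot(
        clientPackageName: String,
        clientUid: Int,
        rootHints: Bundle?
    ): BrowserRoot? {
        // Hand Android Auto the root of the browse tree; the platform
        // takes over layout from here, whatever the screen size.
        return BrowserRoot("root", null)
    }

    override fun onLoadChildren(
        parentId: String,
        result: Result<MutableList<MediaBrowserCompat.MediaItem>>
    ) {
        // Return the browsable/playable items under this node; rendering,
        // touch targets and controller focus are the platform's job.
        result.sendResult(mutableListOf())
    }
}
```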

The new concept design for a built-in Android Auto experience the company showed today looks quite a bit like its integration with Volvo. It relies on a large vertical screen and a user interface that is deeply integrated with the rest of the car’s functions.

“The goal of this concept is to adapt Android Auto’s design to a vehicle-specific theme,” Google’s Lauren Wunderlich said in today’s session. “This includes additional ergonomic details and nods to the vehicle’s interior design.”

As part of today’s preview, Google showcased a few new features, including an improved search experience, which developers will have to support in their apps. This new experience will allow developers to group results by category, say playlists and albums in a music app (and interestingly, Google mostly highlighted Spotify as its music-app example in today’s session, not its own Google Play Music service).
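As a rough sketch of how grouped results might be served, here is a searchable variant of the same kind of service. The onSearch hook is a real MediaBrowserServiceCompat override; the grouping-hint extras key, media ID and titles are assumptions for illustration, not confirmed constants:

```kotlin
import android.os.Bundle
import android.support.v4.media.MediaBrowserCompat
import android.support.v4.media.MediaDescriptionCompat
import androidx.media.MediaBrowserServiceCompat

class SearchableCarMediaService : MediaBrowserServiceCompat() {

    override fun onGetRoot(clientPackageName: String, clientUid: Int, rootHints: Bundle?) =
        BrowserRoot("root", null)

    override fun onLoadChildren(
        parentId: String,
        result: Result<MutableList<MediaBrowserCompat.MediaItem>>
    ) = result.sendResult(mutableListOf())

    // Called when the user searches from the car; results can carry hints
    // so the platform groups them (e.g. under "Albums" vs. "Playlists").
    override fun onSearch(
        query: String,
        extras: Bundle?,
        result: Result<MutableList<MediaBrowserCompat.MediaItem>>
    ) {
        val groupHint = Bundle().apply {
            // Hypothetical grouping key, modeled on Android Auto's
            // content-style hints; the real constant may differ.
            putString("android.media.browse.CONTENT_STYLE_GROUP_TITLE_HINT", "Albums")
        }
        val album = MediaBrowserCompat.MediaItem(
            MediaDescriptionCompat.Builder()
                .setMediaId("album_1") // hypothetical ID
                .setTitle("Album matching \"$query\"")
                .setExtras(groupHint)
                .build(),
            MediaBrowserCompat.MediaItem.FLAG_BROWSABLE
        )
        result.sendResult(mutableListOf(album))
    }
}
```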

Google also promises a better messaging experience with support for the new RCS standard.

Google is also introducing a couple of new user-interface elements in the media player, like an explicit-content warning and an icon that lets you see when a playlist has been downloaded to your device.

But Google also briefly showed a slide with a few more items on its roadmap for Android P in the car. Those include support for things like integrations between driver-assistance systems and Maps data, as well as ways to suspend Android Auto to RAM for, I assume, the built-in version.

Google hasn’t shared any exact numbers that would allow us to quantify the popularity of Android Auto, but the team did say that “thousands of apps” now support the platform, a number that’s up 200 percent since last year. As more car manufacturers support it, the number of overall users has also increased and the team today reported over 300 percent user growth in the last year.