Apple’s new Health feature tracks unsafe headphone volumes

According to recent numbers from the World Health Organization (WHO), roughly half of people aged 12-35 are at risk for hearing loss. That’s due in no small part to the explosive growth in “personal listening devices” like smartphones. Young people are cranking up the volume on their headphones and could be doing irreparable damage to their hearing in the process.

One of the health features Apple didn’t get around to discussing onstage yesterday tracks headphone volume levels over time. The feature, which is available as part of the Health app, can track listening levels on calibrated and MFi headphones (including AirPods, Beats and the like). That information is logged as either “OK” or “Loud,” based on guidance from the WHO.

The feature joins the new Noise app, which uses the Apple Watch’s built-in microphones to measure ambient noise. That app will send notifications if sound levels reach 90 dB — the level at which sustained exposure can lead to hearing loss.
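To make the logic concrete, here is a minimal sketch of the kind of threshold-based labeling described above. The 90 dB figure comes from the article; the function names, sample format and the exact “OK”/“Loud” cutoff are illustrative assumptions, not Apple’s actual implementation.

```python
# Minimal sketch of threshold-based exposure labeling. The 90 dB notification
# threshold comes from the article; the "OK"/"Loud" split at the same level
# and the data structures are assumptions for illustration only.

WHO_LOUD_THRESHOLD_DB = 90  # sustained exposure at or above this can damage hearing

def classify_listening_level(level_db: float) -> str:
    """Label a headphone volume sample, as the Health app reportedly does."""
    return "Loud" if level_db >= WHO_LOUD_THRESHOLD_DB else "OK"

def should_notify(ambient_samples_db: list[float]) -> bool:
    """Mimic the Noise app: flag sustained ambient sound at 90 dB or above."""
    return all(sample >= WHO_LOUD_THRESHOLD_DB for sample in ambient_samples_db)

if __name__ == "__main__":
    print(classify_listening_level(72))   # OK
    print(classify_listening_level(95))   # Loud
    print(should_notify([91, 93, 92]))    # True -> send a notification
```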

The headphone health feature is less proactive — presumably because users have to opt into loud headphone volumes. Still, there’s something to be said for the ability to receive notifications when levels get loud, particularly over a sustained period. I know I’ve certainly been in situations where I’ve unknowingly cranked up the volume on my headphones at, say, the gym, where I’m using my own music to counteract whatever they’re pumping through the PA.

As this generation ages, the issue will likely only become more critical. But by the time many people discover the damage that prolonged loud listening has done, it could be too late.

Diving deep into Africa’s blossoming tech scene

Jumia may be the first startup you’ve heard of from Africa. But the e-commerce venture that recently listed on the NYSE is definitely not the first or last word in African tech.

The continent has an expansive digital innovation scene, the components of which are intersecting rapidly across Africa’s 54 countries and 1.2 billion people.

When measured in monetary terms, Africa’s tech ecosystem is tiny by Shenzhen or Silicon Valley standards.

But when you look at volumes and year-over-year expansion in VC, startup formation, and tech hubs, it’s one of the fastest-growing tech markets in the world. In 2017, the continent also saw the largest global increase in internet users: 20 percent.

If you’re a VC or founder in London, Bangalore, or San Francisco, you’ll likely interact with some part of Africa’s tech landscape in the near future, if you haven’t already.

That’s why TechCrunch put together this Extra Crunch deep dive on Africa’s technology sector.

Tech Hubs

A foundation for African tech is the continent’s 442 active hubs, accelerators, and incubators (as tallied by GSMA). These spaces have become focal points for startup formation, digital skills building, events, and IT activity on the continent.

Prominent tech hubs in Africa include CcHub in Nigeria, Pan-African incubator MEST, and Kenya’s iHub, which has over 200 resident members. More of these organizations are receiving funds from DFIs, such as the World Bank, and from aid agencies; France, for instance, has a $76 million African tech fund.

Blue-chip companies such as Google and Microsoft are also providing money and support. In 2018, Facebook opened its NG_Hub in Lagos with partner CcHub to foster startups using AI and machine learning.

I visited a teeth-straightening startup and found out I needed a root canal

Going to the dentist can be anxiety-inducing. Unfortunately, it was no different for me last week when I went to discuss Uniform Teeth’s recent $4 million seed funding round from Lerer Hippeau, Refactor Capital, Founders Fund and Slow Ventures.

Uniform Teeth is a clear teeth aligner startup that competes with the likes of Invisalign and SmileDirectClub. The startup takes a One Medical-like approach in that it provides real, licensed orthodontists to see you and treat your bite.

“For us, we’re really focused on transforming the orthodontic experience,” Uniform Teeth CEO Meghan Jewitt told me at the startup’s flagship dental office in San Francisco. “There are a lot of health care companies out there that are taking areas that aren’t very customer-centric.”

Jewitt, who spent a couple of years at One Medical as director of operations, pointed to One Medical, Oscar Insurance and 23andMe as examples of companies taking a very customer-centric approach.

“We are really interested in doing the same for the orthodontics space,” she said.

Ahead of the first visit, patients use the Uniform app to take photos of their teeth and their bite. During the initial visit, patients receive a panoramic scan and 3D imaging to confirm what type of work needs to be done.

Last week during my visit, Jewitt and Uniform Teeth co-founder Dr. Kjeld Aamodt showed me the technology Uniform uses for its patient evaluations.

In the GIF above, you can see I received a 3D panoramic X-ray. The process took about 10 seconds and it’s about the same exposure to X-rays as a flight from San Francisco to Los Angeles, Dr. Aamodt said.

“With that information, we’re able to see the health of your roots, your teeth, the bone, your jaw joints, check for anything that could get worse during treatment,” Dr. Aamodt said.

Below, you can see the 3D scan.

Next is looking in between the teeth. From there, the idea is to get a much more holistic view, Dr. Aamodt said. This is where things got interesting.

If you look at the bottom left of the photo, under my back bottom tooth, you can see a dark circle below the tooth. Dr. Aamodt gently pointed that out to me.

“That tells me there’s bacteria living inside of your jaw,” he explained. “A lot of people have this. It’s pretty common so don’t beat yourself up for it.”

This is when he told me I’d likely need to get a root canal to get rid of it. Mild panic ensued.

Dr. Aamodt went on to explain that, if I were a patient of his looking to get my teeth straightened, he would recommend that I first get the root canal before any teeth movement. That’s because, he explained, moving teeth at that point could potentially result in further infection.

“The concern about that is when we move a tooth with that, the infection will get worse and you could risk losing that tooth,” he told me.

Although I was freaking out internally, I continued with the process. Next up was the 3D scan, which results in something fancy called a 3D cone beam computed tomography image. This, Dr. Aamodt said, is what really sets Uniform Teeth apart in precision tooth movement.

This process takes the place of dental impressions, which are made by biting into a tray with gooey material. I didn’t feel like getting my bottom teeth scanned, but below is what the top looks like.

At this point Uniform Teeth would share its recommendations with the patient. My personal recommendation was to go see my dentist and, if I’m interested in straightening my teeth, come back once my roots are in a healthy enough state.

From there, I’d receive a custom treatment plan that combines the X-ray plus 3D scan to predict how my teeth will move. After receiving the clear aligners in a couple of weeks, I’d check in with Dr. Aamodt every week via the mobile app. If something were to come up, I could always set up an in-person appointment. Most people average about two to three visits in total, Jewitt said. All of that would add up to about $3,500.

The reason Uniform Teeth requires in-office visits is that 75 percent or more of cases require additional procedures. For example, some people need small, tooth-colored attachments placed onto their teeth. Those attachments can help move teeth in a more advanced way, Dr. Aamodt said.

“If you don’t have these, then you can tip some teeth but you can’t do all of the things to help improve the bite, to create a really lasting, beautiful, healthy smile,” he explained.

Uniform Teeth currently treats patients in San Francisco, but intends to open additional offices nationwide next year. As the company expands, the plan is to bring on board more full-time orthodontists.

“Right now, we’re an employment-based model and we’d really like to continue that because it allows us to control the experience and deliver a really high-quality service,” Jewitt said.

A lot of companies say they care about the customer when, in reality, they just care about making money. But I genuinely believe Uniform Teeth does care. After I left with my tail between my legs that day, I called my dentist to set up an appointment. The following day, my dentist confirmed what Dr. Aamodt found and proceeded to set me up to get a root canal. A few days later, Dr. Aamodt checked in with me via the mobile app to ask me how I was doing and to make sure I was getting it treated. I was pleased to let him know, as Olivia Pope likes to say, “It’s handled.”

BenevolentAI, which uses AI to develop drugs and energy solutions, nabs $115M at $2B valuation

In the ongoing race to build the best and smartest applications that tap into the advances of artificial intelligence, a startup out of London has raised a large round of funding to double down on solving persistent problems in areas like healthcare and energy. BenevolentAI announced today that it has raised $115 million to continue developing its core “AI brain,” as well as the different arms of the company that are using it to break new ground in drug development and more.

This venture round values the company at $2.1 billion post-money, its founder and executive chairman Ken Mulvaney confirmed to TechCrunch. Investors in this round include previous backer Woodford Investment Management, and while Mulvaney said the company was not disclosing the names of any other investors, he added it was a mix of family offices and some strategic backers, with a majority coming from the US, but would not specify any more. Notably, Benevolent.AI does not have any backing from more traditional VCs, which more generally have been doubling down on investments in AI startups. Founded in 2013, the company has now raised over $200 million to date.

The core of Benevolent.AI’s business is focused on what Mulvaney describes as a “brain” built by a team of scientists — some of whom are disclosed, and some of whom are not, for competitive reasons, Mulvaney said. There are 155 people working at the startup in all, with 300 projected by the end of this year. The brain has been created to ingest and compute billions of data points in specific areas, such as health and material science, to help scientists better determine combinations that might finally solve persistently difficult problems in fields like medicine.

The crux of the issue in a field like drug development, for example, is that even as scientists identify the many permutations and strains of, say, a particular kind of cancer, each of these strains can mutate. And that is before you consider that each mutation might behave completely differently depending on the person who develops it.

This is precisely the kind of issue that AI, with its massive computational power and ability to “learn” from previous computations, can help address. (And Benevolent.AI is not the only one taking this approach. In cancer specifically, others include Grail and Paige.AI.)

But even with the speed that AI brings to the table, it’s a very long, long game for Benevolent.AI. The division of the company focused on drugs, called Benevolent Bio, currently has two drugs in more advanced stages of development, Mulvaney said, although neither happens to be in the area of cancer. A Parkinson’s drug is currently in Phase 2B clinical trials, after years of work.

And an ALS medication currently in development — which Mulvaney says will aim to significantly extend the prospects of those who have been diagnosed with ALS — is about five years away from trials. It’s worth the effort to try, though: the best ALS medications on the market today add only about three months to a patient’s life expectancy.

Some of the long development period is down to the large regulatory framework a drug company has to go through. “But we benefit from that,” Mulvaney said, “because it means you can actually then offer something in the market.” (Blood tests a la Theranos are very different in terms of regulatory requirements, he said.)

In part because of that long cycle, and also because Benevolent.AI has spotted an adjacent opportunity, the company has more recently been extending applications of its “brain” to other areas that also tap into chemistry and biology, such as material science.

One area of particular interest, Mulvaney said, is seeing whether Benevolent can create materials that withstand extreme heat — to allow engines to work at higher rates without risk — as well as chemicals that could essentially create the next generation of efficient batteries, providing more power in smaller formats for longer periods.

“There has been little development beyond a lithium ion battery,” he noted, which may be fine for the Teslas of the world today. “But there is not enough lithium on this planet for us all to go electric, and there is not nearly enough energy density there unless you have thousands of batteries working together. We need other technology to provide more energy donation. That tech doesn’t exist yet because chemically it’s very difficult to do that.” And that spells opportunity for Benevolent.AI.

Other areas the startup hopes to move into over the coming months and years include agriculture, veterinary science, and other categories that sit alongside those Benevolent.AI is already tapping.

ReviveMed turns drug discovery into a big data problem and raises $1.5M to solve it

What if there’s a drug that already exists that could treat a disease with no known therapies, and we just haven’t made the connection? Finding that connection by exhaustively analyzing complex biochemical interactions within the body — with the help of machine learning, naturally — is the goal of ReviveMed, a new biotech startup out of MIT that just raised $1.5 million in seed funding.

Around the turn of the century, genomics was the big thing. Then, as the power to investigate complex biological processes improved, proteomics became the next frontier. We may have moved on again, this time to the yet more complex field of metabolomics, which is where ReviveMed comes in.

Leila Pirhaji, ReviveMed’s founder and CEO, began work on the topic during her time as a postgrad at MIT. The problem she and her colleagues saw was the immense complexity of interactions between proteins, which are encoded in DNA and RNA, and metabolites, a class of biomolecules with even greater variety. Hidden in these innumerable interactions somewhere are clues to how and why biological processes are going wrong, and perhaps how to address that.

“The interaction of proteins and metabolites tells us exactly what’s happening in the disease,” Pirhaji told me. “But there are over 40,000 metabolites in the human body. DNA and RNA are easy to measure, but metabolites have tremendous diversity in mass. Each one requires its own experiment to detect.”

As you can imagine, the time and money that would be involved in such an extensive battery of testing have made metabolomics difficult to study. But what Pirhaji and her collaborators at MIT decided was that it was similar enough to other “big noisy data set” problems that the nascent approach of machine learning could be effective.

“Instead of doing experiments,” Pirhaji said, “why don’t we use AI and our database?” ReviveMed, which she founded along with data scientist Demarcus Briers and biotech veteran Richard Howell, is the embodiment of that proposal.

Pharmaceutical companies and research organizations already have a mess of metabolite masses, known interactions, suspected but unproven effects, and disease states and outcomes. Plenty of experimentation is done, but the results are frustratingly vague, owing to the inability to be sure about the metabolites themselves or what they’re doing. Most experimentation has resulted in partial understanding of a small proportion of known metabolites.

That data isn’t just a few drives’ worth of spreadsheets and charts, either. Not only does the data comprise drug-protein, protein-protein, protein-metabolite, and metabolite-disease interactions, but the company is also including data that’s essentially never been analyzed: “We’re looking at metabolites that no one has looked at before.”

The information is sitting in an archive somewhere, gathering dust. “We actually have to go physically pick up the mass spectrometry files,” Pirhaji said. (“They’re huge,” she added.)

Once they got the data all in one place (Pirhaji described it as “a big hairball with millions of interactions,” in a presentation in March), they developed a model to evaluate and characterize everything in it, producing the kind of insights machine learning systems are known for.

The “hairball.”

The results were more than a little promising. In a trial run, they identified new disease mechanisms for Huntington’s, new therapeutic targets (i.e. biomolecules or processes that could be affected by drugs), and existing drugs that may affect those targets.

The secret sauce, or one ingredient anyway, is the ability to distinguish metabolites with similar masses (sugars or fats with different molecular configurations but the same number and type of atoms, for instance) and correlate those metabolites with both drug and protein effects and disease outcomes. The metabolome fills in the missing piece between disease and drug without any tests establishing it directly.
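For readers who want a concrete picture, here is a toy sketch of the general technique described above: scoring candidate drug-to-disease chains over a graph of known and inferred interactions. All entity names, edge weights and the scoring rule are invented for illustration; ReviveMed’s actual models are far more sophisticated.

```python
# Toy version of an interaction graph linking drugs to diseases via proteins
# and metabolites. Entities and confidence weights are invented; the point is
# only to show how chained interactions can surface candidate connections.
from collections import defaultdict

# Directed edges with confidence weights: drug -> protein -> metabolite -> disease
edges = {
    ("drug_A", "protein_P1"): 0.9,     # known drug-protein interaction
    ("protein_P1", "metab_M7"): 0.6,   # inferred protein-metabolite link
    ("metab_M7", "disease_X"): 0.7,    # metabolite level correlates with disease
}

graph = defaultdict(list)
for (src, dst), weight in edges.items():
    graph[src].append((dst, weight))

def score_paths(start, target, weight=1.0, path=()):
    """Yield (path, score) for every chain linking start to target.
    Score is the product of edge confidences along the path."""
    path = path + (start,)
    if start == target:
        yield path, weight
        return
    for nxt, w in graph[start]:
        if nxt not in path:  # avoid cycles
            yield from score_paths(nxt, target, weight * w, path)

for path, score in score_paths("drug_A", "disease_X"):
    print(" -> ".join(path), f"(score={score:.2f})")
# drug_A -> protein_P1 -> metab_M7 -> disease_X (score=0.38)
```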

At that point the drug will, of course, require real-world testing. But although ReviveMed does do some verification on its own, this is when the company would hand back the results to its clients, pharmaceutical companies, which then take the drug and its new effect to market.

In effect, the business model is offering low-cost, high-reward R&D as a service to pharma, which can hand over reams of data it has no particular use for, potentially resulting in practical applications for drugs that already have millions invested in their testing and manufacture. What wouldn’t Pfizer pay to learn that Robitussin also prevents Alzheimer’s? That knowledge is worth billions, and ReviveMed is offering a new, powerful way to check for such things with little in the way of new investment.

This is the kind of web of molecules and effects that the system sorts through.

ReviveMed, for its part, is being a bit more choosy than that — its focus is on untreatable diseases with a good chance that existing drugs affect them. The first target is fatty liver disease, which affects millions, causing great suffering and cost. And something like Huntington’s, in which genetic triggers and disease effects are known but not the intermediate mechanisms, is also a good candidate for which the company’s models can fill the gap.

The company isn’t reliant on Big Pharma for its data, though. The original training data was all public (though “very fragmented”) and it’s that on which the system is primarily based. “We have a patent on our process for getting this metabolome data and translating it into insights,” Pirhaji notes, although the work they did at MIT is available for anyone to access (it was published in Nature Methods, in case you were wondering).

But compared with genomics and proteomics, not much metabolomic data is public — so although ReviveMed can augment its database with data from clients, its researchers are also conducting hundreds of human tests on their own to improve the model.

The business model is a bit complicated as well — “It’s very case by case,” Pirhaji told me. A research hospital looking to collaborate and share data while sharing any results publicly or as shared intellectual property, for instance, would not be a situation where a lot of cash would change hands. But a top-5 pharma company — two of which ReviveMed already has dealings with — that wants to keep all the results for itself and has limitless coffers would pay a higher cost.

I’m oversimplifying, but you get the idea. In many cases, however, ReviveMed will aim to be a part of any intellectual property it contributes to. And of course the data provided by the clients goes into the model and improves it, which is its own form of payment. So you can see that negotiations might get complicated. But the company already has several revenue-generating pilots in place, so even at this early stage those complications are far from insurmountable.

Lastly there’s the matter of the seed round: $1.5 million, led by Rivas Capital along with TechU, Team Builder Ventures, and WorldQuant. This should allow them to hire the engineers and data scientists they need and expand in other practical ways. Placing well in a recent Google machine learning competition got them $200K worth of cloud computing credit, so that should keep them crunching for a while.

ReviveMed’s approach is a fundamentally modern one that wouldn’t be possible just a few years ago, such is the scale of the data involved. It may prove to be a powerful example of data-driven biotech as lucrative as it is beneficial. Even the early proof-of-concept and pilot work may provide help to millions or save lives — it’s not every day a company is founded that can say that.

Parsley Health picks up $10 million to reimagine healthcare

According to Parsley Health, the average adult spends 19 minutes with their physician every year. Seventy percent of the time, these short visits result in the prescription of a medication.

“According to the CDC, 70% of diseases in our country are chronic and lifestyle-driven,” said Parsley Health founder and CEO Dr. Robin Berzin. “And yet instead of addressing the root causes of health problems, medicine’s toolkit is limited to prescriptions and procedures, driving up costs while the average person gets sicker. The answer isn’t just another pill.”

Parsley Health, an annual membership service ($150/month), reimagines what medicine can be. The company focuses on the cause of an illness rather than simply throwing Band-Aids at the problem. But in order to do this, your doctor needs far more than 19 minutes of your time each year.

Today, Parsley announced the close of a $10 million Series A funding led by FirstMark Capital, with participation from Amplo, Trail Mix Ventures, Combine and The Chernin Group. Individual investors such as Dr. Mark Hyman, M.D., director of the Cleveland Clinic Center for Functional Medicine; Nat Turner, CEO of Flatiron Health; Neil Parikh, co-founder of Casper; and Dave Gilboa, co-founder of Warby Parker, also invested in the round.

As part of the financing, FirstMark Capital partner Catherine Ulrich will join the board.

Here’s how Parsley works:

When a user first signs up online, they enter a wide range of data about themselves, from family health history to past procedures to symptoms and lifestyle. The user then schedules their first visit with their new doctor, which lasts 75 minutes, during which the doctor exhaustively goes through that information to build a full picture of the patient’s health.

After that visit, the user has full transparency into their medical data and the doctor’s notes. The patient also leaves with a health plan, including lifestyle and nutritional advice, and access to their own health coach. Parsley also writes prescriptions, when necessary, and refers patients to top-of-the-line specialists, if needed.

Membership includes five annual visits with a doctor (which rounds out to about four hours), as well as five sessions with a certified health coach. These coaches help patients stay on their health plan, whether that means advice on physical exercise, getting better sleep or finding take-out places and menu items near the office for healthier meals.

Throughout a patient’s membership, they have full access to their medical data and doctor’s notes online, as well as unlimited direct messaging with their doctor. At Parsley, there is always a doctor on call to answer questions about semi-urgent issues like a UTI or a sinus infection.
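For a rough sense of what that membership buys, here is the back-of-the-envelope math implied by the figures above. The membership price and visit counts come from the article; the cost-per-hour framing is my own illustration and ignores coaching sessions and unlimited messaging.

```python
# Back-of-the-envelope math on a Parsley membership, using figures from
# the article. The per-hour framing is illustrative, not Parsley's.
monthly_fee = 150      # $/month membership
doctor_visits = 5      # per year
doctor_hours = 4       # roughly what those five visits total
coach_sessions = 5     # per year, not counted in the per-hour figure

annual_cost = monthly_fee * 12
print(f"annual cost: ${annual_cost}")                         # $1800
print(f"per doctor-hour: ${annual_cost / doctor_hours:.0f}")  # ~$450, before coaching
```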

All of Parsley’s doctors and health coaches are full-time employees at Parsley, and Dr. Berzin told TechCrunch that the company sees a lot of inbound from doctors who want to spend more time with patients and help solve the root of their problems.

Parsley also trains its doctors in functional medicine, which uses a systems-biology approach to better resolve and manage modern chronic disease. As part of Parsley’s clinical fellowship, doctors learn to evaluate thousands of biomarkers in order to diagnose and treat diseases at their origin.

Parsley is not the first in the space. Forward and One Medical also look to change the way healthcare is provided in this country, while NextHealth Technologies is focused on supplemental treatments like IV therapy and cryotherapy.

“When I tell people about Parsley, they say ‘wow! That’s what medicine should be’,” said Dr. Berzin. “People are really searching for something better than feeling like they’re paying more and more for healthcare while getting less and less. People are excited to invest in their health and wellness and to have a team that’s working to care for them.”

Parsley has clinics in San Francisco, New York and Los Angeles.

Editor’s Note: An earlier version of this article incorrectly stated that Parsley Health costs $150/year.

UK report urges action to combat AI bias

The need for diverse development teams and truly representational data-sets to avoid biases being baked into AI algorithms is one of the core recommendations of a lengthy Lords committee report looking into the economic, ethical and social implications of artificial intelligence, published today by the upper House of the UK parliament.

“The main ways to address these kinds of biases are to ensure that developers are drawn from diverse gender, ethnic and socio-economic backgrounds, and are aware of, and adhere to, ethical codes of conduct,” the committee writes, chiming with plenty of extant commentary around algorithmic accountability.

“It is essential that ethics take centre stage in AI’s development and use,” adds committee chairman, Lord Clement-Jones, in a statement. “The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences.”

The report also calls for the government to take urgent steps to help foster “the creation of authoritative tools and systems for auditing and testing training datasets to ensure they are representative of diverse populations, and to ensure that when used to train AI systems they are unlikely to lead to prejudicial decisions” — recommending a publicly funded challenge to incentivize the development of technologies that can audit and interrogate AIs.

“The Centre for Data Ethics and Innovation, in consultation with the Alan Turing Institute, the Institute of Electrical and Electronics Engineers, the British Standards Institute and other expert bodies, should produce guidance on the requirement for AI systems to be intelligible,” the committee adds. “The AI development sector should seek to adopt such guidance and to agree upon standards relevant to the sectors within which they work, under the auspices of the AI Council” — the latter being a proposed industry body it wants established to help ensure “transparency in AI”.

The committee is also recommending a cross-sector AI Code to try to steer developments in a positive, societally beneficial direction — though not for this to be codified in law (the suggestion is it could “provide the basis for statutory regulation, if and when this is determined to be necessary”).

Among the five principles they’re suggesting as a starting point for the voluntary code are that AI should be developed for “the common good and benefit of humanity”, and that it should operate on “principles of intelligibility and fairness”.

Though, elsewhere in the report, the committee points out it can be a challenge for humans to understand decisions made by some AI technologies — going on to suggest it may be necessary to refrain from using certain AI techniques for certain types of use-cases, at least until algorithmic accountability can be guaranteed.

“We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take,” it writes in a section discussing ‘intelligible AI’. “In cases such as deep neural networks, where it is not yet possible to generate thorough explanations for the decisions that are made, this may mean delaying their deployment for particular uses until alternative solutions are found.”

A third principle the committee says it would like to see included in the proposed voluntary code is: “AI should not be used to diminish the data rights or privacy of individuals, families or communities”.

Though this is a curiously narrow definition — why not push for AI not to diminish rights, period?

“It’s almost as if ‘follow the law’ is too hard to say,” observes Sam Smith, a coordinator at patient data privacy advocacy group, medConfidential, discussing the report.

Looking at the tech industry as a whole, it’s certainly hard to conclude that self-defined ‘ethics’ appear to offer much of a meaningful check on commercial players’ data processing and AI activities.

Topical case in point: Facebook has continued to claim there was nothing improper about the fact millions of people’s information was shared with professor Aleksandr Kogan. People “knowingly provided their information” is the company’s defensive claim.

Yet the vast majority of people whose personal data was harvested from Facebook by Kogan clearly had no idea what was possible under its platform terms — which, until 2015, allowed one user to ‘consent’ to the sharing of data on all their Facebook friends. (Hence ~270,000 downloaders of Kogan’s app being able to pass on data on up to 87M Facebook users.)
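The amplification implied by those two figures is easy to miss, so here is the quick division, using only the numbers quoted above.

```python
# Rough arithmetic on the friend-permission amplification described above,
# using only the figures quoted in the article.
installers = 270_000       # downloaders of Kogan's app
affected = 87_000_000      # Facebook users whose data could be passed on

print(f"~{affected / installers:.0f} users exposed per install")  # ~322
```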

So Facebook’s self-defined ‘ethical code’ has been shown to be worthless — aligning completely with its commercial imperatives, rather than supporting users to protect their privacy. (Just as its T&Cs are intended to cover its own “rear end,” rather than clearly inform people about their rights, as one US congressman memorably put it last week.)

“A week after Facebook were criticized by the US Congress, the only reference to the Rule of Law in this report is about exempting companies from liability for breaking it,” Smith adds in a MedConfidential response statement to the Lords report. “Public bodies are required to follow the rule of law, and any tools sold to them must meet those legal obligations. This standard for the public sector will drive the creation of tools which can be reused by all.”

Health data “should not be shared lightly”

The committee, which took evidence from Google-owned DeepMind as one of a multitude of expert witnesses during more than half a year’s worth of enquiry, touches critically on the AI company’s existing partnerships with UK National Health Service Trusts.

The first of these, dating from 2015 — and involving the sharing of ~1.6 million patients’ medical records with the Google-owned company — ran into trouble with the UK’s data protection regulator. The information commissioner concluded last summer that the Royal Free NHS Trust’s agreement with DeepMind had not complied with UK data protection law.

Patients’ medical records were used by DeepMind to develop a clinical task management app wrapped around an existing NHS algorithm for detecting a condition known as acute kidney injury. The app, called Streams, has been rolled out for use in the Royal Free’s hospitals — complete with PR fanfare. But it’s still not clear what legal basis exists to share patients’ data.

“Maintaining public trust over the safe and secure use of their data is paramount to the successful widespread deployment of AI and there is no better exemplar of this than personal health data,” the committee warns. “There must be no repeat of the controversy which arose between the Royal Free London NHS Foundation Trust and DeepMind. If there is, the benefits of deploying AI in the NHS will not be adopted or its benefits realised, and innovation could be stifled.”

The report also criticizes the “current piecemeal” approach being taken by NHS Trusts to sharing data with AI developers — saying this risks “the inadvertent under-appreciation of the data” and “NHS Trusts exposing themselves to inadequate data sharing arrangements”.

“The data held by the NHS could be considered a unique source of value for the nation. It should not be shared lightly, but when it is, it should be done in a manner which allows for that value to be recouped,” the committee writes.

A similar point — about not allowing a huge store of potential value which is contained within publicly-funded NHS datasets to be cheaply asset-stripped by external forces — was made by Oxford University’s Sir John Bell in a UK government-commissioned industrial strategy review of the life sciences sector last summer.

Despite similar concerns, the committee also calls for a framework for sharing NHS data to be published by the end of the year, and is pushing for NHS Trusts to digitize their current practices and records — with a target deadline of 2022 — in “consistent formats” so that people’s medical records can be made more accessible to AI developers.

But worryingly, given the general thrust towards making sensitive health data more accessible to third parties, the committee does not seem to have a very fine-grained grasp of data protection in a health context — where, for example, datasets can be extremely difficult to render truly anonymous given the level of detail typically involved.

Although they are at least calling for the relevant data protection and patient data bodies to be involved in provisioning the framework for sharing NHS data, alongside Trusts that have already worked with DeepMind (and in one case received an ICO wrist-slap).

They write:

We recommend that a framework for the sharing of NHS data should be prepared and published by the end of 2018 by NHS England (specifically NHS Digital) and the National Data Guardian for Health and Care, with the support of the ICO [information commissioner’s office] and the clinicians and NHS Trusts which already have experience of such arrangements (such as the Royal Free London and Moorfields Eye Hospital NHS Foundation Trusts), as well as the Caldicott Guardians [the NHS’ patient data advocates]. This framework should set out clearly the considerations needed when sharing patient data in an appropriately anonymised form, the precautions needed when doing so, and an awareness of the value of that data and how it is used. It must also take account of the need to ensure SME access to NHS data, and ensure that patients are made aware of the use of their data and given the option to opt out.

As the Facebook-Cambridge Analytica scandal has clearly illustrated, opt-outs alone cannot safeguard people’s data or their legal rights — which is why incoming EU data protection rules (GDPR) beef up consent requirements to require a clear affirmative. (And it goes without saying that opt-outs are especially concerning in a medical context where the data involved is so sensitive — yet, at least in the case of a DeepMind partnership with Taunton and Somerset NHS Trust, patients do not even appear to have been given the ability to say no to their data being processed.)

Opt-outs (i.e. rather than opt-in systems) for data-sharing and self-defined/voluntary codes of ‘ethics’ demonstrably do very little to protect people’s legal rights where digital data is concerned — even if it’s true, for example, that Facebook holds itself in check vs what it could theoretically do with data, as company execs have suggested (one wonders what kind of stuff they’re voluntarily refraining from, given what they have been caught trying to manipulate).

The wider risk of relying on consumer savvy to regulate commercial data sharing is that an educated, technologically aware few might be able to lock down — or reduce — access to their information; but the mainstream majority will have no clue they need to or even how it’s possible. And data protection for a select elite doesn’t sound very equitable.

Meanwhile, at least where this committee’s attitude to AI is concerned, developers and commercial entities are being treated with favorable encouragement — via the notion of a voluntary (and really pretty basic) code of AI ethics — rather than being robustly reminded they need to follow the law.

Given the scope and scale of current AI-fueled scandals, that risks making the committee look naive.

The government has, though, made AI a strategic priority, and policies to foster and accelerate data-sharing to drive tech developments are a key part of its digital and industrial strategies. So the report needs to be read within that wider context.

The committee does add its voice to questions about whether/how legal liability will mesh with automated decision making — writing that “clarity is required” on whether “new mechanisms for legal liability and redress” are needed or not.

“We recommend that the Law Commission consider the adequacy of existing legislation to address the legal liability issues of AI and, where appropriate, recommend to Government appropriate remedies to ensure that the law is clear in this area,” it says on this. “At the very least, this work should establish clear principles for accountability and intelligibility. This work should be completed as soon as possible.”

But this isn’t exactly cutting edge commentary. Last month the government announced a three-year regulatory review focused on self-driving cars and the law, for instance. And the liability point is already generally well-aired — and in the autonomous cars case, at least, now having its tires extensively kicked in the UK.

What’s less specifically discussed in government circles is how AIs are demonstrably piling pressure on existing laws. And what — if anything — should be done to address those kind of AI-fueled breaking points. (Exceptions: Terrorist content spreading via online platforms has been decried for some years, with government ministers more than happy to make platforms and technologies their scapegoat and toughen laws; more recently hate speech on online platforms has also become a major political target for governments in Europe.)

The committee briefly touches on some of these societal pressure points in a section on AI’s impact on “social and political cohesion”, noting concerns raised to it about issues such as filter bubbles and the risk of AIs being used to manipulate elections. “[T]here is a rapidly growing need for public understanding of, and engagement with, AI to develop alongside the technology itself. The manipulation of data in particular will be a key area for public understanding and discussion in the coming months and years,” it writes here. 

However it has little in the way of gunpowder — merely recommending that research is commissioned into “the possible impact of AI on conventional and social media outlets”, and to investigate “measures which might counteract the use of AI to mislead or distort public opinion as a matter of urgency”.

Elsewhere in the report, the committee also raises an interesting concern about data monopolies — noting that investments by “large overseas technology companies in the UK economy” are “increasing consolidation of power and influence by a select few,” which it argues risks damaging the UK’s home-grown AI start-up sector.

But again there’s not much of substance in its response. The committee doesn’t seem to have formed its own ideas on how, or even whether, the government needs to address data concentrating power in the hands of big tech — beyond calling for “strong” competition frameworks. This lack of conviction is attributed to hearing mixed messages on the topic from its witnesses. (Though it may well also be related to the economic portion of the enquiry’s focus.)

“The monopolisation of data demonstrates the need for strong ethical, data protection and competition frameworks in the UK, and for continued vigilance from the regulators,” it concludes. “We urge the Government, and the Competition and Markets Authority, to review proactively the use and potential monopolisation of data by the big technology companies operating in the UK.”

The report also raises concerns about access to funding for UK AI startups to ensure they can continue scaling domestic businesses — recommending that a chunk of the £2.5BN investment fund at the British Business Bank, which the government announced in the Autumn Budget 2017, is “reserved as an AI growth fund for SMEs with a substantive AI component, and be specifically targeted at enabling such companies to scale up”.

No one who supports the startup cause would argue with trying to make more money available. But if data access has been sealed up by tech giants, all the scale-up funding in the world won’t help domestic AI startups break through that algorithmic ceiling.

Also touched on: the looming impact of Brexit, with the committee calling on the government to “commit to underwriting, and where necessary replacing, funding for European research and innovation programmes, after we have left the European Union.” Which boils down to another whistle in a now very long score of calls for replacement funding after the UK leaves the EU.

Funding for regulators is another concern, with a warning that the ICO must be “adequately and sustainably resourced” — as a result of the additional burden the committee expects AI to put on existing regulators.

This issue is also on the radar of the UK’s digital minister, Matt Hancock, who has said he’s considering what additional resources the ICO might need — such as the power to compel testimony from individuals. (Though the ICO itself has previously raised concerns that the minister and his data protection bill are risking undermining her authority.) For now it remains to be seen how well armed the agency will be to meet the myriad challenges generated and scaled by AI’s data processors.

“Blanket AI-specific regulation, at this stage, would be inappropriate,” the report adds. “We believe that existing sector-specific regulators are best placed to consider the impact on their sectors of any subsequent regulation which may be needed. We welcome that the Data Protection Bill and GDPR appear to address many of the concerns of our witnesses regarding the handling of personal data, which is key to the development of AI. The Government Office for AI, with the Centre for Data Ethics and Innovation, needs to identify the gaps, if any, where existing regulation may not be adequate. The Government Office for AI must also ensure that the existing regulators’ expertise is utilised in informing any potential regulation that may be required in the future.”

The committee’s last two starter principles for their voluntary AI code serve to underline how generously low the ethical bar is really being set here — boiling down to: AI shouldn’t be allowed to kill off free schools for our kids, nor be allowed to kill us — which may itself be another consequence of humans not always being able to clearly determine how AI does what it does or exactly what it might be doing to us.

Tony Fadell is worried about smartphone addiction

This weekend, former Apple engineer and consumer gadget legend Tony Fadell penned an op-ed for Wired. In it, he argued that smartphone manufacturers need to do a better job of educating users about how often they use their mobile phones, and the resulting dangers that overuse might bring about.

Take healthy eating as an analogy: we have advice from scientists and nutritionists on how much protein and carbohydrate we should include in our diet; we have standardised scales to measure our weight against; and we have norms for how much we should exercise.

But when it comes to digital “nourishment”, we don’t know what a “vegetable”, a “protein” or a “fat” is. What is “overweight” or “underweight”? What does a healthy, moderate digital life look like? I think that manufacturers and app developers need to take on this responsibility, before government regulators decide to step in – as with nutritional labelling. Interestingly, we already have digital-detox clinics in the US. I have friends who have sent their children to them. But we need basic tools to help us before it comes to that.

Plenty of studies have shown that too much screen time and internet/smartphone addiction can be damaging to our health, both physically and psychologically. And while there are other players involved in our growing dependence on our phones (yes, I’m talking to you, Facebook), the folks who actually build those screens have ample opportunity to make users more aware of their usage.

In his article, Fadell brings up ways that companies like Apple could build out features for this:

You should be able to see exactly how you spend your time and, if you wish, moderate your behaviour accordingly. We need a “scale” for our digital weight, like we have for our physical weight. Our digital consumption data could look like a calendar with our historical activity. It should be itemised like a credit-card bill, so people can easily see how much time they spend each day on email, for example, or scrolling through posts. Imagine it’s like a health app which tracks metrics such as step count, heart rate and sleep quality.

With this usage information, people could then set their own targets – like they might have a goal for steps to walk each day. Apple could also let users set their device to a “listen-only” or “read-only” mode, without having to crawl through a settings menu, so that you can enjoy reading an e-book without a constant buzz of notifications.
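As a thought experiment, the kind of itemized “digital bill” Fadell describes is straightforward to prototype. The sketch below rolls hypothetical per-app usage events into a per-day summary; the event format and app categories are invented, and a real implementation would read usage data from the operating system rather than a hand-built list.

```python
# Toy sketch of the "itemised like a credit-card bill" idea described above.
# Event data and app names are invented stand-ins for OS-level screen-time logs.
from collections import defaultdict
from datetime import date

# (day, app, minutes) usage events
events = [
    (date(2018, 4, 16), "Email", 34),
    (date(2018, 4, 16), "Social", 58),
    (date(2018, 4, 16), "Email", 12),
    (date(2018, 4, 17), "Social", 41),
]

def itemize(events):
    """Roll events up into a per-day, per-app 'bill' of minutes spent."""
    bill = defaultdict(lambda: defaultdict(int))
    for day, app, minutes in events:
        bill[day][app] += minutes
    return bill

for day, apps in sorted(itemize(events).items()):
    total = sum(apps.values())
    lines = ", ".join(f"{app}: {m} min" for app, m in sorted(apps.items()))
    print(f"{day}  {lines}  (total: {total} min)")
```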

9to5Mac brought up a Bloomberg piece from February that not only shows Apple’s capability to build out this feature, but also its willingness to do so for young people, with a reported new feature that would let parents see how much time their kids are staring at their screens.

Unlike Facebook, which has had to tweak its algorithm to prioritize meaningful connection over time spent on the platform, Apple doesn’t depend on how much you use your phone for its revenue. So, maybe we’ll see a digital health feature added to Apple products in the future.

Motiv’s neat little fitness ring gets Android and Alexa support

I was pleasantly surprised by Motiv. Sure, my expectations were low for a fitness tracking ring, but pleasantly surprised is still pleasantly surprised. The $200 Fitbit alternative gets a couple of key software upgrades this week, including, most notably, the addition of Android compatibility, along with some Alexa integration.

Initially launched as iOS-only, the Ring is taking baby steps toward working with the world’s most popular mobile operating system. It’s launching first as part of an open beta, with “a more comprehensive feature set” coming by the middle of the year. But adventurous users can download the app from the Google Play Store right now.

The fitness tracking ring now works with Alexa, as well. Users can ask Amazon’s smart assistant to sync data and check their heart rate. More metrics are on the way by year’s end, in an attempt to save users from having to look at a phone screen every time, I suppose. After all, Motiv doesn’t seem likely to cram a tiny screen into the ring any time soon.

Speaking of Amazon, the Ring is now on sale through the online retail giant. Motiv will also be selling the ring at b8ta stores, for those who want to see it in person before dropping $200.

Self-care startup Shine raises $5 million Series A

Shine, an early arrival in a market now teeming with self-care apps and services, has closed on $5 million in Series A funding, the company announced today, alongside the milestone of hitting 2 million active users. The round was led by existing investor Comcast Ventures, with betaworks, Felix Capital and The New York Times also participating.

The investment comes roughly two years after Shine launched its free service, a messaging bot aimed at younger users that doles out life advice and positive reinforcement on a daily basis through SMS texts or Facebook’s Messenger.

At the time, the idea that self-help could be put into an app or bot-like format was still a relatively novel concept. But today, digital wellness has become far more common with apps for everything from meditation to self-help to talk therapy.

“We’re proud that we were part of the catalyst to make well-being as an industry something that is so much more top-of-mind. We really sensed where the world was going and we were ahead of it,” says co-founder Naomi Hirabayashi, who built Shine along with her former DoSomething.org co-worker Marah Lidey. The founders had wanted to offer others something akin to the personal support system they had with each other as close friends.

“Marah and I are both women of color, and we created this company from a very non-traditional background from an entrepreneurship standpoint – we didn’t go to business school,” Hirabayashi explains. “We saw there was something missing in the market because wellbeing companies didn’t really reach us – they didn’t speak to us. We didn’t see people that looked like us. We didn’t feel like the way they shared content sounded like how we spoke about the different wellbeing issues in our lives,” she says.

The company’s free messaging product, Shine Text, was the result of their frustrations with existing products. It tackles a timely theme every day in areas like confidence, productivity, mental health, happiness and more. And it isn’t just some sort of life-affirming text – Shine converses with you on the topic at hand using research-backed materials to help you better understand the information. It’s also presented in a style that makes Shine feel more like a friend chatting with you.

The service has grown to 2 million users across 189 countries, despite not being localized into other languages. Eighty-eight percent of users are under the age of 35, and 70 percent are female.

Shine attempted to generate revenue in the past with a life-coaching subscription, but users wanted to talk to a real person, and the subscription was fairly steep at $15.99 per week. That product never emerged from testing, and the founders now refer to it as an “experiment.”

The company gave subscriptions another shot this past December, with the launch of a freemium (free with paid upgrades) app on iOS. The new app offers meditations, affirmations, and something called “Shine Stories.”

The meditations are short audio tracks, voiced by influencers, that help you with various challenges. There are quick-hit meditations for recentering and relaxing; ones focused on handling a specific situation, like toxic friendships or online dating; and seven-day challenges that deal with a particular issue, like burnout or productivity.

Affirmations are quick pep talks; Shine Stories are slightly longer, around five minutes each, and also voiced by influencers.

“The biggest thing is that we want to meet the user where they are – and we know people are on the go,” says Hirabayashi. “You can expect a lot more to come in the future around how we combine this really exciting time that’s happening for audio consumption and the hunger that there is for audio content that’s motivational and makes you feel better.”

Asked specifically if the company was considering a voice-first app, like an Alexa skill, or perhaps a more traditional podcast, Hirabayashi said they weren’t yet sure, but didn’t plan on limiting the Shine Stories to a single platform indefinitely. But one thing they weren’t interested in doing in the near-term was introducing ads into Shine’s audio content.

The Shine app for iOS is a free download, with a selection of its audio available to free users. Users can unlock the full library for $4.99 per month, billed as an annual subscription of $59.99, or pay $7.99 per month if billed monthly.
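For what it’s worth, here is a quick check of that pricing math, using only the figures quoted above.

```python
# Quick check of the subscription math quoted above (prices from the article).
annual_price = 59.99    # billed once per year
monthly_price = 7.99    # billed every month

effective_monthly = annual_price / 12
yearly_if_monthly = monthly_price * 12

print(f"annual plan: ${effective_monthly:.2f}/month equivalent")  # ~$5.00
print(f"monthly plan over a year: ${yearly_if_monthly:.2f}")      # $95.88
print(f"annual plan saves: ${yearly_if_monthly - annual_price:.2f}")  # $35.89
```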

The founders declined to offer specifics on their conversions from free to paid members, but said it was “on par with industry standards.”

With the Series A now under its belt, Shine plans to double its eight-person team this year, launch the app on Android and continue to grow the business, potentially including the launch of new products.

Now the question is whether millennials are actually so into self-care that they’ll pay. There are some signs that could be true — the top ten self-care apps pulled in $15 million last quarter, with meditation apps leading the way.

“We’re dominating the self-care routine of millennial women right now and we want to keep doing that,” Hirabayashi says.