Amazon develops a new way to help Alexa answer complex questions

Amazon’s Alexa AI team has developed a new training method for the virtual assistant that could greatly improve its ability to handle tricky questions. In a blog post, team lead Abdalghani Abujabal details the new method, which combines text-based search with a custom-built knowledge graph, two approaches that normally compete.

Abujabal suggests the following scenario: You ask Alexa “Which Nolan films won an Oscar but missed a Golden Globe?” Answering this asks a lot: you need to identify that the ‘Nolan’ referred to is director Christopher Nolan, figure out which movies he’s directed (even his role as ‘director’ has to be inferred for the resulting list), cross-reference those that won an Oscar against those that also won a Golden Globe, and identify the titles present on the first list but not the second.

Amazon’s method for answering this difficult question opts to first gather the most complete data set possible, then automatically build a curated knowledge graph out of an initially high-volume and very noisy (i.e., filled with unnecessary data) data set, using algorithms the research team custom-built to cut the chaff and arrive at mostly meaningful results.

The system devised by Amazon is actually relatively simple on its face; rather, it combines two relatively simple methods. The first is a basic web search that essentially just crawls the web using the full text of the question asked, just as if you’d typed “Which Nolan films won an Oscar but missed a Golden Globe?” into Google, for instance (the researchers actually used multiple search engines). The system then grabs the top ten ranked pages and breaks them down into identified names and grammatical units.

On top of that resulting data set, Alexa AI’s approach then looks for clues in the structure of sentences to flag and weight significant ones in the top texts, like “Nolan directed Inception,” and discounts the rest. This builds the ad-hoc knowledge graph, which the system then assesses to identify “cornerstones” within it. Cornerstones are basically dead ringers for words in the original search string (i.e., “Which Nolan films won an Oscar but missed a Golden Globe?”); the system takes those out and focuses instead on the information in between as the source of the actual answers to the question.
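The cornerstone idea can be sketched in a few lines. Everything below is a toy illustration under assumptions of mine, not Amazon’s actual pipeline: the retrieved sentences are invented, entity detection is reduced to a crude capitalization check, and candidates are simply ranked by how strongly they co-occur with query terms (the real system also cross-references the award lists themselves).

```python
# Toy sketch of cornerstone-based answer ranking. Not Amazon's code:
# the sentences, the stop list, and the scoring are invented for illustration.
from collections import Counter

QUERY = "Which Nolan films won an Oscar but missed a Golden Globe?"

# Stand-in for sentences extracted from the top-ranked web pages.
RETRIEVED = [
    "Nolan directed Inception.",
    "Inception won an Oscar for visual effects.",
    "Inception missed out on the Golden Globe that year.",
    "Interstellar won an Oscar and also won a Golden Globe.",
]

def cornerstones(query):
    """Content words from the query itself: they anchor relevant sentences
    but are never themselves the answer."""
    stop = {"which", "but", "an", "a", "the", "that"}
    return {w.strip("?.,").lower() for w in query.split()} - stop

def candidate_answers(sentences, query):
    """Weight each sentence by its cornerstone overlap, then credit the
    non-cornerstone entities (here: capitalized words) it contains."""
    anchors = cornerstones(query)
    scores = Counter()
    for sentence in sentences:
        words = [w.strip("?.,") for w in sentence.split()]
        weight = sum(w.lower() in anchors for w in words)
        for w in words:
            if w.lower() not in anchors and w[:1].isupper():
                scores[w] += weight
    return scores
```

In this toy data, `candidate_answers(RETRIEVED, QUERY)` ranks “Inception” first, because it co-occurs with the most cornerstone terms across the retrieved sentences.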

With some final weighting and sorting of the remaining data, the algorithm correctly returns “Inception” as the answer. Amazon’s team found that this method actually beat out state-of-the-art approaches that were much more involved but focused on text search alone, or on building a curated knowledge graph in isolation. Still, they think they can tweak their approach to be even better, which is good news for Alexa users hoping their smart speakers will be able to settle heated debates over advanced Trivial Pursuit questions.

Capital One breach said to also affect other major companies

The data breach at Capital One may be the “tip of the iceberg” and may affect other major companies, according to security researchers.

Israeli security firm CyberInt said Vodafone, Ford, Michigan State University and the Ohio Department of Transportation may have also fallen victim to the same breach, in which over 106 million credit applications and files were stolen from a cloud server run by Capital One. The alleged hacker, Paige Thompson, a Seattle resident, was taken into FBI custody earlier this week.

Reports from Forbes and security reporter Brian Krebs indicate that Capital One may not have been the only company affected, pointing to “one of the world’s biggest telecom providers, an Ohio government body, and a major U.S. university,” according to Slack messages sent by the alleged hacker.

Krebs posted a screenshot of a list of files purportedly stolen by the alleged hacker. The filenames referenced companies including carmaker Ford and Italian financial services company UniCredit.

The Justice Department said Thompson may face additional charges — suggesting other companies may have been involved.

We reached out to several of those named by CyberInt with mixed results. Only the Ohio Department of Transportation confirmed it had data stolen, and was working with the FBI. “At this point, however, we can confirm that the information in the referenced file contained only publicly available data and no private information was stored there,” said spokesperson Erica Hawkins.

Ford spokesperson Monique Brentley told TechCrunch that it’s “investigating the situation to determine if Ford information is involved.”

Meanwhile, Vodafone spokesperson Adam Liversage said the telecom giant was “not aware” of its data stolen in the Capital One breach.

And a spokesperson for Michigan State University said it receives “hundreds of threats and attacks on our system” and said it was “hard to know if one recently was the alleged hacker from the Capital One situation.”

“Our teams are looking into [it] but at this point we have no information to share,” said spokesperson Emily Guerrant.

The hack of Capital One is the most significant data breach this year. Data was stolen from an Amazon Web Services-based storage bucket, which included more than 140,000 Social Security numbers and over a million Canadian Social Insurance numbers, as well as other personal information.

Capital One said it learned of the breach through a third party who reportedly saw the alleged hacker’s claims and boasts about the thefts.

Security researcher John Wethington told TechCrunch that, based on public information (including the Slack channel of which the alleged hacker was a member), other companies likely had data stolen as well.

“Based on the information gathered from publicly available information on the alleged hacker’s GitHub and GitLab accounts, as well as public information from the Slack channel, it’s clear that organizations including Ford, Vodafone and others are possible victims of what appears to be a massive sensitive data hacking spree,” he said.

As of the time of writing, Thompson faces up to five years in prison and a fine of up to $250,000.

What happened to the sharing economy?

A few years ago, Silicon Valley couldn’t stop using a trendy buzzword: the sharing economy. The good old top-down economic model, with its clear separation between service providers and clients, was falling apart, and huge tech companies disrupted entire industries, from Airbnb to TaskRabbit, Uber, Etsy and Getaround.

When you look back at the sharing economy boom of the early 2010s, many of the principles that defined that generation of startups have slowly disappeared. Instead of driving a huge societal shift, the sharing economy is slowly fading away.

What is the sharing economy?

In the past, if you wanted to buy a good or a service, you would ask a company or a professional to provide it.

You’d buy something from a particular company because you knew it would be the exact thing you needed. That’s why plenty of companies spent huge amounts of money building a brand and a reputation. If you just bought a car, chances are you’ll see thousands of car ads before you buy your next one.

And that’s also why distribution channels have been key, especially in commoditized markets with low brand differentiation. For instance, when you buy a new printer, chances are you just head to an electronics store or type “printer” on your favorite e-commerce website. If HP doesn’t have a distribution deal with those stores, you’ll just buy an Epson printer.

If your neighbor wants a new printer in a couple of years, you might recommend the same printer, but you may have forgotten where you bought it. There’s little differentiation between distribution channels in that case.

The marketplace model

The sharing economy happened because a group of entrepreneurs wanted to invent new distribution channels. Sure, some traditional distribution channels secured exclusive rights to sell specific products.

But those startups made a radical change. They wanted to work on a completely new inventory of goods or services.

Daily Crunch: Facebook wants to build brain-controlled wearables

Facebook reveals its research into brain-controlled wearable devices (yes, really), iPhone sales dip and Samsung announces a new Galaxy Tab. Here’s your Daily Crunch for July 31, 2019.

1. Facebook is exploring brain control for AR wearables

Facebook revealed that it’s working with UCSF to research a brain-computer interface as a way to control future augmented reality interfaces. The company says the approach would involve “a non-invasive wearable device that lets people type just by imagining what they want to say.”

The company acknowledged that there are some thorny privacy issues here: “Neuroethical design is one of our program’s key pillars — we want to be transparent about what we’re working on so that people can tell us their concerns about this technology.”

2. Apple’s revenue growth slows as iPhone sales dip 12% year-over-year

Across categories, iPhone revenue had the biggest year-over-year dip, going from $29.5 billion in last year’s Q3 to just $26 billion this most recent quarter.

3. Samsung targets iPad Pro with the Galaxy Tab S6

Samsung’s latest tablet is going after the same slice of creatives targeted by the iPad Pro and various Surface devices. Its most appealing feature may be the addition of the latest Qualcomm Snapdragon processor.

4. Spotify hits 108M paying users and 232M overall, but its average revenue per user declines

“We missed on subs… That’s on us,” the company said.

5. The maker of popular selfie app Facetune just landed $135 million at a unicorn valuation

Facetune, a photo-editing app that empowers users to cover their gray hairs, refine their jaw lines and reshape their noses, was first introduced around six years ago. Its parent company Lightricks is based in Jerusalem and has 260 employees supporting six products across three divisions.

6. How the new ‘Lion King’ came to life

Even though the film looks like a live-action remake of “The Lion King,” every shot (except for the first) was created on a computer.

7. The dreaded 10x, or, how to handle exceptional employees

The very concept of a 10x engineer seems so… five years ago. (Extra Crunch membership required.)

8. Bindu Reddy, co-founder and CEO at RealityEngines, is coming to TechCrunch Sessions: Enterprise

RealityEngines is creating research-driven cloud services that can reduce some of the inherent complexity of working with AI tools.

Aspire raises $32.5M to help SMEs secure fast finance in Southeast Asia

Aspire, a Singapore-based startup that helps SMEs secure working capital, has raised $32.5 million in a new financing round to expand its presence in several Southeast Asian markets.

The Series A round for the one-and-a-half-year-old startup was funded by MassMutual Ventures Southeast Asia. Arc Labs and existing investors Y Combinator — Aspire graduated from YC last year — Hummingbird, and Picus Capital also participated in the round. Aspire has raised about $41.5 million to date.

Aspire operates a neo-banking-like platform to help small and medium-sized enterprises (SMEs) quickly and easily secure working capital of up to about $70,000.

AspireAccount, the startup’s flagship product, provides merchants and startups with an instant credit limit for daily business expenses, as well as business-to-business payment acceptance and other tools to help them manage their cash flow.

“I saw the problem while trying to rally small businesses trying to grow in the digital economy,” Andrea Baronchelli, founder and CEO of Aspire, told TechCrunch last year. “The problem is really about providing working capital to small business owners,” said Baronchelli, who served as CMO for Alibaba’s Lazada platform for four years.

Aspire currently operates in Thailand, Indonesia, Singapore, and Vietnam. The startup said it will use the fresh capital to scale its footprint in those markets. Additionally, Aspire is building a scalable marketplace banking infrastructure that will use third-party financial service providers to “create a unique digital banking experience for its SME customers.”

The startup is also working on a business credit card that will be linked to each business account by as early as this year, it said.

Baronchelli did not reveal how many business customers Aspire has, but said the startup has seen “30% month-on-month growth” since beginning operations in January 2018. Additionally, Aspire expects to amass more than 100,000 business accounts by next year.

Southeast Asia’s digital economy is slated to grow more than six-fold to reach more than $200 billion per year, according to a report co-authored by Google. But for many emerging startups and businesses, getting financial services from a bank and securing working capital have become major pain points.

A growing number of startups are beginning to address these SMEs’ needs. In India, for instance, NiYo Bank and Open have amassed millions of businesses through their neo-banking platforms. Both of these startups have raised tens of millions of dollars in recent months. Drip Capital, which helps businesses in developing markets secure working capital, raised $25 million last week.

As tech changes homelessness, libraries roll with the punches

The warmth and quiet of the library have long been a draw for those suffering from homelessness, but the past decade has piled more responsibilities on the shoulders of these institutions. The digital resources they provide are more important than ever for the homeless, and libraries have warily embraced their new role.

It is needless to recount here what most city-dwellers already know, that the homeless situation is critical in many cities, and that it is a hugely complex problem in both causes and potential solutions.

But it’s worth noting that the closure of mental institutions in the ’80s created an enduring and poorly addressed population of deeply ill homeless, compounded by veterans of conflicts in the ’90s — compounded again by rapid gentrification and the rising cost of living in most metropolitan areas.

And as the backdrop for all this, the rise of the information age — the future, as in William Gibson’s most continuously relevant epigram, but unequally distributed. As industries were reinvented, homeless people were systematically excluded from systems that barely tolerated them in the first place.

But it has not all been bad news. The introduction of smartphones and widespread wi-fi allowed — as the future made its way downward through the social strata — for communication, information, and entertainment. I used to do a double take when I saw a homeless person typing away at their phone, but the idea that phones are “luxuries” and that these people might be feigning destitution gave way quickly to the understanding that these devices are as necessary for someone in dire straits as they are for anyone else.

Even government services and associated aid organizations have evolved, putting crucial information like shelter updates, phone numbers, job-hunting resources, and important paperwork online and even in a mobile-friendly format. Programs like the U.S. Digital Service have been working on that lately but the infrastructure they are revamping is often decades old. It’s a work in progress.

Libraries have changed as well; the book-centric model that dominated the 20th century has obviously given way to a hybrid one where digital resources are as important as physical ones. And although the homeless have always found their way into libraries for one reason or another, be it help putting together a resume or just to get out of the cold, they are now coming in record numbers, sharing resources that are being spread increasingly thin.

Consider something as simple as computer and internet access. Personal computers long ago graduated from something you’d sit and do work at for half an hour, yet that is the model around which most library access is organized. It’s also a source of judgment for homeless people using public computers: How can someone monopolize such a resource just to browse reddit or watch YouTube? Shouldn’t they be looking for a job and then leaving after their allotted 45 minutes?

Libraries were always sources of education, but that has become more pronounced recently as they’ve shifted from being the ones who store information to those who provide free and open access to it. With the combination of how that information is used and who needs these services, this involves a transformation not just of purpose but of architecture: Becoming a place where people come and stay rather than a place people visit.

That transformation doesn’t come equally easily to all libraries or branches. It may be that a small, underfunded library happens to be near a shelter or bus station and attracts more of the homeless than it can serve, and indeed more people than intend to use the library for its “intended” purpose. Though these facilities were designed to provide short-term refuge for any and all, they’re generally not equipped or staffed to handle the volume or the types of people who find their way in and sometimes stay from open to close.

But some libraries are being proactive about both the way they provide access and in contacting at-risk populations where they are instead of waiting for them to come to a crowded central branch in desperation.

“Having internet access and wi-fi access is critically important for homeless populations,” said Seattle Public Library (SPL) communications director Andra Addison. “Many cannot afford a computer or cannot afford the cost of data for their phones or electronic devices. This is important when looking for work or completing school assignments. Our librarians visit homeless encampments where they bring wi-fi hotspots and other resources.”

The library has nearly a thousand portable wi-fi devices, which have been checked out some 27,000 times since the program started in 2015. That may be the difference between being able to answer a job-related email in time or not, or being in touch with family during a crucial moment.

SPL and the San Francisco Public Library have initiated other social programs as well. As the homeless crisis has worsened, so too has the strain on libraries, and the latter have taken steps to address the problem rather than the symptoms.

That means social workers at library branches frequented by homeless people who are schooled not just in how to interact with what can sometimes be an intimidating population, but how to offer them lasting help. The library is a conduit to information, right? That already includes helping people with job searches and schoolwork — why shouldn’t it also be a way for the homeless and mentally ill to get directed to the help they need?

To this end libraries have had to specialize in populations — the recently released from prison, veterans, teens, those suffering from addiction, and so on.

“Libraries welcome and serve everyone, no matter their age, background or income level. Libraries are also particularly committed to helping the underserved, particularly the insecurely housed,” Addison said. If that’s the mission, then that’s the mission — if fulfilling that mission looks different today than it did ten, twenty, or fifty years ago, that just means we’ve successfully evolved the model.

Computers, smartphones, and the internet are at the core of this change not just because it is the way things get done these days, but because they have the possibility to systematically improve access for the unfortunate as well as the fortunate. But that transition too is a painful one — when the eye of the technotopians is forever pointed upwards and outwards, to look backwards and downwards at the people it has left behind.

Libraries are not the only ones that must adapt if we are to build a truly inclusive environment in tech. Startups, funding, even hardware makers should be looking at making it their responsibility not just to reach higher heights but to lift up the lowest among us.


This post was written as part of the SF Homelessness Project, a yearly event in which news organizations highlight the causes of and solutions to homelessness throughout the nation.

Impossible Foods goes to the grocery store

After receiving approval from the Food and Drug Administration, Impossible Foods has cleared the last regulatory hurdle it faced to rolling out in grocery stores.

The company is targeting a September release of Impossible products in grocery stores, where they will join competitor Beyond Meat on shelves.

The news comes as the company said it inked a major supply agreement with OSI Group, a food processing company, to increase the availability of its Impossible Burger.

Impossible Foods has been facing shortages of its product, which it can’t make fast enough to meet growing customer demand.

The supply constraints have been especially acute as the company inks more deals with fast food vendors like Burger King, White Castle, and Qdoba to supply its Impossible protein patty and ground meal to a growing number of outlets.

Impossible Foods products are now served in over 10,000 locations around the world.

Earlier this year, the company hired Dennis Woodside and Sheetal Shah to scale up its manufacturing operations and help manage its growth into international markets. The company began selling its product in Singapore earlier this summer.

May brought not only new executives to the Impossible team, but a new capital infusion as well. Impossible Foods picked up $300 million in financing from investors including Khosla Ventures, Bill Gates, Google Ventures, Horizons Ventures, UBS, Viking Global Investors, Temasek, Sailing Capital, and Open Philanthropy Project.

With the new FDA approval, Impossible Foods will now be able to go head to head with its chief rival, Beyond Meat. The regulatory approval will also help dispel the questions about the safety of its innovative soy leghemoglobin that have swirled since the company began its expansion across the U.S.

Last July, the company received a no-questions letter from the FDA, which confirmed that the company’s heme was safe to eat, according to a panel of food-safety experts.

The remaining obstacle for the company was whether its “heme” could be considered a color additive. That approval — the use of heme as a color additive — is what the FDA announced today.

“We’ve been engaging with the FDA for half a decade to ensure that we are completely compliant with all food-safety regulations—for the Impossible Burger and for future products and sales channels,” said Impossible Foods Chief Legal Officer Dana Wagner. “We have deep respect for the FDA as champion of US food safety, and we’ve always gone above and beyond to comply with every food-safety regulation and to provide maximum transparency about our ingredients so that our customers can have 100% confidence in our product.”

Prodly announces $3.5M seed to automate low code cloud deployments

Low-code programming is supposed to make things easier for companies, right? Low code means you can count on trained administrators instead of more expensive software engineers to handle most tasks, but as with any problem solved by technology, there are always unintended consequences. While running his former company SteelBrick, which he sold to Salesforce in 2015 for $360 million, Max Rudman identified a persistent problem with low-code deployments. He decided to fix it with automation and testing, and the idea for his latest venture, Prodly, was born.

The company announced a $3.5 million seed round today, but more important than the money is the customer momentum. In spite of being a very early-stage startup, the company already has 100 customers using the product, a testament to the fact that other people were probably experiencing the same pain point Rudman was, and that there is a clear market for his idea.

As Rudman learned with his former company, going live with the data on a platform like Salesforce is just part of the journey. If you are updating configuration and pricing information on a regular basis, that means updating all the tables associated with that information. Sure, it’s been designed to be point-and-click, but if you have changes across 48 tables, it becomes a very tedious task indeed.

The idea behind Prodly is to automate much of the configuration, provide a testing environment to be sure all of the information is correct, and finally automate deployment. For now, the company is just concentrating on configuration, but with the funding it plans to expand the product to solve the other problems as well.

Rudman is careful to point out that his company’s solution is not built strictly for the Salesforce platform. The startup is taking aim at Salesforce admins for its first go-round, but he sees the same problem with other cloud services that make heavy use of trained administrators to make changes.

“The plan is to start with Salesforce, but this problem actually exists on most cloud platforms — ServiceNow, Workday — none of them have the tools we have focused on for admins, and making the admins more productive and building the tooling that they need to efficiently manage a complex application,” Rudman told TechCrunch.

Customers include Nutanix, Johnson & Johnson, Splunk, Tableau and Verizon (which owns this publication). The $3.5 million round was led by Shasta Ventures with participation from Norwest Venture Partners.

Google’s Titan security keys come to Japan, Canada, France and the UK

Google today announced that its Titan Security Key kits are now available in Canada, France, Japan and the UK. Until now, these keys, which come in a kit with a Bluetooth key and a standard USB-A dongle, were only available in the U.S.

The keys provide an extra layer of security on top of your regular login credentials, acting as a second authentication factor to keep your account safe and replacing more low-tech two-factor systems like authenticator apps or SMS messages. With those methods, you still have to type the code into a form, after all. That’s all well and good until you end up on a well-designed phishing page: somebody could easily intercept your code and quickly reuse it to breach your account. (Getting a second factor over SMS isn’t exactly a great idea to begin with, but that’s a different story.)

Authentication keys use a number of cryptographic techniques to ensure that you are on a legitimate site and aren’t being phished. All of this, of course, only works on sites that support hardware security keys, though that number continues to grow.
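As a rough sketch of that origin-binding idea: the snippet below is a simplified stand-in of my own, using a shared-secret HMAC rather than the public-key signatures real FIDO2/WebAuthn keys use, to show why a challenge signed together with the browser-reported origin can’t be replayed from a look-alike phishing domain the way a typed code can.

```python
# Simplified illustration of origin-bound challenge signing. Real security
# keys use FIDO2/WebAuthn public-key signatures, not a shared HMAC secret;
# this only shows the shape of the phishing-resistance argument.
import hashlib
import hmac
import os

SECRET = os.urandom(32)  # stands in for the key's per-site credential

def key_sign(origin_seen_by_browser, challenge):
    # The browser, not the user, supplies the origin, so a look-alike
    # phishing domain produces a different signature.
    msg = origin_seen_by_browser.encode() + challenge
    return hmac.new(SECRET, msg, hashlib.sha256).digest()

def server_verify(expected_origin, challenge, signature):
    expected = hmac.new(SECRET, expected_origin.encode() + challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = os.urandom(16)
good = key_sign("https://example.com", challenge)
phished = key_sign("https://examp1e.com", challenge)  # look-alike domain
assert server_verify("https://example.com", challenge, good)
assert not server_verify("https://example.com", challenge, phished)
```

A one-time code typed by the user carries no notion of where it was typed; a signature over the origin plus a fresh challenge is useless to the phishing site that collected it.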

The launch of Google’s Titan keys came as a bit of a surprise, given that Google had long had a good relationship with Yubico and previously provided all of its employees with that company’s keys. The original batch of keys also featured a security bug in the Bluetooth key. That bug was hard to exploit, but nonetheless, Google offered free replacements to all Titan Key owners.

In the U.S., the Titan Key kit sells for $50. In Canada, it’ll go for $65 CAD. In France, it’ll be €55, while in the UK it’ll retail for £50 and in Japan for ¥6,000. Free delivery is included.


DeepMind touts predictive healthcare AI ‘breakthrough’ trained on heavily skewed data

DeepMind, the Google-owned UK AI research firm, has published a research letter in the journal Nature in which it discusses the performance of a deep learning model for continuously predicting the future likelihood of a patient developing a life-threatening condition called acute kidney injury (AKI). 

The company says its model is able to accurately predict that a patient will develop AKI “within a clinically actionable window” up to 48 hours in advance. 

In a blog post trumpeting the research, DeepMind couches it as a breakthrough — saying the paper demonstrates artificial intelligence can predict “one of the leading causes of avoidable patient harm” up to two days before it happens.

“This is our team’s biggest healthcare research breakthrough to date,” it adds, “demonstrating the ability to not only spot deterioration more effectively, but actually predict it before it happens.”

Even a surface read of the paper raises some major caveats, though.

Not least that the data used to train the model skews overwhelmingly male: 93.6% of patients. This is because DeepMind’s AI was trained using patient data provided by the US Department of Veterans Affairs (VA).

The research paper states that females comprised just 6.38% of patients in the training dataset. “Model performance was lower for this demographic,” it notes, without saying how much lower.  

A summary of dataset statistics also included in the paper indicates that 18.9% of patients were black, although there is no breakout for the proportion of black women in the training dataset. (Logic suggests it’s likely to be less than 6.38%.) No other ethnicities are broken out.

Asked about the model’s performance capabilities across genders and different ethnicities, a DeepMind spokeswoman told us: “In women, it predicted 44.8% of all AKI early, in men 56%, for those patients where gender was known. The model performance was higher on African American patients — 60.4% of AKIs detected early compared to 54.1% for all other ethnicities in aggregate.”

“This research is just the first step,” she confirmed. “For the model to be applicable to a general population, future research is needed, using a more representative sample of the general population in the data that the model is derived from.

“The data set is representative of the VA population, and we acknowledge that this sample is not representative of the US population.  As with all deep learning models it would need further, representative data from other sources before being used more widely.

“Our next step would be to work closely with [the VA] to safely validate the model through retrospective and prospective observational studies, before hopefully exploring how we might conduct a prospective interventional study to understand how the prediction might impact care outcomes in a clinical setting.”

“To do this kind of work, we need the right kind of data,” she added. “The VA uses the same EHR [electronic health records] system (widely recognized as one of the most comprehensive EHRs) in all its hospitals and sites, which means the dataset is also very comprehensive, clean, and well-structured.”

So what DeepMind’s ‘breakthrough’ research paper neatly underlines is the reflective relationship between AI outputs and training inputs.

In a healthcare setting, where instructive outputs could be the difference between life and death, it’s not the technology that’s king; it’s access to representative datasets that’s key — that’s where the real value lies.

This suggests there’s huge opportunity for countries with taxpayer-funded public healthcare systems to structure and unlock the value contained in medical data they hold on their populations to develop their own publicly owned healthcare AIs.

Indeed, that was one of the recommendations of a 2017 industrial strategy review of the UK’s life sciences sector.

Oxford University’s Sir John Bell, who led the review, summed it up in comments to the Guardian newspaper, when he said: “Most of the value is the data. The worst thing we could do is give it away for free.”

Streams app evaluation

DeepMind has also been working with healthcare data in the UK.

Reducing the time it takes for clinicians to identify when a patient develops AKI has been the focus of an app development project it’s been involved with since 2015 — co-developing an alert and clinical task management app with doctors working for the country’s National Health Service (NHS).

That app, called Streams, which makes use of an NHS algorithm for detecting AKI, has been deployed in several NHS hospitals. And, also today, DeepMind and its NHS trust app development partner are releasing an evaluation of Streams’ performance, led by University College London.

The results of the evaluation have been published in two papers, in Nature Digital Medicine and the Journal of Medical Internet Research.

In its blog DeepMind claims the evaluations show the app “improved the quality of care for patients by speeding up detection and preventing missed cases”, further claiming clinicians “were able to respond to urgent AKI cases in 14 minutes or less” — and suggesting that using existing systems “might otherwise have taken many hours”.

It also claims a reduction in the cost of care to the NHS — from £11,772 to £9,761 for a hospital admission for a patient with AKI.

Though it’s worth emphasizing that under its current contracts with NHS trusts DeepMind provides the Streams service for free. So any cost reduction claims also come with some major caveats.

Simply put: We don’t know the future costs of data-driven, digitally delivered healthcare services — because the business models haven’t been defined yet. (Although DeepMind has previously suggested pricing could be based on clinical outcomes.)

“According to the evaluation, the app has improved the experience of clinicians responsible for treating AKI, saving them time which would previously have been spent trawling through paper, pager alerts and multiple desktop systems,” DeepMind also writes now of Streams.

However, again, the discussion in the evaluation papers contains rather more caveats than DeepMind’s PR does — flagging a long list of counter-considerations, such as training costs and the risk of information overload (and over-alerting) making it more difficult to triage and manage care needs, as well as concluding that more studies are needed to determine the wider clinical impacts of the app’s use.

Here’s the conclusion to one of the papers, entitled A Qualitative Evaluation of User Experiences of a Digitally Enabled Care Pathway in Secondary Care:

Digital technologies allow early detection of adverse events and of patients at risk of deterioration, with the potential to improve outcomes. They may also increase the efficiency of health care professionals’ working practices. However, when planning and implementing digital information innovations in health care, the following factors should also be considered: the provision of clinical training to effectively manage early detection, resources to cope with additional workload, support to manage perceived information overload, and the optimization of algorithms to minimize unnecessary alerts.

A second paper, looking at Streams’ impact on clinical outcomes and associated healthcare costs, concludes that “digitally enabled clinical intervention to detect and treat AKI in hospitalized patients reduced health care costs and possibly reduced cardiac arrest rates”.

“Its impact on other clinical outcomes and identification of the active components of the pathway requires clarification through evaluation across multiple sites,” it adds.

To be clear, the current Streams app for managing AKI alerts does not include AI as a predictive tool. The evaluations being published today are of clinicians using the app as a vehicle for task management and push notification-style alerts powered by an NHS algorithm.
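For a sense of how different that is from a predictive model: the NHS detection algorithm is essentially a rule-based check on serum creatinine blood-test results, flagging an alert once kidney function has already measurably declined. A much-simplified sketch of its ratio-and-threshold staging logic (the real NHS England specification also defines how the reference creatinine is selected from a patient’s prior results, and handles absolute rises and edge cases this sketch omits) might look like:

```python
def aki_stage(current_creatinine: float, reference_creatinine: float) -> int:
    """Return an AKI alert stage (0 = no alert) from a serum creatinine ratio.

    Simplified illustration only: the NHS England algorithm stages an alert
    by comparing a patient's current creatinine result against a reference
    value derived from their previous results.
    """
    ratio = current_creatinine / reference_creatinine
    if ratio >= 3.0:
        return 3  # severe rise in creatinine
    if ratio >= 2.0:
        return 2  # moderate rise
    if ratio >= 1.5:
        return 1  # early-stage alert
    return 0      # no alert fired

# A tripling of creatinine (e.g. 180 vs a baseline of 60 µmol/L)
# triggers the highest-stage alert.
print(aki_stage(180.0, 60.0))
```

The point DeepMind’s research makes is that by the time any of these thresholds fires, the injury has already happened — which is why it wants to replace this reactive rule with a model that predicts the rise up to 48 hours ahead.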

But the Streams app is a vehicle that DeepMind and its parent company Google want to use to drive AI-powered diagnosis and prediction onto hospital wards.

Hence DeepMind also working with US datasets to try to develop a predictive AI model for AKI. (It backed away from an early attempt to use Streams patient data to train AI, after realizing it would need to gain additional clearances from UK regulators.)

Every doctor now carries a smartphone. So an app is clearly the path of least resistance for transforming a service that’s been run on paper and pagers for longer than Google has existed.

The wider intent behind DeepMind’s app collaboration with London’s Royal Free NHS Trust was stated early on — to build “powerful general-purpose learning algorithms”, an ambition expressed in a Memorandum of Understanding between the pair that has since been cancelled following a major data governance scandal.

The background to the scandal — which we covered extensively in 2016 and 2017 — is that the medical records of around 1.6 million Royal Free NHS Trust patients were quietly passed to DeepMind during the development phase of Streams. Without, as it subsequently turned out, a valid legal basis for the data-sharing.

Patients had not been asked for their consent to their sensitive medical data being shared with the Google-owned company. The regulator concluded they would not have had a reasonable expectation of their medical data ending up there.

The trust was ordered to audit the project — though not the original data-sharing arrangement that had caused the controversy in the first place. It was not ordered to remove DeepMind’s access to the data.

Nor were NHS patients whose data passed through Streams during the app evaluation phase asked for their consent to participate in the UCL/DeepMind/Royal Free study; a note on ‘ethical approval’ in the evaluation papers says UCL judged it fell under the remit of a service evaluation (rather than research) — hence “no participant consent was required”.

It’s an unfortunate echo of the foundational consent failure attached to Streams, to say the very least.

Despite all this, the Royal Free and DeepMind have continued to press on with their data-sharing app collaboration. Indeed, DeepMind is pressing on the accelerator — with its push to go beyond the NHS’ AKI algorithm.

Commenting in a statement included in DeepMind’s PR, Dr Chris Streather, Royal Free London’s chief medical officer and deputy chief executive, enthuses: “The findings of the Streams evaluation are incredibly encouraging and we are delighted that our partnership with DeepMind Health has improved the outcomes for patients.

“Digital technology is the way forward for the NHS. In the same way as we can receive transport and weather alerts on our mobile devices, doctors and nurses should benefit from tools which put potentially life-saving information directly into their hands.

“In the coming months, we will be introducing the app to clinicians at Barnet Hospital as well as exploring the potential to develop solutions for other life-threatening conditions like sepsis.”

Scramble for NHS data

The next phase of Google-DeepMind’s plan for Streams may hit more of a blocker, though.

Last year DeepMind announced the app would be handed off to its parent — to form part of Google’s own digital health push. That contradicted DeepMind’s own claims during the unfolding scandal, when it had said Google would not have access to people’s medical records.

More like: ‘No access until Google owns all the data and IP’, then…

As we said at the time, it was quite the trust shock.

Since then the Streams app hand-off from DeepMind to Google appears to have been on pause.

Last year the Royal Free Trust said it could not happen without its approval.

Asked now whether it will be signing new contracts for Streams with Google a spokesperson told us: “At present, the Royal Free London’s contract with DeepMind remains unchanged. As with all contractual agreements with suppliers, any changes or future contracts will follow information governance and data protection regulations. The trust will continue to be the data controller at all times, which means it is responsible for all patient information.”

The trust declined to answer additional questions — including whether it intends to deploy a version of Streams that includes a predictive AI model at NHS hospitals; and whether or not patients will be given an opt-out for their data being shared with Google.

It’s not clear what its plans are. Although DeepMind’s and Google’s plan is clearly for Streams to be the conduit for predictive AIs to be pushed onto NHS wards. Its blog aggressively pushes the case for adding AI to Streams.

To the point of talking down the latter in order to hype the former. The DeepMind Health sales pitch is evolving from ‘you need this app’ to ‘you need this AI’… with the follow-on push to ‘give us your data’.

“Critically, these early findings from the Royal Free suggest that, in order to improve patient outcomes even further, clinicians need to be able to intervene before AKI can be detected by the current NHS algorithm — which is why our research on AKI is so promising,” it writes. “These results comprise the building blocks for our long-term vision of preventative healthcare, helping doctors to intervene in a proactive, rather than reactive, manner.

“Streams doesn’t use artificial intelligence at the moment, but the team now intends to find ways to safely integrate predictive AI models into Streams in order to provide clinicians with intelligent insights into patient deterioration.”

In its blog DeepMind also makes a point of reiterating that Streams will be folded into Google — writing: “As we announced in November 2018, the Streams team, and colleagues working on translational research in healthcare, will be joining Google in order to make a positive impact on a global scale.”

“The combined experience, infrastructure and expertise of DeepMind Health teams alongside Google’s will help us continue to develop mobile tools that can support more clinicians, address critical patient safety issues and could, we hope, save thousands of lives globally,” it adds, ending with its customary ‘hope’ that its technology will save lives — yet still without any hard data to prove all the big claims it makes for AI-powered predictive healthcare’s potential. 

As we’ve said before, for its predictive AI to deliver anything of value Google really needs access to data the NHS holds. Hence the big PR push. And the consent-overriding scramble for NHS data.

Responding to DeepMind’s news, Sam Smith, coordinator at health data privacy advocacy group medConfidential, told us: “The history of opportunists using doctors to take advantage of patients to further their own interests is as long as it is sordid. Some sagas drag on for years. Google has used their international reach to do with data on the US military what they said they’d do in the UK, before it became clear they had misled UK regulators and broken UK law.”

In a blog post the group added: “In recent weeks, Google & YouTube, Facebook & Instagram, and other tech companies have come under increasing pressure to accept they have a duty of care to their users. Can Google DeepMind say how its project with the Royal Free respects the Duty of Confidence that every NHS body has to its patients? How does the VA patient data they did use correspond to the characteristics of patients the RFH sees?

“Google DeepMind received the RFH data — up to 10 years’ of hospital treatments — of 1.6 million patients. We expect its press release to confirm how many of those 1.6 million people actually had their data displayed in the app, and whether they were used for testing alongside the US military data.”
