SoftBank’s Deepcore and accelerator Zeroth team up to hunt early stage AI opportunities

Two early stage AI programs are joining forces because, even in the world of artificial intelligence, two heads are better than one.

Hong Kong-based accelerator Zeroth, which recently grabbed a majority investment from Animoca Brands, and Deepcore, a Japanese incubator and fund that is part of the SoftBank group, are pairing up to pool their resources on deal sourcing and other collaboration around artificial intelligence.

The two seem complementary: Deepcore is focused on starting new ventures and investing in AI companies more generally, while Zeroth operates Asia’s first accelerator program targeted at AI and machine learning startups. Zeroth recently bagged $3 million through a deal that sees Animoca Brands take a 67 percent stake in its operating business and provide a check for its investment arm.

SoftBank launched Deepcore earlier this year to give the organization a foothold in early AI projects. The company operates a co-working/incubation/R&D facility — Kernel Hongo — in addition to an investment arm called Deepcore Tokyo.

Zeroth was founded two years ago and has graduated 33 companies from three batches to date, taking an average of six percent equity. Some of those graduates have gone on to raise from other investors, including Fano Labs (formerly Accosys), which took money from Horizons Ventures, the VC firm founded by Hong Kong’s richest man Li Ka-shing, and Japan’s Laboratik. Zeroth has also made eight investments in blockchain startups.

“It’s very exciting to see the Zeroth ecosystem grow,” founder Tak Lo told TechCrunch in a statement. “Ultimately, this ecosystem is about building more and more opportunities for our founders to build great companies.”

China’s hottest news app Jinri Toutiao announces new CEO

You may not have heard of ByteDance, but you probably know its red-hot video app TikTok, which gobbled up Musical.ly in August. The Beijing-based company also runs a popular news aggregator called Jinri Toutiao, which means “today’s headlines” in Chinese, and the app has just gotten a new CEO.

At a company event on Saturday, Chen Lin, an early ByteDance employee, made his first appearance as Toutiao’s new CEO. That means Toutiao’s creator Zhang Yiming has handed the helm to Chen, who previously headed product management for the news app.

Zhang’s not going anywhere, though. A company spokesperson told TechCrunch that he remains the CEO of ByteDance, which operates a slew of media apps besides TikTok and Toutiao to lock horns with China’s tech giants Baidu, Alibaba, and Tencent.

The story of ByteDance started when Zhang created Toutiao in 2012. The news app collects articles and videos from third-party providers and uses AI algorithms to personalize content for users. Toutiao took off quickly and soon went on to incubate new media products, including a Quora-like Q&A platform and TikTok, known as Douyin in China.

The handover may signal a need for Zhang to step back from daily operations in his brainchild and oversee strategies for ByteDance, which has swollen into the world’s highest-valued startup. The company spokesperson did not provide further details on the reshuffle.

Toutiao itself is installed on over 240 million monthly unique devices, according to data analytics firm iResearch. TikTok and Douyin collectively command 500 million monthly active users around the world, while Musical.ly has a userbase of 100 million, the company previously announced.

Toutiao’s success has prompted Tencent, which is best known for creating WeChat and controlling a large slice of China’s gaming market, to build its own AI-powered news app. Toutiao’s fledgling advertising business has also stepped on the toes of Baidu, which makes the bulk of its income from search ads. More recently, Toutiao muscled in on Alibaba’s territory with an ecommerce feature.

At the Saturday event, Chen also shared updates that hint at Toutiao’s growing ambition. For one, the news goliath is working to help content providers cash in through a suite of tools, for instance, ecommerce sales and virtual gifts from livestreaming. The move is poised to help Toutiao retain quality creators as the race to grab digital eyeball time intensifies in China.

Toutiao also recently launched its first wave of “mini programs,” or stripped-down versions of native apps that operate inside super apps like Toutiao. Tencent has proven the model to be a big traffic driver, with WeChat mini programs having crossed 200 million daily users.
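Conceptually, a mini program is just a lightweight bundle of pages plus metadata that the host “super app” fetches and runs inside its own sandboxed runtime. The Python sketch below is a generic illustration of that idea only; the class and field names are hypothetical and do not correspond to Toutiao’s or WeChat’s actual frameworks.

```python
# Hypothetical illustration of the "mini program inside a super app" idea.
# None of these names correspond to Toutiao's or WeChat's real APIs.
from dataclasses import dataclass, field


@dataclass
class MiniProgramManifest:
    """Metadata the host app needs before it downloads any code."""
    app_id: str
    name: str
    entry_page: str
    permissions: list[str] = field(default_factory=list)  # e.g. ["payments", "location"]


class SuperAppHost:
    """The host ("super app") keeps a registry and launches bundles in a sandbox."""

    def __init__(self) -> None:
        self._registry: dict[str, MiniProgramManifest] = {}

    def register(self, manifest: MiniProgramManifest) -> None:
        self._registry[manifest.app_id] = manifest

    def launch(self, app_id: str) -> str:
        manifest = self._registry[app_id]
        # A real host would fetch the bundle, enforce the declared permissions,
        # and render the pages inside its own webview/runtime.
        return f"Rendering {manifest.entry_page} of '{manifest.name}' in sandbox"


host = SuperAppHost()
host.register(MiniProgramManifest("shop123", "Flash Sale", "pages/index", ["payments"]))
print(host.launch("shop123"))
```

The appeal for the host is obvious from the sketch: third parties ship features without users ever leaving the super app.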

Lastly, Toutiao said it will take more proactive measures to monitor what users consume. In recent months, the news app has run afoul of media regulators, who have slammed it for hosting illegal and “inappropriate” content. Douyin has faced similar criticisms. While ByteDance prides itself on automated distribution, the company has demonstrated a willingness to abide by government rules by hiring thousands of human censors and using AI to filter content.

How cities can fix tourism hell

A steep and rapid rise in tourism has left behind a wake of economic and environmental damage in cities around the globe. In response, governments have rolled out policies that attempt to limit the number of visitors who come in. We’ve decided to spare you from any more Amazon HQ2 talk and instead focus on why cities should shy away from reactive policies and should instead utilize their growing set of technological capabilities to change how they manage tourists within city limits.

Consider this an ongoing discussion about Urban Tech, its intersection with regulation, issues of public service, and other complexities that people have full PhDs on. I’m just a bitter, born-and-bred New Yorker trying to figure out why I’ve been stuck in between subway stops for the last 15 minutes, so please reach out with your take on any of these thoughts: @Arman.Tabatabai@techcrunch.com.
  

The struggle for cities to manage “Overtourism”

Well – it didn’t take long for the phrase “overtourism” to get overused. The popular buzzword describes the influx of tourists who flood a location and damage the quality of life for full-time residents. The term has become such a common topic of debate in recent months that it was even featured this past week on Oxford Dictionaries’ annual “Words of the Year” list.

But the expression’s frequent appearance in headlines highlights the growing number of cities plagued by the externalities from rising tourism.

In the last decade, travel has become easier and more accessible than ever. Low-cost ticketing services and apartment-rental companies have brought down the costs of transportation and lodging; the ubiquity of social media has ticked up tourism marketing efforts and consumer demand for travel; economic globalization has increased the frequency of business travel; and rising incomes in emerging markets have opened up travel to many who previously couldn’t afford it.

Now, unsurprisingly, tourism has spiked dramatically, with the UN’s World Tourism Organization (UNWTO) reporting that tourist arrivals grew an estimated 7% in 2017 – materially above the roughly 4% seen consistently since 2010. The sudden and rapid increase of visitors has left many cities and residents overwhelmed, dealing with issues like overcrowding, pollution, and rising costs of goods and housing.

The problems cities face with rising tourism are only set to intensify. And while it’s hard for me to imagine when walking shoulder-to-shoulder with strangers on tight New York streets, the number of tourists in major cities like these can very possibly double over the next 10 to 15 years.

China and other emerging markets have already seen significant growth in the middle class and have a long runway ahead. According to the Organization for Economic Co-operation and Development (OECD), the global middle class is expected to rise from the 1.8 billion observed in 2009 to 3.2 billion by 2020 and 4.9 billion by 2030. The new money brings with it a new wave of travelers looking to catch a selfie with the Eiffel Tower, with the UNWTO forecasting international tourist arrivals to increase from 1.3 billion to 1.8 billion by 2030.

With a growing sense of urgency around managing their guests, more and more cities have been implementing policies focused on limiting the number of tourists that visit altogether, whether through hard visitor limits, tourist taxes or other measures.

But as the UNWTO points out in its report on overtourism, the negative effects of swelling tourism are not solely tied to the number of visitors in a city; they are also largely driven by seasonality, tourist behavior, the behavior of the resident population, and the functionality of city infrastructure. Cities with relatively few tourists, for example, have experienced issues similar to those seen in cities that host millions.

While many cities have focused on reactive policies meant to quell tourism, they should instead focus on technology-driven solutions that can help manage tourist behavior and create structural changes to city tourism infrastructure, all while allowing cities to continue capturing the significant revenue stream that tourism provides.

Smart city tech enabling more “tourist-ready” cities


Yes, cities face the headwind of a growing tourist population, but city policymakers also benefit from the tailwind of having more technological capabilities than their predecessors. With the rise of smart city and Internet of Things (IoT) initiatives, many cities are equipped with tools such as connected infrastructure, lidar sensors, high-quality broadband, and troves of data that make it easier to manage congestion, infrastructure, and other pressure points.

On the congestion side, we have already seen companies using geo-tracking and other smart city technologies to manage congestion around event venues, roads, and stores. Cities can apply the same strategies to manage the flow of tourists and residents.

And while you can’t necessarily prevent people from visiting the Louvre or the Colosseum, cities are using a variety of methods to incentivize the use of less congested spaces or to spread out the times at which people flock to highly trafficked locations, using tools such as real-time congestion notifications, data-driven ticketing schedules for museums and landmarks, and digitally guided tours along less congested routes.

Companies and municipalities in cities like London and Antwerp are already working on using tourist movement tracking to manage crowds and help notify and guide tourists to certain locations at the most efficient times. Other cities have developed augmented reality tours that can guide tourists in real-time to less congested spaces by dynamically adjusting their routes.
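As a toy illustration of the logic behind that kind of nudge, here is a minimal Python sketch that suggests the least-crowded alternative once a visitor’s chosen site crosses a capacity threshold. The site names, capacities and threshold are invented for illustration and are not drawn from any city’s real system.

```python
# Toy sketch of a real-time congestion nudge: if a visitor's chosen site is
# over capacity, suggest the least-crowded comparable site instead.
# Site names, capacities and counts are invented for illustration only.

SITES = {
    "Louvre":        {"capacity": 30_000, "current": 28_500},
    "Musee d'Orsay": {"capacity": 12_000, "current": 6_200},
    "Pompidou":      {"capacity": 10_000, "current": 4_100},
}

CONGESTION_THRESHOLD = 0.85  # notify once a site is 85% full


def occupancy(site: str) -> float:
    s = SITES[site]
    return s["current"] / s["capacity"]


def suggest_alternative(chosen: str) -> str:
    if occupancy(chosen) < CONGESTION_THRESHOLD:
        return f"{chosen} is fine right now ({occupancy(chosen):.0%} full)."
    # Redirect to the least-occupied alternative site.
    alt = min((s for s in SITES if s != chosen), key=occupancy)
    return (f"{chosen} is {occupancy(chosen):.0%} full -- "
            f"try {alt} instead ({occupancy(alt):.0%} full).")


print(suggest_alternative("Louvre"))
```

Real deployments would feed live sensor or ticketing data into the same decision, but the principle of steering demand rather than capping it is the same.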

A number of startups are also working with cities to use collected movement data to help reshape infrastructure to better fit the long-term needs and changing demographics of their occupants. Companies like Stae or Calthorpe Analytics use analytics on movement, permitting, business trends and more to help cities implement more effective zoning and land-use plans. City planners can use the same technology to design street layouts that increase usable sidewalk space and to better allocate zoning for hotels, retail and other tourist-friendly attractions.

Focusing counter-overtourism efforts on smart city technologies can help adjust the behavior and movement of travelers in a city through a number of avenues, in a way tourist caps or tourist taxes do not.

And at the end of the day, tourism is one of the largest sources of city income, meaning it also plays a vital role in determining the budgets cities have to plow back into transit, roads, digital infrastructure, the energy grid, and other pain points that plague residents and travelers alike year-round. And by disallowing or disincentivizing tourism, cities can lose valuable capital for infrastructure, which can subsequently exacerbate congestion problems in the long-run.

Some cities have justified tourist taxes by saying the revenue stream would be invested in fixing the issues overtourism has caused. But the daily or upon-entry tourist taxes we’ve seen so far haven’t come close to offsetting the lost revenue from disincentivized tourists, who at the start of 2017 spent all-in nearly $700 per day in the US on transportation, souvenirs and other expenses, according to the U.S. National Travel and Tourism Office.

In 2017, international tourism alone generated $1.6 trillion in earnings, and in 2016, travel and tourism accounted for roughly 1 in 10 jobs in the global economy, according to the World Travel and Tourism Council. And the benefits of travel are not only economic, with cross-border tourism promoting transfers of culture, knowledge and experience.

But to be clear, I don’t mean to say smart city technology initiatives alone are going to solve overtourism. The significant wave of growth in the number of global travelers is a serious challenge and many of the issues that result from spiking tourism, like housing affordability, are incredibly complex and come down to more than just data. However, I do believe cities should be focused less on tourist reduction and more on solutions that enable tourist management.

Utilizing and allocating more resources to smart city technologies can not only more effectively and structurally limit the negative impacts of overtourism, but also let cities benefit from a significant, high-growth tourism revenue stream. Cities can then create a virtuous cycle of reinvestment, plowing that revenue back into their infrastructure to better manage visitor growth, resident growth, and quality of life over the long term. Cities can have their cake and eat it too.

And lastly, some reading while in transit:

UN warns over human rights impact of a ‘digital welfare state’

The UN special rapporteur on extreme poverty and human rights has raised concerns about the UK’s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale, warning in a statement today that the impact of a digital welfare state on vulnerable people will be “immense”.

He has also called for stronger laws and enforcement of a rights-based legal framework to ensure that the use of technologies like AI for public service provision does not end up harming people.

“There are few places in government where these developments are more tangible than in the benefit system,” writes professor Philip Alston. “We are witnessing the gradual disappearance of the postwar British welfare state behind a webpage and an algorithm. In its place, a digital welfare state is emerging. The impact on the human rights of the most vulnerable in the UK will be immense.”

It’s a timely intervention, with UK ministers also now pushing to accelerate the use of digital technologies to transform the country’s free-at-the-point-of-use healthcare system.

Alston’s statement also warns that the push towards automating public service delivery — including through increasing use of AI technologies — is worryingly opaque.

“A major issue with the development of new technologies by the UK government is a lack of transparency. The existence, purpose and basic functioning of these automated government systems remains a mystery in many cases, fuelling misconceptions and anxiety about them,” he writes, adding: “Evidence shows that the human rights of the poorest and most vulnerable are especially at risk in such contexts.”

So, much like tech giants in their unseemly disruptive rush, UK government departments are presenting shiny new systems as sealed boxes — and that’s also a blocker to accountability.

“Central and local government departments typically claim that revealing more information on automation projects would prejudice its commercial interests or those of the IT consultancies it contracts to, would breach intellectual property protections, or would allow individuals to ‘game the system’,” writes Alston. “But it is clear that more public knowledge about the development and operation of automated systems is necessary.”

Radical social re-engineering

He argues that the “rubric of austerity” framing of domestic policies put in place since 2010 is misleading — saying the government’s intent, using the trigger of the global financial crisis, has rather been to transform society via a digital takeover of state service provision.

Or, as he puts it: “In the area of poverty-related policy, the evidence points to the conclusion that the driving force has not been economic but rather a commitment to achieving radical social re-engineering.”

Alston’s assessment follows a two-week visit to the UK during which he spoke to people across British society, touring public service and community institutions such as job centers and food banks; meeting with ministers and officials across all levels of government, as well as opposition politicians; and talking to representatives from civil society institutions, including frontline workers.

His statement discusses in detail the much criticized overhaul of the UK’s benefits system, in which the government has sought to combine multiple benefits into a single so-called Universal Credit, zooming in on the “highly controversial” use of “digital-by-default” service provision here — and wondering why “some of the most vulnerable and those with poor digital literacy had to go first in what amounts to a nationwide digital experiment”.

“Universal Credit has built a digital barrier that effectively obstructs many individuals’ access to their entitlements,” he warns, pointing to big gaps in digital skills and literacy for those on low incomes and also detailing how civil society has been forced into a lifeline support role — despite its own austerity-enforced budget constraints.

“The reality is that digital assistance has been outsourced to public libraries and civil society organizations,” he writes, suggesting that for the most vulnerable in society, a shiny digital portal is operating more like a firewall.

“Public libraries are on the frontline of helping the digitally excluded and digitally illiterate who wish to claim their right to Universal Credit,” he notes. “While library budgets have been severely cut across the country, they still have to deal with an influx of Universal Credit claimants who arrive at the library, often in a panic, to get help claiming benefits online.”

Alston also suggests that digital-by-default is — in practice — “much closer to digital only”, with alternative contact routes, such as a telephone helpline, being actively discouraged by government — leading to “long waiting times” and frustrating interactions with “often poorly trained” staff.

Human cost of automated errors

His assessment highlights how automation can deliver errors at scale too — saying he was told by various experts and civil society organizations of problems with the Real Time Information (RTI) system that underpins Universal Credit.

The RTI system is supposed to take data on earnings submitted by employers to one government department (HMRC) and share it with the DWP to automatically calculate monthly benefits. But if incorrect (or late) earnings data is passed along, there’s a knock-on impact on the payout — with Alston saying the government has chosen to give the automated system the “benefit of the doubt” over and above the claimant.

Yet here a ‘computer says no’ response can literally mean a vulnerable person not having enough money to eat or properly heat their house that month.

“According to DWP, a team of 50 civil servants work full-time on dealing with the 2% of the millions of monthly transactions that are incorrect,” he writes. “Because the default position of DWP is to give the automated system the benefit of the doubt, claimants often have to wait for weeks to get paid the proper amount, even when they have written proof that the system was wrong. An old-fashioned pay slip is deemed irrelevant when the information on the computer is different.”
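To make the failure mode concrete, here is a deliberately simplified Python sketch of how an award calculated from RTI-reported earnings can swing wildly when an employer files late and two pay packets land in the same assessment period. The allowance and taper figures are illustrative assumptions, not the DWP’s actual Universal Credit parameters.

```python
# Simplified sketch of how late RTI earnings data can distort a monthly award.
# The figures and 63% taper below are illustrative assumptions only,
# not the DWP's actual Universal Credit parameters.

STANDARD_ALLOWANCE = 700.0  # assumed monthly entitlement before earnings
TAPER_RATE = 0.63           # assumed: award reduced 63p per pound earned


def monthly_award(reported_earnings: float) -> float:
    """Award for one assessment period, based on whatever RTI reported."""
    return max(0.0, STANDARD_ALLOWANCE - TAPER_RATE * reported_earnings)


# The claimant actually earns 900 every month.
actual = [900.0, 900.0]

# But the employer files month 1 late, so RTI reports 0, then 1,800.
reported = [0.0, 1_800.0]

for month, (real, rti) in enumerate(zip(actual, reported), start=1):
    print(f"Month {month}: earned {real:,.0f}, RTI saw {rti:,.0f}, "
          f"award {monthly_award(rti):,.2f}")

# Month 1 is overpaid and month 2 drops to zero, even though the claimant's
# real income never changed. The system's output, not the payslip, decides.
```

In a rules-based system that trusts the feed over the person, the error propagates automatically; correcting it is where the 50 civil servants, and the weeks of waiting, come in.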

Another automated element of the benefits system he discusses segments claimants into low, medium and high risk categories — in contexts such as ‘risk-based verification’.

This is also problematic as Alston points out that people flagged as ‘higher risk’ are being subject to “more intense scrutiny and investigation, often without even being aware of this fact”.

“The presumption of innocence is turned on its head when everyone applying for a benefit is screened for potential wrongdoing in a system of total surveillance,” he warns. “And in the absence of transparency about the existence and workings of automated systems, the rights to contest an adverse decision, and to seek a meaningful remedy, are illusory.”

Summing up his concerns he argues that for automation to have positive political — and democratic — outcomes it must be accompanied by adequate levels of transparency so that systems can be properly assessed.

Rule of law, not ‘ethics-washing’

“There is nothing inherent in Artificial Intelligence and other technologies that enable automation that threatens human rights and the rule of law. The reality is that governments simply seek to operationalize their political preferences through technology; the outcomes may be good or bad. But without more transparency about the development and use of automated systems, it is impossible to make such an assessment. And by excluding citizens from decision-making in this area we may set the stage for a future based on an artificial democracy,” he writes.

“Transparency about the existence, purpose, and use of new technologies in government and participation of the public in these debates will go a long way toward demystifying technology and clarifying distributive impacts. New technologies certainly have great potential to do good. But more knowledge may also lead to more realism about the limits of technology. A machine learning system may be able to beat a human at chess, but it may be less adept at solving complicated social ills such as poverty.”

His statement also raises concerns about new institutions that are currently being set up by the UK government in the area of big data and AI, which are intended to guide and steer developments — but which he notes “focus heavily on ethics”.

“While their establishment is certainly a positive development, we should not lose sight of the limits of an ethics frame,” he warns. “Ethical concepts such as fairness are without agreed upon definitions, unlike human rights which are law. Government use of automation, with its potential to severely restrict the rights of individuals, needs to be bound by the rule of law and not just an ethical code.”

Calling for existing laws to be strengthened to properly regulate the use of digital technologies in the public sector, Alston also raises an additional worry — warning over a rights carve out the government baked into updated privacy laws for public sector data (a concern we flagged at the start of this year).

On this he notes: “While the EU General Data Protection Regulation includes promising provisions related to automated decision-making and Data Protection Impact Assessments, it is worrying that the Data Protection Act 2018 creates a quite significant loophole to the GDPR for government data use and sharing in the context of the Framework for Data Processing by Government.”

BlackBerry is buying Cylance for $1.4 billion to continue its push into cybersecurity

BlackBerry was best known for keyboard-toting smartphones, but their demise in recent years has seen the Canadian firm pivot towards enterprise services and, in particular, cybersecurity. That strategy takes a big step forward today after BlackBerry announced the acquisition of AI-based cybersecurity company Cylance for a cool $1.4 billion.

Business Insider reported that a deal was close last week, and that has proven true with BlackBerry paying the full amount in cash up front. The deal is set to close before February 2019 — the end of BlackBerry’s current financial year — and it will see Cylance operate as a separate business unit within BlackBerry’s business.

Business Insider’s report suggested Cylance was preparing to go public until BlackBerry swooped in. That suggests BlackBerry wanted Cylance pretty badly, badly enough to part with a large chunk of the $2.4 billion cash pile that it was sitting on prior to today.

Cylance was founded in 2012 by former McAfee/Intel duo Stuart McClure (CEO) and Ryan Permeh (chief scientist), and it differentiates itself by using artificial intelligence, machine learning and more to proactively analyze and detect threats for its customers, which it says include Fortune 100 organizations and governments.

The company has raised nearly $300 million to date from investors that include Blackstone, DFJ, Khosla Ventures, Dell Technologies and KKR. Cylance is headquartered in Irvine, California, with global offices in Ireland, the Netherlands and Japan.

“Cylance’s leadership in artificial intelligence and cybersecurity will immediately complement our entire portfolio, UEM and QNX in particular. We are very excited to onboard their team and leverage our newly combined expertise. We believe adding Cylance’s capabilities to our trusted advantages in privacy, secure mobility, and embedded systems will make BlackBerry Spark indispensable to realizing the Enterprise of Things,” said BlackBerry CEO John Chen in a statement.

Chen has overseen BlackBerry’s move into enterprise services since his arrival in 2013 as part of an investment by financial holdings firm Fairfax. Things got off to a rocky start, but the strategy has since borne fruit. The stock price was $6.51 when Chen joined; it closed Thursday at $8.86, down from a peak of $12.66 in January. While some of that progress has been erased this year, Chen has signed on to retain the top role at BlackBerry until at least 2023, giving him a potential 10-year tenure at the company that was once the world’s number one mobile brand.

Standard Cognition raises $40M to replace retailers’ cashiers with cameras

The Amazon Go store requires hundreds of cameras to detect who’s picking up what items. Standard Cognition needs just 27 to go after the $27 trillion market of equipping regular shops with autonomous retail technology.

Walk into one of its partners’ stores and overhead cameras identify you by shape and movement, not facial recognition. Open up its iOS or Android app and a special light pattern flashes, allowing the cameras to tie you to your account and payment method. Grab whatever you want, and just walk out; Standard Cognition will bill you. It even works without the app: shop like normal and then walk up to a kiosk screen, the cameras tell it what items you nabbed, and you can pay with cash or credit card. That means Standard Cognition stores never exclude anyone, unlike Amazon Go.
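A rough sketch of that account-linking and walk-out billing flow might look like the Python below. Every class and method name here is hypothetical; this is an illustration of the flow as described, not Standard Cognition’s actual software.

```python
# Hypothetical sketch of the app-to-camera handshake and walk-out billing flow
# described above. Names and structure are invented for illustration; this is
# not Standard Cognition's actual system.
import uuid


class Shopper:
    """A person the cameras track by shape and movement -- no face recognition."""

    def __init__(self):
        self.track_id = str(uuid.uuid4())  # anonymous camera track
        self.account_id = None             # set only if the app handshake happens
        self.basket = []


class Store:
    def link_account(self, shopper, light_pattern, accounts):
        # The app flashes a one-time light pattern; the cameras see it and tie
        # the anonymous track to a payment account.
        if light_pattern in accounts:
            shopper.account_id = accounts[light_pattern]

    def record_pickup(self, shopper, item):
        shopper.basket.append(item)

    def checkout(self, shopper):
        if shopper.account_id:
            return f"Billed account {shopper.account_id} for {shopper.basket}"
        # No app? The kiosk already knows the basket; take cash or card there.
        return f"Kiosk: please pay for {shopper.basket} by cash or card"


store, shopper = Store(), Shopper()
store.link_account(shopper, "pattern-42", {"pattern-42": "acct-981"})
store.record_pickup(shopper, "sparkling water")
print(store.checkout(shopper))
```

The key design point is that the camera track stays anonymous unless the shopper opts in via the app, with the kiosk as the cash fallback.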

“Our tagline has been ‘rehumanizing retail’” co-founder Michael Suswal tells me. “We’re removing the machines that are between people: conveyor belts, cash registers, scanners…”

The potential to help worried merchants compete with Amazon has drawn a new $40 million Series A funding round to Standard Cognition, led by Alexis Ohanian and Garry Tan’s Initialized Capital. CRV, Y Combinator, and Draper Associates joined the round, which builds on the startup’s $11 million in seed funding. Just a year old, Standard Cognition already has 40 employees, but plans to hire 70 to 80 more over the next six months so it can speed up deployment to more partners. Suswal wouldn’t reveal Standard Cognition’s valuation but said the round sold roughly the traditional percentage of the company for an A round. That’s usually about 20 to 25 percent, which on a $40 million check works out to a post-money valuation of roughly $160 million to $200 million.

Instead of some lofty tech solution that requires a whole new store to be built around it, Standard Cognition gets retailers to pay the capital expenditure of installing its small number of ceiling cameras and a computer to run them. Retailers can even alter their store layout without bringing in an engineer, and they pay a monthly SaaS fee based on their store’s size, SKUs, and product changes.

Standard Cognition’s founding team

Amazon Go uses thousands of cameras to track what you pick up

Suswal tells me “Retailers’ two biggest complaints are long lines and poor customer service.” Standard Cognition lets stores eliminate the lines and reassign cashiers to become concierges who make sure customers find the perfect products. “It’s already fun to shop, but I think it’s going to become a lot more fun in the future” Suswal predicts.

Having seven co-founders is pretty atypical for startups, but it’s helped Standard Cognition move quickly. The crew came together while all working at the SEC. They’d meet up as part of a technology research group, discussing the latest findings on computer vision and machine learning. Suswal recalls: “After about a year, we said ‘if we were going to productize this somehow, what would we do?’” They settled on retail, and narrowed it down to autonomous checkout. Then a bombshell dropped: Amazon Go, the first truly significant cashierless store, was announced.

“We initially thought ‘oh no, this is bad.’ And then we quickly came to our senses that this was the best thing that could happen” Suswal explains. Retailers would be desperate for assistance to fight off Amazon. So the squad quit their jobs and started Standard Cognition.

In September, Standard Cognition opened a 1,900-square-foot flagship test store on Market St in San Francisco, beating Amazon to the punch. Customers can stuff items in their bags, reconsider and put some back, and stroll out of the store without stopping at a cashier. Standard Cognition claims its camera system is 99 percent accurate, and is trained to identify the suspicious movements and behavior of shoplifters.

The store is part of a sudden wave of autonomous retail startups, including Zippin, which opened the first such store in SF; fellow Y Combinator startup Inokyo, which launched a bare-bones pop-up in Mountain View; and Trigo Vision, which is partnering with an Israeli grocery chain on more than 200 stores.

Now with plenty of capital and eager customers, Standard Cognition is equipping stores for its first four partners — all public companies. Three refuse to be named but include US grocery, drug store, and convenience store businesses. The fourth is Japan’s pharmacy chain Yakuodo. Standard Cognition is already mapping the stores for its cameras and will begin camera installation next month, though it will be a little while until the systems go live.

Japan is the perfect market for Standard Cognition because their aging population has produced a labor shortage. “They literally can’t find people to work in their stores” Suswal explains. Autonomous checkout could keep Japanese retailers growing. And because 70 percent of transactions in Japan are cash-based, it also forced the startup to learn how to handle payments outside of its app. That could make Standard Cognition appealing for retailers that want to embrace the future without abandoning the past.

Getting long-running retail businesses to invest in evolving may be the startup’s biggest challenge. Since they have to pay up front for the installation, they’re gambling that the system will reliably increase sales or at least decrease labor costs. But if it makes their stores too confusing, they could see an exodus of customers instead of an influx.

As for Standard Cognition’s impact on the labor class, Suswal admits that “the major chains will have some reduction . . . no one is going to get fired but fewer people will get hired.” He believes his tech could actually save some jobs too. “I was walking around NYC talking to (small chains and mom-and-pop) retailers about problems they face, and an alarming number of them told me ‘we’re closing in a year. We’re closing in 6 months.’ And it was all tied to the next minimum wage hike” Suswal tells me.

Reducing labor costs could keep those shops viable. “These stores can stay open with a reduction of labor so people are keeping their jobs, not losing them” he claims. Whether that proves true will take some time, but at least Standard Cognition’s tech could incentivize merchants to retrain their clerks for more fulfilling roles as concierges.

UK watchdog has eyes on Google-DeepMind’s health app hand-off

The shock news yesterday that Google is taking over a health app rolled out to UK hospitals over the past few years by its AI division, DeepMind, has caught the eye of the country’s data protection watchdog — which said today that it’s monitoring developments.

An ICO spokesperson told us: “An ICO investigation and an independent audit into the use of Google Deepmind’s Streams service by the Royal Free both highlighted the importance of clear and effective governance when NHS bodies use third parties to provide digital services, particularly to ensure the original purpose for processing personal data is respected.

“We expect all the measures set out in our undertaking, and in the audit, should remain in place even if the identity of the third party changes. We are continuing to monitor the situation.”

We’ve reached out to DeepMind and Google for a response.

The project is already well known to the ICO because, following a lengthy investigation, it ruled last year that the NHS Trust which partnered with DeepMind had broken UK law by passing more than 1.6 million patients’ medical records to the Google-owned company during the app’s development.

The Trust agreed to make changes to how it works with DeepMind, with the ICO saying it needed to establish “a proper legal basis” for the data-sharing, as well as share more information about how it handles patients’ privacy.

It also had to submit to an external audit — which was carried out by Linklaters. Though — as we reported in June — this only looked at the current working of the Streams app.

The auditors did not address the core issue of patient data being passed without a legal basis when the app was under construction. And the ICO didn’t sound too happy about that either.

While regulatory actions kicked off in spring 2016, the sanctions came after Streams had already been rolled out to hospital wards — starting with the Royal Free NHS Trust’s own hospitals.

DeepMind also inked additional five-year Streams deals with a handful of other Trusts before the ICO’s intervention, including Imperial College Healthcare NHS Trust and Taunton & Somerset.

Those Trusts now face being switched to Google as their third-party app provider.

Until yesterday DeepMind had maintained it operates autonomously from Google, with founder Mustafa Suleyman writing in 2016 that: “We’ve been clear from the outset that at no stage will patient data ever be linked or associated with Google accounts, products or services.”

Two years on and, in their latest Medium blog, the DeepMind co-founders write about how excited they are that the data is going to Google.

Patients might have rather more mixed feelings, given that most people have never been consulted about any of this.

The lack of a legal basis for DeepMind obtaining patient data to develop Streams in the first place remains unresolved. And Google becoming the new data processor for Streams only raises fresh questions about information governance — and trust.

Meanwhile the ICO has not yet given a final view on Streams’ continued data processing — but it’s still watching.

Google gobbling DeepMind’s health app might be the trust shock we need

DeepMind’s health app being gobbled by parent Google is both unsurprising and deeply shocking.

First thoughts should not be allowed to gloss over what is really a gut punch.

It’s unsurprising because the AI galaxy brains at DeepMind always looked like unlikely candidates for the quotidian, margins-focused business of selling and scaling software as a service. The app in question, a clinical task management and alerts app called Streams, does not involve any AI.

The algorithm it uses was developed by the UK’s own National Health Service, a branch of which DeepMind partnered with to co-develop Streams.

In a blog post announcing the hand-off yesterday, “scaling” was the precise word the DeepMind founders chose to explain passing their baby to Google. And if you want to scale apps, Google does have the well-oiled machinery to do it.

At the same time, Google has just hired Dr. David Feinberg from US health service organization Geisinger for a new leadership role, which CNBC reports is intended to tie together multiple fragmented health initiatives and coordinate the company’s moves into the $3 trillion healthcare sector.

The company’s stated mission of ‘organizing the world’s information and making it universally accessible and useful’ is now seemingly being applied to its own rather messy corporate structure — to try to capitalize on growing opportunities for selling software to clinicians.

That health tech opportunities are growing is clear.

In the UK, where Streams and DeepMind Health operate, the minister for health, Matt Hancock, a recent transplant to the portfolio from the digital brief, brought his love of apps with him — and almost immediately made technology one of his stated priorities for the NHS.

Last month he fleshed his thinking out further, publishing a future of healthcare policy document containing a vision for transforming how the NHS operates — to plug in what he called “healthtech” apps and services, to support tech-enabled “preventative, predictive and personalised care”.

Which really is a clarion call to software makers to clap fresh eyes on the sector.

In the UK the legwork that DeepMind has done on the ‘apps for clinicians’ front — finding a willing NHS Trust to partner with; getting access to patient data, with the Royal Free passing over the medical records of some 1.6 million people as Streams was being developed in the autumn of 2015; inking a bunch more Streams deals with other NHS Trusts — is now being folded right back into Google.

And this is where things get shocking.

Trust demolition

Shocking because DeepMind handing the app to Google — and therefore all the patient data that sits behind it — goes against explicit reassurances made by DeepMind’s founders that there was a firewall sitting between its health experiments and its ad tech parent, Google.

“In this work, we know that we’re held to the highest level of scrutiny,” wrote DeepMind co-founder Mustafa Suleyman in a blog post in July 2016 as controversy swirled over the scope and terms of the patient data-sharing arrangement it had inked with the Royal Free. “DeepMind operates autonomously from Google, and we’ve been clear from the outset that at no stage will patient data ever be linked or associated with Google accounts, products or services.”

As law and technology academic Julia Powles, who co-wrote a research paper on DeepMind’s health foray with the New Scientist journalist, Hal Hodson, who obtained and published the original (now defunct) patient data-sharing agreement, noted via Twitter: “This isn’t transparency, it’s trust demolition.”

Turns out DeepMind’s patient data firewall was nothing more than a verbal assurance — and two years later those words have been steamrollered by corporate reconfiguration, as Google and Alphabet elbow DeepMind’s team aside and prepare to latch onto a burgeoning new market opportunity.

Any fresh assurances that people’s sensitive medical records will never be used for ad targeting will now have to come direct from Google. And they’ll just be words too. So put that in your patient trust pipe and smoke it.

The Streams app data is also — to be clear — personal data that the individuals concerned never consented to being passed to DeepMind. Let alone to Google.

Patients weren’t asked for their consent nor even consulted by the Royal Free when it quietly inked a partnership with DeepMind three years ago. It was only months later that the initiative was even made public, although the full scope and terms only emerged thanks to investigative journalism.

Transparency was lacking from the start.

This is why, after a lengthy investigation, the UK’s data protection watchdog ruled last year that the Trust had breached UK law — saying people would not have reasonably expected their information to be used in such a way.

Nor should they. If you ended up in hospital with a broken leg you’d expect the hospital to have your data. But wouldn’t you be rather shocked to learn — shortly afterwards or indeed years and years later — that your medical records are now sitting on a Google server because Alphabet’s corporate leaders want to scale a fat healthtech profit?

In the same 2016 blog post, entitled “DeepMind Health: our commitment to the NHS”, Suleyman made a point of noting how it had asked “a group of respected public figures to act as Independent Reviewers, to examine our work and publish their findings”, further emphasizing: “We want to earn public trust for this work, and we don’t take that for granted.”

Fine words indeed. And the panel of independent reviewers that DeepMind assembled to act as an informal watchdog in patients’ and consumers’ interests did indeed contain well respected public figures, chaired by former Liberal Democrat MP Julian Huppert.

The panel was provided with a budget by DeepMind to carry out investigations of the reviewers’ choosing. It went on to produce two annual reports — flagging a number of issues of concern, including, most recently, warning that Google might be able to exert monopoly power as a result of the fact Streams is being contractually bundled with streaming and data access infrastructure.

The reviewers also worried whether DeepMind Health would be able to insulate itself from Alphabet’s influence and commercial priorities — urging DeepMind Health to “look at ways of entrenching its separation from Alphabet and DeepMind more robustly, so that it can have enduring force to the commitments it makes”.

It turns out that was a very prescient concern since Alphabet/Google has now essentially dissolved the bits of DeepMind that were sticking in its way.

Including — it seems — the entire external reviewer structure…

A DeepMind spokesperson told us that the panel’s governance structure was created for DeepMind Health “as a UK entity”, adding: “Now Streams is going to be part of a global effort this is unlikely to be the right structure in the future.”

It turns out — yet again — that tech industry DIY ‘guardrails’ and self-styled accountability are about as reliable as verbal assurances. Which is to say, not at all.

This is also both deeply unsurprising and horribly shocking. The shock is really that big tech keeps getting away with this.

None of the self-generated ‘trust and accountability’ structures that tech giants are now routinely popping up with entrepreneurial speed — to act as public curios and talking shops that draw questions away from what they’re actually doing as people’s data gets sucked up for commercial gain — can in fact be trusted.

They are a shiny distraction from due process. Or to put it more succinctly: It’s PR.

There is no accountability if rules are self-styled and therefore cannot be enforced because they can just get overwritten and goalposts moved at corporate will.

Nor can there be trust in any commercial arrangement unless it has adequately bounded — and legal — terms.

This stuff isn’t rocket science nor even medical science. So it’s quite the pantomime dance that DeepMind and Google have been merrily leading everyone on.

It’s almost as if they were trying to cause a massive distraction — by sicking up faux discussions of trust, fairness and privacy — to waste good people’s time while they got on with the lucrative business of mining everyone’s data.