Tech Advocacy Group That Includes Apple Meeting This Wednesday to Discuss Online Privacy

Members of the Information Technology Industry Council plan to meet this Wednesday, June 27 in San Francisco to discuss “how to tackle growing questions and concerns about consumer privacy online.”

The news comes from Axios, and members of ITI in attendance will reportedly include Apple, Facebook, IBM, Microsoft, Intel, Qualcomm, Samsung, Dropbox, and more — although specific attendees have not been confirmed by the organization.

ITI has organized an all-day meeting that will focus on online privacy in the wake of Europe’s General Data Protection Regulation and the Facebook/Cambridge Analytica data scandal.

ITI CEO Dean Garfield told Axios that tech companies are aware there’s a “new sense of urgency around consumer privacy.” The organization also said that the new meet-up of tech leaders is “not a direct result” of alleged conversations brewing within the Trump administration about a U.S. “counter-weight” to Europe’s GDPR.

“Just because Europe has taken a comprehensive approach doesn’t mean our different approach is deficient,” Garfield said. “And just because Europe is early doesn’t mean it’s best or final. But we should always be thinking about how we evolve to make sure consumers have trust in our products.”

In last week’s report, Trump advisor Gail Slater was said to have discussed a U.S. version of GDPR with Garfield, although Slater stated the White House has no desire to create a “U.S. clone” of Europe’s rules. Slater claimed that “giving consumers more control over their data” and “more access to their data” are highlights of the GDPR, suggesting these aspects would be emphasized in a U.S. law if one ever comes to pass.

While lawmakers and advocacy groups discuss online consumer privacy, individual companies have promised some form of enhanced user privacy on a global scale in the wake of GDPR. Apple, for its part, launched a new Data & Privacy website that lets users download all of the data associated with their Apple ID. Prior to GDPR, last September, Apple revamped its privacy website to make its various policies more accessible and easier to read for its customers.


If We Care About the Internet, We Have to Be Willing to Do Our Part

Whether it’s playing Dungeons & Dragons over voice chat with my college friends hundreds of miles away, reading the latest movie reviews for summer blockbusters I’ll watch once they come out on video, or simply paying electric bills, the Internet has become an important part of my life.

Yet, while I have come to rely on the Internet, I don’t always do what is best for it.

I don’t always patch my connected devices or applications, leaving them vulnerable to compromise and use in a botnet. I don’t look for security when buying an app or a device, let alone look at the privacy policies.

While I know I am hurting the overall security of the Internet, I find myself thinking, “I’m just one person, how much damage could I do?”

Unfortunately, according to one recent survey, there are a lot of people who act just like me. 

The results from the 2018 CIGI-Ipsos Global Survey on Internet Security and Trust* suggest that many users fail to make security a priority as they shop for Internet of Things (IoT) devices. (IoT refers to “scenarios where network connectivity and computing capability extends to objects, sensors and everyday items not normally considered computers, allowing these devices to generate, exchange and consume data with minimal human intervention,” and can include consumer products, durable goods, cars and trucks, industrial and utility components, and sensors.)

According to one estimate, IoT is projected to grow to 38.5 billion connected devices in 2020, up from 13.4 billion in 2015. Each of these devices, whether a thermostat, car, fitness tracker, or something else, will be connected to the Internet. And, if left unsecured, these devices can be conscripted into networks of externally controlled, Internet-connected devices (“botnets”) that can be used to attack infrastructure, online businesses, even you and me.

As more IoT products are brought online, it is critical that they have good security to avoid being pressed into a botnet. But manufacturers will only make them secure if there’s a market for it.

The survey suggests that while 52% of users would be willing to pay more for better product security, the other 48% of users would not. Respondents were also asked to rank several attributes by importance in influencing their decision to buy an application or connected device: price, security, privacy policy, functionality, ease of use, brand reputation, and appearance. While “security” scored better than the other options, only 31% of respondents placed it as the most important attribute influencing their decision to buy an application or connected device. And only 14% of respondents placed “privacy policy” as the most influential attribute.

If these results are indicative of the general trend, with nearly half of consumers unwilling to pay more for better security and only some placing security as their top priority when buying a device, will there be enough market demand to push manufacturers to make more secure products? I’m not sure.

But I do know we can do better. 

Our actions (or inaction) can have a significant impact on other Internet users and services. When we choose the poorly secured product because it is cheaper, we encourage IoT manufacturers to prioritize price over security. When nearly half of us refuse to pay more for better security, we almost guarantee it.

Let’s all do better. Here are five actions I’ll take to make the Internet safer and its future brighter:

  • Learn to shop smart, especially for connected devices. I’ll also be willing to pay a little more to be more secure. My post on shopping for connected toys and Mozilla’s guide to shopping for connected gifts are both great places to start.
  • Update your devices and their applications. Anything that’s Internet-connected, from light bulbs to your thermostat, should be updated. Updating your devices can help keep them safe from known vulnerabilities. If you are unsure how to do this, the device manufacturer should have clear instructions on its website.
  • Turn on encryption if available. Take a few minutes to see if your devices or services are already using encryption or if you need to turn it on.
  • Take steps to make your home network more secure. By protecting your home network, you limit your exposure to online threats and help mitigate the risk a connected device on your network may pose to others. An easy way to make your network more secure is to use encryption, a strong password, and a firewall for your home WiFi network. Firewalls are often built into routers and only need to be turned on. The manufacturer should have clear instructions on its website about how to do this.
  • Use a strong password. If a connected device or app comes with password protection, make sure you use a strong password. Do not just use the default password, a simple guessable password, or a password that uses easily accessible personal information. This article provides advice for creating a strong password that you can still remember, and the sketch just after this list shows one way to generate one.
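
On that last point, the standard-library route is simple enough to sketch. Below is a minimal, hypothetical example using Python’s `secrets` module, which draws from a cryptographically secure random source; the short word list is illustrative only, and real use calls for a much larger list, such as EFF’s diceware words.

```python
# A minimal sketch of passphrase generation with Python's standard library.
# The word list below is illustrative; use a large list (e.g., EFF's diceware
# list of 7,776 words) in practice.
import secrets

WORDS = ["correct", "horse", "battery", "staple", "orbit", "lantern", "quartz", "meadow"]

def make_passphrase(n_words: int = 4) -> str:
    """Join randomly chosen words using a cryptographically secure RNG."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

print(make_passphrase())  # e.g., "lantern-orbit-staple-meadow"
```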

Rather than asking myself “how much damage can I do?” I should be asking “how much good can I do?” Even though I’m only one person, remember the survey. None of us are alone in this. Even small actions, if done by many, can have a big impact.

Let’s all do our part to make the Internet safer.

*CIGI (Center for International Governance Innovation) and Ipsos, with support from the Internet Society, conducted the survey of 25,262 Internet users in 25 economies (Australia, Brazil, Canada, China, Egypt, France, Germany, Great Britain, Hong Kong, India, Indonesia, Italy, Japan, Kenya, Mexico, Nigeria, Pakistan, Poland, South Africa, South Korea, Sweden, Tunisia, Turkey and the United States). This is CIGI’s fourth Global Survey on Internet Security and Trust, and it covers a range of issues including Internet trust, privacy, e-commerce, online habits, the Internet of Things (IoT), and emerging technologies.


Supreme Court Rules Police Need Warrants to Obtain a User’s Smartphone Location Data

The United States Supreme Court today ruled that the government generally “is required” to obtain a warrant if it wants to gain access to a user’s smartphone location data (via The New York Times).

The decision is expected to have major implications for digital privacy in future legal cases, and could cause ripples in unlawful search and seizure cases involving personal information held by companies, such as emails, texts, internet searches, and bank records.

In a major statement on privacy in the digital age, the Supreme Court ruled on Friday that the government generally needs a warrant to collect troves of location data about the customers of cellphone companies.

But Chief Justice John G. Roberts Jr., writing for the majority, said the decision was limited. “We hold only that a warrant is required in the rare case where the suspect has a legitimate privacy interest in records held by a third party,” the chief justice wrote. The court’s four more liberal justices joined his opinion.

Today’s ruling in the case Carpenter v. United States came down 5-4, and the case originally emerged from armed robberies of RadioShack and other stores in Detroit dating back to 2010.

In the case, prosecutors relied on “months of records” obtained from cellphone carriers to help prove their case, ultimately showing communication between Timothy Ivory Carpenter outside of a robbery location — with his smartphone nearby — and his accomplices inside. The carriers reportedly turned over 127 days’ worth of Carpenter’s records, with information as specific as whether or not he slept at home on any given night or went to church on Sunday mornings.

This led the Supreme Court justices to weigh whether prosecutors had violated the Fourth Amendment in uncovering so much data about Carpenter’s movements. Now, police will have to obtain a court-issued warrant to access a user’s smartphone location data.

As the case continued, Apple and other technology companies filed a brief in August 2017 arguing against “rigid analog-era” Fourth Amendment rules. The brief deliberately avoided taking sides, but urged the Supreme Court to continue bringing Fourth Amendment law into the modern era. The companies stated that customers should not be “forced to relinquish Fourth Amendment protections” against government intrusion simply because they choose to use modern technology.


White House Reportedly Interested in Developing ‘Counter-Weight’ to Europe’s GDPR Privacy Laws

Last month, Europe implemented its General Data Protection Regulation in an effort to protect the data of all individuals within the European Union, with some aspects affecting users worldwide. According to a new report by Axios, the White House is “in the early stages” of figuring out what a federal approach to online data privacy would look like in the United States.

So far, Gail Slater, special assistant to President Trump for tech, telecom, and cyber policy, has met with industry groups about the issue. Discussions include possible “guardrails” for the use of personal data online, according to a few sources familiar with the talks. Furthermore, Slater has talked about the implementation of GDPR with Dean Garfield, CEO of the Information Technology Industry Council, which represents tech companies like Apple and Google.

Slater and the Trump administration have reportedly referred to the U.S. proposal as a “counter-weight to GDPR,” aimed at ensuring that the European law doesn’t become the global standard of online privacy, sources said. Still, Slater also stated that there is no desire to create a “U.S. clone” of the European rules.

Axios theorized that one possible outcome from the conversations could be an executive order that leads to the development of a privacy framework for U.S. citizens.

One option is an executive order directing one or more agencies to develop a privacy framework. That could direct the National Institute of Standards and Technology, an arm of the Commerce Department, to work with industry and other experts to come up with guidelines, according to two sources.

An executive order could also kick off a public-private partnership to lay out voluntary privacy best practices, which could become de-facto standards, according to sources.

News of the potential new privacy practices comes as “pressure” is being placed on U.S. lawmakers following high-profile data scandals like Facebook/Cambridge Analytica. Beginning with reports in March, it emerged that Facebook had shared data with consulting firm Cambridge Analytica, which itself was tied to Trump’s 2016 presidential campaign. Using a survey app called “This Is Your Digital Life,” the firm secretly amassed data from millions of Facebook users and used it to target voters and attempt to sway the election.

Slater claimed that “giving consumers more control over their data” and “more access to their data” are highlights of the GDPR, suggesting these aspects would be emphasized in any U.S. law.

“We’re talking through what, if anything, the administration could and should be doing” on privacy, Slater said at a conference hosted last month by the National Venture Capital Association.

In the wake of GDPR, Apple itself launched a new Data & Privacy website that lets users download all of the data associated with their Apple ID. While the feature was limited to Apple accounts registered in the European Union, Iceland, Liechtenstein, Norway, and Switzerland at launch, Apple said it will roll out the service worldwide “in the coming months.”


Blockchain browser Brave starts opt-in testing of on-device ad targeting

Brave, an ad-blocking web browser with a blockchain-based twist, has started trials of ads that reward viewers for watching them — the next step in its ambitious push towards a consent-based, pro-privacy overhaul of online advertising.

Brave’s Basic Attention Token (BAT) is the underlying micropayments mechanism it’s using to fuel the model. The startup was founded in 2015 by former Mozilla CEO Brendan Eich, and had a hugely successful initial coin offering last year.

In a blog post announcing the opt-in trial yesterday, Brave says it’s started “voluntary testing” of the ad model before it scales up to additional user trials.

These first tests involve around 250 “pre-packaged ads” being shown to trial volunteers via a dedicated version of the Brave browser that’s both loaded with the ads and capable of tracking users’ browsing behavior.

The startup signed up Dow Jones Media Group as a partner for the trial-based ad content back in April.

People interested in joining these trials are being asked to contact its Early Access group — via community.brave.com.

Brave says the test is intended to analyze user interactions to generate test data for training its on-device machine learning algorithms. So while its ultimate goal for the BAT platform is to be able to deliver ads without eroding individual users’ privacy via this kind of invasive tracking, the test phase does involve “a detailed log” of browsing activity being sent to it.

Though Brave also specifies: “Brave will not share this information, and users can leave this test at any time by switching off this feature or using a regular version of Brave (which never logs user browsing data to any server).”

“Once we’re satisfied with the performance of the ad system, Brave ads will be shown directly in the browser in a private channel to users who consent to see them. When the Brave ad system becomes widely available, users will receive 70% of the gross ad revenue, while preserving their privacy,” it adds.

The key privacy-by-design shift Brave is working towards is moving ad targeting from a cloud-based ad exchange to the local device where users can control their own interactions with marketing content, and don’t have to give up personal data to a chain of opaque third parties (armed with hooks and data-sucking pipes) in order to do so.

Local device ad targeting will work by Brave pushing out ad catalogs (one per region and natural language) to available devices on a recurring basis.

“Downloading a catalog does not identify any user,” it writes. “As the user browses, Brave locally matches the best available ad from the catalog to display that ad at the appropriate time. Brave ads are opt-in and consent-based (disabled by default), and engineered to operate without leaking the user’s personal data from their device.”
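
To make that concrete, here is a rough, hypothetical sketch of the flow Brave describes: a catalog fetched per region and language, with matching done entirely on the device. The endpoint, JSON fields, and scoring below are assumptions for illustration, not Brave’s actual code.

```python
# Hypothetical sketch of Brave's described flow: fetch a regional ad catalog,
# then match ads locally so browsing data never leaves the device. The URL,
# JSON fields, and scoring below are illustrative assumptions.
import json
import urllib.request
from typing import Optional

CATALOG_URL = "https://example.com/catalogs/en-US.json"  # placeholder endpoint

def fetch_catalog(url: str = CATALOG_URL) -> list:
    """Download the per-region/language catalog; the request identifies no user."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["ads"]

def best_local_match(catalog: list, local_interests: set) -> Optional[dict]:
    """Score every ad on-device against locally stored interests."""
    best_score, best_ad = 0, None
    for ad in catalog:
        score = len(local_interests & set(ad["keywords"]))
        if score > best_score:
            best_score, best_ad = score, ad
    return best_ad  # nothing about the user is ever uploaded
```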

It couches this approach as “a more efficient and direct opportunity to access user attention without the inherent liabilities and risks involved with large scale user data collection”.

Though there’s still a ways to go before Brave is in a position to prove out its claims — including several more testing phases.

Brave says it’s planning to run further studies later this month with a larger set of users that will focus on improving its user modeling — “to integrate specific usage of the browser, with the primary goal of understanding how behavior in the browser impacts when to deliver ads”.

“This will serve to strengthen existing modeling and data classification engines and to refine the system’s machine learning,” it adds.

After that it says it will start to expand user trials — “in a few months” — focusing testing on the impact of rewards in its user-centric ad system.

“Thousands of ads will be used in this phase, and users will be able to earn tokens for viewing and interacting with ads,” it says of that.

Brave’s initial goal is for users to be able to reward content producers via the utility BAT token stored in a payment wallet baked into the browser. The default distributes the tokens stored in a user’s wallet based on time spent on Brave-verified websites (though users can also make manual tips).
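
That default split is straightforward to illustrate. The sketch below is a minimal, assumed version of pro-rata distribution by attention time; the site names and figures are made up.

```python
# Minimal sketch of pro-rata token distribution by attention time.
# Site names and figures are illustrative, not real BAT accounting.
def distribute_tokens(wallet_balance, attention_seconds):
    """Split the wallet across verified sites in proportion to time spent."""
    total = sum(attention_seconds.values())
    return {site: wallet_balance * t / total for site, t in attention_seconds.items()}

print(distribute_tokens(10.0, {"news.example": 1800, "blog.example": 600}))
# -> {'news.example': 7.5, 'blog.example': 2.5}
```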

Though payments using BAT may also ultimately be able to do more.

Its roadmap envisages real ad revenue and donation flow fee revenue being generated via its system this year, and also anticipates BAT integration into “other apps based on open source & specs for greater ad buying leverage and publisher onboarding”.

Keepsafe launches a privacy-focused mobile browser

Keepsafe, the company behind the private photo app of the same name, is expanding its product lineup today with the release of a mobile web browser.

Co-founder and CEO Zouhair Belkoura argued that all of Keepsafe’s products (which also include a VPN app and a private phone number generator) are united not just by a focus on privacy, but by a determination to make those features simple and easy-to-understand — in contrast to what Belkoura described as “how security is designed in techland,” with lots of jargon and complicated settings.

Plus, when it comes to your online activity, Belkoura said there are different levels of privacy. There’s the question of the government and large tech companies accessing our personal data, which he argued people care about intellectually, but “they don’t really care about it emotionally.”

Then there’s “the nosy neighbor problem,” which Belkoura suggested is something people feel more strongly about: “A billion people are using Gmail and it’s scanning all their email [for advertising], but if I were to walk up to you and say, ‘Hey, can I read your email?’ you’d be like, ‘No, that’s kind of weird, go away.’ ”

It looks like Keepsafe is trying to tackle both kinds of privacy with its browser. For one thing, you can lock the browser with a PIN (it also supports Touch ID, Face ID and Android Fingerprint).

[Image: Keepsafe browser tabs]

Then, once you’re actually browsing, you can use normal tabs, where social, advertising and analytics trackers are blocked (you can toggle which kinds of trackers are affected) but cookies and caching are still allowed — so you stay logged in to websites and other session data is retained. Or, if you want an additional layer of privacy, you can open a private tab, where everything is forgotten as soon as you close it.
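
The tab behavior Belkoura describes boils down to category-based request filtering with per-category toggles. A minimal sketch of that idea follows; the categories, domains, and function names are assumptions, not Keepsafe’s implementation.

```python
# Hypothetical sketch of per-category tracker blocking with user toggles.
# Domains and category names are illustrative.
TRACKER_DOMAINS = {
    "social": {"social-widgets.example", "share-buttons.example"},
    "advertising": {"ads.example", "adserver.example"},
    "analytics": {"metrics.example", "stats.example"},
}

toggles = {"social": True, "advertising": True, "analytics": True}

def should_block(request_host: str) -> bool:
    """Block the request if its host falls in any enabled tracker category."""
    return any(
        enabled and request_host in TRACKER_DOMAINS[category]
        for category, enabled in toggles.items()
    )

print(should_block("ads.example"))  # True while "advertising" is toggled on
print(should_block("example.com"))  # False: not a known tracker
```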

While you can get some of these protections just by turning on private/incognito mode in a regular browser, Belkoura said there’s a clarity for consumers when an app is designed specifically for privacy, and the app is part of a broader suite of privacy-focused products. In addition, he said he’s hoping to build meaningful integrations between the different Keepsafe products.

Keepsafe Browser is available for free on iOS and Android.

When asked about monetization, Belkoura said, “I don’t think that the private browser per se is a good place to directly monetize … I’m more interested in saying this is part of the Keepsafe suite and there are other parts of the Keepsafe Suite that we’ll charge you money for.”

Verizon and others call a conditional halt on sharing location with data brokers

Verizon is cutting off access to its mobile customers’ real-time locations to two third-party data brokers “to prevent misuse of that information going forward.” The company announced the decision in a letter sent to Senator Ron Wyden (D-OR), who along with others helped reveal improper usage and poor security at these location brokers. It is not, however, getting out of the location-sharing business altogether.

(Update: AT&T and Sprint have also begun the process of ending their location aggregation services — with a caveat, discussed below.)

Verizon sold bulk access to its customers’ locations to the brokers in question, LocationSmart and Zumigo, which then turned around and resold that data to dozens of other companies. This isn’t necessarily bad — there are tons of times when location is necessary to provide a service the customer asks for, and supposedly that customer would have to okay the sharing of that data. (Disclosure: Verizon owns Oath, which owns TechCrunch. This does not affect our coverage.)

That doesn’t seem to have been the case at LocationSmart customer Securus, which was selling its data directly to law enforcement so they could find mobile customers quickly and without all that fuss about paperwork and warrants. And then it was found that LocationSmart had exposed an API that allowed anyone to request mobile locations freely and anonymously, and without collecting consent.

When these facts were revealed by security researchers and Sen. Wyden, Verizon immediately looked into it, the company reported in a letter sent to the senator.

“We conducted a comprehensive review of our location aggregator program,” wrote Verizon CTO Karen Zacharia. “As a result of this review, we are initiating a process to terminate our existing agreements for the location aggregator program.”

“We will not enter into new location aggregation arrangements unless and until we are comfortable that we can adequately protect our customers’ location data through technological advancements and/or other practices,” she wrote later in the letter. In other words, the program is on ice until it can be secured.

Although Verizon claims to have “girded” the system with “mechanisms designed to protect against misuse of our customers’ location data,” the abuses in question clearly slipped through the cracks. Perhaps most notable is the simple fact that Verizon itself does not seem to need to be informed whether a customer has consented to having their location polled. That collection is the responsibility of “the aggregator or corporate customer.”

In other words, Verizon doesn’t need to ask the customer, and the company it sells the data to wholesale doesn’t need to ask the customer — the requirement devolves to the company buying access from the wholesaler. In Securus’s case, it had abstracted things one step further, allowing law enforcement full access when it said it had authority to do so, but apparently without checking, AT&T wrote in its own letter to Sen. Wyden.
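
For illustration, the safeguard that appears to be missing here is a simple one: a consent check at the carrier before any lookup is released. The sketch below is purely hypothetical (it describes no real carrier API) but shows what such a gate would look like.

```python
# Hypothetical sketch of a carrier-side consent gate; no real API is described.
consent_on_file = {"subscriber-123": True}  # subscriber ID -> recorded opt-in

def lookup_location(subscriber_id: str):
    """Stand-in for the carrier's internal location lookup."""
    return (42.3314, -83.0458)  # illustrative coordinates

def release_location(subscriber_id: str, requester: str):
    """Refuse the request unless the subscriber's own consent is on file."""
    if not consent_on_file.get(subscriber_id, False):
        raise PermissionError(f"no recorded consent from {subscriber_id} for {requester}")
    return lookup_location(subscriber_id)
```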

And there were 75 other corporate customers. Don’t worry, someone is keeping track of them. Right?

These processes are audited, Verizon wrote, but apparently not an audit that finds things like the abuse by Securus or a poorly secured API. Perhaps how this happened is among the “number of internal questions” raised by the review.

When asked for comment, a Verizon representative offered the following statement:

When these issues were brought to our attention, we took immediate steps to stop it. Customer privacy and security remain a top priority for our customers and our company. We stand-by that commitment to our customers.

And indeed while the program itself appears to have been run with a laxity that should be alarming to all those customers for whom Verizon claims to be so concerned, some of the company’s competitors have yet to take similar action. AT&T, T-Mobile and Sprint were also named by LocationSmart as partners. Their own letters to Sen. Wyden stressed that their systems were similar to the others, with similar safeguards (that were similarly eluded).

In a press release announcing that his pressure on Verizon had borne fruit, Sen. Wyden called on the others to step up:

Verizon deserves credit for taking quick action to protect its customers’ privacy and security. After my investigation and follow-up reports revealed that middlemen are selling Americans’ location to the highest bidder without their consent, or making it available on insecure web portals, Verizon did the responsible thing and promptly announced it was cutting these companies off. In contrast, AT&T, T-Mobile, and Sprint seem content to continuing to sell their customers’ private information to these shady middle men, Americans’ privacy be damned.

AT&T actually announced that it is ending its agreements as well, after Sen. Wyden’s call to action was published, and Sprint followed shortly afterwards. AT&T said it “will be ending [its] work with these aggregators for these services as soon as is practical in a way that preserves important, potential lifesaving services like emergency roadside assistance.” Sprint stopped working with LocationSmart last month and is now “beginning the process of terminating its current contracts with data aggregators to whom we provide location data.”

What’s missing from these statements? Among other things: what and how many companies they’re working with, whether they’ll pursue future contracts, and what real changes will be made to prevent future problems like this. Since they’ve been at this for a long time and have had a month to ponder their next course of action, I don’t think it’s unreasonable to expect more than a carefully worded statement about “these aggregators for these services.”

T-Mobile CEO John Legere tweeted that the company “will not sell customer location data to shady middlemen.” Of course, that doesn’t really mean anything. I await substantive promises from the company pertaining to this “pledge.”

The FCC, meanwhile, has announced that it is looking into the issue — with the considerable handicap that Chairman Ajit Pai represented Securus back in 2012 when he was working as a lawyer. Sen. Wyden has called on him to recuse himself, but that has yet to happen.

I’ve asked Verizon for further clarification on its arrangements and plans, specifically whether it has any other location-sharing agreements in place with other companies. These aren’t, after all, the only players in the game.

UK report warns DeepMind Health could gain ‘excessive monopoly power’

DeepMind’s foray into digital health services continues to raise concerns. The latest worries are voiced by a panel of external reviewers appointed by the Google-owned AI company to report on its operations after its initial data-sharing arrangements with the U.K.’s National Health Service (NHS) ran into a major public controversy in 2016.

The DeepMind Health Independent Reviewers’ 2018 report flags a series of risks and concerns, as they see it, including the potential for DeepMind Health to be able to “exert excessive monopoly power” as a result of the data access and streaming infrastructure that’s bundled with provision of the Streams app — and which, contractually, positions DeepMind as the access-controlling intermediary between the structured health data and any other third parties that might, in the future, want to offer their own digital assistance solutions to the Trust.

While the underlying FHIR (Fast Healthcare Interoperability Resources) API deployed by DeepMind for Streams is an open standard, the contract between the company and the Royal Free Trust funnels connections via DeepMind’s own servers, and prohibits connections to other FHIR servers. It’s a commercial structure that seemingly works against the openness and interoperability DeepMind’s co-founder Mustafa Suleyman has claimed to support.
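
For context, a standard FHIR read is an ordinary REST call, which is what makes the contractual funneling notable. The sketch below shows the canonical `GET [base]/Patient/[id]` interaction against a placeholder server; the base URL is an assumption, not a real NHS or DeepMind endpoint.

```python
# Minimal sketch of a standard FHIR read interaction (GET [base]/Patient/[id]).
# The base URL is a placeholder, not a real NHS or DeepMind endpoint.
import json
import urllib.request

FHIR_BASE = "https://fhir.example.org"  # hypothetical FHIR server

def read_patient(patient_id: str) -> dict:
    """Fetch a Patient resource as JSON, per the FHIR REST specification."""
    req = urllib.request.Request(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```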

“There are many examples in the IT arena where companies lock their customers into systems that are difficult to change or replace. Such arrangements are not in the interests of the public. And we do not want to see DeepMind Health putting itself in a position where clients, such as hospitals, find themselves forced to stay with DeepMind Health even if it is no longer financially or clinically sensible to do so; we want DeepMind Health to compete on quality and price, not by entrenching legacy position,” the reviewers write.

Though they point to DeepMind’s “stated commitment to interoperability of systems,” and “their adoption of the FHIR open API” as positive indications, writing: “This means that there is potential for many other SMEs to become involved, creating a diverse and innovative marketplace which works to the benefit of consumers, innovation and the economy.”

“We also note DeepMind Health’s intention to implement many of the features of Streams as modules which could be easily swapped, meaning that they will have to rely on being the best to stay in business,” they add. 

However, stated intentions and future potentials are clearly not the same as on-the-ground reality. And, as it stands, a technically interoperable app-delivery infrastructure is being encumbered by prohibitive clauses in a commercial contract — and by a lack of regulatory pushback against such behavior.

The reviewers also raise concerns about an ongoing lack of clarity around DeepMind Health’s business model — writing: “Given the current environment, and with no clarity about DeepMind Health’s business model, people are likely to suspect that there must be an undisclosed profit motive or a hidden agenda. We do not believe this to be the case, but would urge DeepMind Health to be transparent about their business model, and their ability to stick to that without being overridden by Alphabet. For once an idea of hidden agendas is fixed in people’s mind, it is hard to shift, no matter how much a company is motivated by the public good.”

“We have had detailed conversations about DeepMind Health’s evolving thoughts in this area, and are aware that some of these questions have not yet been finalised. However, we would urge DeepMind Health to set out publicly what they are proposing,” they add.

DeepMind has suggested it wants to build healthcare AIs that are capable of charging by results. But Streams does not involve any AI. The service is also being provided to NHS Trusts for free, at least for the first five years — raising the question of how exactly the Google-owned company intends to recoup its investment.

Google of course monetizes a large suite of free-at-the-point-of-use consumer products — such as the Android mobile operating system; its cloud email service Gmail; and the YouTube video sharing platform, to name three — by harvesting people’s personal data and using that information to inform its ad targeting platforms.

Hence the reviewers’ recommendation for DeepMind to set out its thinking on its business model to avoid its intentions vis-a-vis people’s medical data being viewed with suspicion.

The company’s historical modus operandi also underlines the potential monopoly risks if DeepMind is allowed to carve out a dominant platform position in digital healthcare provision — given how effectively its parent has been able to turn a free-for-OEMs mobile OS (Android) into global smartphone market OS dominance, for example.

So, while DeepMind only has a handful of contracts with NHS Trusts for the Streams app and delivery infrastructure at this stage, the reviewers’ concerns over the risk of the company gaining “excessive monopoly power” do not seem overblown.

They are also worried about DeepMind’s ongoing vagueness about how exactly it works with its parent Alphabet, and what data could ever be transferred to the ad giant — an inevitably queasy combination when stacked against DeepMind’s handling of people’s medical records.

“To what extent can DeepMind Health insulate itself against Alphabet instructing them in the future to do something which it has promised not to do today? Or, if DeepMind Health’s current management were to leave DeepMind Health, how much could a new CEO alter what has been agreed today?” they write.

“We appreciate that DeepMind Health would continue to be bound by the legal and regulatory framework, but much of our attention is on the steps that DeepMind Health have taken to take a more ethical stance than the law requires; could this all be ended? We encourage DeepMind Health to look at ways of entrenching its separation from Alphabet and DeepMind more robustly, so that it can have enduring force to the commitments it makes.”

Responding to the report’s publication on its website, DeepMind writes that it’s “developing our longer-term business model and roadmap.”

“Rather than charging for the early stages of our work, our first priority has been to prove that our technologies can help improve patient care and reduce costs. We believe that our business model should flow from the positive impact we create, and will continue to explore outcomes-based elements so that costs are at least in part related to the benefits we deliver,” it continues.

So it has nothing to say to defuse the reviewers’ concerns about making its intentions for monetizing health data plain — beyond deploying a few choice PR soundbites.

On its links with Alphabet, DeepMind also has little to say, writing only that: “We will explore further ways to ensure there is clarity about the binding legal frameworks that govern all our NHS partnerships.”

“Trusts remain in full control of the data at all times,” it adds. “We are legally and contractually bound to only using patient data under the instructions of our partners. We will continue to make our legal agreements with Trusts publicly available to allow scrutiny of this important point.”

“There is nothing in our legal agreements with our partners that prevents them from working with any other data processor, should they wish to seek the services of another provider,” it also claims in response to additional questions we put to it.

“We hope that Streams can help unlock the next wave of innovation in the NHS. The infrastructure that powers Streams is built on state-of-the-art open and interoperable standards, known as FHIR. The FHIR standard is supported in the UK by NHS Digital, NHS England and the INTEROPen group. This should allow our partner trusts to work more easily with other developers, helping them bring many more new innovations to the clinical frontlines,” it adds in additional comments to us.

“Under our contractual agreements with relevant partner trusts, we have committed to building FHIR API infrastructure within the five year terms of the agreements.”

Asked about the progress it’s made on a technical audit infrastructure for verifying access to health data, which it announced last year, it reiterated the wording on its blog, saying: “We will remain vigilant about setting the highest possible standards of information governance. At the beginning of this year, we appointed a full time Information Governance Manager to oversee our use of data in all areas of our work. We are also continuing to build our Verifiable Data Audit and other tools to clearly show how we’re using data.”
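
DeepMind has described Verifiable Data Audit as an append-only, cryptographically verifiable log. The general technique is easy to sketch: each entry commits to a hash of the one before it, so any retroactive edit breaks the chain. The code below illustrates the concept only; it is not DeepMind’s implementation.

```python
# Sketch of a hash-chained, append-only audit log; concept illustration only.
import hashlib
import json

def _digest(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_entry(log: list, action: str) -> None:
    """Append an entry whose hash commits to the whole history before it."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"action": action, "prev": prev_hash}
    log.append({**body, "hash": _digest(body)})

def verify(log: list) -> bool:
    """Recompute every hash; any altered past entry breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = {"action": entry["action"], "prev": entry["prev"]}
        if entry["prev"] != prev or entry["hash"] != _digest(body):
            return False
        prev = entry["hash"]
    return True
```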

So developments on that front look as slow as we expected.

The Google-owned U.K. AI company began its push into digital healthcare services in 2015, quietly signing an information-sharing arrangement with a London-based NHS Trust that gave it access to around 1.6 million people’s medical records for developing an alerts app for a condition called Acute Kidney Injury.

It also inked an MoU with the Trust in which the pair set out their ambition to apply AI to NHS data sets. (They even went so far as to get ethical sign-off for an AI project — but have consistently claimed the Royal Free data was not fed to any AIs.)

However, the data-sharing collaboration ran into trouble in May 2016 when the scope of patient data being shared by the Royal Free with DeepMind was revealed (via investigative journalism, rather than by disclosures from the Trust or DeepMind).

None of the ~1.6 million people whose non-anonymized medical records had been passed to the Google-owned company had been informed or asked for their consent. And questions were raised about the legal basis for the data-sharing arrangement.

Last summer the U.K.’s privacy regulator concluded an investigation of the project — finding that the Royal Free NHS Trust had broken data protection rules during the app’s development.

Yet despite ethical questions and regulatory disquiet about the legality of the data sharing, the Streams project steamrollered on. And the Royal Free Trust went on to implement the app for use by clinicians in its hospitals, while DeepMind has also signed several additional contracts to deploy Streams to other NHS Trusts.

More recently, the law firm Linklaters completed an audit of the Royal Free Streams project, after being commissioned by the Trust as part of its settlement with the ICO. Though this audit only examined the current functioning of Streams. (There has been no historical audit of the lawfulness of people’s medical records being shared during the build and test phase of the project.)

Linklaters did recommend the Royal Free terminate its wider MoU with DeepMind — and the Trust has confirmed to us that it will be following the firm’s advice.

“The audit recommends we terminate the historic memorandum of understanding with DeepMind which was signed in January 2016. The MOU is no longer relevant to the partnership and we are in the process of terminating it,” a Royal Free spokesperson told us.

So DeepMind, probably the world’s most famous AI company, is in the curious position of being involved in providing digital healthcare services to U.K. hospitals that don’t actually involve any AI at all. (Though it does have some ongoing AI research projects with NHS Trusts too.)

In mid 2016, at the height of the Royal Free DeepMind data scandal — and in a bid to foster greater public trust — the company appointed the panel of external reviewers who have now produced their second report looking at how the division is operating.

And it’s fair to say that much has happened in the tech industry since the panel was appointed to further undermine public trust in tech platforms and algorithmic promises — including the ICO’s finding that the initial data-sharing arrangement between the Royal Free and DeepMind broke U.K. privacy laws.

The eight members of the panel for the 2018 report are: Martin Bromiley OBE; Elisabeth Buggins CBE; Eileen Burbidge MBE; Richard Horton; Dr. Julian Huppert; Professor Donal O’Donoghue; Matthew Taylor; and Professor Sir John Tooke.

In their latest report the external reviewers warn that the public’s view of tech giants has “shifted substantially” versus where it was even a year ago — asserting that “issues of privacy in a digital age are, if anything, of greater concern.”

At the same time politicians are also gazing rather more critically on the works and social impacts of tech giants.

Even so, the U.K. government has been keen to position itself as a supporter of AI, providing public funds for the sector and, in its Industrial Strategy white paper, identifying AI and data as one of four so-called “Grand Challenges” where it believes the U.K. can “lead the world for years to come” — including specifically name-checking DeepMind as one of a handful of leading-edge homegrown AI businesses for the country to be proud of.

Still, questions over how to manage and regulate public sector data and AI deployments — especially in highly sensitive areas such as healthcare — remain to be clearly addressed by the government.

Meanwhile, the encroaching ingress of digital technologies into the healthcare space — even when those technologies involve no AI — is already presenting major challenges, putting pressure on existing information governance rules and structures, and raising the specter of monopolistic risk.

Asked whether it offers any guidance to NHS Trusts around digital assistance for clinicians, including specifically whether it requires multiple options be offered by different providers, the NHS’ digital services provider, NHS Digital, referred our question on to the Department of Health (DoH), saying it’s a matter of health policy.

The DoH in turn referred the question to NHS England, the executive non-departmental body which commissions contracts and sets priorities and directions for the health service in England.

And at the time of writing, we’re still waiting for a response from the steering body.

Ultimately it looks like it will be up to the health service to put in place a clear and robust structure for AI and digital decision services that fosters competition by design by baking in a requirement for Trusts to support multiple independent options when procuring apps and services.

Without that important check and balance, the risk is that platform dynamics will quickly dominate and control the emergent digital health assistance space — just as big tech has dominated consumer tech.

But publicly funded healthcare decisions and data sets should not simply be handed to the single market-dominating entity that’s willing and able to burn the most resource to own the space.

Nor should government stand by and do nothing when there’s a clear risk that a vital area of digital innovation is at risk of being closed down by a tech giant muscling in and positioning itself as a gatekeeper before others have had a chance to show what their ideas are made of, and before even a market has had the chance to form.