Planck Re scores $12M Series A to simplify insurance underwriting with artificial intelligence

Planck Re, a startup that wants to simplify insurance underwriting with artificial intelligence, announced today that it has raised a $12 million Series A. The funding was led by Arbor Ventures, with participation from Viola FinTech and Eight Roads. Co-founder and CEO Elad Tsur tells TechCrunch that the capital will be used to expand Planck Re’s product line into more segments, including retail, contractors, IT and manufacturing, and to grow its research and development team in Israel and its sales team in North America.

The Tel Aviv and New York-based startup plans to focus first on its business in the United States, where it has already launched pilot programs with several insurance carriers. Tsur says that Planck Re’s clients generally use it to help underwrite insurance for small to medium-sized businesses, including business owner policies, which cover property and liability risks, and workers’ compensation.

Founded in 2016 by Tsur, Amir Cohen and David Schapiro, Planck Re positions its technology as a more efficient and accurate alternative to the lengthy risk assessment questionnaires insurers ask clients to fill out. Its platform crawls the internet for publicly available data, including images, text, videos, social media profiles and public records, to build profiles of SMBs seeking insurance coverage. Then it analyzes that data to help carriers figure out their potential risk.

Before launching Planck Re, Tsur and Cohen founded Bluetail, a data mining startup that was acquired by Salesforce in 2012, where it served as the base technology for Salesforce Einstein. Schapiro was previously CEO of financial analytics company Earnix.

There are already a handful of startups, including SoftBank-backed Lemonade, Trōv, Cover, Hippo and Swyfft, that use algorithms to make picking and buying insurance policies easier for consumers, but AI-based underwriting is still a nascent category. One example is Flyreel, which focuses on underwriting property insurance and recently signed a deal with Microsoft to accelerate its go-to-market strategy.

Tsur says Planck Re is developing more dedicated algorithms to meet the evolving needs of insurance providers. For example, many underwriters now want to know if clients in the photography business use aerial imaging equipment, so Planck Re’s image-processing capabilities automatically check images for that information.

He adds that being able to automate underwriting enables carriers to find new distribution channels, including allowing customers to apply for insurance online without needing to fill out any forms. Planck Re also continues to monitor and underwrite policies, which means if a customer’s risk profile changes, insurers can react quickly.

In a statement, Arbor Ventures vice president and head of Israel Lior Simon said, “We are excited to partner with Planck Re and the driven, entrepreneurial team. Insurance companies are thirsty for actionable data, to assess risk, gain real time insights and enhance customer understanding. Planck Re aims to empower them through a streamlined digital approach, which we believe will truly alter the insurance industry.”

A robotic astronaut named CIMON is on its way to the ISS

There’s a new astronaut on its way to the International Space Station this morning aboard SpaceX’s most recent resupply launch, and it’s only the size of a medicine ball. CIMON (Crew Interactive Mobile Companion) is an artificial intelligence assistant designed by Airbus and IBM to help the European Space Agency’s astronauts with everyday tasks aboard the ISS. Weighing in at just 11 pounds, this minute astronaut is equipped with the neural network strength of IBM’s Watson.

Crew members will be able to correspond with CIMON via voice commands and access a database of procedures. CIMON will also be able to detect the crew members’ moods and react accordingly, Till Eisenberg, CIMON project lead at Airbus, told SPACE.com.

In a February press release announcing CIMON’s arrival, Airbus said that CIMON’s emotional intelligence, in addition to its friendly face and voice, will help it operate like a true crew member aboard the station. To start, CIMON even has a built-in friend.

Before setting off today, CIMON was trained alongside German astronaut Alexander Gerst to recognize Gerst’s voice and face and to help him complete three different tasks while aboard the ISS. CIMON will help the geophysicist and volcanologist study crystals on the space station, solve a Rubik’s cube using video data and play the role of an “intelligent camera” to document a medical experiment on board.

CIMON’s mission with Gerst will take place between this June and October 2018, but Airbus hopes that in the future CIMON will be able to observe crew members on longer missions and help scientists learn more about the social dynamics involved in extended space flight — an issue that will be paramount for any dreams of Martian colonies to come.

AI-fueled market intelligence firm Signal Media takes $16M to tackle more targets

Signal Media has closed a $16 million Series B funding round for an AI-fueled approach to media monitoring and b2b market intelligence. The UK startup uses machine learning techniques to filter through external information at scale — generating real-time insights for its customers, including for reputation management and decision support purposes.

Its system analyzes more than 2.8M global online, print, television, radio and regulatory sources, translated in real time from over 100 languages across 200 markets, serving up a customized overview of public conversations, market movements and issues pertinent to the client.

The Series B funding round was led by GMG Ventures, an independent VC fund whose limited partner is The Scott Trust, owner of The Guardian news organisation, though MMC Ventures was the round’s largest investor. The round also included a debt facility from Kreos Capital. Other investors include Frontline, Hearst Ventures, Reed Elsevier Ventures and LocalGlobe.

Signal Media announced a £5.8M Series A at the end of 2016, took in debt financing last year and raised seed funding back in 2015; it says the total raised to date is north of $27M.

On the customer side, Signal Media has more than 200 clients at this point, including the likes of Allen & Overy, Amnesty International, British Airways, E.ON, TalkTalk, Thomas Cook and Whitbread. Its headcount has doubled to almost 100 people in a year, and it says it has more than quadrupled revenues since its Series A raise in September 2016, claiming 300 per cent year-on-year growth in 2017-18.

While PR and communications is where the company has focused its initial product push, it wants to broaden out into risk management for tax and compliance, and business development in financial and legal services — using the latest investor cash injection to train its AIs on more tasks.

Specifically, says CEO and founder David Benigson, it will be using the funding to build out the AI-driven features of the product.

“Alongside our topics and entities, we’ve built a range of product features, from quote detection, to automatic clustering and auto-translation. We’re already working on a sentiment system that explores the meaning behind entire articles, mentions within an article and tone. We think it’s mad to try and put a numerical -1 to +1 score of ‘sentiment’ around an article — because depending on the reader’s context, a quote in an article could be good and the article could be bad, yet for another reader, the reverse might be true! We’re going beyond this kind of thinking. What kind of good is it? What impact will it have? These are the questions we’re trying to give answers to,” he tells TechCrunch.
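To make the contrast concrete, here is a minimal, hypothetical sketch of scoring sentiment per entity mention rather than assigning one -1 to +1 number to a whole article. The keyword lexicon and the `mention_level_sentiment` helper are illustrative assumptions, not Signal Media’s actual system.

```python
from collections import defaultdict

# Hypothetical toy lexicon; a production system would use a trained model instead.
POSITIVE = {"praised", "growth", "wins"}
NEGATIVE = {"fined", "losses", "scandal"}

def mention_level_sentiment(sentences, entities):
    """Score sentiment separately for each entity, based only on the sentences
    that mention it, instead of collapsing the whole article to a single number."""
    scores = defaultdict(list)
    for sentence in sentences:
        words = {w.strip(".,!?").lower() for w in sentence.split()}
        polarity = len(words & POSITIVE) - len(words & NEGATIVE)
        for entity in entities:
            if entity.lower() in sentence.lower():
                scores[entity].append(polarity)
    return {entity: sum(vals) / len(vals) for entity, vals in scores.items() if vals}

article = [
    "Acme was fined over the data scandal.",
    "Meanwhile Globex praised its record growth.",
]
print(mention_level_sentiment(article, ["Acme", "Globex"]))
# {'Acme': -2.0, 'Globex': 2.0}: one article, two very different readings per entity.
```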

“We’re also working on an entirely new way to deliver information to our users by understanding more about their world up front, so the AI can automatically surface contextual articles that are of interest — but that haven’t been actively searched for. Imagine the system itself saying ‘hey, you should really read this, it’s really worth reading considering your interest in X’. That’s what we mean when we talk about the ‘unknown unknowns’ we’re trying to help people see.”

“We’re working with researchers to automate some of the lower-level research work like producing biographies, or company ‘cheat-sheets’,” he adds. “What does this all boil down to? We’re confident we can get to a point relatively quickly where you could be shown an article you didn’t know about, with a competitor or M&A opportunity you haven’t been aware of, give you an immediate idea of what other people think about that competitor, and then give you an instant 2-page summary of that company with all their key information, coupled with some insight into where they sit in a market. It’s really exciting!”

Another focus for the funding is expanding further into the US — where it sees growth potential.

Benigson says a fifth of its revenue currently comes from the US market, a figure he touts as “improving daily as we build the team out there”.

The typical Signal Media buyer is currently the head of PR or head of comms at a professional services, financial or legal firm who’s after reputation management software. But Benigson says that, increasingly, the CEO or wider leadership team wants to tap into real-time updates for business decision support purposes.

“We always had a vision that Signal would be used horizontally to surface business intelligence,” he says — citing regulation tracking as one of the nascent use-cases it’s interested in. “How does public sentiment and policy impact regulation down the line?”

And it’s certainly true that political risk has stepped up for businesses in all sorts of sectors in recent years — as tech-fueled disruption has generated societal and political pressures, alongside a range of volatile geopolitical events.

On the regulatory tracking front, Benigson says it’s working with a major consultancy on an enterprise-wide application of the service aimed at surfacing global regulatory information.

“The idea is to help risk professionals provide high-quality analysis and insight in real-time, so clients can stay ahead of the change by planning for regulation before it’s even announced. How is regulation X or Y going to unfold? What are the different ways that you could adopt or implement that particular change in regulation?” he explains. “We’re not there yet, but the aspiration of the business is to augment those professionals with decision support.”

Some existing clients have also been using the service to source new business — including for spotting potential M&A targets — and this is another area Signal Media wants to nurture and grow.

“For example, we trained the system to source content on financially distressed firms, and it’s used by the M&A teams at several banks to build their pipeline. For these kinds of clients, the first-mover advantage is crucial and can result in seven- or eight-figure deals, so the real-time alerts from Signal are very important if you’re going to beat your competitors to the punch,” he says.

“Imagine being able to see that the M&A rumors around a prospect are ramping over the course of a morning — calling at 1130 instead of 1201 when they make their announcement public might make all the difference.”

Another business intelligence use-case it wants to serve relates to executive moves — and Benigson says it’s “ramping the training of our AI” to help customers get wind of key job changes before rivals do.

In the reputation management space Signal Media is competing with a range of incumbents, such as Kantar and Cision, plus some newer approaches from the likes of Meltwater and RiskEye, though Benigson argues none of its rivals have “machine learning at their heart”. He names that as its “real differentiator” (versus, for example, the keyword-based monitoring used by others).

He also argues it’s “sitting uniquely in the market offering decision support”. “The scope of what we are now able to do in terms of ‘real’ sentiment, automated, readable summaries and contextual analysis is far beyond the traditional scope of reputation management,” he claims.

“We’re also interested in so much more than just reputation and the value of earned media for a CMO (although this is certainly something we help track). My dream is that ultimately we will consolidate insights across risk, reputation, and opportunity back into a single real time dashboard for any C-level professional so that a CEO has as much transparency, clarity, granularity as to what’s happening outside of their organization as inside it.”

While not yet profitable, owing to its focus to date on growth, Benigson expresses confidence in being able to scale the business and achieve profitability down the line. “When we look at the dashboard of AOV, quota, lead time, conversion rate, all of them are trending in the right direction,” he says. “There’s a clear unit-economic pathway to profitability, and we’ll go for it when we’re comfortable about our market position.”

Commenting in a statement, Simon Menashy, partner at MMC Ventures, adds: “We first backed Signal to develop their market-leading AI media monitoring tool, innovating in a space where every large company has a solution but few are happy with it. It’s now become clear that their technology can be applied to solve all kinds of other information problems businesses face. We’re delighted to be significantly increasing our investment and backing Signal’s next phase of growth.”

Talking to Google Duplex: Google’s human-like phone AI feels revolutionary

NEW YORK—Evidently, I didn’t walk into a run-of-the-mill press event. Roughly two months after its annual I/O conference, Google this week invited Ars and several other journalists to the THEP Thai Restaurant in New York City. The company bought out the restaurant for the day, cleared away the tables, and built a little presentation area complete with a TV, loudspeaker, and chairs. Next to the TV was a podium with the Thai restaurant’s actual phone—not some new company smartphone, but the ol’ analogue restaurant line.

We all knew what we were getting into. At I/O 2018, Google shocked the world with a demo of “Google Duplex,” an AI system for accomplishing real-world tasks over the phone. The short demo felt like the culmination of Google’s various voice-recognition and speech-synthesis capabilities: Google’s voice bot could call up businesses and make an appointment on your behalf, all while sounding shockingly similar—some would say deceptively similar—to a human. Its demo even came complete with artificial speech disfluencies like “um” and “uh.”

The short, pre-recorded I/O showcase soon set off a firestorm of debate on the Web. People questioned the ethics of an AI that pretended to be human, wiretap laws were called into question, and some even questioned if the demo was faked. Other than promising Duplex would announce itself as a robot in the future, Google had been pretty quiet about the project since the event.

Alexa for hotels lets guests order room service, control in-room smart devices

Hotel rooms will serve as the newest homes for Amazon’s Alexa starting this summer. Amazon announced a special version of its virtual assistant, Alexa for Hospitality, that will live across Echo devices placed in hotels, vacation rentals, and other similar locations.

Alexa in these devices will be able to do special things for both hospitality professionals and their customers. Amazon claims its Alexa for Hospitality experience will let hotel professionals “deepen engagement” through the voice controls their customers can use. Hotels can also customize some of the experience they want their customers to have by choosing default music services, creating special Alexa Skills that only their guests can use, and monitoring device online status and other connectivity issues.

Guests staying in a room with an Echo device will likely find the experience either convenient or invasive. Guests can ask Alexa to do things like order room service, answer questions about hotel services, control some in-room connected devices like lights and blinds, and more. Alexa Skills will also be available, so guests can use a Skill such as Flight Tracker to check the status of their flight before checking out.

Prisma co-founders raise $1M to build a social app called Capture

Two of the co-founders of the art filter app Prisma have left to build a new social app.

Prisma, as you may recall, had a viral moment back in 2016 when selfie takers went crazy for the fine art spin the app’s AI put on photos — in just a few seconds of processing.

Downloads leapt, art selfies flooded Instagram, and similar arty effects soon found their way into all sorts of rival apps and platforms. Then, after dipping a toe into social waters with the launch of a feed of its own, the company shifted focus to b2b developer tools — and we understand it’s since become profitable.

But two of Prisma’s co-founders, Aleksey Moiseyenkov and Aram Hardy, got itchy feet when they had an idea for another app business. And they’ve both now left to set up a new startup, called Capture Technologies.

The plan is to launch the app — which will be called Capture — in Q4, with a beta planned for September or October, according to Hardy (who’s taking the CMO role).

They’ve also raised a $1M seed for Capture, led by US VC firm General Catalyst. Also investing are KPCB, Social Capital, Dream Machine VC (the seed fund of former TechCrunch co-editor Alexia Bonatsos), Paul Heydon and Russian Internet giant Mail.Ru Group.

Josh Elman from Greylock Partners is also involved as an advisor.

Hardy says they had the luxury of being able to choose their seed investors, after getting a warmer reception for Capture than they’d perhaps expected — thinking it might be tough to raise funding for a new social app given how that very crowded space has also been monopolized by a handful of major platforms… (hi Facebook, hey Snap!)

But they also believe they’ve identified overlooked territory — where they can offer something fresh to help people interact with others in real-time.

They’re not disclosing further details about the idea or how the Capture app will work at this stage, as they’re busy building and Hardy says certain elements could change and evolve before launch day.

What they will say is that the app will involve AI, and will put the emphasis for social interactions squarely on the smartphone camera.

Speed will also be a vital ingredient, as it was with Prisma — literally fueling the app’s virality. “We see a huge move to everything which is happening right now, which is really real-time,” Hardy tells TechCrunch. “Even when we started Prisma there were lots of similar products which were just processing one photo for five, ten, 15 minutes, and people were not using it because it takes time.

“People want everything right now. Right here. So this is a trend which is taking place right now. People just want everything right now, right here. So we’re trying to give it to them.”

“Our team’s mission is to bring an absolutely new and unique experience to how people interact with each other. We would like to come up with something unique and really fresh,” adds Moiseyenkov, Capture’s CEO (pictured above left, with Hardy).

“We see a huge potential in new social apps despite the fact that there are too many huge players.”

Having heard the full Capture pitch from Hardy I can say it certainly seems like an intriguing idea. Though how exactly they go about selectively introducing the concept will be key to building the momentum needed to power their big vision for the app. But really that’s true of any social product.

Their idea has also hooked a strong line up of seed investors, doubtless helped by the pair’s prior success with Prisma. (If there’s one thing investors love more than a timely, interesting idea, it’s a team with pedigree — and these two certainly have that.)

“I’m happy to have such an amazing and experienced team,” adds Moiseyenkov, repaying the compliment to Capture’s investors.

“Your first investors are your team. You have to ask lots of questions like you do when you decide whether this or that person is a perfect fit for your team. Because investors and the team are those people with whom you’re going to build a great product. At the same time, investors ask lots of questions to you.”

Capture’s investors were evidently pleased enough with the answers their questions elicited to cut Capture its founding checks. And the startup’s team is already ten-strong — and hard at work to get a beta launched in fall.

The business is based in the US and Europe, with one office in Moscow, where Hardy says they’ve managed to poach some relevant tech talent from Russian social media giant vk.com, and another slated to open in a couple of weeks’ time, on Snap’s home turf of LA.

“We’ll be their neighbors in Venice beach,” he confirms, though he stresses there will still be clear blue water between the two companies’ respective social apps, adding: “Snapchat is really a different product.”

UK report warns DeepMind Health could gain ‘excessive monopoly power’

DeepMind’s foray into digital health services continues to raise concerns. The latest worries are voiced by a panel of external reviewers appointed by the Google-owned AI company to report on its operations after its initial data-sharing arrangements with the U.K.’s National Health Service (NHS) ran into a major public controversy in 2016.

The DeepMind Health Independent Reviewers’ 2018 report flags a series of risks and concerns, as they see it, including the potential for DeepMind Health to be able to “exert excessive monopoly power” as a result of the data access and streaming infrastructure that’s bundled with provision of the Streams app — and which, contractually, positions DeepMind as the access-controlling intermediary between the structured health data and any other third parties that might, in the future, want to offer their own digital assistance solutions to the Trust.

While the underlying FHIR (Fast Healthcare Interoperability Resources) API deployed by DeepMind for Streams is an open standard, the contract between the company and the Royal Free Trust funnels connections via DeepMind’s own servers and prohibits connections to other FHIR servers: a commercial structure that seemingly works against the openness and interoperability DeepMind’s co-founder Mustafa Suleyman has claimed to support.

“There are many examples in the IT arena where companies lock their customers into systems that are difficult to change or replace. Such arrangements are not in the interests of the public. And we do not want to see DeepMind Health putting itself in a position where clients, such as hospitals, find themselves forced to stay with DeepMind Health even if it is no longer financially or clinically sensible to do so; we want DeepMind Health to compete on quality and price, not by entrenching legacy position,” the reviewers write.

Though they point to DeepMind’s “stated commitment to interoperability of systems,” and “their adoption of the FHIR open API” as positive indications, writing: “This means that there is potential for many other SMEs to become involved, creating a diverse and innovative marketplace which works to the benefit of consumers, innovation and the economy.”

“We also note DeepMind Health’s intention to implement many of the features of Streams as modules which could be easily swapped, meaning that they will have to rely on being the best to stay in business,” they add. 

However, stated intentions and future potentials are clearly not the same as on-the-ground reality. And, as it stands, a technically interoperable app-delivery infrastructure is being encumbered by prohibitive clauses in a commercial contract — and by a lack of regulatory pushback against such behavior.

The reviewers also raise concerns about an ongoing lack of clarity around DeepMind Health’s business model — writing: “Given the current environment, and with no clarity about DeepMind Health’s business model, people are likely to suspect that there must be an undisclosed profit motive or a hidden agenda. We do not believe this to be the case, but would urge DeepMind Health to be transparent about their business model, and their ability to stick to that without being overridden by Alphabet. For once an idea of hidden agendas is fixed in people’s mind, it is hard to shift, no matter how much a company is motivated by the public good.”

“We have had detailed conversations about DeepMind Health’s evolving thoughts in this area, and are aware that some of these questions have not yet been finalised. However, we would urge DeepMind Health to set out publicly what they are proposing,” they add.

DeepMind has suggested it wants to build healthcare AIs that are capable of charging by results. But Streams does not involve any AI. The service is also being provided to NHS Trusts for free, at least for the first five years — raising the question of how exactly the Google-owned company intends to recoup its investment.

Google of course monetizes a large suite of free-at-the-point-of-use consumer products — such as the Android mobile operating system; its cloud email service Gmail; and the YouTube video sharing platform, to name three — by harvesting people’s personal data and using that information to inform its ad targeting platforms.

Hence the reviewers’ recommendation for DeepMind to set out its thinking on its business model to avoid its intentions vis-a-vis people’s medical data being viewed with suspicion.

The company’s historical modus operandi also underlines the potential monopoly risks if DeepMind is allowed to carve out a dominant platform position in digital healthcare provision — given how effectively its parent has been able to turn a free-for-OEMs mobile OS (Android) into global smartphone market OS dominance, for example.

So, while DeepMind only has a handful of contracts with NHS Trusts for the Streams app and delivery infrastructure at this stage, the reviewers’ concerns over the risk of the company gaining “excessive monopoly power” do not seem overblown.

They are also worried about DeepMind’s ongoing vagueness about how exactly it works with its parent Alphabet, and what data could ever be transferred to the ad giant — an inevitably queasy combination when stacked against DeepMind’s handling of people’s medical records.

“To what extent can DeepMind Health insulate itself against Alphabet instructing them in the future to do something which it has promised not to do today? Or, if DeepMind Health’s current management were to leave DeepMind Health, how much could a new CEO alter what has been agreed today?” they write.

“We appreciate that DeepMind Health would continue to be bound by the legal and regulatory framework, but much of our attention is on the steps that DeepMind Health have taken to take a more ethical stance than the law requires; could this all be ended? We encourage DeepMind Health to look at ways of entrenching its separation from Alphabet and DeepMind more robustly, so that it can have enduring force to the commitments it makes.”

Responding to the report’s publication on its website, DeepMind writes that it’s “developing our longer-term business model and roadmap.”

“Rather than charging for the early stages of our work, our first priority has been to prove that our technologies can help improve patient care and reduce costs. We believe that our business model should flow from the positive impact we create, and will continue to explore outcomes-based elements so that costs are at least in part related to the benefits we deliver,” it continues.

So it has nothing to say to defuse the reviewers’ concerns about making its intentions for monetizing health data plain — beyond deploying a few choice PR soundbites.

On its links with Alphabet, DeepMind also has little to say, writing only that: “We will explore further ways to ensure there is clarity about the binding legal frameworks that govern all our NHS partnerships.”

“Trusts remain in full control of the data at all times,” it adds. “We are legally and contractually bound to only using patient data under the instructions of our partners. We will continue to make our legal agreements with Trusts publicly available to allow scrutiny of this important point.”

“There is nothing in our legal agreements with our partners that prevents them from working with any other data processor, should they wish to seek the services of another provider,” it also claims in response to additional questions we put to it.

“We hope that Streams can help unlock the next wave of innovation in the NHS. The infrastructure that powers Streams is built on state-of-the-art open and interoperable standards, known as FHIR. The FHIR standard is supported in the UK by NHS Digital, NHS England and the INTEROPen group. This should allow our partner trusts to work more easily with other developers, helping them bring many more new innovations to the clinical frontlines,” it adds in additional comments to us.

“Under our contractual agreements with relevant partner trusts, we have committed to building FHIR API infrastructure within the five year terms of the agreements.”

Asked about the progress it’s made on a technical audit infrastructure for verifying access to health data, which it announced last year, it reiterated the wording on its blog, saying: “We will remain vigilant about setting the highest possible standards of information governance. At the beginning of this year, we appointed a full time Information Governance Manager to oversee our use of data in all areas of our work. We are also continuing to build our Verifiable Data Audit and other tools to clearly show how we’re using data.”

So developments on that front look as slow as we expected.

The Google-owned U.K. AI company began its push into digital healthcare services in 2015, quietly signing an information-sharing arrangement with a London-based NHS Trust that gave it access to around 1.6 million people’s medical records for developing an alerts app for a condition called Acute Kidney Injury.

It also inked an MoU with the Trust where the pair set out their ambition to apply AI to NHS data sets. (They even went so far as to get ethical sign-off for an AI project — but have consistently claimed the Royal Free data was not fed to any AIs.)

However, the data-sharing collaboration ran into trouble in May 2016 when the scope of patient data being shared by the Royal Free with DeepMind was revealed (via investigative journalism, rather than by disclosures from the Trust or DeepMind).

None of the ~1.6 million people whose non-anonymized medical records had been passed to the Google-owned company had been informed or asked for their consent. And questions were raised about the legal basis for the data-sharing arrangement.

Last summer the U.K.’s privacy regulator concluded an investigation of the project — finding that the Royal Free NHS Trust had broken data protection rules during the app’s development.

Yet despite ethical questions and regulatory disquiet about the legality of the data sharing, the Streams project steamrollered on. And the Royal Free Trust went on to implement the app for use by clinicians in its hospitals, while DeepMind has also signed several additional contracts to deploy Streams to other NHS Trusts.

More recently, the law firm Linklaters completed an audit of the Royal Free Streams project, after being commissioned by the Trust as part of its settlement with the ICO. Though this audit only examined the current functioning of Streams. (There has been no historical audit of the lawfulness of people’s medical records being shared during the build and test phase of the project.)

Linklaters did recommend that the Royal Free terminate its wider MoU with DeepMind — and the Trust has confirmed to us that it will be following the firm’s advice.

“The audit recommends we terminate the historic memorandum of understanding with DeepMind which was signed in January 2016. The MOU is no longer relevant to the partnership and we are in the process of terminating it,” a Royal Free spokesperson told us.

So DeepMind, probably the world’s most famous AI company, is in the curious position of being involved in providing digital healthcare services to U.K. hospitals that don’t actually involve any AI at all. (Though it does have some ongoing AI research projects with NHS Trusts too.)

In mid 2016, at the height of the Royal Free DeepMind data scandal — and in a bid to foster greater public trust — the company appointed the panel of external reviewers who have now produced their second report looking at how the division is operating.

And it’s fair to say that much has happened in the tech industry since the panel was appointed to further undermine public trust in tech platforms and algorithmic promises — including the ICO’s finding that the initial data-sharing arrangement between the Royal Free and DeepMind broke U.K. privacy laws.

The eight members of the panel for the 2018 report are: Martin Bromiley OBE; Elisabeth Buggins CBE; Eileen Burbidge MBE; Richard Horton; Dr. Julian Huppert; Professor Donal O’Donoghue; Matthew Taylor; and Professor Sir John Tooke.

In their latest report the external reviewers warn that the public’s view of tech giants has “shifted substantially” versus where it was even a year ago — asserting that “issues of privacy in a digital age are if anything, of greater concern.”

At the same time, politicians are also gazing rather more critically at the workings and social impacts of tech giants.

The U.K. government has also been keen to position itself as a supporter of AI, providing public funds for the sector and, in its Industrial Strategy white paper, identifying AI and data as one of four so-called “Grand Challenges” where it believes the U.K. can “lead the world for years to come” — including specifically name-checking DeepMind as one of a handful of leading-edge homegrown AI businesses for the country to be proud of.

Still, questions over how to manage and regulate public sector data and AI deployments — especially in highly sensitive areas such as healthcare — remain to be clearly addressed by the government.

Meanwhile, the encroaching ingress of digital technologies into the healthcare space — even when the technology doesn’t involve any AI — is already presenting major challenges by putting pressure on existing information governance rules and structures, and raising the specter of monopolistic risk.

Asked whether it offers any guidance to NHS Trusts around digital assistance for clinicians, including specifically whether it requires multiple options be offered by different providers, the NHS’ digital services provider, NHS Digital, referred our question on to the Department of Health (DoH), saying it’s a matter of health policy.

The DoH in turn referred the question to NHS England, the executive non-departmental body which commissions contracts and sets priorities and directions for the health service in England.

And at the time of writing, we’re still waiting for a response from the steering body.

Ultimately it looks like it will be up to the health service to put in place a clear and robust structure for AI and digital decision services that fosters competition by design by baking in a requirement for Trusts to support multiple independent options when procuring apps and services.

Without that important check and balance, the risk is that platform dynamics will quickly dominate and control the emergent digital health assistance space — just as big tech has dominated consumer tech.

But publicly funded healthcare decisions and data sets should not simply be handed to the single market-dominating entity that’s willing and able to burn the most resource to own the space.

Nor should government stand by and do nothing when there’s a clear risk that a vital area of digital innovation is at risk of being closed down by a tech giant muscling in and positioning itself as a gatekeeper before others have had a chance to show what their ideas are made of, and before even a market has had the chance to form. 

Accenture wants to beat unfair AI with a professional toolkit

Next week professional services firm Accenture will be launching a new tool to help its customers identify and fix unfair bias in AI algorithms. The idea is to catch discrimination before it gets baked into models and can cause human damage at scale.

The “AI fairness tool”, as it’s being described, is one piece of a wider package the consultancy firm has recently started offering its customers around transparency and ethics for machine learning deployments — while still pushing businesses to adopt and deploy AI. (So the intent, at least, can be summed up as: ‘Move fast and don’t break things’. Or, in very condensed corporate-speak: “Agile ethics”.) 

“Most of last year was spent… understanding this realm of ethics and AI and really educating ourselves, and I feel that 2018 has really become the year of doing — the year of moving beyond virtue signaling. And moving into actual creation and development,” says Rumman Chowdhury, Accenture’s responsible AI lead — who joined the company when the role was created, in January 2017.

“For many of us, especially those of us who are in this space all the time, we’re tired of just talking about it — we want to start building and solving problems, and that’s really what inspired this fairness tool.”

Chowdhury says Accenture is defining fairness for this purpose as “equal outcomes for different people”. 

“There is no such thing as a perfect algorithm,” she says. “We know that models will be wrong sometimes. We consider it unfair if there are different degrees of wrongness… for different people, based on characteristics that should not influence the outcomes.”
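As a rough illustration of that definition, a fairness check in this spirit might compare error rates across the groups defined by a sensitive attribute. The following Python sketch is a hypothetical reading of “equal outcomes”, not Accenture’s implementation; the data and group labels are made up.

```python
import numpy as np
import pandas as pd

def error_rate_gap(y_true, y_pred, group):
    """Compare how often the model is wrong for each group of a sensitive attribute."""
    df = pd.DataFrame({"wrong": np.asarray(y_true) != np.asarray(y_pred), "group": group})
    rates = df.groupby("group")["wrong"].mean()
    # The gap between the worst- and best-served group is one measure of
    # "different degrees of wrongness for different people".
    return rates, rates.max() - rates.min()

# Made-up predictions for two groups, A and B.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 1, 0]
group  = ["A", "A", "B", "B", "A", "B", "B", "A"]

rates, gap = error_rate_gap(y_true, y_pred, group)
print(rates)  # per-group error rates
print(gap)    # a large gap flags unequal outcomes
```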

She envisages the tool having wide application and utility across different industries and markets, suggesting early adopters are likely those in the most heavily regulated industries — such as financial services and healthcare, where “AI can have a lot of potential but has a very large human impact”.

“We’re seeing increasing focus on algorithmic bias, fairness. Just this past week we’ve had Singapore announce an AI ethics board. Korea announce an AI ethics board. In the US we already have industry creating different groups — such as The Partnership on AI. Google just released their ethical guidelines… So I think industry leaders, as well as non-tech companies, are looking for guidance. They are looking for standards and protocols and something to adhere to because they want to know that they are safe in creating products.

“It’s not an easy task to think about these things. Not every organization or company has the resources to. So how might we better enable that to happen? Through good legislation, through enabling trust, communication. And also through developing these kinds of tools to help the process along.”

The tool — which uses statistical methods to assess AI models — is focused on one type of AI bias problem that’s “quantifiable and measurable”. Specifically it’s intended to help companies assess the data sets they feed to AI models to identify biases related to sensitive variables and course correct for them, as it’s also able to adjust models to equalize the impact.

To boil it down further, the tool examines the “data influence” of sensitive variables (age, gender, race etc) on other variables in a model — measuring how much of a correlation the variables have with each other to see whether they are skewing the model and its outcomes.

It can then remove the impact of sensitive variables, leaving only the residual impact that, say, ‘likelihood to own a home’ would have on a model output, instead of the output being derived from age and likelihood to own a home, and therefore risking decisions that are biased against certain age groups.
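A minimal sketch of that idea, assuming a simple linear notion of “data influence”: measure how strongly the sensitive variable correlates with another feature, then keep only the part of that feature the sensitive variable cannot explain. The column names and the regression-based residualization are illustrative assumptions, not a description of Accenture’s actual tool.

```python
import numpy as np
import pandas as pd

def residualize(df, feature, sensitive):
    """Return `feature` with the linear influence of `sensitive` removed."""
    # How much the sensitive variable is "leaking" into the supposedly neutral feature.
    corr_before = df[feature].corr(df[sensitive])
    # Fit feature ~ sensitive and keep only the residual (the unexplained part).
    slope, intercept = np.polyfit(df[sensitive], df[feature], deg=1)
    residual = df[feature] - (slope * df[sensitive] + intercept)
    return corr_before, residual

# Illustrative data: 'age' (sensitive) strongly drives 'home_ownership_score'.
rng = np.random.default_rng(0)
age = rng.uniform(20, 70, size=500)
df = pd.DataFrame({
    "age": age,
    "home_ownership_score": 0.02 * age + rng.normal(0, 0.1, size=500),
})

corr_before, cleaned = residualize(df, "home_ownership_score", "age")
print(f"correlation with age before: {corr_before:.2f}")
print(f"correlation with age after:  {cleaned.corr(df['age']):.2f}")  # close to zero
```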

“There’s two parts to having sensitive variables like age, race, gender, ethnicity etc motivating or driving your outcomes. So the first part of our tool helps you identify which variables in your dataset that are potentially sensitive are influencing other variables,” she explains. “It’s not as easy as saying: Don’t include age in your algorithm and it’s fine. Because age is very highly correlated with things like number of children you have, or likelihood to be married. Things like that. So we need to remove the impact that the sensitive variable has on other variables which we’re considering to be not sensitive and necessary for developing a good algorithm.”

Chowdhury cites an example in the US, where algorithms used to determine parole outcomes were less likely to be wrong for white men than for black men. “That was unfair,” she says. “People were denied parole, who should have been granted parole — and it happened more often for black people than for white people. And that’s the kind of fairness we’re looking at. We want to make sure that everybody has equal opportunity.”

However, a quirk of AI algorithms is that when models are corrected for unfair bias there can be a reduction in their accuracy. So the tool also calculates the accuracy of any trade-off to show whether improving the model’s fairness will make it less accurate and to what extent.

Users get a before-and-after visualization of any bias corrections, and can essentially choose to set their own ‘ethical bar’ based on fairness vs accuracy — using a toggle bar on the platform — assuming they are comfortable compromising the former for the latter (and, indeed, comfortable with any associated legal risk if they actively select for an obviously unfair tradeoff).
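A sketch of what that before/after readout could look like in code: train the same model on the raw features and on bias-corrected features, then report the accuracy given up by the correction. The made-up dataset, the model choice and the stand-in “corrected” column are assumptions for illustration, not details Accenture has published.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def accuracy_tradeoff(X_raw, X_fair, y):
    """Compare model accuracy before and after a bias correction of the features."""
    results = {}
    for name, X in {"before": X_raw, "after": X_fair}.items():
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        model = LogisticRegression().fit(X_tr, y_tr)
        results[name] = accuracy_score(y_te, model.predict(X_te))
    # How much accuracy the fairness correction costs; a large number may mean
    # the underlying data is simply inadequate for the task.
    results["accuracy_cost"] = results["before"] - results["after"]
    return results

# Made-up features: X_raw leaks the sensitive signal, X_fair has it replaced with noise.
rng = np.random.default_rng(1)
X_raw = rng.normal(size=(500, 3))
y = (X_raw[:, 0] + X_raw[:, 1] > 0).astype(int)
X_fair = X_raw.copy()
X_fair[:, 0] = rng.normal(size=500)  # stand-in for a residualized, sensitive-driven column

print(accuracy_tradeoff(X_raw, X_fair, y))
```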

In Europe, for example, there are rules that place an obligation on data processors to prevent errors, bias and discrimination in automated decisions. They can also be required to give individuals information about the logic of an automated decision that affects them. So actively choosing a decision model that’s patently unfair would invite a lot of legal risk.

 

While Chowdhury concedes there is an accuracy cost to correcting bias in an AI model, she says trade-offs can “vary wildly”. “It can be that your model is incredibly unfair and to correct it to be a lot more fair is not going to impact your model that much… maybe by 1% or 2% [accuracy]. So it’s not that big of a deal. And then in other cases you may see a wider shift in model accuracy.”

She says it’s also possible the tool might raise substantial questions for users over the appropriateness of an entire data-set — essentially showing them that a data-set is “simply inadequate for your needs”.

“If you see a huge shift in your model accuracy that probably means there’s something wrong in your data. And you might need to actually go back and look at your data,” she says. “So while this tool does help with corrections it is part of this larger process — where you may actually have to go back and get new data, get different data. What this tool does is able to highlight that necessity in a way that’s easy to understand.

“Previously people didn’t have that ability to visualize and understand that their data may actually not be adequate for what they’re trying to solve for.”

She adds: “This may have been data that you’ve been using for quite some time. And it may actually cause people to re-examine their data, how it’s shaped, how societal influences influence outcomes. That’s kind of the beauty of artificial intelligence as a sort of subjective observer of humanity.”

While tech giants may have developed their own internal tools for assessing the neutrality of their AI algorithms — Facebook has one called Fairness Flow, for example — Chowdhury argues that most non-tech companies will not be able to develop their own similarly sophisticated tools for assessing algorithmic bias.

Which is where Accenture is hoping to step in with a support service — and one that also embeds ethical frameworks and toolkits into the product development lifecycle, so R&D remains as agile as possible.

“One of the questions that I’m always faced with is how do we integrate ethical behavior in way that aligns with rapid innovation. So every company is really adopting this idea of agile innovation and development, etc. People are talking a lot about three to six month iterative processes. So I can’t come in with an ethical process that takes three months to do. So part of one of my constraints is how do I create something that’s easy to integrate into this innovation lifecycle.”

One specific drawback is that, currently, the tool has not been verified as working across different types of AI models. Chowdhury says it’s principally been tested on classification models that group people, so it may not be suitable for other types. (Though she says their next step will be to test it on “other kinds of commonly used models”.)

More generally, she says the challenge is that many companies are hoping for a magic “push button” tech fix-all for algorithmic bias. Which of course simply does not — and will not — exist.

“If anything there’s almost an overeagerness in the market for a technical solution to all their problems… and this is not the case where tech will fix everything,” she warns. “Tech can definitely help but part of this is having people understand that this is an informational tool, it will help you, but it’s not going to solve all your problems for you.”

The tool was co-prototyped with the help of a data study group at the UK’s Alan Turing Institute, using publicly available data-sets. 

During prototyping, when the researchers were using a German data-set relating to credit risk scores, Chowdhury says the team realized that nationality was influencing a lot of other variables. And for credit risk outcomes they found decisions were more likely to be wrong for non-German nationals.

They then used the tool to equalize the outcome and found it didn’t have a significant impact on the model’s accuracy. “So at the end of it you have a model that is just as accurate as the previous models were in determining whether or not somebody is a credit risk. But we were confident in knowing that one’s nationality did not have undue influence over that outcome.”

A paper about the prototyping of the tool will be made publicly available later this year, she adds.