IBM has launched a software service that scans AI systems as they work in order to detect bias and provide explanations for the automated decisions being made — a degree of transparency that may be necessary for compliance purposes, not just a company’s own due diligence.
The new trust and transparency system runs on the IBM cloud and works with models built from what IBM bills as a wide variety of popular machine learning frameworks and AI-build environments — including its own Watson tech, as well as TensorFlow, SparkML, AWS SageMaker, and AzureML.
It says the service can be customized to specific organizational needs via programming to take account of the “unique decision factors of any business workflow”.
The fully automated SaaS explains decision-making and detects bias in AI models at runtime — so as decisions are being made — which means it’s capturing “potentially unfair outcomes as they occur”, as IBM puts it.
It will also automatically recommend data to add to the model to help mitigate any bias that has been detected.
Explanations of AI decisions include showing which factors weighted the decision in one direction vs another; the confidence in the recommendation; and the factors behind that confidence.
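The shape of such an explanation is easy to sketch. Here is a minimal, hypothetical illustration in Python — the model, feature names, and weights are invented for the example and are not IBM’s actual system. Per-factor contributions show which way each input pushed the decision, and a logistic score stands in for the confidence figure:

```python
import math

# Hypothetical weights for a loan-approval model (illustrative values only).
WEIGHTS = {"income": 0.8, "debt_ratio": -1.2, "account_age": 0.3}
BIAS = -0.1

def explain(features):
    """Return the decision, its confidence, and each factor's contribution."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    confidence = 1 / (1 + math.exp(-score))  # logistic score as confidence
    decision = "approve" if confidence >= 0.5 else "deny"
    return decision, confidence, contributions

decision, confidence, contributions = explain(
    {"income": 1.0, "debt_ratio": 0.4, "account_age": 2.0}
)
```

In this toy case, income and account age push the decision toward approval, the debt ratio pushes against it, and the logistic score doubles as the confidence figure a dashboard would surface.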
IBM also says the software keeps records of the AI model’s accuracy, performance and fairness, along with the lineage of the AI systems — meaning they can be “easily traced and recalled for customer service, regulatory or compliance reasons”.
As one example on the compliance front, the EU’s GDPR privacy framework references automated decision making, and includes a right for people to be given detailed explanations of how algorithms work in certain scenarios — meaning businesses may need to be able to audit their AIs.
The IBM AI scanner tool provides a breakdown of automated decisions via visual dashboards — an approach it bills as reducing dependency on “specialized AI skills”.
However IBM also intends its own professional services staff to work with businesses on the new software service. So it will be selling AI, ‘a fix’ for AI’s imperfections, and experts to help smooth any wrinkles when enterprises are trying to fix their AIs… Which suggests that while AI will indeed remove some jobs, automation will be busy creating other types of work.
Nor is IBM the first professional services firm to spot a business opportunity around AI bias. A few months ago Accenture outed a fairness tool for identifying and fixing unfair AIs.
So with a major push towards automation across multiple industries there also looks to be a pretty sizeable scramble to set up and sell services to patch any problems that arise as a result of increasing use of AI.
And, indeed, to encourage more businesses to feel confident about jumping in and automating more. (On that front IBM cites research it conducted which found that while 82% of enterprises are considering AI deployments, 60% fear liability issues and 63% lack the in-house talent to confidently manage the technology.)
In addition to launching its own (paid-for) AI auditing tool, IBM says its research division will be open sourcing an AI bias detection and mitigation toolkit — with the aim of encouraging “global collaboration around addressing bias in AI”.
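Bias-detection metrics of this kind are often simple ratios over model outcomes. As a sketch (the function and data below are illustrative, not taken from IBM’s toolkit), here is the widely used disparate impact ratio, which compares favourable-outcome rates between groups:

```python
def disparate_impact(outcomes, groups, privileged):
    """Ratio of favourable-outcome rates: unprivileged / privileged.

    Values below ~0.8 are commonly treated as evidence of bias
    (the "four-fifths rule"); 1.0 means parity.
    """
    counts = {True: [0, 0], False: [0, 0]}  # is_privileged -> [favourable, total]
    for outcome, group in zip(outcomes, groups):
        key = group == privileged
        counts[key][0] += outcome
        counts[key][1] += 1
    priv_rate = counts[True][0] / counts[True][1]
    unpriv_rate = counts[False][0] / counts[False][1]
    return unpriv_rate / priv_rate

# 8 of 10 privileged applicants approved vs 4 of 10 unprivileged:
ratio = disparate_impact(
    outcomes=[1] * 8 + [0] * 2 + [1] * 4 + [0] * 6,
    groups=["A"] * 10 + ["B"] * 10,
    privileged="A",
)
```

A ratio of 0.5, as in this toy data, would fall well below the four-fifths threshold and flag the model for review.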
“IBM led the industry in establishing trust and transparency principles for the development of new AI technologies. It’s time to translate principles into practice,” said David Kenny, SVP of cognitive solutions at IBM, commenting in a statement. “We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making.”
A new research report has raised concerns about how in-home smart devices such as AI virtual voice assistants, smart appliances, and security and monitoring technologies could be gathering and sharing children’s data.
It calls for new privacy measures to safeguard kids and make sure age appropriate design code is included with home automation technologies.
The report’s author, Barassi, wants the UK’s data protection agency to launch a review of what she terms “home life data” — meaning the information harvested by smart in-home devices that can end up messily mixing adult data with kids’ information — to consider its impact on children’s privacy, and “put this concept at the heart of future debates about children’s data protection”.
“Debates about the privacy implications of AI home assistants and Internet of Things focus a lot on the collection and use of personal data. Yet these debates lack a nuanced understanding of the different data flows that emerge from everyday digital practices and interactions in the home and that include the data of children,” she writes in the report.
“When we think about home automation therefore, we need to recognise that much of the data that is being collected by home automation technologies is not only personal (individual) data but home life data… and we need to critically consider the multiple ways in which children’s data traces become intertwined with adult profiles.”
The report gives examples of multi-user functions and aggregated profiles (such as Amazon’s Household Profiles feature) as constituting a potential risk to children’s privacy.
Another example cited is biometric data — a type of information frequently gathered by in-home ‘smart’ technologies (such as via voice or facial recognition tech). Yet the report asserts that generic privacy policies often do not differentiate between adults’ and children’s biometric data. So that’s another grey area being critically flagged by Barassi.
She’s submitted the report to the ICO in response to its call for evidence and views on an Age Appropriate Design Code it will be drafting. This code is a component of the UK’s new data protection legislation intended to support and supplement rules on the handling of children’s data contained within pan-EU privacy regulation — by providing additional guidance on design standards for online information services that process personal data and are “likely to be accessed by children”.
And it’s very clear that devices like smart speakers intended to be installed in homes where families live are very likely to be accessed by children.
The report concludes:
There is no acknowledgement so far of the complexity of home life data, and much of the privacy debates seem to be evolving around personal (individual) data. It seems that companies are not recognizing the privacy implications involved in children’s daily interactions with home automation technologies that are not designed for or targeted at them. Yet they make sure to include children in the advertising of their home technologies. Much of the responsibility of protecting children is in the hands of parents, who struggle to navigate Terms and Conditions even after changes such as GDPR [the European Union’s new privacy framework]. It is for this reason that we need to find new measures and solutions to safeguard children and to make sure that age appropriate design code is included within home automation technologies.
“We’ve seen privacy concerns raised about smart toys and AI virtual assistants aimed at children, but so far there has been very little debate about home hubs and smart technologies aimed at adults that children encounter and that collect their personal data,” adds Barassi commenting in a statement.
“The very newness of the home automation environment means we do not know what algorithms are doing with this ‘messy’ data that includes children’s data. Firms currently fail to recognise the privacy implications of children’s daily interactions with home automation technologies that are not designed or targeted at them.
“Despite GDPR, it’s left up to parents to protect their children’s privacy and navigate a confusing array of terms and conditions.”
The report also includes a critical case study of Amazon’s Household Profiles — a feature that allows Amazon services to be shared by members of a family — with Barassi saying she was unable to locate any information on Amazon’s US or UK privacy policies on how the company uses children’s “home life data” (e.g. information that might have been passively recorded about kids via products such as Amazon’s Alexa AI virtual assistant).
“It is clear that the company recognizes that children interact with the virtual assistants or can create their own profiles connected to the adults. Yet I can’t find an exhaustive description or explanation of the ways in which their data is used,” she writes in the report. “I can’t tell at all how this company archives and sells my home life data, and the data of my children.”
Amazon does make this disclosure on children’s privacy — though it does not specifically state what it does in instances where children’s data might have been passively recorded (i.e. as a result of one of its smart devices operating inside a family home.)
We asked Amazon to clarify its handling of children’s data but at the time of writing the company had not responded to multiple requests for comment.
The EU’s new GDPR framework does require data processors to take special care in handling children’s data.
In its guidance on this aspect of the regulation the ICO writes: “You should write clear privacy notices for children so that they are able to understand what will happen to their personal data, and what rights they have.”
The ICO also warns: “The GDPR also states explicitly that specific protection is required where children’s personal data is used for marketing purposes or creating personality or user profiles. So you need to take particular care in these circumstances.”
For customer service, Ultimate.ai‘s thesis is it’s not humans or AI but humans and AI. The Helsinki- and Berlin-based startup has built an AI-powered suggestion engine that, once trained on clients’ data-sets, is able to provide real-time help to (human) staff dealing with customer queries via chat, email and social channels. So the AI layer is intended to make the humans behind the screens smarter and faster at responding to customer needs — as well as freeing them up from handling basic queries to focus on more complex issues.
AI-fuelled chatbots have fast become a very crowded market, with hundreds of so-called ‘conversational AI’ startups all vying to serve the customer service cause.
Ultimate.ai stands out by merit of having focused on non-English language markets, says co-founder and CEO Reetu Kainulainen. This is a consequence of the business being founded in Finland, whose language belongs to the Uralic family and is far removed from English in sound and grammatical character.
“[We] started with one of the toughest languages in the world,” he tells TechCrunch. “With no available NLP [natural language processing] able to tackle Finnish, we had to build everything in house. To solve the problem, we leveraged state-of-the-art deep neural network technologies.
“Today, our proprietary deep learning algorithms enable us to learn the structure of any language by training on our clients’ customer service data. Core within this is our use of transfer learning, which we use to transfer knowledge between languages and customers, to provide a high-accuracy NLU engine. We grow more accurate the more clients we have and the more agents use our platform.”
Ultimate.ai was founded in November 2016 and launched its first product in summer 2017. It now has more than 25 enterprise clients, including the likes of Zalando, Telia and Finnair. It also touts partnerships with tech giants including SAP, Microsoft, Salesforce and Genesys — integrating with their Contact Center solutions.
“We partner with these players both technically (on client deployments) and commercially (via co-selling). We also list our solution on their Marketplaces,” he notes.
Prior to taking in this first seed round, it had raised an angel round of €230k in March 2017, as well as relying on revenue generated by the product as soon as it launched.
The $1.3M seed round is co-led by Holtzbrinck Ventures and Maki.vc.
Kainulainen says one of the “key strengths” of Ultimate.ai’s approach to AI for text-based customer service touch-points is rapid set-up when it comes to ingesting a client’s historical customer logs to train the suggestion system.
“Our proprietary clustering algorithms automatically cluster our customer’s historical data (chat, email, knowledge base) to train our neural network. We can go from millions of lines of unstructured data into a trained deep neural network within a day,” he says.
“Alongside this, our state-of-the-art transfer learning algorithms can seed the AI with very limited data — we have deployed Contact Center automation for enterprise clients with as little as 500 lines of historical conversation.”
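Ultimate.ai’s clustering algorithms are proprietary, but the general idea of grouping similar historical queries before training can be sketched with a toy greedy clusterer over word overlap (Jaccard similarity). Everything below, including the threshold, is an illustrative stand-in rather than the company’s method:

```python
def tokens(text):
    """Lowercased bag of words for a support message."""
    return set(text.lower().split())

def jaccard(a, b):
    """Word-overlap similarity between two token sets, in [0, 1]."""
    return len(a & b) / len(a | b)

def cluster_logs(lines, threshold=0.3):
    """Greedily group similar support messages into intent clusters."""
    clusters = []  # each cluster is a list of lines; the first line anchors it
    for line in lines:
        t = tokens(line)
        best, best_sim = None, threshold
        for cluster in clusters:
            sim = jaccard(t, tokens(cluster[0]))
            if sim > best_sim:
                best, best_sim = cluster, sim
        if best is None:
            clusters.append([line])  # no cluster is similar enough: start a new one
        else:
            best.append(line)
    return clusters

logs = [
    "where is my order",
    "where is my package",
    "reset my password please",
    "i forgot my password",
]
clusters = cluster_logs(logs)
```

On this toy data the four messages collapse into two intent clusters — order tracking and password help — each of which could then seed a training label.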
Ultimate.ai’s proprietary NLP achieves “state-of-the-art accuracy at 98.6%”, he claims.
It can also make use of what he dubs “semi-supervised learning” to further boost accuracy over time as agents use the tool.
“Finally, we leverage transfer learning to apply a single algorithmic model across all clients, scaling our learnings from client-to-client and constantly improving our solution,” he adds.
On the competitive front, it’s going up against the likes of IBM’s Watson AI. However Kainulainen argues that IBM’s tools — which he says “require large onboarding projects and are limited in languages with no self-learning capabilities” — make that sort of manual approach to chatbot building “unsustainable in the long-term”.
He also contends that many rivals are saddled with “lengthy set-up and heavy maintenance requirements” which makes them “extortionately expensive”.
A closer competitor (in terms of approach) which he namechecks is TechCrunch Disrupt Battlefield alum Digital Genius. But again they’ve got English-language origins — so he flags that as a differentiating factor vs the proprietary NLP at the core of Ultimate.ai’s product (which he claims can handle any language).
“It is very difficult to scale out of English to other languages,” he argues. “It is also uneconomical to rebuild your architecture to serve multi-language scenarios. Out of necessity, we have been language-agnostic since day one.”
“Our technology and team is tailored to the customer service problem; generic conversational AI tools cannot compete,” he adds. “Within this, we are a full package for enterprises. We provide a complete AI platform, from automation to augmentation, as well as omnichannel capabilities across Chat, Email and Social. Languages are also a key technical strength, enabling our clients to serve their customers wherever they may be.”
The multi-language architecture is not the only claimed differentiator, either.
Kainulainen points to the team’s mission as another key factor on that front, saying: “We want to transform how people work in customer service. It’s not about building a simple FAQ bot, it’s about deeply understanding how the division and the people work and building tools to empower them. For us, it’s not Superagent vs. Botman, it’s Superagent + Botman.”
So it’s not trying to suggest that AI should replace your entire customer service team but rather enhance your in-house humans.
Asked what the AI can’t do well, he says this boils down to interactions that are transactional vs relational — with the former category meshing well with automation, but the latter (aka interactions that require emotional engagement and/or complex thought) definitely not something to attempt to automate away.
“Transactional cases are mechanical and AI is good at mechanical. The customer knows what they want (a specific query or action) and so can frame their request clearly. It’s a simple, in-and-out case. Full automation can be powerful here,” he says. “Relational cases are more frequent, more human and more complex. They can require empathy, persuasion and complex thought. Sometimes a customer doesn’t know what the problem is — “it’s just not working”.
“Other times are sales opportunities, which businesses definitely don’t want to automate away (AI isn’t great at persuasion). And some specific industries, e.g. emergency services, see the human response as so vital that they refuse automation entirely. In all of these situations, AI which augments people, rather than replaces, is most effective.
“We see work in customer service being transformed over the next decade. As automation of simple requests becomes the status-quo, businesses will increasingly differentiate through the quality of their human-touch. Customer service will become less labour intensive, higher skilled work. We try and imagine what tools will power this workforce of tomorrow and build them, today.”
On the ethics front, he says customers are always told when they are transferred to a human agent — though that agent will still be receiving AI support (i.e. in the form of suggested replies to help “bolster their speed and quality”) behind the scenes.
Ultimate.ai’s customers define cases they’d prefer an agent to handle — for instance where there may be a sales opportunity.
“In these cases, the AI may gather some pre-qualifying customer information to speed up the agent handle time. Human agents are also brought in for complex cases where the AI has had difficulty understanding the customer query, based on a set confidence threshold,” he adds.
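That routing logic can be sketched in a few lines. The threshold value and flag names below are assumptions for illustration, not Ultimate.ai’s actual API:

```python
CONFIDENCE_THRESHOLD = 0.75  # illustrative value; in practice set per client

def route(confidence, sales_opportunity):
    """Decide whether the AI answers directly or hands off to a human agent.

    `confidence` is the NLU engine's certainty that it understood the query;
    `sales_opportunity` flags case types the client has reserved for humans.
    """
    if sales_opportunity:
        return "human"  # sales conversations stay with people
    if confidence < CONFIDENCE_THRESHOLD:
        return "human"  # the AI is unsure it understood the query
    return "ai"

channel = route(confidence=0.92, sales_opportunity=False)
```

A high-confidence, non-sales query is answered automatically; anything below the threshold, or flagged as a sales case, lands with an agent (who still sees AI-suggested replies).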
Kainulainen says the seed funding will be used to enhance the scalability of the product, with investments going into its AI clustering system.
The team will also be targeting underserved language markets to chase scale — “focusing heavily on the Nordics and DACH [Germany, Austria, Switzerland]”.
“We are building out our teams across Berlin and Helsinki. We will be working closely with our partners – SAP, Microsoft, Salesforce and Genesys — to further this vision,” he adds.
Commenting on the funding in a statement, Jasper Masemann, investment manager at Holtzbrinck Ventures, added: “The customer service industry is a huge market and one of the world’s largest employers. Ultimate.ai addresses the main industry challenges of inefficiency, quality control and high people turnover with latest advancements in deep learning and human machine hybrid models. The results and customer feedback are the best I have seen, which makes me very confident the team can become a forerunner in this space.”
A fascinating project called Amadeus Code promises to out-Tay-Tay Tay Tay and out-Bon Bon Iver. The AI-based system uses data from previous musical hits to create entirely new compositions on the fly and darn if these crazy robot songs aren’t pretty good.
The video above is a MIDI version of an AI-produced song, and the video below shows the song fully produced by non-AI human musicians. The results, while a little odd, are very impressive.
Jun Inoue, Gyo Kitagawa, and Taishi Fukuyama created Amadeus Code, and all have experience in music and music production. Inoue is a renowned Japanese music producer who has sold 10 million singles. Fukuyama worked at The Echo Nest and launched the first Music Hack Day in Tokyo. Fukuyama is also the director of the Hit Song Research Lab and went to Berklee College of Music.
“We have analyzed decades of contemporary songs and classical music, songs of economic and/or social impact, and have created a proprietary songwriting technology that is specialized to create top line melodies of songs. We have recently released Harmony Library, which gives users direct access to the songs that power the songwriting AI for Amadeus Code,” said Inoue. “We uniquely specialize in creating top line melodies for songs that can be a source of high quality inspiration for music professionals. We also do have plans that may overlap with other music AI companies in the market today in terms of offering hobbyists a service to quickly create completed audio tracks.”
When asked if AI will ever replace his favorite musicians, folks like Michael & Janet Jackson or George Gershwin, Inoue laughed.
“Absolutely not. This AI will not tell you about its struggles and illuminate your inner worlds through real human storytelling, which is ultimately what makes music so intimate and compelling. Similarly to how the sampler, drum machine, multitrack recorder and many other creative technologies have done in the past, we see AI to be a creative tool for artists to push the boundaries of popular music. When these AI tools eventually find their place in the right creative hands, it will have the potential to create a new entire economy of opportunities,” he said.
Researchers at the University of Maryland are adapting the techniques used by birds and bugs to teach drones how to fly through small holes at high speeds. The drone requires only a few sensing passes to define the opening, and the technique lets even a larger drone fly through an irregularly shaped hole with no training.
Nitin J. Sanket, Chahat Deep Singh, Kanishka Ganguly, Cornelia Fermüller, and Yiannis Aloimonos created the project, called GapFlyt, to teach drones using only simple, insect-like eyes.
The technique they used, called optical flow, creates a 3D model using a very simple, monocular camera. By marking features in each subsequent picture, the drone can tell the shape and depth of holes based on what changed in each photo. Things closer to the drone move more than things further away, allowing the drone to see the foreground vs. the background.
As you can see in the video below, the researchers have created a very messy environment in which to test their system. The Bebop 2 drone with an NVIDIA Jetson TX2 GPU on board flits around the hole like a bee and then buzzes right through at 2 meters per second, a solid speed. Further, the researchers confused the environment by making the far wall similar to the closer wall, proving that the technique can work in novel and messy situations.
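The core depth cue behind optical flow is simple enough to sketch: for a translating camera, a static point’s image motion scales inversely with its depth, so ranking tracked features by pixel displacement separates foreground from background. A minimal illustration (the feature names and displacement values here are invented, not from the GapFlyt paper):

```python
def nearest_first(displacements):
    """Order tracked features from nearest to farthest.

    Under camera translation, the image motion of a static point is
    inversely proportional to its depth, so a larger pixel shift
    between frames means a closer point.
    """
    return sorted(displacements, key=displacements.get, reverse=True)

# Hypothetical pixel displacements of three tracked features between frames:
flow = {"foreground_wall": 14.0, "gap_edge": 9.5, "back_wall": 2.1}
order = nearest_first(flow)
```

Features seen through the gap move least, so even when the far wall is textured to resemble the near one, the motion parallax still reveals where the opening is.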
At a symposium in Washington DC on Friday, DARPA announced plans to invest $2 billion in artificial intelligence research over the next five years.
In a program called “AI Next,” the agency has more than 20 programs currently in the works and will focus on “enhancing the security and resiliency of machine learning and AI technologies, reducing power, data, performance inefficiencies and [exploring] ‘explainability’” of these systems.
“Machines lack contextual reasoning capabilities, and their training must cover every eventuality, which is not only costly, but ultimately impossible,” said director Dr. Steven Walker. “We want to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognize new situations and environments and adapt to them.”
Artificial intelligence is a broad term that can encompass everything from intuitive search features to true machine learning, and all definitions rely heavily on consuming data to inform their algorithms and “learn.” DARPA has a long history of research and development in this space, but has recently seen its efforts surpassed by foreign powers like China, which earlier this summer announced plans to become an AI leader by 2030.
In many cases these AI are still in their infancy, but the technology — especially machine learning — has the potential to completely transform not only how users interact with their own technology but how corporate and governmental institutions use this technology to interact with their employees and citizens.
One particular concern with machine learning is the potential bias that can be incorporated into these systems as a result of the data they consume during training. If the data contains holes or misinformation, the machines can come to incorrect conclusions — such as which individuals are “more likely” to commit crimes — that can have devastating consequences. And, even more frighteningly, how a machine organically comes to these conclusions is obscured inside something called a black box.
In other words, even the researchers who design the algorithms can’t quite know how machines are reaching their conclusions.
That said, when handled with care and forethought, AI research can be a powerful source of innovation and advancement as well. As DARPA moves forward with its research, we will see how they handle these important technical and societal questions.
Crossing Minds, which is launching in our Disrupt SF 2018 Battlefield today, is an AI startup that focuses on recommendations. The company’s app, Hai, provides you with a wide range of entertainment recommendations, including books, music, shows, video games and restaurants, based on the data it can gather about you from services like Spotify, Netflix, Hulu and your Xbox.
The company’s co-founders Alexandre Robicquet (CEO) and Emile Contal (CTO) tell me that they want Hai, which is available for iOS and on the web, to become people’s central hub for their entertainment needs. Both founders have extensive experience in machine learning and also managed to bring Sebastian Thrun on as an advisor. The team describes Hai as the “first pure cross-domain recommendation engine truly focused on the user.”
Ahead of its launch, Crossing Minds raised $3.5 million from Index Ventures, Sound Ventures and You & Mr Jones Brandtech Ventures.
As the team told me, the idea for Crossing Minds and Hai came from their own need of wanting a smart recommendation engine that went beyond a single domain. To get started, they downloaded a few data sets and started experimenting. That was 2016. Those first experiments were successful, but to build a full-scale product, the team needed more data and cleaner data sets. That’s what Crossing Minds focused on over the course of the last year or so. The focus isn’t really a surprise: recommendation data is rather messy, and there’s no way to build a machine learning-based recommendation system without a lot of it.
Then, using techniques like transfer learning and other modern machine learning approaches, the team is able to take what it knows about you and apply that to other domains as well. “For example, when you read a biography of a band’s member, we can extract information that we can then relate to a movie or restaurants and so on,” Contal explained.
The app itself is organized around three tabs: A discovery tab that surfaces its recommendations; the “Ask me” tab for when you are looking for very specific recommendations (a movie on Netflix, maybe); and the training tab that allows you to train Hai’s algorithm. For movies and other content that’s immediately accessible on your phone or on the web, Hai will also show a “Watch Now” button.
On the technical side, Crossing Minds uses all of the usual machine learning frameworks, but one interesting twist here is that the team decided to build its own hardware infrastructure with off-the-shelf GPUs to train its models and for inference. In part that’s because renting GPUs from a major cloud provider by the hour can quickly get expensive, but the team also noted that owning the hardware allows them to have full control over it and also offers security benefits (though I’m sure the cloud providers would disagree with that last part).
Over the course of the last few months, the team tested Hai with about 1,000 beta testers. The company isn’t quite ready to launch Hai to everybody, but it’s now taking beta sign-ups and plans to open the service to a wider audience over time.
Only a few years ago, talking to your phone or computer felt really weird. These days, thanks to Alexa, the Google Assistant and (for its three users) Cortana and Bixby, it’s becoming the norm. At this year’s Disrupt SF 2018, we’ll sit down with AISense founder and CEO Sam Liang and Google’s Cathy Pearl to discuss the past, present and future of voice — both for interacting with computers and as a way to help us capture and organize information.
AISense, on the other hand, may not be a household name yet, but its flagship product, Otter.ai, is quickly gaining a following. Otter.ai is a mobile and web app that automatically transcribes phone calls, lectures, interviews and meetings in real time. The team built its own voice recognition tech that can distinguish between speakers, making for pretty clean transcripts that aren’t always perfect but are still very usable. Otter.ai is also the exclusive provider of automatic meeting transcription for Zoom Video Communications.
We’ll be using Otter.ai to provide real-time transcripts of all the panels on the Disrupt stage next week, so you’ll be able to see it in action at the event.