Graph databases have always been useful for finding connections across vast data sets, and it turns out that capability is quite handy in artificial intelligence and machine learning too. Today, Neo4j, the maker of the open source and commercial graph database platform, announced the release of Neo4j 3.5, which has a number of new features aimed specifically at AI and machine learning.
Neo4j founder and CEO Emil Eifrem says he recognized the connection between AI, machine learning and graph databases a while ago, but it has taken some time for the market to catch up to the idea.
“There has been a lot of momentum around AI and graphs…Graphs are very fundamental to AI. At the same time we were seeing some early use cases, but not really broad adoption, and that’s what we’re seeing right now,” he explained.
AI graph use cases. Graphic: Neo4j
To help advance AI use cases, today’s release includes a new full-text search capability, which Eifrem says has been one of the most requested features. This is important because when you are making connections between entities, you have to be able to find all of the examples regardless of how a term is worded — for example, human versus humans versus people.
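Neo4j 3.5 exposes full-text search through built-in Cypher procedures backed by Apache Lucene. As a rough sketch of what using it looks like from Python with the official driver (the index, label and property names here are invented for illustration):

# pip install neo4j
from neo4j import GraphDatabase

# Connection details are placeholders.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Create a full-text index over the name property of Person nodes.
    session.run('CALL db.index.fulltext.createNodeIndex('
                '"entityNames", ["Person"], ["name"])')
    # Lucene query syntax allows fuzzy terms, so variant wordings such as
    # "human" vs. "humans" can still match.
    result = session.run('CALL db.index.fulltext.queryNodes('
                         '"entityNames", "human~") '
                         'YIELD node, score RETURN node.name AS name, score')
    for record in result:
        print(record["name"], record["score"])

driver.close()

The fuzzy operator is doing the human-versus-humans matching in this sketch; a production setup might also lean on Lucene analyzers for stemming.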
Part of delivering that search capability involved building the company’s own indexing engine to increase indexing speed, which becomes essential with ever more data to process. “Another really important piece of functionality is that we have improved our data ingestion very significantly. We have 5x end-to-end performance improvements when it comes to importing data. And this is really important for connected feature extraction, where obviously, you need a lot of data to be able to train the machine learning,” he said. That also means faster sorting of data.
Other features in the new release include improvements to the company’s own Cypher database query language and better visualization of graphs, which is useful for showing how machine learning algorithms reach their conclusions, a property known as AI explainability. The company also announced support for the Go language and increased security.
Graph databases are growing increasingly important as we look to find connections between data. The most common use case is the knowledge graph, which is what lets us see connections in huge data sets. Common examples include who we are connected to on a social network like Facebook, or the suggestion that if we bought one item, we might like similar items on an ecommerce site.
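That ecommerce example is worth making concrete, because it shows why graphs fit recommendations so naturally: “people who bought this also bought” is just a two-hop traversal. A minimal sketch in Cypher, run from Python (the Product label, BOUGHT relationship and sku property are invented for illustration):

# pip install neo4j
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Start from a product, hop to the people who bought it, then to the
    # other products those people bought; rank by how often each co-occurs.
    result = session.run(
        "MATCH (p:Product {sku: $sku})<-[:BOUGHT]-(:Person)-[:BOUGHT]->(rec:Product) "
        "WHERE rec <> p "
        "RETURN rec.name AS recommendation, count(*) AS strength "
        "ORDER BY strength DESC LIMIT 5",
        sku="ABC-123")
    for record in result:
        print(record["recommendation"], record["strength"])

driver.close()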
Other use cases include connected feature extraction, a common machine learning training technique that can look at a lot of data and extract the connections, context and relationships for a particular piece of data, such as suspects in a criminal case and the people connected to them.
Neo4j has over 300 large enterprise customers including Adobe, Microsoft, Walmart, UBS and NASA. The company launched in 2007 and has raised $80 million. The last round was $36 million in November 2016.
Autonomous vehicles and robots have to know how to get from A to B without hitting obstacles or pedestrians — but how can they do so politely and without disturbing nearby humans? That’s what Stanford’s Jackrabbot project aims to learn, and now a redesigned robot will be cruising campus learning the subtleties of humans negotiating one another’s personal space.
“There are many behaviors that we humans subconsciously follow – when I’m walking through crowds, I maintain personal distance or, if I’m talking with you, someone wouldn’t go between us and interrupt,” said grad student Ashwini Pokle in a Stanford News release. “We’re working on these deep learning algorithms so that the robot can adapt these behaviors and be more polite to people.”
Of course there are practical applications pertaining to last mile problems and robotic delivery as well. What do you do if someone stops in front of you? What if there’s a group running up behind? Experience is the best teacher, as usual.
The JackRabbot project team with JackRabbot 2 (from left to right): Patrick Goebel, Noriaki Hirose, Tin Tin Wisniewski, Amir Sadeghian, Alan Federman, Silvio Savarese, Roberto Martín-Martín, Pin Pin Tea-mangkornpan and Ashwini Pokle
The new robot has a vastly improved sensor suite compared to its predecessor: two Velodyne lidar units giving 360-degree coverage, plus a set of stereo cameras making up its neck that give it another depth-sensing 360-degree view. The cameras and sensors on its head can also be pointed wherever needed, of course, just like ours. All this imagery is collated by a pair of new GPUs in its base/body.
Amir Sadeghian, one of the researchers, said this makes Jackrabbot 2 “one of the most powerful robots of its size that has ever been built.”
This will allow the robot to sense human motion with a much greater degree of precision than before, and also operate more safely. It will also give the researchers a chance to see how the movement models created by the previous robot integrate with this new imagery.
The other major addition is a totally normal-looking arm that Jackrabbot 2 can use to gesture to others. After all, we do it, right? When it’s unclear who should enter a door first or what side of a path they should take, a wave of the hand is all it takes to clear things up. Usually. Hopefully this kinked little gripper accomplishes the same thing.
Jackrabbot 2 can zoom around for several hours at a time, Sadeghian said. “At this stage of the project for safety we have a human with a safety switch accompanying the robot, but the robot is able to navigate in a fully autonomous way.”
Having working knowledge of how people use the space around them and how to predict their movements will be useful to startups like Kiwi, Starship, and Marble. The first time a delivery robot smacks into someone’s legs is the last time they consider ordering something via one.
Salespeople usually spend their days talking. They are on the phone and in meetings, but when it comes to updating Salesforce, they are back at the keyboard again typing notes and milestones, or searching for metrics about their performance. Today, Salesforce decided to change that by introducing Einstein Voice, a bit of AI magic that allows salespeople to talk to the program instead of typing.
In a world where Amazon Alexa and Siri make talking to our devices more commonplace in our non-work lives, it makes sense that companies are trying to bring that same kind of interaction to work.
In this case, you can conversationally enter information about a meeting, get daily briefings about key information on your day’s meetings (particularly nice for salespeople who spend their day in the car) and interact with Salesforce data dashboards by asking questions instead of typing queries.
All of these tools are designed to make life easier for busy salespeople. Most hate the administrative part of their jobs because when they are entering information, even if having a record will benefit them in the long run, they are not doing their primary job, which is selling stuff.
For the meetings notes part, instead of typing on a smartphone, which can be a challenge anyway, you simply touch Meeting Debrief in the Einstein Voice mobile tool and start talking to enter your notes. The tool interprets what you’re saying. As with most transcription services, this is probably not perfect and will require some correcting, but should get you most of the way there.
It can also pick out key data like dates and deal amounts and let you set action items to follow up on.
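Salesforce hasn’t said how that extraction works under the hood, but it is a classic entity recognition problem. A toy sketch in Python, with plain regular expressions standing in for whatever NLP models Einstein actually uses, gives a sense of the task:

import re

# A hypothetical snippet of transcribed meeting notes.
transcript = ("Met with Acme Corp today. They want to close a $250,000 "
              "deal by March 15, so set a follow-up call for next week.")

# Naive patterns for illustration only; a production system would rely on
# trained NLP models rather than regexes.
amounts = re.findall(r"\$[\d,]+(?:\.\d{2})?", transcript)
dates = re.findall(r"\b(?:January|February|March|April|May|June|July|August|"
                   r"September|October|November|December)\s+\d{1,2}\b",
                   transcript)

print(amounts)  # ['$250,000']
print(dates)    # ['March 15']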
Brent Leary, founder and principal analyst at CRM Essentials, says this is a natural progression for Salesforce as people get more comfortable using voice interfaces. “I think this will make voice-first devices and assistants as important pieces to the CRM puzzle from both a customer experience and an employee productivity perspective,” he told TechCrunch.
It’s worth pointing out that Tact.AI has been offering this type of voice interaction on top of Salesforce for some time. It’s likely ahead of Salesforce at this point, but Leary believes having Salesforce enter the voice arena will probably benefit the startup more than hurt it.
“The Salesforce tide will lift all boats, and companies like Tact will see their profile increased significantly because while Salesforce is the leader in the category, its share of the market is still less than 20%,” he pointed out.
Einstein is Salesforce’s catch-all brand for its artificial intelligence layer. In this case it’s using natural language processing, voice recognition technology and other artificial intelligence pieces to interpret the person’s voice and transcribe what they are saying or understand their request better.
Typically, Salesforce starts with a small set of functionality and then builds on it over time. That’s very likely what it is doing here, coming out with a product announcement in time for Dreamforce, its massive customer conference next week.
IBM has launched a software service that scans AI systems as they work in order to detect bias and provide explanations for the automated decisions being made — a degree of transparency that may be necessary for compliance purposes, not just a company’s own due diligence.
The new trust and transparency system runs on the IBM cloud and works with models built from what IBM bills as a wide variety of popular machine learning frameworks and AI build environments — including its own Watson tech, as well as TensorFlow, SparkML, AWS SageMaker and AzureML.
It says the service can be customized to specific organizational needs via programming to take account of the “unique decision factors of any business workflow”.
The fully automated SaaS explains decision-making and detects bias in AI models at runtime — so as decisions are being made — which means it’s capturing “potentially unfair outcomes as they occur”, as IBM puts it.
It will also automatically recommend data to add to the model to help mitigate any bias that has been detected.
Explanations of AI decisions include showing which factors weighted the decision in one direction vs another; the confidence in the recommendation; and the factors behind that confidence.
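IBM hasn’t published the internals, but the detection half of this can be illustrated with a standard fairness metric such as disparate impact, the ratio of favorable-outcome rates between a protected group and everyone else, computed over a window of live decisions. A minimal sketch in Python, with synthetic data and an illustrative threshold:

import numpy as np

def disparate_impact(outcomes, groups, protected):
    """Ratio of favorable-outcome rates: protected group vs. everyone else."""
    outcomes = np.asarray(outcomes)
    mask = np.asarray(groups) == protected
    return outcomes[mask].mean() / outcomes[~mask].mean()

# A window of recent automated decisions (1 = favorable outcome).
decisions = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1]
group = ["a", "a", "a", "b", "a", "a", "b", "a", "a", "a", "b", "b"]

di = disparate_impact(decisions, group, protected="a")
# The "four-fifths rule" is a common rule of thumb: flag ratios below 0.8.
if di < 0.8:
    print(f"Potential bias detected: disparate impact = {di:.2f}")

Running the check continuously over a sliding window of decisions, rather than once at training time, is what distinguishes runtime monitoring of the kind IBM is describing.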
IBM also says the software keeps records of the AI model’s accuracy, performance and fairness, along with the lineage of the AI systems — meaning they can be “easily traced and recalled for customer service, regulatory or compliance reasons”.
For one example on the compliance front, the EU’s GDPR privacy framework references automated decision making, and includes a right for people to be given detailed explanations of how algorithms work in certain scenarios — meaning businesses may need to be able to audit their AIs.
The IBM AI scanner tool provides a breakdown of automated decisions via visual dashboards — an approach it bills as reducing dependency on “specialized AI skills”.
However, IBM also intends for its own professional services staff to work with businesses using the new software service. So it will be selling AI, ‘a fix’ for AI’s imperfections, and experts to help smooth any wrinkles when enterprises are trying to fix their AIs… Which suggests that while AI will indeed remove some jobs, automation will be busy creating other types of work.
Nor is IBM the first professional services firm to spot a business opportunity around AI bias. A few months ago Accenture outed a fairness tool for identifying and fixing unfair AIs.
So with a major push towards automation across multiple industries there also looks to be a pretty sizeable scramble to set up and sell services to patch any problems that arise as a result of increasing use of AI.
And, indeed, to encourage more businesses to feel confident about jumping in and automating more. (On that front IBM cites research it conducted which found that while 82% of enterprises are considering AI deployments, 60% fear liability issues and 63% lack the in-house talent to confidently manage the technology.)
In addition to launching its own (paid-for) AI auditing tool, IBM says its research division will be open sourcing an AI bias detection and mitigation toolkit — with the aim of encouraging “global collaboration around addressing bias in AI”.
“IBM led the industry in establishing trust and transparency principles for the development of new AI technologies. It’s time to translate principles into practice,” said David Kenny, SVP of cognitive solutions at IBM, commenting in a statement. “We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making.”
It’s no surprise that Google used its Cloud Next 2018 event in Tokyo today — one of a number of international Cloud Next events that follow its flagship San Francisco conference — to announce a couple of new initiatives that specifically focus on the Japanese market.
Google is also launching an Advanced Solutions Lab in Tokyo. Previously, Google opened similar labs in Dublin, Ireland, as well as Sunnyvale and New York. These labs offer a wide range of machine learning-centric training options, collaborative workspaces for teams that are part of the company’s four-week machine learning training program, and access to Google experts.
The company also today announced that it is working with Fast Retailing, the company behind brands like Uniqlo, to help it adopt new technologies. As its name implies, Fast Retailing would like to retail faster, so it’s looking at Google and its G Suite and machine learning tools to help it accelerate its growth. The code name for this project is ‘Ariake.’
“Making information accessible to all our employees is one of the foundations of the Ariake project, because it empowers them to use human traits like logic, judgment, and empathy to make decisions,” says Tadashi Yanai, CEO of Fast Retailing. “We write business plans every season, and we use collaborative tools like G Suite to make sure they’re available to all. Our work with Google Cloud has gone well beyond demand forecasting; it’s fundamentally changed the way we work together.”
A new research report has raised concerns about how in-home smart devices such as AI virtual voice assistants, smart appliances, and security and monitoring technologies could be gathering and sharing children’s data.
It calls for new privacy measures to safeguard kids and to make sure an age-appropriate design code is included with home automation technologies.
The report’s author, Veronica Barassi, wants the UK’s data protection agency to launch a review of what she terms “home life data” — meaning the information harvested by smart in-home devices that can end up messily mixing adult data with kids’ information — to consider its impact on children’s privacy, and “put this concept at the heart of future debates about children’s data protection”.
“Debates about the privacy implications of AI home assistants and Internet of Things focus a lot on the collection and use of personal data. Yet these debates lack a nuanced understanding of the different data flows that emerge from everyday digital practices and interactions in the home and that include the data of children,” she writes in the report.
“When we think about home automation therefore, we need to recognise that much of the data that is being collected by home automation technologies is not only personal (individual) data but home life data… and we need to critically consider the multiple ways in which children’s data traces become intertwined with adult profiles.”
The report gives examples of multi-user functions and aggregated profiles (such as Amazon’s Household Profiles feature) as constituting a potential risk to children’s privacy.
Another example cited is biometric data, a type of information frequently gathered by in-home ‘smart’ technologies (such as via voice or facial recognition tech). The report asserts that generic privacy policies often do not differentiate between adults’ and children’s biometric data, so that is another grey area being critically flagged by Barassi.
She’s submitted the report to the ICO in response to its call for evidence and views on an Age Appropriate Design Code it will be drafting. This code is a component of the UK’s new data protection legislation intended to support and supplement rules on the handling of children’s data contained within pan-EU privacy regulation — by providing additional guidance on design standards for online information services that process personal data and are “likely to be accessed by children”.
And it’s very clear that devices like smart speakers intended to be installed in homes where families live are very likely to be accessed by children.
The report concludes:
There is no acknowledgement so far of the complexity of home life data, and much of the privacy debates seem to be evolving around personal (individual) data. It seems that companies are not recognizing the privacy implications involved in children’s daily interactions with home automation technologies that are not designed for or targeted at them. Yet they make sure to include children in the advertising of their home technologies. Much of the responsibility of protecting children is in the hands of parents, who struggle to navigate Terms and Conditions even after changes such as GDPR [the European Union’s new privacy framework]. It is for this reason that we need to find new measures and solutions to safeguard children and to make sure that age appropriate design code is included within home automation technologies.
“We’ve seen privacy concerns raised about smart toys and AI virtual assistants aimed at children, but so far there has been very little debate about home hubs and smart technologies aimed at adults that children encounter and that collect their personal data,” adds Barassi, commenting in a statement.
“The very newness of the home automation environment means we do not know what algorithms are doing with this ‘messy’ data that includes children’s data. Firms currently fail to recognise the privacy implications of children’s daily interactions with home automation technologies that are not designed or targeted at them.
“Despite GDPR, it’s left up to parents to protect their children’s privacy and navigate a confusing array of terms and conditions.”
The report also includes a critical case study of Amazon’s Household Profiles — a feature that allows Amazon services to be shared by members of a family — with Barassi saying she was unable to locate any information on Amazon’s US or UK privacy policies on how the company uses children’s “home life data” (e.g. information that might have been passively recorded about kids via products such as Amazon’s Alexa AI virtual assistant).
“It is clear that the company recognizes that children interact with the virtual assistants or can create their own profiles connected to the adults. Yet I can’t find an exhaustive description or explanation of the ways in which their data is used,” she writes in the report. “I can’t tell at all how this company archives and sells my home life data, and the data of my children.”
Amazon does make this disclosure on children’s privacy — though it does not specifically state what it does in instances where children’s data might have been passively recorded (i.e. as a result of one of its smart devices operating inside a family home.)
We asked Amazon to clarify its handling of children’s data but at the time of writing the company had not responded to multiple requests for comment.
The EU’s new GDPR framework does require data processors to take special care in handling children’s data.
In its guidance on this aspect of the regulation the ICO writes: “You should write clear privacy notices for children so that they are able to understand what will happen to their personal data, and what rights they have.”
The ICO also warns: “The GDPR also states explicitly that specific protection is required where children’s personal data is used for marketing purposes or creating personality or user profiles. So you need to take particular care in these circumstances.”
Facial recognition technology presents myriad opportunities as well as risks, but it seems like the government tends to only consider the former when deploying it for law enforcement and clerical purposes. Senator Kamala Harris (D-CA) has written the Federal Bureau of Investigation, Federal Trade Commission, and Equal Employment Opportunity Commission telling them they need to get with the program and face up to the very real biases and risks attending the controversial tech.
In three letters provided to TechCrunch (and embedded at the bottom of this post), Sen. Harris, along with several other notable legislators, pointed out recent research showing how facial recognition can produce or reinforce bias, or otherwise misfire. This must be considered and accommodated in the rules, guidance, and applications of federal agencies.
Other lawmakers and authorities have sent letters to various companies and CEOs or held hearings, but representatives for Sen. Harris explained that there is a need to advance the issue within the government as well.
Sen. Harris at a recent hearing.
Attention paid to agencies like the FTC and EEOC that are “responsible for enforcing fairness” is “a signal to companies that the cop on the beat is paying attention, and an indirect signal that they need to be paying attention too. What we’re interested in is the fairness outcome rather than one particular company’s practices.”
If this research and the possibility of poorly controlled AI systems aren’t considered in the creation of rules and laws, or in the applications and deployments of the technology, serious harm could ensue. Not just positive harm, such as the misidentification of a suspect in a crime, but negative harm, such as calcifying biases in data and business practices in algorithmic form and depriving those affected by the biases of employment or services.
“While some have expressed hope that facial analysis can help reduce human biases, a growing body of evidence indicates that it may actually amplify those biases,” the letter to the EEOC reads.
Here Sen. Harris, joined by Senators Patty Murray (D-WA) and Elizabeth Warren (D-MA), expresses concern over the growing automation of the employment process. Recruitment is a complex process and AI-based tools are being brought in at every stage, so this is not a theoretical problem. As the letter reads:
Suppose, for example, that an African American woman seeks a job at a company that uses facial analysis to assess how well a candidate’s mannerisms are similar to those of its top managers.
First, the technology may interpret her mannerisms less accurately than a white male candidate.
Second, if the company’s top managers are homogeneous, e.g., white and male, the very characteristics being sought may have nothing to do with job performance but are instead artifacts of belonging to this group. She may be as qualified for the job as a white male candidate, but facial analysis may not rate her as highly because her cues naturally differ.
Third, if a particular history of biased promotions led to homogeneity in top managers, then the facial recognition analysis technology could encode and then hide this bias behind a scientific veneer of objectivity.
If that sounds like a fantasy use of facial recognition, you probably haven’t been paying close enough attention. Besides, even if it’s still rare, it makes sense to consider these things before they become widespread problems, right? The idea is to identify issues inherent to the technology.
“We request that the EEOC develop guidelines for employers on the fair use of facial analysis technologies and how this technology may violate anti-discrimination law,” the Senators ask.
A set of questions also follows (as it does in each of the letters): have there been any complaints along these lines, or are there any obvious problems with the tech under current laws? If facial technology were to become mainstream, how should it be tested, and how would the EEOC validate that testing? Sen. Harris and the others request a response, including a timeline for how the Commission plans to look into this, by September 28.
Next on the list is the FTC. This agency is tasked with identifying and punishing unfair and deceptive practices in commerce and advertising; Sen. Harris asserts that the purveyors of facial recognition technology may be considered in violation of FTC rules if they fail to test or account for serious biases in their systems.
“Developers rarely if ever test and then disclose biases in their technology,” the letter reads. “Without information about the biases in a technology or the legal and ethical risks attendant to using it, good faith users may be unintentionally and unfairly engaging in discrimination. Moreover, failure to disclose these biases to purchasers may be deceptive under the FTC Act.”
Another example is offered:
Consider, for example, a situation in which an African American female in a retail store is misidentified as a shoplifter by a biased facial recognition technology and is falsely arrested based on this information. Such a false arrest can cause trauma and substantially injure her future housing, employment, credit, and other opportunities.
Or, consider a scenario in which a young man with a dark complexion is unable to withdraw money from his own bank account because his bank’s ATM uses facial recognition technology that does not identify him as their customer.
Again, this is very far from fantasy. On stage at Disrupt just a couple weeks ago Chris Atageka of UCOT and Timnit Gebru from Microsoft Research discussed several very real problems faced by people of color interacting with AI-powered devices and processes.
The FTC actually had a workshop on the topic back in 2012. But, amazing as it sounds, this workshop did not consider potential biases on the basis of race, gender, age, or other metrics. The agency certainly deserves credit for addressing the issue early, but clearly the industry and the topic have advanced, and it is in the interest of the agency and the people it serves to catch up.
The letter ends with questions and a deadline rather like those for the EEOC: have there been any complaints? How will they assess and address potential biases? Will they issue “a set of best practices on the lawful, fair, and transparent use of facial analysis?” The letter is cosigned by Senators Richard Blumenthal (D-CT), Cory Booker (D-NJ), and Ron Wyden (D-OR).
Last is the FBI, over which Sen. Harris has something of an advantage: the Government Accountability Office issued a report on the very topic of facial recognition tech that had concrete recommendations for the Bureau to implement. What Harris wants to know is, what have they done about these, if anything?
“Although the GAO made its recommendations to the FBI over two years ago, there is no evidence that the agency has acted on those recommendations,” the letter reads.
The GAO had three major recommendations. Briefly summarized: do some serious testing of the Next Generation Identification-Interstate Photo System (NGI-IPS) to make sure it does what they think it does, follow that with annual testing to make sure it’s meeting needs and operating as intended, and audit external facial recognition programs for accuracy as well.
“We are also eager to ensure that the FBI responds to the latest research, particularly research that confirms that face recognition technology underperforms when analyzing the faces of women and African Americans,” the letter continues.
The list of questions here is largely in line with the GAO’s recommendations, merely asking the FBI to indicate whether and how it has complied with them. Has it tested NGI-IPS for accuracy in realistic conditions? Has it tested for performance across races, skin tones, genders, and ages? If not, why not, and when will it? And in the meantime, how can it justify usage of a system that hasn’t been adequately tested, and in fact performs poorest on the targets it is most frequently loosed upon?
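The demographic testing the senators are asking about is, at its core, straightforward to express: score the system separately for each subgroup and compare. A minimal sketch of that kind of per-group accuracy report (synthetic evaluation data and invented group labels, purely for illustration):

from collections import defaultdict

# Each record: (was_a_true_match, system_said_match, demographic_group).
# Synthetic evaluation results for illustration.
results = [
    (True, True, "group_a"), (True, True, "group_a"),
    (False, False, "group_a"), (False, False, "group_a"),
    (True, False, "group_b"), (True, True, "group_b"),
    (False, True, "group_b"), (False, False, "group_b"),
]

correct = defaultdict(int)
total = defaultdict(int)
for truth, predicted, group in results:
    total[group] += 1
    correct[group] += int(truth == predicted)

for group in sorted(total):
    print(f"{group}: accuracy = {correct[group] / total[group]:.2f}")
# group_a: accuracy = 1.00
# group_b: accuracy = 0.50 -- a gap like this is exactly what the
# letters want surfaced before a system is deployed.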
The FBI letter, which has a deadline for response of October 1, is cosigned by Sen. Booker and Cedric Richmond, Chair of the Congressional Black Caucus.
These letters are just a part of what certainly ought to be a government-wide plan to inspect and understand new technology and how it is being integrated with existing systems and agencies. The federal government moves slowly, even at its best, and if it is to avoid or help mitigate real harm resulting from technologies that would otherwise go unregulated it must start early and update often.
Microsoft’s HoloLens mixed reality glasses have always been interesting technology, but it has never been clear how the company would move the device from novelty to viable business tool. Today, it made a move toward the latter, announcing a couple of applications designed to put HoloLens to work in Dynamics 365, giving it a real business purpose.
Dynamics 365 is Microsoft’s one-stop shop for CRM and ERP, where a company can work on some of its key business software functions, including field service, in an integrated fashion. The company has been looking for ways HoloLens could bring computing power to field workers, like repair technicians, for whom even a tablet would be awkward because they need to work with both hands free.
For these people, having a fully functioning Windows 10 computer you can wear on your face could be a big advantage and that’s what Microsoft is hoping to provide with HoloLens. The problem was finding use cases where this would make sense. One idea is providing remote assistance for people out in the field to get help from subject experts back at the office, and today the company announced Dynamics 365 Remote Assist.
In this scenario, the worker is wearing a HoloLens either to understand the repair scenario before they go to the site or to get remote help from a subject expert while they are at the site. The expert can virtually see what the technician is seeing through the HoloLens, and walk them through the repair without leaving the office, even circling parts and providing other annotations in real time.
Microsoft Remote Assist in action with expert walking the technician through the task. Photo: Microsoft
Microsoft is not the first company to create such a solution. ScopeAR announced RemoteAR, a similar product, four months ago, but Microsoft has the advantage of building its tool natively into Windows 10, with all that entails, including data integration to update the various repositories with information after the repair is complete.
The other business scenario the company is announcing today is called Dynamics 365 Layout. A designer can create a 3D representation of something like a store or factory layout in CAD software, view the design in 3D in HoloLens, and adjust it in real time before the design goes live. As Microsoft’s Lorraine Bardeen, who has the cool title of General Manager for Microsoft Mixed Reality, says, instead of creating cardboard mockups and adjusting your 3D CAD drawing on your computer as you find issues, you can put on your HoloLens and make adjustments in a virtual representation of the layout; the CAD drawing updates for you as you make changes.
Laying out the pieces on a factory floor using Dynamics 365 Layout. Photo: Microsoft
Bardeen says the company has worked with customers to find real-world use cases that would save time, effort and money using mixed reality with HoloLens. Microsoft cites companies like Chevron, Ford and ThyssenKrupp Elevators as organizations actively embracing this kind of technology, but it’s still not clear whether HoloLens and mixed reality will become a central component of business in the future. These two solutions hit general availability on October 1st, and then we will begin the process of finding out.