Walmart’s test store for new technology, Sam’s Club Now, opens next week in Dallas

Walmart’s warehouse club, Sam’s Club, is preparing to open the doors at a new Dallas-area store that will serve as a testbed for the latest in retail technology. Specifically, the retailer will test new concepts like mobile checkout, an Amazon Go-style camera system for inventory management, electronic shelf labels, wayfinding technology for in-store navigation, augmented reality, and artificial intelligence-infused shopping, among other things.

The retailer first announced its plans to launch a concept store in Dallas back in June, which was then said to be a real-world test lab for technology-driven shopping experiences.

Today, the company is taking the wraps off the project and is detailing what it has planned for the new location, which goes by the name “Sam’s Club Now.”

Like other Sam’s Club stores, consumers will need a membership to shop at Sam’s Club Now. But how they shop will be remarkably different.

Instead of cashiers, the store is staffed with “Member Hosts,” who will act more like concierges, the company says.

And instead of scanning items at a point-of-sale cashier stand, customers will use a specialized Sam’s Club Now mobile app.

The app leverages Sam’s Club’s existing “Scan & Go” technology, which is used today across its retail locations to help speed up checkout. With the current Scan & Go mobile app, shoppers can opt to scan items as they place them in their cart, then pay right on their phone. At Sam’s Club Now, however, the use of mobile scan-and-pay is required, not optional.

The Sam’s Club Now app will also be infused with other features the company wants to try out, including an integrated wayfinding and navigation system, augmented reality features, an A.I.-powered shopping list and more.

At launch the app will offer a built-in map for finding the right aisle for a given product, but over time, this mapping system will be upgraded to use beacon technology and will be tied to the customer’s shopping list to map their best route through the store.

The shopping list will also be powered by A.I. Using a combination of machine learning and customer purchase history, the list will be pre-populated with customers’ frequent purchases. Those items can be removed from the list, if not needed.

This way, customers won’t forget things they usually need to buy, the retailer says.
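Sam’s Club hasn’t published details of how the list is generated, but a minimal sketch of the idea — pre-populating a shopping list from recent purchase frequency — might look like the following. Every item name and parameter here is hypothetical, not the retailer’s actual system:

```python
from collections import Counter
from datetime import date, timedelta

def prepopulate_list(purchases, today, top_n=5, window_days=90):
    """Suggest frequently bought items from recent purchase history.

    purchases: list of (item, purchase_date) tuples.
    Items bought most often within the recent window rank first;
    items last bought outside the window are dropped.
    """
    cutoff = today - timedelta(days=window_days)
    counts = Counter(item for item, d in purchases if d >= cutoff)
    return [item for item, _ in counts.most_common(top_n)]

history = [
    ("paper towels", date(2018, 10, 1)),
    ("paper towels", date(2018, 9, 1)),
    ("rotisserie chicken", date(2018, 10, 20)),
    ("rotisserie chicken", date(2018, 10, 6)),
    ("rotisserie chicken", date(2018, 9, 22)),
    ("batteries", date(2018, 3, 14)),  # outside the 90-day window
]
suggested = prepopulate_list(history, today=date(2018, 10, 29))
# rotisserie chicken (3 recent buys) ranks above paper towels (2);
# batteries fall outside the window and are not suggested
```

A production system would presumably layer in purchase cadence and broader machine-learning signals, but even simple frequency counts capture the “things you usually buy” behavior described above.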

Meanwhile, the app will allow Sam’s Club to test augmented reality as a way of highlighting “stories” about the products being sold and their features, as well as providing a way to find out how items are sourced. This seems more gimmicky, though, as it’s unlikely that customers are interested in this sort of “infotainment” when just trying to get their shopping done.

But at the very least, the test store gives the retailer a chance to confirm that supposition with real-world data.

The app will also allow members to place pickup orders that will be ready in just an hour, or place same-day delivery orders.

The lack of cashiers won’t be the only difference between this Sam’s Club Now and other locations. The store will also be just a quarter of the size of an average club, at 32,000 square feet. That means it will feature, in some cases, smaller pack sizes than at the other warehouse club locations.

Because of the smaller size, it will also operate with about a quarter of the usual staff, at 44 associates. But the goal is not to eliminate staff and replace them with technology, the retailer claims.

“Eliminating friction doesn’t mean replacing exceptional member service with a digital experience,” said John Furner, Sam’s Club President and CEO. “We know our members expect both.”

The company says the store will stock a range of products including meats, fresh foods, frozen items, beer and wine, and meal solutions.

More importantly, it will also include a new inventory management and tracking technology. A system of over 700 cameras will be used to help the retailer manage the inventory and optimize the store layout.

On the shelves, it’s also testing electronic shelf labels that will instantly update prices, eliminating the need to print out paper signage.

These are not third-party systems, the retailer says.

“The vast majority of technologies that we’re building here are technologies that we’ve developed in house. There may be pieces of modules of things that we’re using from third parties. But the majority are systems that are building on the technology that we’ve developed here,” said Jamie Iannone, CEO of SamsClub.com and EVP of Membership & Technology. “That allows us to iterate and move pretty quickly with it,” he noted. 

By “quickly,” the retailer means things can change in a matter of weeks. The store plans to rapidly iterate on new and different experiences across computer vision, A.I., A.R., machine learning, and robotics.

The winners will then be rolled out to other Sam’s Club locations across the U.S.

The retailer says Dallas was selected as a test market because it’s an easy trip from Walmart’s headquarters in Bentonville, Arkansas, and because of Dallas’ tech talent and the recruiting potential. The company today has over 100 engineers in the area, and it plans to hire more in the areas of machine learning, A.I. and computer vision.

It’s worth noting, too, that Sam’s Club Now has been set up, developed and made ready for opening in just five months.

The store will officially open on an invite-only basis to local members for testing as soon as next week. The grand opening to the public is tentatively scheduled for a couple of weeks later.

AIs trained to help with sepsis treatment, fracture diagnosis

Image of a wrist x-ray.

Treating patients effectively involves a combination of training and experience. That’s one of the reasons that people have been excited about the prospects of using AI in medicine: it’s possible to train algorithms using the experience of thousands of doctors, giving them more information than any single human could accumulate.

This week has provided some indications that software may be on the verge of living up to that promise, as two papers describe excellent preliminary results with using AI for both diagnosis and treatment decisions. The papers involve very different problems and approaches, which suggests that the range of situations where AI could prove useful is very broad.

Choosing treatments

One of the two studies focuses on sepsis, which occurs when the immune system mounts an excessive response to an infection. Sepsis is apparently the third leading cause of death worldwide, and it remains a problem even when the patient is already hospitalized. There are guidelines available for treating sepsis patients, but the numbers suggest there’s still considerable room for improvement. So a small UK-US team decided to see if software could help provide some of that improvement.


Google Lens comes to Google Images for searching – and shopping – inside photos

Google this morning announced it’s bringing its A.I.-powered Lens technology to Google Image Search. The idea, explains the company, is to allow web searchers to learn more about what’s in a photo – including, in particular, items they may want to shop for and buy. For example, a photo of a well-decorated living room might have a sofa you like, but the photo itself wouldn’t have necessarily informed you who made the sofa or where it was for sale.

Google Lens – yes, acting very much like Pinterest – will now be able to help with that.

You’ll be able to tap on “dots” that appear within the photo, which designate items Google Lens has identified, or you can use your finger to “draw” around an object in the photo to trigger Google Images to search for related information. Google then searches across the web, including for other images, web pages, and even videos where this object may appear.

This isn’t just for shopping, of course. Google Lens can also be used to learn more about landmarks, animals, places you want to travel and more.

But Google naturally sees a good fit for Lens when it comes to directing users to products and, therefore, the websites of potential Google advertisers. This is the area where Pinterest has been steadily advancing.

Pinterest last month reported a 25 percent increase in monthly active users, as it gears up for its IPO. That means more people are starting their shopping journeys on its site, looking for purchase inspiration around things like fashion, home décor, travel, and other ideas. It’s also been beefing up its advertising product to further capture users’ interests and connect them with brands, having earlier this year added promoted videos to its ad products. 

And just a week ago, Pinterest announced it had rebuilt the infrastructure around product pins to make its site and app more “shoppable,” reporting that tests of the changes had shown a 40 percent increase in visits to retailers’ sites as a result.

For these reasons – not to mention the looming threat of Facebook and Instagram ads sending users directly to retailers’ websites, and Amazon’s not insignificant entry into the advertising business – it’s clear that it was time for Google to leverage its own technology to help improve shopping and click-through rates for retailers on its site, as well.

Google says Lens in Images is now live on the mobile web for people in the U.S. searching in English, and will soon be rolled out to other countries, languages and Google Images locations.

Never mind the cannabis hype — the real buzz is around AI

It feels like a good week to talk about bubbles.

The temperature in Montreal was close to freezing on Oct. 18, yet the lineup at the Société québécoise du cannabis store on Ste-Catherine Street wrapped around the block for a second consecutive day.

Irrational exuberance? Or a symbol of all the wealth that suppliers will generate? There surely is some degree of excess when factory farms are likened to Amazon and Google just because they grow marijuana instead of tomatoes.

Cryptocurrencies and blockchains were going to change the world in 2017. Now, it looks like those innovations will be used simply to upgrade the plumbing of the existing financial system. Valuations are correcting as a result. Bet you a bitcoin that pot is on a similar path.

This brings us to another technology that definitely will change the world, and also has some bubbly characteristics.

If the fun Canadian business story of the moment is cannabis, the serious one is artificial intelligence (AI). When Stephen Poloz, the Bank of Canada governor, devoted a speech to creative destruction last month, he wasn’t thinking about pot. AI will reshape entire industries; tens of thousands of jobs will be taken over by computers, and (hopefully) at least as many will be created in the process.

“In the future, AI is going to be as normal and as natural as the electricity in this room right now,” Carolina Bessega, chief scientific officer at Stradigi AI, told me in an interview at the company’s headquarters in Montreal earlier this month. “Nobody is going to talk about it because everyone is going to use it and have it.”

Bessega’s future is coming at us quickly.

Less than five years ago, Stradigi was just another developer of custom software. Then a retail chain asked the company to clean up a very messy inventory system. The job required sorting tens of thousands of items into a single database. It would have taken a human months to do it. So Bessega went to her boss, Basil Bouraropoulos, the chief executive, and said that she might be able to build an AI system that could do the work in a couple of hours. She was right; the gamble worked and the client was happy.

Bouraropoulos, an entrepreneur with a background in coding, refocused his company immediately. “We didn’t go into AI because it was a bubble, because in 2014 the bubble wasn’t there,” he said. “The bubble really started in 2016.”

Like pot, crypto, and blockchain, there’s some froth around AI too. Thousands of tickets for the annual Neural Information Processing Systems conference, which this year will be held in Montreal during the first week of December, apparently sold out in about 10 minutes. Something called the AI-Powered Supply Chains Supercluster, spanning the “Quebec-Windsor corridor,” won a share of the Trudeau government’s $950-million fund to create innovation hubs, enough to convince some of you that AI must be a loser if it needs Ottawa’s help. One of Bouraropoulos’s challenges is convincing clients that Stradigi is legitimate and not part of the hype.

“You see companies just adding ‘AI’ or ‘dot AI.’ I’ve seen it over and over,” he said. “We’ve even had questions from clients: ‘Are you really an AI company?’ My answer is always, ‘I’d love to have you visit our offices and see that we really have 30 PhDs sitting in here.’”

I saw the 30 PhDs; they exist. They soon will be joined by about a dozen more to help Stradigi keep up with a surge in demand, including a new partnership with Seattle-based Cray Inc., the publicly traded maker of supercomputers that generated US$392 million in revenue in 2017. After taking its time, Stradigi is stepping out from under the shadow of Mila, the Montreal-based AI lab founded by Yoshua Bengio, a pioneer of the field.

“They are doing a fantastic job and if you look at their plans for the future, I really see that Canada is going to be the leader in AI,” Bouraropoulos said. “I am very confident saying that, and I’m very confident that we are going to be a huge part of it as well.”

I’m inclined to believe him, because Bouraropoulos and Bessega were among the rare executives I’ve talked to this year who didn’t complain about a labour shortage. That’s a serious issue in a lot of industries, but apparently not in AI. If talented scientists are lining up to work for companies such as Stradigi, the industry should be able to grow quickly.

Only a couple of years ago, Canada’s best graduates were rushing to the United States. Now, that migration pattern has reversed. That might surprise those who think Canada’s personal tax rates are too high to compete in tech. The cost of living in places such as San Francisco and New York has become so expensive that Canadian cities such as Montreal and Toronto can compete easily, even if the taxes are higher, said Bouraropoulos.

It also helps that the U.S. has become hostile to immigrants. Canada becomes an easy second choice, especially if it means working in the orbit of world-class researchers such as Bengio.

“For people who are not from the United States, the situation in the United States is not easy,” said Bessega, a native of Venezuela. “That plays in our advantage.”


ServiceNow to acquire FriendlyData for its natural language search technology

Enterprise cloud service management company ServiceNow announced today that it will acquire FriendlyData and integrate the startup’s natural language search technology into apps on its Now platform. Founded in 2016, FriendlyData’s natural language query (NLQ) technology enables enterprise customers to build search tools that allow users to ask technical questions even if they don’t know the right jargon.

FriendlyData’s NLQ tech figures out what they are trying to say and then answers with text responses or easy-to-understand data visualizations. ServiceNow said it will integrate FriendlyData’s tech into the Now Platform, which includes apps for IT, human resources, security operations, and customer service management. It will also be available in products for developers and ServiceNow’s partners.

In a statement, Pat Casey, senior vice president of development and operations at ServiceNow, said “ServiceNow is bringing NLQ capabilities to the Now Platform, enabling companies to ask technical questions in plain English and receive direct answers. With this technical enhancement, our goal is to allow anyone to easily make data driven decisions, increasing productivity and driving businesses forward faster.”

The acquisition of FriendlyData is the latest in ServiceNow’s initiative to reduce the friction of support requests within organizations with AI-based tools. For example, it launched a chatbot-building tool called Virtual Agent in May, which enables companies to create custom chatbots for services like Slack or Microsoft Teams to automatically handle routine inquiries such as equipment requests. It also announced the acquisition of Parlo, a chatbot startup, around the same time.

Google will not bid for the Pentagon’s $10B cloud computing contract, citing its “AI Principles”

Google has dropped out of the running for JEDI, the massive Defense Department cloud computing contract potentially worth $10 billion. In a statement to Bloomberg, Google said that it decided not to participate in the bidding process, which ends this week, because the contract may not align with the company’s principles for how artificial intelligence should be used.

In a statement to Bloomberg, a Google spokesperson said: “We are not bidding on the JEDI contract because first, we couldn’t be assured that it would align with our AI Principles. And second, we determined that there were portions of the contract that were out of scope with our current government certifications,” adding that Google is still “working to support the U.S. government with our cloud in many ways.”

Officially called the Joint Enterprise Defense Infrastructure, the initiative opened for bids two months ago, and bidding closes this week. JEDI’s lead contender is widely considered to be Amazon, because it set up the CIA’s private cloud, but Oracle, Microsoft, and IBM are also expected to be in the running.

The winner of the contract, which could last for up to 10 years, is expected to be announced by the end of the year. The project is meant to accelerate the Defense Department’s adoption of cloud computing and services. Only one provider will be chosen, a controversial decision that the Pentagon defended by telling Congress that the pace of handling task orders in a multiple-award contract “could prevent DOD from rapidly delivering new capabilities and improved effectiveness to the warfighter that enterprise-level cloud computing can enable.”

Google also addressed the controversy over a single provider, telling Bloomberg that “had the JEDI contract been open to multiple vendors, we would have submitted a compelling solution for portions of it. Google Cloud believes that a multi-cloud approach is in the best interest of government agencies, because it allows them to choose the right cloud for the right workload.”

Google’s decision not to bid for JEDI comes four months after it reportedly decided not to renew its contract with the Pentagon for Project Maven, which involved working with the military to analyze drone footage, including images taken in conflict zones. Thousands of Google employees signed a petition against its work on Project Maven because they said it meant the company was directly involved in warfare. Afterward, Google came up with its “AI Principles,” a set of guidelines for how it will use its AI technology.

It is worth noting, however, that Google is still under employee fire because it is reportedly building a search engine for China that will comply with the government’s censorship laws, eight years after exiting the country for reasons including its limits on free speech.

5 takeaways on the state of AI from Disrupt SF

The promise of artificial intelligence is immense, but the roadmap to achieving those goals still remains unclear. Onstage at TechCrunch Disrupt SF, some of AI’s leading minds shared their thoughts on current competition in the market, how to ensure algorithms don’t perpetuate racism and the future of human-machine interaction.

Here are five takeaways on the state of AI from Disrupt SF 2018:

1. U.S. companies will face many obstacles if they look to China for AI expansion

Sinnovation CEO Kai-Fu Lee (Photo: TechCrunch/Devin Coldewey)

The meteoric rise in China’s focus on AI has been well-documented and has become impossible to ignore these days. With mega companies like Alibaba and Tencent pouring hundreds of millions of dollars into home-grown businesses, American companies are finding less and less room to navigate and expand in China. AI investor and Sinnovation CEO Kai-Fu Lee described China as living in a “parallel universe” to the U.S. when it comes to AI development.

“We should think of it as electricity,” explained Lee, who led Google’s entrance into China. “Thomas Edison and the AI deep learning inventors – who were American – they invented this stuff and then they generously shared it. Now, China, as the largest marketplace with the largest amount of data, is really using AI to find every way to add value to traditional businesses, to internet, to all kinds of spaces.”

“The Chinese entrepreneurial ecosystem is huge so today the most valuable AI companies in computer vision, speech recognition, drones are all Chinese companies.”

2. Bias in AI is a new face on an old problem

SAN FRANCISCO, CA – SEPTEMBER 07: (L-R) UC Berkeley Professor Ken Goldberg, Google AI Research Scientist Timnit Gebru, UCOT Founder and CEO Chris Ategeka, and moderator Devin Coldewey speak onstage during Day 3 of TechCrunch Disrupt SF 2018 at Moscone Center on September 7, 2018 in San Francisco, California. (Photo by Kimberly White/Getty Images for TechCrunch)

AI promises to increase human productivity and efficiency by taking the grunt work out of many processes. But the data used to train many AI systems often falls victim to the same biases of humans and, if unchecked, can further marginalize communities caught up in systemic issues like income disparity and racism.

“People in lower socio-economic statuses are under more surveillance and go through algorithms more,” said Google AI’s Timnit Gebru. “So if they apply for a job that’s lower status they are likely to go through automated tools. We’re right now in a stage where these algorithms are being used in different places and we’re not even checking if they’re breaking existing laws like the Equal Opportunity Act.”

A potential solution to prevent the spread of toxic algorithms was outlined by UC Berkeley’s Ken Goldberg who cited the concept of ensemble theory, which involves multiple algorithms with various classifiers working together to produce a single result.
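Goldberg didn’t share an implementation, but the core of the ensemble idea — several independently built classifiers voting, so that no single model’s bias dominates the outcome — can be sketched in a few lines. The classifiers and thresholds below are invented purely for illustration:

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Combine several independent classifiers into one decision.

    Each classifier is a function x -> label; the ensemble returns
    the label most of them agree on, so a single skewed model can
    be outvoted by the others.
    """
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Three toy classifiers with different decision rules (hypothetical)
clf_a = lambda x: "approve" if x["score"] > 600 else "deny"
clf_b = lambda x: "approve" if x["income"] > 40_000 else "deny"
clf_c = lambda x: "approve" if x["score"] > 650 else "deny"

applicant = {"score": 620, "income": 50_000}
decision = majority_vote([clf_a, clf_b, clf_c], applicant)
# clf_a and clf_b vote "approve", clf_c votes "deny" -> "approve"
```

The benefit Goldberg points to depends on the component models being trained with genuinely different data and assumptions; an ensemble of classifiers that share the same biased training set would simply agree on the same biased answer.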


But how do we know if the solution to inadequate tech is more tech? Goldberg says this is where having individuals from multiple backgrounds, both in and outside the world of AI, is vital to developing just algorithms. “It’s very relevant to think about both machine intelligence and human intelligence,” explained Goldberg. “Having people with different viewpoints is extremely valuable and I think that’s starting to be recognized by people in business… it’s not because of PR, it’s actually because it will give you better decisions if you get people with different cognitive, diverse viewpoints.”

3. The future of autonomous travel will rely on humans and machines working together

Uber CEO Dara Khosrowshahi (Photo: TechCrunch/Devin Coldewey)

Transportation companies often paint a flowery picture of the near future where mobility will become so automated that human intervention will be detrimental to the process.

That’s not the case, according to Uber CEO Dara Khosrowshahi. In an era that’s racing to put humans on the sidelines, Khosrowshahi says humans and machines working hand-in-hand is the real path forward.

“People and computers actually work better than each of them work on a stand-alone basis and we are having the capability of bringing in autonomous technology, third-party technology, Lime, our own product all together to create a hybrid,” said Khosrowshahi.

Khosrowshahi ultimately envisions the future of Uber being made up of engineers monitoring routes that present the least amount of danger for riders and selecting optimal autonomous routes for passengers. The combination of these two systems will be vital in the maturation of autonomous travel, while also keeping passengers safe in the process.

4. There’s no agreed definition of what makes an algorithm “fair”

SAN FRANCISCO, CA – SEPTEMBER 07: Human Rights Data Analysis Group Lead Statistician Kristian Lum speaks onstage during Day 3 of TechCrunch Disrupt SF 2018 at Moscone Center on September 7, 2018 in San Francisco, California. (Photo by Kimberly White/Getty Images for TechCrunch)

Last July ProPublica released a report highlighting how machine learning can falsely develop its own biases. The investigation examined an AI system used in Fort Lauderdale, Fla., that falsely flagged black defendants as future criminals at a rate twice that of white defendants. These landmark findings set off a wave of conversation about the ingredients needed to build fair algorithms.

One year later AI experts still don’t have the recipe fully developed, but many agree a contextual approach that combines mathematics and an understanding of human subjects in an algorithm is the best path forward.

“Unfortunately there is not a universally agreed upon definition of what fairness looks like,” said Kristian Lum, lead statistician at the Human Rights Data Analysis Group. “How you slice and dice the data can determine whether you ultimately decide the algorithm is unfair.”

Lum goes on to explain that research in the past few years has revolved around exploring the mathematical definition of fairness, but this approach is often incompatible with the moral outlook on AI.

“What makes an algorithm fair is highly contextually dependent, and it’s going to depend so much on the training data that’s going into it,” said Lum. “You’re going to have to understand a lot about the problem, you’re going to have to understand a lot about the data, and even when that happens there will still be disagreements on the mathematical definitions of fairness.”
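Lum’s point — that the choice of metric can decide the verdict — can be shown with a toy example: two groups can be treated identically under one fairness definition (equal rates of being flagged) while diverging sharply under another (false-positive rates among those who did not reoffend), which is essentially the tension at the heart of the ProPublica findings. The data below is invented:

```python
def rates(group):
    """Return (flag rate, false-positive rate) for one group.

    group: list of (actually_reoffended, flagged_as_high_risk) pairs.
    """
    n = len(group)
    flagged = sum(1 for actual, pred in group if pred)
    negatives = sum(1 for actual, pred in group if not actual)
    false_pos = sum(1 for actual, pred in group if pred and not actual)
    return flagged / n, false_pos / negatives

# Toy data: (actually_reoffended, flagged_as_high_risk) per defendant
group_a = [(True, True), (True, True), (False, False), (False, False)]
group_b = [(True, True), (False, True), (True, False), (False, False)]

rate_a, fpr_a = rates(group_a)  # half flagged, no false positives
rate_b, fpr_b = rates(group_b)  # half flagged, half of innocents flagged
# Equal flag rates look "fair" by demographic parity, yet the unequal
# false-positive rates look "unfair" by error-rate balance.
```

Which of those two numbers matters more is exactly the kind of contextual, moral judgment Lum argues mathematics alone cannot settle.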

5. AI and Zero Trust are a “marriage made in heaven” and will be key in the evolution of cybersecurity

SAN FRANCISCO, CA – SEPTEMBER 06: (l-R) Duo VP of Security Mike Hanley, Okta Executive Director of Cybersecurity Marc Rogers, and moderator Mike Butcher speak onstage during Day 2 of TechCrunch Disrupt SF 2018 at Moscone Center on September 6, 2018 in San Francisco, California. (Photo by Kimberly White/Getty Images for TechCrunch)

If previous elections have taught us anything it’s that security systems are in dire need of improvement to protect personal data, financial assets and the foundation of democracy itself. Facebook’s ex-chief security officer Alex Stamos shared a grim outlook on the current state of politics and cybersecurity at Disrupt SF, stating the security infrastructure for the upcoming Midterm elections isn’t much better than it was in 2016.

So how effective will AI be in improving these systems? Marc Rogers of Okta and Mike Hanley of Duo Security believe the combination of AI and a security model called Zero Trust, which cuts off all users from accessing a system until they can prove themselves, is the key to developing security systems that actively fight off breaches without the assistance of humans.

“AI and Zero Trust are a marriage made in heaven because the whole idea behind Zero Trust is you design policies that sit inside your network,” said Rogers. “AI is great at doing human decisions much faster than a human ever can and I have great hope that as Zero Trust evolves, we’re going to see AI baked into the new Zero Trust platforms.”

By handing much of the heavy lifting to machines, cybersecurity professionals will also have the opportunity to solve another pressing issue: being able to staff qualified security experts to manage these systems.

“There’s also a substantial labor shortage of qualified security professionals that can actually do the work needed to be done,” said Hanley. “That creates a tremendous opportunity for security vendors to figure out what are those jobs that need to be done, and there are many unsolved challenges in that space. Policy engines are one of the more interesting ones.”

IBM launches cloud tool to detect AI bias and explain automated decisions

IBM has launched a software service that scans AI systems as they work in order to detect bias and provide explanations for the automated decisions being made — a degree of transparency that may be necessary for compliance purposes, not just a company’s own due diligence.

The new trust and transparency system runs on the IBM cloud and works with models built from what IBM bills as a wide variety of popular machine learning frameworks and AI-build environments — including its own Watson tech, as well as TensorFlow, SparkML, AWS SageMaker, and AzureML.

It says the service can be customized to specific organizational needs via programming to take account of the “unique decision factors of any business workflow”.

The fully automated SaaS explains decision-making and detects bias in AI models at runtime — so as decisions are being made — which means it’s capturing “potentially unfair outcomes as they occur”, as IBM puts it.

It will also automatically recommend data to add to the model to help mitigate any bias that has been detected.

Explanations of AI decisions include showing which factors weighted the decision in one direction vs another; the confidence in the recommendation; and the factors behind that confidence.

IBM also says the software keeps records of the AI model’s accuracy, performance and fairness, along with the lineage of the AI systems — meaning they can be “easily traced and recalled for customer service, regulatory or compliance reasons”.

For one example on the compliance front, the EU’s GDPR privacy framework references automated decision making, and includes a right for people to be given detailed explanations of how algorithms work in certain scenarios — meaning businesses may need to be able to audit their AIs.

The IBM AI scanner tool provides a breakdown of automated decisions via visual dashboards — an approach it bills as reducing dependency on “specialized AI skills”.

However it is also intending its own professional services staff to work with businesses to use the new software service. So it will be both selling AI, ‘a fix’ for AI’s imperfections, and experts to help smooth any wrinkles when enterprises are trying to fix their AIs… Which suggests that while AI will indeed remove some jobs, automation will be busy creating other types of work.

Nor is IBM the first professional services firm to spot a business opportunity around AI bias. A few months ago Accenture outed a fairness tool for identifying and fixing unfair AIs.

So with a major push towards automation across multiple industries there also looks to be a pretty sizeable scramble to set up and sell services to patch any problems that arise as a result of increasing use of AI.

And, indeed, to encourage more businesses to feel confident about jumping in and automating more. (On that front IBM cites research it conducted which found that while 82% of enterprises are considering AI deployments, 60% fear liability issues and 63% lack the in-house talent to confidently manage the technology.)

In addition to launching its own (paid-for) AI auditing tool, IBM says its research division will be open sourcing an AI bias detection and mitigation toolkit — with the aim of encouraging “global collaboration around addressing bias in AI”.

“IBM led the industry in establishing trust and transparency principles for the development of new AI technologies. It’s time to translate principles into practice,” said David Kenny, SVP of cognitive solutions at IBM, commenting in a statement. “We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making.”