OpenAI built a text generator so good, it’s considered too dangerous to release

A storm is brewing over a new language model, built by non-profit artificial intelligence research company OpenAI, which it says is so good at generating convincing, well-written text that it’s worried about potential abuse.

That’s angered some in the community, who have accused the company of reneging on a promise not to close off its research.

OpenAI said its new natural language model, GPT-2, was trained to predict the next word in a sample of 40 gigabytes of internet text. The end result is a system that generates text that “adapts to the style and content of the conditioning text,” allowing the user to “generate realistic and coherent continuations about a topic of their choosing.” The model is a vast improvement on the first version, producing longer text with greater coherence.
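
Conceptually, that generation process is a simple loop: given everything written so far, the model assigns a probability to every possible next word, one word is sampled and appended, and the loop repeats. A minimal illustrative sketch of that loop is below; it is not OpenAI's code, and next_word_distribution is a hypothetical stand-in for the trained model.

```python
import random

def generate(prompt, next_word_distribution, max_words=100):
    """Sample a continuation of `prompt`, one word at a time.

    `next_word_distribution` is a hypothetical stand-in for the trained
    model: given the words so far, it returns {word: probability}.
    """
    words = prompt.split()
    for _ in range(max_words):
        dist = next_word_distribution(words)           # condition on everything so far
        candidates, weights = zip(*dist.items())
        next_word = random.choices(candidates, weights=weights, k=1)[0]
        if next_word == "<end-of-text>":               # model signals it is done
            break
        words.append(next_word)                        # feed the choice back in
    return " ".join(words)
```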

But for every good application of the system, such as bots capable of better dialog and better speech recognition, the non-profit found several bad ones, like generating fake news, impersonating people, or automating abusive or spam comments on social media.

To wit: when GPT-2 was tasked with writing a response to the prompt “Recycling is good for the world,” a statement nearly everyone agrees with, the machine spat back:

“Recycling is NOT good for the world. It is bad for the environment, it is bad for our health, and it is bad for our economy. I’m not kidding. Recycling is not good for the environment. It is destructive to the earth and it is a major contributor to global warming. Recycling is not good for our health. It contributes to obesity and diseases like heart disease and cancer. Recycling is bad for our economy. It increases the cost of a product, and in turn, the price of everything that is made with that product. Recycling is not good for our nation. We pay a tremendous price for the privilege of having the world’s most advanced and efficient recycling system. Recycling is a huge, colossal waste of time, energy, money, and resources.”

No wonder OpenAI was worried about releasing it.

For that reason, OpenAI said, it’s only releasing a smaller version of the language model, citing its charter, which notes that the organization expects that “safety and security concerns will reduce our traditional publishing in the future.” The organization admitted it wasn’t sure the decision was the right one, saying only that “we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas.”

Not everyone took that well. OpenAI’s tweet announcing GPT-2 was met with anger and frustration from critics, who accused the company of “closing off” its research and, seizing on the company’s name, doing the “opposite of open.”

Others were more forgiving, calling the move a “new bar for ethics” for thinking ahead about possible abuses.

Jack Clark, policy director at OpenAI, said the organization’s priority is “not enabling malicious or abusive uses of the technology,” calling it a “very tough balancing act for us.”

Elon Musk, one of the initial funders of OpenAI, was roped into the controversy, confirming in a tweet that he has not been involved with the company “for over a year,” and that he and the company parted “on good terms.”

OpenAI said it hasn’t settled on a final decision about GPT-2’s release, and that it will revisit the question in six months. In the meantime, the company said that governments “should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems.”

Just this week, President Trump signed an executive order on artificial intelligence. It comes months after the U.S. intelligence community warned that artificial intelligence was one of the many “emerging threats” to U.S. national security, along with quantum computing and autonomous unmanned vehicles.

Vision system for autonomous vehicles watches not just where pedestrians walk, but how

The University of Michigan, well known for its efforts in self-driving car tech, has been working on an improved algorithm for predicting the movements of pedestrians that takes into account not just what they’re doing, but how they’re doing it. This body language could be critical to predicting what a person does next.

Keeping an eye on pedestrians and predicting what they’re going to do is a major part of any autonomous vehicle’s vision system. Understanding that a person is present, and where, makes a huge difference to how the vehicle can operate. But while some companies advertise that they can see and label people at such and such a range, or under these or those conditions, few, if any, can (or claim to) see gestures and posture.

Such vision algorithms can (though nowadays are unlikely to) be as simple as identifying a human, seeing how many pixels that figure moves over a few frames, then extrapolating from there. But naturally, human movement is a bit more complex than that.
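
As a rough illustration of what that naive pixel-displacement approach looks like, here is a toy sketch (not the UM system, or anyone's production code):

```python
def extrapolate_position(centroids, frames_ahead=10):
    """Toy pedestrian prediction: estimate per-frame velocity from the
    first and last detected centroids (pixel coordinates) and extend it
    linearly. No pose, no gait, no physics."""
    (x0, y0), (x1, y1) = centroids[0], centroids[-1]
    elapsed = len(centroids) - 1
    vx, vy = (x1 - x0) / elapsed, (y1 - y0) / elapsed
    return (x1 + vx * frames_ahead, y1 + vy * frames_ahead)

# A person drifting right across the frame over three detections:
print(extrapolate_position([(100, 240), (104, 240), (108, 241)], frames_ahead=5))
```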

UM’s new system uses the lidar and stereo camera systems to estimate not just a person’s trajectory, but their pose and gait. Pose can indicate whether a person is looking towards or away from the car, or using a cane, or stooped over a phone; gait indicates not just speed but also intention.

Is someone glancing over their shoulder? Maybe they’re going to turn around, or walk into traffic. Are they putting their arms out? Maybe they’re signaling someone (or perhaps the car) to stop. This additional data helps a system predict motion and makes for a more complete set of navigation plans and contingencies.

Importantly, it performs well with only a handful of frames to work with — perhaps comprising a single step and swing of the arm. That’s enough to make a prediction that beats simpler models handily, a critical measure of performance as one cannot assume that a pedestrian will be visible for any more than a few frames between obstructions.

Not too much can be done with this noisy, little-studied data right now, but perceiving and cataloguing it is the first step to making it an integral part of an AV’s vision system. You can read the full paper describing the new system in IEEE Robotics and Automation Letters or on arXiv (PDF).

Apple acquires talking Barbie voicetech startup PullString

Apple has just bought up the talent it needs to make talking toys a part of Siri, HomePod, and its voice strategy. Apple has acquired PullString, also known as ToyTalk, according to Axios’ Dan Primack and Ina Fried. The company makes voice experience design tools, artificial intelligence to power those experiences, and, in partnership with Mattel, toys like talking Barbie and Thomas the Tank Engine. Founded in 2011 by former Pixar executives, PullString went on to raise $44 million.

Apple’s Siri is seen as lagging far behind Amazon Alexa and Google Assistant, not only in voice recognition and utility, but also in terms of developer ecosystem. Google and Amazon have built platforms to distribute skills from tons of voice app makers, including storytelling, quizzes, and other games for kids. If Apple wants to take a real shot at becoming the center of your connected living room with Siri and HomePod, it will need to play nice with the children who spend their time there. Buying PullString could jumpstart Apple’s in-house catalog of speech-activated toys for kids as well as beef up its tools for voice developers.

PullString did catch some flak for being a “child surveillance device” back in 2015, but countered by detailing the security built into its Hello Barbie product and saying it had never been hacked to steal children’s voice recordings or other sensitive info. Privacy norms have since changed, with so many people readily buying always-listening Echos and Google Homes.

We’ve reached out to Apple and PullString for more details about whether PullString and ToyTalk’s products will remain available.

The startup raised its cash from investors including Khosla Ventures, CRV, Greylock, First Round, and True Ventures; its last raise was a 2016 Series D that PitchBook says valued the startup at $160 million. While the voicetech space has since exploded, it can still be difficult for voice experience developers to earn money without accompanying physical products, and many enterprises still aren’t sure what to build with tools like those offered by PullString. That might have led the startup to see a brighter future with Apple, strengthening one of the most ubiquitous though also most detested voice assistants.

Peltarion raises $20M for its AI platform

Peltarion, a Swedish startup founded by former execs from companies like Spotify, Skype, King, TrueCaller and Google, today announced that it has raised a $20 million Series A funding round led by Euclidean Capital, the family office for hedge fund billionaire James Simons. Previous investors FAM and EQT Ventures also participated, and this round brings the company’s total funding to $35 million.

There is obviously no dearth of AI platforms these days. Peltarion, however, focuses on what it calls “operational AI.” The service offers an end-to-end platform that lets you do everything from pre-processing your data to building models and putting them into production. All of this runs in the cloud, and developers get access to a graphical user interface for building and testing their models. That, the company stresses, ensures that Peltarion’s users don’t have to deal with any of the low-level hardware or software and can instead focus on building their models.

“The speed at which AI systems can be built and deployed on the operational platform is orders of magnitude faster compared to the industry standard tools such as TensorFlow and require far fewer people and decreases the level of technical expertise needed,” Luka Crnkovic-Friis, Peltarion’s CEO and co-founder, tells me. “All this results in more organizations being able to operationalize AI and focusing on solving problems and creating change.”

In a world where businesses have a plethora of choices, though, why use Peltarion over more established players? “Almost all of our clients are worried about lock-in to any single cloud provider,” Crnkovic-Friis said. “They tend to be fine using storage and compute as they are relatively similar across all the providers and moving to another cloud provider is possible. Equally, they are very wary of the higher-level services that AWS, GCP, Azure, and others provide as it means a complete lock-in.”

Peltarion, of course, argues that its platform doesn’t lock in its users and that other platforms take far more AI expertise to produce commercially viable AI services. The company rightly notes that, outside of the tech giants, most companies still struggle with how to use AI at scale. “They are stuck on the starting blocks, held back by two primary barriers to progress: immature patchwork technology and skills shortage,” said Crnkovic-Friis.

The company will use the new funding to expand its development team and its teams working with its community and partners. It’ll also use the new funding for growth initiatives in the U.S. and other markets.

Biotech AI startup Sight Diagnostics gets $27.8M to speed up blood tests

Sight Diagnostics, an Israeli medical devices startup that’s using AI technology to speed up blood testing, has closed a $27.8 million Series C funding round.

The company has built a desktop machine, called OLO, that analyzes cartridges manually loaded with drops of the patient’s blood — performing blood counts in situ.

The new funding is led by VC firm Longliv Ventures, which is also based in Israel and is a member of the multinational conglomerate CK Hutchison Group.

Sight Diagnostics said it was after strategic investment for the Series C — specifically investors that could contribute to its technological and commercial expansion. And on that front CK Hutchison Group’s portfolio includes more than 14,500 health and beauty stores across Europe and Asia, providing a clear go-to-market route for the company’s OLO blood testing device.

Other strategic investors in the round include Jack Nicklaus II, a healthcare philanthropist and board member of the Nicklaus Children’s Health Care Foundation; Steven Esrick, a healthcare impact investor; and a “major medical equipment manufacturer” — which they’re not naming.

Sight Diagnostics also notes that it’s seeking additional strategic partners who can help it get its device to “major markets throughout the world”.

Commenting in a statement, Yossi Pollak, co-founder and CEO, said: “We sought out groups and individuals who genuinely believe in our mission to improve health for everyone with next-generation diagnostics, and most importantly, who can add significant value beyond financial support. We are already seeing positive traction across Europe and seeking additional strategic partners who can help us deploy OLO to major markets throughout the world.”

The company says it expects that customers across “multiple countries in Europe” will have deployed OLO in actual use this year.

Existing investors OurCrowd, Go Capital, and New Alliance Capital also participated in the Series C. The medtech startup, which was founded back in 2011, has raised more than $50M to date, only disclosing its Series A and B raises last year.

The new funding will be used to further efforts to sell what it bills as its “lab-grade” point-of-care blood diagnostics system, OLO, around the world. Its initial go-to-market push has focused on Europe, where it obtained CE Mark registration for OLO (necessary for commercial sale within certain European countries) following a 287-person clinical trial and went on to launch the device last summer. It has since signed a distribution agreement for OLO in Italy.

“We have pursued several pilots with potential customers in Europe, specifically in the UK and Italy,” co-founder Danny Levner tells TechCrunch. “In Europe, it is typical for market adoption to begin with pilot studies: Small clinical evaluations that each major customers run at their own facilities, under real-world conditions. This allows users to experience the specific benefits of the technology in their own context. In typical progress, pilot studies are then followed by modest initial orders, and then by broad deployment.”

The funding will also support ongoing regulatory efforts in the U.S., where it’s been conducting a series of trials as part of FDA testing in the hopes of gaining regulatory clearance for OLO. Levner tells us it has now submitted data to the regulator and is waiting for it to be reviewed.

“In December 2018, we completed US clinical trials at three US clinical sites and we are submitting them later this month to the FDA. We are seeking 510(k) FDA clearance for use in US CLIA compliant laboratories, to be followed by a CLIA waiver application that will allow for use at any doctor’s office. We are very pleased with the results of our US trial and we hope to obtain the FDA’s 510(k) clearance within a year’s time,” he says.

“With the current funding, we’re focusing on commercialization in the European market, starting in the UK, Italy and the Nordics,” he adds. “In the US, we’re working to identify new opportunities in oncology and pediatrics.”

Funds will also go on R&D to expand the menu of diagnostic tests the company is able to offer via OLO.

The startup previously told us it envisages developing the device into a platform capable of running a portfolio of blood tests, saying each additional test would be added individually and only after “independent clinical validation”.

The initial test OLO offers is a complete blood count (CBC), with Sight Diagnostics applying machine learning and computer vision technology to digitize and analyze a high resolution photograph of a finger prick’s worth of the patient’s blood on device.
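
Sight Diagnostics hasn't published its algorithms, so the sketch below is purely illustrative of the general idea (counting cell-like blobs in an image with off-the-shelf computer vision tools) and emphatically not the company's method; a real CBC pipeline is far more sophisticated.

```python
import cv2
import numpy as np

def rough_cell_count(image_path, min_area=20, max_area=500):
    """Illustrative only: count roughly cell-sized dark blobs in an image.
    A real CBC also has to classify cell types and cope with staining,
    overlapping cells, focus and lighting artifacts, and much more."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Otsu thresholding separates darker cells from the lighter background.
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Label connected blobs and keep those within a plausible size range.
    _, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    areas = stats[1:, cv2.CC_STAT_AREA]   # skip label 0, the background
    return int(np.sum((areas >= min_area) & (areas <= max_area)))
```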

The idea is to offer an alternative to having venous blood drawn and sent away to a lab for analysis. An OLO-based CBC is billed as taking “minutes” to perform and, the startup claims, is simple enough for a non-professional to carry out, whereas a lab-based blood count can take several days to process and return a result.

On the R&D front, Levner says it sees “enormous potential” for OLO to be used to diagnose blood diseases such as leukemia and sickle cell anemia.

“Also, given the small amount of blood required and the minimally-invasive nature of the test when using finger-prick blood samples, there is an opportunity to use OLO in neonatal screening,” he says. “Accordingly, one of the most important immediate next steps is to tailor the test procedures and algorithms for neonate screening.”

Levner also told us that some of its pilot studies have looked at evaluating “improvements in operator and patient satisfaction”. “Clearly standing out in these studies is the preference for finger-prick-based testing, which OLO provides,” he claims. 

One key point to note: Sight Diagnostics has yet to publish peer-reviewed results of its clinical trials for OLO. Last July it told us it had a publication pending in a peer-reviewed journal.

“With regards to the peer-reviewed publication, we’ve decided to combine the results from the Israel clinical trials with those that we just completed in the US for a more robust publication,” the company says now. “We expect to focus on that publication after we receive FDA approval in the US.”

Qloo acquires cultural recommendation service TasteDive

Qloo announced this morning that it has acquired TasteDive.

The two companies sound pretty similar — according to the announcement, Qloo is “the leading artificial intelligence platform for culture and taste,” while TasteDive is “a cultural recommendation engine and social community.”

What’s the difference? Well, TasteDive is a website where you can create a profile, connect with other users and, as you like and dislike things, get recommendations for music, movies, TV shows, books and more. Qloo, meanwhile, is trying to understand patterns in consumer taste and then sell that data to marketers.

Or, as Qloo CEO Alex Elias (pictured above) put it in a statement, “TasteDive does for millions of individuals what Qloo has been doing for brands for years – using AI to make better decisions about culture and taste.”

Apparently TasteDive has 4.5 million active users, and it will continue to operate as a separate team and product, with founder Andrei Oghina remaining on-board as CEO. (Elias will become chairman.)

At the same time, the companies say the addition of Qloo technology will allow TasteDive to get smarter and to expand into different categories, while Qloo benefits from TasteDive’s global customer base and its API ecosystem.

The financial terms of the acquisition were not disclosed.

Xnor’s saltine-sized, solar-powered AI hardware redefines the edge

“If AI is so easy, why isn’t there any in this room?” asks Ali Farhadi, founder and CEO of Xnor, gesturing around the conference room overlooking Lake Union in Seattle. And it’s true — despite a handful of displays, phones, and other gadgets, the only things really capable of doing any kind of AI-type work are the phones each of us has set on the table. Yet we are always hearing about how AI is so accessible now, so flexible, so ubiquitous.

And in many cases, even the devices that can do such work aren’t employing machine learning techniques themselves, but rather sending data off to the cloud, where the work can be done more efficiently. That’s because the processes that make up “AI” are often resource-intensive, sucking up CPU time and battery power.

That’s the problem Xnor aimed to solve, or at least mitigate, when it spun off from the Allen Institute for Artificial Intelligence in 2017. Its breakthrough was to make the execution of deep learning models on edge devices so efficient that a $5 Raspberry Pi Zero could perform state-of-the-art computer vision processes nearly as well as a supercomputer.

The team achieved that, and Xnor’s hyper-efficient ML models are now integrated into a variety of devices and businesses. As a follow-up, the team set their sights higher — or lower, depending on your perspective.

Answering his own question about the dearth of AI-enabled devices, Farhadi pointed to the battery pack in the demo gadget the team made to show off the Pi Zero platform: “This thing right here. Power.”

Power was the bottleneck they overcame to get AI onto CPU- and power-limited devices like phones and the Pi Zero. So the team came up with a crazy goal: Why not make an AI platform that doesn’t need a battery at all? Less than a year later, they’d done it.

The device performs a serious computer vision task in real time: It can detect in a fraction of a second whether and where a person, or car, or bird, or whatever, is in its field of view, and relay that information wirelessly. And it does this using the kind of power usually associated with solar-powered calculators.

The device Farhadi and hardware engineering head Saman Naderiparizi showed me is very simple — and necessarily so. A tiny camera with a 320×240 resolution, an FPGA loaded with the object recognition model, a bit of memory to handle the image and camera software, and a small solar cell. A very simple wireless setup lets it send and receive data at a very modest rate.

“This thing has no power. It’s a two dollar computer with an uber-crappy camera, and it can run state of the art object recognition,” enthused Farhadi, clearly more than pleased with what the Xnor team has created.
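
Xnor hasn't published the firmware behind that demo, but the pattern it describes (wake, capture a frame, run the detector on-chip, transmit only the result over the radio) would look roughly like the illustrative sketch below; every object and method name here is hypothetical.

```python
import time

def edge_detection_loop(camera, detector, radio, interval_s=1.0):
    """Duty-cycled loop for a self-contained edge device: capture, run the
    detector locally, transmit only compact results. Pixels never leave
    the device, which keeps both power draw and privacy exposure small."""
    while True:
        frame = camera.capture()                 # e.g. a 320x240 image
        detections = detector.run(frame)         # on-chip model, no cloud
        if detections:
            # Send labels and bounding boxes, not images, over the
            # low-bandwidth radio link.
            radio.send([(d.label, d.box) for d in detections])
        time.sleep(interval_s)                   # idle to save energy
```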

For reference, a video from the company’s debut shows the kind of work it’s doing inside.

As long as the cell is in any kind of significant light, it will power the image processor and object recognition algorithm. It needs about a hundred millivolts coming in to work, though at lower levels it could just snap images less often.

It can run on that current alone, but of course it’s impractical not to have some kind of energy storage; to that end, this demo device has a supercapacitor that stores enough energy to keep it going all night, or whenever its light source is temporarily obscured.

As a demonstration of its efficiency, suppose you decided to equip it with a watch battery. Naderiparizi said it could probably run on that at one frame per second for more than 30 years.
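
That claim implies an extraordinarily small energy budget per frame. A back-of-the-envelope check, using an assumed CR2032-class cell of roughly 225 mAh at 3 V (my figures, not Xnor's):

```python
# Back-of-the-envelope check of the watch-battery claim. The cell figures
# are assumptions (a CR2032-class cell, ~225 mAh at 3 V), not Xnor's numbers.
battery_joules = 0.225 * 3 * 3600            # ~2,430 J of stored energy
frames = 30 * 365.25 * 24 * 3600             # one frame per second for 30 years
print(battery_joules / frames * 1e6)         # ~2.6 microjoules per frame, all-in
```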

Not a product

Of course the breakthrough isn’t really that there’s now a solar-powered smart camera. That could be useful, sure, but it’s not really what’s worth crowing about here. It’s the fact that a sophisticated deep learning model can run on a computer that costs pennies and uses less power than your phone does when it’s asleep.

“This isn’t a product,” Farhadi said of the tiny hardware platform. “It’s an enabler.”

The energy necessary for performing inference processes such as facial recognition, natural language processing, and so on puts hard limits on what can be done with them. A smart light bulb that turns on when you ask it to isn’t really a smart light bulb. It’s a board in a light bulb enclosure that relays your voice to a hub and probably a datacenter somewhere, which analyzes what you say and returns a result, turning the light on.

That’s not only convoluted, but it introduces latency and a whole spectrum of places where the process could break or be attacked. And meanwhile it requires a constant source of power or a battery!

On the other hand, imagine a camera you stick into a house plant’s pot, or stick to a wall, or set on top of the bookcase, or anything. This camera requires no more power than some light shining on it; it can recognize voice commands and analyze imagery without touching the cloud at all; it can’t really be hacked because it barely has an input at all; and its components cost maybe $10.

Only one of these things can be truly ubiquitous. Only the latter can scale to billions of devices without requiring immense investment in infrastructure.

And honestly, the latter sounds like a better bet for a ton of applications where there’s a question of privacy or latency. Would you rather have a baby monitor that streams its images to a cloud server where it’s monitored for movement? Or a baby monitor that absent an internet connection can still tell you if the kid is up and about? If they both work pretty well, the latter seems like the obvious choice. And that’s the case for numerous consumer applications.

Amazingly, the power cost of the platform isn’t anywhere near bottoming out. The FPGA used to do the computing on this demo unit isn’t particularly efficient for the processing power it provides. If they had a custom chip baked, they could get another order of magnitude or two out of it, lowering the work cost for inference to the level of microjoules. The size is more limited by the optics of the camera and the size of the antenna, which must have certain dimensions to transmit and receive radio signals.

And again, this isn’t about selling a million of these particular little widgets. As Xnor has done already with its clients, the platform and software that runs on it can be customized for individual projects or hardware. One even wanted a model to run on MIPS — so now it does.

By drastically lowering the power and space required to run a self-contained inference engine, entirely new product categories can be created. Will they be creepy? Probably. But at least they won’t have to phone home.

DARPA wants smart bandages for wounded warriors

Nowhere is prompt and effective medical treatment more important than on the battlefield, where injuries are severe and conditions dangerous. DARPA thinks that outcomes can be improved by the use of intelligent bandages and other systems that predict and automatically react to the patient’s needs.

Ordinary cuts and scrapes just need a bit of shelter and time, and your amazing immune system takes care of things. But soldiers not only receive far graver wounds, they receive them under complex conditions that are not just a barrier to healing, but an unpredictable one.

DARPA’s Bioelectronics for Tissue Regeneration program, or BETR, will help fund new treatments and devices that “closely track the progress of the wound and then stimulate healing processes in real time to optimize tissue repair and regeneration.”

“Wounds are living environments and the conditions change quickly as cells and tissues communicate and attempt to repair,” said Paul Sheehan, BETR program manager, in a DARPA news release. “An ideal treatment would sense, process, and respond to these changes in the wound state and intervene to correct and speed recovery. For example, we anticipate interventions that modulate immune response, recruit necessary cell types to the wound, or direct how stem cells differentiate to expedite healing.”

It’s not hard to imagine what these interventions might comprise. Smart watches are capable of monitoring several vital signs already, and in fact have alerted users to such things as heart-rate irregularities. A smart bandage would use any signal it can collect — “optical, biochemical, bioelectronic, or mechanical” — to monitor the patient and either recommend or automatically adjust treatment.

A simple example might be a wound that, judging from certain chemical signals, the bandage detects is becoming infected with a given kind of bacteria. It can then administer the correct antibiotic in the correct dose, and stop when necessary, rather than wait for a prescription. Or if the bandage detects shearing force and then an increase in heart rate, it’s likely the patient has been moved and is in pain — out come the painkillers. Of course, all this information would be relayed to the caregiver.
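
None of this hardware exists yet, but the interventions described amount to a closed feedback loop: read sensors, decide, act, report. A purely illustrative sketch, with every threshold, dose, sensor, and actuator name invented for the example:

```python
# All thresholds, doses, and sensor names below are invented placeholders.
INFECTION_MARKER_THRESHOLD = 0.8
SHEAR_FORCE_THRESHOLD = 5.0      # newtons
PAIN_HEART_RATE = 110            # beats per minute

def bandage_control_step(readings, dispense, alert_caregiver):
    """One pass of a hypothetical closed-loop smart bandage: read sensors,
    intervene if warranted, and always report upstream to the caregiver."""
    if readings.get("bacterial_marker", 0) > INFECTION_MARKER_THRESHOLD:
        dispense("antibiotic", dose_mg=10)
        alert_caregiver("chemical signature of infection detected")
    if (readings.get("shear_force", 0) > SHEAR_FORCE_THRESHOLD
            and readings.get("heart_rate", 0) > PAIN_HEART_RATE):
        dispense("analgesic", dose_mg=5)
        alert_caregiver("patient moved and heart rate elevated; analgesic given")
```

Real biological signals would be far noisier than clean thresholds like these, which is where the machine learning mentioned below comes in.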

This system may require some degree of artificial intelligence, although of course it would have to be pretty limited. But biological signals can be noisy and machine learning is a powerful tool for sorting through that kind of data.

BETR is a four-year program, during which DARPA hopes that it can spur innovation in the space and create a “closed-loop, adaptive system” that improves outcomes significantly. There’s a further ask to have a system that addresses osseointegration surgery for prosthetics fitting — a sad necessity for many serious injuries incurred during combat.

One hopes that the technology will trickle down, of course, but let’s not get ahead of ourselves. It’s all largely theoretical for now, though it seems more than possible that the pieces could come together well ahead of the deadline.