XPRIZE names two grand prize winners in $15 million Global Learning Challenge

XPRIZE, the non-profit organization developing and managing competitions to find solutions to social challenges, has named two grand prize winners in the Elon Musk-backed Global Learning XPRIZE.

The companies, Kitkit School out of South Korea and the U.S., and onebillion, operating in Kenya and the U.K., were announced at an awards ceremony hosted at the Google Spruce Goose Hangar in Playa Vista, Calif.

XPRIZE set each of the competing teams the task of developing scalable services that could enable children to teach themselves basic reading, writing, and arithmetic skills within 15 months.

Musk himself was on hand to award $5 million checks to each of the winning teams.

The five finalists all received $1 million to continue developing their projects. In addition to the two grand prize winners, they included: New York-based CCI, which developed lesson plans and a development language so non-coders could create lessons; Chimple, a Bangalore-based learning platform enabling children to learn reading, writing and math on a tablet; and RoboTutor, a Pittsburgh-based company that used Carnegie Mellon research to develop an app for Android tablets teaching reading and writing with speech recognition, machine learning and human-computer interaction.

The competition required each product to be field-tested in Swahili, reaching nearly 3,000 children in 170 villages across Tanzania.

The final solutions from all five teams that reached the last round of competition have been open-sourced, so anyone can improve on them and develop local solutions using the toolkits each team built during the competition.

Kitkit School, with a team from Berkeley, Calif. and Seoul, developed a program with a game-based core and flexible learning architecture to help kids learn independently, while onebillion merged numeracy content with literacy material to provide directed learning and activities alongside monitoring to personalize responses to children’s needs.

Both teams are going home with $5 million to continue their work.

The problem of access to basic education is vast: more than 250 million children around the world can’t read or write, and one in five children isn’t in school, according to data from UNESCO.

The problem of access is compounded by a shortage of teachers at the primary and secondary school level. Some research, cited by XPRIZE, indicates that the world needs to recruit another 68.8 million teachers to provide every child with a primary and secondary education by 2040.

Before the Global Learning XPRIZE field test, 74% of the children who participated were reported as never having attended school; 80% were never read to at home; and 90% couldn’t read a single word of Swahili.

After the 15-month program, conducted on donated Google Pixel C tablets pre-loaded with the software, that last number was cut in half.

“Education is a fundamental human right, and we are so proud of all the teams and their dedication and hard work to ensure every single child has the opportunity to take learning into their own hands,” said Anousheh Ansari, CEO of XPRIZE, in a statement. “Learning how to read, write and demonstrate basic math are essential building blocks for those who want to live free from poverty and its limitations, and we believe that this competition clearly demonstrated the accelerated learning made possible through the educational applications developed by our teams, and ultimately hope that this movement spurs a revolution in education, worldwide.”

After the grand prize announcement, XPRIZE said it will work to secure and load the software onto tablets; localize the software; and deliver preloaded hardware and charging stations to remote locations so all finalist teams can scale their learning software across the world.

AV8 Ventures launches with $170M to invest in digital health, mobility and enterprise tech

AV8 Ventures has closed on €150 million (about $170 million) for its debut early-stage venture capital fund to invest in seed and Series A startups in the U.S. and Europe focused on the “machine-enabled future.”

Allianz, a German insurance and asset management business, is the fund’s sole backer.

Headquartered in Palo Alto and London, the new effort is led by George Ugras (pictured above, right), the former head of IBM Ventures, and Miles Kirby (pictured above, left), previously a managing director at Qualcomm Ventures in Europe. The pair have brought on two full-time partners and five additional venture partners to hit the firm’s goal of 10 investments per year.

The AV8 team brings together technical backgrounds and strong business acumen, Kirby told TechCrunch: “That’s something fairly unique about us, we can really help the entrepreneurs as opposed to just investing and sitting back. Having been through this a few times, we can really help through the journey.”

“We think of ourselves as builders versus transactors,” Ugras, who began his career as an astrophysicist, added. “Quite often in venture you’ll notice investors get perked up around transactions, but the magic happens in-between rounds.”

Though AV8 is backed by a corporation, it is not a corporate venture capital fund. Ugras, in a conversation with TechCrunch, compared AV8 with Sapphire Ventures, an early-stage fund supported by SAP. Sapphire was formerly known as SAP Ventures but rebranded in 2014 to reinforce its status as a firm independent from the German corporation.

AV8 has a general focus on digital health, big data and artificial intelligence, mobility and enterprise tech. Having been investing out of the fund for roughly one year already, the team has deployed capital to seven companies to date. The portfolio includes Locomation, an autonomous trucking startup spun out of The Robotics Institute at Carnegie Mellon University; weather forecasting and climate monitoring business PlanetIQ; and Alpha Medical, a women’s healthcare platform.

In an era of venture capital when one or two $100 million-plus funds launch each week, Ugras and Kirby say it’s their lack of vanity that sets them apart.

“What I noticed in this era with hundreds of funds, especially in the early stages, is there’s still a need for people who know how to build businesses,” he said. “Anyone can structure a term sheet and write a check, but at the end of the day, we are really conduits between limited partners, who trust us with their precious capital, and entrepreneurs, who trust us with their precious dream. Creating this aura of celebrity has created an imbalance with these relationships, which has caused a lot of issues with behavior in the industry.”

“What it’s all about is helping the portfolio companies do well, they are the ones doing the heavy lifting,” Kirby concluded. “At the end of the day, it’s the entrepreneurs that are driving it.”

Uber spent $457 million on self-driving and flying car R&D last year

Uber spent $457 million last year on research and development of autonomous vehicles, flying cars (known as eVTOLs) and other “technology programs” and will continue to invest heavily in the futuristic tech even though it expects to rely on human drivers for years to come, according to the company’s IPO prospectus filed Thursday.

R&D costs at Uber ATG, the company’s autonomous vehicle unit, its eVTOL unit Uber Elevate and other related technology represented one-third of its total R&D spend. Uber’s total R&D costs in 2018 were more than $1.5 billion.

Uber filed its S-1 on Thursday, laying the groundwork for the transportation company to go public next month. This comes less than one month after competitor Lyft’s debut on the public market. Uber is listing on the New York Stock Exchange under the symbol “UBER,” but has yet to disclose the anticipated initial public offering price.

Uber believes that autonomous vehicles will be an important part of its offerings over the long term, arguing that AVs can increase safety, make rides more efficient and lower prices for customers.

However, the transportation company struck a more conservative tone in the prospectus on how and when autonomous vehicles will be deployed, a striking difference from the early days of Uber ATG when former CEO Travis Kalanick called AVs an existential risk to the business.

Uber contends there will be a long period of “hybrid autonomy” and it will continue to rely on human drivers for its core business for the foreseeable future. Uber said even when autonomous vehicle taxis are deployed, it will still need human drivers for situations that “involve substantial traffic, complex routes, or unusual weather conditions.” Human drivers will also be needed during concerts, sporting events and other high-demand events that will “likely exceed the capacity of a highly utilized, fully autonomous vehicle fleet,” the company wrote in the S-1.

Here’s an excerpt from the S-1:

Along the way to a potential future autonomous vehicle world, we believe that there will be a long period of hybrid autonomy, in which autonomous vehicles will be deployed gradually against specific use cases while Drivers continue to serve most consumer demand. As we solve specific autonomous use cases, we will deploy autonomous vehicles against them. Such situations may include trips along a standard, well-mapped route in a predictable environment in good weather.

Uber contends it is well-suited to balance that potentially awkward in-between phase when both human drivers and autonomous vehicles will co-exist on its platform.

“Drivers are therefore a critical and differentiating advantage for us and will continue to be our valued partners for the long-term,” Uber wrote.

Despite Uber’s forecast and more tempered tone, the company is pushing ahead on autonomous vehicles.

Uber ATG was founded in 2015 in Pittsburgh with just 40 researchers from Carnegie Robotics and Carnegie Mellon University. Today, Uber ATG has more than 1,000 employees spread out in offices in Pittsburgh, San Francisco and Toronto.

Uber acknowledged under the risk factors section of the S-1 that it could fail to develop and successfully commercialize autonomous vehicle technologies or could be undercut by competitors, which would threaten its ride-hailing and delivery businesses.

Uber’s view of which companies pose the biggest threat to the company was particularly interesting. The company named nearly a dozen potential competitors, a list that contained a few of the usual suspects like Waymo, GM Cruise and Zoox, as well as less-known startups such as May Mobility and Anthony Levandowski’s new company, Pronto.ai. Other competitors listed in the S-1 include Tesla, Apple, Aptiv, Aurora and Nuro. Argo AI, the subsidiary of Ford, was not listed.

ATG has built more than 250 self-driving vehicles and has three partnerships — with Volvo, Toyota and Daimler — that illustrate the company’s multi-tiered strategy for AVs.

Uber has a first-party agreement with Volvo. Under the agreement announced in August 2016, Uber owns Volvo vehicles, has added its AV tech and plans to deploy those cars on its own network.

Its partnership with Daimler is on the other extreme. In that partnership, announced in January 2017, Daimler will introduce a fleet of its own AVs on the Uber network. This is similar to Lyft’s partnership with Aptiv.

Finally, there’s Toyota, a partnership announced in August 2018 that is a hybrid of sorts of the other two. Uber says it expects to integrate its autonomous vehicle technologies into purpose-built Toyota vehicles to be deployed on its network.

CMU’s robotic arm attaches to a backpack to lend a helping hand

Carnegie Mellon’s Biorobotics Lab is probably best known as the birthplace of the modular snake robot. Initially designed to squeeze into tight spots for search and rescue missions and infrastructure inspections, the lab’s snake robot has given rise to an army of different projects, and at least one Pittsburgh-area startup.

Several years ago, the robot became modular, allowing engineers to mix and match pieces and replace malfunctioning segments. From those modules, the team of CMU students has built a wide range of projects, including a spider-like hexapod robot whose six limbs are constructed from robotic segments. We also spent time with Hebi, whose modular robotic actuators are commercialized versions of the lab’s research.

When we returned to the lab two years later, the researchers had an entirely new project to show us. “Students in this group are very self-directed, we come up with our own projects,” CMU doctoral student Julian Whitman explained. “We’re often inspired by the ability of our hardware to reconfigure into any kind of shapes. Sometimes people will look at a pile of modules. So they’ll build that and very quickly program it to do some kind of interesting behavior, and sometimes that’ll spur an entirely new research direction.”

Whitman’s project fashions the modules into a wearable extra limb. The system, he’s quick to point out, isn’t an exoskeleton. Instead, it’s a robotic arm mounted to a backpack-style support structure. The idea behind the project is to allow wearers to complete jobs that are a bit too complex for just two hands.

“One somewhat common task in automotive assembly or airplane assembly is to hold something up over your head and fix it to the ceiling,” Whitman explained, before demoing the action at a nearby workstation. “So if you’re putting a part on the bottom of a car or on the roof of an airplane, oftentimes in industry, they have to have two people working on this job, one guy is just holding the part up in place, the other one’s fixing it.”

The project currently supports one limb, controlled using a gamepad. Whitman explained it’s possible to add as many limbs “as a person can carry,” for a more Dr. Octopus-style approach. But the biggest limitation is how many arms a wearer can control at a given time.

“Right now I’m controlling it with a button or with voice commands, so you now have two sets of buttons and two sets of voice commands,” Whitman said. “At some point, it becomes harder for the user to control it and becomes less useful, the more arms you add. But in the future we’re hoping to have these arms be more autonomous that have their own perception, their own decision-making processes.”

CMU team develops a robot and drone system for mine rescues

On our final day in Pittsburgh, we find ourselves in a decommissioned coal mine. Just northeast of the city proper, Tour-Ed’s owners run field trips and tours during the warmer months, despite the fact that the mine’s innards run a constant 50 degrees or so, year round.

With snow still melting just beyond the entrance, a team of students from Carnegie Mellon and Oregon State University is getting a pair of robots ready for an upcoming competition. The small team is one of a dozen or so currently competing in DARPA’s Subterranean Challenge.

The multi-year SubT competition is designed to “explore new approaches to rapidly map, navigate, search, and exploit complex underground environments, including human-made tunnel systems, urban underground, and natural cave networks.” In particular, teams are tasked with search and rescue missions in underground structures, ranging from mines to caves to subway stations.

The goal of the $2 million challenge is to design a system capable of navigating complex underground terrain in case of cave-ins or other disasters. The robots are created to go where human rescuers can’t — or, at the very least, shouldn’t.

The CMU team’s solution features multiple robots, with a four-wheeled rover and a small, hobbyist-style drone taking center stage. “Our system consists of ground robots that will be able to track and follow the terrain,” says CMU’s Steve Willits, who serves as an adviser on the project. “We also have an unmanned aerial vehicle consisting of a hexacopter. It’s equipped with all of the instrumentation that it will need to explore various areas of the mine.”

The rover uses a combination of 3D cameras and LIDAR to navigate and map the environment while looking for humans amid the rubble. Should it find itself unable to move, due to debris, small passageways or a manmade obstacle like stairs, the drone is designed to lift off from the rear and continue the search.

All the while, the rover drops ultra-rugged Wi-Fi repeaters off its rear like a breadcrumb trail, extending its signal in the process. Most of this is still in the early stages. While the team was able to demonstrate the rover and drone in action, it still hasn’t mastered a method for getting them to work in tandem.

Testing the robots will begin in September with the Tunnel Circuit. That’s followed in March 2020 by the manmade Urban Circuit and then a Cave Circuit that September. A final event will be held in September 2021.

Nuro CEO Dave Ferguson at TC Sessions: Mobility on July 10 in San Jose

Autonomous delivery startup Nuro, flush with nearly $1 billion in capital from SoftBank, is bursting with ideas — as some recent patent filings (and our recent deep dive into the company) suggest. And we can’t wait to learn more about what Nuro has planned.

It’s only fitting that Nuro co-founder and CEO Dave Ferguson is our first announced guest for TechCrunch’s inaugural TC Sessions: Mobility, a one-day event on July 10, 2019 in San Jose, Calif., that’s centered around the future of mobility and transportation.

Ferguson has been working on robotics and machine learning for nearly two decades and is an early pioneer of self-driving vehicle technology. He led the planning group for Carnegie Mellon University’s team that won the DARPA Urban Challenge in 2007.

Ferguson holds an MS and PhD in robotics from Carnegie Mellon and a bachelor’s in computer science and mathematics from the University of Otago. He went on to become a senior research scientist at Intel and then developed machine learning trading strategies at Two Sigma, an investment firm.

Ferguson, who has been awarded more than 100 patents, eventually headed to Google’s self-driving program, now known as Waymo, serving as the machine learning and computer vision team lead.

TC Sessions: Mobility will present a day of programming with the best and brightest founders, investors and technologists who are determined to invent a future Henry Ford might never have imagined. TC Sessions: Mobility aims to do more than highlight the next new thing. We’ll dig into the how and why, the cost and impact to cities, people and companies, as well as the numerous challenges that lie along the way, from technological and regulatory to capital and consumer pressures.

Nuro was founded in June 2016 by Ferguson and another former Google engineer, Jiajun Zhu. Nuro completed its first Series A funding round in China just three months later, in a previously unreported deal that gave NetEase founder Ding Lei (aka William Ding) a seat on Nuro’s board.

In February, Nuro hit the big leagues with a whopping $940 million in financing from the SoftBank Vision Fund, capital that will be used to expand its delivery service, add new partners, hire employees and scale up its fleet of self-driving bots. The startup has raised more than $1 billion from partners, including SoftBank, Greylock Partners and Gaorong Capital.

Nuro’s focus has been developing a self-driving stack and combining it with a custom unmanned vehicle designed for last-mile delivery of local goods and services. The vehicle has two compartments that can fit up to six grocery bags each. Nuro’s aspirations don’t stop there.

A recent patent application details how its R1 self-driving vehicle could carry smaller robots to cross lawns or climb stairs to drop off packages. The company has even taken the step of trademarking the name “Fido” for delivery services.


Early-Bird tickets are now on sale — save $100 on tickets before prices go up.

Students, you can grab your tickets for just $45.

These temporary electronic tattoos could redefine wearables

Wearables are great, sure, except you have to wear them. Wouldn’t it be nice if that functionality was printed right onto your skin? Well, even though it’s not for everybody, it sounds like it might soon be a possibility: CMU researchers have created a durable, flexible electronic temporary tattoo that could be used for all kinds of things.

This might sound familiar — we’ve been hearing about electronic tattoos for a while now. But previous methods were slow and limited, essentially painting oneself with conductive ink or attaching a thin conductive film. If the idea is going to take off, it needs to be easily manufactured and simple to apply. That’s what the team hopes they’ve accomplished here.

“We’re reporting a new way of creating electronic tattoos,” said CMU’s Carmel Majidi in a video from the university. “These are circuits that are printed on temporary tattoo film. We print circuits made of silver nanoparticles, and then what we do is we coat those silver nanoparticles with a liquid metal alloy. The liquid metal fuses with the silver to create these conductive wires on the tattoo; the tattoo can easily be transferred to skin, and the conductivity is high enough to support digital circuit functionality.”

The big advance, as co-author Mahmoud Tavakoli explained in a news release, is the ability to join the inkjet-printed nanoparticle patterns with the other metal (a gallium indium alloy) at room temperature.

“This is a breakthrough in the printed electronics area,” he said. “Removing the need for high-temperature sintering makes our technique compatible with thin-film and heat-sensitive substrates.”

In other words, it can easily be applied to fragile things like temporary tattoo film, which is cheap and abundant, or perhaps to a bandage. The tattoos are also quite flexible, maintaining their function when deformed, and won’t rub off easily.

The most obvious application is in the medical field, where a tattoo could perhaps replace a finger clip or armband heart monitor, or perhaps include chemical sensors that test blood sugar and alert the user if it gets too low. There are plenty of other ways that a skin-mounted circuit could be applied, but most haven’t been thought up yet. It may be time to brainstorm!

The paper describing the e-tattoo technique was published in the journal Advanced Materials.

This robot uses lasers to ‘listen’ to its environment

A new technology from researchers at Carnegie Mellon University will add sound and vibration awareness to create truly context-aware computing. The system, called Ubicoustics, adds additional bits of context to smart device interaction, allowing a smart speaker to know it’s in a kitchen or a smart sensor to know you’re in a tunnel versus on the open road.

“A smart speaker sitting on a kitchen countertop cannot figure out if it is in a kitchen, let alone know what a person is doing in a kitchen,” said Chris Harrison, a researcher at CMU’s Human-Computer Interaction Institute. “But if these devices understood what was happening around them, they could be much more helpful.”

The first implementation of the system uses built-in speakers to create “a sound-based activity recognition.” How they are doing this is quite fascinating.

“The main idea here is to leverage the professional sound-effect libraries typically used in the entertainment industry,” said Gierad Laput, a PhD student. “They are clean, properly labeled, well-segmented and diverse. Plus, we can transform and project them into hundreds of different variations, creating volumes of data perfect for training deep-learning models.”
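To make that concrete, here is a minimal sketch in Python of the kind of augmentation Laput describes: loading one clean, labeled sound-effect clip and projecting it into several transformed variants for training. The file path and parameter values are hypothetical, and this illustrates the general technique rather than the actual Ubicoustics pipeline, which the article doesn’t detail.

```python
import librosa
import numpy as np

def augment_clip(path, sr=16000):
    """Load one labeled sound-effect clip and return several
    transformed variants, multiplying the training data."""
    y, _ = librosa.load(path, sr=sr)  # resample to a fixed rate
    variants = [y]  # keep the original clip too

    # Pitch-shift up and down a couple of semitones.
    for steps in (-2, 2):
        variants.append(librosa.effects.pitch_shift(y, sr=sr, n_steps=steps))

    # Slow the clip down and speed it up slightly.
    for rate in (0.9, 1.1):
        variants.append(librosa.effects.time_stretch(y, rate=rate))

    # Vary the loudness, clipping to the valid amplitude range.
    for gain in (0.5, 1.5):
        variants.append(np.clip(y * gain, -1.0, 1.0))

    return variants

# Hypothetical example: one "kitchen" sound effect becomes seven training clips.
clips = augment_clip("sound_effects/blender_whir.wav")
```

One clip becomes seven training examples here; applied across an entire professional sound-effect library, transformations like these yield the “volumes of data” Laput mentions.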

From the release:

Laput said recognizing sounds and placing them in the correct context is challenging, in part because multiple sounds are often present and can interfere with each other. In their tests, Ubicoustics had an accuracy of about 80 percent — competitive with human accuracy, but not yet good enough to support user applications. Better microphones, higher sampling rates and different model architectures all might increase accuracy with further research.

In a separate paper, HCII Ph.D. student Yang Zhang, along with Laput and Harrison, describe what they call Vibrosight, which can detect vibrations in specific locations in a room using laser vibrometry. It is similar to the light-based devices the KGB once used to detect vibrations on reflective surfaces such as windows, allowing them to listen in on the conversations that generated the vibrations.

This system uses a low-power laser and reflectors to sense whether an object is on or off or whether a chair or table has moved. The sensor can monitor multiple objects at once and the tags attached to the objects use no electricity. This would let a single laser monitor multiple objects around a room or even in different rooms, assuming there is line of sight.
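As a rough illustration of that sensing logic (a sketch, not Vibrosight’s published implementation), consider how a running appliance reveals itself: its motor produces sustained vibration energy in a characteristic frequency band, so comparing band-limited energy against a threshold separates “on” from “off.” The sampling rate, frequency band and threshold below are invented for illustration.

```python
import numpy as np

def is_running(signal, sr=2000, band=(50, 200), threshold=1e-4):
    """Guess whether a tagged appliance is on by measuring
    vibration energy inside a motor-hum frequency band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2       # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)  # bin frequencies (Hz)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean() > threshold

# Hypothetical reading: one second of samples from a reflective tag.
reading = np.random.randn(2000) * 0.001  # stand-in for real vibrometer data
print(is_running(reading))
```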

The research is still in its early stages, but expect to see robots that can hear when you’re doing the dishes and, depending on their skills, hide or offer to help.