Bringing tech efficiencies to the agribusiness market, Silo harvests $3 million

Roughly $165 billion worth of wholesale produce is bought and sold every year in the U.S. And while that number is expected to climb to $1 trillion by 2025, the business of agribusiness remains largely untouched by the technology advancements that have reshaped almost every other industry.

Now Silo is looking to change that. The company has raised $3 million in a round led by Garry Tan and Alexis Ohanian’s Initialized Capital, with participation from Semil Shah of Haystack Ventures; angel investors Kevin Mahaffey and Matt Brezina; and The Penny Newman Grain Company, an international grain and feed marketplace.

Silo’s chief executive, Ashton Braun, spent years working in commodities marketplaces as a coffee trader in Singapore and moved to California after business school. As part of the founding team at Kite alongside Adam Smith, Braun helped get Kite’s programming-automation software off the ground, but he never let go of the idea of a tool that could help farmers and buyers better communicate and respond to demand signals, Braun says.

“I was a super young, green, bright-eyed potential entrepreneur,” says Braun. Eventually, when Kite sold to Microsoft, Braun took the opportunity to develop the software that had been on his mind for four-and-a-half years.

He’d seen the technology work in another industry closer to home. Growing up in Boston, Braun had seen how technology was used to update the fishing industry, giving ships a knowledge of potential buyers of their catch while they were still out in ocean waters.

“When you’re moving a product that’s worth tens of thousands of dollars and has a shelf life of a few days there’s literally no room for error and there’s a lot you need to do,” says Braun. It’s a principle that applies not only to seafood but to the hundreds of millions of dollars of produce and meat that come from farms in places like California. “What we want to do is we want communication and data to live in the right places at the right time.”

Braun says farmers get little data on what demand for certain produce looks like, so they’re making guesses with real financial consequences based on very little information.

Silo’s software vets and supports buyers and suppliers to give farmers a window into demand and potential buyers a view into available supply and quality.

“What Silo is building has the potential to make marketing and distribution of agriculture incredibly more efficient, which is a win both for the suppliers and buyers. We’re excited to support and assist this team as they work to move agriculture forward,” said Eric Woersching, General Partner at Initialized Capital, in a statement.

Silo is using the new financing to make a hiring push and develop new products and services to support liquidity in its perishable goods marketplace.

While an earlier generation of agribusiness software focused on increasing productivity on farms, a new crop of companies is targeting the business of farming itself. Companies like AgriChain and GrainChain also offer supply chain management software for farming, and WorldCover is creating insurance products for small farm owners in emerging markets.

The penetration of technology through near-ubiquitous mobile devices, coupled with sensing technologies and machine learning-enhanced predictive software, is transforming one of the world’s oldest industries.

“I’ve come across quite a few marketplace platforms attempting to serve different segments of the agriculture supply chain, and none of which have come close to impressing me to the degree Silo has in their tech-forward approach to reducing the friction that comes with managing all aspects of the supply chain on their platform. Silo’s deployment of machine learning streamlines the process, requiring little to no change in their users’ workflow, and removes many barriers of their platform reaching critical mass,” said Matthew Nicoletti, commodity trader at The Penny Newman Grain Company.  

Microsoft aims to train and certify 15,000 workers on AI skills by 2022

Microsoft is investing in certification and training for a range of AI-related skills in partnership with education provider General Assembly, the companies announced this morning. The goal is to train some 15,000 people by 2022 in order to increase the pool of AI talent around the world. The training will focus on AI, machine learning, data science, cloud and data engineering and more.

In the new program’s first year, Microsoft will focus on training 2,000 workers to transition to an AI and machine learning role. And over the full three years, it will train an additional 13,000 workers with AI-related skills.

As part of this effort, Microsoft is joining General Assembly’s new AI Standards Board, along with other companies. Over the next six months, the Board will help to define AI skills standards, develop assessments, design a career framework and create credentials for AI skills.

The training developed will also focus on filling the AI jobs currently available where Microsoft technologies are involved. As Microsoft notes, many workers today are not skilled enough for roles involving the use of Azure in aerospace, manufacturing and elsewhere. The training, it says, will focus on serving the needs of its customers who are looking to employ AI talent.

This will also include the creation of an AI Talent Network that will source candidates for long-term employment as well as contract work. General Assembly will assist with this effort by connecting its 22 campuses and the broader Adecco ecosystem to this jobs pipeline. (GA was sold to staffing firm Adecco last year for $413 million.)

Microsoft cited the potential for AI’s impact on job creation as a reason behind the program, noting that up to 133 million new roles may be created by 2022 as a result of the new technologies. Of course, it’s also very much about making sure its own software and cloud customers can find people who are capable of working with its products, like Azure.

“As a technology company committed to driving innovation, we have a responsibility to help workers access the AI training they need to ensure they thrive in the workplace of today and tomorrow,” said Jean-Philippe Courtois, executive vice president and president of Global Sales, Marketing and Operations at Microsoft, in a statement. “We are thrilled to combine our industry and technical expertise with General Assembly to help close the skills gap and ensure businesses can maximize their potential in our AI-driven economy.”

Health[at]Scale lands $16M Series A to bring machine learning to healthcare

Health[at]Scale, a startup with founders who have both medical and engineering expertise, wants to bring machine learning to bear on healthcare treatment options to produce outcomes with better results and less aftercare. Today the company announced a $16 million Series A. Optum, which is part of the UnitedHealth Group, was the sole investor.

Today, when people look at treatment options, they may look at a particular surgeon or hospital, or simply what the insurance company will cover, but they typically lack the data to make truly informed decisions. This is true across every part of the healthcare system, particularly in the U.S. The company believes that machine learning can produce better results.

“We are a machine learning shop, and we focus on what I would describe as precision delivery. So in other words, we look at this question of how do we match patients to the right treatments, by the right providers, at the right time,” Zeeshan Syed, Health at Scale’s CEO, told TechCrunch.

The founders see the current system as fundamentally flawed, and while they see their customers as insurance companies, hospital systems and self-insured employers, they say the tools they are putting into the system should help everyone in the loop get a better outcome.

The idea is to make treatment decisions more data driven. While they aren’t sharing their data sources, they say they have information from patients with a given condition, to doctors who treat that condition, to facilities where the treatment happens. By looking at a patient’s individual treatment needs and medical history, they believe they can do a better job of matching that person to the best doctor and hospital for the job. They say this will result in the fewest post-operative treatment requirements, whether that involves trips to the emergency room or time in a skilled nursing facility, all of which would end up adding significant additional cost.

If you’re thinking this is strictly about cost savings for these large institutions, Mohammed Saeed, who is the company’s chief medical officer and has an MD from Harvard and a PhD in electrical engineering from MIT, insists that isn’t the case. “From our perspective, it’s a win-win situation since we provide the best recommendations that have the patient interest at heart, but from a payer or provider perspective, when you have lower complication rates you have better outcomes and you lower your total cost of care long term,” he said.

The company says the solution is being used by large hospital systems and insurer customers, although it couldn’t share any names. The founders also said they have studied outcomes after using the software and that the machine learning models have produced better results, although they couldn’t provide the data to back that up at this time.

The company was founded in 2015 and currently has 11 employees. It plans to use today’s funding to build out sales and marketing to bring the solution to a wider customer set.

LG developed its own AI chip to make its smart home products even smarter

As its once-strong mobile division continues to slide, LG is picking up its focus on emerging tech. The company has pushed automotive, and particularly its self-driving capabilities, and today it doubled down on its smart home play with the announcement of its own artificial intelligence (AI) chip.

LG said the new chip includes its own neural engine that will improve the deep-learning algorithms used in its future smart home devices, which will include robot vacuum cleaners, washing machines, refrigerators and air conditioners. The chip can operate without an internet connection thanks to on-device processing, and it uses “a separate hardware-implemented security zone” to store personal data.

“The AI Chip incorporates visual intelligence to better recognize and distinguish space, location, objects and users while voice intelligence accurately recognizes voice and noise characteristics while product intelligence enhances the capabilities of the device by detecting physical and chemical changes in the environment,” the company wrote in an announcement.

To date, companies seeking AI or machine learning (ML) smarts at the chipset level have turned to established names like Intel, ARM and Nvidia, with upstarts including Graphcore, Cerebras and Wave Computing providing VC-fueled alternatives.

There is, indeed, a boom in AI and ML challengers. A New York Times report published last year estimated that “at least 45 startups are working on chips that can power tasks like speech and self-driving cars,” but that doesn’t include many under-the-radar projects financed by the Chinese government.

LG isn’t alone in opting to fly solo in AI. Facebook, Amazon and Apple are all reported to be working on AI and ML chipsets for specific purposes. In LG’s case, its solution is customized for smarter home devices.

“Our AI Chip is designed to provide optimized artificial intelligence solutions for future LG products. This will further enhance the three key pillars of our artificial intelligence strategy – evolve, connect and open – and provide customers with an improved experience for a better life,” IP Park, president and CTO of LG Electronics, said in a statement.

The company’s home appliance unit just recorded its highest quarter of sales and profit to date. Despite a sluggish mobile division, LG posted an annual profit of $2.4 billion last year with standout results for its home appliance and home entertainment units — two core areas of focus for AI.

Unveiling its latest cohort, Alchemist announces $4 million in funding for its enterprise accelerator

Alchemist, the accelerator focused on enterprise software and services, has raised $4 million in fresh financing from investors BASF and the Qatar Development Bank, just in time for its latest demo day, which unveiled 20 new companies.

Qatar and BASF join previous investors including the venture firms Mayfield, Khosla Ventures, Foundation Capital, DFJ, and USVP, and corporate investors like Cisco, Siemens and Juniper Networks.

While the roster of successes from Alchemist’s fund isn’t as lengthy as Y Combinator’s, the accelerator program has launched the likes of the quantum computing upstart Rigetti, the soft-launch developer tool LaunchDarkly, and drone startup Matternet.

Some (personal) highlights of the latest cohort include:

  • Bayware: Helmed by a former head of software defined networking from Cisco, the company is pitching a tool that makes creating networks in multi-cloud environments as easy as copying and pasting.
  • MotorCortex.AI: Co-founded by a Stanford Engineering professor and a Carnegie Mellon roboticist, the company is using computer vision, machine learning, and robotics to create a fruit packer for packaging lines. Starting with avocados, the company is aiming to tackle the entire packaging side of pick and pack in logistics.
  • Resilio: With claims of a 96% effectiveness rate and $35,000 in annual recurring revenue with another $1 million in the pipeline, Resilio is already seeing companies embrace its mobile app, which uses a phone’s camera to track stress levels and offers in-app prompts on how to lower them, according to Alchemist.
  • Operant Networks: It’s a long held belief (of mine) that if computing networks are already irrevocably compromised, the best thing that companies and individuals can do is just encrypt the hell out of their data. Apparently Operant agrees with me. The company is claiming 50% time savings with this approach, and has booked $1.9 million in 2019 as proof, according to Alchemist.
  • HPC Hub: HPC Hub wants to democratize access to supercomputers by overlaying a virtualization layer and pre-installed software on underutilized supercomputers to give more companies and researchers easier access to machines… and it has booked $92,000 worth of annual recurring revenue.
  • DinoPlusAI: This chip developer is designing a low latency chip for artificial intelligence applications, reducing latency by 12 times over a competing Nvidia chip, according to the company. DinoPlusAI sees applications for its tech in things like real-time AI markets and autonomous driving. Its team is led by a designer from Cadence and Broadcom and the company already has $8 million in letters of intent signed, according to Alchemist.
  • Aero Systems West: Co-founders from the Air Force’s Research Labs and MIT are aiming to take humans out of drone operations and maintenance. The company contends that for every hour of flight time, drones require 7 hours of maintenance and check-ups. Aero Systems aims to reduce that by using remote analytics, self-inspection, autonomous deployment, and automated maintenance to take humans out of the drone business.

Watch a livestream of Alchemist’s demo day pitches, starting at 3PM, here.
XPRIZE names two grand prize winners in $15 million Global Learning Challenge

XPRIZE, the non-profit organization developing and managing competitions to find solutions to social challenges, has named two grand prize winners in the Elon Musk-backed Global Learning XPRIZE.

The companies, KitKit School out of South Korea and the U.S., and onebillion, operating in Kenya and the U.K., were announced at an awards ceremony hosted at the Google Spruce Goose Hangar in Playa Vista, Calif.

XPRIZE set each of the competing teams the task of developing scalable services that could enable children to teach themselves basic reading, writing, and arithmetic skills within 15 months.

Musk himself was on hand to award $5 million checks to each of the winning teams.

All five finalists received $1 million to continue developing their projects: New York-based CCI, which developed lesson plans and a development language so non-coders could create lessons; Chimple, a Bangalore-based learning platform enabling children to learn reading, writing and math on a tablet; RobotTutor, a Pittsburgh-based company that used Carnegie Mellon research to develop an app for Android tablets teaching reading and writing with speech recognition, machine learning, and human-computer interaction; and the two grand prize winners.

The tests required each product to be field tested in Swahili, reaching nearly 3,000 children in 170 villages across Tanzania.

All of the final solutions from each of the five teams that made it to the final round of competition have been open-sourced so anyone can improve on and develop local solutions using the toolkits developed by each team in competition.

Kitkit School, with a team from Berkeley, Calif. and Seoul, developed a program with a game-based core and flexible learning architecture to help kids learn independently, while onebillion merged numeracy content with literacy material to provide directed learning and activities, alongside monitoring to personalize responses to children’s needs.

Both teams are going home with $5 million to continue their work.

The problem of access to basic education is vast: more than 250 million children around the world can’t read or write, and one in five children worldwide aren’t in school, according to data from UNESCO.

The problem of access is compounded by a shortage of teachers at the primary and secondary school level. Some research, cited by XPRIZE, indicates that the world needs to recruit another 68.8 million teachers to provide every child with a primary and secondary education by 2040.

Before the Global Learning XPRIZE field test, 74% of the children who participated were reported as never having attended school; 80% were never read to at home; and 90% couldn’t read a single word of Swahili.

After the 15-month program, working on donated Google Pixel C tablets pre-loaded with the software, those numbers were cut in half.

“Education is a fundamental human right, and we are so proud of all the teams and their dedication and hard work to ensure every single child has the opportunity to take learning into their own hands,” said Anousheh Ansari, CEO of XPRIZE, in a statement. “Learning how to read, write and demonstrate basic math are essential building blocks for those who want to live free from poverty and its limitations, and we believe that this competition clearly demonstrated the accelerated learning made possible through the educational applications developed by our teams, and ultimately hope that this movement spurs a revolution in education, worldwide.”

After the grand prize announcement, XPRIZE said it will work to secure and load the software onto tablets; localize the software; and deliver preloaded hardware and charging stations to remote locations so all finalist teams can scale their learning software across the world.

Google’s Translatotron converts one spoken language to another, no text involved

Every day we creep a little closer to Douglas Adams’ famous and prescient babel fish. A new research project from Google takes spoken sentences in one language and outputs spoken words in another — but unlike most translation techniques, it uses no intermediate text, working solely with the audio. This makes it quick, but more importantly lets it more easily reflect the cadence and tone of the speaker’s voice.

Translatotron, as the project is called, is the culmination of several years of related work, though it’s still very much an experiment. Google’s researchers, and others, have been looking into the possibility of direct speech-to-speech translation for years, but only recently have those efforts borne fruit worth harvesting.

Translating speech is usually done by breaking down the problem into smaller sequential ones: turning the source speech into text (speech-to-text, or STT), turning text in one language into text in another (machine translation), and then turning the resulting text back into speech (text-to-speech, or TTS). This works quite well, really, but it isn’t perfect; each step has types of errors it is prone to, and these can compound one another.
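The cascaded pipeline above can be sketched as three composed stages. This is a toy illustration, not any real translation API: each stage here is a trivial lookup table standing in for an ASR model, an MT model, and a TTS synthesizer, just to show how an error in one stage propagates through the rest.

```python
# Toy stand-ins for the three stages of a cascaded speech translation
# pipeline. All names and data here are hypothetical.

def stt(audio):
    # Speech-to-text: pretend the recognizer mishears "dos" as "doce".
    return {"audio_hola": "hola", "audio_dos": "doce"}.get(audio, "?")

def translate(text):
    # Machine translation: a lookup table standing in for an MT model.
    return {"hola": "hello", "dos": "two", "doce": "twelve"}.get(text, "?")

def tts(text):
    # Text-to-speech: represent the synthesized audio as a tagged string.
    return f"<audio:{text}>"

def cascaded_translate(audio):
    # Each stage consumes the previous stage's output, so errors compound.
    return tts(translate(stt(audio)))

print(cascaded_translate("audio_hola"))  # <audio:hello>
print(cascaded_translate("audio_dos"))   # <audio:twelve> -- "two" was intended
```

A recognition error in the first step is faithfully translated and spoken by the later steps; Translatotron sidesteps this by mapping the source audio directly to target-language audio.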

Furthermore, it’s not really how multilingual people translate in their own heads, as testimony about their own thought processes suggests. How exactly it works is impossible to say with certainty, but few would say that they break down the text and visualize it changing to a new language, then read the new text. Human cognition is frequently a guide for how to advance machine learning algorithms.

Spectrograms of source and translated speech. The translation, let us admit, is not the best. But it sounds better!

To that end researchers began looking into converting spectrograms, detailed frequency breakdowns of audio, of speech in one language directly to spectrograms in another. This is a very different process from the three-step one, and has its own weaknesses, but it also has advantages.

One is that, while complex, it is essentially a single-step process rather than multi-step, which means, assuming you have enough processing power, Translatotron could work quicker. But more importantly for many, the process makes it easy to retain the character of the source voice, so the translation doesn’t come out robotically, but with the tone and cadence of the original sentence.

Naturally this has a huge impact on expression, and someone who relies on translation or voice synthesis regularly will appreciate that not only what they say comes through, but how they say it. It’s hard to overstate how important this is for regular users of synthetic speech.

The accuracy of the translation, the researchers admit, is not as good as the traditional systems, which have had more time to hone their accuracy. But many of the resulting translations are (at least partially) quite good, and being able to include expression is too great an advantage to pass up. In the end, the team modestly describes their work as a starting point demonstrating the feasibility of the approach, though it’s easy to see that it is also a major step forward in an important domain.

The paper describing the new technique was published on arXiv, and you can browse samples of speech, from source to traditional translation to Translatotron, at this page. Just be aware that these are not all selected for the quality of their translation, but serve more as examples of how the system retains expression while getting the gist of the meaning.

Microsoft open sources algorithm that gives Bing some of its smarts

The Eiffel Tower.

Search engines today are more than just the dumb keyword matchers they used to be. You can ask a question—say, “How tall is the tower in Paris?”—and they’ll tell you that the Eiffel Tower is 324 meters (1,063 feet) tall, about the same as an 81-story building. They can do this even though the question never actually names the tower.

How do they do this? As with everything else these days, they use machine learning. Machine-learning algorithms are used to build vectors—essentially, long lists of numbers—that in some sense represent their input data, whether it be text on a webpage, images, sound, or videos. Bing captures billions of these vectors for all the different kinds of media that it indexes. To search the vectors, Microsoft uses an algorithm it calls SPTAG (“Space Partition Tree and Graph”). An input query is converted into a vector, and SPTAG is used to quickly find “approximate nearest neighbors” (ANN), which is to say, vectors that are similar to the input.

This (with some amount of hand-waving) is how the Eiffel Tower question can be answered: a search for “How tall is the tower in Paris?” will be “near” pages talking about towers, Paris, and how tall things are. Such pages are almost surely going to be about the Eiffel Tower.
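The nearest-neighbor idea can be sketched with a brute-force search over toy embeddings. This is not SPTAG itself (which avoids scanning every vector by partitioning the space with trees and a neighborhood graph); the page names and 3-dimensional vectors below are invented purely for illustration.

```python
import math

def cosine(a, b):
    # Cosine similarity: 1.0 for identical directions, near 0 for unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical page embeddings: each vector "represents" a page's content.
index = {
    "eiffel_tower_page": (0.9, 0.8, 0.1),
    "cooking_page":      (0.1, 0.0, 0.9),
    "paris_metro_page":  (0.7, 0.2, 0.2),
}

def nearest(query_vec, index, k=1):
    # Exhaustive scan; real ANN systems trade exactness for speed here.
    ranked = sorted(index, key=lambda name: cosine(query_vec, index[name]),
                    reverse=True)
    return ranked[:k]

# A query like "How tall is the tower in Paris?" would embed near
# tower/Paris pages, so the search surfaces the Eiffel Tower page:
query = (0.85, 0.75, 0.05)
print(nearest(query, index))  # ['eiffel_tower_page']
```

The brute-force scan is O(n) per query; with billions of vectors, that is exactly the cost that SPTAG’s space-partitioning structures are built to avoid.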
