LONDON (Reuters) – Financial technology firm Revolut said on Thursday its valuation had jumped by five times in a year to $1.7 billion at its most recent funding round, making it the first of Britain’s digital-only banks to reach unicorn status.
While it’s usually best to just sit back with a bucket of popcorn and watch reality business drama unfold, I was surprised by the severity of the reactions insinuating that Facebook was eager to profit at the expense of its users’ data, reactions that create paranoia around data analytics and equate data-driven targeting with an underhanded practice of mind control.
Perhaps this is because it’s being bundled up with the clearly unethical issues of fake news and foreign interference, both of which are distinct from the issue of data harvesting through Facebook’s API.
The scandal surrounding Facebook’s Graph API 1.0 and 2.0 might not have been rooted in malicious intent. In fact, a key component of the solution lies in forming a shared understanding, amongst platforms, regulators and users, of what data can reasonably be considered private.
“Facebook gave out its users data!”
Indeed it did. But it is important to gauge the motivation and intent behind doing this. For starters, this wasn’t a “leak” as many have called it. Graph API 1.0 was a conscious feature Facebook rolled out under its Platform vision to allow other developers to utilize Facebook data to give rise to presumably useful new apps and use-cases.
Core features of popular apps like Tinder, Timehop and various Zynga social games are powered by their ability to access users’ preexisting Facebook content, social connections and information, instead of having users build up that information from scratch for each app they use.
It was also not a “loophole”. Limitations and procedures for accessing data were clearly stated in the API’s documentation, available publicly for everyone to read. It did not hide the fact that a user’s friends’ data could also be accessed.
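To make the mechanism concrete, the sketch below builds the kind of Graph API v1.0 request a third-party app would issue once a user granted it permission. It is schematic: the `me/friends` path reflects the historical v1.0 endpoint, but the specific fields shown and the helper function are illustrative, not Facebook’s actual SDK.

```python
from urllib.parse import urlencode

GRAPH_BASE = "https://graph.facebook.com/v1.0"  # historical v1.0 endpoint

def friends_request_url(access_token, fields=("id", "name", "likes")):
    """Build the v1.0-style request a third-party app could issue to read
    a consenting user's friend list (and, with the extended friends_*
    permissions, selected fields on those friends)."""
    query = urlencode({"fields": ",".join(fields),
                       "access_token": access_token})
    return f"{GRAPH_BASE}/me/friends?{query}"

# Note: the app never authenticates as the friends themselves; a single
# consenting user's token was enough -- exactly the documented design
# decision the article is describing.
url = friends_request_url("USER_TOKEN")
```

The point the sketch makes is the one above: friend-data access was a documented parameter of an ordinary API call, not a hidden backdoor.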
This was a product and architectural decision; and a bad decision in hindsight, because it lacked basic precautions against bad actors. But after all, this was API 1.0, and as with all first versions in the new agile world, there are always going to be significant learnings and course corrections.
Importantly, Facebook did not make any money from developers accessing data through the API. The growing narrative insinuating some deceptive, profiteering motive with regard to user data therefore does not resonate with me.
“Data is dangerous since it can be used for psychographic profiling in political campaigns!”
The media outcry has been sounding alarms and highlighting how data can be used to create segments and psychographic profiles to influence people with pinpoint precision. It’s important to realize that this is data-driven marketing and is not something new.
It has been a widely used and constantly improving marketing technique across industries and even in political campaigns, across parties. To assume that it would not have happened if Facebook’s data had never been accessible is incorrect.
The ability to capture data online and in the physical world is only getting better, and there is a growing industry whose core function is specifically capturing and selling data. The core issue here is neither the data source nor data analytics, but rather when a useful scientific tool has been used to add sophistication to an unethical or illegal activity.
Let’s elaborate using a simple example of a fictitious company ‘Homer Donuts’ which decides to run a series of ad campaigns for its different segments. For the price conscious segment their ad says “Homer Donuts is now 20% cheaper!”, for the health conscious segment it says “Try Homer Donuts’ new low-calorie airfried donuts!” and for the convenience conscious segment it says “Donuts delivered to your doorstep in 15 minutes“.
Understanding your target audience in granular segments and customizing your ad’s positioning, messaging and placement for each segment is a core part of data-driven marketing. None of this is wrong or manipulative. However, if they create a fake article titled “Homer Donuts cures baldness!” and serve it as ads to us vulnerable bald folks, unless somehow miraculously true, that is fake news: intentionally and knowingly spreading false information in a way that misleads for benefit or towards an agenda.
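The segment-to-creative mapping described above can be sketched in a few lines. The segment names and ad copy come straight from the fictitious Homer Donuts example; the function name and fallback message are mine.

```python
# Map each audience segment to the creative written for it, per the
# Homer Donuts example in the text.
AD_COPY = {
    "price_conscious": "Homer Donuts is now 20% cheaper!",
    "health_conscious": "Try Homer Donuts' new low-calorie airfried donuts!",
    "convenience_conscious": "Donuts delivered to your doorstep in 15 minutes",
}

def pick_ad(user_segment, default="Try Homer Donuts today!"):
    """Return the ad matched to a user's segment, falling back to a
    generic creative when the user isn't in a known segment."""
    return AD_COPY.get(user_segment, default)
```

Nothing in this lookup is deceptive; the ethical line is crossed only by the content of the creative, not by the targeting itself.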
Differentiating between the tools and their improper utilization is important, lest businesses feel compelled to be apologetic and tentative in striving for advancements in data science and analytics.
The industry needs a framework for shared responsibility on privacy
Facebook made a critical error of judgement in indiscriminately allowing anyone to access consenting users’ data along with those users’ entire lists of friends. It was overly naive not to anticipate manipulation by bad actors and take precautionary measures. That said, it’s important to realize that it’s not in Facebook’s power alone to protect user data. There is a need for a framework and shared understanding of privacy expectation levels amongst platforms, users and regulators to guide corporate practices, social behaviour and potential regulations.
Orkut (c. 2005) and Facebook (c. 2007) were my first exposure to social networks. I remember asking myself back then: Why would I write a message to a friend publicly on their “wall” instead of emailing them privately?
The concept was completely alien and unintuitive to me at the time. And yet a short few years later wishing birthdays, sharing posts or writing a message to a friend in sight of a broader audience, whether to flaunt one’s friendship or to invite others to participate, became the new norm around the world. We tend to forget today that whereas services like chat messengers and email are varying degrees of private, things we write, post or share on social networks are varying degrees of public.
There needs to be a guiding principle on types of user data and the level to which a platform can utilize or analyze it. This level needs to be commensurate to the implicit privacy expectation based on where it is shared, the intended audience and the data’s purpose.
When Kim Kardashian shares a selfie on Instagram with her 109 million followers, it would be unreasonable for her to be outraged if that photo finds its way into the hands of people outside of the platform. At a much smaller scale, when you share something with your social network of 500 friends and acquaintances, while it’s not technically public, you can’t reasonably expect it to be very private data.
It is entirely possible for anyone in that audience to talk about, record or share your content with people outside your selected audience. Conversely, if you are having a private one-on-one chat with someone, your expectation and intent of privacy is a lot higher, despite the fact that your messages can still be passed on by that person.
To illustrate this framework, let’s take the example of Facebook’s most criticized error in the data harvesting scandal: allowing users to grant access to their friends’ information in addition to their own. Essentially, it was the equivalent of any user manually collecting all the data he had rights to view on Facebook, be it his own, his friends’ or public content, and passing it on to the third-party app or whomever else he wished.
So while Facebook added fuel to the fire by making it systematically easier and more effortless for a user to collect all this data and pass it on with a simple click of one button, the fire still exists; data can still be shared outside intended audiences even without the API’s provision.
The objective is not to absolve platforms of the responsibility to keep their users’ data safe, but to reinforce to all users that social networks by design cannot be foolproof data safes, and to adjust social media users’ norms to counter those risks.
It is important to isolate and view all the different issues under consideration here. We are at a pivotal juncture in history where tech companies, regulators and lawmakers are actively reviewing the acceptability of evolved social norms. Along with addressing the serious threats of fake news and foreign intervention, we must debate the more granular, grey-area questions: what data should be considered private, what obligations companies have to protect that data even when the data owner consents, and where the acceptable boundaries of data-driven influencing lie in business and political settings, taking all aspects, good and bad, into consideration.
Anyway, time to share this on Facebook.
Artificial intelligence and the application of it across nearly every aspect of our lives is shaping up to be one of the major step changes of our modern society. Today, a startup that wants to help other companies capitalise on AI’s advances is announcing funding and emerging from stealth mode.
Allegro.AI, which has built a deep learning platform that companies can use to build and train computer-vision-based technologies — from self-driving car systems through to security, medical and any other services that require a system to read and parse visual data — is today announcing that it has raised $11 million in funding, as it prepares for a full-scale launch of its commercial services later this year after running pilots and working with early users in a closed beta.
The round may not be huge by today’s startup standards, but the presence of strategic investors speaks to the interest that the startup has sparked and the gap in the market for what it is offering. Investors include MizMaa Ventures — a Chinese fund that is focused on investing in Israeli startups — along with Robert Bosch Venture Capital GmbH (RBVC), Samsung Catalyst Fund and Israeli fund Dynamic Loop Capital. Other investors (the $11 million actually covers more than one round) are not being disclosed.
Nir Bar-Lev, the CEO and cofounder (Moses Guttmann, another cofounder, is the company’s CTO), started Allegro.AI first as Seematics in 2016 after he left Google, where he had worked in various senior roles for over 10 years. It was partly that experience that led him to the idea that with the rise of AI, there would be an opportunity for companies that could build a platform to help other less AI-savvy companies build AI-based products.
“We’re addressing a gap in the industry,” he said in an interview. Although there are a number of services, for example Rekognition from Amazon’s AWS, which allow a developer to ping a database by way of an API to provide analytics and some identification of a video or image, these are relatively basic and couldn’t be used to build and “teach” full-scale navigation systems, for example.
“An ecosystem doesn’t exist for anything deep-learning based.” Every company that wants to build something would have to invest 80-90 percent of its total R&D resources on infrastructure before getting to the many other aspects of building a product, he said, which might also include the hardware and applications themselves. “We’re providing this so that the companies don’t need to build it.”
Instead, the research scientists who buy into the Allegro.AI platform — it’s not intended for non-technical users (not now at least) — can concentrate on overseeing projects and considering strategic applications and other aspects of the projects. He says that currently, its direct target customers are tech companies and others that rely heavily on tech, “but are not the Googles and Amazons of the world.”
Indeed, companies like Google, AWS, Microsoft, Apple and Facebook have all made major inroads into AI, and in one way or another each has a strong interest in enterprise services and may already be hosting a lot of data in their clouds. But Bar-Lev believes that companies ultimately will be wary to work with them on large-scale AI projects:
“A lot of the data that’s already on their cloud is data from before the AI revolution, before companies realized that the asset today is data,” he said. “If it’s there, it’s there and a lot of it is transactional and relational data.
“But what’s not there is all the signal-based data, all of the data coming from computer vision. That is not on these clouds. We haven’t spoken to a single automotive company that is sharing that with these cloud providers. They are not even sharing it with their OEMs. I’ve worked at Google, and I know how companies are afraid of them. These companies are terrified of tech companies like Amazon and so on eating them up, so if they can now stop and control their assets they will do that.”
Customers have the option of working with Allegro either as a cloud or on-premise product, or a combination of the two, and this brings up the third reason that Allegro believes it has a strong opportunity. The quantity of data that is collected for image-based neural networks is massive, and in some regards it’s not practical to rely on cloud systems to process that. Allegro’s emphasis is on building computing at the edge to work with the data more efficiently, which is one of the reasons investors were also interested.
“AI and machine learning will transform the way we interact with all the devices in our lives, by enabling them to process what they’re seeing in real time,” said David Goldschmidt, VP and MD at Samsung Catalyst Fund, in a statement. “By advancing deep learning at the edge, Allegro.AI will help companies in a diverse range of fields—from robotics to mobility—develop devices that are more intelligent, robust, and responsive to their environment. We’re particularly excited about this investment because, like Samsung, Allegro.AI is committed not just to developing this foundational technology, but also to building the open, collaborative ecosystem that is necessary to bring it to consumers in a meaningful way.”
Allegro.AI is not the first company with hopes of providing AI and deep learning as a service to the enterprise world: Element.AI out of Canada is another startup that is being built on the premise that most companies know they will need to consider how to use AI in their businesses, but lack the in-house expertise or budget (or both) to do that. Until the wider field matures and AI know-how becomes something anyone can buy off-the-shelf, it’s going to present an interesting opportunity for the likes of Allegro and others to step in.
Teachers have long supplemented their incomes by tutoring. And there’s perhaps never been a better, or easier, time to do it than right now. The reason: China-based online education companies are in an apparent race with each other to hire U.S. teachers who’d like to work from home this summer and, using their webcams, “teach cute kids” the English language — in the marketing parlance of one of those companies, Beijing-based VIPKid.
If you doubt that’s true, you haven’t been looking at the classifieds. Just today, five-year-old VIPKid — which reportedly raised $200 million in fresh funding last summer at a $1.5 billion valuation — listed openings for thousands of U.S. teachers, from Jacksonville Beach, Florida, to Saint Joseph, Missouri, to Carmel, Indiana.
Its jobs offensive comes just three days after seven-year-old, Beijing-based China Online Education Group, known as 51Talk, did precisely the same thing.
Both companies are growing quickly and, in the process, trying to outgun competitors, including 14-year-old, Goldman Sachs-backed iTutorGroup, which operates out of Shanghai as VIPABC and boasts of its $1 billion valuation on its home page, and 15-year-old TAL Education Group, a holding company for a group of tutoring-related companies that went public in 2010 and now enjoys a roughly $17 billion market cap. (51Talk is also publicly traded, having IPO’d in 2016. Its market cap is currently $215 million.)
There’s seemingly plenty of demand for all. According to a recent report from the China-focused consultancy iResearch, online language lessons in China were a $4.5 billion market in 2016, a figure expected to grow to nearly $8 billion by next year.
VIPKid seems to be winning the war for media attention, however. Back in January, Forbes named the company the best employer when it comes to work-from-home jobs (up from fifth place in 2017). Its founder, Cindy Mi, has also received glowing coverage in Bloomberg, the Financial Times, and Fast Company, among numerous other English-language outlets. (We also featured Mi in a fireside chat at our signature Disrupt event in San Francisco last fall.)
According to one of its job listings today, teachers are paid between $14 and $18 an hour on average; positions are open to candidates who are eligible to work in the U.S. or Canada, hold a bachelor’s degree in any field, and have at least one school year of traditional teaching (or mentoring, or tutoring) experience.
The company — which is backed by Sequoia Capital, Learn Capital, and an investment firm cofounded by Alibaba’s Jack Ma, among others — says it already works with more than 30,000 teachers. We’ll have to see how much those numbers change after this summer.
Acorns, the mobile service that’s providing a gateway to investing in the stock market, has completed the master plan it set in motion months ago with the acquisition of Vault by finally launching a retirement account product today.
“Setting up a retirement account is confusing and, as a result, two out of three Americans aren’t saving for later in life,” said Noah Kerner, Acorns chief executive officer, in a statement. “Acorns Later removes friction from the decision making process, getting back to our central product philosophy: make big decisions small.”
Based on the same premise as the Acorns app, the Acorns Later feature will automatically recommend a retirement account and portfolio to customers.
The recommendations use an investor’s age, income and “other factors” to suggest one of three individual retirement accounts — either a traditional account, a Roth account (which charges you taxes up front, but not upon withdrawals that meet certain conditions) or an SEP (simplified employee pension plan).
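A rule-based recommender of this kind can be sketched as below. This is purely illustrative: Acorns has not published its actual model, its “other factors,” or its thresholds, so the rules, the income cutoff and the function name here are all hypothetical.

```python
# Hypothetical sketch of a rule-based IRA recommender, suggesting one of
# the three account types the article names. NOT Acorns' actual logic;
# the thresholds below are illustrative, not official IRS limits.
def recommend_ira(age, income, self_employed=False,
                  roth_income_cap=120_000):
    """Suggest 'SEP', 'Roth' or 'traditional' from a few inputs."""
    if self_employed:
        return "SEP"           # simplified employee pension plan
    if income < roth_income_cap and age < 50:
        return "Roth"          # taxed up front; qualified withdrawals tax-free
    return "traditional"       # taxed on withdrawal
```

The design choice being automated is the one the article describes: turn a confusing, multi-factor decision into a single default suggestion the customer can accept.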
Anyone can open an account as long as they can put five dollars into it; then, as with a regular Acorns account, customers set contributions to be withdrawn from an account daily, weekly or monthly.
“We joined the Acorns team to offer more Americans a simpler way to save,” said Randy Fernando, the founder and former chief executive of Vault and current head of investment products at Acorns, in a statement.