Here’s something I didn’t expect to read today. The U.S. Justice Department and Securities and Exchange Commission have subpoenaed Snap for details on its IPO, apparently in connection with a lawsuit from disgruntled shareholders who claim the company played down its rivalry with Instagram.
Reuters first reported on the subpoenas, which Snap has confirmed. Precise details aren’t clear at this point, but Snap told Reuters that the probe is likely “related to the previously disclosed allegations asserted in the class action about our IPO disclosures.”
Snap went public last March with shares popping over 40 percent on debut to give it a valuation of $30 billion. Its market cap today is a more modest $8.9 billion, due to numerous factors including, most prominently, the efforts of rival Facebook to compete via Instagram, which has rolled out a series of features that mimic Snap’s core user experience.
That cloning has taken its toll on Snap’s business.
Today, Instagram Stories — the feature that most closely resembles Snap’s app — has some 400 million users, more than double Snap’s user base. But it is far-fetched to claim that Snap played down that threat when it went public, which is what the class action case claims.
The writing had been on the wall for some time as Snap noted in its S-1 filing ahead of the IPO:
We face significant competition in almost every aspect of our business both domestically and internationally. This includes larger, more established companies such as Apple, Facebook (including Instagram and WhatsApp), Google (including YouTube), Twitter, Kakao, LINE, Naver (including Snow), and Tencent, which provide their users with a variety of products, services, content, and online advertising offerings, and smaller companies that offer products and services that may compete with specific Snapchat features. For example, Instagram, a subsidiary of Facebook, recently introduced a “stories” feature that largely mimics our Stories feature and may be directly competitive. We may also lose users to small companies that offer products and services that compete with specific Snapchat features because of the low cost for our users to switch to a different product or service. Moreover, in emerging international markets, where mobile devices often lack large storage capabilities, we may compete with other applications for the limited space available on a user’s mobile device. We also face competition from traditional and online media businesses for advertising budgets. We compete broadly with the social media offerings of Apple, Facebook, Google, and Twitter, and with other, largely regional, social media platforms that have strong positions in particular countries.
But even if an investor somehow didn’t read that document or reports of it (not advised), there was ample press coverage of the growth of Instagram Stories, and Facebook’s general Snap-cloning efforts, since its launch in August 2016.
In short, it was fairly clear that Instagram was cloning Snap — a key factor in Snap’s growth struggles.
Don’t get me wrong: there’s certainly a lot to worry about over at Snap — those poor user numbers, a string of executive exits and a strange U-turn on a recent hire — but this lawsuit looks to be little more than sour grapes from investors who either didn’t fully understand the space they invested in, or simply made a poor decision to back Snap at whatever price they did.
On that note: anyone who invested at Snap’s peak valuation might have lost more money than betting on Bitcoin during this year’s January hype — and that’s saying something — but ultimately they have no one to blame but themselves.
Metacert, founded by Paul Walsh, originally began as a way to monitor chat rooms for Ethereum scams. Walsh, an early experimenter in cryptocurrencies, grew frustrated when he saw hackers dropping fake links into chat rooms, resulting in users regularly losing cash to scammers.
Now Walsh has expanded his software to email. A new product built for email will show little green or red shields next to links, confirming that a link is what it appears to be. A fake link would appear red while a real PayPal link, say, would appear green. The plugin works with Apple’s Mail app on the iPhone and is called Cryptonite.
“The system utilizes the MetaCert Protocol infrastructure/registry,” said Walsh. “It contains 10 billion classified URLs. This is at the core of all of MetaCert’s products and services. It’s a single API that’s used to protect over 1 million crypto people on Telegram via a security bot and it’s the same API that powers the integration that turned off phishing for the crypto world in 2017. Even when links are shortened? MetaCert unfurls them until it finds the real destination site, and then checks the Protocol to see if it’s verified, unknown or classified as phishing. It does all this in less than 300ms.”
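Walsh’s description boils down to a simple lookup pipeline: expand any shortened link until the real destination is reached, then check that destination against a registry of classified URLs. A minimal sketch of that flow in Python — the registry contents, redirect table and label names here are hypothetical stand-ins, not MetaCert’s actual API:

```python
from urllib.parse import urlparse

# Hypothetical stand-in for a registry of classified URLs, keyed by host.
REGISTRY = {
    "paypal.com": "verified",
    "paypa1-login.example": "phishing",
}

# Hypothetical shortener redirects; a real implementation would follow
# HTTP redirects (e.g. via HEAD requests) rather than use a lookup table.
SHORTENER_REDIRECTS = {
    "https://short.example/abc": "https://paypa1-login.example/signin",
}

def unfurl(url: str) -> str:
    """Follow shortened links until the real destination site is found."""
    seen = set()
    while url in SHORTENER_REDIRECTS and url not in seen:
        seen.add(url)  # guard against redirect loops
        url = SHORTENER_REDIRECTS[url]
    return url

def classify(url: str) -> str:
    """Return 'verified', 'phishing' or 'unknown' for a link's destination."""
    host = urlparse(unfurl(url)).hostname or ""
    return REGISTRY.get(host, "unknown")

print(classify("https://short.example/abc"))   # phishing (hidden behind a shortener)
print(classify("https://paypal.com/account"))  # verified
print(classify("https://example.org/"))        # unknown
```

The key design point is that classification happens on the final destination, not the visible link — which is why a shortened URL can’t be used to smuggle a phishing site past the check.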
Walsh is also working on a system to scan for fake news in the wild using technology similar to his anti-phishing solution. The company is currently raising funding and is working on a utility token.
Walsh sees his first customers coming from the enterprise, and expects IT shops to implement the software to show employees which links are allowed — i.e. company or partner links — and which are bad.
“It’s likely we will approach this top down and bottom up, which is unusual for enterprise security solutions. But ours is an enterprise service that anyone can install on their phone in less than a minute,” he said. “SMEs isn’t typically a target market for email security companies but we believe we can address this massive market with a solution that’s not scary to setup and expensive to support. More research is required though, to see if our hypothesis is right.”
“With MetaCert’s security, training is reduced to a single sentence: ‘if it doesn’t have a green shield, assume it’s not safe,’” said Walsh.
Today, Google’s YouTube VR app arrives on the $199 Oculus Go, bringing the largest library of VR content on the web to Facebook’s entry-level VR device.
YouTube brings plenty of content in conventional and more immersive video types. It’s undoubtedly the biggest single hub of 360 content and native formats like VR180, though offering access to the library at large is probably far more important to the Oculus platform.
One of the interesting things about Oculus’s strategy with the Go headset is that gaming turned out to be the minority use case, trailing media consumption. If you find it hard to believe that so many people are out there binging on 360 videos, that’s because they probably aren’t. Users have instead co-opted the device’s capabilities to make it a conventional movie and TV viewing device: there are apps from Netflix and Hulu, while Facebook has also built Oculus TV, a feature that’s still in its infancy but basically offers an Apple TV-like environment for watching a lot of 2D content in a social setting.
At the company’s Oculus Connect conference this past year, CTO John Carmack remarked that about 70 percent of users’ time on the Go has gone to watching videos, with about 30 percent going to gaming. Oculus has positioned itself as a gaming company in a lot of ways via its investments, so it will be interesting to see how it grows its mobile platform to make the video aspect of its VR business more attractive.
With YouTube, the company gets effortless access to a huge pool of content. YouTube would have been a great partner for Oculus TV, but a dedicated app brings a lot to users. It wasn’t super clear whether Google was going to play hardball with the YouTube app and keep standalone access confined to its Daydream platform; as the company’s homegrown VR ambitions have grown more subdued, it looks like it has had time to focus on external platforms.
The idea that social media can be harmful to our mental and emotional well-being is not a new one, but little has been done by researchers to directly measure the effect; surveys and correlative studies are at best suggestive. A new experimental study out of Penn State, however, directly links more social media use to worse emotional states, and less use to better.
To be clear on the terminology here, a simple survey might ask people to self-report that using Instagram makes them feel bad. A correlative study would, for example, find that people who report more social media use are more likely to also experience depression. An experimental study compares the results from an experimental group with their behavior systematically modified, and a control group that’s allowed to do whatever they want.
This study, led by Melissa Hunt at Penn State’s psychology department, is the latter — which despite intense interest in this field and phenomenon is quite rare. The researchers only identified two other experimental studies, both of which only addressed Facebook use.
One hundred and forty-three students from the school were monitored for three weeks after being assigned to either limit their social media use to about 10 minutes per app (Facebook, Snapchat and Instagram) per day or continue using it as they normally would. They were monitored for a baseline before the experimental period and assessed weekly on a variety of standard tests for depression, social support and so on. Social media usage was monitored via the iOS battery use screen, which shows app use.
The results are clear. As the paper, published in the latest Journal of Social and Clinical Psychology, puts it:
The limited use group showed significant reductions in loneliness and depression over three weeks compared to the control group. Both groups showed significant decreases in anxiety and fear of missing out over baseline, suggesting a benefit of increased self-monitoring.
Our findings strongly suggest that limiting social media use to approximately 30 minutes per day may lead to significant improvement in well-being.
It’s not the final word in this, however. Some scores did not see improvement, such as self-esteem and social support. And later follow-ups to see if feelings reverted or habit changes were less than temporary were limited because most of the subjects couldn’t be compelled to return. (Psychology, often summarized as “the study of undergraduates,” relies on student volunteers who have no reason to take part except for course credit, and once that’s given, they’re out.)
That said, it’s a straightforward causal link between limiting social media use and improving some aspects of emotional and social health. The exact nature of the link, however, is something at which Hunt could only speculate:
Some of the existing literature on social media suggests there’s an enormous amount of social comparison that happens. When you look at other people’s lives, particularly on Instagram, it’s easy to conclude that everyone else’s life is cooler or better than yours.
When you’re not busy getting sucked into clickbait social media, you’re actually spending more time on things that are more likely to make you feel better about your life.
The researchers acknowledge the limited nature of their study and suggest numerous directions for colleagues in the field to take it from here. A more diverse population, for instance, or including more social media platforms. Longer experimental times and comprehensive follow-ups well after the experiment would help, as well.
The 30-minute limit was chosen as a conveniently measurable one, but the team does not intend to say that it is by any means the “correct” amount. Perhaps half or twice as much time would yield similar or even better results, they suggest: “It may be that there is an optimal level of use (similar to a dose response curve) that could be determined.”
Until then, we can use common sense, Hunt suggested: “In general, I would say, put your phone down and be with the people in your life.”
Read this slowly: The White House’s press secretary has tweeted a manipulated video shared by the editor-at-large of conspiracy theorist outlet Infowars to attempt to justify its decision to suspend the press credentials of CNN’s chief White House correspondent.
CNN’s Jim Acosta had his press pass pulled by the White House earlier today after press secretary Sarah Sanders claimed he had “plac[ed] his hands on a young woman just trying to do her job”.
The journalist had been trying to continue asking President Trump questions during a contentious exchange at a White House press briefing.
During this exchange Trump cut over him verbally — saying “that’s enough” — at which point a female White House intern moved towards Acosta and attempted to take the microphone out of his hands.
The journalist dodged and then blocked several attempts to take the microphone by using his arm and the side of his hand against the intern’s arm, addressing her with “pardon me ma’am” as he did so, and indicating that he was trying to ask Trump another question.
You can see what happened in a video shared by NBC News, which captured footage of the incident (below). In the footage the intern can be seen giving up her attempt to remove the mic after Acosta speaks to her. He goes on to ask Trump if he is “worried about indictments coming down in [the Russia] investigation”.
Trump does not answer, repeating “that’s enough” and “put down the mic”.
Getting no answers, Acosta does then relinquish the mic.
BREAKING: White House aide grabs and tries to physically remove a microphone from CNN Correspondent Jim Acosta during a contentious exchange with President Trump at a news conference. pic.twitter.com/fFm7wclFw2
Far right conspiracy theorist outlet Infowars quickly spun into action after this episode — publishing a couple of posts on its website couching Acosta’s actions as a “physical confrontation with female White House staffer”, and asking in a lengthy video post whether Acosta “assault[ed] a woman?”.
In the video Infowars editor-at-large Paul Joseph Watson can be seen following the modern political disinformation playbook — avoiding personally claiming the incident constituted an assault while repeatedly showing manipulated, slowed down footage, stripped of its audio, to make it look like an assault — all the while suggestively reframing what happened to whip up hyperpartisan sentiment (‘what if this had been a conservative reporter ranting at Obama’ etc) in order to manipulate his audience to side with the president against CNN.
Watson was also active on social media, seeding a further doctored version of the footage on Twitter — one which includes a repeated close crop that zooms in on the CNN reporter’s hand against the intern’s arm, making it look as if Acosta is giving her a karate chop.
Yes the incident clearly did happen. Acosta placed his hands on a woman. Do you think we're all stupid? pic.twitter.com/lbYOXtgXJx
This is very clearly not what the unedited video shows.
In the unedited footage Acosta can be seen essentially brushing off the intern’s attempt to grab the mic — and addressing her politely at the crucial moment, when the side of his hand is resting on her arm. She responds to his polite “pardon me ma’am” by stepping back and abandoning her attempt to take the mic away.
Acosta then asks Trump more questions which Trump does not answer.
Now, Infowars conspiracy theorists creating doctored videos to spin hyperpartisan junk news is neither new nor news. Their business model is based on manipulating viewers’ emotions to flog them, er, junk supplements.
But what is new is that three hours after Sanders issued her series of tweets accusing Acosta of inappropriately placing his hands on a young woman, the White House press secretary tweeted again — this time appearing to share the exact same doctored video that had been shared earlier by Watson as he worked to put Infowars’ divisive alternative spin on reality.
Sanders referred directly to the video in her tweet, claiming that “inappropriate behavior” had been “clearly documented in this video”:
We stand by our decision to revoke this individual’s hard pass. We will not tolerate the inappropriate behavior clearly documented in this video. pic.twitter.com/T8X1Ng912y
Trump has made no secret of his hatred for CNN — repeatedly branding the cable news network ‘fake news’ in myriad vitriolic tweets since taking office.
Now his administration has gone a step further in seeking to stamp out reality by using manipulated video to bar a genuine news outlet from presidential press briefings. A news outlet that the president especially hates.
Let that sink in.
It’s not just conspiracy theorists who use this kind of information manipulation playbook of course. Authoritarian regimes, terrorists, criminals, racists… the list goes on.
Now you can add the White House press secretary to that ignominious list.
We’ve reached out to the White House to ask why Sanders chose to share the Infowars video — rather than sharing unedited footage of the incident. We’ll update this post with any response.
Meanwhile, on the list of people being allowed into White House press briefings these days…
Facebook has yet again declined an invitation for its founder and CEO Mark Zuckerberg to answer international politicians’ questions about how disinformation spreads on his platform and undermines democratic processes.
But policymakers aren’t giving up — and have upped the ante by issuing a fresh invitation signed by representatives from another three national parliaments. So the call for global accountability is getting louder.
Now representatives from a full five parliaments have signed up to an international grand committee calling for answers from Zuckerberg, with Argentina, Australia and Ireland joining the UK and Canada to try to pile political pressure on Facebook.
The UK’s Digital, Culture, Media and Sport (DCMS) committee has been asking for Facebook’s CEO to attend its multi-month enquiry for the best part of this year, without success…
The twist in its last request was that it came not just from the DCMS inquiry into online disinformation but also from the Canadian Standing Committee on Access to Information, Privacy and Ethics.
This year policymakers on both sides of the Atlantic have been digging down the rabbit hole of online disinformation — before and since the Cambridge Analytica story erupted into a major global scandal — announcing last week that they will form an ‘international grand committee’ to further their enquiries.
The two committees will convene for a joint hearing in the UK parliament on November 27 — and they want Zuckerberg to join them to answer questions related to the “platform’s malign use in world affairs and democratic process”, as they put it in their invitation letter.
Facebook has previously despatched a number of less senior representatives to talk to policymakers probing damage caused by disinformation — including its CTO, Mike Schroepfer, who went before the DCMS committee in April.
But both Schroepfer and Zuckerberg have admitted the accountability buck stops with Facebook’s CEO.
In addition to comprehensive privacy reviews, we put products through rigorous data security testing. We also meet with regulators, legislators and privacy experts around the world to get input on our data practices and policies.
The increasingly pressing question, though, is to whom is Facebook actually accountable?
Yet so far only the supranational EU parliament has managed to secure a public meeting with Facebook’s CEO. And MEPs there had to resort to heckling Zuckerberg to try to get answers to their actual questions.
“Facebook say that they remain ‘committed to working with our committees to provide any additional relevant information’ that we require. Yet they offer no means of doing this,” tweeted DCMS chair Damian Collins today, reissuing the invitation for Zuckerberg. “The call for accountability is growing, with representatives from 5 parliaments now meeting on the 27th.”
The letter to Facebook’s CEO notes that the five nations represent 170 million Facebook users.
“We call on you once again to take up your responsibility to Facebook users, and speak in person to their elected representatives,” it adds.
Facebook say that they remain "committed to working with our committees to provide any additional relevant information" that we require. Yet they offer no means of doing this. The call for accountability is growing, with representatives from 5 parliaments now meeting on the 27th pic.twitter.com/VJFtpqUi0r
The UK’s information commissioner said yesterday that Facebook needs to overhaul its business model, giving evidence to parliament on the “unprecedented” data investigation her office has been running which was triggered by the Cambridge Analytica scandal. She also urged policymakers to strengthen the rules on the use of people’s data for digital campaigning.
Last month the European parliament also called for Facebook to let in external auditors in the wake of Cambridge Analytica, to ensure users’ data is being properly protected — yet another invitation Facebook has declined.
Meanwhile an independent report assessing the company’s human rights impact in Myanmar — which Facebook commissioned but chose to release yesterday on the eve of the US midterms, when most domestic eyeballs would be elsewhere — agreed with the UN’s damning assessment that Facebook did not do enough to prevent its platform from being used to incite ethnic violence.
The report also said Facebook is still not doing enough in Myanmar.
The UK’s data watchdog has warned that Facebook must overhaul its privacy-hostile business model or risk burning user trust for good.
Comments she made today have also raised questions over the legality of so-called lookalike audiences to target political ads at users of its platform.
Information commissioner Elizabeth Denham was giving evidence to the Digital, Culture, Media and Sport committee in the UK parliament this morning. She’s just published her latest report to parliament, on the ICO’s (still ongoing) investigation into the murky world of data use and misuse in political campaigns.
Since May 2017 the watchdog has been pulling on myriad threads attached to the Cambridge Analytica Facebook data misuse scandal — to, in the regulator’s words, “follow the data” across an entire ecosystem of players; from social media firms to data brokers to political parties, and indeed beyond to other still unknown actors with an interest in also getting their hands on people’s data.
Denham readily admitted to the committee today that the sprawling piece of work had opened a major can of worms.
“I think we were astounded by the amount of data that’s held by all of these agencies — not just social media companies but data companies like Cambridge Analytica; political parties the extent of their data; the practices of data brokers,” she said.
“We also looked at universities, and the data practices in the Psychometric Centre, for example, at Cambridge University — and again I think universities have more to do to control data between academic researchers and the same individuals that are then running commercial companies.
“There’s a lot of switching of hats across this whole ecosystem — that I think there needs to be clarity on who’s the data controller and limits on how data can be shared. And that’s a theme that runs through our whole report.”
“The major concern that I have in this investigation is the very disturbing disregard that many of these organizations across the entire ecosystem have for personal privacy of UK citizens and voters. So if you look across the whole system that’s really what this report is all about — and we have to improve these practices for the future,” she added. “We really need to tighten up controls across the entire ecosystem because it matters to our democratic processes.”
Asked whether she would personally trust her data to Facebook, Denham told the committee: “Facebook has a long way to go to change practices to the point where people have deep trust in the platform. So I understand social media sites and platforms and the way we live our lives online now is here to stay but Facebook needs to change, significantly change their business model and their practices to maintain trust.”
“I understand that platforms will continue to play a really important role in people’s lives but they need to take much greater responsibility,” she added when pressed to confirm that she wouldn’t trust Facebook.
Inferred data refers to inferences made about individuals based on data-mining their wider online activity — such as identifying a person’s (non-stated) political views by examining which Facebook Pages they’ve liked. Facebook offers advertisers an interests-based tool to do this — by creating so-called lookalike audiences comprised of users with similar interests.
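Mechanically, this kind of inference can be as simple as a similarity lookup: score each user against a “seed” set of interest signals by how much their page likes overlap with it. A toy sketch of that idea — all names, pages and thresholds here are hypothetical illustrations, not Facebook’s actual system:

```python
# Toy illustration of interest-based "lookalike" matching: users who share
# many page likes with a seed audience get high similarity scores.
SEED_LIKES = {"Party X", "Candidate Y", "Policy Blog Z"}  # hypothetical pages

USERS = {
    "user_a": {"Party X", "Candidate Y", "Cooking Daily"},
    "user_b": {"Cooking Daily", "Travel Pics"},
}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |intersection| / |union|."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Rank users by similarity to the seed audience; a real system would
# target ads at everyone scoring above some threshold.
scores = {user: jaccard(likes, SEED_LIKES) for user, likes in USERS.items()}
best = max(scores, key=scores.get)
print(best, scores[best])  # user_a 0.5
```

The point of the sketch is that no one ever states a political view: the “inference” falls out of overlap arithmetic on likes — which is exactly the kind of derived personal data the commissioner’s comments put in question.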
But if the information commissioner’s view of data protection law is correct, it implies that use of such tools to infer individuals’ political views could be in breach of European privacy law — unless explicit consent is gained beforehand for people’s personal data to be used for that purpose.
“What’s happened here is the model that’s familiar to people in the commercial sector — or behavioural targeting — has been transferred, I think transformed, into the political arena,” said Denham. “And that’s why I called for an ethical pause so that we can get this right.
“I don’t think that we want to use the same model that sells us holidays and shoes and cars to engage with people and voters. I think that people expect more than that. This is a time for a pause, to look at codes, to look at the practices of social media companies, to take action where they’ve broken the law.”
She told MPs that the use of lookalike audiences should be included in a Code of Practice which she has previously called for vis-a-vis political campaigns’ use of data tools.
Social media platforms should also disclose the use of lookalike audiences for targeting political ads at users, she said today — a data-point that Facebook has nonetheless omitted to include in its newly launched political ad disclosure system.
“The use of lookalike audiences should be made transparent to the individuals,” she argued. “They need to know that a political party or an MP is making use of lookalike audiences, so I think the lack of transparency is problematic.”
Asked whether the use of Facebook lookalike audiences to target political ads at people who have chosen not to publicly disclose their political views is legal under current EU data protection laws, she declined to make an instant assessment — but told the committee: “We have to look at it in detail under the GDPR but I’m suggesting the public is uncomfortable with lookalike audiences and it needs to be transparent.”
We’ve reached out to Facebook for comment.
Links to known cyber security breaches
The ICO’s latest report to parliament and today’s evidence session also lit up a few new nuggets of intel on the Cambridge Analytica saga, including the fact that some of the misused Facebook data — which had found its way to Cambridge University’s Psychometric Centre — was not only accessed by IP addresses that resolve to Russia but some IP addresses have been linked to other known cyber security breaches.
“That’s what we understand,” Denham’s deputy, James Dipple-Johnstone told the committee. “We don’t know who is behind those IP addresses but what we understand is that some of those appear on lists of concern to cyber security professionals by virtue of other types of cyber incidents.”
“We’re still examining exactly what data that was, how secure it was and how anonymized,” he added, saying “it’s part of an active line of enquiry”.
The ICO has also passed the information on “to the relevant authorities”, he added.
The regulator also revealed that it now knows exactly who at Facebook was aware of the Cambridge Analytica breach at the earliest instance — saying it has internal emails related to the issue which have “quite a large distribution list”. Though it’s still not been made public whether or not Mark Zuckerberg’s name is on that list.
When pressed on whether Zuckerberg was on the distribution list for the breach emails, Denham declined to confirm today, saying “we just don’t want to get it wrong”.
The ICO said it would pass the list to the committee in due course.
Which means it shouldn’t be too long before we know exactly who at Facebook was responsible for not disclosing the Cambridge Analytica breach to relevant regulators (and indeed parliamentarians) sooner.
The committee is pressing on this because Facebook gave earlier evidence to its online disinformation enquiry yet omitted to mention the Cambridge Analytica breach entirely. (Hence its accusation that senior management at Facebook deliberately withheld pertinent information.)
Denham agreed it would have been best practice for Facebook to notify relevant regulators at the time it became aware of the data misuse — even without the GDPR’s new legal requirement being in force then.
She also agreed with the committee that it would be a good idea for Zuckerberg to personally testify to the UK parliament.
Though Facebook has yet to confirm whether or not Zuckerberg will make himself available this time.
How to regulate Internet harms?
This summer the ICO announced it would be issuing Facebook with the maximum penalty possible under the country’s old data protection regime for the Cambridge Analytica data breach.
At the same time Denham also called for an ethical pause on the use of social media microtargeting of political ads, saying there was an urgent need for “greater and genuine transparency” about the use of such technologies and techniques to ensure “people have control over their own data and that the law is upheld”.
She reiterated that call for an ethical pause today.
She also said the fine the ICO handed Facebook last month for the Cambridge Analytica breach would have been “significantly larger” under the rebooted privacy regime ushered in by the pan-EU GDPR framework this May — adding that it would be interesting to see how Facebook responds to the fine (i.e. whether it pays up or tries to appeal).
“We have evidence… that Cambridge Analytica may have partially deleted some of the data, but even as recently as spring 2018 some of the data was still there at Cambridge Analytica,” she told the committee. “So the follow-up was less than robust. And that’s one of the reasons that we fined Facebook £500,000.”
Data deletion assurances that Facebook had sought from various entities after the data misuse scandal blew up don’t appear to be worth the paper they’re written on — with the ICO also noting that some of these confirmations had not even been signed.
Dipple-Johnstone also said the ICO believes that a number of additional individuals and academic institutions received “parts” of the Cambridge Analytica Facebook data-set — i.e. in addition to the multiple known entities in the saga so far (such as GSR’s Aleksandr Kogan, and CA whistleblower Chris Wylie).
“We’re examining exactly what data has gone where,” he said, adding that the ICO is looking into “about half a dozen” entities — but declining to name names while its enquiry remains ongoing.
Asked for her views on how social media should be regulated by policymakers to rein in data abuses and misuses, Denham suggested a system-based approach that looks at effectiveness and outcomes — saying it boils down to accountability.
“What is needed for tech companies — they’re already subject to data protection law but when it comes to the broader set of Internet harms that your committee is speaking about — misinformation, disinformation, harm to children in their development, all of these kinds of harms — I think what’s needed is an accountability approach where parliament sets the objectives and the outcomes that are needed for the tech companies to follow; that a Code of Practice is developed by a regulator; backstopped by a regulator,” she suggested.
“What I think’s really important is the regulators looking at the effectiveness of systems like takedown processes; recognizing bots and fake accounts and disinformation — rather than the regulator taking individual complaints. So I think it needs to be a system approach.”
“I think the time for self regulation is over. I think that ship has sailed,” she also told the committee.
On the regulatory powers front, Denham was generally upbeat about the potential of the new GDPR framework to curb bad data practices — pointing out that not only does it allow for supersized fines but companies can be ordered to stop processing data, which she suggested is an even more potent tool to control rogue data-miners.
She also suggested another new power — to go in and inspect companies and conduct data audits — will help the ICO get results.
But she said the ICO may need to ask parliament for another tool to be able to carry out effective data investigations. “One of the areas that we may be coming back to talk to parliament, to talk to government about is the ability to compel individuals to be interviewed,” she said, adding: “We have been frustrated by that aspect of our investigation.”
Both the former CEO of Cambridge Analytica, Alexander Nix, and Kogan, the academic who built the quiz app used to extract Facebook user data so it could be processed for political ad targeting purposes, had refused to appear for an interview with the regulator under caution, she said today.
On the wider challenge of regulating a full range of “Internet harms” — spanning the spread of misinformation, disinformation and also offensive user-generated content — Denham suggested a hybrid regulatory model might ultimately be needed to tackle this, suggesting the ICO and communications regulator Ofcom might work together.
“It’s a very complex area. No country has tackled this yet,” she conceded, noting the controversy around Germany’s social media take down law, and adding: “It’s very challenging for policymakers… Balancing privacy rights with freedom of speech, freedom of expression. These are really difficult areas.”
Asked what her full ‘can of worms’ investigation has highlighted for her, Denham summed it up as: “A disturbing amount of disrespect for personal data of voters and prospective voters.”
“The main purpose of this [investigation] is to pull back the curtain and show the public what’s happening with their personal data,” she added. “The politicians, the policymakers need to think about this too — stronger rules and stronger laws.”
One committee member suggestively floated the idea of social media platforms being required to have an ICO officer inside their organizations — to grease their compliance with the law.
Smiling, Denham responded that it would probably make for an uncomfortable prospect on both sides.
This time around, the $50 million raise includes new investors Shunwei Capital from China, DST Partners and RPS Ventures, as well as returning backers Sequoia India, SAIF Partners, Venture Highway and Y Combinator.
Meesho has adjusted its focus considerably since graduating from YC, and today it operates as an enabler for people in India who want to sell products using social media. Primarily the focus is WhatsApp, the world’s largest messaging app, which counts India as its biggest market with over 200 million monthly users.
The company provides sellers with products (which it sources from suppliers), as well as inventory management and other basic seller tools. In turn, sellers hawk their catalog to friends and family as they please. Meesho handles all payment and logistics, giving sellers a cut of each transaction.
Interestingly, there’s no fixed price for products. That means that sellers can vary the price and even haggle with their customers just as they’d do in real life.
“We want to simulate the exact experience that happens offline,” Aatrey explained. “Sellers have the liberty to sell to 10 different people at 10 different prices.”
Sales typically happen between friends and family because there is a trusted relationship. Selling consistently to family members doesn’t seem like an easy task, but Meesho operates in a range of verticals, including fashion, living, cosmetics and more, which the company said makes repeat business easier. The firm is working on technology that helps sellers figure out which products to push to their customer list, but Aatrey believes a good seller has a knack for what their customers will want on a given day or week.
Aatrey — who started Meesho with fellow IIT-Delhi graduate Sanjeev Barnwal in 2015 — told TechCrunch that the startup is picky about who it selects as a seller, and those who are not active enough are removed from the platform — although he said the latter doesn’t happen a lot. Instead, Meesho offers training and skill development programs to sellers who perform well.
“We go and intentionally invest more to scale up the sellers who show more promise,” he explained.
(Left to right) Meesho founders Sanjeev Barnwal and Vidit Aatrey
Meesho says it has registered some two million sellers to date, with a goal of reaching 20 million by 2020. Some 80 percent are female because the startup first targeted housewives, but, Aatrey said, it is increasingly seeing male sellers join. Nearly one-third of sellers are students, and many others use the app part-time to supplement an existing income.
In one example, Aatrey explained that a household earning 30,000 INR ($410) per month can typically make an additional 8,000-10,000 INR if one of the homemakers uses Meesho full-time. That’s a pretty significant addition.
One of the more intriguing pieces of the Meesho business is that, by tapping into people’s trusted relationships and offering them incentives to sell products without requiring operating capital, it has cut a lot of the expensive overheads associated with e-commerce. Customer acquisition cost is low, for example, and there’s no need to dole out discounts, both of which are expensive line items for Amazon India and its rival Flipkart, which is owned by Walmart.
“We don’t burn a lot of money,” Aatrey said, although he declined to provide specific financial information.
With this new money in the bank, Meesho is working to go deeper into its existing areas of business. That’ll include offering more product categories, bringing on more suppliers, extending its supply chain and developing tools to help sellers sell better.
Aatrey also confirmed that the company is looking to develop a supply chain in China, which is where Shunwei and its network will come into play. He further revealed that the company is beginning to think about the potential of its own private-label products — an Amazon-style move — although that isn’t likely to happen just yet.
Another longer-term objective is international expansion.
“For the next 12 months we won’t go beyond India,” Aatrey explained. “But what we are doing here is very similar to Southeast Asia, Latin America and even the Middle East so at some point we’ll think about venturing overseas.”
With three funding rounds in the past year, the Meesho CEO revealed that the company is well capitalized but he didn’t rule out the potential to raise money again.
“If we get a good offer that makes sense for the growth of the business, we are open to it,” he said.