Facebook urged to give users greater control over what they see

Academics at the universities of Oxford and Stanford think Facebook should give users greater transparency and control over the content they see on its platform.

They also believe the social networking giant should radically reform its governance structures and processes to throw more light on content decisions, including by looping in more external experts to steer policy.

Such changes are needed to address widespread concerns about Facebook’s impact on democracy and on free speech, they argue in a report published today, entitled Glasnost! Nine Ways Facebook Can Make Itself a Better Forum for Free Speech and Democracy, which includes a series of recommendations for reforming the platform.

“There is a great deal that a platform like Facebook can do right now to address widespread public concerns, and to do more to honour its public interest responsibilities as well as international human rights norms,” writes lead author Timothy Garton Ash.

“Executive decisions made by Facebook have major political, social, and cultural consequences around the world. A single small change to the News Feed algorithm, or to content policy, can have an impact that is both faster and wider than that of any single piece of national (or even EU-wide) legislation.”

Here’s a rundown of the report’s nine recommendations:

  1. Tighten Community Standards wording on hate speech — the academics argue that Facebook’s current wording on key areas is “overbroad, leading to erratic, inconsistent and often context-insensitive takedowns”; and also generating “a high proportion of contested cases”. Clear and tighter wording could make consistent implementation easier, they believe
  2. Hire more and contextually expert content reviewers — “the issue is quality as well as quantity”, the report points out, pressing Facebook to hire more human content reviewers plus a layer of senior reviewers with “relevant cultural and political expertise”; and also to engage more with trusted external sources such as NGOs. “It remains clear that AI will not resolve the issues with the deeply context-dependent judgements that need to be made in determining when, for example, hate speech becomes dangerous speech,” they write
  3. Increase ‘decisional transparency’ — Facebook still does not offer adequate transparency around content moderation policies and practices, they suggest, arguing it needs to publish more detail on its procedures, including specifically calling for the company to “post and widely publicize case studies” to provide users with more guidance and to provide potential grounds for appeals
  4. Expand and improve the appeals process — also on appeals, the report recommends Facebook give reviewers much more context around disputed pieces of content, and provide appeals statistics to analysts and users. “Under the current regime, the initial internal reviewer has very limited information about the individual who posted a piece of content, despite the importance of context for adjudicating appeals,” they write. “A Holocaust image has a very different significance when posted by a Holocaust survivor or by a Neo-Nazi.” They also suggest Facebook should work, in dialogue with users — such as with the help of a content policy advisory group — on developing an appeals due process that is “more functional and usable for the average user”
  5. Provide meaningful News Feed controls for users — the report suggests Facebook users should have more meaningful controls over what they see in the News Feed, with the authors dubbing current controls “altogether inadequate” and advocating for far more, such as the ability to switch off the algorithmic feed entirely (without the chronological view defaulting back to the algorithm when the user reloads, as happens now for anyone who switches away from the AI-controlled view). The report also suggests adding a News Feed analytics feature, to give users a breakdown of the sources they’re seeing and how that compares with control groups of other users. Facebook could also offer a button to let users adopt a different perspective by exposing them to content they don’t usually see, they suggest
  6. Expand context and fact-checking facilities — the report pushes for “significant” resources to be ploughed into identifying “the best, most authoritative, and trusted sources” of contextual information for each country, region and culture — to help feed Facebook’s existing (but still inadequate and not universally distributed) fact-checking efforts
  7. Establish regular auditing mechanisms — there have been some civil rights audits of Facebook’s processes (such as this one, which suggested Facebook formalize a human rights strategy), but the report urges the company to open itself up to more of these, suggesting the model of meaningful audits should be replicated and extended to other areas of public concern, including privacy, algorithmic fairness and bias, diversity and more
  8. Create an external content policy advisory group — key content stakeholders from civil society, academia and journalism should be enlisted by Facebook for an expert policy advisory group to provide ongoing feedback on its content standards and implementation; as well as also to review its appeals record. “Creating a body that has credibility with the extraordinarily wide geographical, cultural, and political range of Facebook users would be a major challenge, but a carefully chosen, formalized, expert advisory group would be a first step,” they write, noting that Facebook has begun moving in this direction but adding: “These efforts should be formalized and expanded in a transparent manner.”
  9. Establish an external appeals body — the report also urges “independent, external” ultimate control of Facebook’s content policy, via an appeals body that sits outside the mothership and includes representation from civil society and digital rights advocacy groups. The authors note Facebook is already flirting with this idea, citing comments made by Mark Zuckerberg last November, but also warn this needs to be done properly if power is to be “meaningfully” devolved. “Facebook should strive to make this appeals body as transparent as possible… and allow it to influence broad areas of content policy… not just rule on specific content takedowns,” they warn

In conclusion, the report notes that the content issues it focuses on are not unique to Facebook’s business but apply widely across Internet platforms — hence growing interest in some form of “industry-wide self-regulatory body”. Though it suggests that achieving that kind of overarching regulation will be “a long and complex task”.

In the meantime the academics remain convinced there is “a great deal that a platform like Facebook can do right now to address widespread public concerns, and to do more to honour its public interest responsibilities, as well as international human rights norms” — with the company front and center of the frame given its massive size (2.2BN+ active users).

“We recognize that Facebook employees are making difficult, complex, contextual judgements every day, balancing competing interests, and not all those decisions will benefit from full transparency. But all would be better for more regular, active interchange with the worlds of academic research, investigative journalism, and civil society advocacy,” they add.

We’ve reached out to Facebook for comment on the report’s recommendations.

The report was prepared by the Free Speech Debate project of the Dahrendorf Programme for the Study of Freedom, St. Antony’s College, Oxford, in partnership with the Reuters Institute for the Study of Journalism, University of Oxford, the Project on Democracy and the Internet, Stanford University, and the Hoover Institution, Stanford University.

Last year we offered a few of our own ideas for fixing Facebook — including suggesting the company hire orders of magnitude more expert content reviewers, as well as providing greater transparency into key decisions and processes.

Wrest control from a snooping smart speaker with this teachable “parasite”

What do you get when you put one Internet-connected device on top of another? A little more control than you otherwise would have, in the case of Alias, a “teachable ‘parasite’”: a smart speaker topper made by two designers, Bjørn Karmann and Tore Knudsen.

The Raspberry Pi-powered, fungus-inspired blob’s mission is to whisper sweet nonsense into Amazon Alexa’s (or Google Home’s) always-on ear so it can’t accidentally snoop on your home.

[Video: Project Alias, from Bjørn Karmann on Vimeo]

Alias will only stop feeding noise into its host’s speakers when it hears its own wake command — which can be whatever you like.

The middleman IoT device has its own local neural network, allowing its owner to christen it with a name (or sound) of their choosing via a training interface in a companion app.

The open source TensorFlow library was used for building the name training component.

So instead of having to say “Alexa” or “Ok Google” to talk to a commercial smart speaker — and thus being stuck parroting a big tech brand name in your own home, not to mention being saddled with a device that’s always vulnerable to vocal pranks (and worse: accidental wiretapping) — you get to control what the wake word is, thereby taking back a modicum of control over a natively privacy-hostile technology.

This means you could rename Alexa “Bezosallseeingeye”, or refer to your Google Home as “Carelesswhispers”. Whatever floats your boat.

Once Alias hears its custom wake command it will stop feeding noise into the host speaker — enabling the underlying smart assistant to hear and respond to commands as normal.
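
To make the mechanism concrete, here is a minimal sketch of the Alias pattern, assuming a locally trained wake-word classifier (the real project trains a small neural network with TensorFlow via its companion app and publishes its code on GitHub; the `wake_word_score` stub, thresholds and use of the `sounddevice` library here are illustrative, not the project’s actual implementation):

```python
# Minimal sketch of the Alias pattern -- NOT the project's actual code.
# Assumes: a trained local wake-word model behind wake_word_score(), and the
# `sounddevice` library for simultaneous noise playback and mic capture.
import time
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16_000
FRAME_SECONDS = 1.0
WAKE_THRESHOLD = 0.8        # classifier confidence needed to "wake" (illustrative)
PASSTHROUGH_SECONDS = 8.0   # how long to stop jamming so the assistant can listen

def wake_word_score(frame: np.ndarray) -> float:
    """Stand-in for the locally trained keyword classifier (an assumption here;
    Alias trains a small neural network with TensorFlow). Always 0.0 in this
    sketch -- replace with real model inference."""
    return 0.0

def white_noise(seconds: float) -> np.ndarray:
    return 0.05 * np.random.randn(int(seconds * SAMPLE_RATE)).astype(np.float32)

while True:
    # Play low-level noise into the assistant's microphones while
    # simultaneously recording the room, listening for the custom wake word.
    frame = sd.playrec(white_noise(FRAME_SECONDS), SAMPLE_RATE, channels=1)
    sd.wait()
    if wake_word_score(frame[:, 0]) > WAKE_THRESHOLD:
        # Wake word heard: stop jamming so the underlying assistant can
        # hear and respond to commands as normal, then resume masking.
        time.sleep(PASSTHROUGH_SECONDS)
```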

“We looked at how cordyceps fungus and viruses can appropriate and control insects to fulfill their own agendas and were inspired to create our own parasite for smart home systems,” explain Karmann and Knudsen in a write up of the project here. “Therefore we started Project Alias to demonstrate how maker-culture can be used to redefine our relationship with smart home technologies, by delegating more power from the designers to the end users of the products.”

Alias offers a glimpse of a richly creative, custom future for IoT, as the means of producing bespoke but still powerful connected technology products become more affordable and accessible.

And so also perhaps a partial answer to IoT’s privacy problem, for those who don’t want to abstain entirely. (Albeit, on the security front, more custom and controllable IoT does increase the hackable surface area — so that’s another element to bear in mind; more custom controls for greater privacy do not necessarily mesh with robust device security.)

If you’re hankering after your own Alexa-disrupting blob-topper, the pair have uploaded a build guide to Instructables and put the source code on GitHub. So fill yer boots.

Project Alias is of course not a solution to the underlying tracking problem of smart assistants — which harvest insights gleaned from voice commands to further flesh out interest profiles of users, including for ad targeting purposes.

That would require either proper privacy regulation or, er, a new kind of software virus that infiltrates the host system and prevents it from accessing user data. And — unlike this creative physical IoT add-on — that kind of tech would not be at all legal.

Spotify will now let brands sponsor its Discover Weekly playlist

Spotify has begun testing a new type of ad in Discover Weekly, its personalized playlist of music that’s the streaming service’s flagship feature. The company says that, for the first time, it will allow a brand to “sponsor” this playlist as opposed to just running ads. It believes many advertisers will be interested in this opportunity due to the playlist’s ability to reach heavily engaged Spotify users, and because it allows advertisers to “own the personalized listening experience” on Spotify.

According to Spotify, Discover Weekly listeners stream more than twice as much as users who don’t listen to the playlist, because of the personalized experience it offers. That will make the ad product more compelling, compared with brands’ existing ability to sponsor other editorial playlists on the service.

With Spotify’s Sponsored Playlist ad product, brands can surround Spotify’s free listeners with audio or video messages in ad breaks, and gain Spotify’s help in building a collaborative marketing plan.

Microsoft will kick off the launch of branded ads by running an A.I. ad campaign called “Empowering Us All,” which will explore A.I. across sectors like education, healthcare and philanthropy. Spotify says the campaign was a good fit for the launch, as Discover Weekly is customized for each user with the help of A.I. technology.

“At Microsoft we are focused on empowering every individual and organization to do more. Our work in AI is a central part of that mission to unlock human ingenuity,” said Erin Bevington, General Manager of Global Media at Microsoft, in a statement. “Our partnership with a technology innovator like Spotify offered a way for us to effectively share that message within a personalized entertainment experience powered by AI.”

Spotify recently passed 200 million monthly active users, but is now looking for new ways to generate revenue from its user base beyond simply converting free users to premium subscribers.

The company has been growing its subscriber base at a steady pace, but Wall St. hasn’t been happy with its financials. One issue is that its newer promotions, like its low-cost student and family plans, have dragged down its average revenue per user — as of Q3 2018, that figure had fallen 6 percent year-over-year to $5.50. A more valuable ad product could help bring these numbers back up.

“Personalization has quickly gone from a nice-to-have to an expected consumer experience that delights audiences, and marketers are craving opportunities to be part of it,” said Danielle Lee, Global Head of Partner Solutions at Spotify, in a statement. “Our new Discover Weekly ad experience positions advertisers for success and ensures that our fans are hearing messages that embody the ethos of discovery.”

Brand sponsorships for Discover Weekly are currently in beta testing, says Spotify.

Apple’s increasingly tricky international trade-offs

Far from Apple’s troubles in emerging markets and China, the company is attracting the ire of what should really be a core supporter demographic naturally aligned with the pro-privacy stance CEO Tim Cook has made into his public soapbox in recent years — but which is instead crying foul over perceived hypocrisy.

The problem for this subset of otherwise loyal European iPhone users is that Apple isn’t offering enough privacy.

These users want more choice over key elements such as the search engine that can be set as the default in Safari on iOS (Apple currently offers four choices: Google, Yahoo, Bing and DuckDuckGo, all U.S. search engines; and with ad tech giant Google set as the default).

It is also being called out over other default settings that undermine its claims to follow a privacy-by-design philosophy — such as the iOS location services setting which, once enabled, non-transparently flips on an associated sub-menu of settings, including location-based Apple ads. Yet bundled consent is never the same as informed consent…

As the saying goes, you can’t please all of the people all of the time. But the new normal of a saturated smartphone market is imposing new pressures that will require a reconfiguration of approach.

Certainly the challenges of revenue growth and user retention are only going to step up from here on in. So keeping an otherwise loyal base of users happy and — crucially — feeling listened to and well served is going to be more and more important for the tech giant as the back and forth business of services becomes, well, essential to its fortunes going forward.

(At least barring some miracle new piece of Apple hardware — yet to be unboxed but which somehow rekindles smartphone-level demand afresh. That’s highly unlikely in any medium term timeframe given how versatile and capable the smartphone remains; ergo Apple’s greatest success is now Apple’s biggest challenge.)

With smartphone hardware replacement cycles slowing, the pressure on Cook to accelerate services revenue naturally steps up — which could in turn increase pressure on the core principles Cupertino likes to flash around.

Yet without principles there can be no brand premium for Apple to command. So that way ruin absolutely lies.

Control shift

It’s true that controlling the iOS experience by applying certain limits to deliver mainstream consumer friendly hardware served Apple well for years. But it’s also true iOS has grown in complexity over time having dropped some of its control freakery.

Elements that were previously locked down have been opened up — like the keyboard, for instance, allowing for third party keyboard apps to be installed by users that wish to rethink how they type.

This shift means the imposed limit on which search engines users can choose to set as an iOS default looks increasingly hard for Apple to justify from a user experience point of view.

Though of course from a business PoV Apple benefits by being able to charge Google a large sum of money to remain in the plum search default spot. (Reportedly a very large sum, though claims that the 2018 figure was $9BN have not been confirmed. Unsurprisingly neither party wants to talk about the terms of the transaction.)

The problem for Apple is that indirectly benefiting from Google eroding the user privacy it claims to champion — by letting the ad tech giant pay it to suck up iOS users’ search queries by default — is hardly consistent messaging.

Not when privacy is increasingly central to the premium the Apple brand commands.

Cook has also made a point of strongly and publicly attacking the ‘data industrial complex‘. Yet without mentioning the inconvenient side-note that Apple also engages in trading user data for profit in some instances, albeit indirectly.

In 2017 Apple switched from using Bing to Google for Siri web search results. So even as it has stepped up its rhetoric around user privacy it has deepened its business relationship with one of the Western Internet’s primary data suckers.

All of which makes for a very easy charge of hypocrisy.

Of course Apple offers iOS users one non-tracking search engine, DuckDuckGo, as an alternative — and has done so since 2014’s iOS 8.

Its support for a growing but still very niche product in what are mainstream consumer devices is an example of Apple being true to its word and actively championing privacy.

The presence of the DDG startup alongside three data-mining tech giants has allowed those ‘in the know’ iOS users to flip the bird at Google for years, meaning Apple has kept privacy conscious consumers buying its products (if not fully on side with all its business choices).

But that sort of compromise position looks increasingly difficult for Apple to defend.

Not if it wants privacy to be the clear blue water that differentiates its brand in an era of increasingly cut-throat and cut-price Android-powered smartphone competition that’s serving up much the same features at a lower up-front price thanks to all the embedded data-suckers.

There is also the not-so-small matter of the inflating $1,000+ price-tags on Apple’s top-of-the-range iPhones. $1,000+ for a smartphone that isn’t selling your data by default might still sound very pricy but at least you’d be getting something more than just shiny glass for all those extra dollars. But the iPhone isn’t actually that phone. Not by default.

Apple may be taking a view that the most privacy sensitive iPhone users are effectively a captive market with little option but to buy iOS hardware, given the Google-flavored Android competition. Which is true but also wouldn’t bode well for the chances of Apple upselling more services to these people to drive replacement revenue in a saturated smartphone market.

Offending those consumers who otherwise could be your very best, most committed and bought in users seems short-sighted and short-termist to say the least.

Although removing Google as the default search provider in markets where it dominates would obviously go massively against the mainstream grain that Apple’s business exists to serve.

This logic says Google is in the default position because, for most Internet users, Google search remains their default.

Indeed, Cook rolled out this exact line late last year when asked to defend the arrangement in an interview with Axios on HBO — saying: “I think their search engine is the best.”

He also flagged various pro-privacy features Apple has baked into its software in recent years, such as private browsing mode and intelligent tracking prevention, which he said work against the data suckers.

Albeit, that’s a bit like saying you’ve scattered a few garlic cloves around the house after inviting the thirsty vampire inside. And Cook readily admitted the arrangement isn’t “perfect”.

Clearly it’s a trade off. But Apple benefitting financially is what makes this particular trade-off whiff.

It implies Apple does indeed have an eye on quarterly balance sheets, and the increasingly important services line item specifically, in continuing this imperfect but lucrative arrangement — rather than taking a longer term view as the company purports to, per Cook’s letter to shareholders this week; in which he wrote: “We manage Apple for the long term, and Apple has always used periods of adversity to re-examine our approach, to take advantage of our culture of flexibility, adaptability and creativity, and to emerge better as a result.”

If Google’s search product is the best, and Apple wants to take the moral high ground over privacy by decrying the surveillance industrial complex, it could maintain the default arrangement in service to its mainstream base — but donate Google’s billions to consumer and digital rights groups that fight to uphold and strengthen the privacy laws that people-profiling ad tech giants are butting hard against.

Apple’s shareholders might not like that medicine, though.

More palatable for investors would be for Apple to offer a broader choice of alternative search engines, thereby widening the playing field and opening up to more pro-privacy Google alternatives.

It could also design this choice in a way that flags up the trade-off to its millions of users — such as, during device set-up, proactively asking users whether they want to keep their Internet searches private by default, or use Google.

When put like that rather more people than you imagine might choose not to opt for Google to be their search default.

Non-tracking search engine DDG has been growing steadily for years, for example, hitting 30M daily searches last fall — with year-on-year growth of ~50%.

Given the terms of the Apple-Google arrangement sit under an NDA (as indeed all these arrangements do; DDG told us it couldn’t share any details about its own arrangement with Apple, for example), it’s not clear whether one of Google’s conditions requires there be a limit on how many other search engines iOS users can pick from.

But it’s at least a possibility that Google is paying Apple to limit how many rivals sit in the list of alternatives iOS users can pick from as their default. (It has, after all, recently been spanked in Europe for anti-competitive contractual limits imposed on Android OEMs to restrict their ability to use alternatives to Google products, including search. So you could say Google has history where search is concerned.)

Equally, should Google actually relaunch a search product in China — as it’s controversially been toying with doing — it’s likely the company would push Apple to give it the default slot there too.

Though Apple would have more reason to push back, given Google would likely remain a minnow in that market. (Apple currently defaults to local search giant Baidu for iOS users in China.)

So even the current picture around search on iOS is a little more fuzzy than Cook likes to make out.

Local flavor

China is an interesting case, because if you look at Apple’s growth challenges in that market you could come to a very different conclusion vis-a-vis the power of privacy as a brand premium.

In China it’s convenience, via the do-it-all ‘Swiss army knife’ WeChat platform, that’s apparently the driving consumer force — and now also a headwind for Apple’s business there.

At the same time, the idea of users in the market having any kind of privacy online — when Internet surveillance has been imposed and ‘normalized’ by the state — is essentially impossible to imagine.

Yet Apple continues doing business in China, netting it further charges of hypocrisy.

Its revised guidance this week merely spotlights how important China and emerging markets are to its business fortunes. A principled pull-out hardly looks to be on the cards.

All of which underscores growing emerging market pressures on Apple that might push harder against its stated principles. What price privacy indeed?

It’s clear that carving out growth in a saturated smartphone market is going to be an increasingly tricky business for all players, with the risk of fresh trade-offs and pitfalls looming especially for Apple.

Negotiating this terrain certainly demands a fresh approach, as Cook implies is on his mind, per the shareholder letter.

Arguably the new normal may also call for an increasingly localized approach as a way to differentiate in a saturated and samey smartphone market.

The old Apple ‘one-size-fits-all’ philosophy is already very outdated for some users, and risks leaving the company flat-footed on a growing number of fronts — be that if your measure is software ‘innovation’ or a principled position on privacy.

An arbitrary limit on the choice of search engine your users can pick seems a telling example. Why not offer iOS users a free choice?

Or are Google’s billions really standing in the way of that?

It’s certainly an odd situation that iPhone owners in France, say, can pick from a wide range of keyboard apps — from mainstream names to superficial bling-focused glitter and/or neon LED keyboard skins or indeed emoji and GIF-obsessed keyboards — but if they want to use locally developed pro-privacy search engine Qwant on their phone’s native browser they have to tediously surf to the company’s webpage every time they want to look something up.

Google search might be the best for the median ‘global’ (excluding China) iOS user, but in an age of increasingly self-centred technology, with ever more demanding consumers, there’s really no argument against letting people who want to choose for themselves.

In Europe there’s also the updated data protection framework, GDPR, to consider. Which may yet rework some mainstream ad tech business models.

On this front Qwant questions how even non-tracking rival DDG can protect users’ searches from government surveillance given its use of AWS cloud hosting and the U.S. Cloud Act. (Though, responding to a discussion thread about the issue on Github two years ago, DDG’s founder noted it has servers around the world, writing: “If you are in Europe you will be connected to our European servers.” He also reiterated that DDG does not collect any personal data from users — thereby limiting what could be extracted from AWS via the Act.)

Asked what reception it’s had when asking about getting its search engine on the Safari iOS list, Qwant told us the line that’s been (indirectly) fed back to it is “we are too European according to Apple”. (Apple declined to comment on the search choices it offers iOS users.)

“I have to work a lot to be more American,” Qwant co-founder and CEO Eric Leandri told us, summing up the smoke signals coming out of Cupertino.

“I understand that Apple wants to give the same kind of experience to their customers… but I would say that if I was Apple now, based on the politics that I want to follow — about protecting the privacy of customers — I think it would be great to start thinking about Europe as a market where people have a different point of view on their data,” he continued.

“Apple has done a lot of work to, for example, not let applications give data to each other via a very strict [anti-tracking policy]; Apple has done a lot of work to guarantee that cookies and tracking is super difficult on iOS; and now the last problem of Apple is Google search.”

“So I hope that Apple will look at our proposal in a different way — not just one-fits-all. Because we don’t think that one-fits-all today,” he added.

Qwant too, then, is hoping for a better Apple to emerge as a result of a little market adversity.

Google & Facebook fed ad dollars to child porn discovery apps

Google has scrambled to remove third-party apps that led users to child porn sharing groups on WhatsApp in the wake of TechCrunch’s report about the problem last week. We contacted Google with the name of one of these apps and evidence that it and others offered links to WhatsApp groups for sharing child exploitation imagery. Following publication of our article, Google removed that app and at least five like it from the Google Play store. Several of these apps had over 100,000 downloads, and they’re still functional on devices that already downloaded them.

A screenshot from earlier this month of now-banned child exploitation groups on WhatsApp. Phone numbers and photos redacted.

WhatsApp failed to adequately police its platform, confirming to TechCrunch that it’s only moderated by its own 300 employees and not Facebook’s 20,000 dedicated security and moderation staffers. It’s clear that scalable and efficient artificial intelligence systems are not up to the task of protecting the 1.5 billion user WhatsApp community, and companies like Facebook must invest more in unscalable human investigators.

But now, new research provided exclusively to TechCrunch by anti-harassment algorithm startup AntiToxin shows that these removed apps that hosted links to child porn sharing rings on WhatsApp were supported with ads run by Google and Facebook’s ad networks. AntiToxin found 6 of these apps ran Google AdMob, 1 ran Google Firebase, 2 ran Facebook Audience Network, and 1 ran StartApp. These ad networks earned a cut of brands’ marketing spend while allowing the apps to monetize and sustain their operations by hosting ads for Amazon, Microsoft, Motorola, Sprint, Sprite, Western Union, Dyson, DJI, Gett, Yandex Music, Q Link Wireless, Tik Tok, and more.

The situation reveals that tech giants aren’t just failing to spot offensive content in their own apps, but also in third-party apps that host their ads and that earn them money. While these apps like “Group Links For Whats” by Lisa Studio let people discover benign links to WhatsApp groups for sharing legal content and discussing topics like business or sports, TechCrunch found they also hosted links with titles such as “child porn only no adv” and “child porn xvideos” that led to WhatsApp groups with names like “Children 💋👙👙” or “videos cp” — a known abbreviation for ‘child pornography’.

In a video provided by AntiToxin seen below, the app “Group Links For Whats by Lisa Studio” that ran Google AdMob is shown displaying an interstitial ad for Q Link Wireless before providing WhatsApp group search results for “child”. A group described as “Child nude FBI POLICE” is surfaced, and when the invite link is clicked, it opens within WhatsApp to a group used for sharing child exploitation imagery.  (No illegal imagery is shown in this video or article. TechCrunch has omitted the end of the video that showed a URL for an illegal group and the phone numbers of its members.)

Another video shows the app “Group Link For whatsapp by Video Status Zone” that ran Google AdMob and Facebook Audience Network displaying a link to a WhatsApp group described as “only cp video”. When tapped, the app first surfaces an interstitial ad for Amazon Photos before revealing a button for opening the group within WhatsApp. These videos show how alarmingly easy it was for people to find illegal content sharing groups on WhatsApp, even without WhatsApp’s help.

Zero Tolerance Doesn’t Mean Zero Illegal Content

In response, a Google spokesperson tells me that these group discovery apps violated its content policies and it’s continuing to look for more like them to ban. When they’re identified and removed from Google Play, it also suspends their access to its ad networks. However, it refused to disclose how much money these apps earned and whether it would refund the advertisers. The company provided this statement:

“Google has a zero tolerance approach to child sexual abuse material and we’ve invested in technology, teams and partnerships with groups like the National Center for Missing and Exploited Children, to tackle this issue for more than two decades. If we identify an app promoting this kind of material that our systems haven’t already blocked, we report it to the relevant authorities and remove it from our platform. These policies apply to apps listed in the Play store as well as apps that use Google’s advertising services.”

| App | Developer | Ad Network | Estimated Installs | Last Day Ranked |
| --- | --- | --- | --- | --- |
| Unlimited Whats Groups Without Limit Group links | Jack Rehan | Google AdMob | 200,000 | 12/18/2018 |
| Unlimited Group Links for Whatsapp | NirmalaAppzTech | Google AdMob | 127,000 | 12/18/2018 |
| Group Invite For Whatsapp | Villainsbrain | Google Firebase | 126,000 | 12/18/2018 |
| Public Group for WhatsApp | Bit-Build | Google AdMob, Facebook Audience Network | 86,000 | 12/18/2018 |
| Group links for Whats – Find Friends for Whats | Lisa Studio | Google AdMob | 54,000 | 12/19/2018 |
| Unlimited Group Links for Whatsapp 2019 | Natalie Pack | Google AdMob | 3,000 | 12/20/2018 |
| Group Link For whatsapp | Video Status Zone | Google AdMob, Facebook Audience Network | 97,000 | 11/13/2018 |
| Group Links For Whatsapp – Free Joining | Developers.pk | StartAppSDK | 29,000 | 12/5/2018 |

Facebook meanwhile blamed Google Play, saying the apps’ eligibility for its Facebook Audience Network ads was tied to their availability on Google Play and that the apps were removed from FAN when booted from the Android app store. The company was more forthcoming, telling TechCrunch it will refund advertisers whose promotions appeared on these abhorrent apps. It’s also pulling Audience Network from all apps that let users discover WhatsApp Groups.

A Facebook spokesperson tells TechCrunch that “Audience Network monetization eligibility is closely tied to app store (in this case Google) review. We removed [Public Group for WhatsApp by Bit-Build] when Google did – it is not currently monetizing on Audience Network. Our policies are on our website and out of abundance of caution we’re ensuring Audience Network does not support any group invite link apps. This app earned very little revenue (less than $500), which we are refunding to all impacted advertisers.” WhatsApp has already banned all the illegal groups TechCrunch reported on last week.

Facebook also provided this statement about WhatsApp’s stance on illegal imagery sharing groups and third-party apps for finding them:

“WhatsApp does not provide a search function for people or groups – nor does WhatsApp encourage publication of invite links to private groups. WhatsApp regularly engages with Google and Apple to enforce their terms of service on apps that attempt to encourage abuse on WhatsApp. Following the reports earlier this week, WhatsApp asked Google to remove all known group link sharing apps. When apps are removed from Google Play store, they are also removed from Audience Network.”

An app with links for discovering illegal WhatsApp Groups runs an ad for Amazon Photos

Israeli NGOs Netivei Reshet and Screen Savers worked with AntiToxin to provide a report published by TechCrunch about the wide extent of child exploitation imagery they found on WhatsApp. Facebook and WhatsApp are still waiting on the groups to work with Israeli police to provide their full research so WhatsApp can delete illegal groups they discovered and terminate user accounts that joined them.

AntiToxin develops technologies for protecting online networks from harassment, bullying, shaming, predatory behavior and sexually explicit activity. It was co-founded by Zohar Levkovitz, who sold Amobee to SingTel for $400M, and Ron Porat, who was the CEO of ad-blocker Shine. [Disclosure: The company also employs Roi Carthy, who contributed to TechCrunch from 2007 to 2012.] “Online toxicity is at unprecedented levels, at unprecedented scale, with unprecedented risks for children, which is why completely new thinking has to be applied to technology solutions that help parents keep their children safe,” Levkovitz tells me. The company is pushing Apple to remove WhatsApp from the App Store until the problems are fixed, citing how Apple temporarily suspended Tumblr due to child pornography.

Ad Networks Must Be Monitored

Encryption has proven an impediment to WhatsApp’s efforts to prevent the spread of child exploitation imagery. WhatsApp can’t see what is shared inside of group chats. Instead it has to rely on the few pieces of public and unencrypted data — such as group names, group profile photos and members’ profile photos — looking for suspicious names or illegal images. The company matches those images against a PhotoDNA database of known child exploitation photos to administer bans, and has human moderators investigate if seemingly illegal images aren’t already on file. It then reports its findings to law enforcement and the National Center For Missing And Exploited Children. Strong encryption is important for protecting privacy and political dissent, but it also thwarts some detection of illegal content and thereby necessitates more manual moderation.
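
As a rough illustration of that matching flow — emphatically not PhotoDNA itself, which is proprietary and far more robust — a simple perceptual hash compared against a vetted database looks something like this (the average-hash function, threshold and set name are stand-ins):

```python
# Illustrative sketch of hash-matching moderation on the few unencrypted
# signals available (e.g. group profile photos). A basic average-hash stands
# in for PhotoDNA here purely to show the match-against-known-database flow.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to grayscale, then encode each pixel as above/below the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

KNOWN_ILLEGAL_HASHES: set[int] = set()   # populated from a vetted database

def flag_profile_photo(path: str, max_distance: int = 5) -> bool:
    """True if the photo is a near-match to any known illegal image hash."""
    h = average_hash(path)
    return any(hamming(h, k) <= max_distance for k in KNOWN_ILLEGAL_HASHES)
```

Near-matching (rather than exact hash equality) is the point of this design: it tolerates resizing, recompression and small edits, which is why seemingly illegal images that aren’t already on file still need human review.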

With just 300 total employees and only a subset working on security or content moderation, WhatsApp seems understaffed to manage such a large user base. It’s tried to depend on AI to safeguard its community. However, that technology can’t yet perform the nuanced investigations necessary to combat exploitation. WhatsApp runs semi-independently of Facebook, but could hire more moderators to investigate group discovery apps that lead to child pornography if Facebook allocated more resources to its acquisition.

WhatsApp group discovery apps featured Adult sections that contained links to child exploitation imagery groups

Google and Facebook, with their vast headcounts and profit margins, are neglecting to properly police who hosts their ad networks. The companies have sought to earn extra revenue by powering ads on other apps, yet failed to assume the necessary responsibility to ensure those apps aren’t facilitating crimes. Stricter examinations of in-app content should be administered before an app is accepted to app stores or ad networks, and periodically once they’re running. And when automated systems can’t be deployed, as can be the case with policing third-party apps, human staffers should be assigned despite the cost.

It’s becoming increasingly clear that social networks and ad networks that profit off of other people’s content can’t be low-maintenance cash cows. Companies should invest ample money and labor into safeguarding any property they run or monetize even if it makes the opportunities less lucrative. The strip-mining of the internet without regard for consequences must end.

Captiv8 report highlights data for spotting fake followers

Captiv8, a company offering tools for brands to manage influencer marketing campaigns, has released its 2018 Fraud Influencer Marketing Benchmark Report. The goal is to give marketers the data they need to spot fake followers — and thus, to separate the influencers with a real following from those who only offer the illusion of engagement.

The report argues that this is a problem with a real financial impact (it’s something Instagram is working to crack down on), with $2.1 billion spent on influencer marketing on Instagram in 2017 and 11 percent of the engagement coming from fraudulent accounts.

“For influencer marketing to truly deliver on its transformative potential, marketers need a more concrete and reliable way to identify fake followers and engagement, compare their performance to industry benchmarks, and determine the real reach and impact of social media spend,” Captiv8 says.

So the company looked at a range of marketing categories (pets, parenting, beauty, fashion, entertainment, travel, gaming, fitness, food and traditional celebrity) and randomly selected 5,000 Instagram influencer accounts in each one, pulling engagement from August to November of this year.

The idea is to establish a baseline for standard activity, so that marketers can spot potential red flags. Of course, everyone with a significant social media audience is going to have some fake followers, but Captiv8 suggests that some categories have a higher rate of fraud than others — fashion was the worst, with an average of 14 percent fake activity per account, compared to traditional celebrity, where the average was just 4 percent.


So what should you look out for? For starters, the report says the average daily change in follower counts for an influencer is 1.2 percent, so be on the lookout for shifts that are significantly larger.

The report also breaks down the average engagement rate for organic and sponsored content by category (ranging from 1.19 percent for sponsored content in food to 3.51 percent in entertainment), and suggests that a lower engagement rate “shows a high probability that their follower count is inflated through bots or fake followers.”

Conversely, it says it could also be a warning sign if a creator’s audience reach or impressions per user is higher than the industry benchmarks (for example, image posts in fashion have an average audience reach of 23.69 percent, with 1.32 impressions per unique user).
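
Pulled together, the report’s benchmarks suggest a simple screening pass. This sketch encodes them as red-flag checks; the dataclass, function names and the 2× spike multiplier are illustrative choices, not Captiv8’s actual methodology:

```python
# Hypothetical red-flag checker built on the benchmarks quoted in the report.
# The threshold values come from the article; everything else is illustrative.
from dataclasses import dataclass

@dataclass
class AccountStats:
    daily_follower_change_pct: float   # e.g. 3.0 means +3% followers in a day
    engagement_rate_pct: float
    audience_reach_pct: float
    impressions_per_user: float

# Example category benchmarks from the report (fashion image posts for reach;
# sponsored food content as the low end of engagement).
BENCHMARK = {
    "daily_follower_change_pct": 1.2,
    "engagement_rate_pct": 1.19,
    "audience_reach_pct": 23.69,
    "impressions_per_user": 1.32,
}

def red_flags(a: AccountStats, b=BENCHMARK) -> list[str]:
    flags = []
    # "Significantly larger" than the 1.2% average; doubling it is arbitrary.
    if abs(a.daily_follower_change_pct) > 2 * b["daily_follower_change_pct"]:
        flags.append("follower count swinging far beyond the 1.2% daily average")
    # Low engagement suggests an inflated follower count (bots/fake followers).
    if a.engagement_rate_pct < b["engagement_rate_pct"]:
        flags.append("engagement rate below the category benchmark")
    # Unusually high reach or impressions can signal bought engagement.
    if (a.audience_reach_pct > b["audience_reach_pct"]
            or a.impressions_per_user > b["impressions_per_user"]):
        flags.append("reach/impressions per user above the category benchmark")
    return flags
```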

You can download the full report on the Captiv8 website.

Devcon raises $4.5M to beef up adtech security

Adtech cybersecurity company Devcon announced today that it has raised $4.5 million in seed funding.

Over the past couple of years, ad fraud has become a bigger concern in the industry, but Devcon co-founder and CEO Maggie Louie said most existing solutions focus on things like verifying ad quality and confirming that impressions aren’t coming from bots. Devcon, in contrast, functions more like “a Norton AntiVirus of adtech,” preventing attempts by bad actors who are “using adtech as a catalyst to attack consumers and companies.”

In other words, Louie said Devcon works with ad networks and publishers to “eliminate 99 percent of the nefarious things that are making their way through the system.” It says it can block malicious ads on an individual basis, whether they include pop-ups and redirects or unauthorized tag injectors. Customers can then view the individually blocked ads and see where they came from, and there’s also a dashboard that shows how much money is being lost to fraud.
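
Devcon hasn’t published how its blocking works, but a crude version of the general idea — scanning an ad creative’s markup for known pop-up, forced-redirect and tag-injection patterns before it is served — might look like the following sketch (the pattern list is illustrative and far from exhaustive, not Devcon’s detection logic):

```python
# Crude sketch of pre-serve ad creative scanning -- an illustration of the
# category of product described, not Devcon's actual (non-public) approach.
import re

# Markup patterns commonly associated with forced redirects, pop-ups and
# unauthorized tag injection (illustrative examples only).
SUSPICIOUS_PATTERNS = [
    (re.compile(r"top\.location\s*="), "forced redirect out of the ad frame"),
    (re.compile(r"window\.open\s*\("), "pop-up window"),
    (re.compile(r"document\.write\s*\(\s*['\"]<script", re.I),
     "injected third-party tag"),
    (re.compile(r"setInterval\s*\(.*location", re.I | re.S),
     "repeating redirect attempt"),
]

def scan_creative(html: str) -> list[str]:
    """Return human-readable reasons to block this creative, if any."""
    return [reason for pattern, reason in SUSPICIOUS_PATTERNS
            if pattern.search(html)]

def should_block(html: str) -> bool:
    return bool(scan_creative(html))

# Example: a creative that tries to hijack the page gets blocked individually,
# and the reasons can be surfaced in a publisher-facing dashboard.
assert should_block('<script>top.location = "https://scam.example";</script>')
```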

Louie pointed to the recent DOJ indictment of eight individuals allegedly involved in a digital ad fraud scheme as a sign that the issue is becoming more serious.

“Some of these attacks have some very concerning potential outcomes [for consumers], so being able to stop those before they get out is akin to stopping a water contamination at the source level,” she added.

At the same time, she argued that this is a particularly challenging area for security, because there’s been “a lack of crossover between cybersecurity and ad ops,” leading to a dearth of “security people or cybersecurity people who understand adtech.”


In contrast, the Devcon team combines media veterans like Louie (who was recently vice president of audience at the Athens Banner-Herald and also worked at the Los Angeles Times) with “white hat” hackers like co-founder and CTO Josh Summitt (who was previously on the ethical hacking team at Bank of America). It’s also hired former FBI Cyber Squad Supervisor Michael F. D. Anaya as its head of global cyber investigations and government relations.

In fact, Devcon says it assisted law enforcement in the first-ever conviction for online ad theft and money laundering, which resulted in a four-year prison sentence.

Devcon was founded in Memphis, Tenn., but has since expanded its headquarters to Atlanta, and it was part of this year’s Techstars Barclays accelerator in London. The seed funding was led by Las Olas VC — among other things, Louie said it will allow Devcon to further develop its machine learning technology to automatically identify emerging threats.

Online ads and games would benefit from more rewards, according to UCLA survey

A new study from Versus Systems and the MEMES (Management of Enterprise in Media, Entertainment & Sports) Center at UCLA’s Anderson School of Management examines how gaming and advertising are evolving, and how one influences the other.

As Versus Systems CEO Matthew Pierce put it, the goal was to study, “What is the impact on advertising as interactive media grows, and as more people consume interactive media?”

The individual findings — People like rewards! Not everyone who plays games calls themselves a gamer! — may not be that shocking to TechCrunch readers. And because Versus Systems has built a white-label platform for publishers to offer in-game rewards, the study might also seem a bit self-serving.

But again, this was conducted with UCLA’s Anderson School of Management, and both Pierce (who’s a lecturer at the school) and UCLA MEMES head Jay Tucker pointed to the size of the study, with 88,000 (U.S.-based) participants across a broad range of demographic groups.

Of those respondents, 50 percent said they’ve played a video game (on any platform) in the past week, while 41 percent said they’ve played a game in the past 24 hours. However, only 13 percent of respondents described themselves as gamers. That “identification gap” is even larger among women, where 56 percent played a game in the past week but only 11 percent identified themselves as gamers.

Why does that matter? Well, the MEMES Center and Versus Systems argue in the study press release that “advertisers that are recognizing the value in advertising in-game may be underestimating how large and how diverse the gaming audience really is today.”

The study also suggests that traditional advertising may be facing more resistance from consumers, with 46 percent of respondents saying they frequently or always avoid ads by “clicking the X” to close windows or changing channels or closing apps. Only 3.6 percent of respondents said they always watch ads all the way through.

When asked what would make them play games more, the most popular answer was “winning real things that I want when I achieve things in-game” — it was the number one result for 30 percent of respondents, and among millennials, it did even better. (In comparison, 18 percent put “if the games were less expensive” as their top answer and 11 percent said “my friends playing the same game(s).”) This attitude even extended to TV, where 77 percent of respondents listed rewards as one of the things (not necessarily the top reason) that would make them watch more television.

Meanwhile, 24 percent of respondents listed “if more games/more shows were made for people like me” as the number one thing that would convince them to play or watch more.

Tucker suggested that these seemingly scattershot answers are actually connected. On the advertising side, “We’ve got folks who are used to being part of a community all day, every day, whether that’s social media or massively multiplayer games. We see users are increasingly connected and are not really interested in getting pulled out of an experience. Rewards, if done properly, can reinforce being part of a community … you can amplify that sense of connection.”

“The introduction of choice seems to make a big difference,” Pierce added. “We need new models where we can foster choice, foster community, foster more aspirational relationships between viewers and brands that ultimately allows content developers to have a relationship with the brands that isn’t so adversarial.”

Meanwhile, when it comes to content and storytelling, Tucker said we’re entering an “age of personalization.” Among other things, that means more diversity, in what he described as “a generational shift away from stories that assume everybody’s looking at life from the same perspective.”

Pierce and Tucker suggested that they’ll be taking an even closer look at the data in the coming months (“needs further study” was repeated several times during the interview), particularly by examining responses within smaller demographic groups.