Facebook may proactively close Pages and Groups before they’re in violation of policy

Facebook today announced changes to the way it handles the removal of content from Facebook Pages that violates the social network’s Community Standards, as well as when a Page has posted items rated false by a third-party fact-checking service. It says it will also make it harder for those whose Pages have been shut down for violations to return with new Pages featuring the same, duplicated content, in some cases by proactively banning other Pages and Groups.

To address the first two issues, Facebook says it’s introducing a new tab on Facebook Pages – the “Page Quality” tab – which will inform those who manage the Page which content has been removed for violating standards and what was rated “fake news.”

The section will explain if content was removed for being “hate speech, graphic violence, harassment and bullying, regulated goods, nudity or sexual activity,” or for expressing “support or praise” of people and events that are not allowed on Facebook, the company explained today in a blog post detailing the upcoming changes.

The “people or events” not allowed on Facebook are those associated with real-world harm. This could include people associated with hate groups, terrorist activity, mass or serial murder, human trafficking, or organized crime or violence. Facebook also removes any content that expresses praise or support for those involved in such activities.

The tab will also inform Page managers which content may have been demoted by Facebook algorithms, if not removed entirely. This includes content that has been found to be false news by independent fact-checking organizations. Facebook began taking action against clickbait several years ago, then later began to flag and down-rank fake news, as that essentially became the new clickbait.

But those who distributed fake news headlines weren’t necessarily aware that their content’s distribution was being reduced as a result. This tab will now inform them.

Facebook says it will identify several types of down-ranked news items, including content recently rated “False,” “Mixture” or “False Headline” by third-party fact-checkers.

However, it won’t actually show those items it deemed “clickbait,” or those that it removed for being spam or due to an IP violation.

In other words, the new Page Quality tab isn’t a full window into everything being removed or down-ranked, only those areas that are today of utmost importance to Facebook to get under control.

“We hope this will give people the information they need to police bad behavior from fellow Page managers, better understand our Community Standards, and, let us know if we’ve made an incorrect decision on content they posted,” the company explained in its announcement.

Proactive Bans

Related to this, Facebook says it’s seen an increase in people using their existing Pages to duplicate the content that had been pulled down from Pages that were banned for violating Facebook’s Community Standards.

While it’s had policies that prohibited people from creating new Pages (or groups, events, accounts, etc.) for this purpose, it hadn’t yet been policing the use of existing Pages – and that, effectively, became a loophole for the violators to abuse.

Now, Facebook says when it removes a Page or Group for policy violations, it may also remove other Pages and Groups – even if a given Page or Group hasn’t “met the threshold to be unpublished on its own.”

In other words, if Facebook believes the other Pages and Groups will be used as the new home for the content found to be in violation, it will proactively remove them…before they actually do so. (That’s likely to cause some debate.)

Facebook says it will make this determination based on a broad range of factors – like whether the other Pages or Groups have the same admins or use a similar name, for example.
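Facebook hasn’t detailed how it weighs those factors, but as a purely illustrative sketch, a duplicate-detection heuristic along the lines it describes might combine shared admins with name similarity (all names and thresholds below are invented, not Facebook’s):

```python
from difflib import SequenceMatcher

def likely_rehome(banned_page, candidate_page, name_threshold=0.8):
    """Toy heuristic for flagging a Page as a likely new home for banned
    content. Signals loosely follow Facebook's stated examples (same
    admins, similar name); the threshold is invented for illustration."""
    shared_admins = bool(set(banned_page["admins"]) & set(candidate_page["admins"]))
    name_similarity = SequenceMatcher(
        None, banned_page["name"].lower(), candidate_page["name"].lower()
    ).ratio()
    return shared_admins or name_similarity >= name_threshold

# A shared admin or a near-identical name would each trip the check.
banned = {"name": "Daily Truth News", "admins": ["admin_1", "admin_2"]}
candidate = {"name": "Daily Truth News 2.0", "admins": ["admin_1"]}
print(likely_rehome(banned, candidate))  # True
```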

The new “Page Quality” tab will launch tomorrow, while the proactive removals will begin in the weeks ahead.

The case against behavioral advertising is stacking up

No one likes being stalked around the Internet by adverts. It’s the uneasy joke you can’t enjoy laughing at. Yet vast people-profiling ad businesses have made pots of money off of an unregulated Internet by putting surveillance at their core.

But what if creepy ads don’t work as claimed? What if all the filthy lucre that’s currently being sunk into the coffers of ad tech giants — and far less visible but no less privacy-trampling data brokers — is literally being sunk, and could both be more honestly and far better spent?

Case in point: This week Digiday reported that the New York Times managed to grow its ad revenue after it cut off ad exchanges in Europe. The newspaper did this in order to comply with the region’s updated privacy framework, GDPR, which includes a regime of supersized maximum fines.

The newspaper business decided it simply didn’t want to take the risk, so first blocked all open-exchange ad buying on its European pages and then nixed behavioral targeting. The result? A significant uptick in ad revenue, according to Digiday’s report.

“NYT International focused on contextual and geographical targeting for programmatic guaranteed and private marketplace deals and has not seen ad revenues drop as a result, according to Jean-Christophe Demarta, SVP for global advertising at New York Times International,” it writes.

“Currently, all the ads running on European pages are direct-sold. Although the publisher doesn’t break out exact revenues for Europe, Demarta said that digital advertising revenue has increased significantly since last May and that has continued into early 2019.”

It also quotes Demarta summing up the learnings: “The desirability of a brand may be stronger than the targeting capabilities. We have not been impacted from a revenue standpoint, and, on the contrary, our digital advertising business continues to grow nicely.”

So while (of course) not every publisher is the NYT, publishers that have or can build brand cachet, and pull in a community of engaged readers, must and should pause for thought — and ask who is the real winner from the notion that digitally served ads must creep on consumers to work?

The NYT’s experience puts fresh taint on long-running efforts by tech giants like Facebook to press publishers to give up more control and ownership of their audiences by serving and even producing content directly for the third party platforms. (Pivot to video anyone?)

Such efforts benefit platforms because they get to make media businesses dance to their tune. But the self-serving nature of pulling publishers away from their own distribution channels (and content convictions) looks to have an even more bass string to its bow — as a cynical means of weakening the link between publishers and their audiences, thereby risking making them falsely reliant on adtech intermediaries squatting in the middle of the value chain.

There are other signs behavioral advertising might be a gigantically self-serving con too.

Look at non-tracking search engine DuckDuckGo, for instance, which has been making a profit by serving keyword-based ads and not profiling users since 2014, all the while continuing to grow usage — and doing so in a market that’s dominated by search giant Google.

DDG recently took in $10M in VC funding from a pension fund that believes there’s an inflection point in the online privacy story. These investors are also displaying strong conviction in the soundness of the underlying (non-creepy) ad business, again despite the overbearing presence of Google.

Meanwhile, Internet users continue to express widespread fear and loathing of the ad tech industry’s bandwidth- and data-sucking practices by running into the arms of ad blockers. Figures for usage of ad blocking tools step up each year, with between a quarter and a third of U.S. connected device users estimated to be blocking ads as of 2018 (rates are higher among younger users).

Ad blocking firm Eyeo, maker of the popular AdBlock Plus product, has achieved such a position of leverage that it gets Google et al to pay it to have their ads whitelisted by default — under its self-styled ‘acceptable ads’ program. (Though no one will say how much they’re paying to circumvent default ad blocks.)

So the creepy ad tech industry is not above paying other third parties for continued — and, at this point, doubly grubby (given the ad blocking context) — access to eyeballs. Does that sound even slightly like a functional market?

In recent years, expressions of disgust and displeasure have also been coming from the ad spending side — triggered by brand-denting scandals attached to the hateful stuff algorithms have been serving shiny marketing messages alongside. You don’t even have to be worried about what this stuff might be doing to democracy to be a concerned advertiser.

Fast moving consumer goods giants Unilever and Procter & Gamble are two big spenders which have expressed concerns. The former threatened to pull ad spend if social network giants didn’t clean up their act and prevent their platforms algorithmically accelerating hateful and divisive content.

While the latter has been actively reevaluating its marketing spending — taking a closer look at what digital actually does for it. And last March Adweek reported it had slashed $200M from its digital ad budget yet had seen a boost in its reach of 10 per cent, reinvesting the money into areas with “‘media reach’ including television, audio and ecommerce”.

The company’s CMO, Marc Pritchard, declined to name which companies it had pulled ads from but in a speech at an industry conference he said it had reduced spending “with several big players” by 20 per cent to 50 per cent, and still its ad business grew.

So chalk up another tale of reduced reliance on targeted ads yielding unexpected business uplift.

At the same time, academics are digging into the opaquely shrouded question of who really benefits from behavioral advertising. And perhaps getting closer to an answer.

Last fall, at an FTC hearing on the economics of big data and personal information, Carnegie Mellon University professor of IT and public policy Alessandro Acquisti teased a piece of yet-to-be-published research — working with a large U.S. publisher that provided the researchers with millions of transactions to study.

Acquisti said the research showed that behaviorally targeted advertising had increased the publisher’s revenue, but only marginally. At the same time, they found that marketers were paying orders of magnitude more to buy these targeted ads, despite the minuscule additional revenue they generated for the publisher.

“What we found was that, yes, advertising with cookies — so targeted advertising — did increase revenues — but by a tiny amount. Four per cent. In absolute terms the increase in revenues was $0.000008 per advertisement,” Acquisti told the hearing. “Simultaneously we were running a study, as merchants, buying ads with a different degree of targeting. And we found that for the merchants sometimes buying targeted ads over untargeted ads can be 500% times as expensive.”

“How is it possible that for merchants the cost of targeting ads is so much higher whereas for publishers the return on increased revenues for targeted ads is just 4%?” he wondered, posing a question that publishers should really be asking themselves — given, in this example, they’re the ones doing the dirty work of snooping on (and selling out) their readers.
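The numbers are worth sanity-checking. A minimal back-of-envelope sketch in Python, using only the figures Acquisti gave (reading his “500%” as roughly a 5x cost multiple is our interpretation, and the $1 CPM baseline is assumed purely for illustration):

```python
# Publisher side: a 4% uplift worth $0.000008 per ad implies a baseline
# of roughly $0.0002 in revenue per untargeted ad served.
extra_per_ad = 0.000008
uplift = 0.04
implied_baseline = extra_per_ad / uplift
print(f"implied untargeted revenue per ad: ${implied_baseline:.6f}")  # $0.000200

# Merchant side: targeted ads costing "500%" of untargeted ones, read as
# a ~5x multiple on an assumed $1 CPM baseline.
untargeted_cpm = 1.00   # assumption for illustration only
targeted_cpm = 5 * untargeted_cpm
print(f"merchant cost multiple: {targeted_cpm / untargeted_cpm:.0f}x")  # 5x
```

A ~5x cost on the buy side set against a 4% gain on the sell side is the mismatch Acquisti is pointing at: the spread is going somewhere other than the publisher.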

Acquisti also made the point that a lack of data protection creates economic winners and losers, arguing this is unavoidable — and thus qualifying the oft-parroted tech industry lobby line that privacy regulation is a bad idea because it would benefit an already dominant group of players. The rebuttal is that a lack of privacy rules also does that. And that’s exactly where we are now.

“There is a sort of magical thinking happening when it comes to targeted advertising [that claims] everyone benefits from this,” Acquisti continued. “Now at first glance this seems plausible. The problem is that upon further inspection you find there is very little empirical validation of these claims… What I’m saying is that we actually don’t know very well to what extent these claims are true and false. And this is a pretty big problem because so many of these claims are accepted uncritically.”

There’s clearly far more research that needs to be done to robustly interrogate the effectiveness of targeted ads against platform claims, and versus more vanilla types of advertising (i.e. those which don’t demand reams of personal data to function). But the fact that robust research hasn’t been done is itself interesting.

Acquisti noted the difficulty of researching “opaque blackbox” ad exchanges that aren’t at all incentivized to be transparent about what’s going on. Also pointing out that Facebook has sometimes admitted to having made mistakes that significantly inflated its ad engagement metrics.

His wider point is that much current research into the effectiveness of digital ads is problematically narrow, and so misses the broader picture of how consumers might engage with alternative types of less privacy-hostile marketing.

In a nutshell, then, the problem is the lack of transparency from ad platforms; and that lack of transparency serves the selfsame opaque giants.

But there’s more. Critics of the current system point out it relies on mass scale exploitation of personal data to function, and many believe this simply won’t fly under Europe’s tough new GDPR framework.

They are applying legal pressure via a set of GDPR complaints, filed last fall, that challenge the legality of a fundamental piece of the (current) adtech industry’s architecture: Real-time bidding (RTB); arguing the system is fundamentally incompatible with Europe’s privacy rules.

We covered these complaints last November but the basic argument is that bid requests essentially constitute systematic data breaches because personal data is broadcast widely to solicit potential ad buys and thereby poses an unacceptable security risk — rather than, as GDPR demands, people’s data being handled in a way that “ensures appropriate security”.

To spell it out, the contention is the entire behavioral advertising business is illegal because it’s leaking personal data at such vast and systematic scale it cannot possibly comply with EU data protection law.
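For context on what “broadcast widely” means in practice: before an ad is served, an RTB exchange sends a bid request describing the user and page to many potential buyers at once. A heavily simplified, hypothetical sketch, loosely modeled on the OpenRTB format (all values invented):

```python
# Hypothetical, heavily simplified bid request, loosely modeled on
# OpenRTB. In real-time bidding a payload like this is sent to many
# exchange participants simultaneously -- which is the "systematic
# data breach" the GDPR complaints allege.
bid_request = {
    "id": "req-8273",
    "user": {"id": "9a1f42...",          # pseudonymous but persistent ID
             "ext": {"consent": "..."}}, # consent string, if any
    "device": {
        "ip": "203.0.113.7",
        "geo": {"lat": 51.5074, "lon": -0.1278},
        "ua": "Mozilla/5.0 ...",
    },
    "site": {"page": "https://example.com/health/anxiety-treatments"},
}

# Contextual targeting, by contrast, could bid on little more than the
# page URL and a coarse location -- no persistent user profile required.
```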

Regulators are considering the argument, and courts may follow. But it’s clear adtech systems that have operated in opaque darkness for years, with no worry of major compliance fines, no longer have the luxury of being able to take their architecture as a given.

Greater legal risk might be catalyst enough to encourage a market shift towards less intrusive targeting; ads that aren’t targeted based on profiles of people synthesized from heaps of personal data but, much like DuckDuckGo’s contextual ads, are only linked to a real-time interest and a generic location. No creepy personal dossiers necessary.

If Acquisti’s research is to be believed — and here’s the kicker for Facebook et al — there’s little reason to think such ads would be substantially less effective than the vampiric microtargeted variant that Facebook founder Mark Zuckerberg likes to describe as “relevant”.

The ‘relevant ads’ badge is of course a self-serving concept which Facebook uses to justify creeping on users while also pushing the notion that its people-tracking business inherently generates major extra value for advertisers. But does it really do that? Or are advertisers buying into another puffed up fake?

Facebook isn’t providing access to internal data that could be used to quantify whether its targeted ads are really worth all the extra conjoined cost and risk. While the company’s habit of buying masses of additional data on users, via brokers and other third party sources, makes for a rather strange qualification. Suggesting things aren’t quite what you might imagine behind Zuckerberg’s drawn curtain.

Behavioral ad giants are facing growing legal risk on another front. The adtech market has long been referred to as a duopoly, on account of the proportion of digital ad spending that gets sucked up by just two people-profiling giants: Google and Facebook (the pair accounted for 58% of the market in 2018, according to eMarketer data) — and in Europe a number of competition regulators have been probing the duopoly.

Earlier this month the German Federal Cartel Office was reported to be on the brink of partially banning Facebook from harvesting personal data from third party providers (including but not limited to some other social services it owns). Though an official decision has yet to be handed down.

While, in March 2018, the French Competition Authority published a meaty opinion raising multiple concerns about the online advertising sector — and calling for an overhaul and a rebalancing of transparency obligations to address publisher concerns that dominant platforms aren’t providing access to data about their own content.

The EC’s competition commissioner, Margrethe Vestager, is also taking a closer look at whether data hoarding constitutes a monopoly. And has expressed a view that, rather than breaking companies up in order to control platform monopolies, the better way to go about it in the modern ICT era might be by limiting access to data — suggesting another potentially looming legal headwind for personal data-sucking platforms.

At the same time, the political risks of social surveillance architectures have become all too clear.

Whether microtargeted political propaganda works as intended or not is still a question mark. But few would support letting attempts to fiddle elections just go ahead and happen anyway.

Yet Facebook has rushed to normalize what are abnormally hostile uses of its tools; aka the weaponizing of disinformation to further divisive political ends — presenting ‘election security’ as just another day-to-day cost of being in the people farming business. When the ‘cost’ for democracies and societies is anything but normal. 

Whether or not voters can be manipulated en masse via the medium of targeted ads, the act of targeting itself certainly has an impact — by fragmenting the shared public sphere which civilized societies rely on to drive consensus and compromise. Ergo, unregulated social media is inevitably an agent of antisocial change.

The solution to technology threatening democracy is far more transparency; so regulating platforms to understand how, why and where data is flowing, and thus get a proper handle on impacts in order to shape desired outcomes.

Greater transparency also offers a route to begin to address commercial concerns about how the modern adtech market functions.

And if and when ad giants are forced to come clean — about how they profile people; where data and value flows; and what their ads actually deliver — you have to wonder what if anything will be left unblemished.

People who know they’re being watched alter their behavior. Similarly, platforms may find behavioral change enforced upon them, from above and below, when it becomes impossible for everyone else to ignore what they’re doing.

Facebook launches petition feature, its next battlefield

Gather a mob and Facebook will now let you make political demands. Tomorrow Facebook will encounter a slew of fresh complexities with the launch of Community Actions, its News Feed petition feature. Community Actions could unite neighbors to request change from their local and national elected officials and government agencies. But it could also provide vocal interest groups a bully pulpit from which to pressure politicians and bureaucrats with their fringe agendas.

Community Actions embodies the central challenge facing Facebook. Every tool it designs for positive expression and connectivity can be subverted for polarization and misinformation. Facebook’s membership has swelled into such a ripe target for exploitation that it draws out the worst of humanity. You can imagine misuses like “Crack down on [minority group]” that are offensive or even dangerous but some see as legitimate. The question is whether Facebook puts in the forethought and aftercare to safeguard its new tools with proper policy and moderation. Otherwise each new feature is another liability.

Community Actions start to roll out to the US tomorrow after several weeks of testing in a couple of markets. Users can add a title, description, and image to their Community Action, and tag relevant government agencies and officials who’ll be notified. The goal is to make the Community Action go viral and get people to hit the “Support” button. Community Actions have their own discussion feed where people can leave comments, create fundraisers, and organize Facebook Events or Call Your Rep campaigns. Facebook displays the numbers of supporters behind a Community Action, but you’ll only be able to see the names of those you’re friends with or that are Pages or public figures.

Facebook is purposefully trying to focus Community Actions more narrowly on spurring government action than on just any random cause. That means it won’t immediately replace Change.org petitions that can range from the civilian to the absurd. But one-click Support straight from the News Feed could massively reduce the friction to signing up, and thereby attract organizations and individuals seeking to maximize the size of their mob.

You can check out some examples of Community Actions here, like the non-profit Colorado Rising calling for the governor to put a moratorium on oil and gas drilling, citizens asking a Florida mayor and state officials to build a performing arts center, and a Philadelphia neighborhood association requesting that the city put in crosswalks by the library. I fully expect one of the first big Community Actions will be the social network’s users asking Senators to shut down Facebook or depose Mark Zuckerberg.

The launch follows other civic-minded Facebook features like its Town Hall and Candidate Info for assessing politicians, Community Help for finding assistance after a disaster, and local news digest Today In. A Facebook spokesperson who gave us the first look at Community Actions provided this statement:

“Building informed and civically engaged communities is at the core of Facebook’s mission. Every day, people come together on Facebook to advocate for causes they care about, including by contacting their elected officials, launching a fundraiser, or starting a group. Through these and other tools, we have seen people marshal support for and get results on issues that matter to them. Community Action is another way for people to advocate for changes in their communities and partner with elected officials and government agencies on solutions.”

The question will be where Facebook’s moderators draw the line on what’s appropriate as a Community Action, and the ensuing calls of bias that line will trigger. Facebook is employing a combination of user flagging, proactive algorithmic detection, and human enforcers to manage the feature. But what the left might call harassment, the right might call free expression. If Facebook allows controversial Community Actions to persist, it could be viewed as complicit with their campaigns, but could be criticized for censorship if it takes one down. Like fake news and trending topics, the feature could become the social network’s latest can of worms.

Facebook is trying to prioritize local Actions where community members have a real stake. It lets users display “constituent” badges so their elected officials know they aren’t just a distant rabble-rouser. It’s why Facebook will not allow President Donald Trump or Vice President Mike Pence to be tagged in Community Actions. But you’re free to tag all your state representatives demanding nude parks, apparently.

Another issue is how people can stand up against a Community Action. Only those who Support one may join its discussion feed. That might lead trolls to falsely pledge their backing just to stir up trouble in the comments. Otherwise, Facebook tells me users will have to share a Community Action to their own feed with a message of disapproval, or launch their own in protest. My concern is that an agitated but niche group could drive a sense of false equivalency by using Facebook Groups or message threads to make it look like there’s as much or more support for a vulgar cause, or against a just one. A politician could be backed into a corner and forced to acknowledge radicals or bad-faith actors lest they look negligent.

While Facebook’s spokesperson says initial tests didn’t surface many troubles, the company is trying to balance safety with efficiency and it will consider how to evolve the feature in response to emergent behaviors. The trouble is that open access draws out the trolls and grifters seeking to fragment society. Facebook will have to assume the thorny responsibility of shepherding the product towards righteousness and defining what that even means. If it succeeds, there’s an amazing opportunity here for citizens to band together to exert consensus upon government. A chorus of voices carries much further than a single cry.

Facebook fears no FTC fine

Reports emerged today that the FTC is considering a fine against Facebook that would be the largest ever from the agency. Even if it were ten times the size of the largest, a $22.5 million bill sent to Google in 2012, the company would basically laugh it off. Facebook is made of money. But the FTC may make it provide something it has precious little of these days: accountability.

A Washington Post report cites sources inside the agency (currently on hiatus due to the shutdown) saying that regulators have “met to discuss imposing a record-setting fine.” We may as well say here that this must be taken with a grain of salt at the outset; that Facebook is non-compliant with terms set previously by the FTC is an established fact, so how much they should be made to pay is the natural next topic of discussion.

But how much would it be? The scale of the violation is hugely negotiable. Our summary of the FTC’s settlement requirements for Facebook indicates that it was:

  • barred from making misrepresentations about the privacy or security of consumers’ personal information;
  • required to obtain consumers’ affirmative express consent before enacting changes that override their privacy preferences;
  • required to prevent anyone from accessing a user’s material more than 30 days after the user has deleted his or her account;
  • required to establish and maintain a comprehensive privacy program designed to address privacy risks associated with the development and management of new and existing products and services, and to protect the privacy and confidentiality of consumers’ information; and
  • required, within 180 days, and every two years after that for the next 20 years, to obtain independent, third-party audits certifying that it has a privacy program in place that meets or exceeds the requirements of the FTC order, and to ensure that the privacy of consumers’ information is protected.

How many of those did it break, and how many times? Is it per user? Per account? Per post? Per offense? What is “accessing” under such and such a circumstance? The FTC is no doubt deliberating these things.

Yet it is hard to imagine them coming up with a number that really scares Facebook. A hundred million dollars is a lot of money, for instance. But Facebook took in more than $13 billion in revenue last quarter. Double that fine, triple it, and Facebook bounces back.
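A quick sketch of the proportions (the quarterly revenue figure comes from the paragraph above; the fine sizes are illustrative):

```python
quarterly_revenue = 13_000_000_000  # >$13B last quarter

for fine in (100_000_000, 200_000_000, 300_000_000):
    print(f"${fine / 1e6:.0f}M fine = {fine / quarterly_revenue:.2%} "
          "of a single quarter's revenue")
# ~0.77%, ~1.54%, ~2.31% -- days of revenue, not an existential threat.
```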

If even a fine ten times the size of the largest it has ever levied can’t faze the target, what can the FTC do to scare Facebook into playing by the book? Make it do what it’s already supposed to be doing, but publicly.

How many ad campaigns is a user’s data being used for? How many internal and external research projects? How many copies are there? What data specifically and exactly is it collecting on any given user, how is that data stored, who has access to it, to whom is it sold or for whom is it aggregated or summarized? What is the exact nature of the privacy program it has in place, who works for it, who do they report to, and what are their monthly findings?

These and dozens of other questions come immediately to mind as things Facebook should be disclosing publicly in some way or another, either directly to users in the case of how one’s data is being used, or in a more general report, such as what concrete measures are being taken to prevent exfiltration of profile data by bad actors, or how user behavior and psychology is being estimated and tracked.

Not easy or convenient questions to answer at all, let alone publicly and regularly. But if the FTC wants the company to behave, it has to impose this level of responsibility and disclosure. Because, as Facebook has already shown, it cannot be trusted to disclose it otherwise. Light touch regulation is all well and good… until it isn’t.

This may in fact be such a major threat to Facebook’s business — imagine having to publicly state metrics that are clearly at odds with what you tell advertisers and users — that it might attempt to negotiate a larger initial fine in order to avoid punitive measures such as those outlined here. Volkswagen spent billions not on fines, but on a sort of punitive community service to mitigate the effects of its emissions cheating. Facebook too could be made to shell out in this indirect way.

What the FTC is capable of requiring from Facebook is an open question, since the scale and nature of these violations are unprecedented. But whatever they come up with, the part with a dollar sign in front of it — however many places it goes to — will be the least of Facebook’s worries.

Facebook is secretly building LOL, a cringey teen meme hub

How do you do, fellow kids? After Facebook Watch, Lasso and IGTV failed to become hits with teens, the company has been quietly developing another youthful video product. Multiple sources confirm that Facebook has spent months building LOL, a special feed of funny videos and GIF-like clips. It’s divided into categories like “For You,” “Animals,” “Fails,” “Pranks” and more with content pulled from News Feed posts by top meme Pages on Facebook. LOL is currently in private beta with around 100 high school students who signed non-disclosure agreements with parental consent to do focus groups and one-on-one testing with Facebook staff.

In response to TechCrunch’s questioning, Facebook confirmed it is privately testing LOL as a home for funny meme content with a very small number of U.S. users. While those testers experience LOL as a replacement for their Watch tab, Facebook says there are no plans to roll out LOL in Watch, and the team is still finalizing whether it will become a separate feature in Facebook’s main app or a standalone app. Facebook initially declined to give a formal statement but told us the details we had were accurate. [Update: A Facebook spokesperson has now provided an official statement: “We are running a small scale test and the concept is in the early stages right now.”]

With teens increasingly turning to ephemeral Stories for sharing and content consumption, Facebook is desperate to lure them back to its easily-monetizable feeds. Collecting the funniest News Feed posts and concentrating them in a dedicated place could appeal to kids seeking rapid-fire lightweight entertainment. LOL could also soak up some of the “low-quality” videos Facebook scrubbed out of the News Feed a year ago in hopes of decreasing zombie-like passive viewing that can hurt people’s well-being.

But our sources familiar with LOL’s design said it still feels “cringey”, like Facebook is futilely pretending to be young and hip. The content found in LOL is sometimes weeks old, so meme-obsessed teens may have seen it before. After years of parents overrunning Facebook, teens have grown skeptical of the app and many have fled for Instagram, Snapchat, and YouTube. Parachuting into the memesphere may come off as inauthentic posing, and Facebook could find it difficult to build a young fanbase for LOL.

In one of the recent designs for LOL, screenshots obtained by TechCrunch show users are greeted with a carousel of themed collections called “Dailies” like “Look Mom No Hands” in a design reminiscent of Snapchat’s Discover section. Below that there’s a feed of algorithmically curated “For You” clips. Users can filter the LOL feed to show categories like “Wait For It”, “Savage”, “Classics”, “Gaming”, “Celebs”, “School”, and “Stand-Up”, or tap buttons atop the screen to see dedicated sub-feeds for these topics.

Once users open a Dailies collection or start scrolling the feed, it turns into a black-bordered theater mode that auto-advances after you finish a video clip for lean-back consumption. Facebook cuts each video clip up into sections several seconds long that users can fast-forward through with a tap like they’re watching a long Instagram Story. Below each piece of content is a set of special LOL reaction buttons for “Funny”, “Alright”, and “Not Funny”. There’s also a share button on each piece of content, plus users can upload videos or paste in a URL to submit videos to LOL.

Facebook has repeatedly failed to capture the hearts of teens with Snapchat clones like Poke and Slingshot, standalone apps like Lifestage, and acquisitions like TBH. Fears that it’s losing the demographic, and that the youth-driven shift from feeds to Stories leaves Facebook with formats it has less experience monetizing, have caused massive drops in the company’s share price over the years. If Facebook can’t fill in this age gap, the next generation of younger users might sidestep the social network too, which could lead to huge downstream problems for growth and revenue.

That’s why Facebook won’t give up on teens, even despite embarrassing stumbles. Its new TikTok clone Lasso saw only 10,000 downloads in its first 12 days. Though it seems like a ghost town, Facebook still updated it with a retweet-like Relasso feature and camera uploads today. Unlike the TikTok-dominated music video space, though, the meme sharing universe is much more fragmented, so there’s a better chance for Facebook to barge in.

Teens discover memes on Reddit, Twitter, and Instagram, and exchange them in DMs. Beyond Imgur, which encompasses lots of visual storytelling styles, there’s no super-popular dedicated meme discovery app. Instagram is probably already the leader, with tons of users following meme accounts to get fresh daily dumps of jokes. Facebook might have more luck if it built meme creation tools and a dedicated viewing hub in Explore or a second Stories tray within Instagram. As is, it mostly ignores meme culture while occasionally shutting down curators’ accounts.

Facebook might seem out of touch, but the fact that it’s even trying to build a meme browser shows it recognizes the opportunity here. Sometimes our brains need a break and we want quick hits of entertainment that don’t require too much thought, commitment, or attention span. As Facebook tries to become more meaningful, LOL could save room for meaningless fun.

Dolby quietly preps augmented audio recorder app “234”

Dolby is secretly building a mobile music production app it hopes will seduce SoundCloud rappers and other musicians. Codenamed “234” and formerly tested under the name Dolby Live, the free app measures background noise before you record and then nullifies it. Users can also buy “packs” of audio effects to augment their sounds with EQ settings like “Amped, Bright, Lyric, Thump, Deep, or Natural”. Recordings can then be exported, shared to Dolby’s own audio social network, or uploaded directly to SoundCloud through a built-in integration.

You could call it VSCO or Instagram for SoundCloud.

234 is Dolby Labs’ first big entrance into the world of social apps that could give it more face time with consumers than its core business of integrating audio technology into devices by other manufacturers. Using 234 to convince musicians that Dolby is an expert at audio quality could get them buying more of those speakers and headphones. And by selling audio effect packs, the app could earn the company money directly while making the world of mobile music sound better.

Dolby has been covertly testing Dolby Live/234 since at least June. A source tipped us off to the app and while the company hasn’t formally announced it, there is a website for signing up to test Dolby 234. Dolby PR refused to comment on the forthcoming app. But 234’s sign-up site advertises it saying: “How can music recorded on a phone sound so good? Dolby 234 automatically cleans up the sound, gives it tone and space, and finds the ideal loudness. It’s like having your own producer in your phone.”

Those with access to the Dolby 234 app can quickly record audio or audio/video clips with optional background noise cancelling. Free sound editing tools include trimming, loudness boost, and bass and treble controls. Users can get a seven-day free trial of Dolby’s “Essentials” pack of EQ presets like ‘Bright’ before having to pay, though the pack was free in the beta version so we’re not sure how much it will cost. The “Tracks” tab lets you edit or share any of the clips you’ve recorded.
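Dolby hasn’t said how the noise nullifying works under the hood, but one standard technique that fits the measure-then-record flow it describes is spectral subtraction: capture a noise profile before recording, then subtract its spectrum from each frame. A minimal NumPy sketch of that general idea (not Dolby’s actual algorithm; frame size is arbitrary):

```python
import numpy as np

def spectral_subtract(recording, noise_sample, frame=1024):
    """Toy noise reduction: average the magnitude spectrum of a pre-roll
    noise clip, then subtract it frame-by-frame from the recording,
    keeping the original phase. Both inputs are float sample arrays."""
    usable = len(noise_sample) // frame * frame
    noise_mag = np.abs(
        np.fft.rfft(noise_sample[:usable].reshape(-1, frame), axis=1)
    ).mean(axis=0)

    cleaned = np.array(recording, dtype=float)
    for start in range(0, len(recording) - frame + 1, frame):
        spec = np.fft.rfft(cleaned[start:start + frame])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)  # floor at zero
        cleaned[start:start + frame] = np.fft.irfft(
            mag * np.exp(1j * np.angle(spec)), n=frame
        )
    return cleaned
```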

Overall, the app is polished and intuitive with a lively feel thanks to the Instagram logo-style purple/orange gradient color scheme. The audio effects have a powerful impact on the sound without being gimmicky or overbearing. There’s plenty of room for additional features, though, like multi-tracking, a metronome, or built-in drum beats.

For musicians posting mobile clips to Instagram or other social apps, 234 could make them sound way better without much work. There’s also a huge opportunity for Dolby to court podcasters and other non-music audio creators. I’d love a way to turn effects on and off mid-recording so I could add the feeling of an intimate whisper or echoey amphitheater to emphasize certain words or phrases.

Given how different 234 is from Dolby’s traditional back-end sound processing technologies, it’s done a solid job with design and the app could still get more bells and whistles before an official launch. It’s a creative move for the brand and one that recognizes the seismic shifts facing audio production and distribution. As always-in earbuds like Apple’s AirPods and voice interfaces like Alexa proliferate, short-form audio content will become more accessible and popular. Dolby could spare the world from having to suffer through amazing creators muffled by crappy recordings.

Twitter bug revealed some Android users’ private tweets

Twitter accidentally revealed some users’ “protected” (aka, private) tweets, the company disclosed this afternoon. The “Protect your Tweets” setting typically allows people to use Twitter in a non-public fashion. These users get to approve who can follow them and who can view their content. For some Android users over a period of several years, that may not have been the case – their tweets were actually made public as a result of this bug.

The company says that the issue impacted Twitter for Android users who made certain account changes while the “Protect your Tweets” option was turned on.

For example, if the user had changed their account email address, the “Protect your Tweets” setting was disabled.

Twitter tells TechCrunch that’s just one example of an account change that could have prompted the issue. We asked for other examples, but the company declined to share any specifics.
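Twitter hasn’t shared the root cause, but the described behavior is consistent with a well-known class of bug: an account-update path that rebuilds settings from the incoming payload and silently drops flags the payload doesn’t mention. A purely hypothetical illustration:

```python
# Purely hypothetical reconstruction of the *class* of bug described --
# not Twitter's actual code.
DEFAULTS = {"protected": False, "email": None}

def update_account_buggy(settings, changes):
    # BUG: rebuilds settings from defaults plus the changes, so an email
    # change silently resets "protected" to False.
    return {**DEFAULTS, **changes}

def update_account_fixed(settings, changes):
    # FIX: merge changes into the existing settings, preserving flags.
    return {**settings, **changes}

settings = {"protected": True, "email": "old@example.com"}
print(update_account_buggy(settings, {"email": "new@example.com"}))
# {'protected': False, 'email': 'new@example.com'}  <- tweets now public
print(update_account_fixed(settings, {"email": "new@example.com"}))
# {'protected': True, 'email': 'new@example.com'}
```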

What’s fairly shocking is how long this issue has been happening.

Twitter says that users may have been impacted by the problem if they made these accounts changes between November 3, 2014, and January 14, 2019 – the day the bug was fixed. 

The company has now informed those who were affected by the issue, and has re-enabled the “Protect your Tweets” setting if it had been disabled on those accounts. But Twitter says it’s making a public announcement because it “can’t confirm every account that may have been impacted.” (!!!)

The company explains to us it was only able to notify those people where it was able to confirm the account was impacted, but says it doesn’t have a complete list of impacted accounts. For that reason, it’s unable to offer an estimate of how many Twitter for Android users were affected in total.

This is a sizable mistake on Twitter’s part, as it essentially made content that users had explicitly indicated they wanted private available to the public. It’s unclear at this time if the issue will result in a GDPR violation and fine.

The one bright spot is that some of the impacted users may have noticed their account had become public because they would have received alerts – like notifications that people were following them without their direct consent. That could have prompted the user to re-enable the “protect tweets” setting on their own. But they may have chalked up the issue to user error or a small glitch, not realizing it was a system-wide bug.

“We recognize and appreciate the trust you place in us, and are committed to earning that trust every day,” wrote Twitter in a statement. “We’re very sorry this happened and we’re conducting a full review to help prevent this from happening again.”

The company says it believes the issue is now fully resolved.

Facebook finds and kills another 512 Kremlin-linked fake accounts

Two years on from the U.S. presidential election, Facebook continues to have a major problem with Russian disinformation being megaphoned via its social tools.

In a blog post today the company reveals another tranche of Kremlin-linked fake activity — saying it’s removed a total of 471 Facebook pages and accounts, as well as 41 Instagram accounts, which were being used to spread propaganda in regions where Putin’s regime has sharp geopolitical interests.

In its latest reveal of “coordinated inauthentic behavior” — aka the euphemism Facebook uses for disinformation campaigns that rely on its tools to generate a veneer of authenticity and plausibility in order to pump out masses of sharable political propaganda — the company says it identified two operations, both originating in Russia, and both using similar tactics without any apparent direct links between the two networks.

One operation was targeting Ukraine specifically, while the other was active in a number of countries in the Baltics, Central Asia, the Caucasus, and Central and Eastern Europe.

“We’re taking down these Pages and accounts based on their behavior, not the content they post,” writes Facebook’s Nathaniel Gleicher, head of cybersecurity policy. “In these cases, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action.”

Sputnik link

Discussing the Russian disinformation op targeting multiple countries, Gleicher says Facebook found what looked like innocuous or general interest pages to be linked to employees of Kremlin propaganda outlet Sputnik, with some of the pages encouraging protest movements and pushing other Putin lines.

“The Page administrators and account owners primarily represented themselves as independent news Pages or general interest Pages on topics like weather, travel, sports, economics, or politicians in Romania, Latvia, Estonia, Lithuania, Armenia, Azerbaijan, Georgia, Tajikistan, Uzbekistan, Kazakhstan, Moldova, Russia, and Kyrgyzstan,” he writes. “Despite their misrepresentations of their identities, we found that these Pages and accounts were linked to employees of Sputnik, a news agency based in Moscow, and that some of the Pages frequently posted about topics like anti-NATO sentiment, protest movements, and anti-corruption.”

Facebook has included some sample posts from the removed accounts in the blog which show a mixture of imagery being deployed — from a photo of a rock concert, to shots of historic buildings and a snowy scene, to obviously militaristic and political protest imagery.

In all Facebook says it removed 289 Pages and 75 Facebook accounts associated with this Russian disop; adding that around 790,000 accounts followed one or more of the removed Pages.

It also reveals that it received around $135,000 for ads run by the Russian operators (specifying this was paid for in euros, rubles, and U.S. dollars).

“The first ad ran in October 2013, and the most recent ad ran in January 2019,” it notes, adding: “We have not completed a review of the organic content coming from these accounts.”

These Kremlin-linked Pages also hosted around 190 events — with the first scheduled for August 2015, according to Facebook, and the most recent scheduled for January 2019. “Up to 1,200 people expressed interest in at least one of these events. We cannot confirm whether any of these events actually occurred,” it further notes.

Facebook adds that open source reporting and work by partners which investigate disinformation helped identify the network. (For more on the open source investigation check out this blog post from DFRLab.)

It also says it has shared information about the investigation with U.S. law enforcement, the U.S. Congress, other technology companies, and policymakers in impacted countries.

Ukraine tip-off

In the case of the Ukraine-targeted Russian disop, Facebook says it removed a total of 107 Facebook Pages, Groups, and accounts, and 41 Instagram accounts, specifying that it was acting on an initial tip off from U.S. law enforcement.

In all it says around 180,000 Facebook accounts were following one or more of the removed Pages, while the fake Instagram accounts were followed by more than 55,000 accounts.

Again Facebook received money from the disinformation purveyors, saying it took in around $25,000 in ad spending on Facebook and Instagram in this case — all paid for in rubles this time — with the first ad running in January 2018, and the most recent in December 2018. (Again it says it has not completed a review of content the accounts were generating.)

“The individuals behind these accounts primarily represented themselves as Ukrainian, and they operated a variety of fake accounts while sharing local Ukrainian news stories on a variety of topics, such as weather, protests, NATO, and health conditions at schools,” writes Gleicher. “We identified some technical overlap with Russia-based activity we saw prior to the US midterm elections, including behavior that shared characteristics with previous Internet Research Agency (IRA) activity.”

In the Ukraine case it says it found no Events being hosted by the pages.

“Our security efforts are ongoing to help us stay a step ahead and uncover this kind of abuse, particularly in light of important political moments and elections in Europe this year,” adds Gleicher. “We are committed to making improvements and building stronger partnerships around the world to more effectively detect and stop this activity.”

A month ago Facebook also revealed it had removed another batch of politically motivated fake accounts. In that case the network behind the pages had been working to spread misinformation in Bangladesh 10 days before the country’s general elections.

This week it also emerged the company is extending some of its nascent election security measures by bringing in requirements for political advertisers to more international markets ahead of major elections in the coming months, such as checks that a political advertiser is located in the country.

However, in other countries which also have big votes looming this year, Facebook has yet to announce any measures to combat politically charged fakes.