Reddit expands chat rooms to more subreddits

If you’d rather chat with strangers who share a hyper-specific interest than keep up with your coworkers’ stale memes on Slack, Reddit is ready for you. The platform has quietly been working on a chat room feature for months, and today it expands beyond its early days as a very limited closed beta.

Plenty of subreddits already make use of a chat room feature, but these live outside of Reddit, usually on Slack or Discord. Given that, it makes sense for Reddit to lure those users back into engaging on Reddit itself by offering its own chat feature.

I spent a little time hanging out in the /r/bjj (Brazilian jiu-jitsu) chat as well as a psychedelics chat affiliated with r/weed to see how things went across the spectrum, and it was pretty chill — mostly people asking for general advice or seeking answers to specific questions. In a Reddit chat linked to the r/community_chat subreddit — the hub for the new chat feature — redditors discussed whether the rooms would lead to more or less harassment and whether the team should add upvotes, downvotes and karma to chat to make it more like Reddit’s normal threads. Of course, what I saw is probably a far cry from what chat will look like if and when some of Reddit’s more inflammatory subreddits get their hands on the new feature. We’ve reached out to Reddit to ask whether all subreddits, even the ones hidden behind content warnings, will be offered the new chat functionality.

Chat rooms are meant as a supplement to already active subreddits, not a standalone community, so it’s basically like watching a Reddit thread unfold in real time. On the Reddit blog, u/thunderemoji writes about why Reddit is optimistic that chat rooms won’t just be another trolling tool:

“I was initially afraid that most people would bring out the pitchforks and… unkind words. I was pleasantly surprised to find that most people are actually quite nice. The nature of real-time, direct chat seems to be especially disarming. Even when people initially lash out in frustration or to troll, I found that if you talk to them and show them you’re a regular human like them, they almost always chill out.

“Beyond just chilling out, people who are initially harsh or skeptical of new things will actually often change their minds. Sometimes they get so excited that they start to show up in unexpected places defending the thing they once strongly opposed in a way that feels more authentic than anything I could say.”

While a few qualitative experiences can only go so far to allay fears, Reddit’s chat does have a few things going for it. For one, moderators add chat rooms. If a subreddit’s mods don’t think they can handle the additional moderation, they don’t have to activate the feature. (A Wired piece on the thinking behind chat explores some of these issues in more depth.)

In the same post, u/thunderemoji adds that Reddit “made moderation features a major priority for our roadmap early in the process” so that mods would have plenty of tools at their disposal. Those tools include an opt-in process, auto-banning users from chat who are banned from a subreddit, “kick” tools that suspend a user for 1 minute, 1 hour, 1 day or 3 days, the ability to lock a room and freeze all activity, rate limits and more.
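Reddit hasn’t published how these tools work under the hood, but the tiered “kick” behavior described above amounts to a timed suspension that a chat server checks before accepting a message. A minimal sketch, with all names and data structures hypothetical:

```python
import time

# Hypothetical tiers matching the durations Reddit lists for its "kick" tool.
KICK_DURATIONS = {
    "1m": 60,
    "1h": 60 * 60,
    "1d": 24 * 60 * 60,
    "3d": 3 * 24 * 60 * 60,
}

suspensions = {}  # username -> epoch time when the suspension lifts


def kick(user, tier):
    """Suspend a user from the chat room for the chosen tier."""
    suspensions[user] = time.time() + KICK_DURATIONS[tier]


def may_post(user):
    """Check a send-message handler might run before accepting a message."""
    return time.time() >= suspensions.get(user, 0)
```

A room lock or rate limit would slot into the same pre-send check, which is presumably why the post groups these tools together.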

To sign up for chat rooms (mods can add as many as they’d like once approved), a subreddit’s moderators can add their name to a list that lives here. To find chat rooms to explore, you can check for a link on subreddits you already visit, poke around the sidebar in this post by Reddit’s product team or check out /r/SubChats, a dedicated new subreddit collecting active chat rooms that accompany interest and community-specific subreddits.

Undercover report shows the Facebook moderation sausage being made

An undercover reporter with the UK’s Channel 4 visited a content moderation outsourcing firm in Dublin and came away rather discouraged at what they saw: queues of flagged content waiting, videos of kids fighting staying online, orders from above not to take action on underage users. It sounds bad, but the truth is there are pretty good reasons for most of it and in the end the report comes off as rather naive.

Not that it’s a bad thing for journalists to keep big companies (and their small contractors) honest, but the situations called out by Channel 4’s reporter seem to reflect a misunderstanding of the moderation process rather than problems with the process itself. I’m not a big Facebook fan, but in the matter of moderation I think they are sincere, if hugely unprepared.

The bullet points raised by the report are all addressed in a letter from Facebook to the filmmakers. The company points out that some content needs to be left up because abhorrent as it is, it isn’t in violation of the company’s stated standards and may be informative; underage users and content has some special requirements but in other ways can’t be assumed to be real; popular pages do need to exist on different terms than small ones, whether they’re radical partisans or celebrities (or both); hate speech is a delicate and complex matter that often needs to be reviewed multiple times; and so on.

The biggest problem doesn’t at all seem to be negligence by Facebook: there are reasons for everything, and as is often the case with moderation, those reasons are often unsatisfying but effective compromises. The problem is that the company has dragged its feet for years on taking responsibility for content and as such its moderation resources are simply overtaxed. The volume of content flagged by both automated processes and users is immense and Facebook hasn’t staffed up. Why do you think it’s outsourcing the work?

By the way, did you know that this is a horrible job?

Facebook in a blog post says that it is working on doubling its “safety and security” staff to 20,000, among which 6,500 will be on moderation duty. I’ve asked what the current number is, and whether that includes people at companies like this one (which has about 650 reviewers) and will update if I hear back.

Even with a staff of thousands the judgments that need to be made are often so subjective, and the volume of content so great, that there will always be backlogs and mistakes. It doesn’t mean anyone should be let off the hook, but it doesn’t necessarily indicate a systematic failure other than, perhaps, a lack of labor.

If people want Facebook to be effectively moderated they may need to accept that the process will be done by thousands of humans who imperfectly execute the task. Automated processes are useful but no replacement for the real thing. The result is a huge international group of moderators, overworked and cynical by profession, doing a messy and at times inadequate job of it.

Twitter is holding off on fixing verification policy to focus on election integrity

Twitter is pausing its work on overhauling its verification process, which provides a blue checkmark to public figures, in favor of election integrity, Twitter product lead Kayvon Beykpour tweeted today. That’s because, as we enter another election season, “updating our verification program isn’t a top priority for us right now (election integrity is),” he wrote on Twitter this afternoon.

Last November, Twitter paused its account verifications as it tried to figure out a way to address confusion around what it means to be verified. That decision came shortly after people criticized Twitter for having verified the account of Jason Kessler, the person who organized the deadly white supremacist rally in Charlottesville, Virginia.

Fast forward to today, and Twitter still verifies accounts “ad hoc when we think it serves the public conversation & is in line with our policy,” Beykpour wrote. “But this has led to frustration b/c our process remains opaque & inconsistent with our intented [sic] pause.”

While Twitter recognizes its job isn’t done, the company is not prioritizing the work at this time — at least for the next few weeks, he said. In an email addressed to Twitter’s health leadership team last week, Beykpour said his team simply doesn’t have the bandwidth to focus on verification “without coming at the cost of other priorities and distracting the team.”

The highest priority, Beykpour said, is election integrity. Specifically, Twitter’s team will be looking at the product “with a specific lens towards the upcoming elections and some of the ‘election integrity’ workstreams we’ve discussed.”

Once that’s done “after ~4 weeks,” he said, the product team will be in a better place to address verification.


Instagram is building non-SMS 2-factor auth to thwart SIM hackers

Hackers can steal your phone number by reassigning it to a different SIM card, use it to reset your passwords, steal your Instagram and other accounts, and sell them for Bitcoin. As detailed in a harrowing Motherboard article today, Instagram accounts are especially vulnerable because the app only offers two-factor authentication through SMS that delivers a password reset or login code via text message.

But now Instagram has confirmed to TechCrunch that it’s building a non-SMS two-factor authentication system that works with security apps like Google Authenticator or Duo. These apps generate a special code needed to log in that can’t be generated on a different phone in case your number is ported to a hacker’s SIM card.
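Instagram hasn’t detailed its implementation, but apps like Google Authenticator and Duo generate those codes with the standard TOTP algorithm (RFC 6238): the service and the phone share a secret at setup, and each independently derives a short-lived code from that secret and the current time, so a stolen phone number reveals nothing. A minimal Python sketch of the algorithm:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, for_time=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238) from a shared secret.

    secret_b32 is the base32 string the service shows when you set up the
    authenticator app; both sides derive the same code from it and the clock.
    """
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Because the code changes every 30 seconds and never travels over the carrier network, SIM porting alone can’t defeat it.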

Buried in the Instagram Android app’s APK code is a prototype of the upgraded 2FA feature, discovered by frequent TechCrunch tipster Jane Manchun Wong. Her work has led to confirmed TechCrunch scoops on Instagram Video Calling, Usage Insights, soundtracks for Stories, and more.

When presented with the screenshots, an Instagram spokesperson told TechCrunch that yes, it is working on the non-SMS 2FA feature, saying “We’re continuing to improve the security of Instagram accounts, including strengthening 2-factor authentication.”

Instagram actually lacked any two-factor protection until 2016, when it already had 400 million users. In November 2015, I wrote a story titled “Seriously. Instagram needs two-factor authentication.” A friend, star Instagram stop-motion animation creator Rachel Ryle, had been hacked, costing her a lucrative sponsorship deal. The company listened. Three months later, the app began rolling out basic SMS-based 2FA.

But since then, SIM porting has become a much more common problem. Hackers typically call a mobile carrier and use social engineering tactics to convince them they’re you, or bribe an employee to help, and then change your number to a SIM card they control. Whether they’re hoping to steal intimate photos, empty cryptocurrency wallets, or sell desirable social media handles like @t or @Rainbow as Motherboard reported, there are plenty of incentives to try a SIM porting attack. This article outlines how you can take steps to protect your phone number.

Hopefully, as this hacking technique becomes better known, more apps will introduce non-SMS 2FA, mobile providers will make it tougher to port numbers, and users will take more steps to safeguard their accounts. As our identities and assets increasingly go digital, it’s PIN codes and authenticator apps, not just deadbolts and home security systems, that must become part of our everyday lives.

Dems and GOP unite, slamming Facebook for allowing violent Pages

In a rare moment of agreement, members of the House Judiciary Committee from both major political parties agreed that Facebook needed to take down Pages that bullied shooting survivors or called for more violence. The hearing regarding social media filtering practices saw policy staffers from Facebook, Google and Twitter answering questions, though Facebook absorbed the brunt of the ire. The hearing included Republican Representative Steve King asking, “What about converting the large behemoth organizations that we’re talking about here into public utilities?”

The meatiest part of the hearing centered on whether social media platforms should delete accounts of conspiracy theorists and those inciting violence, rather than just removing the offending posts.

The issue has been a huge pain point for Facebook this week after giving vague answers for why it hasn’t deleted known faker Alex Jones’ Infowars Page, and tweeting that “We see Pages on both the left and the right pumping out what they consider opinion or analysis – but others call fake news.” Facebook’s Head of Global Policy Management Monica Bickert today reiterated that “sharing information that is false does not violate our policies.”

As I detailed in this opinion piece, I think the right solution is to quarantine the Pages of Infowars and similar fake news, preventing their posts or shares of links to their web domain from getting any visibility in the News Feed. But deleting the Page without instances of it directly inciting violence would make Jones a martyr and strengthen his counterfactual movement. Deletion should be reserved for those that blatantly encourage acts of violence.

Rep. Ted Deutch (D-Florida) asked how Infowars’ claims in YouTube videos that the Parkland shooting’s survivors were crisis actors squared with the company’s policy. Google’s Global Head of Public Policy and Government Relations for YouTube Juniper Downs explained that “We have a specific policy that says that if you say a well-documented violent attack didn’t happen and you use the name or image of the survivors or victims of that attack, that is a malicious attack and it violates our policy.” She noted that YouTube has a “three strikes” policy, it is “demoting low-quality content and promoting more authoritative content,” and it’s now showing boxes atop result pages for problematic searches, like “is the earth flat?” with facts to dispel conspiracies.

Facebook’s answer was much less clear. Bickert told Deutch that “We do use a strikes model. What that means is that if a Page, or profile, or group is posting content and some of that violates our policies, we always remove the violating posts at a certain point” (emphasis mine). That’s where Facebook became suddenly less transparent.

“It depends on the nature of the content that is violating our policies. At a certain point we would also remove the Page, or the profile, or the group at issue,” Bickert continued. Deutch then asked how many strikes conspiracy theorists get. Bickert noted that “crisis actors” claims violate its policy and it removes that content. “And we would continue to remove any violations from the Infowars Page.” But regarding Page-level removals, she got wishy-washy, saying, “If they posted sufficient content that it would violate our threshold, then the page would come down. The threshold varies depending on the different types of violations.”

“The threshold varies”

Rep. Matt Gaetz (R-Florida) gave the conservatives’ side of the same argument, citing two posts by the Facebook Page “Milkshakes Against The Republican Party” that called for violence, including one saying “Remember the shooting at the Republican baseball game? One of those should happen every week.”

While these posts have been removed, Gaetz asked why the Page hadn’t. Bickert noted that “There’s no place for any calls for violence on Facebook.” Regarding the threshold, she did reveal that “When someone posts an image of child sexual abuse imagery their account will come down right away. There are different thresholds for different violations.” But she repeatedly refused to make a judgement call about whether the Page should be removed until she could review it with her team.

Image: Bryce Durbin/TechCrunch

Showing surprising alignment in such a fractured political era, Democratic Representative Jamie Raskin of Maryland said “I’m agreeing with the chairman about this and I think we arrived at the exact same place when we were talking about at what threshold does Infowars have their Page taken down after they repeatedly denied the historical reality of massacres of children in public school.”

Facebook can’t rely on a shadowy “the threshold varies” explanation any more. It must outline exactly what types of violations incur not only post removal but strikes against their authors. Perhaps that’s something like “one strike for posts of child sexual abuse, three posts for inciting violence, five posts for bullying victims or denying documented tragedies occurred, and unlimited posts of less urgently dangerous false information.”
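That hypothetical scheme could be stated as an explicit threshold table, exactly the kind of concrete policy the “threshold varies” answer avoids. A sketch, where every violation category and count is this author’s illustration rather than actual Facebook policy:

```python
# Hypothetical strike thresholds per violation type, following the scheme
# proposed above; a Page comes down once any type reaches its threshold.
STRIKE_THRESHOLDS = {
    "child_sexual_abuse": 1,
    "inciting_violence": 3,
    "bullying_victims_or_denying_tragedies": 5,
    "less_urgent_false_information": float("inf"),  # post removal only
}


def page_should_come_down(strike_counts):
    """strike_counts maps a violation type to the strikes a Page has accrued."""
    return any(
        strike_counts.get(kind, 0) >= limit
        for kind, limit in STRIKE_THRESHOLDS.items()
    )
```

A published table like this would let users and lawmakers audit enforcement instead of guessing at it.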

Whatever the specifics, Facebook needs to provide specifics. Until then, both liberals and conservatives will rightly claim that enforcement is haphazard and opaque.


Tinder tests Bitmoji integration using the recently launched Snap Kit

Tinder will begin testing Bitmoji in its app in Canada and Mexico, the company announced today. The new integration comes courtesy of the recently launched Snap Kit, which allows third-party apps to take advantage of Snap’s login for sign-up and features, like its Snap Map and Bitmoji, among other things. In Tinder, users in the test regions will be able to send the Bitmoji to their matches within their chat conversations.

Tinder was one of Snap’s debut partners for Snap Kit, along with Patreon and Postmates. However, it hadn’t launched the integration until today.

“With our Bitmoji integration, we’re giving users a playful new way to engage with matches. This is just one way we work with partners to add features that encourage users to experiment with more personalized ways of chatting; in this case, it’s the freedom to get creative with avatars,” said Tinder Chief Product Officer Brian Norgard, in an announcement.

This isn’t the first time Tinder has added functionality to enhance its chat experience. It already offers the ability to use emoji, and, thanks to a 2016 partnership with Giphy, users can send GIFs to matches to help break the ice and have more playful conversations.

For those who have access to Bitmoji, the option will appear in place of the emoji button in the chat interface. When you press the green Bitmoji icon (next to the GIF button) for the first time, you’ll have to tap “Connect to Snapchat” to authenticate.

From then on, you can search across your Bitmoji collection using keywords in the search box, or you can tap on the color-coded bubbles that aggregate commonly used Bitmoji expressions like “good morning,” “coffee,” “busy,” and others. You can also tap on “recents” to find those you have used in the past.

Tinder has been running a growing number of experiments as of late, having launched tests of things like Tinder Places for finding matches you may cross paths with, A.I. suggestions on who to “Super Like,” a real-time feed of social updates, curated selections called Tinder Picks, a video feature called Tinder Loops, and more, over the past six months or so. The video feature rolled out globally earlier this month, which indicates that at least some tests will turn into product features for all to use.

Tinder says the Bitmoji feature is in testing in Canada and Mexico, but didn’t confirm if or when it would roll out to other markets, like the U.S.


It’s official: Brexit campaign broke the law — with social media’s help

The UK’s Electoral Commission has published the results of a near nine-month-long investigation into Brexit referendum spending and has found that the official Vote Leave campaign broke the law by breaching election campaign spending limits.

Vote Leave broke the law, it found, including by channeling money to a Canadian data firm, AggregateIQ, to use for targeting political advertising on Facebook’s platform, via undeclared joint working with another Brexit campaign, BeLeave.

Aggregate IQ remains the subject of a separate joint investigation by privacy watchdogs in Canada and British Columbia.

The Electoral Commission’s investigation found evidence that BeLeave spent more than £675,000 with AggregateIQ under a common arrangement with Vote Leave. Yet the two campaigns had failed to disclose on their referendum spending returns that they had a common plan.

As the designated lead leave campaign, Vote Leave had a £7M spending limit under UK law. But via its joint spending with BeLeave the Commission determined it actually spent £7,449,079 — exceeding the legal spending limit by almost half a million pounds.

The June 2016 referendum in the UK resulted in a narrow 52:48 majority for the UK to leave the European Union. Two years on from the vote, the government has yet to agree a coherent policy strategy to move forward in negotiations with the EU, leaving businesses to suck up ongoing uncertainty and society and citizens to remain riven and divided.

Meanwhile, Facebook — whose platform played a key role in distributing referendum messaging — booked revenue of around $40.7BN in 2017 alone, reporting a full year profit of almost $16BN.

Back in May, long-time leave supporter and MEP, Nigel Farage, told CEO Mark Zuckerberg to his face in the European Parliament that without “Facebook and other forms of social media there is no way that Brexit or Trump or the Italian elections could ever possibly have happened”.

The Electoral Commission’s investigation focused on funding and spending, and mainly concerned five payments made to Aggregate IQ in June 2016 — payments made for campaign services for the EU Referendum — by the three Brexit campaigns it investigated (the third being: Veterans for Britain).

Veterans for Britain’s spending return included a donation of £100,000 that was reported as a cash donation received and accepted on 20 May 2016. But the Commission found this was in fact a payment by Vote Leave to Aggregate IQ for services provided to Veterans for Britain in the final days of the EU Referendum campaign. The date was also incorrectly reported: It was actually paid by Vote Leave on 29 June 2016.

Despite the donation to a third Brexit campaign by the official Vote Leave campaign being for services provided by Aggregate IQ, which was also simultaneously providing services to Vote Leave, the Commission did not deem it to constitute joint working, writing: “[T]he evidence we have seen does not support the concern that the services were provided to Veterans for Britain as joint working with Vote Leave.”

It was, however, found to constitute an inaccurate donation report — another offense under the UK’s Political Parties, Elections and Referendums Act 2000.

The report details multiple issues with spending returns across the three campaigns. And the Commission has issued a series of fines to the three Brexit campaigns.

It has also referred two individuals — Vote Leave’s David Alan Halsall and BeLeave’s Darren Grimes — to the UK’s Metropolitan Police Service, which has the power to instigate a criminal investigation.

Early last year the Commission decided not to fully investigate Vote Leave’s spending but by October it says new information had emerged — which suggested “a pattern of action by Vote Leave” — so it revisited the assessment and reopened an investigation in November.

Its report also makes it clear that Vote Leave failed to co-operate with its investigation — including by failing to produce requested information and documents; by failing to provide representatives for interview; by ignoring deadlines to respond to formal investigation notices; and by objecting to the fact of the investigation, including suggesting it would judicially review the opening of the investigation.

Judging by the Commission’s account, Vote Leave seemingly did everything it could to try to thwart and delay the investigation — which is only reporting now, two years on from the Brexit vote and with mere months of negotiating time left before the end of the formal Article 50 exit notification process.

What’s crystal clear from this report is that following money and data trails takes time and painstaking investigation, which — given that, y’know, democracy is at stake — heavily bolsters the case for far more stringent regulations and transparency mechanisms to prevent powerful social media platforms from quietly absorbing politically motivated money and messaging without recognizing any responsibility to disclose the transactions, let alone carry out due diligence on who or what may be funding the political spending.

The political ad transparency measures that Facebook has announced so far come far too late for Brexit — or indeed, for the 2016 US presidential election, when its platform carried and amplified Kremlin-funded divisive messaging which reached the eyeballs of hundreds of millions of US voters.

Last week the UK’s information commissioner, Elizabeth Denham, criticized Facebook for transparency and control failures relating to political ads on its platform, and also announced her intention to fine Facebook the maximum possible for breaches of UK data protection law relating to the Cambridge Analytica scandal, after it emerged that information on as many as 87 million Facebook users was extracted from its platform and passed to a controversial UK political consultancy without most people’s knowledge or consent.

She also published a series of policy recommendations around digital political campaigning — calling for an ethical pause on the use of personal data for political ad targeting, and warning that a troubling lack of transparency about how people’s data is being used risks undermining public trust in democracy.

“Without a high level of transparency – and therefore trust amongst citizens that their data is being used appropriately – we are at risk of developing a system of voter surveillance by default,” she warned.

The Cambridge Analytica Facebook scandal is linked to the Brexit referendum via AggregateIQ — which was also a contractor for Cambridge Analytica, and also handled Facebook user information which the former company had improperly obtained, after paying a Cambridge University academic to use a quiz app to harvest people’s data and use it to create psychometric profiles for ad targeting.

The Electoral Commission says it was approached by Facebook during the Brexit campaign spending investigation with “some information about how Aggregate IQ used its services during the EU Referendum campaign”.

We’ve reached out to Facebook for comment on the report and will update this story with any response.

The Commission states that evidence from Facebook indicates that AggregateIQ used “identical target lists for Vote Leave and BeLeave ads”, although at least in one instance the BeLeave ads “were not run”.

It writes:

BeLeave’s ability to procure services from Aggregate IQ only resulted from the actions of Vote Leave, in providing those donations and arranging a separate donor for BeLeave. While BeLeave may have contributed its own design style and input, the services provided by Aggregate IQ to BeLeave used Vote Leave messaging, at the behest of BeLeave’s campaign director. It also appears to have had the benefit of Vote Leave data and/or data it obtained via online resources set up and provided to it by Vote Leave to target and distribute its campaign material. This is shown by evidence from Facebook that Aggregate IQ used identical target lists for Vote Leave and BeLeave ads, although the BeLeave ads were not run.

“We also asked for copies of the adverts Aggregate IQ placed for BeLeave, and for details of the reports he received from Aggregate IQ on their use. Mr Grimes replied to our questions,” it further notes in the report.

At the height of the referendum campaign — at a crucial moment when Vote Leave had reached its official spending limit — officials from the official leave campaign persuaded BeLeave’s only other donor, an individual called Anthony Clake, to allow it to funnel a donation from him directly to Aggregate IQ, who Vote Leave campaign director Dominic Cummings dubbed a bunch of “social media ninjas”.

The Commission writes:

On 11 June 2016 Mr Cummings wrote to Mr Clake saying that Vote Leave had all the money it could spend, and suggesting the following: “However, there is another organisation that could spend your money. Would you be willing to spend the 100k to some social media ninjas who could usefully spend it on behalf of this organisation? I am very confident it would be well spent in the final crucial 5 days. Obviously it would be entirely legal. (sic)”

Mr Clake asked about this organisation. Mr Cummings replied as follows: “the social media ninjas are based in canada – they are extremely good. You would send your money directly to them. the organisation that would legally register the donation is a permitted participant called BeLeave, a “young people’s organisation”. happy to talk it through on the phone though in principle nothing is required from you but to wire money to a bank account if you’re happy to take my word for it. (sic)

Mr Clake then emailed Mr Grimes to offer a donation to BeLeave. He specified that this donation would be made “via the AIQ account.”

And while the Commission says it found evidence that Grimes and others from BeLeave had “significant input into the look and design of the BeLeave adverts produced by Aggregate IQ”, it also determined that Vote Leave messaging was “influential in their strategy and design” — hence its determination of a common plan between the two campaigns. Aggregate IQ was the vehicle used by Vote Leave to breach its campaign spending cap.

Providing examples of the collaboration it found between the two campaigns, the Commission quotes internal BeLeave correspondence — including an instruction from Grimes to: “Copy and paste lines from Vote Leave’s briefing room in a BeLeave voice”.

It writes:

On 15 June 2016 Mr Grimes told other BeLeave Board members and Aggregate IQ that BeLeave’s ads needed to be: “an effective way of pushing our more liberal and progressive message to an audience which is perhaps not as receptive to Vote Leave’s messaging.”

On 17 June 2016 Mr Grimes told other BeLeave Board members: “So as soon as we can go live. Advertising should be back on tomorrow and normal operating as of Sunday. I’d like to make sure we have loads of scheduled tweets and Facebook status. Post all of those blogs including Shahmirs [aka Shahmir Sanni; who became a BeLeave whistleblower], use favstar to check out and repost our best performing tweets. Copy and paste lines from Vote Leave’s briefing room in a BeLeave voice”

Reminder: Other people’s lives are not fodder for your feeds

#PlaneBae

You should cringe when you read that hashtag. Because it’s a reminder that people are being socially engineered by technology platforms to objectify and spy on each other for voyeuristic pleasure and profit.

The short version of the story attached to the cringeworthy hashtag is this: Earlier this month an individual, called Rosey Blair, spent all the hours of a plane flight using her smartphone and social media feeds to invade the privacy of her seat neighbors — publicly gossiping about the lives of two strangers.

Her speculation was set against a backdrop of rearview creepshots, with a few barely-there scribbles added to blot out actual facial features, even as an entire privacy-invading narrative was being spun around the unknowing pair.

#PlanePrivacyInvasion would be a more fitting hashtag. Or #MoralVacuumAt35000ft

And yet our youthful surveillance society started with a far loftier idea associated with it: Citizen journalism.

Once we’re all armed with powerful smartphones and ubiquitously fast Internet there will be no limits to the genuinely important reportage that will flow, we were told.

There will be no way for the powerful to withhold the truth from the people.

At least that was the nirvana we were sold.

What did we get? Something that looks much closer to mass manipulation. A tsunami of ad stalking, intentionally fake news and social media-enabled demagogues expertly appropriating these very same tools by gaming mindless, ethically nil algorithms.

Meanwhile, masses of ordinary people + ubiquitous smartphones + omnipresent social media feeds seems, for the most part, to be resulting in a kind of mainstream attention deficit disorder.

Yes, there is citizen journalism — such as people recording and broadcasting everyday experiences of aggression, racism and sexism, for example. Experiences that might otherwise go unreported, and which are definitely underreported.

That is certainly important.

But there are also these telling moments of #hashtaggable ethical blackout. As a result of what? Let’s call it the lure of ‘citizen clickbait’ — as people use their devices and feeds to mimic the worst kind of tabloid celebrity gossip ‘journalism’ by turning their attention and high tech tools on strangers, with (apparently) no major motivation beyond the simple fact that they can. Because technology is enabling them.

Social norms and common courtesy should kick in and prevent this. But social media is pushing in an unequal and opposite direction, encouraging users to turn anything — even strangers’ lives — into raw material to be repackaged as ‘content’ and flung out for voyeuristic entertainment.

It’s life reflecting commerce. But a particularly insidious form of commerce that does not accept editorial let alone ethical responsibility, has few (if any) moral standards, and relies, for continued function, upon stripping away society’s collective sense of privacy in order that these self-styled ‘sharing’ (‘taking’ is closer to the mark) platforms can swell in size and profit.

But it’s even worse than that. Social media as a data-mining, ad-targeting enterprise relies upon eroding our belief in privacy. So these platforms worry away at that by trying to disrupt our understanding of what privacy means. Because if you were to consider what another person thinks or feels — even for a millisecond — you might not post whatever piece of ‘content’ you had in mind.

For the platforms it’s far better if you just forget to think.

Facebook’s business is all about applying engineering ingenuity to eradicate the thoughtful friction of personal and societal conscience.

That’s why, for instance, it uses facial recognition technology to automate content identification — meaning there’s almost no opportunity for individual conscience to kick in and pipe up to quietly suggest that publicly tagging others in a piece of content isn’t actually the right thing to do.

Because it’s polite to ask permission first.

But Facebook’s antisocial automation pushes people away from thinking to ask for permission. There’s no button provided for that. The platform encourages us to forget all about the existence of common courtesies.

So we should not be at all surprised that such fundamental abuses of corporate power are themselves trickling down to infect the people who use and are exposed to these platforms’ skewed norms.

Viral episodes like #PlaneBae demonstrate that the same sense of entitlement to private information is being actively passed onto the users these platforms prey on and feed off — and is then getting beamed out, like radiation, to harm the people around them.

The damage is collective when societal norms are undermined.

#PlaneBae

Social media’s ubiquity means almost everyone works in marketing these days. Most people are marketing their own lives — posting photos of their pets, their kids, the latte they had this morning, the hipster gym where they work out — having been nudged to perform this unpaid labor by the platforms that profit from it.

The irony is that most of this work is being done for free. Only the platforms are being paid. Though there are some people making a very modern living; the new breed of ‘life sharers’ who willingly polish, package and post their professional existence as a brand of aspirational lifestyle marketing.

Social media’s gift to the world is that anyone can be a self-styled model now, and every passing moment a fashion shoot for hire — thanks to the largesse of highly accessible social media platforms providing almost anyone who wants it with their own self-promoting shop window on the world. Plus all the promotional tools they could ever need.

Just step up to the glass and shoot.

And then your vacation beauty spot becomes just another backdrop for the next aspirational selfie. Although those aquamarine waters can’t be allowed to dampen or disrupt photo-coifed tresses, nor sand get in the camera kit. In any case, the makeup took hours to apply and there’s the next selfie to take…

What does the unchronicled life of these professional platform performers look like? A mess of preparation for projecting perfection, presumably, with life’s quotidian business stuffed higgledy-piggledy into the margins — where they actually sweat and work to deliver the lie of a lifestyle dream.

Because these are also fakes — beautiful fakes, but fakes nonetheless.

We live in an age of entitled pretence. And while it may be totally fine for an individual to construct a fictional narrative that dresses up the substance of their existence, it’s certainly not okay to pull anyone else into your pantomime. Not without asking permission first.

But the problem is that social media is now so powerfully omnipresent its center of gravity is actively trying to pull everyone in — and its antisocial impacts frequently spill out and over the rest of us. And they rarely if ever ask for consent.

What about the people who don’t want their lives to be appropriated as digital windowdressing? Who weren’t asking for their identity to be held up for public consumption? Who don’t want to participate in this game at all — neither to personally profit from it, nor to have their privacy trampled by it?

The problem is the push and pull of platforms against privacy has become so aggressive, so virulent, that societal norms that protect and benefit us all — like empathy, like respect — are getting squeezed and sucked in.

The ugliness is especially visible in these ‘viral’ moments when other people’s lives are snatched and consumed voraciously on the hoof — as yet more content for rapacious feeds.

#PlaneBae

Think too of the fitness celebrity who posted a creepshot + commentary about a less slim person working out at their gym.

Or the YouTuber parents who monetize videos of their kids’ distress.

Or the men who post creepshots of women eating in public — and try to claim it’s an online art project rather than what it actually is: A privacy violation and misogynistic attack.

Or, on a public street in London one day, I saw a couple of giggling teenage girls watching a man at a bus stop who was clearly mentally unwell. Pulling out a smartphone, one girl hissed to the other: “We’ve got to put this on YouTube.”

For platforms built by technologists without thought for anything other than growth, everything is a potential spectacle. Everything is a potential post.

So they press on their users to think less. And they profit at society’s expense.

It’s only now, after social media has embedded itself everywhere, that platforms are being called out for their moral vacuum; for building systems that encourage abject mindlessness in users — and serve up content so bleak it represents a form of visual cancer.

#PlaneBae

Humans have always told stories. Weaving our own narratives is both how we communicate and how we make sense of personal experience — creating order out of events that are often disorderly, random, even chaotic.

The human condition demands a degree of pattern-spotting for survival’s sake; so we can pick our individual path out of the gloom.

But platforms are exploiting that innate aspect of our character. And we, as individuals, need to get much, much better at spotting what they’re doing to us.

We need to recognize how they are manipulating us; what they are encouraging us to do — with each new feature nudge and dark pattern design choice.

We need to understand their underlying pull. The fact they profit by setting us as spies against each other. We need to wake up, personally and collectively, to social media’s antisocial impacts.

Perspective should not have to come at the expense of other people getting hurt.

This week the woman whose privacy was thoughtlessly repackaged as public entertainment when she was branded and broadcast as #PlaneBae — and who has suffered harassment and yet more unwelcome attention as a direct result — gave a statement to Business Insider.

“#PlaneBae is not a romance — it is a digital-age cautionary tale about privacy, identity, ethics and consent,” she writes. “Please continue to respect my privacy, and my desire to remain anonymous.”

And as a strategy to push against the antisocial incursions of social media, remembering to respect people’s privacy is a great place to start.