Facebook changes algorithm to promote worthwhile & close friend content

Facebook is updating the News Feed ranking algorithm to incorporate data from surveys about who you say your closest friends are and which links you find most worthwhile. Today Facebook announced it’s trained new classifiers based on patterns linking these surveys with usage data so it can better predict what to show in the News Feed. The change could hurt Pages that share clickbait and favor those sharing content that leaves people feeling satisfied afterward.

For close friends, Facebook surveyed users about which people they were closest to. It then analyzed how those answers matched up with signals such as who you’re tagged in photos with, who you interact with most often, who likes the same posts and checks in to the same places as you, and more. That way, when it recognizes those signals in other people’s friendships, it can be confident about who someone’s closest friends are, the ones they’ll want to see the most of. You won’t see more friend content in total, but more from your best pals instead of distant acquaintances.
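A crude way to picture this kind of signal-matching is a weighted score over observable interactions. The sketch below is purely illustrative: the signal names, weights, and data are invented, and Facebook's actual system is a trained classifier, not a hand-tuned formula.

```python
# Hypothetical sketch of ranking friendships by interaction signals.
# Signal names, weights, and data are invented for illustration only.

SIGNAL_WEIGHTS = {
    "photo_co_tags": 3.0,   # tagged in photos together
    "interactions": 1.5,    # comments, messages, reactions
    "shared_likes": 0.5,    # liking the same posts
    "co_checkins": 2.0,     # checking in to the same places
}

def closeness_score(signals):
    """Weighted sum of one friendship's interaction counts."""
    return sum(SIGNAL_WEIGHTS[name] * count for name, count in signals.items())

def rank_friends(friend_signals):
    """Order friends from closest to most distant by score."""
    return sorted(friend_signals,
                  key=lambda f: closeness_score(friend_signals[f]),
                  reverse=True)

friends = {
    "ana": {"photo_co_tags": 4, "interactions": 20, "shared_likes": 8, "co_checkins": 2},
    "ben": {"photo_co_tags": 0, "interactions": 2, "shared_likes": 5, "co_checkins": 0},
}
print(rank_friends(friends))  # "ana" ranks above "ben"
```

In Facebook's version, the survey answers would serve as labels for learning those weights from data rather than fixing them by hand.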

A Facebook News Feed survey from 2016, shared by Varsha Sharma

For worthwhile content, Facebook conducted surveys via the News Feed to find out which links people said were good uses of their time. Facebook then looked at which types of link posts those were, which publishers shared them, and how much engagement the posts got, and matched those patterns to the survey results. That lets it determine that if a post has a similar style and engagement level, it’s likely to be worthwhile and should be ranked higher in the feed.
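The described setup, survey answers as labels and post attributes as features, is a standard supervised-learning pattern. The toy model below is not Facebook's actual classifier; the features, training data, and hyperparameters are all invented to show the shape of the idea.

```python
# Illustrative toy classifier (NOT Facebook's model): learn to map post
# features to the probability a reader would call the link "worthwhile",
# using survey answers as labels. All features and data are invented.
import math

# Each row: (clickbait_headline, engagement_rate, trusted_publisher), survey label
TRAINING = [
    ((1.0, 0.9, 0.0), 0),  # clickbait with high engagement -> not worthwhile
    ((1.0, 0.7, 0.0), 0),
    ((0.0, 0.4, 1.0), 1),  # trusted publisher -> worthwhile
    ((0.0, 0.6, 1.0), 1),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.5, epochs=2000):
    """Fit a logistic model with plain stochastic gradient descent."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def worthwhile_probability(x, w, b):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

w, b = train(TRAINING)
# A clickbait-looking post scores low; a trusted-publisher post scores high,
# so the latter would be ranked higher in the feed.
print(worthwhile_probability((1.0, 0.8, 0.0), w, b))
print(worthwhile_probability((0.0, 0.5, 1.0), w, b))
```

The point is only that once such a model is trained on survey labels, it can score every post at ranking time without surveying anyone again.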

The change aligns with CEO Mark Zuckerberg’s recent comments declaring that Facebook’s goal isn’t total time spent, but time well spent with meaningful content you feel good about. Most recently, that push has been about demoting unsafe content. Last month Facebook changed the algorithm to minimize clickbait and links to crappy ad-filled sites that receive a disproportionately high amount of their traffic from Facebook. It cracked down on unoriginality by hiding videos ripped off from other creators, and began levying harsher demotions to repeat violators of its policies. And it began to decrease the distribution of “borderline content” on Facebook and Instagram that comes close to but doesn’t technically break its rules.

While many assume Facebook just juices News Feed to be as addictive in the short-term as possible to keep us glued to the screen and viewing ads, that would actually be ruinous for its long-term business. If users leave the feed feeling exhausted, confused, and unfulfilled, they won’t come back. Facebook’s already had trouble with users ditching its text-heavy News Feed for more visual apps like Instagram (which it luckily bought) and Snapchat (which it tried to). While demoting click-bait and viral content might decrease total usage time today, it could preserve Facebook’s money-making ability for the future while also helping to rot our brains a little less.

Instagram is killing Direct, its standalone Snapchat clone app, in the next several weeks

As Facebook pushes ahead with its strategy to consolidate more of the backend of its various apps on to a single platform, it’s also doing a little simplifying and housekeeping. In the coming month, it will shut down Direct, the standalone Instagram direct messaging app that it was testing to rival Snapchat, on iOS and Android. Instead, Facebook and its Instagram team will channel all developments and activity into the direct messaging feature of the main Instagram app.

We first saw a message about the app closing down by way of a tweet from Direct user Matt Navarra: “In the coming month, we’ll no longer be supporting the Direct app,” Instagram notes in the app itself. “Your conversations will automatically move over to Instagram, so you don’t need to do anything.”

The details were then confirmed to us by Instagram itself:

“We’re rolling back the test of the standalone Direct app,” a spokesperson said in a statement provided to TechCrunch. “We’re focused on continuing to make Instagram Direct the best place for fun conversations with your friends.”

From what we understand, Instagram will continue developing Direct features — they just won’t live in a standalone app. (Tests and rollouts of new features that we’ve reported on before include encryption in direct messaging, the ability to watch videos with other people, and a web version of the direct messaging feature.)

Instagram didn’t give any reason for the decision, but in many ways, the writing was on the wall with this one.

The app first appeared December 2017, when Instagram confirmed it had rolled it out in a select number of markets — Uruguay, Chile, Turkey, Italy, Portugal and Israel — as a test. (Instagram first launched direct messaging within the main app in 2013.)

“We want Instagram to be a place for all of your moments, and private sharing with close friends is a big part of that,” it said at the time. “To make it easier and more fun for people to connect in this way, we are beginning to test Direct – a camera-first app that connects seamlessly back to Instagram.”

But it’s not clear how many markets beyond those initial six ultimately had access to the app, although Instagram did expand it further. The iOS version currently notes that it is available in a much wider range of languages than just Spanish, Turkish, Italian and Portuguese: it also includes English, Croatian, Czech, Danish, Dutch, Finnish, French, German, Greek, Indonesian, Japanese, Korean, Malay, Norwegian Bokmål, Polish, Romanian, Russian, Simplified Chinese, Slovak, Swedish, Tagalog, Thai, Traditional Chinese, Ukrainian and Vietnamese.

But with Instagram doing little to actively promote the app or its expansion to more markets, Direct never really found a lot of traction in the markets where it was active.

The only countries that make it onto App Annie’s app rankings for Direct are Uruguay for Android, where it was most recently at number 55 among social networking apps (with no figures for overall rankings, meaning it was too low down to be counted); and Portugal on iOS, where it was number 24 among social apps and a paltry 448 overall.

The Direct app hadn’t been updated on iOS since the end of December, although the Android version was updated as recently as the end of April.

At the time of its original launch as a test, however, Direct looked like an interesting move from Instagram.

The company had already been releasing various other features that cloned popular ones in Snapchat. The explosive growth and traction of one of them, Stories, could have felt like a sign to Facebook that there was more ground to break on creating more Snapchat-like experiences for its audience. More generally, the rise of Snapchat and direct messaging apps like WhatsApp has shown that there is a market demand for more apps based around private conversations among smaller groups, if not one-to-one.

On top of that, building a standalone messaging app takes a page out of Facebook’s own app development book, in which it launched and began to really accelerate development of a standalone Messenger app separate from the Facebook experience on mobile.

The company has not revealed any recent numbers for usage of Direct since 2017, when it said there were 375 million users of the service as it brought together permanent and ephemeral (disappearing) messages within the service.

More recently, Instagram and Facebook itself have been part of the wider scrutiny we have seen over how social platforms police and moderate harmful or offensive content. Facebook has faced an additional wave of criticism over its plans to bring its disparate apps together on a common backend, with critics arguing that it is not giving apps like WhatsApp and Instagram enough autonomy and is becoming an even bigger data monster in the process.

It may have been the depressingly low usage that ultimately killed off Direct, but I’d argue that the optics of promoting an expansion of its app real estate weren’t particularly strong, either.

Facebook introduces ‘one strike’ policy to combat abuse of its live-streaming service

Facebook is cracking down on its live streaming service after it was used to broadcast the shocking mass shootings that left 50 dead at two Christchurch mosques in New Zealand in March. The social network said today that it is implementing a ‘one strike’ rule that will prevent users who break its rules from using the Facebook Live service.

“From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time — for example 30 days — starting on their first offense. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time,” Facebook VP of integrity Guy Rosen wrote.

The company said it plans to implement additional restrictions for these people, which will include limiting their ability to take out ads on the social network. Those who violate Facebook’s policy against “dangerous individuals and organizations” — a new introduction that it used to ban a number of right-wing figures earlier this month — will be restricted from using Live, although Facebook isn’t being specific on the duration of the bans or what it would take to trigger a permanent bar from live-streaming.

Facebook is increasingly using AI to detect and counter violent and dangerous content on its platform, but that approach simply isn’t working.

Beyond the challenge of non-English languages (Facebook’s AI detection system has failed in Myanmar, for example, despite what CEO Mark Zuckerberg had claimed), the detection system wasn’t robust enough to deal with the aftermath of Christchurch.

The stream itself was not reported to Facebook until 12 minutes after it had ended, and Facebook failed to block 20 percent of the videos of the live stream that were later uploaded to its site. Indeed, TechCrunch found several videos still on Facebook more than 12 hours after the attack, despite the social network’s efforts to cherry-pick ‘vanity stats’ that appeared to show its AI and human teams had things under control.

Acknowledging that failure indirectly, Facebook said it will invest $7.5 million in “new research partnerships with leading academics from three universities, designed to improve image and video analysis technology.”

Early partners in this initiative include the University of Maryland, Cornell University and the University of California, Berkeley, which it said will assist with techniques to detect manipulated images, video and audio. Another objective is to use technology to identify the difference between those who deliberately manipulate media and those who do so unwittingly.

Facebook said it hopes to add other research partners to the initiative, which is also focused on combating deepfakes.

“Although we deployed a number of techniques to eventually find these variants, including video and audio matching technology, we realized that this is an area where we need to invest in further research,” Rosen conceded in the blog post.

Facebook’s announcement comes less than one day after a collection of world leaders, including New Zealand Prime Minister Jacinda Ardern, called on tech companies to sign a pledge to increase their efforts to combat toxic content.

According to people working for the French Economy Ministry, the Christchurch Call doesn’t contain any specific recommendations for new regulation. Rather, countries can decide what they mean by violent and extremist content.

“For now, it’s a focus on an event in particular that caused an issue for multiple countries,” French Digital Minister Cédric O said in a briefing with journalists.

Uber Black launches Quiet Driver Mode

Tired of chatty drivers? Uber is finally giving users its most requested feature: an in-app way to request minimal conversation during your ride. The “Quiet Mode” feature is free and will be available to everyone in the US tomorrow, but only on Uber Black and Uber Black SUV premium rides. Users can select “Quiet preferred,” “Happy to chat” or leave the setting at “No preference.” The desire for silence might convince more riders to pay for Uber’s more expensive vehicle types so they can work, nap, take a call, or just relax in the car.

Quiet Mode comes as part of a new slate of Rider Preferences features that users can set up before they hail an Uber Black or SUV, but not while waiting for their ride or while in the car. A Bags option lets users signal that they have luggage with them so the driver knows to pull over somewhere they can help load them into the trunk. The Temperature control lets them request the car be warm or cold so drivers know whether to crank the air conditioning.

Uber Black drivers are now supposed to wait 15 minutes after arriving before cancelling on you, as is standard with private car services, though you’ll start to be charged (and they’ll be compensated) after five minutes, and they technically can cancel whenever they want. Uber Black riders will get premium phone support like members of Uber Rewards’ highest Diamond tier. And Uber is going to require nicer and newer cars for future drivers signing up for Uber Black, with centralized rules written by Uber HQ instead of local branches. “We’re looking to create more differentiation between the premium products and the regular products to encourage more trips,” Uber product manager Aydin Ghajar tells me. Quiet Mode in particular “is something that people have been asking for for a long time.”

I think Quiet Mode is going to be a hit, perhaps because I requested that Uber build a “Quiet Ride Mode” in my December product wish list after suggesting it last July. The feedback I received from many male readers was that there are worse things than having to chat with a friendly driver, and it’s rude or dehumanizing to demand they stay silent.

But that ignores the fact that women often feel uncomfortable when male drivers incessantly talk to them, and it can get scary when it turns into unwanted flirtation considering the driver is in control. In many cases, riders may feel rude or frightened to reject conversation and ask out loud for quiet. That’s why I hope Uber plans to expand this to UberX as well as international markets, though the company had nothing to share on that.

What Uber’s Ghajar did reveal was that “the reaction of Uber black drivers was overwhelmingly positive because they want to deliver a great experience to their rider…but they don’t necessarily know what the rider wants. These guys take a lot of pride in what they do as customer service agents”.

Uber did extensive research of drivers’ perceptions in the three months it took to develop the feature. But due to employment laws, it can’t actually require that drivers abide by user requests for quiet, though they might get negative ratings if they ignore them. Ghajar insists “It’s not mandatory. The driver is an independent contractor. We’re just communicating the rider’s preference. The driver can have that information and do with it what they want.”

Given premium rides often cost 2X the UberX price and over 3X the UberPool price, Uber could make a lot of money encouraging upgrades. That’s crucial at a time when it’s desperate to improve its margins and shrink its losses after a weak IPO last week saw its new share price dip. With so many competing ride services around the world, Uber is wise to try to differentiate on customer service instead of just costly efforts to win with more cars, lower prices, and sharper algorithms.

After year-long lockout, Twitter is finally giving people their accounts back

Twitter is finally allowing a number of locked users to regain control of their accounts once again. Around a year after Europe’s new privacy laws (GDPR) rolled out, Twitter began booting users out of their accounts if it suspected the account’s owner was underage — that is, younger than 13. But the process also locked out many users who said they were now old enough to use Twitter’s service legally.

While Twitter’s rules had stated that users under 13 can’t create accounts or post tweets, many underage users did so anyway thanks to lax enforcement of the policy. The GDPR regulations, however, forced Twitter to address the issue.

But even if the Twitter users were old enough to use the service when the regulations went into effect in May 2018, Twitter still had to figure out a technical solution to delete all the content published to its platform when those users were underage.

The lock-out approach was an aggressive way to deal with the problem.

By comparison, another app favored by underage users, TikTok, was recently fined by the FTC for being in violation of U.S. children’s privacy law, COPPA. But instead of kicking out all its underage users for months on end, it forced an age gate to appear in the app after it deleted all the videos made by underage users. Those users who were still under 13 were then redirected to a new COPPA-compliant experience.

Although Twitter was forced to address the problem because of the new regulations, lest it face possible fines, the company seemingly didn’t prioritize a fix. For example, VentureBeat reported how Twitter emailed users in June 2018 saying they’d be in touch with an update about the problem soon, but no update ever arrived.

The hashtag #TwitterLockOut became a common occurrence on Twitter and cries of “Give us back our accounts!” would be found in the Replies whenever Twitter shared other product news on its official accounts. (Well, that and requests for an Edit button, of course.) 

Twitter says that it’s now beginning — no, for real this time! — to give the locked out users control of their accounts. The process will roll out in waves as it scales up, with those who have waited the longest getting their emails first.

It also claims the process “was a lot more complicated” than anticipated, which is why it took a year (or in some cases, more than a year) to complete.

However, there are some caveats.

The users will first need to give Twitter permission to delete any tweets posted before they were 13, as well as any likes, DMs sent or received, moments, lists and collections. Twitter will also need to remove all profile information besides the account’s username and date of birth.

In other words, the company is offering users a way to reclaim their username but none of their content.

Though many of these users have since moved on to new Twitter accounts, they may still want to reclaim their old username if it was a good one. In addition, their follower/following counts will return to normal within 24 hours of their taking control of the account once again.

Twitter says it’s beginning to email details to those who are eligible, starting today. If the user doesn’t have an email address associated with the account, they can instead log into the account, where they’ll see a “Get Started” button to kick off the process.

To proceed, users will have to confirm their name and either the email or phone number that was associated with the account.

The account isn’t immediately unlocked after the steps are completed, users report. But Twitter’s dialog box informs the users they’ll be notified when the process is finalized on Twitter’s side.

Hopefully, that won’t take another year.

Image credits (of the process): Reddit user nyuszika7h, via r/Twitter 

Looking back at Zoom’s IPO with CEO Eric Yuan

Since its IPO in mid-April, Zoom’s stock has skyrocketed, up nearly 30% as of Monday’s open. However, as the company’s valuation continues to tick up, analysts and industry pundits are diving deeper to try to unravel what the company’s future growth might look like.

TechCrunch’s venture capital reporter Kate Clark has been following the story closely and will be sitting down for an exclusive conversation with Zoom CEO Eric Yuan on Wednesday at 10:00 am PT. Eric, Kate and Extra Crunch members will take a look back at the company’s listing process and Zoom’s road to IPO.

Tune in to join the conversation and for the opportunity to ask Eric and Kate any and all things Zoom.

To listen to this and all future conference calls, become a member of Extra Crunch. Learn more and try it for free.

MailChimp’s Ben Chestnut on bootstrapping a startup to $700M in revenue

The well-known tech startup routine of coming up with an idea, raising money from VCs in increasing rounds as valuations continue to rise, and then eventually going public or getting acquired has been around for as long as the myth of Silicon Valley itself. But the evolution of MailChimp — a notable, bootstrapped outlier out of Atlanta, Georgia, that provides email and other marketing services to small businesses — tells a very different story of tech startup success.

As the company closes in on $700 million in annual revenue for 2019, it has no intention of letting up, or selling out: no outside funding, no plans for an IPO, and a “no” to all the companies that have tried to acquire it. And it has been profitable from day one.

This week, the company is unveiling what is probably its biggest product update since first starting to sell email marketing services 20 years ago: It’s launching a new marketing platform that features social media management, ad retargeting, AI-based business intelligence, domain sales, web development templates and more.

I took the opportunity to speak with its co-founder and CEO, Ben Chestnut — who started Mailchimp as a side project with two friends, Mark Armstrong and Dan Kurzius, in the trough of the first dot-com bust — on Mailchimp’s origins and plans for what comes next. The startup’s story is a firm example of how there is definitely more than one route to success in tech.


Ingrid Lunden: You’re launching a new marketing platform today, but I want to walk back a little first. This isn’t your first move away from email. We discovered back in March that you quietly acquired a Canadian e-commerce startup, LemonStand, just as you were parting ways with Shopify.

Ben Chestnut: We wanted to have a tool to help small business marketers do their initial selling. The focus is not multiple products. Just one. We’re not interested in setting up full-blown e-commerce carts. This is about helping companies sell one product in an Instagram ad with a buy button, and we felt that the people at LemonStand could help us with that.

After criticism over moderator treatment, Facebook raises wages and boosts support for contractors

Facebook has been repeatedly (and rightly) hammered for its treatment of the content moderators who ensure the site doesn’t end up becoming a river of images, videos and articles embodying the worst of humanity.

Those workers, and the hundreds (if not thousands) of other contractors Facebook employs to cook food, provide security and provide transportation for the social media giant’s highly compensated staff, are getting a little salary boost and a commitment to better care given the toll these jobs can take on some workers.

“Today we’re committing to pay everyone who does contract work at Facebook in the US a wage that’s more reflective of local costs of living,” the company said in a statement. “And for those who review content on our site to make sure it follows our community standards, we’re going even further. We’re going to provide them a higher base wage, additional benefits, and more supportive programs given the nature of their jobs.”

Contractors in the U.S. were already being paid a $15 minimum wage; received 15 paid days off for holidays, sick time and vacation; and received a $4,000 new child benefit for parents who don’t receive paid leave. Since 2016, Facebook has also required that contractors assigned to the company be provided with comprehensive healthcare.

Now, it’s boosting those wages in Washington, New York and the San Francisco Bay Area to a $20 minimum wage, and to $18 in Seattle.

“After reviewing a number of factors including third-party guidelines, we’re committing to a higher standard that better reflects local costs of living,” the company said. “We’ll be implementing these changes by mid-next year and we’re working to develop similar standards for other countries.”

Those raises apply to contractors who don’t work on content moderation. For contractors involved in moderation, the company committed to a $22 per hour minimum wage in the Bay Area, New York and Washington; $20 per hour in Seattle; and $18 per hour in other U.S. metro areas.

Facebook also said it will institute a similar program for international standards going forward. That’s important, as a bulk of the company’s content moderation work is actually done overseas, in places like the Philippines.

Content moderators will also have access to “ongoing well-being and resiliency training.” Facebook also said it was adding preferences to let reviewers customize how they want to view content — including an option to blur graphic images by default before reviewing them. Facebook will also provide around-the-clock on-site counseling, and will survey moderators at partner sites about what reviewers actually need.

Last month, the company said it convened its first vendor partner summit at its Menlo Park, Calif. offices and is now working to standardize contracts with its global vendors. To ensure that vendors are meeting their commitments, the company is going to hold unannounced onsite checks and a biannual audit and compliance program for content review teams.