Twitter is bringing back the chronological timeline

Your Twitter prayers are answered! Well, maybe not the prayers about harassment or the ones about an edit tweet button, but your other prayers.

Today in a series of tweets, the company announced that it had heard the cries of its various disgruntled users and will bring back a form of the pure chronological timeline that users can opt into. Twitter first took an interest in a more algorithmic timeline three-ish years ago and committed to it in 2016.

Some users were under the impression that they were living that algo-free life already by toggling off the “Show the best Tweets first” option in the account settings menu. Unfortunately for all of us, unchecking that box didn’t revert Twitter to ye olde pure chronological timeline so much as it removed some of the more prominent algorithmic bits that would otherwise be served to users first thing. Users regularly observed non-chronological timeline behavior even with the option toggled off.

As Twitter Product Lead Kayvon Beykpour elaborated, “We’re working on making it easier for people to control their Twitter timeline, including providing an easy switch to see the most recent tweets.”

Nostalgic users who want regular old Twitter back can expect to see the feature in testing “in the coming weeks.”

We’re ready to flip the switch. Just tell us when.

Instagram Shopping gets personalized Explore channel, Stories tags

Instagram is embracing its true identity as a mail-order catalog. The question will be how much power merchants will give Instagram after seeing what its parent Facebook did to news outlets that relied on it. In a move that could pit it against Pinterest and Wish, Instagram is launching Shopping features across its app to let people discover and consider possible purchases before clicking through to check out on the merchant’s website.

Today, Instagram Explore is getting a personalized Shopping channel of items it thinks you’ll want most. Instagram is also expanding its Shopping tags for Stories to all viewers worldwide after a limited test in June, and it’s allowing brands in 46 countries to add the shopping bag icon to Stories, which users can click through to buy what they saw.

Instagram clearly wants to graduate from being a place where people get ideas for things to purchase to being a measurable gateway to their spending. 90 million people already tap its Shopping tags each month, it announced today. The new features could soak up more user attention and lead people to see more ads. But perhaps more importantly, demonstrating that Instagram can boost retail businesses’ sales for free through Stories and Explore could whet their appetite to buy Instagram ads to amplify their reach and juice the conversion channel. With 25 million businesses on Instagram but only 2 million advertisers, the app has room to massively increase its revenue.

For now, Instagram is maintaining its “no comment” on whether it’s working on a standalone Instagram Shopping app, per a report from The Verge last month. Instagram first launched Shopping tags for the feed in 2016. It still points users out to merchant sites for the final payment step, in part because retailers want to control their relationships with customers. But long term, allowing businesses to opt in to in-Instagram checkout could shorten the funnel and get more users actually buying.

Shopping joins the For You, Art, Beauty, Sports, Fashion and other topic channels that launched in Explore in June. The Explore algorithm will show you shopping-tagged posts from businesses you follow and ones you might like based on who you follow and what shopping content engages you. This marks the first time you can view a dedicated shopping space inside of Instagram, and it could become a bottomless well of browsing for those in need of some retail therapy.

With Shopping Stickers, brands can add one per Story and customize the color to match their photo or video. One tap opens the product details page, and another sends shoppers to the merchant’s site. Businesses will be able to see the number of taps on their Shopping Sticker and how many people tapped through to their website. Partnerships with Shopify (500,000+ merchants) and BigCommerce (60,000+ merchants) will make it easy for retailers of all sizes to use Instagram’s Shopping Stickers.

What about bringing Shopping to IGTV? A company spokesperson tells me, “IGTV and live video present interesting opportunities for brands to connect more closely with their customers, but we have no plans to bring shopping tools to those surfaces right now.”

For now, the new shopping features feel like a gift to merchants hoping to boost sales. But so did the surge of referral traffic Facebook sent to news publishers a few years ago. Those outlets soon grew dependent on Facebook, changed their newsroom staffing and content strategies to chase this traffic, and now find themselves in dire straits after Facebook cut off the traffic fire hose as it refocuses on friends and family content.

Retail merchants shouldn’t take the same bait. Instagram Shopping might be a nice bonus, but just how much Instagram prioritizes the feature and spotlights the Explore channel is entirely under its control. Merchants should still work to develop an unmediated relationship directly with their customers, encouraging them to bookmark their sites or sign up for newsletters. Instagram’s favor could disappear with a change to its algorithm, and retailers must always be ready to stand on their own two feet.

Uber fires up its own traffic estimates to fuel demand beyond cars

If the whole map is red and it’s a short ride, maybe you’d prefer taking an Uber JUMP Bike instead of an UberX. Or at least if you do end up stuck bumper-to-bumper, the warning could make you less likely to get mad mid-ride and take it out on the driver’s rating.

This week TechCrunch spotted Uber overlaying blue, yellow, and red traffic condition bars on your route map before you hail. Responding to TechCrunch’s inquiry, Uber confirmed that it has been quietly testing traffic estimates for riders on Android over the past few months, and that the pilot program recently expanded to a subset of iOS users. The feature is already live for all drivers.

The congestion indicators are based on Uber’s own traffic information, pulled from its historical trip data covering about 10 billion rides plus real-time data from its drivers’ phones, rather than from the Google estimates that already power Uber’s maps.
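
Uber hasn’t detailed how it computes those color bands, but conceptually the task is to blend a road segment’s historical speed for the current hour with live speeds reported by driver phones, then bucket the result against the segment’s free-flow speed. Here’s a minimal sketch of that idea; the blending weight and thresholds below are invented for illustration:

```python
"""Illustrative sketch of deriving a congestion color for one road
segment. Uber hasn't published its method; the blending weight and
thresholds here are invented."""

from statistics import mean
from typing import List

def congestion_color(free_flow_mph: float, historical_mph: float,
                     live_mph: List[float], live_weight: float = 0.7) -> str:
    # Blend live driver-phone speeds with the historical norm for this
    # hour, then bucket the ratio of blended speed to free-flow speed.
    live_avg = mean(live_mph) if live_mph else historical_mph
    blended = live_weight * live_avg + (1 - live_weight) * historical_mph
    ratio = blended / free_flow_mph
    if ratio > 0.75:
        return "blue"    # flowing freely
    if ratio > 0.45:
        return "yellow"  # slowing
    return "red"         # congested

# A segment with a 35 mph free-flow speed that averages 22 mph at this
# hour, with drivers currently crawling at ~12 mph:
print(congestion_color(35.0, 22.0, [11.5, 12.0, 13.2]))  # -> "red"
```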

If traffic estimates do roll out widely, they could make users more tolerant of longer ETAs and less likely to check a competing app, since they’ll know that congestion, rather than Uber’s algorithm, is to blame when a driver takes longer to pick them up. During the ride, they might be more patient amid the clogged streets.

Uber’s research into traffic in India

But most interestingly, seeing traffic conditions could help users choose when it’s time to take one of Uber’s non-car choices. They could sail past traffic in one of Uber’s new electric JUMP Bikes, or buy a public transportation ticket from inside Uber thanks to its new partnership with Masabi for access to New York’s MTA plus buses and trains in other cities. Cheaper and less labor intensive for Uber, these options make more sense to riders the more traffic there is. It’s to the company’s advantage to steer users towards the most satisfying mode of transportation, and traffic info could point them in the right direction.

Through a program called Uber Movement, the company began sharing its traffic data with city governments early last year. The goal was to give urban planners the proof they need to make their streets more efficient. Uber has long claimed that it can help reduce traffic by getting people into shared rides and eliminating circling in search of parking. But a new study showed that for each mile of personal driving Uber and Lyft eliminated, they added 2.8 miles of professional driving, a 180 percent increase in total traffic.

Uber is still learning whether users find traffic estimates helpful before it considers rolling them out permanently to everyone. Right now they appear, for a small percentage of users, only on unshared UberX, Black, XL, SUV, and Taxi routes before you hail. But Uber’s spokesperson verified that the company’s long-term goal is to be able to tell users that the fastest way to get there is option X, the cheapest is option Y, and the most comfortable is option Z. Traffic estimates are key to that. And now that it’s had so many cars on the road for so long, it has the signals necessary to predict which streets will be smooth and which will be jammed at a given hour.

For years, Uber called itself a logistics company, not a ride-sharing company. Most people gave it a knowing wink. Every Silicon Valley company tries to puff up its importance by claiming to conquer a higher level of abstraction. But with the advent of personal transportation modes like on-demand bikes and scooters, Uber is poised to earn the title by getting us from point A to point B however we prefer.

Facebook is hiring a director of human rights policy to work on “conflict prevention” and “peace-building”

Facebook is advertising for a human rights policy director to join its business, located either at its Menlo Park HQ or in Washington DC — with “conflict prevention” and “peace-building” among the listed responsibilities.

In the job ad, Facebook writes that as the reach and impact of its various products continues to grow “so does the responsibility we have to respect the individual and human rights of the members of our diverse global community”, saying it’s:

… looking for a Director of Human Rights Policy to coordinate our company-wide effort to address human rights abuses, including by both state and non-state actors. This role will be responsible for: (1) Working with product teams to ensure that Facebook is a positive force for human rights and apply the lessons we learn from our investigations, (2) representing Facebook with key stakeholders in civil society, government, international institutions, and industry, (3) driving our investigations into and disruptions of human rights abusers on our platforms, and (4) crafting policies to counteract bad actors and help us ensure that we continue to operate our platforms consistent with human rights principles.

Among the minimum requirements for the role, Facebook lists experience “working in developing nations and with governments and civil society organizations around the world”.

It adds that “global travel to support our international teams is expected”.

The company has faced fierce criticism in recent years over its failure to take greater responsibility for the spread of disinformation and hate speech on its platform, especially in international markets it has targeted for business growth via its Internet.org initiative, which seeks to get more people ‘connected’ to the Internet (and thus to Facebook).

More connections means more users for Facebook’s business and growth for its shareholders. But the costs of that growth have been cast into sharp relief over the past several years as the human impact of handing millions of people lacking in digital literacy some very powerful social sharing tools — without a commensurately large investment in local education programs (or even in moderating and policing Facebook’s own platform) — has become all too clear.

In Myanmar Facebook’s tools have been used to spread hate and accelerate ethnic cleansing and/or the targeting of political critics of authoritarian governments — earning the company widespread condemnation, including a rebuke from the UN earlier this year, which blamed the platform for accelerating ethnic violence against Myanmar’s Muslim minority.

In the Philippines Facebook also played a pivotal role in the election of president Rodrigo Duterte — who now stands accused of plunging the country into its worst human rights crisis since the dictatorship of Ferdinand Marcos in the 1970s and 80s.

While in India the popularity of the Facebook-owned WhatsApp messaging platform has been blamed for accelerating the spread of misinformation — leading to mob violence and the deaths of several people.

Facebook famously failed even to spot mass manipulation campaigns going on in its own backyard — when in 2016 Kremlin-backed disinformation agents injected masses of anti-Clinton, pro-Trump propaganda into its platform and garnered hundreds of millions of American voters’ eyeballs at a bargain basement price.

So it’s hardly surprising the company has been equally naive in markets it understands far less. Though also hardly excusable — given all the signals it has access to.

In Myanmar, for example, local organizations that are sensitive to the cultural context repeatedly complained to Facebook that it lacked Burmese-speaking staff — complaints that apparently fell on deaf ears for the longest time.

The cost to American society of social media-enabled political manipulation and increased social division is certainly very high. The costs of the weaponization of digital information in markets such as Myanmar look incalculable.

In the Philippines Facebook also indirectly has blood on its hands — having provided services to the Duterte government to help it make more effective use of its tools. This same government is now waging a bloody ‘war on drugs’ that Human Rights Watch says has claimed the lives of around 12,000 people, including children.

Facebook’s job ad for a human rights policy director includes the pledge that “we’re just getting started” — referring to its stated mission of helping people “build stronger communities”.

But when you consider the impact its business decisions have already had in certain corners of the world it’s hard not to read that line with a shudder.

Citing the UN Guiding Principles on Business and Human Rights (and “our commitments as a member of the Global Network Initiative”), Facebook writes that its product policy team is dedicated to “understanding the human rights impacts of our platform and to crafting policies that allow us both to act against those who would use Facebook to enable harm, stifle expression, and undermine human rights, and to support those who seek to advance rights, promote peace, and build strong communities”.

Clearly it has an awful lot of “understanding” to do on this front. And hopefully it will now move fast to understand the impact of its own platform, circa fifteen years into its great ‘society reshaping experiment’, and prevent Facebook from being repeatedly used to trash human rights.

As well as representing the company in meetings with politicians, policymakers, NGOs and civil society groups, Facebook says the new human rights director will work on formulating internal policies governing user, advertiser, and developer behavior on Facebook. “This includes policies to encourage responsible online activity as well as policies that deter or mitigate the risk of human rights violations or the escalation of targeted violence,” it notes. 

The director will also work with internal public policy, community ops and security teams to try to spot and disrupt “actors that seek to misuse our platforms and target our users” — while also working to support “those using our platforms to foster peace-building and enable transitional justice”.

So you have to wonder how, for example, Holocaust denial continuing to be protected speech on Facebook will square with that stated mission for the human rights policy director.

At the same time, Facebook is currently hiring for a public policy manager in Francophone Africa — who it writes can “combine a passion for technology’s potential to create opportunity and to make Africa more open and connected, with deep knowledge of the political and regulatory dynamics across key Francophone countries in Africa”.

That job ad does not explicitly reference human rights — talking only about “interesting public policy challenges… including privacy, safety and security, freedom of expression, Internet shutdowns, the impact of the Internet on economic growth, and new opportunities for democratic engagement”.

But among the role’s listed responsibilities is working with Facebook’s Politics & Government team to “promote the use of Facebook as a platform for citizen and voter engagement to policymakers and NGOs and other political influencers”. So here it looks to be continuing its ‘business as usual’ strategy of pushing for more political activity to take place on Facebook.

If Facebook wants an accelerated understanding of human rights issues around the world, it might be better advised to take a more joined-up approach to human rights across its policy staff, and at least include human rights among the listed responsibilities of all the policy shapers it’s looking to hire.

Twitter now puts live broadcasts at the top of your timeline

Twitter will now put live streams and broadcasts started by accounts you follow at the top of your timeline, making it easier to see what they’re doing in real time.

In a tweet, Twitter said that the new feature will include breaking news, personalities and sports.

The social networking giant included the new feature in its iOS and Android apps, updated this week. Among the updates, Twitter said it’s now also supporting audio-only live broadcasts, both in its own apps and through its sister broadcast service Periscope.

Last month, Twitter discontinued its app for iOS 9 and lower versions, which according to Apple’s own data still account for some 5 percent of all iPhone and iPad users.

Facebook’s new ‘SapFix’ AI automatically debugs your code

Facebook has quietly built and deployed an artificial intelligence programming tool called SapFix that scans code, automatically identifies bugs, tests different patches and suggests the best ones that engineers can choose to implement. Revealed today at Facebook’s @Scale engineering conference, SapFix is already running on Facebook’s massive code base and the company plans to eventually share it with the developer community.

“To our knowledge, this marks the first time that a machine-generated fix — with automated end-to-end testing and repair — has been deployed into a codebase of Facebook’s scale,” writes Facebook’s developer tool team. “It’s an important milestone for AI hybrids and offers further evidence that search-based software engineering can reduce friction in software development.” SapFix can run with or without Sapienz, Facebook’s previously announced automated bug-spotting tool; used together, SapFix suggests fixes for the problems Sapienz discovers.
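
Facebook hasn’t released SapFix’s code yet, but the loop it describes (generate candidate patches, test each one automatically, surface the survivors for human review) is classic search-based program repair. Here’s a toy sketch of that loop; every name and patch template is invented for illustration:

```python
"""Toy sketch of a search-based program repair loop in the spirit of
SapFix. Facebook hasn't published SapFix's internals; every name and
patch template here is invented."""

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Patch:
    description: str
    apply: Callable[[str], str]  # source text -> patched source text

def candidate_patches() -> List[Patch]:
    # Real systems mine templates from past human fixes and can also
    # try reverting the change that introduced the crash.
    return [
        Patch("drop '- 1' from a slice bound",
              lambda s: s.replace("limit - 1", "limit")),
        Patch("widen a comparison",
              lambda s: s.replace(" < ", " <= ")),
    ]

def tests_pass(source: str) -> bool:
    # Stand-in for running an automated test suite (SapFix pairs with
    # tests generated by Sapienz).
    try:
        env: dict = {}
        exec(source, env)
        return env["count_items"]([1, 2, 3], limit=3) == 3
    except Exception:
        return False

def repair(source: str) -> List[Patch]:
    # Keep every candidate that makes the tests pass; an engineer then
    # reviews the survivors and decides what to land.
    return [p for p in candidate_patches() if tests_pass(p.apply(source))]

BUGGY = """
def count_items(items, limit):
    return len(items[:limit - 1])   # off-by-one bug
"""

for patch in repair(BUGGY):
    print("viable fix:", patch.description)  # -> drop '- 1' from a slice bound
```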

These types of tools could allow smaller teams to build more powerful products, or let big corporations save a ton on wasted engineering time. That’s critical for Facebook as it has so many other problems to worry about.


Glow AI hardware partners

Meanwhile, Facebook is pressing forward with its strategy of reorienting the computing hardware ecosystem around its own machine learning software. Today it announced that top silicon manufacturers, including Cadence, Esperanto, Intel, Marvell, and Qualcomm, have signed on to support Glow, its compiler for machine learning hardware acceleration. The plan mirrors Facebook’s Open Compute Project for open sourcing server designs and Telecom Infra Project for connectivity technology.

Glow works with a wide array of machine learning frameworks and hardware accelerators to speed up how they perform deep learning processes. It was open sourced earlier this year at Facebook’s F8 conference.

“Hardware accelerators are specialized to solve the task of machine learning execution. They typically contain a large number of execution units, on-chip memory banks, and application-specific circuits that make the execution of ML workloads very efficient,” Facebook’s team writes. “To execute machine learning programs on specialized hardware, compilers are used to orchestrate the different parts and make them work together . . . Hardware partners that use Glow can reduce the time it takes to bring their product to market.”
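
Glow itself is a C++ project, and the sketch below is not its actual API. But the core idea the quote describes, lowering framework-level ops into a small set of primitives that each hardware partner implements, can be illustrated in a few invented lines:

```python
"""Toy illustration of the lowering step an ML compiler performs.
Not Glow's actual API; all names here are invented."""

from typing import Callable, Dict, List, Tuple

# A lowered primitive: (op name, argument tuple).
Primitive = Tuple[str, tuple]

def lower(op: str, args: tuple) -> List[Primitive]:
    # Lower a framework-level op into primitives every backend must
    # implement, so each silicon partner only writes a small core.
    if op == "fc_relu":  # fused fully connected layer + ReLU
        x, w, b = args
        return [("matmul", (x, w)), ("add_bias", (b,)), ("relu", ())]
    raise NotImplementedError(op)

def run(program: List[Primitive], backend: Dict[str, Callable]) -> list:
    # Execute lowered primitives on a "backend": plain Python callables
    # standing in for accelerator kernels.
    acc = None
    for name, args in program:
        acc = backend[name](acc, *args)
    return acc

# Trivial reference backend operating on 1-D Python lists.
reference_backend: Dict[str, Callable] = {
    "matmul":   lambda _, x, w: [sum(xi * wi for xi, wi in zip(x, row))
                                 for row in w],
    "add_bias": lambda acc, b: [a + bi for a, bi in zip(acc, b)],
    "relu":     lambda acc: [max(0.0, a) for a in acc],
}

x = [1.0, -2.0]                 # input activations
w = [[0.5, 0.5], [1.0, -1.0]]   # weights, one row per output neuron
b = [0.0, -1.0]                 # biases
print(run(lower("fc_relu", (x, w, b)), reference_backend))  # [0.0, 2.0]
```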

Facebook VP of infrastructure Jason Taylor

Essentially, Facebook needs help in the silicon department. Instead of isolating itself and building its own chips like Apple and Google, it’s effectively outsourcing the hardware development to the experts. That means it might forgo a competitive advantage from this infrastructure, but it also allows it to save money and focus on its core strengths.

“What I talked about today was the difficulty of predicting what chip will really do well in the market. When you build a piece of silicon, you’re making predictions about where the market is going to be in two years,” Facebook’s VP of infrastructure Jason Taylor tells me. “The big question is if the workload that they design for is the workload that’s really important at the time. You’re going to see this fragmentation. At Facebook, we want to work with all the partners out there so we have good options now and over the next several years.” Essentially, by partnering with all the chip makers instead of building its own, Facebook future-proofs its software against volatility in which chip becomes the standard.

The technologies aside, the @Scale conference was evidence that Facebook will keep hacking, policy scandals be damned. There was nary a mention of Cambridge Analytica or election interference as a packed room of engineers chuckled at nerdy jokes during keynotes stuffed with enough coding jargon to make the uninitiated assume it was in another language. If Facebook is burning, you couldn’t tell from here.

Snapchat enlists 20 partners to curate Our Stories from submissions

Themed collections of user-generated content chosen by news publishers for viewing on and off Snapchat are the teen social network’s next great hope for relevance. Today Snap launches Curated Our Stories with the help of 20 partners like CNN, Cosmopolitan, Lad Bible, and NowThis. Instead of Snap sifting through and selecting submissions to Our Story all by itself around events, holidays, and fads, these publishers can create slideshows of Snaps about whatever they want. The collections will be featured both in Snapchat Discover, which sees 75 million Our Stories viewers per month, and on the publishers’ own properties thanks to Snap’s recently launched embeds.

To entice partners, Snap has built in monetization from day one, splitting revenue with publishers from ads run in the Our Stories they curate. Snap’s head of Stories Everywhere Rahul Chopra tells me that in exchange for its cut, Snap provides a content management system that publishers can use to search through submitted Snaps using a variety of filters, like keywords in captions and locations. All Curated Our Stories will be moderated by Snap to ensure that publishers aren’t choosing anything objectionable to show.
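
Snap hasn’t published details of that content management system beyond the filter types Chopra describes, but keyword-and-location search over submissions is straightforward to picture. A hypothetical sketch, with the data model invented:

```python
"""Hypothetical sketch of the caption-keyword and location filters Snap's
curation CMS is described as offering. The data model here is invented."""

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Submission:
    caption: str
    city: str

def search(snaps: List[Submission], keyword: Optional[str] = None,
           city: Optional[str] = None) -> List[Submission]:
    # Apply every supplied filter; omitted filters match everything.
    results = snaps
    if keyword is not None:
        results = [s for s in results if keyword.lower() in s.caption.lower()]
    if city is not None:
        results = [s for s in results if s.city == city]
    return results

inbox = [Submission("Game day!!", "Dallas"),
         Submission("sunset run", "Austin")]
print(search(inbox, keyword="game", city="Dallas"))
```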

The curation possibilities are infinite. Partners could create reels of reactions to major news stories or shots from people with eyes on the ground at the scene of the action. They could highlight how people use a certain product, experience a particular place, or use a certain Snapchat creative feature. The publishers might produce daily or weekly collections around a topic or try a wide range of one-offs to surprise their viewers. You could think of it as a little bit like YouTube playlists, but cobbled together from real-time short-form submissions that might be too brief to make an impact on their own.

This is the start of Snapchat crowdsourcing not only content but curation to dig out the best citizen journalism, comedy, and beauty shot on its app and turn it into easily consumable compendiums. Given that Snapchat lost three million users last quarter, it could use the help keeping viewers coming back. But like most everything it launches, if Curated Our Stories blows up, you can bet Facebook and Instagram will turn on their copying machines.

The full list of publisher partners is:

  • Brut

  • CNN

  • Cosmopolitan

  • Daily Mail

  • Daquan

  • Dodo

  • Harper’s Bazaar

  • iHeart

  • The Infatuation

  • Jukin

  • Lad Bible

  • Love Stories TV

  • Mic

  • NBC News

  • NBC Sports

  • NBC, Today Show

  • New York Post

  • NowThis

  • Overtime

  • Refinery 29

  • Telemundo

  • The Tab

  • Viacom

  • Wave.TV

  • Whalar

Europe to push for one-hour takedown law for terrorist content

The European Union’s executive body is doubling down on its push for platforms to pre-filter the Internet, publishing a proposal today for all websites to monitor uploads in order to be able to quickly remove terrorist content.

The Commission handed platforms an informal one-hour rule for removing terrorist content back in March. It’s now proposing turning that into a law to prevent such content spreading its violent propaganda over the Internet.

For now the ‘rule of thumb’ regime continues to apply. But it’s putting meat on the bones of its thinking, fleshing out a more expansive proposal for a regulation aimed at “preventing the dissemination of terrorist content online”.

As per usual EU processes, the Commission’s proposal would need to gain the backing of Member States and the EU parliament before it could be cemented into law.

One major point to note here is that existing EU law does not allow Member States to impose a general obligation on hosting service providers to monitor the information that users transmit or store. But in the proposal the Commission argues that, given the “grave risks associated with the dissemination of terrorist content”, states could be allowed to “exceptionally derogate from this principle under an EU framework”.

So it’s essentially suggesting that Europeans’ fundamental rights might not, in fact, be so fundamental. (Albeit, European judges might well take a different view — and it’s very likely the proposals could face legal challenges should they be cast into law.)

What is being suggested would also apply to any hosting service provider that offers services in the EU — “regardless of their place of establishment or their size”. So, seemingly, not just large platforms, like Facebook or YouTube, but — for example — anyone hosting a blog that includes a free-to-post comment section.

Websites that fail to promptly take down terrorist content would face fines — with the level of penalties being determined by EU Member States (Germany has already legislated to enforce social media hate speech takedowns within 24 hours, setting the maximum fine at €50M).

“Penalties are necessary to ensure the effective implementation by hosting service providers of the obligations pursuant to this Regulation,” the Commission writes, envisaging the most severe penalties being reserved for systematic failures to remove terrorist material within one hour. 

It adds: “When determining whether or not financial penalties should be imposed, due account should be taken of the financial resources of the provider.” So — for example — individuals with websites who fail to moderate their comment section fast enough might not be served the very largest fines, presumably.

The proposal also encourages platforms to develop “automated detection tools” so they can take what it terms “proactive measures proportionate to the level of risk and to remove terrorist material from their services”.

So the Commission’s continued push for Internet pre-filtering is clear. (This is also a feature of its copyright reform — which is being voted on by MEPs later today.)

Albeit, it’s not alone on that front. Earlier this year the UK government went so far as to pay an AI company to develop a terrorist propaganda detection tool that used machine learning algorithms trained to automatically detect propaganda produced by the Islamic State terror group — with a claimed “extremely high degree of accuracy”. (At the time it said it had not ruled out forcing tech giants to use it.)

What is terrorist content for the purposes of this proposal? The Commission refers to an earlier EU directive on combating terrorism — which defines the material as “information which is used to incite and glorify the commission of terrorist offences, encouraging the contribution to and providing instructions for committing terrorist offences as well as promoting participation in terrorist groups”.

And on that front you do have to wonder whether, for example, some of U.S. president Donald Trump’s comments last year after the far-right rally in Charlottesville where a counter-protester was murdered by a white supremacist — comments in which he suggested there were “fine people” among those same murderous and violent white supremacists — might not fall under that ‘glorifying the commission of terrorist offences’ umbrella, should, say, someone repost them to a comment section that was viewable in the EU…

Safe to say, even terrorist propaganda can be subjective. And the proposed regime will inevitably encourage borderline content to be taken down — having a knock-on impact upon online freedom of expression.

The Commission also wants websites and platforms to share information with law enforcement and other relevant authorities and with each other — suggesting the use of “standardised templates”, “response forms” and “authenticated submission channels” to facilitate “cooperation and the exchange of information”.

It tackles the problem of what it refers to as “erroneous removal” — i.e. content that’s removed after being reported or erroneously identified as terrorist propaganda but which is subsequently, under requested review, determined not to be — by placing an obligation on providers to have “remedies and complaint mechanisms to ensure that users can challenge the removal of their content”.

So platforms and websites will be obligated to police and judge speech — which they already do, of course, but the proposal doubles down on turning online content hosts into judges and arbiters of that same content.

The regulation also includes transparency obligations on the steps being taken against terrorist content by hosting service providers — which the Commission claims will ensure “accountability towards users, citizens and public authorities”. 

Other perspectives are of course available… 

The Commission envisages all taken down content being retained by the host for a period of six months so that it could be reinstated if required, i.e. after a valid complaint — to ensure what it couches as “the effectiveness of complaint and review procedures in view of protecting freedom of expression and information”.

It also sees the retention of takedowns helping law enforcement — meaning platforms and websites will continue to be co-opted into state law enforcement and intelligence regimes, getting further saddled with the burden and cost of having to safely store and protect all this sensitive data.

(On that the EC just says: “Hosting service providers need to put in place technical and organisational safeguards to ensure the data is not used for other purposes.”)

The Commission would also create a system for monitoring the monitoring it’s proposing platforms and websites undertake — thereby further extending the proposed bureaucracy. It says it would establish a “detailed programme for monitoring the outputs, results and impacts” within one year of the regulation being applied; report on its implementation and transparency elements within two years; and evaluate the regulation’s entire functioning four years after it comes into force.

The executive body says it consulted widely ahead of forming the proposals — including running an open public consultation, carrying out a survey of 33,500 EU residents, and talking to Member States’ authorities and hosting service providers.

“By and large, most stakeholders expressed that terrorist content online is a serious societal problem affecting internet users and business models of hosting service providers,” the Commission writes. “More generally, 65% of respondents to the Eurobarometer survey considered that the internet is not safe for its users and 90% of the respondents consider it important to limit the spread of illegal content online.

“Consultations with Member States revealed that while voluntary arrangements are producing results, many see the need for binding obligations on terrorist content, a sentiment echoed in the European Council Conclusions of June 2018. While overall, the hosting service providers were in favour of the continuation of voluntary measures, they noted the potential negative effects of emerging legal fragmentation in the Union.

“Many stakeholders also noted the need to ensure that any regulatory measures for removal of content, particularly proactive measures and strict timeframes, should be balanced with safeguards for fundamental rights, notably freedom of speech. Stakeholders noted a number of necessary measures relating to transparency, accountability as well as the need for human review in deploying automated tools.”