“It’s an amazing feeling,” says David Mzee, whose left leg was paralyzed in 2010. Mzee has now regained some ability to walk thanks to a breakthrough in spinal-cord stimulation technology. “I can do a knee extension of my left leg… flex my hip and even move my toes.”
Mzee is one of three participants in a study that used a new technique to overcome spinal-cord injury and restore walking ability in patients with varying degrees of paralysis. The results, published in Nature and Nature Neuroscience today, are dramatic. All three patients recovered some degree of walking ability, and their progress in physical-therapy sessions has translated to improved mobility in their daily lives.
The basis of the technique, called epidural electrical stimulation (EES), is not new at all—it’s been investigated as a potential treatment for paralysis for decades, with a lot of success in animals. And in September this year, two separate papers reported breakthroughs in allowing patients with paralysis to walk, with assistance, as a result of EES.
As America comes to grips with two more violent, homegrown plots — an attempt to mail pipe bombs to prominent Democrats and a mass shooting at a Pittsburgh synagogue — reality and surreality may seem hard to disentangle. Experts are working to figure out exactly what happened in each case and why, on levels ranging from the societal to the forensic. But it appears that the two suspects shared at least one habit: engaging with extreme content online.
Robert Bowers, the suspect in the Pittsburgh shooting, posted a message on a niche social network known to be used by white supremacists shortly before opening fire at the Tree of Life synagogue. Cesar Sayoc, the Florida man charged with sending explosive material to political figures, left a trail of conspiracy theories and right-wing sensationalism on Facebook. While their use of technology may help reveal their motives, it also speaks to bigger problems that researchers are racing to better understand. Chief among them is the way that the Internet can make irrational viewpoints seem commonplace.
“A lot of our behavior is driven by what we think other people do and what other people find acceptable,” says Nour Kteily, an associate professor at Northwestern’s Kellogg School of Management who studies dehumanization and hostility. And there’s a good chance that even those who avoid the dark corners of the web are encountering extreme ideas about what is right and who is wrong. A Facebook spokesperson says the company took action on 2.5 million pieces of content classified as hate speech in the first quarter of 2018.
There have always been people who espouse vitriol. “But the emergence of these online platforms has reshaped the conversation,” Kteily says. “They in many ways amplify the danger of things like dehumanizing speech or hate speech.” Marginal ideas can now spread faster and further, creating an impression that they are less marginal and more mainstream.
Big technology companies are acknowledging the dangers researchers have already uncovered: encountering hateful speech can skew attitudes. In one 2015 study, people who were exposed to homophobic epithets tended to rate gay people as less human and to physically distance themselves from a gay man in subsequent tasks. And researchers have long warned that dehumanizing people is a tactic that goes hand in hand with oppressing them, because it helps create mental distance between groups.
“We are permitted to treat non-human animals in ways that are impermissible in the treatment of human beings,” David Livingstone Smith, a professor of philosophy at the University of New England, explained in a previous interview with TIME. Such language can help “disable inhibitions against acts of harm,” he said.
One question raised by the Pittsburgh shooting is what happens when extremists are shut out of mainstream social networks, as companies like Facebook and Twitter take a harder line on these issues. Facebook has been hiring content moderators and subject matter experts at a rapid clip, hoping to do a better job of proactively finding hate speech and identifying extremist organizations. Twitter continues to develop a more stringent policy on what constitutes dehumanizing speech that violates its terms. “Language that makes someone less than human can have repercussions off the service, including normalizing serious violence,” Twitter employees wrote in a post announcing proposed policy language.
Gab, a social media site on which Bowers wrote anti-Semitic posts, disavowed all acts of violence and terrorism in statements to TIME and other publications in the aftermath of the shooting. But the site has become a haven for white supremacists and other extremists, given its promise of letting people espouse ideas that might get them banned elsewhere, says Joan Donovan, an expert in media manipulation at research institute Data & Society. “What that does is create a user population on Gab of people who are highly tolerant of those views,” she says. That, in turn, might make things like rantings about Jewish conspiracies seem more widespread than they would on a platform where poisonous posts are surrounded — and perhaps diluted — by billions of rational ones.
Bowers’ final post before the shooting read, in part, “Screw your optics, I’m going in.” The term “optics,” Donovan says, likely refers to tactics discussed among white supremacists, specifically the idea that the movement will be more successful if its members are perceived as non-violent victims of “anti-white” thought police. Among the figures the movement portrays as its own oppressors, she says, are big technology companies. “[W]e are in a war to speak freely on the internet,” a Gab-associated account wrote on Medium, before that company suspended it in the wake of the shooting. The post accused Silicon Valley companies of “purg[ing] any ideology that does not conform to their own echo chamber bubble world.” Such sites, where the alt-right flocks, have been described as “alt tech.”
Donovan says that these niche platforms are places “where many harassment campaigns are organized, where lots of conspiracy talk is organized.” Racist and sexist memes that might get an account suspended on other platforms are easy to find. “The problem is when you’re highly tolerant of those kinds of things,” Donovan explains, “other more sane and more normal people don’t stay.”
Though social networks might seem well-established at this point, more than a decade after Facebook was founded, academics are lagging behind when it comes to understanding all the effects these evolving platforms might be having on users’ behavior and well-being. Experts interviewed for this article were not aware of research that investigates, on an individual level, the possible link between posting extreme or hateful content online and the likelihood of being aggressive offline. Posting can serve as “a public commitment device,” Kteily says. But that’s far from a causal link.
Newer research is attempting, at least in the aggregate, to better understand the relationship between activity on social networks and violence in the offline world. Carlo Schwarz and Karsten Müller, researchers associated with the University of Warwick and Princeton University, respectively, analyzed every anti-refugee attack that had occurred in Germany over a two-year period — more than 3,000 instances — and looked at variables ranging from the wealth of each community to the numbers of refugees living there. One pattern that cropped up across the country was that attacks tended to occur in towns with heavier usage of Facebook, a platform where users encounter anti-refugee sentiment.
The study’s methodology has come under some criticism, and Schwarz emphasizes that the findings need to be replicated before universal conclusions are drawn, especially because isolated Internet outages across Germany helped provide special circumstances for their study. (When access to the Internet went down in localities with high amounts of Facebook usage, they found that attacks on refugees dropped too.) But what their research suggests, Schwarz says, is that there is a sub-group of people “who seem to be pushed toward violent acts by the exposure to online hate speech.” The echo chamber effect of social networks may be part of the problem. When people are exposed to the same targeted criticisms over and over, he says, it may change their perception about “how acceptable it is to commit acts of violence against minority groups.”
Facebook, Twitter and Google are dedicating resources to the problem, yet there are many challenges: as algorithms are designed to pick up certain red-flag words, extremist groups adopt coded language to spread the same old ideas; content moderators need to understand myriad languages and cultures; and the sheer volume of posts on Facebook alone, which number in the billions each day, is overwhelming. The company says that it finds 38% of hate speech before it’s reported, a smaller proportion than for terror propaganda and nudity. The company expects that number to improve, a spokesperson says, also acknowledging the difficulty of tackling content that tends to be context-dependent.
And while major tech companies may feel that getting a handle on this problem is a business imperative — a Twitter spokesperson says that maintaining healthy conversation is a “top priority” — current law largely shields platforms from responsibility for the content users post. That means that while some social networks may get serious about tackling extremist speech, there is no legal mandate for all platforms to follow suit. That is one reason, in the wake of these latest plots, that some lawmakers are renewing calls for tighter regulation of social media.
In the meantime, academics will keep trying to provide research that helps companies make decisions based on data rather than good intentions. “Research is obviously slow,” says Schwarz, who is now investigating whether there is a connection between Twitter usage and offline violence in the U.S. “It’s still a new field.”
Robots just want to get things done, but it’s frustrating when their rigid bodies simply don’t allow them to do so. Solution: bodies that can be reconfigured on the fly! Sure, it’s probably bad news for humanity in the long run, but in the meantime it makes for fascinating research.
A team of graduate students from Cornell University and the University of Pennsylvania made this idea their focus and produced both the modular, self-reconfiguring robot itself and the logic that drives it.
Think about how you navigate the world: If you need to walk somewhere, you sort of initiate your “walk” function. But if you need to crawl through a smaller space, you need to switch functions and shapes. Similarly, if you need to pick something up off a table, you can just use your “grab” function, but if you need to reach around or over an obstacle you need to modify the shape of your arm and how it moves. Naturally you have a nearly limitless “library” of these functions that you switch between at will.
That’s really not the case for robots, which are much more rigidly designed both in hardware and software. This research, however, aims to create a similar — if considerably smaller — library of actions and configurations that a robot can use on the fly to achieve its goals.
In their paper published today in Science Robotics, the team documents the groundwork they undertook, and although it’s still extremely limited, it hints at how this type of versatility will be achieved in the future.
The robot itself, called SMORES-EP, might be better described as a collection of robots: small cubes (it’s a popular form factor) equipped with wheels and magnets that can connect to each other and cooperate when one or all of them won’t do the job. The brains of the operation lie in a central unit equipped with a camera and depth sensor it uses to survey the surroundings and decide what to do.
If it sounds a little familiar, that’s because the same team demonstrated a different aspect of this system earlier this year, namely the ability to identify spaces it can’t navigate and deploy items to remedy that. The current paper focuses on the underlying system the robot uses to perceive its surroundings and interact with them.
Let’s put this in more concrete terms. Say a robot like this one is given the goal of collecting the shoes from around your apartment and putting them back in your closet. It gets around your apartment fine but ultimately identifies a target shoe that’s underneath your bed. It knows that it’s too big to fit under there because it can perceive dimensions and understands its own shape and size. But it also knows that it has functions for accessing enclosed areas, and it can tell that by arranging its parts in such and such a way it should be able to reach the shoe and bring it back out.
The flexibility of this approach and the ability to make these decisions autonomously are where the paper identifies advances. This isn’t a narrow “shoe-under-bed-getter” function, it’s a general tool for accessing areas the robot itself can’t fit into, whether that means pushing a recessed button, lifting a cup sitting on its side, or reaching between condiments to grab one in the back.
A visualization of how the robot perceives its environment.
As with just about everything in robotics, this is harder than it sounds, and it doesn’t even sound easy. The “brain” needs to be able to recognize objects, accurately measure distances, and fundamentally understand physical relationships between objects. In the shoe-grabbing situation above, what’s stopping the robot from trying to lift the bed and leave it floating above the ground while it drives underneath? Artificial intelligences have no inherent understanding of even basic concepts, so these must be hard-coded, or algorithms must be created that reliably make the right choice.
Don’t worry, the robots aren’t quite at the “collect shoes” or “collect remaining humans” stage yet. The tests to which the team subjected their little robot were more like “get around these cardboard boxes and move any pink-labeled objects to the designated drop-off area.” Even this type of carefully delineated task is remarkably difficult, but the bot did just fine — though rather slowly, as lab-based bots tend to be.
The authors of the paper have since finished their grad work and moved on to new (though surely related) things. Tarik Tosun, one of the authors whom I talked with for this article, explained that he’s now working on advancing the theoretical side of things as opposed to, say, building cube-modules with better torque. To that end he helped author VSPARC, a simulator environment for modular robots. Although it is tangential to the topic immediately at hand, the importance of this aspect of robotics research can’t be overestimated.
The U.S. government is monitoring for possible foreign interference in Tuesday’s congressional elections and is prepared to sanction any company or individual involved in such activity, a senior intelligence official said on Wednesday.
The U.K.’s Civil Aviation Authority is cautioning police departments and other emergency services to suspend operations of a specific drone model after some of the devices lost power unexpectedly and fell while in flight.
The Civil Aviation Authority (CAA) safety warning applies to DJI Matrice 200 series drones, used by some emergency services in the U.K. The failures were first reported by West Midlands police department, though law enforcement in Norfolk, Devon and Cornwall also uses DJI drones. Devon and Cornwall have grounded two affected drones out of their fleet of 20, according to the BBC.
According to the CAA, “A small number of incidents have been recently reported where the aircraft has suffered a complete loss of power during flight, despite indications that there was sufficient battery time still remaining.” No injuries have been reported, despite “immediate loss of lift with the remote pilot unable to control its subsequent flight path.”
While no reports have surfaced in the U.S. so far, a study by Bard College noted that 61 U.S. public safety agencies (law enforcement, fire departments, EMS, etc.) use the specific Matrice model affected. Collectively, drone models by DJI dominate the space, though the Matrice is not the most popular model.
The manufacturer has responded to the reports, urging Matrice operators to install a firmware update that resolves the issue. “When prompted on the DJI Pilot App, we recommend all customers to connect to the internet on the app or DJI Assistant 2 and update the firmware for their aircraft and all batteries to ensure a safe flight with their drone,” the company wrote in a product warning.
DJI faced a similar issue last year when some of its DJI Spark consumer-grade drones suddenly lost power and fell from the sky.
Mavrck has raised another $5.8 million in funding, bringing its total raised to $13.8 million.
When the company raised its Series A back in 2015, it was focused on helping brands work with “micro-influencers” who were already using their products. Now it describes itself as an “all-in-one” influencer marketing platform, offering a number of tools to automate and measure the process.
Last month, Mavrck announced new features for Pinterest, where it’s now an official marketing partner. It also says it’s been doing more to improve measurement and detect fraud — on the fraud side, it promises to analyze a “statistically significant sample” of an Instagram account’s followers, and of the accounts that engage with its content, to determine if they’re bots.
Customers include P&G, Godiva and PepsiCo, and the company says recurring revenue has grown 400 percent year-over-year.
“Everything that we have done at Mavrck this year has been done with the intention to drive the influencer industry forward,” said co-founder and CEO Lyle Stevens in the funding announcement. “Every new capability that we’ve introduced, every partner that we’ve started working with, every influencer behavior that we’ve tracked was part of our mission to help marketers harness the power of content that people trust to drive tangible business value for their brands.”
The new funding comes from GrandBanks Capital and Kepha Partners. A spokesperson said this isn’t a Series B, but rather additional capital raised to support increased demand and channel partnerships.
Fitbit is slowly righting its financial ship, courtesy of a successful push into the smartwatch category. The wearable company reported a profit (when adjusted for items such as stock-based compensation) thanks to growing sales in the new category.
Total revenues rose slightly to $393.6 million in the third quarter compared with the same period last year. The company did report a loss this quarter under generally accepted accounting principles (GAAP). But it was rosier than in previous quarters and showed that Fitbit is moving in the right direction. Net losses narrowed considerably to $2.1 million from $113.4 million this time last year. A good deal of the company’s revenue is being driven by the shift to smartwatches, which now comprise around half of Fitbit’s total revenue.
It’s a gamble that’s finally starting to pay off for the company. Fitbit launched its first smartwatch in August of last year. The Ionic was the result of three high-profile acquisitions: Pebble, Coin and Vector. It was an ambitious product that found the company embracing the one bright spot in an otherwise stagnant wearables market.
The Ionic, which felt like an extremely expensive Hail Mary for the company, was ultimately bogged down by poor reviews (including one on this site), thanks to poor industrial design, among other issues. In an interview with TechCrunch earlier this year, CEO James Park admitted that the Ionic ultimately wasn’t a mainstream device. “It was a performance-oriented product,” Park said at the time. “That audience is much smaller than a mass appeal device.”
Its follow-up, the Versa, however, addressed many of the biggest complaints plaguing the Ionic, and has clearly proven a hit for Fitbit.
This is the first time the company has posted adjusted profitability since Q3 of 2016. Forty-nine percent of the revenue from the 3.5 million wearables it sold this quarter came courtesy of its smartwatches. Fitbit’s combined smartwatch sales currently put it in the No. 2 position in the U.S., behind only Apple. It seems the company’s gamble is beginning to pay off.
Xiaomi, the electric scooter manufacturer that a handful of the shared electric scooter services in the U.S. (like ones from Uber, Lyft, Spin and Bird) rely on, has sent a cease-and-desist letter to Lyft. In the letter, obtained by TechCrunch, Xiaomi says it did not consent to associate its brand with Lyft.
Xiaomi alleges Lyft has referenced Xiaomi’s brand in its advertisements and other documentation referring to its shared electric scooter business.
“We also do not condone Lyft’s unauthorized modification or retrofitting of our electric scooters for general public use,” Xiaomi wrote in its letter.
If Lyft does not cease to use, purchase and modify Xiaomi’s scooters, Xiaomi says it will pursue legal action against Lyft. Xiaomi also demands that Lyft stop deploying scooters “that have been modified without our consent in public scooter rentals.”
But Lyft says it has no knowledge of using Xiaomi’s trademarks in its advertising.
“We have no intention of using any other company’s trademarks in advertising our scooters, and are not aware of any instance of having done so with our existing suppliers,” a Lyft spokesperson said in a statement to TechCrunch. “We will address these concerns with them directly. Safety modifications, including slowing scooter speeds, have been made to satisfy local regulatory guidelines.”
“Lyft’s modification to any scooters originally manufactured by Xiaomi without our knowledge, participation, or approval undoubtedly exposes Xiaomi to serious legal risks and liabilities for consumer safety and product liability,” the letter states.