Research has traditionally been broken down along simple lines. There’s basic, fundamental research into how the world works, and there’s applied research that attempts to take these insights and make something useful out of them. The two have very different end goals and require very different approaches to the research process.
But there’s a large gray area in between, where the approach is more applied but the end goal may be little more than “make something cool”: things like tiny flying robots or 3D computer displays that rely on beads levitated by lasers. How do researchers find direction for these open-ended engineering challenges?
It’s one of the biggest mysteries of recent human evolution. Roughly 70,000 years ago, Homo sapiens went through a genetic bottleneck, a period when our genetic diversity shrank dramatically. But why? In the late 1990s, some scientists argued that the culprit was a massive volcanic eruption from what is now Lake Toba, in Sumatra, about 74,000 years ago, whose deadly effects reduced our species to a few thousand hardy individuals. Now, new evidence suggests those scientists were right about the volcano, but wrong about pretty much everything else.
The so-called Toba Catastrophe Theory was first proposed by University of Illinois anthropologist Stanley Ambrose and popularized by University of Utah anthropologist Henry Harpending, who was trying to understand what caused the genetic bottleneck. At the time, mounting evidence suggested that the volcano had had a global effect, because debris from it can be found throughout the world. Many scientists thought it was likely that airborne particles from Toba caused a “volcanic winter” that lowered Earth’s temperatures. Harpending and his colleague Gregory Cochran suggested that it ushered in a millennium of frigid temperatures, driving humanity to near-extinction and pushing it out of Africa in search of better habitats.
Once the globe warmed up again, the theory goes, humanity started to recover its ranks. But the population crash meant that we had lost a lot of genetic diversity. This hypothesis sounded reasonable at first, but then scientists began to uncover intriguing new evidence that humans hadn’t died out at all.
By increasing the energy stored in our atmosphere, climate change is expected to generate more severe storms and heat waves. Severe storms and heat waves, however, also happen naturally. As a result, it’s tough to figure out whether any given event is a product of climate change.
A corollary: detecting a signal of climate change in weather events is a serious challenge. Are three nor’easters in quick succession, as the East Coast is now experiencing, a sign of a changing climate? Or is it simply a matter of natural variability?
A team of researchers has now looked at heat waves in the US, trying to determine when a warming-driven signal will stand out above the natural variability. And the answer is that it depends. In the West, the answer is “soon,” with climate-driven heat waves becoming the majority in the 2020s. But for the Great Plains, the researchers show that a specific weather pattern will push back the appearance of a warming signal until the 2070s.
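The underlying idea can be reduced to a signal-to-noise calculation: a warming signal “emerges” once the accumulated trend stands clear of the spread of natural variability. Below is a minimal, illustrative sketch of that logic; the function name, threshold, and all numbers are hypothetical and are not taken from the study.

```python
# Toy "time of emergence" calculation: how long until a steady warming
# trend exceeds some multiple of the natural year-to-year variability?
# All values are illustrative, not from the paper discussed above.

def time_of_emergence(trend_per_year, natural_sigma, threshold_sigmas=2.0):
    """Years until trend_per_year * t first exceeds threshold_sigmas * natural_sigma."""
    return threshold_sigmas * natural_sigma / trend_per_year

# A region with a strong trend and modest variability emerges quickly...
west = time_of_emergence(trend_per_year=0.05, natural_sigma=0.5)    # 20 years
# ...while weak warming buried in large natural swings emerges much later.
plains = time_of_emergence(trend_per_year=0.02, natural_sigma=1.2)  # 120 years

print(west, plains)
```

This is why the answer “depends”: the Great Plains result reflects exactly the second case, where a regional weather pattern inflates the natural variability and delays emergence.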
Former Vice President Dick Cheney once said that “Conservation may be a sign of personal virtue, but it is not a sufficient basis for a sound, comprehensive energy policy.” But in the US, increased energy efficiency has helped drive a drop in total electricity use. That, combined with the rise of renewable power, caused the use of both coal and natural gas to decline last year.
The changes, according to the Energy Information Administration, are relatively small. Total electric generation last year was down 1.5 percent compared to the year before, a drop of 105,000 gigawatt-hours. But both coal and natural gas saw declines that were even larger. Coal use was down by 2.5 percent, a smaller decline than it has seen in many recent years. But the numbers for its future aren’t promising; no new coal plants were opened, and 6.3 gigawatts of coal capacity were retired in 2017.
Continuing recent trends, 9.3 GW of natural gas capacity were brought online, although that was partly offset by the retirement of 4.0 GW of older gas plants. Despite the additional capacity, however, natural gas use was also down, dropping by nearly 8 percent.
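The net capacity changes implied by the figures above are simple arithmetic, sketched here purely as a sanity check on the numbers reported:

```python
# Net 2017 capacity changes implied by the article's figures.
gas_added_gw = 9.3
gas_retired_gw = 4.0
coal_retired_gw = 6.3  # no new coal plants opened

net_gas_gw = gas_added_gw - gas_retired_gw    # +5.3 GW of gas
net_coal_gw = -coal_retired_gw                # -6.3 GW of coal

print(f"gas: {net_gas_gw:+.1f} GW, coal: {net_coal_gw:+.1f} GW")
```

So even as gas capacity grew on net, gas generation fell, which is the point of the paragraph: capacity and actual use can move in opposite directions.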
In 2012, a 76-year-old Connecticut doctor had surgery to repair a life-threatening bulge in his aortic arch—the hulking bend that hooks the massive artery around the heart, routing oxygenated blood both upward and downward. Surgeons successfully used a synthetic graft to shore up the vital conduit. But soon after, a tenacious film of drug-resistant Pseudomonas aeruginosa bacteria formed on the graft.
The doctor spent the next four years battling the infection, slipping in and out of the hospital. His surgeons and doctors at Yale deemed him too high risk for another operation and put him on mega-doses of antibiotics, prescribed indefinitely. The drugs couldn’t clear the infection; they merely knocked it back enough to keep it from killing him. But the chronic inflammation that ensued took its own toll. His team of doctors started to worry his immune system was chipping away at his aorta. With a bleak outlook, the man agreed in 2016 to an experimental treatment: a virus that researchers had fished out of a nearby pond.
Marshall Space Flight Center has a long and storied history when it comes to rocket design and production. It was there that Wernher von Braun and his German compatriots helped NASA design the Saturn line of rockets that took humans into deep space and landed them on the Moon. There, too, key components of the space shuttle’s rockets were designed.
Now, however, US rockets and engines are much more commonly developed outside of northern Alabama, where the NASA center is located in Huntsville. SpaceX has designed and built its Merlin rocket engines in California, and it is doing the same thing with its more powerful Raptor engines. Blue Origin has designed four engines in the state of Washington. Both companies have tested their rocket engines in Texas.
Smaller firms, too, such as Virgin Orbit, Vector, Rocket Lab, Relativity Space, Firefly, and a host of others, have developed innovative new rocket engines and boosters outside the walls of the Marshall Space Flight Center. Certainly, these companies have at times drawn on the NASA center for its expertise, but these efforts have largely been privately financed and independently led.
Studies of dog, large cat, turkey, and other animal bones found in the Maya city of Ceibal show that, as early as 400 BCE, the Maya elite were importing dogs from distant corners of Guatemala and raising large cats like jaguars in captivity, probably all for use in elaborate rituals at the pyramids in the center of the city.
“Animal trade helped sustain many large civilizations, such as the Romans in Europe, the Inca Empire in South America, the Mesopotamians in the Middle East, and the ancient Chinese dynasties,” said archaeologist Ashley Sharpe of the Smithsonian Tropical Research Institute, who led the study. But at Ceibal, the imported animals seem to have served purely ceremonial or political purposes, which may have played an important role in the growth of the powerful Maya state.
The work is based on discoveries at a pyramid near the ceremonial center of Ceibal, an important Maya city in what is now Guatemala (the city is also known as Seibal and El Ceibal). Archaeologists found the jawbone of a large cat—probably a jaguar—mixed in with ancient construction fill. A jawbone doesn’t sound like much, but it’s enough to let archaeologists reconstruct what the animal ate and where it came from. The ratio of stable carbon isotopes stored in the bone, for example, can tell researchers whether the animal or its prey ate a lot of grain or foraged on more woody plants in the forests around Ceibal, while nitrogen isotope ratios reveal the amount of protein in the animal’s diet.
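The isotope comparison rests on the standard delta notation: a sample’s carbon isotope ratio expressed as a per-mil deviation from a reference standard (VPDB for carbon). The sketch below shows that calculation; the VPDB ratio is an approximate commonly cited value, and the sample numbers are hypothetical, not measurements from the Ceibal bones.

```python
# Delta notation for stable carbon isotopes:
#   delta-13C = (R_sample / R_standard - 1) * 1000, in per mil,
# where R is the 13C/12C ratio and the standard is VPDB.

VPDB_13C_12C = 0.011180  # approximate 13C/12C ratio of the VPDB standard

def delta13c(sample_ratio, standard_ratio=VPDB_13C_12C):
    """Return delta-13C in per mil relative to the standard."""
    return (sample_ratio / standard_ratio - 1.0) * 1000.0

# Hypothetical illustration: an animal fed maize (a C4 grain) ends up with
# bone values around -8 per mil, while one browsing C3 forest plants sits
# nearer -21 per mil, so a grain-fed diet stands out clearly.
grain_fed = delta13c(VPDB_13C_12C * (1 - 0.008))   # about -8 per mil
forest_fed = delta13c(VPDB_13C_12C * (1 - 0.021))  # about -21 per mil
print(round(grain_fed, 1), round(forest_fed, 1))
```

It’s this contrast between grain-heavy (maize) and forest-foraged diets that lets researchers tell a captive-raised or imported animal from a wild local one.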
A self-driving vehicle made by Uber has struck and killed a pedestrian. It’s the first such incident and will certainly be scrutinized like no other autonomous vehicle accident in the past. But on the face of it, it’s hard to understand how, short of a total system failure, this could happen when the entire car has essentially been designed around preventing exactly this situation from occurring.
Something unexpectedly entering the vehicle’s path is pretty much the first emergency event that autonomous car engineers look at. The situation could be many things — a stopped car, a deer, a pedestrian — and the systems are one and all designed to detect them as early as possible, identify them, and take appropriate action. That could be slowing, stopping, swerving, anything.
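The core of that “appropriate action” decision is basic kinematics: can the car shed its speed in the distance available? Here is a toy version of that check; the function, the deceleration figure, and the reaction delay are all illustrative assumptions, not Uber’s actual control logic.

```python
# Toy emergency-stop check: stopping distance is the distance covered
# during the system's reaction delay plus the braking distance
# v^2 / (2a). All parameters are made-up but physically plausible.

def can_stop(speed_mps, distance_m, decel_mps2=6.0, reaction_s=0.5):
    """True if the car can stop within distance_m meters of an obstacle."""
    stopping = speed_mps * reaction_s + speed_mps**2 / (2 * decel_mps2)
    return stopping <= distance_m

print(can_stop(15.0, 40.0))  # ~34 mph with 40 m of warning: True
print(can_stop(15.0, 20.0))  # same speed with only 20 m of warning: False
```

This is why early detection matters so much: at urban speeds, the difference between spotting a pedestrian at 40 meters and at 20 meters is the difference between a routine stop and a collision, and the sensors described below exist to buy that distance.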
Uber’s vehicles are equipped with several different imaging systems that handle both ordinary duty (monitoring nearby cars, signs, and lane markings) and extraordinary duty like that just described. No fewer than four different systems should have picked up the victim in this case.
Top-mounted lidar. The bucket-shaped item on top of these cars is a lidar, or light detection and ranging, system that produces a 3D image of the car’s surroundings multiple times per second. Using infrared laser pulses that bounce off objects and return to the sensor, lidar can detect static and moving objects in considerable detail, day or night.
This is an example of lidar-created imagery, though not specifically what the Uber vehicle would have seen.
Heavy snow and fog can obscure a lidar’s lasers, and its accuracy decreases with range, but for anything from a few feet to a few hundred feet, it’s an invaluable imaging tool and one that is found on practically every self-driving car.
The lidar unit, if operating correctly, should have been able to make out the person in question, provided they were not totally obscured, while they were still more than a hundred feet away, and to pass their presence on to the “brain” that collates the imagery.
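The ranging itself is a simple time-of-flight relation: the pulse travels out and back, so range is half the round trip at the speed of light. A minimal sketch, with a made-up return time:

```python
# Lidar ranging: an infrared pulse travels to the object and back,
# so range = c * round_trip_time / 2. The timing value is illustrative.

C = 299_792_458.0  # speed of light, m/s

def lidar_range_m(round_trip_s):
    """Range in meters for a given round-trip pulse time in seconds."""
    return C * round_trip_s / 2.0

# A return arriving ~200 nanoseconds after the pulse corresponds to an
# object roughly 30 m (about a hundred feet) away.
print(round(lidar_range_m(200e-9), 1))  # 30.0
```

The nanosecond-scale timing is also why lidar resolution degrades with range and conditions: the farther and fainter the return, the harder it is to time precisely.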
Front-mounted radar. Radar, like lidar, sends out a signal and waits for it to bounce back, but it uses radio waves instead of light. This makes it more resistant to interference, since radio can pass through snow and fog, but also lowers its resolution and changes its range profile.
Tesla’s Autopilot relies mostly on radar.
Depending on the radar unit Uber employed — likely multiple in both front and back to provide 360 degrees of coverage — the range could differ considerably. If it’s meant to complement the lidar, chances are it overlaps considerably, but is built more to identify other cars and larger obstacles.
The radar signature of a person is not nearly so recognizable, but it’s very likely they would have at least shown up, confirming what the lidar detected.
Short and long-range optical cameras. Lidar and radar are great for locating shapes, but they’re no good for reading signs, figuring out what color something is, and so on. That’s a job for visible-light cameras with sophisticated computer vision algorithms running in real time on their imagery.
The cameras on the Uber vehicle watch for telltale patterns that indicate braking vehicles (sudden red lights), traffic lights, crossing pedestrians, and so on. Especially on the front end of the car, multiple angles and types of camera would be used, so as to get a complete picture of the scene into which the car is driving.
Detecting people is one of the most commonly attempted computer vision problems, and the algorithms that do it have gotten quite good. “Segmenting” an image, as it’s often called, generally also involves identifying things like signs, trees, sidewalks and more.
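Stripped of the machine learning, the mechanic these detectors share is a sliding window: scan a window across the image and ask a classifier at each position whether it contains a person. The sketch below uses a toy brightness grid and a stub threshold in place of a trained model (real systems use neural networks or HOG-plus-SVM classifiers); it illustrates the scanning structure only.

```python
# Toy sliding-window detector: the "image" is a 2D grid of values and the
# "classifier" is a mean-brightness threshold standing in for a trained
# person-detection model.

def detect(image, win=2, thresh=0.8):
    """Return (row, col) of each window whose mean value exceeds thresh."""
    hits = []
    for y in range(len(image) - win + 1):
        for x in range(len(image[0]) - win + 1):
            window = [image[y + dy][x + dx]
                      for dy in range(win) for dx in range(win)]
            if sum(window) / len(window) > thresh:
                hits.append((y, x))
    return hits

toy = [
    [0.1, 0.1, 0.1, 0.1],
    [0.1, 0.9, 0.9, 0.1],
    [0.1, 0.9, 0.9, 0.1],
    [0.1, 0.1, 0.1, 0.1],
]
print(detect(toy))  # [(1, 1)] -- only the bright 2x2 patch fires
```

Production systems replace the threshold with a learned model and run at many window scales and positions per frame, which is why they need dedicated hardware to keep up in real time.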
That said, it can be hard at night. But that’s an obvious problem, the answer to which is the previous two systems, which work night and day. Even in pitch darkness, a person wearing all black would show up on lidar and radar, warning the car that it should perhaps slow and be ready to see that person in the headlights. That’s probably why a night-vision system isn’t commonly found in self-driving vehicles (I can’t be sure there isn’t one on the Uber car, but it seems unlikely).
Safety driver. It may sound cynical to refer to a person as a system, but the safety drivers in these cars are very much acting in the capacity of an all-purpose failsafe. People are very good at detecting things, even though we don’t have lasers coming out of our eyes. And our reaction times aren’t the best, but if it’s clear that the car isn’t going to respond, or has responded wrongly, a trained safety driver will react correctly.
Worth mentioning is that there is also a central computing unit that takes the input from these sources and creates its own more complete representation of the world around the car. A person may disappear behind a car in front of the system’s sensors, for instance, and no longer be visible for a second or two, but that doesn’t mean they ceased existing. This goes beyond simple object recognition and begins to bring in broader concepts of intelligence such as object permanence, predicting actions, and the like.
It’s also arguably the most advanced and closely guarded part of any self-driving car system and so is kept well under wraps.
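The object-permanence behavior described above can be sketched in miniature: when a tracked object drops out of the sensor feed, the tracker coasts it forward on its last known velocity for a few frames rather than forgetting it. This class and its parameters are a hypothetical illustration, not any vendor’s tracking code.

```python
# Minimal 1D track with "coasting": a briefly occluded object keeps a
# predicted position instead of vanishing from the world model.

class Track:
    def __init__(self, pos, vel, max_coast=5):
        self.pos, self.vel = pos, vel
        self.missed = 0          # consecutive frames without a detection
        self.max_coast = max_coast

    def update(self, detection):
        """Feed one frame: a measured position, or None if sensors lost it.

        Returns True while the track is still considered alive."""
        if detection is not None:
            self.vel = detection - self.pos  # crude velocity estimate
            self.pos = detection
            self.missed = 0
        else:
            self.pos += self.vel             # coast on last known velocity
            self.missed += 1
        return self.missed <= self.max_coast

t = Track(pos=0.0, vel=1.0, max_coast=2)
t.update(1.0)           # seen at 1.0
t.update(None)          # occluded: predicted at 2.0
alive = t.update(None)  # still occluded: predicted at 3.0, still tracked
print(t.pos, alive)     # 3.0 True
```

Real trackers do this in three dimensions with uncertainty estimates (typically Kalman-style filters), but the principle is the same: a pedestrian who passes behind a parked car has not ceased to exist.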
It isn’t clear what the circumstances were under which this tragedy played out, but the car was certainly equipped with technology that was intended to, and should have, detected the person and caused the car to react appropriately. Furthermore, if one system didn’t work, another should have sufficed; multiple fallbacks are standard practice in high-stakes matters like driving on public roads.
We’ll know more as Uber, local law enforcement, federal authorities, and others investigate the accident.