Today we present the second and final installment of my wide-ranging interview with holographer, former Oculus CTO, and current entrepreneur Mary Lou Jepsen. Part One ran yesterday—so if you missed it, click right here. Otherwise, you can press play on the embedded player, or pull up the transcript—they’re below.
Today we open by talking about some astounding work by University of California, Berkeley neuroscientist Jack Gallant—he trained an AI system to infer what test subjects were viewing on a video screen just by watching their brains light up in an fMRI scanner. The AI’s inferred videos are grainy, but they’re often creepily accurate. Jepsen first saw Gallant’s work several years ago, then presented it as part of a main-stage TED talk in 2013.
fMRI technology was improving at a respectable pace back then—but at nothing resembling Moore’s law—so the days of nodding off in an MRI machine and receiving a 4K-quality video of your dreams upon awakening seemed extremely distant. But if Jepsen actually finagles all she intends, they could be nigh.