In a study bound to stir up moral and ethical controversy, scientists at the University of California, Berkeley (UCB) have announced a new technique that allows them to tap into the human brain's visual signal and reconstruct it for display on computer monitors.
According to the team, this could be used to communicate with comatose patients, enabling doctors to experience whatever the person in front of them is experiencing. In the future, it may even be possible to post videos of one's dreams on YouTube.
But this cutting-edge blend of brain imaging and computer simulation could undoubtedly be turned against people as well, whether for interrogation or for peering into someone's mind without consent. This is why the work is likely to cause such a stir.
The UCB group achieved this new capability by combining computational models with functional Magnetic Resonance Imaging (fMRI). Together, these two tools can decode and reconstruct a person's visual signal, albeit with some errors and a lot of fuzziness.
Interestingly, the method also works for moving images: scientists can watch a live feed reconstructed from the brain of a test subject viewing a video. The images on the researchers' screens change as the participant views different scenes.
At this point, the technology is limited to reconstructing videos the subject has already seen. However, the team plans to improve the approach until it is capable of tapping into our dreams and memories as well.
“This is a major leap toward reconstructing internal imagery. We are opening a window into the movies in our minds,” explains UCB neuroscience professor Jack Gallant, a coauthor of the new study detailing the findings, published in the September 22 online issue of the journal Current Biology.
“Our natural visual experience is like watching a movie. In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences,” UCB post-doctoral researcher and lead study author Shinji Nishimoto adds.
“We addressed this problem by developing a two-stage model that separately describes the underlying neural population and blood flow signals. We need to know how the brain works in naturalistic conditions. For that, we need to first understand how the brain works while we are watching movies,” he concludes.
Video description: This is how the new UCB technology perceives what people are seeing. Video credit: UCB Gallant Lab