Top: presented clip; bottom: clip reconstructed from brain activity. Clips and reconstructions courtesy of Gallant lab
Mind reading has historically been the stuff of science fiction, but in light of recent work by Berkeley neuroscientists, it may soon become science reality. In a paper published in Current Biology, Jack Gallant’s lab in the Helen Wills Neuroscience Institute reported a computational model that can predict human brain activity, recorded with functional magnetic resonance imaging (fMRI), as subjects view natural movie clips taken from YouTube. The aim of developing this encoding model was to understand how early visual areas of the human brain process visual input, specifically motion information.

However, in validating the model, the paper’s lead author, Shinji Nishimoto, spurred media interest by using a Bayesian decoder to reverse the prediction and “read” subjects’ minds. The decoder reconstructs the viewed movies from each subject’s fMRI blood oxygen level-dependent (BOLD) signals. While the reconstructed movies do not reproduce the presented clips exactly, they do capture their overall shapes and motion. Nishimoto notes that what is particularly significant about the work is that the lab was able to model fMRI BOLD signals evoked by video stimuli at all, since the BOLD response had traditionally been considered too slow to track them. The immediate next step, according to Nishimoto, is to “extend the encoding model framework into higher-order processing brain areas” to allow researchers to investigate how the human brain encodes more complex visual input, like faces.

Upon further development, the decoding aspect of this model may one day be used to peer into dreams, in the hope of unraveling the enduring mystery of why we dream. It could also help handicapped individuals communicate images and thoughts through a brain-machine interface system. By using fMRI to explore the way human brains encode visual stimuli, the Gallant lab has brought science fiction within reach of reality.
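To make the two-stage approach concrete, here is a minimal, illustrative sketch. It is not the Gallant lab’s actual code: all array names are hypothetical placeholders standing in for motion-energy features of the clips and voxel-wise BOLD responses. The sketch fits a regularized linear encoding model, then decodes by ranking a library of candidate clips by how well their predicted responses match the observed brain activity (under a Gaussian noise assumption, ranking by squared error is equivalent to ranking by likelihood).

```python
# Illustrative sketch only, not the published pipeline. All arrays below are
# random placeholders standing in for real stimulus features and fMRI data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# --- Stage 1: encoding model ------------------------------------------------
# X_train: motion-energy features of the training clips (timepoints x features)
# Y_train: measured BOLD responses in early visual voxels (timepoints x voxels)
X_train = rng.standard_normal((1000, 50))
Y_train = rng.standard_normal((1000, 200))

encoder = Ridge(alpha=1.0)     # regularized linear regression, one fit per voxel
encoder.fit(X_train, Y_train)  # learns the feature -> BOLD mapping

# --- Stage 2: decoding by inverting the encoding model ----------------------
# candidate_features: features of a large library of natural clips (the prior)
# y_observed: BOLD response recorded while the subject watched an unknown clip
candidate_features = rng.standard_normal((5000, 50))
y_observed = rng.standard_normal(200)

predicted = encoder.predict(candidate_features)       # predicted BOLD per clip
errors = ((predicted - y_observed) ** 2).sum(axis=1)  # mismatch with observation
best_clips = np.argsort(errors)[:10]                  # most likely library clips

# Averaging the frames of the top-ranked clips would yield a blurry but
# recognizable reconstruction, like the one shown in the figure above.
print("Indices of the best-matching library clips:", best_clips)
```

The real model also has to account for the sluggish hemodynamics of the BOLD response, which is precisely what made modeling fast-changing video stimuli notable in the first place; this sketch omits that step.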
This article is part of the Spring 2012 issue.