I especially like the commentary on the work of Marcel van Gerven, who is pushing the cutting edge of this field: 40 hours of fMRI data from a single individual is an insane amount.
One researcher developing advanced computer models for decoding brain activity is Marcel van Gerven, Ph.D., associate professor and principal investigator at Radboud University’s Donders Institute for Brain, Cognition, and Behavior in Nijmegen, The Netherlands (Figure 2, right). His group is especially interested in exploring neural networks as computational models of human brain function and using the power of these models to improve decoding algorithms.
“On the computational side, these models are difficult to develop, but we have early, unpublished results showing that we can basically condition these models on brain function,” van Gerven says. “What happens is, we measure brain activity, the models observe this brain activity, and the models are able to make reconstructions based on that brain activity.”
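The core idea of making "reconstructions based on brain activity" can be illustrated with a toy example. The sketch below is purely hypothetical and much simpler than the neural-network models van Gerven's group uses: it simulates voxel responses as a noisy linear encoding of a stimulus, then fits a ridge-regression decoder that maps voxel activity back to stimulus features. All data and parameters here are made up for illustration.

```python
import numpy as np

# Toy sketch of stimulus reconstruction from brain activity.
# (Illustrative only; real decoding pipelines and van Gerven's
# neural-network models are far richer than this.)
rng = np.random.default_rng(0)

n_trials, n_features, n_voxels = 200, 16, 50

# Hypothetical data: stimuli are random 16-dim feature vectors; voxel
# responses are a noisy linear encoding of the stimulus.
stimuli = rng.normal(size=(n_trials, n_features))
encoding = rng.normal(size=(n_features, n_voxels))
voxels = stimuli @ encoding + 0.1 * rng.normal(size=(n_trials, n_voxels))

# Ridge-regression decoder: map voxel activity back to stimulus features.
lam = 1.0
W = np.linalg.solve(voxels.T @ voxels + lam * np.eye(n_voxels),
                    voxels.T @ stimuli)

# Reconstruct a held-out stimulus from its (simulated) brain response.
test_stim = rng.normal(size=n_features)
test_voxels = test_stim @ encoding + 0.1 * rng.normal(size=n_voxels)
reconstruction = test_voxels @ W
corr = np.corrcoef(test_stim, reconstruction)[0, 1]
print(f"correlation between stimulus and reconstruction: {corr:.2f}")
```

Even this linear toy recovers the stimulus well when the encoding is simple; the hard part, as the interview makes clear, is doing this with real, noisy, slow fMRI measurements and richer models.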
One of the biggest challenges with the models is that they are built on fMRI data, which have inherent temporal limitations. “If something happens in my brain now, it could cause a change in blood oxygenation six seconds later, so we have these very slow measurements in fMRI while we are trying to reconstruct what people are perceiving or imagining,” van Gerven explains. “For static stimuli, it’s doable. But the next steps—and we have been working on this—are to move toward more naturalistic stimuli, such as audiovisual stimuli, that are changing on a moment-to-moment basis.”
To continue down that path, van Gerven is amassing as much fMRI information as he can. “One of the things my group is focusing on is collecting huge amounts of data in individual subjects. In fact, we now have one participant who will be in the scanner for a [combined] total of 40 hours, with the objective of getting enough data to be able to estimate those models,” he says. “And the more data we have, the better those models become.”
Notice how he and his team are using deep learning not only to decode the brain but also to model it, fitting the models directly to brain data. This approach contrasts with high-fidelity brain emulation, where you build digital anatomical models of brains down to the micron level.