fMRI: One in a million (neurons)
Even though Kamil Uludag, PhD, initially started a PhD in elementary particle physics, his journey led him to fMRI, where he has immersed himself in topics ranging from big data down to the role of a single neuron. In this interview, he clarifies whether a higher magnetic field will bring more sensitive and specific functional imaging, explains what physical question laminar fMRI answers, and discusses why current trends in MRI can go in opposite directions.
What research questions brought you into the MRI field?
I studied physics and initially started a PhD in elementary particle physics. I did not like it because it involved heavy numerical simulations of parameter details of elementary particles, and I felt that all of the fundamental questions had already been answered in the early 1900s. Additionally, I was studying philosophy, and all of the major newspapers in Germany featured biological topics in relation to social and philosophical issues, such as molecular biology, genetics and brain imaging in the context of free will or biological determinants of social behavior.
I liked the combination of biology, physics and questions from the humanities. However, I did not directly start with fMRI but with a related approach called near-infrared spectroscopy (NIRS). It is very similar to fMRI but uses non-invasive light sources and detectors and measures the reflectance of light related to changes in absorption, which depends on blood oxygenation, as does the fMRI signal.
You started with optical imaging and then moved to fMRI. Why did you stay in the fMRI field versus optical imaging?
In general, fMRI is superior to non-invasive optical imaging in certain respects because of its higher spatial resolution and the fact that you can measure the entire brain. Nevertheless, non-invasive optical imaging also has advantages: e.g., you can use it on babies and in real-life situations (such as walking), it is portable and it is less expensive. For the research questions I am interested in, fMRI seemed to be the better choice.
Based on your general fMRI expertise, can you illustrate a standard fMRI experiment, especially the steps going from an image to a neurophysiological answer?
At the beginning, you need to have a psychological, cognitive, neuroscience or clinical question. This is beyond the fMRI field per se. These are very complex and specialized fields with rich histories, where a lot of expertise is needed to develop the right questions. Aside from that starting point, a lot of the steps are similar: Having acquired high-quality functional and anatomical images, the standard way is to do motion correction, registration to anatomy, comparison across subjects or pooling of data for a group analysis, and a statistical analysis that takes into account the temporal and spatial properties of the fMRI signal.
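The final statistical step usually amounts to a voxelwise general linear model (GLM). As a minimal sketch, assuming a simple block design and a single simulated voxel (all numbers here are invented for illustration), an ordinary-least-squares fit in NumPy looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical block design: 100 time points, 10 volumes off / 10 volumes on.
n_t = 100
task = np.tile([0.0] * 10 + [1.0] * 10, 5)
X = np.column_stack([task, np.ones(n_t)])   # design matrix: regressor + intercept

# Simulated voxel time series: baseline 100, 2-unit response on task, unit noise.
y = 100.0 + 2.0 * task + rng.normal(0.0, 1.0, n_t)

# Ordinary least squares: beta_hat minimizes ||y - X @ beta||^2
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# t-statistic for the task regressor
resid = y - X @ beta
dof = n_t - X.shape[1]
sigma2 = resid @ resid / dof
var_beta = sigma2 * np.linalg.inv(X.T @ X)[0, 0]
t_stat = beta[0] / np.sqrt(var_beta)
```

In real analyses this fit is run for every voxel, the task regressor is convolved with a hemodynamic response function, and the temporally autocorrelated noise is prewhitened; all of that is omitted here for brevity.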
However, there are a lot of choices to be made along the way. Often, one person achieves good results with certain data while another person using the same data obtains worse or different results. This creates a lot of discussion in terms of how reproducible fMRI is. The results heavily depend on these choices, and not all of the choices can be easily reported or made in the same way. After obtaining the results, the questions remain whether they are generalizable and whether they would hold up under scrutiny.
Why do we need to do group analyses in fMRI?
The simplest reason is that there is not enough statistical power in a single subject to detect something because the activation is small. The best group size to detect an effect is still controversially discussed. The second reason is that the same behavioral phenomenon can have individually unique causes: e.g., the behavioral symptoms of schizophrenia in two different subjects may be similar, but the brain correlates and social causes can be totally different. To find common patterns, one needs a big sample size, given that the brain patterns of the symptoms may not overlap very much. The third reason is that there is no comprehensive framework to understand single-subject fMRI data. The conceptual problem is that the brain is complex and that the fMRI signal is an indirect reflection of neuronal activity. However, a lot of researchers are currently trying to make single-subject activation patterns more interpretable.
For a group analysis, is it always important to warp the functional data to the same anatomical space such as an atlas?
It depends on which level you do the group analysis. The standard way would be to morph each individual's anatomy to a template. However, it is also possible to perform the group analysis not in brain space itself but on derived parameters of the activation patterns. For example, in laminar fMRI analyses, we never register different subjects to the same brain template because this would lead to a loss of important information. Instead, we create spatial laminar profiles within each subject and perform subsequent analyses on those profiles, which are parametrized in the same one-dimensional coordinate system ranging from CSF to white matter.
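A laminar profile of this kind can be sketched as a simple binning of voxel values by estimated cortical depth. The depth estimates and activation values below are simulated placeholders; real pipelines derive the depth coordinate from segmented anatomy:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated per-voxel inputs for one subject: an activation value and an
# estimated cortical depth (0 = CSF/pial boundary, 1 = white-matter boundary).
depth = rng.uniform(0.0, 1.0, 5000)
activation = 1.0 + np.sin(np.pi * depth) + rng.normal(0.0, 0.3, 5000)

def laminar_profile(act, depth, n_bins=10):
    """Mean activation in equidistant depth bins from CSF (0) to WM (1)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(depth, edges) - 1, 0, n_bins - 1)
    return np.array([act[idx == b].mean() for b in range(n_bins)])

# Profiles from different subjects share this one-dimensional depth axis
# and can be compared or averaged without warping brains to a template.
profile = laminar_profile(activation, depth)
```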
You have now mentioned laminar fMRI, which generally requires high spatial and temporal resolution. Can you explain what physical question we try to answer with laminar fMRI?
The general justification for laminar fMRI is that there are more processes going on in a patch of cortex than what we typically resolve. In one cubic millimeter, you have approximately 1 million neurons. For standard fMRI with macroscopic resolution, e.g., 3 millimeters in each of the three dimensions, a voxel contains 27 million neurons. So, the brain's design would be quite limited and ineffective if all of those 27 million neurons were to do the same thing. Going to high resolution, we are trying to make more distinctions at the level of mesoscopic brain circuits, albeit still far away from microscopic investigations.
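The voxel-volume arithmetic quoted above can be made explicit; the density of 1 million neurons per cubic millimeter is the figure from the interview:

```python
# Back-of-the-envelope neuron counts per fMRI voxel, assuming a uniform
# density of 1 million neurons per cubic millimeter of cortex.
NEURONS_PER_MM3 = 1_000_000

def neurons_in_voxel(edge_mm):
    """Neuron count for an isotropic voxel with the given edge length in mm."""
    return edge_mm ** 3 * NEURONS_PER_MM3

standard = neurons_in_voxel(3.0)   # 3 mm isotropic: 27 million neurons
laminar = neurons_in_voxel(0.8)    # 0.8 mm isotropic: ~0.5 million neurons
```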
The drive towards laminar-resolved functional questions primarily comes from invasive electrophysiology, namely the so-called canonical microcircuit model. It means that the different layers of a cortical patch are differently involved in cognitive processes. The basic idea, which I think is too simple, is that input to a cortical area goes to layer 4, while output and feedback usually target layers 2/3 and 5/6. Electrophysiologists have shown in many experiments that the laminar profile of stimulus-induced neuronal activity changes with cognitive load. Based on such unique data, it is possible to formulate more elaborate cognitive theories of human brain function.
It seems that the primary reason to go 1 millimeter and below for fMRI are the cortical layers. Are there any other physical structures you would like to investigate?
Columns are another example. However, there are different reasons why columns pose unique challenges. Firstly, columns have not been shown very often outside of sensory areas. Thus, it is not clear whether the functional relevance of columnar structures generalizes to the entire cortex, whereas the laminar canonical microcircuit model is generally accepted. Additionally, for a laminar analysis you can average within each layer across a cortical patch. This is not possible with columns, because you have to be able to distinguish the individual columns.
High spatial resolution is also needed to study subcortical structures, which are important for mood, arousal, many disorders and neurodegeneration. At the moment, it is difficult to resolve them with standard fMRI. In the cortex, you also have structures with distinct layering, such as the hippocampus, where scientists in Magdeburg have just recently shown layer-specific activation with high-resolution fMRI. To sum up, with high-resolution fMRI, we can answer questions that could not be answered before, or only in a limited way. You need both high spatial resolution and high SNR, which is a given at 7T and beyond, and more difficult to achieve with just averaging at 1.5T and 3T.
Is the higher main magnetic field strength the answer to get more sensitive and specific functional imaging?
On a very abstract level, the statement is not true. If there were infinite time for measurements, one could always continue to average and resolve any structure at any field strength. However, within a 2-hour time frame, almost all of the laminar studies have utilized 7T. So, in practice, high field is the solution for high-resolution fMRI. Please note that this statement is not generally applicable to every MRI modality, e.g., diffusion. There are some convincing arguments why diffusion benefits from a higher field strength, but the answer is much more complex than for fMRI. For fMRI, it is clear that the SNR, especially if you go to higher resolution, increases more than linearly with the field strength.
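The trade-off between field strength and averaging can be sketched with two scaling relations: SNR grows supralinearly with B0 (the exponent of 1.65 used below is an assumed illustrative value reported for ultra-high field, not a universal constant), while averaging only buys SNR with the square root of scan time:

```python
# Rough scan-time argument for high-field fMRI. Assumption: SNR scales
# with B0 to the power 1.65 (an illustrative ultra-high-field value).
def snr_ratio(b_high, b_low, exponent=1.65):
    return (b_high / b_low) ** exponent

# Averaging only improves SNR with the square root of scan time, so
# matching 7T SNR at 3T would take (SNR gain)^2 times as long.
gain = snr_ratio(7.0, 3.0)      # roughly a 4x SNR gain at 7T vs 3T
extra_time = gain ** 2          # roughly 16x the scan time at 3T
```

This is why "just average longer at 3T" fails in practice: a 10-minute high-resolution run would turn into hours, well beyond what subjects tolerate.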
For high-field fMRI, are there any additional tools, which are not yet developed that you would need to make the next step?
When we think about patients, movement comes to my mind. I am always amazed how little human subjects move. Even if somebody moves only, say, 2 millimeters within a 10-minute run, which is not much, it can make the run unusable for further analysis. We usually have well-trained subjects, often colleagues who know how important it is to lie still. The problems are in principle solvable: The fixation of the subject can be made more robust so that there is less movement. Feedback can be given to the subject when he/she moves. Online prospective motion correction can be utilized to improve data quality. And magnetic field fluctuations can be measured for real-time correction or improvements in image reconstruction. Currently, even though solutions are known and being implemented, the set-up time and expertise required are too high for everyday use by non-experts.
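A common way to quantify such movement offline is framewise displacement: summing the absolute frame-to-frame changes of the six rigid-body motion parameters, with rotations converted to millimeters on an assumed head-sized sphere. A minimal sketch with simulated motion parameters:

```python
import numpy as np

def framewise_displacement(motion, radius_mm=50.0):
    """Framewise displacement: summed absolute frame-to-frame change of the
    six rigid-body parameters; rotations (radians) are converted to
    millimeters as arc length on a sphere of the given radius.

    motion: (n_volumes, 6) array, columns = 3 translations [mm], 3 rotations [rad].
    """
    d = np.abs(np.diff(motion, axis=0))
    d[:, 3:] *= radius_mm          # rotation angle -> arc length in mm
    return d.sum(axis=1)           # one value per volume-to-volume transition

# Simulated run: 200 volumes, perfectly still except a single 2 mm jump.
motion = np.zeros((200, 6))
motion[120:, 0] = 2.0
fd = framewise_displacement(motion)   # fd[119] == 2.0 flags the jump
```

Volumes whose displacement exceeds a chosen threshold can then be flagged, censored, or used to reject a run, which is exactly the kind of quality control a 2 mm jump would trigger.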
You have brought up a few different contrasts. How does the BOLD contrast interact with other contrasts?
Perfusion is the most straightforward contrast related to the BOLD contrast. BOLD stands for blood oxygenation level-dependent signal, which reflects the amount of paramagnetic deoxygenated hemoglobin in the blood. The changes in BOLD are driven by blood flow. Therefore, perfusion and BOLD are directly related and causally linked, but neither spatially nor temporally identical. Another determinant of the BOLD contrast is oxidative metabolism, which, however, is difficult to assess. Blood volume changes also contribute to the BOLD signal, albeit in a complicated way. Finally, vascular parameters, such as baseline blood volume, hematocrit or the transit time of blood through the tissue, determine the spatiotemporal properties of the BOLD signal.
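One widely used way to tie these determinants together is a Davis-style calibrated-BOLD model, in which the fractional signal change depends on the relative changes in blood flow and oxidative metabolism. The parameter values below (M, alpha, beta) are assumed for illustration only:

```python
# Davis-style calibrated-BOLD model: fractional signal change as a function
# of relative changes in blood flow (CBF) and oxidative metabolism (CMRO2).
# M, alpha and beta are assumed illustrative values, not measured constants.
def bold_change(cbf_ratio, cmro2_ratio, M=0.08, alpha=0.38, beta=1.5):
    """dS/S = M * (1 - cmro2_ratio**beta * cbf_ratio**(alpha - beta))"""
    return M * (1.0 - cmro2_ratio ** beta * cbf_ratio ** (alpha - beta))

# A hypothetical activation with a ~50% CBF increase and a ~20% CMRO2
# increase yields a positive BOLD change on the order of one percent.
ds = bold_change(cbf_ratio=1.5, cmro2_ratio=1.2)
```

The structure of the formula captures the interplay described above: flow increases push the signal up, metabolism increases pull it down, and the scaling factor M bundles the baseline vascular parameters.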
Looking outside of fMRI or MRI: What would best complement an fMRI study?
There is no general answer. For a lot of cognitive neuroscience questions, the most standard complementary tool is EEG, which can be acquired simultaneously with fMRI. PET is obviously very useful for a lot of other questions, particularly in the context of neurologic and psychiatric disorders. However, not many functional imaging investigators currently use PET, as it is quite difficult to use and requires injection of radioactive tracers. When everything is in place, the relationships between neurotransmitters, neuromodulators and fMRI can be studied.
If animal models are considered, there are other powerful tools such as intrinsic optical imaging, optogenetics or invasive electrophysiology. Then there are also non-imaging tools such as psychophysical measures: e.g., reaction times or performance indicators, which are relevant for cognitive theories. Imaging can also be related to non-imaging biological parameters such as genetics. In the UK, they are doing a big study where they relate behavior, clinical symptoms and whole-genome patterns to brain patterns. It is very promising as it can reveal a lot, but a lot of resources and big data are needed for such a study. So, there is no single answer to what complements fMRI best. It depends on your scientific or clinical question.
It seems the answer is to get more data and then see what happens.
No, I think people should foremost think carefully about the research question and determine which imaging tools to use and how big the data size has to be. The current trends seem to go in opposite directions. There are single-subject studies where, ideally, data from only one subject is needed. Of course, there is underlying big data, because single-subject data have to be comprehensively acquired and respective computational models and cognitive theories developed.
For other currently popular studies, such as those utilizing genetics, machine learning and deep learning, big data is often mandatory. Open data repositories will certainly be a big step forward for big-data studies and reproducible science, even though proponents do not often point out the potential limitations of this approach: e.g., non-optimized data acquisition, or prohibiting exploratory science and single-subject medicine and science. I am a big proponent of open science, but we have to be careful not to develop a monocultural science. Science thrives on diversity, also on the level of scientific inquiry.
So, there are two trends, and sometimes even the same investigator, including me, follows both. With big data, novel questions can be addressed that are small in effect size, not only because of low image SNR but because the underlying phenomena are so complex. Therefore, I would not say that anything goes. The scientific question will determine what the best approach is. I think that conceptual theories that can actually integrate all of these different findings are still lacking.