Anatomy and function

We had the opportunity to test Prof Dr Nikolaus Weiskopf's diverse knowledge in the field of MRI with six questions, and got answers on topics ranging from mesoscopic resolutions to standards for open science.

Nikolaus Weiskopf, Prof Dr

Max Planck Institute for Human Cognitive and Brain Sciences

What structural and functional information do we expect to gain when going from 1 mm isotropic to 0.5 mm isotropic resolution?

That question goes to the heart of the research we do in Leipzig. We are trying to get down to mesoscopic resolutions with MRI, somewhere around a few hundred micrometers in vivo. In fact, we even try to go below that, but it is very difficult or maybe impossible to achieve. Typically, when you think about functional MRI and also anatomical MRI, we focus on brain areas, on knowing what a certain part of the cortex does. Using MRI, we understand better how these brain areas interact and what functions they mediate. I think we have come a very long way since fMRI was invented some 25 years ago. But conventional fMRI cannot look at structures within cortical areas, which is what we are interested in. We want to dissect the cortical areas and better understand what is happening within them. Here, you have structures such as cortical columns, ocular dominance columns or orientation columns. The diameter of these structures is around a millimeter, so you certainly need submillimeter resolution to resolve them properly. We believe, and there is no disagreement in the neuroscience community, that these mesoscopic structures, such as cortical columns or cortical layers, are very important structures that mediate and strongly shape function.
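
As a rough editorial illustration of what the resolution step buys (the ~1 mm column diameter is taken from the answer above; the Nyquist argument is a textbook consideration, not a quote):

```python
# Back-of-the-envelope arithmetic: how many voxels span a ~1 mm cortical
# column at the two resolutions mentioned in the question?

def voxels_across(structure_mm: float, voxel_mm: float) -> float:
    """Number of voxels spanning a structure along one dimension."""
    return structure_mm / voxel_mm

for voxel in (1.0, 0.5):
    across = voxels_across(1.0, voxel)
    per_volume = across ** 3
    print(f"{voxel} mm isotropic: {across:.0f} voxel(s) across a column, "
          f"~{per_volume:.0f} voxel(s) per column volume")

# 1.0 mm isotropic: a single voxel across, so the column and its neighbors
# blur together. 0.5 mm isotropic: two voxels across and 8x the voxels per
# unit volume, i.e. the minimum of two samples per cycle needed to begin
# separating adjacent columns.
```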

Resolution is one measure, another one is contrast. And there, too, you have been pushing multi-parameter mapping to extract a set of parameters and contrasts. Is there a contrast that you cannot measure but would like to for your research?

There are probably two aspects to that. Of course, there are plenty of contrasts missing that we would love to have. If you look at what you can do with photonics and microscopy, it is amazing what kind of information you can get post mortem from tissue, in terms of resolution but also in terms of contrast: very specific information with very high sensitivity. The downside is that we cannot apply these photonics methods in vivo in humans, and in vivo MRI does not offer this information directly either. The upside is that the contrasts we do have target very important features of the tissue with regard to anatomy. With relaxometry, we have pretty well-established markers for myelination levels in the cortex or in white matter. In a similar way, we are sensitive to paramagnetic substances in the brain, such as iron. And we know that iron plays an important role in tissue function and also in pathology.

It would be great to have more and new contrasts. But at this point, quantitative MRI is a very important advance, quantifying contrast parameters and making them comparable across time points and sites. At the same time, what we believe in is acquiring multi-contrast data, because each of the different contrasts has a differential sensitivity to the underlying tissue microstructure. So, when you combine these different contrasts, you can learn more about the tissue. This approach needs to be linked with biophysical modeling to understand the contrasts better and how to combine them to make inferences about microstructure that is much smaller than the voxel size. This is necessary, as we have at best 400-micrometer voxel sizes in anatomical MRI at 7T in vivo, but we are interested in microstructure at the spatial scale of micrometers.
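
To give a concrete flavor of one building block of quantitative relaxometry, here is a minimal editorial sketch of a voxel-wise R2* estimate from multi-echo gradient-echo magnitudes. All echo times and signal values are made up for illustration; this is not the Leipzig multi-parameter mapping pipeline itself.

```python
import numpy as np

# Mono-exponential decay model: S(TE) = S0 * exp(-R2* * TE),
# so log S = log S0 - R2* * TE, and a straight-line fit recovers R2*.

TE = np.array([5e-3, 10e-3, 15e-3, 20e-3, 25e-3])   # echo times in seconds (illustrative)
true_r2s, true_s0 = 40.0, 1000.0                    # ground truth: 1/s, arbitrary units
signal = true_s0 * np.exp(-true_r2s * TE)
signal += np.random.default_rng(0).normal(0, 5, TE.size)  # measurement noise

# Least-squares line through (TE, log S): slope = -R2*, intercept = log S0.
slope, intercept = np.polyfit(TE, np.log(signal), 1)
print(f"R2* = {-slope:.1f} 1/s, S0 = {np.exp(intercept):.0f}")
```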

In neuroscience, the processing chain from the raw data to the interpretation can be very long. What does this mean for quality assurance?

It certainly poses a great challenge to quality assurance (QA). In many physics experiments you can rely on doing quality control of the instruments you have. With neuroimaging methods based on MRI, we go far beyond the scientific measurement instrument, the MRI scanner, which, by the way, is not built for scientific measurement purposes. As you mentioned, sometimes you get very rich datasets consisting of functional MRI, anatomical MRI and diffusion MRI. These are highly complex datasets, which often do not have a very high signal-to-noise ratio. You cannot simply look at an image and see whether the effect is there or not; you have to perform extensive processing on the data. The data processing then becomes part of your measurement process and measurement instrument.

We have seen the effects of this in data analysis challenges, where different sites analyze the same data and compare the results. The results often look different because the data were analyzed in different ways. In a sense, the measurement instrument is different. What we try to achieve is to include the processing chain in the QA. However, you might have 10, 20, 30 different processing chains, and you cannot do QA on all of them. Hence, this issue still needs to be resolved by the neuroimaging community.
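
One pragmatic way to treat the processing chain as part of the instrument is to fingerprint it: record every step, its parameters and the software versions, and hash them so any divergence between sites becomes visible. The sketch below is an editorial illustration of this idea, not an established neuroimaging tool; all step and parameter names are hypothetical.

```python
import hashlib
import json
import platform

import numpy as np

# Editorial sketch: fingerprint a processing chain so QA can detect when
# two sites effectively "measured" with different analysis instruments.
pipeline = {
    "steps": [
        {"name": "motion_correction", "params": {"ref_volume": 0}},
        {"name": "spatial_smoothing", "params": {"fwhm_mm": 6.0}},
        {"name": "glm", "params": {"hrf": "double_gamma"}},
    ],
    "software": {"python": platform.python_version(), "numpy": np.__version__},
}

# Canonical JSON -> stable hash; any change in step order, parameters or
# software versions yields a different fingerprint.
blob = json.dumps(pipeline, sort_keys=True).encode()
print("pipeline fingerprint:", hashlib.sha256(blob).hexdigest()[:16])
```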

An important part of data processing is image reconstruction, the step that turns the raw data into images, as it has a significant effect on your results. The combination of a monitored imaging process with a transparent image reconstruction process is very helpful for open science and reproducibility. The former includes, for example, measuring deviations from the desired k-space trajectory with a field camera; the latter can correct for imperfections by taking them into account in the image reconstruction step. Then everybody can reproduce the image reconstruction from the raw data. Moreover, in terms of quality assurance, this reconstruction approach allows you to repeat the image reconstruction with different settings. It will tell you exactly whether there are artifacts or other unwanted effects in the data. Quality assurance and open science are domains that benefit from exact knowledge about your measurement instrument.
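
To make the point about trajectory-aware reconstruction concrete, here is an editorial toy example: a tiny 1D least-squares reconstruction from k-space samples, done once with the nominal trajectory and once with the (simulated) actual trajectory that a field camera would report. Everything here is synthetic and drastically simplified; it only illustrates why the measured trajectory belongs in the reconstruction.

```python
import numpy as np

# Toy setup: the scanner *intends* a nominal k-space trajectory, but gradient
# imperfections make the *actual* trajectory slightly different. A field
# camera measures the actual trajectory, which can then be used in recon.
rng = np.random.default_rng(1)
n = 32
x = np.arange(n)                                   # voxel positions
obj = np.zeros(n)
obj[10:22] = 1.0                                   # simple box object

k_nominal = np.linspace(-0.5, 0.5, n, endpoint=False)
k_actual = k_nominal + rng.normal(0, 0.002, n)     # trajectory deviations

def encoding(k):
    """Explicit Fourier encoding matrix E[m, n] = exp(-2*pi*i * k_m * x_n)."""
    return np.exp(-2j * np.pi * np.outer(k, x))

data = encoding(k_actual) @ obj                    # what the scanner records

for label, k in [("nominal", k_nominal), ("measured", k_actual)]:
    recon = np.linalg.lstsq(encoding(k), data, rcond=None)[0]
    err = np.linalg.norm(recon.real - obj) / np.linalg.norm(obj)
    print(f"recon with {label} trajectory: relative error {err:.3f}")

# Reconstructing with the measured trajectory recovers the object almost
# exactly; assuming the nominal trajectory leaves a residual artifact.
```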

You have brought it up already: Neuroscience does not only work with MRI but with many modalities. What are important combinations of modalities, either acquired simultaneously or as sequential experiments combined in a study?

There is a long tradition in neuroscience of combining different methods. When we talk about human neuroimaging and what is useful to combine, there are still very strong contenders. Obviously, there is MRI, anatomical but also functional MRI, which gives you very high spatial resolution, exquisite soft-tissue contrast and contrast sensitive to physiological processes. Quite obviously, there are other methods that have a much higher temporal resolution. Noninvasive electrophysiological methods such as EEG or MEG excel when it comes to temporal resolution. Quite a few people combine these techniques. I would say that most of the time they would not apply them concurrently but sequentially. A lot of paradigms are so highly reproducible that you would not need to acquire data concurrently.

In some cases, you would acquire multi-modal data concurrently. Starting with the combination of these electrophysiological methods, EEG in particular, and functional MRI: the main application of concurrent data acquisition is to study spontaneous activity. In this case you do not have any control over the activity. It is not a particular stimulus or stimulus response you study; it might be something along the lines of resting-state activity. There, obviously, you cannot repeat the experiment. I cannot put my volunteer into the same kind of resting-state activity during the EEG experiment as in the following MRI scan. Concurrent measurements have also been used to study neurovascular coupling. There, you have to make sure to base the estimates of the neurovascular coupling on the fMRI or BOLD response measured at the same time as the EEG response. Another good example is studies in the area of epilepsy, where you have spontaneous epileptic activity in the brain.
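
A minimal editorial sketch of how a concurrently recorded EEG regressor is commonly related to the BOLD signal in neurovascular-coupling analyses: convolve an EEG-derived time course with a canonical double-gamma hemodynamic response function and fit it to the fMRI time series by least squares. All data here are synthetic and the parameter values are illustrative, not taken from the interview.

```python
import numpy as np
from math import gamma

# Sampling: one EEG-derived value (e.g. band power) per fMRI volume.
TR, n_vol = 2.0, 150
t = np.arange(0, 30, TR)                          # HRF support, sampled at TR

def double_gamma_hrf(t, a1=6.0, a2=16.0, c=1 / 6):
    """Canonical double-gamma hemodynamic response function (peak-normalized)."""
    h = (t ** (a1 - 1) * np.exp(-t) / gamma(a1)
         - c * t ** (a2 - 1) * np.exp(-t) / gamma(a2))
    return h / h.max()

rng = np.random.default_rng(2)
eeg_power = rng.gamma(2.0, 1.0, n_vol)            # synthetic EEG power per TR
regressor = np.convolve(eeg_power, double_gamma_hrf(t))[:n_vol]

true_beta = 0.8
bold = true_beta * regressor + rng.normal(0, 0.5, n_vol)   # synthetic BOLD

# GLM: estimate the coupling coefficient by least squares.
X = np.column_stack([regressor, np.ones(n_vol)])
beta = np.linalg.lstsq(X, bold, rcond=None)[0]
print(f"estimated coupling beta = {beta[0]:.2f} (true {true_beta})")
```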

It is not only crucial to have transparency about the experimental data, but also imperative to know what the raw data represents and what it means. This implies that standards and pertinent metadata are required. Do you see standards that are capable of supporting open science practice?

At the moment there are a lot of developments because of the open science movement and the strong push towards wider data sharing. There is a lot of interest in the community in developing standardized data formats. The DICOM standard is an example. There are others, developed more by the scientific community, such as the NIfTI standard for imaging. One of the areas with a lot of activity is not only the data itself but the metadata. A challenging example would be a functional MRI experiment, which comes with a lot of imaging data. Even if that data is properly documented and stored in terms of its imaging parameters and all of its technical aspects, you would still be unable to reanalyze it, as you would be missing the rationale for the experiment and the experimental design itself. Hence, you need to find a way to document this properly as well. On top of that, you need to integrate the MRI data with the information about the experimental design. One of the directions we pursue is to store not only image data but also raw data from the MRI scanner, such as k-space raw data. Unfortunately, I think there is no well-established standard for that. Hence, the community's stronger focus on establishing standards is crucial.
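
As an editorial illustration of the metadata point, here is a sketch of a JSON "sidecar" for a functional run, in the spirit of community conventions such as BIDS. It pairs the technical acquisition parameters with the experimental rationale that is otherwise lost; the specific values and filename are made up.

```python
import json

# Editorial sketch of a metadata sidecar accompanying an fMRI run,
# in the spirit of BIDS-style conventions. Values are illustrative.
sidecar = {
    # technical acquisition parameters
    "RepetitionTime": 2.0,          # seconds
    "EchoTime": 0.03,               # seconds
    "MagneticFieldStrength": 7,     # tesla
    # the part that is often missing: why the data were acquired
    "TaskName": "visual_checkerboard",
    "TaskDescription": "8 Hz flickering checkerboard, 20 s on / 20 s off",
    "CogAtlasID": None,             # link to a formal task ontology, if any
}

with open("sub-01_task-visual_bold.json", "w") as f:
    json.dump(sidecar, f, indent=2)
```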

To bring the conversation to a close, I would like to talk about a technique which you have been looking into a lot: neurofeedback. What will neurofeedback be able to treat? And will fMRI have a role in treatment or will neurofeedback remain a research tool?

At this point in time, it is too early to tell what real-time fMRI neurofeedback will be able to treat. But it is an exciting time, as there are several clinical trials going on with real-time fMRI and neurofeedback. In the next couple of years, we will know much better what clinical effects can be achieved. So far, a lot of the clinical trials are looking at mental disorders like depression or addiction. There is one major project going on, the EU-funded BRAINTRAIN project, which we are part of. It is a multi-country and multi-site endeavor that pursues several clinical trials in the domain of mental disorders.

In my opinion, real-time fMRI, as compared to EEG neurofeedback, is probably not going to remain a research tool only. This might be regarded as anecdotal, but what we tend to see is that you can train volunteers with real-time fMRI neurofeedback based on the BOLD response relatively well, and typically more quickly than with the EEG approaches. There seems to be an advantage in having this temporal low-pass filtering of the neuronal activity due to the neurovascular coupling. This holds especially if you want to train cognitive abilities or look at mental disorders. More importantly, the biggest advantage we see is the spatial accuracy or specificity of fMRI. You can target particular brain areas or combinations of brain areas, networks, which is much more difficult with the EEG methods. On the other hand, the EEG methods are nice because they are portable. However, there are cases where you probably need the spatial specificity and coverage of functional MRI, which you cannot achieve with any other technique, such as accessing deep brain areas. I expect that there will be applications where you need to use fMRI neurofeedback and cannot replace it with any other technique.
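
To close with a concrete picture of the technique under discussion, here is an editorial sketch of a real-time fMRI neurofeedback loop: each incoming volume is reduced to a region-of-interest average, compared against a baseline estimated during rest, and mapped to a feedback value (e.g. a thermometer display) shown to the volunteer. All shapes, thresholds and block lengths are illustrative, not a clinical protocol.

```python
import numpy as np

rng = np.random.default_rng(3)

# Target region of interest (ROI) in a tiny toy volume.
roi_mask = np.zeros((4, 4, 4), dtype=bool)
roi_mask[1:3, 1:3, 1:3] = True

baseline_vals = []

def feedback(volume, regulate: bool) -> float:
    """Return a 0..1 feedback value for one incoming volume."""
    roi_mean = volume[roi_mask].mean()
    if not regulate:                      # rest block: update the baseline
        baseline_vals.append(roi_mean)
        return 0.0
    base = np.mean(baseline_vals)
    psc = 100 * (roi_mean - base) / base  # percent signal change vs. baseline
    return float(np.clip(psc / 2.0, 0.0, 1.0))  # scale to thermometer height

for vol_idx in range(6):
    regulate = vol_idx >= 3               # first 3 volumes: rest baseline
    volume = 100 + rng.normal(0, 1, (4, 4, 4)) + (2.0 if regulate else 0.0)
    print(f"volume {vol_idx}: feedback = {feedback(volume, regulate):.2f}")
```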
