Working in two different labs, Lars Kasper, PhD, is not only experienced in the development of MR methods but also in using fMRI for psychiatric applications. We were particularly interested in Lars’ preference for spiral acquisition strategies.
Lars Kasper, PhD
IBT University of Zurich and ETH Zurich
You have done a lot of work on non-Cartesian acquisition strategies for MRI. Where do you see non-Cartesian acquisitions making a difference big enough to be adopted by scientific or even clinical imaging?
I am mostly working in functional MRI, and I am based in two labs. In Klaas Prüssmann’s lab, I am developing new methodologies for MRI, and in Klaas Enno Stephan’s lab, I am using functional neuroimaging for clinical psychiatric applications. For functional MRI, I see a huge benefit of non-Cartesian acquisitions because of the strong demand for going faster and into more spatial detail. In terms of temporal resolution, we can finally critically sample non-BOLD fluctuations in the signal: we get a very accurate picture of the noise in the brain if the temporal resolution is below half a second. In terms of spatial resolution, mapping the brain’s functional organization into cortical layers and columns requires resolutions of about half a millimeter. Both are goals that, with current scanner hardware, you can only meet with non-Cartesian acquisitions. The acquisition efficiency of non-Cartesian readouts is a lot higher than that of Cartesian acquisitions, and the TE can be chosen with more care, giving SNR a boost. This holds for both the 2D case, where spirals are faster than EPIs, and the 3D case, where 3D spiral-type readouts are available.
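As a rough illustration of what such a non-Cartesian readout looks like in k-space, here is a minimal Python sketch of an Archimedean spiral trajectory. The field of view, resolution, and sampling values are illustrative placeholders, not parameters from any protocol discussed here, and hardware limits on gradient amplitude and slew rate are ignored.

```python
import numpy as np

# Minimal sketch of a 2D Archimedean spiral k-space trajectory (constant angular
# velocity in this toy version). All values are illustrative only.
fov = 0.22            # field of view (m)
resolution = 0.5e-3   # target in-plane resolution (m)
k_max = 1.0 / (2.0 * resolution)   # maximum k-space radius (1/m)
n_turns = k_max * fov              # turns needed for Nyquist spacing 1/FOV between turns

n_samples = 50000
t = np.linspace(0.0, 1.0, n_samples)   # normalized time along the readout

theta = 2.0 * np.pi * n_turns * t      # spiral angle
kx = k_max * t * np.cos(theta)         # radius grows linearly with time/angle
ky = k_max * t * np.sin(theta)

# Gradient waveforms (up to the gyromagnetic ratio and readout duration) follow
# from the k-space velocity dk/dt; real designs also respect slew-rate limits.
gx = np.gradient(kx, t)
gy = np.gradient(ky, t)
```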
Do you see specific applications and anatomical areas in the brain that will benefit the most?
Non-Cartesian acquisitions give you much more freedom in designing trajectories tailored to the application you envisage. That refers to anatomical regions – such as the prefrontal cortex, where you need to fight signal drop-outs – but it also refers to your post-processing, e.g., matching the acquisition weighting to a desired image filter. On the other hand, a lot of the current layer fMRI studies are in superficial brain regions, such as the motor cortex. I think one reason for that is that there you can get very high resolution in a reasonable time, even with a Cartesian acquisition, by cleverly positioning a small FOV covering just the region of interest. But for more inferior or medial brain regions, or if you are interested in multiple brain regions, you will often have to acquire the whole slice with a whole-brain FOV, so that ultra-high resolutions will require a more efficient non-Cartesian acquisition strategy.
In neuroscience, the most typical contrast currently being used is BOLD fMRI. Do you see other contrasts that will be equally important for neuroscience?
The balloon model for BOLD has three components: the blood flow, the blood volume, and the blood oxygenation level. With a typical T2*-weighted image, you only acquire a combination of the three. But what if you were able to use it in combination with ASL to also measure the blood flow, or to combine it with VASO to get approximate blood volume changes? Maybe then you can marry multi-contrast functional imaging on the acquisition side with a more sophisticated model of the brain physiology that creates these different contrast images. I am very interested in exploiting the increasing sophistication of both the image acquisition and post-processing methods on the one hand, and the behavioral and neuro-modelling analysis pipelines on the other. As MR physicists, we often lack knowledge of the existing model-based analysis approaches, e.g., dynamic system models reflecting the network architecture of the brain, combined with regional parameterizations of how the balloon model elicits the observed contrasts from neuronal activations. But I am sure these models could benefit from more specific contrasts with a clear physical interpretation.
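To make the three components concrete, here is a minimal Python sketch of the balloon model in the widely used Buxton/Friston-style formulation, where neuronal activity drives blood flow, which in turn changes blood volume and deoxyhemoglobin content, and these together generate the BOLD signal. The parameter values are common literature defaults and purely illustrative.

```python
import numpy as np

# Hemodynamic "balloon" model sketch: neuronal input u(t) drives a flow-inducing
# signal s, blood flow f, venous volume v, and deoxyhemoglobin content q.
# Parameter values are typical literature defaults, for illustration only.
kappa, gamma = 0.65, 0.41          # signal decay / flow autoregulation (1/s)
tau, alpha, E0 = 0.98, 0.32, 0.34  # transit time (s), vessel stiffness, resting O2 extraction
V0 = 0.04                          # resting venous blood volume fraction
k1, k2, k3 = 7 * E0, 2.0, 2 * E0 - 0.2  # BOLD weights (classic 1.5T values)

dt = 0.01                          # Euler integration step (s)
t = np.arange(0, 30, dt)
u = ((t > 1) & (t < 3)).astype(float)   # 2 s boxcar of neuronal activity

s, f, v, q = 0.0, 1.0, 1.0, 1.0
bold = np.zeros_like(t)
for i, ui in enumerate(u):
    ds = ui - kappa * s - gamma * (f - 1)
    df = s
    dv = (f - v ** (1 / alpha)) / tau
    dq = (f * (1 - (1 - E0) ** (1 / f)) / E0 - v ** (1 / alpha) * q / v) / tau
    s, f, v, q = s + dt * ds, f + dt * df, v + dt * dv, q + dt * dq
    # BOLD signal as a mixture of volume and deoxyhemoglobin changes
    bold[i] = V0 * (k1 * (1 - q) + k2 * (1 - q / v) + k3 * (1 - v))
```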
You have mentioned image combinations. People would like to do different combinations of contrasts. Maybe they would also like to add other modalities. Do you think this is the case? And if so, where are the limits of adding other modalities?
The combination of anatomical imaging with functional imaging is very common. You use the anatomical image to extract the regions for your functional analysis. This requires congruence between the images of the two different contrasts: how do you make sure that your different modalities all live in the same geometric space? For the typical 3 mm fMRI studies that have been done in the past, anatomical congruence might not have been the main issue. But now, if you want to extract data from a certain layer, it is important that the images are congruent at the submillimeter level.
With regard to other modalities, in our lab we have mainly been combining EEG with fMRI, exploiting the different scopes of these modalities. EEG has millisecond temporal resolution at centimeter spatial resolution. But if you want to know precisely which source in the brain created the electrical activity seen in EEG, you can rely on so-called source priors retrieved from a separate fMRI study with millimeter resolution. In a Bayesian framework, these priors make your model inversion for source localization more robust. In general, it is important to combine modalities with different spatiotemporal characteristics, or modalities that rely on different physical effects related to brain function, e.g., MEG or EEG with fMRI, to obtain complementary information.
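As a toy illustration of how such fMRI-derived source priors can enter a Bayesian inversion, the following sketch regularizes a linear EEG inverse problem with a prior source covariance that is inflated where fMRI showed activation. The lead field, dimensions, and weights are random placeholders, not from any real dataset.

```python
import numpy as np

# Toy sketch: fMRI-informed source prior for EEG source localization.
# The inverse problem y = L j + noise is ill-posed (many sources, few sensors);
# a Gaussian prior on the sources, with larger variance where fMRI showed
# activation, yields a MAP estimate favoring fMRI-plausible solutions.
rng = np.random.default_rng(0)
n_sensors, n_sources = 64, 500
L = rng.standard_normal((n_sensors, n_sources))  # placeholder lead field

fmri_weight = np.ones(n_sources)
fmri_weight[100:120] = 10.0        # sources active in the (hypothetical) fMRI study
C_prior = np.diag(fmri_weight)     # fMRI-informed prior source covariance
C_noise = np.eye(n_sensors)        # sensor noise covariance

j_true = np.zeros(n_sources)
j_true[105:110] = 1.0              # simulated activity inside the fMRI region
y = L @ j_true + 0.1 * rng.standard_normal(n_sensors)

# Bayesian minimum-norm / MAP estimate: j = C L^T (L C L^T + C_noise)^-1 y
j_map = C_prior @ L.T @ np.linalg.solve(L @ C_prior @ L.T + C_noise, y)
```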
Where are single-subject studies used today, and what questions are being answered by multi-subject studies?
When it comes to clinical studies and cognitive neuroimaging questions, multi-subject studies are usually used. One reason is that in a clinical study you want to generalize to a population, not characterize one patient only. Another reason is that the effects you are interested in are often tiny. With sensory-motor functions you can achieve several percent of BOLD signal change, but for things such as prediction errors or higher-level cognitive concepts – such as trust – the changes you see in the BOLD signal are in the per mille range. To see that in a single subject, you would have to scan them for dozens of hours.
If you are interested in very basic neuroscience questions where you think that the functional organization is the same in every brain, the single-subject study is a good approach – for example, if you look at the structure of the visual cortex and investigate how certain sub-regions relate to the topography of the visual field. In order not to let individual differences in anatomy average out effects between subjects, you need to retain very fine detail when mapping between the single subjects and between the functional and structural data. These are typically few-subject studies where there is much more measurement time per subject and much more attention on the individual post-processing of the images than in large group studies, where a standard pipeline is used.
Where do you see the two approaches – single- and multi-subject – moving towards in the future?
With regard to the future of these approaches, big data could be considered one trend. But I think an equal trend, and one that I find more appealing, is to characterize individual subjects much better. I think this is what is ultimately needed in clinical practice. I never come to a practitioner as a group of patients; I always come as an individual. The doctor has to assess what symptoms I have and look at my patient history. I think that every generalization brushes over details. If you are able to get a lot of information from a single subject, you might be able to characterize them better. And then there are secondary effects: when you acquire fewer subjects, each individual subject’s data is more valuable to the people acquiring it. In big data, there is a risk of losing information because different people acquire and analyze the data.
How can MR technology help in this shift towards more careful analysis of individual datasets?
My school of thought is a model-based approach where you use a lot of acquired data from different sources, but also inform a model of what has been happening. That goes both for the effects you are interested in – e.g., the BOLD effect – and for the effects that limit the sensitivity, the noise in the data. If you are able to characterize the physiological fluctuations that are not BOLD-related, such as cardiac pulsation or breathing-related changes in the encoding magnetic fields, then you are able to increase your sensitivity. Another perturbing effect is motion of the head, which is still the biggest SNR killer in fMRI. Independent motion tracking data can greatly improve noise separation in post-processing, or even allow for prospective correction during the acquisition.
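One standard way to characterize such cardiac- and breathing-related fluctuations is to expand recorded cardiac and respiratory phases into low-order Fourier nuisance regressors (RETROICOR-style) and remove them from the voxel time series. The sketch below uses synthetic phases and data purely for illustration; timings and frequencies are made up.

```python
import numpy as np

# RETROICOR-style sketch: expand cardiac and respiratory phase at each fMRI
# volume into Fourier regressors and regress them out of a voxel time series.
rng = np.random.default_rng(1)
n_vols, tr = 300, 0.5                        # number of volumes, repetition time (s)
t = np.arange(n_vols) * tr

cardiac_phase = 2 * np.pi * (t * 1.1 % 1.0)  # ~66 bpm, pretend phase from pulse oximeter
resp_phase = 2 * np.pi * (t * 0.25 % 1.0)    # ~15 breaths/min, pretend phase from belt

def fourier_regressors(phase, order=2):
    """Sine/cosine expansion of a physiological phase up to the given order."""
    return np.column_stack([f(n * phase) for n in range(1, order + 1)
                            for f in (np.sin, np.cos)])

X = np.column_stack([np.ones(n_vols),
                     fourier_regressors(cardiac_phase),
                     fourier_regressors(resp_phase)])

voxel = rng.standard_normal(n_vols) + 0.5 * np.sin(cardiac_phase)  # toy time series
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
cleaned = voxel - X @ beta + beta[0]         # remove physiological components, keep mean
```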
Also, acquisition efficiency is key to making the most of individual datasets by pushing their spatiotemporal information content. To this end, any technology facilitating non-Cartesian acquisition strategies helps. In my day-to-day work, field monitoring accelerates sequence design, since I do not need to calibrate the inaccuracies of every new trajectory, and it enables routine use of such acquisition strategies by increasing their robustness: non-reproducible changes in the field, such as physiological effects, are accounted for and do not compromise the reconstructed image.
You have alluded to layer fMRI before. What insights do you expect from layer fMRI?
Layers are anatomically clearly delineated structures in the brain. In Klaas Enno Stephan’s lab, we look a lot at this idea of the Bayesian brain and predictive coding, where you think of the brain as a Bayesian inference machine that takes sensations from the world and tries to infer the state of the world on the basis of these sensations and its own prior knowledge. You can formulate this in terms of the brain making predictions and updating them via prediction errors through the sensations it gets. For the way this processing stream is implemented in the brain, there are specific hypotheses that certain layers transport prediction errors up the hierarchy, from the primary sensory areas to the higher-level frontal/prefrontal areas, and that other layers transport the predictions back down to the sensory regions. Something I truly like about this idea is that the models have been specified down to this implementation level of which layer is computing what. However, this has not been tested experimentally in humans. If the model indeed describes information processing in the brain, it might open up a new way of looking at psychiatric disorders as changes in the layered circuitry implementing inference about the world.
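As a toy illustration of the computational idea (not of its cortical implementation), the following sketch updates a belief about a hidden cause by precision-weighted prediction errors for a single Gaussian level; all quantities are made up for illustration.

```python
import numpy as np

# Toy predictive-coding sketch: a belief mu about the cause of sensory input is
# updated by the precision-weighted prediction error (ascending signal) and
# pulled back towards the prior (descending prediction). Illustrative only.
rng = np.random.default_rng(2)
true_cause = 2.0
pi_sensory, pi_prior = 4.0, 1.0    # precisions (inverse variances) of sensation and prior
mu = 0.0                           # prior belief about the cause
lr = 0.05                          # update rate

for _ in range(200):
    sensation = true_cause + rng.normal(scale=pi_sensory ** -0.5)
    prediction_error = sensation - mu        # error sent up the hierarchy
    prior_error = mu - 0.0                   # deviation from the prior mean
    mu += lr * (pi_sensory * prediction_error - pi_prior * prior_error)
# mu converges towards the precision-weighted combination of prior and sensations
```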