An insight view on neuroimaging

Working at the Wellcome Trust Center for Human Neuroimaging, Prof Martina Callaghan and her colleagues study higher cognitive function using neuroimaging techniques to better understand neurological and psychiatric diseases. We were lucky enough to pick Martina’s brain on several MRI topics, gaining insight into her view on neuroimaging.

Prof Martina Callaghan

Wellcome Trust Center for Human Neuroimaging, UCL

What quantitative measures do we have available in MRI today? In particular, how quantitative are volume measurements?

We have many different quantitative measures available to us, e.g., relaxation measurements, diffusion measurements and so on, which ultimately allow us to gain insight into brain microstructure. These measurements add a lot of value because you can take a multi-modal view to try to understand the underlying organization and composition of the brain’s microstructure. From a clinical perspective, the utility of different physical parameters has already been shown indirectly: T1-weighted images are already in extensive use for volumetric measurements, and susceptibility-weighted images are very popular clinically.

I think quantitative MRI can add value because it can complement the type of volumetric measures that have been made in the past. In a weighted image, you have many different factors that are involved in determining the contrast, from which you then try to characterize the volume. Going from multiple factors contributing to the contrast to a single one, quantitative MRI can provide a more specific measure and a more direct window onto the microstructure of the brain.
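
To illustrate how many factors are entangled in a single weighted image, the steady-state signal of a spoiled gradient echo acquisition (the basis of many T1-weighted protocols) can be written in its standard form:

```latex
S = \rho \,\sin(\alpha)\,\frac{1 - e^{-TR/T_1}}{1 - \cos(\alpha)\,e^{-TR/T_1}}\,e^{-TE/T_2^*}
```

The proton density \rho, the relaxation times T_1 and T_2^*, and the transmit-field-dependent flip angle \alpha all shape the contrast at once. Quantitative mapping varies TR, TE or \alpha across acquisitions so that each map isolates a single parameter.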

What quantitative measures are already being applied in clinics and which ones are currently venturing into the clinic?

I am not sure that there are any quantitative measures that are truly established within the clinical environment. I would say the use of diffusion tensor imaging, calculating fractional anisotropy, is probably the most established. Whether the other measures will make it into the clinical environment in the near future, beyond research-oriented groups, is not so clear. Although they provide this great specificity, there are a number of different factors which have to be considered, e.g., many hardware-related effects that need to be calibrated and removed. If that cannot be done easily, it becomes more difficult to get the techniques adopted in the clinic. The ease of use and also the added value need to be demonstrated and established. That is certainly not to say that there are not research groups in the clinical environment that are already capitalizing on more advanced qMRI measures.
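
For reference, the fractional anisotropy mentioned here is computed from the eigenvalues \lambda_1, \lambda_2, \lambda_3 of the fitted diffusion tensor, with \bar{\lambda} their mean:

```latex
FA = \sqrt{\tfrac{3}{2}}\;
\frac{\sqrt{(\lambda_1-\bar{\lambda})^2 + (\lambda_2-\bar{\lambda})^2 + (\lambda_3-\bar{\lambda})^2}}
     {\sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}
```

FA is a single dimensionless number per voxel, ranging from 0 for isotropic diffusion to 1 for diffusion confined to one axis.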

Building a biophysical model of tissue is a field of research you have worked on quite a bit. What results could you find on this basis?

An important aspect of biophysical modelling is to try to characterize biological features such as the myelin distribution within the brain, the iron distribution, or some other feature of interest. From that perspective, we need to have models that might be anywhere from a simple correlation, something measured by us that can correlate with these underlying tissue features, to a more complex model, which could be geometrically driven. It is also important that we can estimate the parameters of the model with sufficient sensitivity and precision, and that we have a feedback loop back to the acquired data to progress further toward that goal of truly quantitatively characterizing these biological features. At the moment we are learning more about what it is exactly within the tissue microstructure that we can be sensitive to with MRI, and also how different measures are differently sensitive to those underlying features.

How do you choose your model amongst all the possible models?

This is a demanding task. It depends on what the aim is. You need to find the best measure that will allow you to access the tissue feature you are interested in given the constraints of your measurement scheme. It is about determining: What is feasible, what can be achieved and what can we quantify at the moment? You can of course develop very complex models with many different parameters, but can you expect to reliably estimate all of those parameters in a meaningful way?

What data do you feed into these models, both MR- and non-MR-based?

The primary focus for me is on MRI data. We routinely use the multi-parameter mapping protocol at our center. For that, we are acquiring data to measure longitudinal and effective transverse relaxation rates, magnetization transfer, proton density and more recently tissue susceptibility. We are measuring all of these concurrently in a fairly reasonable scan time that allows us to get whole brain coverage with a pretty high resolution for the field strength we are working at (3T). In the past, we tried to link these to functional measures from MEG, magnetoencephalography, which is another prominent neuroimaging modality at our center. That is very interesting because it gives you a modality-independent verification, or linking, between structure and function. MEG, which has a very high temporal resolution that we do not have with functional MRI, is very beneficial to us.
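
As a rough sketch of the kind of fitting behind one of these maps (illustrative only, not the centre's actual multi-parameter mapping pipeline), the effective transverse relaxation rate R2* can be estimated voxel-wise from multi-echo gradient echo magnitudes with a log-linear least-squares fit:

```python
import numpy as np

def fit_r2star(magnitude, echo_times_ms):
    """Voxel-wise R2* (1/s) from multi-echo gradient echo magnitude data.

    magnitude     : array of shape (x, y, z, n_echoes)
    echo_times_ms : 1D array of echo times in milliseconds
    """
    te_s = np.asarray(echo_times_ms, dtype=float) / 1000.0   # echo times in seconds
    log_s = np.log(np.clip(magnitude, 1e-6, None))           # avoid log(0)
    flat = log_s.reshape(-1, te_s.size).T                    # (n_echoes, n_voxels)
    # Model: log(S) = log(S0) - R2* * TE, solved for all voxels at once
    design = np.vstack([np.ones_like(te_s), -te_s]).T
    coeffs, *_ = np.linalg.lstsq(design, flat, rcond=None)
    return coeffs[1].reshape(magnitude.shape[:-1])            # R2* map in 1/s

# Synthetic check: S0 = 1000, R2* = 25 1/s
tes = np.array([2.3, 7.3, 12.3, 17.3, 22.3])                  # ms
signal = 1000.0 * np.exp(-25.0 * tes[None, None, None, :] / 1000.0)
print(fit_r2star(signal, tes).mean())                         # ~25
```

The log-linear form keeps the fit fast enough for whole-brain data; weighted or complex-valued fits refine it, but the principle is the same.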

How important is the consistency between these datasets, e.g., between different MRI datasets or PET?

It depends on what exactly you mean by data consistency. If we talk about within modalities, so within MRI, I can give you a concrete example: We try to calculate the g-ratio so that we can describe the myelin sheath surrounding the axons within the brain. There we have to take a measure from diffusion imaging with an EPI readout together with measures from structural imaging, for which we typically use a spoiled gradient echo. You can imagine that the distortions in the diffusion data are very different to those in the spoiled gradient echo data. Linking those datasets together, accounting for the distortions, and using models that can deal with these sorts of artifacts is very important in order to have alignment and consistency between the data. Otherwise, these artifacts can lead to misquantification and erroneous effects.
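
Schematically, once a myelin volume fraction (MVF, e.g., derived from an MT-based measure) and an axonal volume fraction (AVF, derived from the diffusion data) have been estimated and brought into alignment, the aggregate g-ratio in each voxel follows as:

```latex
g = \sqrt{\frac{1}{1 + MVF/AVF}} = \sqrt{\frac{AVF}{AVF + MVF}}
```

Because the ratio is formed voxel by voxel, any residual misalignment or distortion between the EPI and spoiled gradient echo data feeds directly into the result, which is why the consistency described above matters so much.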

What are the image artifacts you struggle most with in your daily practice?

It probably again depends on the modality and what it is that we are doing, e.g., functional versus structural imaging. For quantitative MRI, the transmit field inhomogeneity is particularly important to calibrate for. Thus, having a scheme that allows us to map that B1 field is very important, as it is going to influence what exactly we quantify as being, for example, a T1 relaxation time. A rapid way of doing that might be with an EPI readout, but then we again have the business of linking the different measures and ensuring they are consistent. From the fMRI perspective, we try to make sure that our systems are very stable and that we do not have any unnecessary fluctuations within the data. These fluctuations can obscure the very small functionally driven effects that we are actually trying to quantify as part of the measurement scheme.
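
To make the role of the transmit (B1) map concrete, here is a minimal sketch of a two-flip-angle T1 estimate in which the nominal flip angles are scaled by the measured relative transmit field; the function and variable names are illustrative, not the centre's actual pipeline:

```python
import numpy as np

def vfa_t1(s1, s2, alpha1_deg, alpha2_deg, tr_ms, b1_rel):
    """T1 (ms) from two spoiled gradient echo acquisitions at different
    nominal flip angles, corrected with a relative transmit field map.

    b1_rel : actual/nominal flip angle (1.0 means the transmit field is nominal)
    """
    a1 = np.deg2rad(alpha1_deg) * b1_rel          # actual flip angles
    a2 = np.deg2rad(alpha2_deg) * b1_rel
    # Linearized signal equation: S/sin(a) = E1 * S/tan(a) + PD*(1 - E1)
    y1, x1 = s1 / np.sin(a1), s1 / np.tan(a1)
    y2, x2 = s2 / np.sin(a2), s2 / np.tan(a2)
    e1 = (y2 - y1) / (x2 - x1)                    # slope gives E1 = exp(-TR/T1)
    return -tr_ms / np.log(e1)

# Synthetic check: T1 = 1000 ms, TR = 25 ms, flip angles 6 and 21 degrees
t1_true, tr = 1000.0, 25.0
def gre(alpha_deg, f=1.0):
    a = np.deg2rad(alpha_deg) * f
    e1 = np.exp(-tr / t1_true)
    return np.sin(a) * (1 - e1) / (1 - np.cos(a) * e1)

print(vfa_t1(gre(6), gre(21), 6, 21, tr, b1_rel=1.0))             # ~1000
print(vfa_t1(gre(6, 0.9), gre(21, 0.9), 6, 21, tr, b1_rel=0.9))   # ~1000 once corrected
```

If the transmit field deviates from nominal but b1_rel is left at 1.0, the error propagates roughly quadratically into the estimated T1, which is why mapping the B1 field is treated as essential for quantitative work.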

You did not mention motion. Is that not so much of an issue for your research?

Actually, it is. Oftentimes when you develop a novel technique, it can look great, but then you put a more typical cohort into the scanner, and when you do not have your expert volunteers, motion becomes a much bigger issue. For fMRI, we often look at accelerated techniques such as multi-band imaging or 3D EPI, where we can capitalize on higher acceleration factors. In these cases, motion between your calibration data and your actual reconstructions can be a huge issue and introduce more variance into the data, which is a problem in terms of functional sensitivity. To address motion, we work on developing prospective motion correction schemes. But even just explaining to your participants the real impact that motion can have and the fact that they really do need to remain as still as possible – these little coaching tips can actually be very impactful in terms of data quality.

What QA do you run on your scanner? And what QA data acquisition do you run with every scan?

We run QA on our scanners every week across all of the different coils we use. We look at the stability, the temporal SNR, what fluctuations we are seeing in the data and whether any drifts are present, and we look at all of the hardware elements. It allows us to prospectively and proactively check whether or not there might be an imminent failure. We also use online monitoring: When we have participants in the scanner, we have set up a tool that can run analyses on the data in real time. It looks at the variance over time, over short and longer temporal windows, and at differences from volume to volume so that you can identify whether or not the participant is moving. That also gives a real-time view of how the data look and whether or not something might break down from a hardware perspective.
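
The kind of stability summary described here can be sketched in a few lines. The following is purely illustrative (it is not the centre's QA or online-monitoring tool) and computes temporal SNR, volume-to-volume change and global drift from a 4D series:

```python
import numpy as np

def qa_summary(timeseries):
    """Basic stability metrics for a 4D series of shape (x, y, z, t)."""
    mean_img = timeseries.mean(axis=-1)
    std_img = timeseries.std(axis=-1)
    tsnr = np.divide(mean_img, std_img, out=np.zeros_like(mean_img),
                     where=std_img > 0)                        # temporal SNR map
    # Mean absolute volume-to-volume change: a crude motion/instability flag
    vol_diff = np.abs(np.diff(timeseries, axis=-1)).mean(axis=(0, 1, 2))
    # Linear drift of the global signal over the run
    global_signal = timeseries.mean(axis=(0, 1, 2))
    drift = np.polyfit(np.arange(global_signal.size), global_signal, 1)[0]
    return tsnr, vol_diff, drift

# Example with random data standing in for a phantom run
rng = np.random.default_rng(0)
data = 100.0 + rng.normal(0.0, 1.0, size=(16, 16, 8, 50))
tsnr, vol_diff, drift = qa_summary(data)
print(tsnr.mean(), vol_diff.mean(), drift)
```

In practice such metrics would be tracked against coil- and sequence-specific baselines so that a gradual degradation stands out before it becomes a hard failure.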

You are working on developing Gadgetron. What is the goal you want to achieve with Gadgetron?

Gadgetron is a hugely useful tool for the medical imaging community in general. It allows us to just tap off the data, do our processing, extract what measures we think are going to be useful and have them displayed to the users on a separate monitor. We also use it for image reconstruction schemes which are not so trivial to integrate on a vendor’s platform. Additionally, it allows us to be independent of the scanner software version. In the past 12 months, we have upgraded two of our systems quite significantly in terms of the software as well as the hardware. From a Gadgetron perspective, although you have to modify the interface, you already have your developments. So, you get consistency in terms of how you are handling the data, which is very powerful.

Another aspect is the flexibility in terms of rapid prototyping; we use MATLAB a lot within the Gadgetron framework. For susceptibility measurements, it can sometimes be difficult to get good quality phase data. However, when we do our own reconstruction offline, we can ensure that we are getting the data that we need and have it sent back to the scanner. For the user it is completely seamless, while we have control and complete clarity on exactly what is being done in the pipeline. From the perspective of open science and reproducibility, having that kind of an open-source framework is immensely valuable and beneficial for the community as a whole.
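
As an illustration of the sort of offline phase reconstruction described here, and not of the Gadgetron interface itself, the sketch below combines complex coil images while preserving phase by removing each coil's phase offset relative to a reference coil. The smoothing choice and all names are illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_complex(img, box):
    """Box-smooth a complex image (real and imaginary parts separately)."""
    return uniform_filter(img.real, box) + 1j * uniform_filter(img.imag, box)

def combine_phase(coil_images, box=9):
    """Phase-preserving combination of complex coil images, shape (n_coils, x, y).

    Each coil's phase offset relative to the first coil is estimated from a
    smoothed product image (the object phase cancels in the product), removed,
    and the phase of the complex sum is returned.
    """
    ref = coil_images[0]
    combined = np.zeros_like(ref)
    for coil in coil_images:
        offset = np.angle(smooth_complex(coil * np.conj(ref), box))
        combined = combined + coil * np.exp(-1j * offset)
    return np.angle(combined)

# Toy example: two coils see the same object phase with different offsets
x = np.linspace(-1, 1, 64)
obj_phase = np.outer(x, x)
coils = np.stack([np.exp(1j * (obj_phase + 0.5)),
                  np.exp(1j * (obj_phase - 1.0))])
# Recovered phase equals the object phase up to the reference coil's offset
print(np.allclose(combine_phase(coils), obj_phase + 0.5, atol=1e-6))
```

Running such a step offline and sending the result back to the scanner is what gives the group full control over, and a clear record of, exactly how the phase data used for susceptibility mapping were produced.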

You recently got the funding for a 7T system. Congratulations! What are your plans with the 7T and what do you expect to learn from its data that you cannot learn from your 3T fleet?

At the moment, we are up against the limits of resolution that we can achieve for our functional imaging at 3T – the same goes for anatomical imaging, including the quantitative MRI. We work at maybe 1.5 millimeter isotropic for the functional imaging at our highest resolution, and 800 microns for the anatomical imaging. One of the big benefits of the 7T would be to push that forward and try to access much more discrete units of neuronal computation, to be able to look at laminar-specific processing or to access units like cortical columns. This would open up a new dimension for researchers in terms of the questions they can ask.
