Dr. Haldar is an Associate Professor of Electrical and Computer Engineering at the University of Southern California. He has devoted considerable effort to breaking down barriers so that students from many different backgrounds can learn the ins and outs of MRI.

## Dr. Justin Haldar

## Associate Professor of Electrical and Computer Engineering at the University of Southern California

**When you think of the academic profile of the typical MR research community member, what is their educational background?**

The MR research community is extraordinarily diverse, which makes it hard to define a “typical” educational background. The overwhelming majority of the research community either already has an advanced degree or is in the process of obtaining one, but these degrees are from a wide variety of different disciplines. Clinicians are not trained in the same way as neuroscientists and biologists, who are not trained in the same way as physicists and chemists, who are not trained in the same way as mathematicians and engineers. Even for people within the same broad discipline, training can be vastly different. An example of this is electrical and computer engineering (my home department) – an electrical engineer trained in signal processing and information theoretic aspects of MRI often knows very different things from an electrical engineer trained in electromagnetic aspects of MRI. It can also be dangerous to stereotype – I know chemists who know substantially more about RF coil design than most electrical engineers (with soldering skills to match!), and mechanical engineers who are experts in rodent surgery.

Everyone has developed expertise along their own unique path, and this broad diversity is generally a great thing for our field – but it also means that explanations intended to reach everyone in the broader community may be less helpful than they could have been for subcommunities with deeper expertise in certain areas.

**Coming from the “traditional” classical physics approach to understanding MR signal generation, I lean on the Bloch equations. What assumptions do we make in teaching the classic Bloch-Torrey equations as the way to describe spin dynamics in a system? How does this limit our intuition when designing experiments or new contrasts?**

Let us take the case of the description of signal formation in diffusion-weighted MRI. There are a lot of different explanations of diffusion MRI out there, so I should be careful not to overgeneralize. But a lot of descriptions start by stating that diffusion contrast occurs because of the random thermal motion of individual molecules, and then make a quick jump to something like the diffusion equation or the Bloch-Torrey equation, which are partial differential equations that describe macroscopic ensemble-average behavior while abstracting away the lower-level details. For example, when you look at the standard form of the Bloch-Torrey equation, the underlying randomness is not very obvious, and the assumptions being made about the nature of the diffusion process for individual molecules are largely hidden. Moreover, the Bloch-Torrey equation is quite simple, with the ensemble behavior of a large number of randomly-moving spins somehow being replaced by very simple quantities like the diffusion coefficient (if diffusion is isotropic) or the diffusion tensor (if diffusion is anisotropic). It is natural to think that some strong simplifying assumptions or approximations must have been made to end up with such a simple expression, but the actual details are often glossed over. And another major issue is that the simple model for diffusion encoding that arises from the Bloch-Torrey equation works well for very simple phantoms, but does not actually fit the signal from complicated biological tissues very well. The lack of consistency between theory and practice can be confusing and can make it harder for learners to feel confident in their understanding.
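For readers who want the equation on the page, a standard form of the Bloch-Torrey equation for the transverse magnetization is sketched below (notation varies across textbooks; this is one common convention):

```latex
% Bloch-Torrey equation for the transverse magnetization
% M_perp = M_x + i*M_y in the rotating frame, under a gradient G(t):
\[
  \frac{\partial M_\perp(\mathbf{r},t)}{\partial t}
    = -i\gamma \left( \mathbf{G}(t)\cdot\mathbf{r} \right) M_\perp
      - \frac{M_\perp}{T_2}
      + \nabla \cdot \left( \mathbf{D}\, \nabla M_\perp \right)
\]
% For a pulsed-gradient spin-echo experiment with isotropic diffusion
% (D a scalar), solving this yields the Stejskal-Tanner attenuation:
\[
  S = S_0\, e^{-bD},
  \qquad
  b = \gamma^2 G^2 \delta^2 \left( \Delta - \tfrac{\delta}{3} \right)
\]
```

Note how, exactly as described above, the randomness of individual molecules is nowhere visible: diffusion enters only through the macroscopic tensor $\mathbf{D}$ in the final term.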

Lack of understanding and lack of confidence are serious impediments to doing good science or coming up with creative/innovative new ideas. It is hard to gain fundamental new insights or to disrupt the status quo if you do not have a strong understanding of basic principles – so anything that improves understanding is potentially valuable.

**Recently you have described a method of arriving at the descriptive parameters of diffusion-encoded MR from a random process perspective[1] rather than a classical physics perspective. Why try to reinvent the wheel here?**

I should probably be clear that the random process perspective is still heavily reliant on the same physical principles as other approaches – we are just computing theoretical models of the signal behavior in a different way that is potentially more illuminating for certain audiences.

One of my main motivations was that I saw an opportunity to teach my own students (largely from the electrical and computer engineering department) more efficiently and effectively. I felt that the random process perspective was likely to be substantially easier for my students to understand compared to the descriptions designed for more general audiences. As part of their graduate training, all of my students take advanced courses in probability and random processes – topics that are central to how diffusion MRI works, but which frequently are not included in standard descriptions (likely because many members of the MRI community do not have substantial training in those topics). The random process perspective also feels much more tangible to me, involving probabilistic models for the trajectories of individual molecules with very clearly-stated assumptions, in contrast to the standard approaches where the individual molecules are often abstracted away using assumptions and approximations that are not always clear. Of course, quantum mechanics means that there are also problems with trying to think about the behavior of individual particles, but the math still works out the right way if we neglect those issues.
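As a concrete illustration of the random process perspective, here is a minimal Monte Carlo sketch (parameter values are hypothetical illustrative choices, not taken from the paper): simulate 1D Brownian trajectories for an ensemble of spins under a Stejskal-Tanner gradient pair, accumulate each spin's phase, and check the ensemble-average signal against the familiar $e^{-bD}$ prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pulsed-gradient spin-echo (Stejskal-Tanner) parameters.
gamma = 2.675e8      # 1H gyromagnetic ratio (rad/s/T)
G     = 30e-3        # gradient amplitude (T/m)
delta = 10e-3        # gradient lobe duration (s)
Delta = 20e-3        # lobe separation (s)
D     = 2.0e-9       # free diffusion coefficient (m^2/s)
dt    = 1e-4         # simulation time step (s)
n_spins = 50_000

# Effective gradient waveform: the 180-degree pulse flips the sign
# of the second lobe, so we model it as +G then -G.
t = np.arange(0, Delta + delta, dt)
g = np.where(t < delta, G, 0.0) \
    - np.where((t >= Delta) & (t < Delta + delta), G, 0.0)

# 1D Brownian trajectories: independent Gaussian steps, variance 2*D*dt.
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_spins, t.size))
x = np.cumsum(steps, axis=1)

# Each spin accumulates phase phi = gamma * integral of g(t)*x(t) dt.
phi = gamma * dt * (x * g).sum(axis=1)

# Ensemble-average signal (imaginary part averages to ~0 by symmetry).
signal = np.cos(phi).mean()

# Closed-form prediction from the Bloch-Torrey picture.
b = (gamma * G * delta) ** 2 * (Delta - delta / 3)
print(signal, np.exp(-b * D))  # the two agree to within ~1%
```

The assumptions are all right there in the code: independent Gaussian increments, no barriers or restrictions, a single diffusion coefficient – exactly the clearly stated probabilistic modeling choices described above, and exactly the assumptions that break down in complicated biological tissue.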

But I should also say that the random process description of diffusion MRI is in no way a replacement for other descriptions – these perspectives are all complementary. When I teach courses at my university, I very frequently spend class time to derive the same results in different ways. This is in part because some derivations resonate better with some students than they do with others, but it is also because repetition is a key component of learning. The same considerations apply here as well.

[1] https://arxiv.org/abs/2108.01188

**Given the last 15 years of progress in MR, somewhat arbitrarily starting with parallel imaging, then compressive sensing, and now AI/ML techniques, who wins between physicists, mathematicians, and signal processing experts?**

The short answer is that I think the whole field wins when people from different disciplines come together and synergistically share their complementary expertise and perspectives.

A longer answer might include the observation that it can be difficult to draw clear lines between these different disciplines – signal processing could be viewed as a specialized form of applied math that has a lot of very important real-world applications (including to physics). The line between math and physics can also be fuzzy, particularly when it comes to the theoretical side of physics. Consider someone like Fourier – a mathematician and physicist who made contributions that are also fundamental to signal processing. Or to give a more modern example, think about someone like Sir Roger Penrose. I am most aware of Penrose because of the Moore-Penrose pseudoinverse, a concept from linear algebra that is really useful for solving inverse problems (like MR image reconstruction) with many signal processing applications. But he also won the 2020 Nobel Prize in Physics for important contributions to the general theory of relativity, and has made many other contributions besides. Over the years, I have taught plenty of non-engineers in my electrical engineering classes, and I also know plenty of people who were trained as electrical engineers who have gone on to work in a completely different field. Disciplinary boundaries are only a barrier if you allow yourself to be confined by them.
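To make the pseudoinverse connection concrete, here is a toy least-squares example (the matrix here is random filler for illustration; real MR encoding operators are far larger and highly structured). The Moore-Penrose pseudoinverse recovers the unknowns of an overdetermined linear "forward model" in one line:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear inverse problem y = A x, with a hypothetical 8x4
# "encoding" matrix standing in for an MR forward model.
A = rng.normal(size=(8, 4))
x_true = np.array([1.0, -2.0, 0.5, 3.0])
y = A @ x_true

# The Moore-Penrose pseudoinverse gives the least-squares solution,
# equivalent here to solving the normal equations (A^T A) x = A^T y.
x_hat = np.linalg.pinv(A) @ y

print(np.allclose(x_hat, x_true))  # → True (noiseless, full-rank case)
```

With noisy data the same expression returns the minimum-norm least-squares estimate, which is the usual starting point for regularized image reconstruction.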

An even more nuanced answer might push back a little on your description of progress in MR. If we are talking specifically about things like constrained image reconstruction in MRI, this is a field with a history that goes back much further than 15 years. You probably do not want a full accounting (Zhi-Pei Liang and I recently wrote a book chapter on “‘Early’ Constrained Reconstruction Methods” that gives a much more detailed description for those who might want more – publication date TBD), but there were plenty of useful constrained image reconstruction methods in the 1980s and 1990s that are still relevant today. There were even ML methods back in the early 1990s that derived optimal k-space sampling and optimal image reconstruction methods based on a database of existing images. Even in those days, the researchers did not all come from the same department, with a good mix of physicists, mathematicians, and engineers.

If we are talking more broadly than just constrained image reconstruction, there are also a lot of important contributions beyond what has been listed – way too many to enumerate. But the contributions are definitely coming from all kinds of researchers from all kinds of different departments. I was a student at the University of Illinois when Paul Lauterbur won the Nobel Prize, and had just started grad school when the ensuing celebrations were held. I was there when many of the pioneers of our field flew in to give speeches and offer congratulations to Paul, which was a wonderful early experience for me – the people I saw speaking there (and also the MRI experts I knew at Illinois) definitely did not all have the same academic backgrounds. Every field has something unique to offer, and I do not see a need to fight over artificial/poorly-defined disciplinary boundaries.

**Where can we reach across disciplines to better understand what our samples are doing? Do you have a favorite “Ah-hah!” moment where you decided to think about a problem both from classical physics and then from statistics/signal processing?**

I am probably not answering this question in the way it was originally intended, but I have always benefitted from having awareness of work that was going on outside of MRI. For instance, my first first-author journal paper (published in 2008, back when I was a grad student) was highly influenced by image reconstruction papers I had read about positron emission tomography (PET). PET and MRI have lots of differences, but they also have enough similarities for certain principles and ideas to carry over. We are still pushing this line of work – a paper we recently published on high-resolution diffusion MRI (which was selected as an MRM Editor’s Pick for August 2020) is a direct descendant of my early PET-inspired work.

Another example is that my work on sparse and low-rank modeling for MRI benefitted greatly from my awareness of theoretical developments within the math and statistics (and signal processing) communities. And perhaps the most recent example is our work on Region-Optimized Virtual (ROVir) coils that uses beamforming ideas to suppress unwanted spatial regions (selected as an MRM Editor’s Pick for July 2021) – the development of ROVir was highly influenced by my involvement in magnetoencephalography (MEG) projects where beamforming principles were being routinely applied.

And make no mistake, ideas that were originally developed for MRI can also be useful outside of MRI. An example of this is the low-rank modeling of k-space neighborhoods (LORAKS) framework that my group has been developing for the past several years for constrained MRI reconstruction. One of my students has recently shown that this same type of approach can also be useful in applications like X-ray computed tomography.

My view has always been that, when trying to develop something new, it is good to have as many tools in your toolbox as possible – and a great way to expand your toolbox is to have broad awareness of things that are going on in different disciplines.

I do not know if these really count as “ah-hah” moments, but let me offer some examples of situations where electrical engineering students can benefit from alternative explanations. I have found that signal processing students sometimes initially struggle to understand the g-factor concept from parallel imaging. However, because of their training in estimation theory, their understanding improves almost immediately when I explain how the g-factor can be viewed in terms of estimation-theoretic concepts like Cramér-Rao bounds. Similarly, signal processing students are generally familiar with linear constant-coefficient difference equations and solution methods based on z-transforms, which can potentially be leveraged to enhance their understanding of things like steady-state pulse sequences in MRI. Learning can benefit tremendously when ideas can be connected back to things that students are already familiar with.

**Any requests for input on what you would want to tackle or re-derive from a different perspective? Requests for help on the topics?**

People are free to send me suggestions, although I cannot promise that I will be able to find the time to actually work on them! My motivation for creating this description of diffusion MRI was largely self-serving, since my group does a lot of work with diffusion MRI, and it could save me a lot of time if my students were able to understand the principles more quickly. The fact that the broader community could also benefit from my description is a nice perk that I am very happy about, but I cannot pretend that my initial motivation was purely altruistic. When I was in grad school, a math professor I knew liked to tell graduate students that there were infinitely many unsolved math problems in the universe, but most of them were probably not worth spending time on – since we do not have an infinite amount of time in our lives, choosing what to spend time on is something that requires taste. The same principle applies here – and as the old saying goes, there is no accounting for taste, and I cannot promise that my tastes will match those of others in the community!

I would probably also just generally encourage the community to try to do things like this for themselves if they see a worthwhile opportunity. A side benefit is that deriving something for yourself in a new way will give you deeper understanding of a topic than simply reading someone else’s derivation – I know for sure that my personal understanding of concepts like b-values, isotropic diffusion encoding, and oscillating-gradient experiments was enhanced after going through this exercise, and I am sure that others will reap similar rewards if they choose to do something similar with their own topic of choice! This is also one of the reasons I frequently encourage my students to implement their own software from scratch, instead of relying on software written by other people – if you really want to understand something deeply, you have got to wrestle with the details for yourself. The expertise and insight you gain through that process puts you in a much better position to be innovative.