Computational psychiatry

Applying formal mathematical system models to brain function, Prof Dr Klaas Enno Stephan is shedding light on psychiatric diseases. We posed a few questions to the computational neuroscientist who also holds a doctoral degree in medicine.

Klaas Enno Stephan, Prof Dr

IBT, University of Zurich and ETH Zurich

You apply neural modelling in order to understand psychiatric diseases and find treatments for them. Why do you do this?

Firstly, I have always wanted to apply formal mathematical system models to brain function. Secondly, I wanted to apply them in a context that is clinically useful and not simply l'art pour l'art, that is, modelling for modelling's sake. And thirdly, because I do believe that across the whole spectrum of medicine, it is psychiatry that is least well developed in terms of diagnostics and principled clinical care, and where there is the greatest need for development.

Should models always be mechanistic in some way?

I would not say always, but preferably. If you have a black-box predictor that turns measurements from a patient into a statistical prediction about treatment response or clinical outcome, then this is useful and should not be rejected. However, if you have the choice between a black-box predictor and a prediction that rests on a mechanistic insight into the individual person's brain, then the latter is to be preferred, for a number of reasons. Firstly, when you have a prediction that is mechanistically interpretable, you can derive new treatment ideas from it. Secondly, things can easily go wrong in statistical predictions, and if you cannot understand the mechanism that feeds a prediction, it is easy to believe a statistical result that may be flawed. And thirdly, humans in general and patients in particular need a narrative. It is not good enough to say: here is an oracle that tells you that if you take this drug you will be fine, or that in two years you will transition to psychosis. You need a narrative, some meaning behind your suffering or your healing. That is fundamentally important for patients, and it can only be provided by a model that can be interpreted in terms of mechanisms.

When it comes to shedding light on these mechanisms, how important is it for you to be able to perform single-subject analysis versus group analysis in your research, and maybe later in treatment?

When it comes to clinical applications, there is nothing but single-subject analysis, of course, as a group analysis does not allow you to make statements about an individual (unless you take a group result to provide you with a reference distribution). In research it depends very much on the stage: if you are trying to evaluate the general utility of a new method, then a group analysis is perfectly fine, because first of all you want to see whether, on average, you can detect a mechanism that you know is present. So group analyses are not useless at all, but when it comes to the actual clinical application, the conclusion that you derive and turn into a clinical recommendation necessarily has to rest on data from that single patient.
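To make the reference-distribution idea concrete, here is a minimal Python sketch of how a single patient's model estimate could be situated within a group result. All variable names and numbers are made up for illustration; they do not come from any actual study.

```python
import numpy as np

# Hypothetical model-parameter estimates (e.g., an effective-connectivity
# strength) from a healthy reference group; all values are made up.
group_estimates = np.array([0.42, 0.55, 0.38, 0.61, 0.47, 0.52,
                            0.44, 0.58, 0.49, 0.40, 0.53, 0.46])

# Estimate for one new patient (also made up).
patient_estimate = 0.21

# Use the group result as a reference distribution: express the patient's
# value as a z-score and as an empirical percentile.
mu, sigma = group_estimates.mean(), group_estimates.std(ddof=1)
z = (patient_estimate - mu) / sigma
percentile = (group_estimates < patient_estimate).mean() * 100

print(f"z-score: {z:.2f}, empirical percentile: {percentile:.0f}%")
```

The single-subject statement (how unusual this patient's estimate is) still rests entirely on that patient's own data; the group merely supplies the yardstick.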

In what ways can the sensitivity of these single-subject scans be increased? Does the combination of modalities help?

I am not sure there is a general recipe. Simply increasing the dimensionality of the data can be more harmful than useful. As a researcher, you have to tailor your measurement to your scientific question. The key thing in my particular domain is to have a strong, empirically well-supported hypothesis about a psychological or physiological mechanism and to obtain data from an individual under optimal conditions. In psychiatry, this includes the necessity of welcoming patients into an environment that is calming and pleasant, because patients are often incredibly nervous and anxious. If you do not manage to calm them down and make the stress disappear, your measurements will be useless, as you would be measuring processes dominated by affective states and artifacts. For example, in EEG measurements, simple eye blinks and movements of facial muscles produce potentials that are far bigger than the subtle brain responses we are trying to measure.
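To give a sense of scale: ocular and muscle artifacts can reach on the order of 100 µV, while the cortical responses of interest are often only a few microvolts. Below is a minimal Python sketch, on simulated data, of the crudest form of artifact handling, rejecting epochs whose peak-to-peak amplitude betrays a blink. The 80 µV threshold is an arbitrary illustrative choice; real pipelines use more sophisticated methods, such as ICA-based artifact correction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single-channel EEG epochs in microvolts: background activity of
# a few microvolts, with a large blink-like deflection added to one epoch.
n_epochs, n_samples = 5, 250
epochs = rng.normal(0.0, 5.0, size=(n_epochs, n_samples))
epochs[2] += 150.0 * np.exp(-0.5 * ((np.arange(n_samples) - 125) / 15) ** 2)

# Simplest artifact handling: reject any epoch whose peak-to-peak amplitude
# exceeds a threshold (80 µV here, chosen arbitrarily for illustration).
threshold_uv = 80.0
peak_to_peak = epochs.max(axis=1) - epochs.min(axis=1)
keep = peak_to_peak < threshold_uv

print("peak-to-peak (µV):", np.round(peak_to_peak, 1))
print("epochs kept:", np.flatnonzero(keep))
```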

In the field of psychiatry, is it important to spot spontaneous activation?

If I may reformulate the question: Are there experimental designs where we deliberately refrain from exerting experimental control and allow for unconstrained cognition, which is what people euphemistically refer to as the “resting state”?

In our research, that has played a minor role so far. We do like unconstrained cognition, or the "resting state", because it is so simple and places no demands on patients. However, precisely because it is so unconstrained, it is hard to derive mechanistic insights from those data. We therefore tend to use experimentally controlled situations.

You are performing a large laminar fMRI study that resolves the layers of the cortex. What do you expect from the layer-wise analysis of your data?

We are interested in laminar fMRI because some of the most influential hypotheses in cognitive science at the moment, a class of theories that one might refer to as the "Bayesian brain hypothesis", postulate that the brain is essentially a statistical machine that does inference and prediction all the time. More technically speaking, the brain is assumed to represent a "generative" model of the world; this is a forward model of how states of the world cause the sensory inputs that the brain receives. The brain can then invert this forward model and infer the hidden states of the world from the inputs it receives. The algorithmic architecture thought to implement this is a hierarchical Bayesian one. Anatomically, we believe that it maps onto message passing between different cortical areas, where specific layers fulfill specific functions in terms of sending and receiving predictions or prediction errors. Because this theory is incredibly important for psychiatric disease and has spawned many concrete ideas about different psychiatric disturbances, we are keen to have good measurements of layer-wise activity, as we need them in order to test these hypotheses. With respect to clinical applications, we would like to have functional readouts from different laminae to understand how information processing in a specific patient might diverge from the healthy population.
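As a toy illustration of the inversion Stephan describes, consider a one-level Gaussian generative model in which a hidden state of the world causes a noisy sensory input. Inverting this model reduces to updating the prior prediction by a precision-weighted prediction error. The numbers below are illustrative, and this is a deliberately simplified sketch of a single level of what would, in the brain, be a deep hierarchy.

```python
# Toy Bayesian-brain example: invert a one-level Gaussian generative model.
# Generative model (all numbers illustrative):
#   hidden state:   x ~ N(mu_prior, 1 / pi_prior)
#   sensory input:  y | x ~ N(x, 1 / pi_sensory)

mu_prior = 0.0      # prior prediction about the hidden state
pi_prior = 1.0      # prior precision (inverse variance)
pi_sensory = 4.0    # sensory precision: how reliable the input is

y = 2.0             # observed sensory input

# Exact posterior for this conjugate Gaussian model: the belief update is a
# prediction error weighted by the relative precision of the senses.
prediction_error = y - mu_prior
learning_rate = pi_sensory / (pi_prior + pi_sensory)
mu_posterior = mu_prior + learning_rate * prediction_error
pi_posterior = pi_prior + pi_sensory

print(f"posterior mean: {mu_posterior:.2f}, "
      f"posterior precision: {pi_posterior:.1f}")
```

In the hierarchical versions that are thought to map onto cortical laminae, such updates are stacked, with ascending connections carrying prediction errors and descending connections carrying predictions, which is exactly why layer-resolved fMRI readouts matter for testing the theory.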

The reproducibility of research is being challenged, particularly in neuroscience. What do you think are the shortcomings at the acquisition stage?

With respect to data acquisition, the greatest problem is perhaps head motion. If you do not control head motion well, it can easily overshadow all the experimental signals you are after. For reproducibility, however, the bigger problem is perhaps data analysis: the data we acquire are very high-dimensional and require sophisticated analysis procedures that are not easy to master for scientists from fields without a formal mathematical grounding.

To finish up, what is your biggest wish from the people who build MRI equipment, sequences and technology?

What I think is really needed for MRI research to get closer to applications are measurements of sufficiently high spatial resolution that we can image even the very small structures of the brain stem, which is critical for most psychiatric diseases. Personally, I will start to experience a sense of satisfaction when we are at a resolution below 0.5 mm isotropic; that is the sort of level we have to get to. Secondly, those measurements should ideally also have a fairly high sampling rate, because that turns out to be important for models of brain connectivity; here we are talking about sampling intervals of perhaps 0.5 or 0.3 seconds. Finally, we need excellent control of physiological noise, which includes head motion. That is probably the single most important problem for good-quality data.
