Between the lines: Current Biology, 2018

The first paper from my PhD, titled “Confirmation bias through selective overweighting of choice-consistent evidence”, is just out in Current Biology. A blogpost explaining the paper and its findings, co-written by me, can be found here. In the post below, I share the methodological insights I gained while working on the project, along with some mistakes that hindered my progress. The purpose is to make early career researchers like me aware of potential pitfalls, and of efficient practices that help them get the most out of their time.

1. Model comparison using a single measure (e.g. the Bayesian Information Criterion, BIC) need not always reveal the correct model for the data. In the paper, model comparison using BIC (Figure 1A) was further corroborated by model simulations. Specifically, we simulated data from each model using the best-fitting parameters (Figure 1B) and across a range of parameters (Figure 1C). We then checked whether the model-free ROC index, a diagnostic feature of the behavioural data, was present in the simulated data (Figure 1B, C below). This allowed us to conclude with confidence that the Choice-based Selective Gain model accounted for the data. A minimal code sketch of such a comparison follows the figure below.

Figure 1
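To make the logic concrete, here is a minimal sketch of a BIC comparison between two models. The fitting itself is assumed to have happened already, and all numbers, log-likelihoods and parameter counts are hypothetical placeholders rather than values from the paper.

```python
# Hypothetical fit summaries, purely for illustration; the model names mirror
# those in the paper but the numbers are placeholders.
import numpy as np

def bic(log_likelihood, n_params, n_trials):
    """Bayesian Information Criterion: lower is better; the n_params * log(n)
    term penalises model complexity."""
    return n_params * np.log(n_trials) - 2.0 * log_likelihood

fits = {
    "Baseline":                    {"ll": -512.3, "k": 3},
    "Choice-based Selective Gain": {"ll": -489.7, "k": 5},
}
n_trials = 400

scores = {name: bic(f["ll"], f["k"], n_trials) for name, f in fits.items()}
print(scores, "-> best by BIC:", min(scores, key=scores.get))

# BIC alone is not the whole story: the next step is to simulate data from each
# fitted model and check whether the model-free diagnostic (the ROC index, see
# point 2) is reproduced in the simulations before trusting the comparison.
```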

2. Where possible, use non-parametric measures for statistical tests and for model-free analyses. In the study, the behavioural measure on every trial was a continuous estimate. To compare how these estimates differed across two conditions of interest, we used the ROC index (illustrated below, Figure 2A) instead of the mean (or median) and standard deviation. The ROC index quantifies the separability of two distributions (Figure 2B). ROC curves (Figure 2A, right panel) were constructed by shifting a criterion across both distributions (Figure 2A, left panel) and plotting against one another, for each position of the criterion, the fraction of trials in each distribution for which responses exceeded the criterion. The area under the resulting ROC curve (AUC; grey shading), here referred to as the ‘ROC index’, quantifies the probability with which an ideal observer can predict, from a single-trial estimate, which distribution it came from. An ROC index of 0.5 implies that the distributions are inseparable (dashed line in Figure 2A, right panel). Importantly, the ROC index differs from 0.5 when either the means or the spreads of the distributions differ. Two model-based measures (weights and noise, Figure 2C) explain the effect captured by the ROC indices. A code sketch of the criterion-sweep computation follows the figure below.

Figure 2
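A minimal sketch of that computation; the example data and variable names are illustrative placeholders, not data from the paper.

```python
# Sweep a criterion across the estimates from two conditions and compute the
# area under the resulting ROC curve.
import numpy as np

def roc_index(estimates_a, estimates_b, n_criteria=200):
    """Probability that an ideal observer can tell, from a single-trial
    estimate, which condition it came from (0.5 = inseparable)."""
    lo = min(estimates_a.min(), estimates_b.min()) - 1e-12
    hi = max(estimates_a.max(), estimates_b.max()) + 1e-12
    criteria = np.linspace(hi, lo, n_criteria)            # sweep from high to low
    rate_a = [(estimates_a > c).mean() for c in criteria]
    rate_b = [(estimates_b > c).mean() for c in criteria]
    return np.trapz(rate_a, rate_b)                       # area under the ROC curve

rng = np.random.default_rng(1)
cond_a = rng.normal(0.6, 1.0, size=500)   # e.g. choice-consistent trials
cond_b = rng.normal(0.0, 1.0, size=500)   # e.g. choice-inconsistent trials
print(roc_index(cond_a, cond_b))          # > 0.5: condition A sits higher
```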

3. Simulated data can be used to validate model-free measures. This is common practice among researchers developing new methods, where data simulated with a null effect are used to validate the method. We established the ROC index as a diagnostic feature of confirmation bias in our dataset only after validating it with data simulated from the Baseline model, which by design does not differentiate the two conditions (Figure 1C, second panel from the right).
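The logic, in a minimal sketch: the generative “null” model here is a deliberately simplified stand-in for the Baseline model, and the numbers are illustrative only.

```python
# Simulate many datasets from a model that, by construction, treats the two
# conditions identically, and confirm that the ROC index stays near 0.5.
# The null distribution also gives an interval against which an observed
# ROC index can be judged.
import numpy as np

def roc_index(a, b):
    # AUC written directly as P(a > b) + 0.5 * P(a == b)
    a, b = np.asarray(a), np.asarray(b)
    return (a[:, None] > b[None, :]).mean() + 0.5 * (a[:, None] == b[None, :]).mean()

rng = np.random.default_rng(7)
null_rocs = []
for _ in range(1000):
    cond_a = rng.normal(0.0, 1.0, size=200)   # null model: both conditions drawn
    cond_b = rng.normal(0.0, 1.0, size=200)   # from the same distribution
    null_rocs.append(roc_index(cond_a, cond_b))

print(np.mean(null_rocs))                      # should be close to 0.5
print(np.percentile(null_rocs, [2.5, 97.5]))   # null interval for the ROC index
```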

4. An often overlooked aspect of modelling studies is the ability of models to recover parameters from simulated data. Specifically, the model should recover individual parameters faithfully (visualising the correlation between the actual and recovered parameters helps), and the recovered parameters should not exhibit correlations that did not exist between the actual parameters. Parameter recovery is the reason we ruled out the Extended Choice-based Selective Gain model, the most complex model in the paper (see the Parameter Recovery section of the STAR Methods): its recovered parameters exhibited spurious correlations.
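A schematic of such a check, using a toy linear-Gaussian model in place of the models from the paper; all names and values are illustrative, but the logic is generic: simulate data from known parameters, refit, and compare.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects, n_trials = 30, 300

true_gain  = rng.uniform(0.5, 1.5, n_subjects)   # generating parameters,
true_noise = rng.uniform(0.5, 2.0, n_subjects)   # drawn independently

rec_gain, rec_noise = [], []
for g, s in zip(true_gain, true_noise):
    x = rng.normal(0, 1, n_trials)               # stimulus
    y = g * x + rng.normal(0, s, n_trials)       # simulated responses
    g_hat = np.polyfit(x, y, 1)[0]               # refit: slope ~ gain
    s_hat = np.std(y - g_hat * x, ddof=1)        # residual SD ~ noise
    rec_gain.append(g_hat)
    rec_noise.append(s_hat)

# 1) Each parameter should be recovered faithfully...
print(pearsonr(true_gain,  rec_gain))
print(pearsonr(true_noise, rec_noise))
# 2) ...and recovery should not introduce correlations that were absent
#    between the generating parameters (a red flag for overly complex models).
print(pearsonr(rec_gain, rec_noise))
```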

5. When doing a fixed-effects analysis (treating the sample population as one subject), bootstrapping to obtain confidence intervals for the parameter estimates allowed us to better understand the differences between the estimates. On the same note, obtaining confidence intervals for the parameter estimates of each individual subject gave us confidence about the effect size in each subject.
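A minimal bootstrap sketch: the “fit” here is a placeholder mean over a toy pooled dataset; in practice it would be the model-fitting routine applied to the pooled trials.

```python
import numpy as np

rng = np.random.default_rng(42)
pooled_trials = rng.normal(0.3, 1.0, size=5000)    # stand-in for pooled data

def fit_parameter(trials):
    return trials.mean()                           # placeholder for a model fit

boot = np.array([
    fit_parameter(rng.choice(pooled_trials, size=len(pooled_trials), replace=True))
    for _ in range(2000)
])
estimate = fit_parameter(pooled_trials)
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"estimate = {estimate:.3f}, 95% CI = [{ci_low:.3f}, {ci_high:.3f}]")
```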

6. The last, but a very important, aspect of every project is adopting good coding practices. Specifically, in this project I found it extremely useful to check every snippet of code, possibly by visualising its outputs, to make sure it does what it is supposed to do before starting any computationally expensive analyses. I am currently working on an fMRI project, and this practice has saved a lot of time and effort by allowing me to identify bugs and to check whether the pipeline introduces unintended artefacts into the analyses.
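One way to put this into practice, sketched below: run each processing step on a small synthetic input whose correct output is known, then assert and plot the result before scaling up. The detrending step and all names here are purely illustrative, not part of any actual pipeline.

```python
import numpy as np
import matplotlib.pyplot as plt

def detrend(signal):
    """Example pipeline step: remove a linear trend from a 1-D time series."""
    t = np.arange(len(signal))
    slope, intercept = np.polyfit(t, signal, 1)
    return signal - (slope * t + intercept)

# Synthetic input with a known trend: after detrending, the residual
# should have ~zero slope.
t = np.arange(200)
synthetic = 0.05 * t + np.random.default_rng(0).normal(0, 1, 200)
out = detrend(synthetic)
assert abs(np.polyfit(t, out, 1)[0]) < 1e-6, "detrending left a residual trend"

plt.plot(t, synthetic, label="input")
plt.plot(t, out, label="detrended")
plt.legend(); plt.show()
```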

Before I end, a note of thanks to Konstantinos Tsetsos and Anne Urai, my code-buddies, who taught me some of the above points.


Rewind 2017: My top 5 favourite papers

I read research papers for inspiration and to keep up with the latest developments in my research field. In the process, I came across some papers that impressed me and started compiling a list of these articles. These are my top 5 favourite papers from 2017. Since I am currently working on serial dependencies in perceptual decision-making, the list may be biased towards that topic.

1. Computational precision of mental inference as critical source of human choice suboptimality, Drugowitsch, Wyart et al. 2016, Neuron (doi: 10.1016/j.neuron.2016.11.005)

Technically this paper is not from 2017, but it came out around Christmas 2016, so I am including it in the list. In this paper, Drugowitsch, Wyart and colleagues investigate the source of suboptimality in human choices. Using an elegant task design and an even more elegant quantitative formulation, the authors show that imperfections in inference alone give rise to a dominant fraction of suboptimal choices, and that two-thirds of this suboptimality arises from the limited precision of the neural computations implementing the inference process itself. This allowed them to suggest an upper bound on the accuracy and predictability of human choices in uncertain environments. Suboptimality in decision-making has been well documented in behavioural experiments across a wide range of species, but it has not received the attention it deserves in theoretical models of decision-making. Using simple behavioural manipulations and a very thorough mathematical framework, the paper addresses this issue and, in my opinion, is a must-read for any researcher working on suboptimality in decision-making.

2. How race affects evidence accumulation during the decision to shoot, Pleskac et al. 2017, Psychon Bull Rev (doi: 10.3758/s13423-017-1369-6)

The accumulation-to-bound class of models, and in particular the drift diffusion model, has been used extensively to model behavioural data from perceptual decision-making studies. This paper uses the same modelling framework to understand how racial stereotypes bias observers’ choices to shoot in a first-person shooter task. The authors found that racial stereotypes systematically bias the rate at which evidence accumulation takes place in the decision to shoot, with faster accumulation towards shooting Black targets. They also found that some participants counteracted this bias by setting a higher decision threshold, thereby collecting more evidence for Black targets before reaching a decision. While this study probes how racial stereotypes enter the evidence accumulation process in the context of shooting decisions, the findings have broader implications for understanding how biases affect our decision-making in general. I am especially impressed by the paper because it takes a quantitative framework that has been used to study low-level processes, applies it to a real-world situation, and shows analogies between the two.
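To illustrate the two mechanisms, here is a generic drift-diffusion sketch (not the authors’ model; all parameter values are illustrative): a higher drift rate pushes decisions more quickly towards one boundary, while a higher threshold demands more evidence before either response is made.

```python
import numpy as np

def simulate_ddm(drift, threshold, noise=1.0, dt=0.001, max_t=5.0, rng=None):
    """Return (choice, reaction time): +1 = upper boundary ("shoot"), -1 = lower."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= threshold else -1), t

rng = np.random.default_rng(3)
for label, drift, thr in [("baseline",         0.5, 1.0),
                          ("biased drift",     1.5, 1.0),
                          ("raised threshold", 1.5, 1.8)]:
    sims = [simulate_ddm(drift, thr, rng=rng) for _ in range(500)]
    p_upper = np.mean([c == 1 for c, _ in sims])
    mean_rt = np.mean([t for _, t in sims])
    print(f"{label:16s}  P(upper)={p_upper:.2f}  mean RT={mean_rt:.2f}s")
```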

3. History-based action selection bias in posterior parietal cortex, Hwang et al. 2017, Nature Communications (doi: 10.1038/s41467-017-01356-z)

There has been a recent surge in papers investigating history-dependent biases in perceptual decision-making. Among the many papers providing excellent insights into serial dependencies, this one stands out because it combines behavioural modelling, two-photon calcium imaging and optogenetic inactivation to show a causal role of the posterior parietal cortex (PPC) in mediating the subjective use of history in biasing action selection. The authors found that the activity of PPC neurons during the inter-trial interval (ITI) reflects history-dependent biases, and that inactivating the PPC during the ITI, but not during the task, removes the effect of these biases on behaviour. The paper uses rodents as the model animal, and the findings have important implications for decision-making research.

4. Lateral orbitofrontal cortex anticipates choices and integrates prior with current information, Nogueira, Abolafia et al. 2017, Nature Communications (doi: 10.1038/ncomms14823)

Combining prior with current information is a crucial component of decision-making. In this paper, Nogueira, Abolafia et al. used a novel experimental paradigm that introduced outcome-dependent correlations between consecutive stimuli, and found that rats adapted their behaviour by learning the task contingency. Interestingly, neurons in the lateral orbitofrontal cortex (OFC) showed choice-related activity even before stimulus presentation, and this activity increased with time after stimulus onset. This suggests an important role for the OFC in transforming immediate prior and stimulus information into choices. I am impressed by the authors’ choice of experimental manipulation to tackle the questions asked. This paper is a very good example of why sufficient thought should be given to task design, and not just to the data analysis pipeline.

5. Dynamic modulation of decision biases by brainstem arousal systems, de Gee et al. 2017, eLife (doi: 10.7554/eLife.23232.001)

Variability in choice behaviour during decision-making has been widely observed in the decision-making community, but the underlying neural mechanisms are less understood. de Gee et al. combined behaviour, pupillometry and neuroimaging to show that changes in the brain’s arousal systems, specifically the activity of the locus coeruleus, explain this variability: by boosting global brain-wide arousal, the locus coeruleus reduced intrinsic biases in subjects’ decisions. The paper is particularly impressive in its methods: the authors used 7T fMRI to measure BOLD activity in the locus coeruleus, an extremely challenging task in itself, and showed that the task-evoked pupil response is predicted by locus coeruleus activity, and that the phasic arousal resulting from this activity modulates choice-specific signals, but not sensory responses, in the brain. In my opinion, this paper reads like a guide for any researcher attempting brainstem fMRI.