Confidence reports in decision-making with multiple alternatives violate the Bayesian confidence hypothesis

Psychology


H. Li and W. J. Ma

Discover how confidence in decision-making is shaped by how close the competing options are, not just by the evidence for the chosen option. This research by Hsin-Hung Li and Wei Ji Ma challenges the prevailing Bayesian theory and offers a new perspective on how we evaluate our decisions.

Introduction
Confidence, the "sense of knowing" that accompanies a decision, significantly influences subsequent actions, learning, and group decision-making, and its malfunction is linked to psychiatric disorders. While humans can report the quality of their decisions, the underlying computations remain unclear. The dominant theory, the "Bayesian confidence hypothesis," posits that confidence reflects the probability of a correct decision (the chosen option's posterior probability). An alternative, however, holds that confidence is reduced when the top two options have similar posterior probabilities, irrespective of the chosen option's absolute probability. This alternative has not been thoroughly investigated because, in two-alternative decisions, it is mathematically equivalent to the Bayesian hypothesis. This study tests the alternative using a three-alternative visual categorization task, asking whether confidence reflects the probability of having made the best possible decision rather than simply the probability of being correct. The distinction matters because existing studies rely primarily on two-alternative tasks, which cannot differentiate between these models of confidence.
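The divergence between the two hypotheses only appears with three or more alternatives, and a small numerical sketch makes it concrete. The posterior values below are invented purely for illustration, not taken from the paper:

```python
# Two hypothetical posterior distributions over three categories.
# In both, the chosen (best) option has posterior 0.6, so the
# Bayesian (Max) hypothesis predicts identical confidence.
post_a = [0.6, 0.35, 0.05]  # runner-up close behind the chosen option
post_b = [0.6, 0.20, 0.20]  # runner-up far behind

def max_confidence(post):
    """Bayesian confidence hypothesis: posterior of the chosen option."""
    return max(post)

def difference_confidence(post):
    """Alternative: best minus next-best posterior probability."""
    top, runner_up = sorted(post, reverse=True)[:2]
    return top - runner_up

# The Max rule cannot tell the two situations apart...
print(max_confidence(post_a), max_confidence(post_b))
# ...but the Difference rule predicts lower confidence for post_a,
# where the runner-up is close behind.
print(round(difference_confidence(post_a), 2),
      round(difference_confidence(post_b), 2))
```

In a two-alternative task the posteriors sum to one, so the difference between the two options is fully determined by the maximum, which is why only a multi-alternative task can separate the models.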
Literature Review
Previous research on confidence has yielded mixed results concerning the Bayesian confidence hypothesis: some studies found correlations between Bayesian confidence and empirical data, while others reported deviations. Although the Bayesian confidence hypothesis remains the prevailing theory, existing evidence does not rule out the possibility that confidence is influenced by the posterior probabilities of unchosen options; specifically, if the next-best option is close in probability to the best option, individuals might be less confident. This possibility has gone unexplored because previous studies predominantly used two-alternative tasks, in which both models predict the same behavior. A key contribution of this paper is to directly test the influence of unchosen options on confidence in a multi-alternative setting.
Methodology
A three-alternative visual categorization task was designed. Participants viewed exemplar dots from three color-coded categories and a target dot of a different color; they categorized the target and reported confidence on a four-point Likert scale. Category distributions were manipulated to vary the posterior probabilities. A generative model was constructed incorporating sensory noise (Gaussian distribution) and decision noise (Dirichlet distribution). Three confidence models were developed: 1. the Max model (the Bayesian confidence hypothesis), in which confidence is the chosen category's posterior probability; 2. the Difference model, in which confidence is the difference between the best and next-best options' posterior probabilities; and 3. the Entropy model, in which confidence is the negative entropy of the posterior distribution. These models were fit to individual data using maximum-likelihood estimation and compared using the Akaike Information Criterion (AIC). Three experiments were conducted with different stimulus configurations, with Experiment 3 providing trial-by-trial feedback to investigate the effect of learning on reported confidence. A model recovery analysis verified that the procedure could identify the true model among those tested. Psychometric curves were generated to visualize confidence as a function of target position and to compare model fits against the actual data.
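As a simplified sketch, the three confidence rules and the AIC can be written as deterministic functions of a posterior vector. This ignores the Gaussian sensory noise and Dirichlet decision noise of the paper's full generative model, and all function names are illustrative:

```python
import math

def confidence_max(posterior):
    """Max model (Bayesian confidence hypothesis):
    confidence is the chosen category's posterior probability."""
    return max(posterior)

def confidence_difference(posterior):
    """Difference model: best minus next-best posterior probability."""
    top, runner_up = sorted(posterior, reverse=True)[:2]
    return top - runner_up

def confidence_entropy(posterior):
    """Entropy model: negative Shannon entropy of the posterior
    (closer to zero, i.e. higher, when the posterior is more peaked)."""
    return sum(p * math.log(p) for p in posterior if p > 0)

def aic(max_log_likelihood, n_free_params):
    """Akaike Information Criterion; lower values indicate a better fit."""
    return 2 * n_free_params - 2 * max_log_likelihood
```

A fully certain posterior such as `[1.0, 0.0, 0.0]` yields the maximum confidence under all three rules (the negative entropy is 0, its largest possible value), while a uniform posterior yields the minimum.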
Key Findings
Experiment 1, with vertically aligned categories and four conditions (even and uneven spacing), revealed that the Difference model best fit the data, matching the patterns of the psychometric curves more closely than the Max and Entropy models; it outperformed the Max model by a group-summed AIC of 391 and the Entropy model by 1937. Experiment 2, using two-dimensional stimulus configurations, replicated these findings: the Difference model again outperformed the Max model by a group-summed AIC of 541 and the Entropy model by 1631. The influence of decision (Dirichlet) noise exceeded that of sensory noise. Experiment 3, which included trial-by-trial feedback, further confirmed the superiority of the Difference model, with AIC differences of 100 (relative to Max) and 1113 (relative to Entropy). Across all experiments, including decision noise significantly improved model fit compared to models with only sensory noise or neither. Alternative models, including heuristic ones, failed to outperform the Difference model; the only exception was one experiment in which one heuristic model was indistinguishable from it. The relatively small ΔAIC in Experiment 3 compared to the other experiments is explained by the stimulus selection, an account supported by the model recovery analysis.
Discussion
The study's findings demonstrate that confidence is best explained by the Difference model, contradicting the Bayesian confidence hypothesis (Max model). The Difference model suggests that confidence reflects the probability of selecting the best option, considering the proximity of competing alternatives. This challenges the dominant view that confidence solely reflects the probability of correctness. The robustness across experimental conditions, including feedback, strongly supports this conclusion. The Difference model could be interpreted as an approximation for the probability of choosing the truly best option. Alternatively, it could reflect an iterative decision process, where the least likely option is initially discarded before final evaluation, resulting in a confidence calculation based on the top two options. The significant role of decision noise over sensory noise highlights the importance of internal processing variability in confidence judgments. The results contrast with studies showing superiority of heuristic models in two-alternative tasks. This could stem from differences in experimental designs (stimulus duration, uncertainty levels, explicit presentation of category distributions).
Conclusion
This study provides compelling evidence against the Bayesian confidence hypothesis in multi-alternative decision-making. The Difference model, suggesting confidence reflects the relative strength of evidence for the top two options, offers a superior explanation of human confidence reports. Future research could explore the generalizability of this model to other decision-making contexts (value-based decisions) and investigate the neural mechanisms underlying the Difference model, distinguishing it from neural correlates of probability correct. Further investigation is needed to distinguish the Difference model from a Ratio model in which the least-favorite option does not participate in normalization. Examining tasks with varying numbers of categories could help to differentiate these models.
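One plausible formalization of the Ratio model mentioned above (an assumption for illustration, not necessarily the paper's definition) renormalizes the posterior over only the top two options, so the least-favored option drops out of the normalization:

```python
def confidence_difference(posterior):
    """Difference model: best minus next-best posterior probability."""
    top, runner_up = sorted(posterior, reverse=True)[:2]
    return top - runner_up

def confidence_ratio(posterior):
    """Hypothetical Ratio model: the best option's share of the
    top-two posterior mass; least-favored options are excluded
    from the normalization."""
    top, runner_up = sorted(posterior, reverse=True)[:2]
    return top / (top + runner_up)
```

For a fixed top-two mass the two measures are monotonically related, but as the number of categories grows the mass excluded from the Ratio model's normalization can vary, which is why tasks with varying numbers of categories could help tease the models apart.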
Limitations
The study's focus on visual categorization tasks limits the generalizability of the findings to other domains. The specific design choices (stimulus characteristics, noise levels) could have influenced the results, necessitating further investigation with variations in these parameters. While three experiments were conducted, further experiments may be needed to confirm the generality of these findings. Finally, although a model recovery analysis was performed, model selection cannot guarantee that any of the tested models perfectly captures the underlying human inference.