Psychology
Decision making with visualizations: a cognitive framework across disciplines
L. M. Padilla, S. H. Creem-Regehr, et al.
The paper addresses how people make decisions using visualizations and proposes that integrating decision-making frameworks with visualization cognition is necessary for designing effective visualizations. Decision making is defined as choosing between competing courses of action. The authors note that visualization research has placed little emphasis on the mental processes underlying decision making and argue for adopting a dual-process account: Type 1 (fast, automatic, low working-memory demand) and Type 2 (slow, effortful, working-memory intensive). The goal is to integrate a dual-process decision framework with established models of visualization comprehension to explain how visual information leads to decisions across domains. The study underscores the importance of a unifying cognitive model for generalizability, cross-domain integration, and practical design recommendations, given the high stakes of decisions supported by visualizations (e.g., weather, health).
The review synthesizes two main literatures. First, decision-making frameworks under risk include rational models (choices weighted by outcome probabilities) and heuristic/intuitive models. Dual-process theories reconcile these by positing Type 1 (autonomous, minimal working-memory demand) and Type 2 (controlled, working-memory dependent) processes; the authors adopt Evans and Stanovich’s (2013) definition, which centers on working-memory demands and cognitive control. Second, visualization cognition models include perceptually focused frameworks that predict efficient information acquisition and models incorporating prior knowledge (e.g., Cognitive Fit Theory). Pinker’s (1990) model of graph comprehension delineates stages from the visual array through a visual description, schema matching, message assembly, and inference to a conceptual message, with both bottom-up and top-down influences. Patterson et al. (2014) extended this line of work to decision contexts by incorporating working memory, but left gaps in how working memory influences decisions and in the dynamics of top-down/bottom-up interactions. The authors critique these gaps and propose integrating dual-process decision theory into visualization cognition to account for the role of working memory and for decision outputs.
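The rational model mentioned above can be made concrete with a small sketch: expected utility weights each outcome's utility by its probability, and the choice of utility function determines whether a decision maker prefers a sure thing or a gamble. The options and the square-root utility below are illustrative assumptions, not examples drawn from the paper.

```python
def expected_utility(outcomes, utility=lambda x: x):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * utility(x) for p, x in outcomes)

# Hypothetical options: a sure $40 vs. a 50/50 gamble on $100 or nothing.
sure_thing = [(1.0, 40)]
gamble = [(0.5, 100), (0.5, 0)]

print(expected_utility(gamble))  # 50.0 -> a risk-neutral agent prefers the gamble

# A concave (risk-averse) utility such as sqrt reverses the preference:
risk_averse = lambda x: x ** 0.5
print(expected_utility(sure_thing, risk_averse) > expected_utility(gamble, risk_averse))  # True
```

Heuristic models, by contrast, skip this weighting entirely (e.g., choosing whichever option looks visually larger), which is part of what dual-process theories aim to reconcile.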
This is a selective review of empirical studies on complex decision making with computer-generated, static 2D visualizations across diverse application areas (e.g., meteorology, health risk, finance, geospatial tasks). The authors categorized tasks by domain (Table 1 in the paper) and synthesized findings to evaluate support for a dual-process account. They propose an integrated model that embeds dual-process decision making within Pinker’s comprehension framework, explicitly positioning working memory as influencing all stages except bottom-up attention and distinguishing Type 1 versus Type 2 decision paths. The review identifies four cross-domain findings: (1) bottom-up attention in visualizations, (2) visual-spatial biases arising from encoding, (3) cognitive fit effects requiring mental transformations under mismatch, and (4) interactions of knowledge-driven processing with encoding effects. Evidence cited includes behavioral performance, eye tracking, computational saliency models, and individual-differences studies. A dual-task paradigm is suggested as a method to differentiate Type 1 and Type 2 processing demands in future work.
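As a rough illustration of how computational saliency models can be used to evaluate designs, the sketch below scores each cell of a toy grayscale "display" by its contrast against the global mean intensity. Real saliency models (e.g., Itti–Koch-style) use multi-scale center-surround features across color, intensity, and orientation; this simplified stand-in only conveys the basic idea of predicting where bottom-up attention lands.

```python
def saliency_map(grid):
    """Score each cell by its absolute contrast against the global mean intensity."""
    flat = [v for row in grid for v in row]
    mean = sum(flat) / len(flat)
    return [[abs(v - mean) for v in row] for row in grid]

# Hypothetical 4x4 display with one bright, attention-grabbing cell at (1, 2).
display = [[0.2] * 4 for _ in range(4)]
display[1][2] = 1.0

sal = saliency_map(display)
most_salient = max(
    ((r, c) for r in range(4) for c in range(4)),
    key=lambda rc: sal[rc[0]][rc[1]],
)
print(most_salient)  # (1, 2)
```

Comparing a saliency map like this against the locations of task-relevant information is one way to check whether a design guides Type 1 attention toward, or away from, the data that matter.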
- Visualizations drive bottom-up attention (Type 1), aiding or hindering decisions. Salient features capture attention and can bias judgments: e.g., icon arrays led participants to focus on foreground icons and ignore base rates, increasing willingness to pay for improved products (e.g., $125 more for improved tires) relative to text-only formats. Salient map features (temperature vs. pressure) captured gaze; after instruction, increased salience of task-relevant features improved performance.
- Visual-spatial biases (Type 1) stem directly from visual encoding. Examples include containment biases (bounded regions imply categorical membership/similarity), deterministic construal (interpreting uncertainty displays as deterministic), and anchoring effects in graphs. Google Maps’ circular uncertainty overlays increased containment-based misinterpretations versus Gaussian fades. Error-bar temperature forecasts were misconstrued as high/low deterministic ranges despite keys. High-quality images (e.g., 3D brain scans) increase perceived scientific credibility, sometimes degrading actual task performance compared to simpler displays.
- Cognitive fit (Type 2) affects speed and accuracy. When visualization, schema, and task align, decisions are faster and more accurate; mismatches require working-memory-intensive mental transformations, increasing time and errors. Network diagrams matched to disconnection tasks eliminated working-memory advantages; mismatched diagrams disadvantaged lower-capacity participants. Graphs vs. tables facilitated spatial vs. textual tasks respectively; mismatches slowed responses.
- Knowledge-driven processing (Type 1 and/or 2) interacts with encoding. Short-term training can mitigate familiarity biases and enable users to exploit salience effectively. However, visual-spatial biases can persist despite instructions. Individual differences (health literacy, numeracy, graph literacy) moderate benefits from visuals; icon arrays and natural frequencies aid low-numeracy users by enabling perceptual comparisons. Time pressure modulates visualization effects: under rapid decisions (e.g., 5 s) wildfire uncertainty encoding influenced choices; with more time (30 s), differences diminished.
- Proposed integrated model shows two decision paths: a minimal working-memory path (Type 1) enabling fast judgments that may be biased by salience/encoding, and a working-memory-intensive path (Type 2) enabling strategic, effortful computations and schema realignment, with downstream influence on earlier stages via top-down attention and inference.
The findings support a dual-process account of decision making with visualizations integrated into a comprehension framework. Type 1 processes, particularly bottom-up attention, rapidly guide focus toward salient features that can either facilitate task-relevant extraction or introduce biases (e.g., containment, deterministic construal). Type 2 processes are invoked to resolve mismatches between visualization, mental schemas, and task demands (cognitive fit), leveraging working memory to perform mental transformations and controlled attention. This framework explains heterogeneous outcomes across domains and tasks: when encoding aligns with users’ schemas and tasks, Type 1 can yield fast, accurate decisions; when misaligned, Type 2 is needed, and performance depends on working-memory capacity and knowledge. The model clarifies why certain visual-spatial biases are robust (arising early and automatically) and offers actionable design guidance: direct attention to task-relevant information, minimize required mental transformations, and consider user knowledge and individual differences. It also motivates methodological advances (e.g., saliency analyses, dual-task paradigms) to diagnose processing type and cognitive load across visualization designs.
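The dual-task diagnostic mentioned above can be sketched as a simple decision rule: if a concurrent working-memory load (e.g., rehearsing a digit string) substantially degrades decision accuracy, the visualization task likely recruits Type 2 processing; if accuracy is unaffected, Type 1 processing is implicated. The threshold and accuracy values below are illustrative assumptions, not data from the reviewed studies.

```python
def classify_processing(acc_single, acc_dual, threshold=0.05):
    """Compare decision accuracy with vs. without a secondary working-memory task."""
    drop = acc_single - acc_dual
    return "Type 2 (working-memory dependent)" if drop > threshold else "Type 1 (automatic)"

# Hypothetical results from two visualization designs:
print(classify_processing(0.90, 0.72))  # large drop under load -> Type 2
print(classify_processing(0.88, 0.87))  # negligible drop -> Type 1
```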
The paper proposes and supports a dual-process cognitive framework for decision making with visualizations by embedding Type 1 and Type 2 processes into an augmented model of visualization comprehension. Across domains, evidence shows that bottom-up attention and visual-spatial biases shape rapid judgments, while cognitive fit and knowledge-driven processing determine when effortful, working-memory-dependent strategies are required. The authors provide practical recommendations for visualization designers: highlight task-relevant information, align encodings with users’ schemas and tasks, reduce unnecessary mental transformations, assess individual differences, and use saliency models and dual-task paradigms to evaluate designs. They call for future research on the mechanisms of schema matching, taxonomy of task–visualization alignment, the nature and mitigation of visual-spatial biases, and the conditions under which knowledge shifts from controlled to automatic application.
- The review is selective and focused on static, computer-generated 2D visualizations; findings may not generalize to dynamic, interactive, or 3D displays.
- Cross-domain findings are illustrative rather than exhaustive; some studies could fit multiple categories.
- The mechanism of schema–visualization matching remains unclear; existing work often uses broad task categories (spatial vs. textual) without a detailed predictive theory.
- The mapping of knowledge-driven processing onto Type 1 vs. Type 2 remains unresolved; it is unclear when long-term knowledge is applied automatically versus effortfully.
- Visual-spatial biases can be difficult to override; more research is needed to determine effective debiasing strategies and when Type 2 activation improves accuracy.
- Heterogeneity in participant expertise and individual differences complicates generalization; more systematic assessments are needed.