Psychology
Decision making with visualizations: a cognitive framework across disciplines
L. M. Padilla, S. H. Creem-Regehr, et al.
The paper addresses how people make decisions with visualizations and argues for integrating decision-making frameworks into visualization cognition research. The authors define decision making as choosing between two or more competing actions and note that visualization research has not fully incorporated cognitive models of decision processes. They emphasize dual-process theories—Type 1 (fast, effortless, minimal working memory) and Type 2 (slow, effortful, working-memory demanding)—as a dominant account in applied decision research. The goal is to integrate a dual-process account of decision making with established frameworks of visualization comprehension to develop an effective cognitive model for decisions made with visualizations. The paper previews a proposed integrated model and a selective cross-domain review demonstrating four findings that map onto Type 1 and Type 2 processing, culminating in recommendations for research and design.
The review synthesizes two major strands: decision-making frameworks and visualization cognition. In decision making, rational models and heuristic/intuition models are contrasted, converging on dual-process accounts distinguishing autonomous, low–working-memory Type 1 processes from controlled, high–working-memory Type 2 processes (Evans & Stanovich, 2013). The review clarifies working memory’s role and cognitive control as criteria for Type 2 processes. In visualization cognition, perceptually focused frameworks predict information acquisition speed/efficiency, while knowledge-influenced models (e.g., Cognitive Fit Theory) stress schema–display alignment and the cognitive costs of mental transformations when displays violate conventions (e.g., reversed axes). Pinker’s (1990) graph comprehension model is summarized (visual array → visual description → schema match/instantiation → message assembly → conceptual message guided by bottom-up and top-down processes). Prior attempts (Patterson et al., 2014) to link cognition and decision making are discussed, noting gaps such as limited pathways for working-memory influences on decision steps. The authors propose integrating dual-process decision theory with visualization comprehension to address these gaps.
The paper is a selective cross-domain review of empirical studies on complex decision making with computer-generated, static 2D visualizations. It highlights representative studies across application areas (e.g., meteorology, health risk communication, land-use planning, finance, geospatial tasks, graphs and statistics, social networks, emergency management). The authors identify four recurring cross-domain findings supporting a dual-process account. They also introduce an integrated model adding a decision-making stage to Pinker’s framework, emphasizing working memory’s influence across stages (except bottom-up attention) and distinguishing Type 1 versus Type 2 decision paths. Evidence is drawn both directly (e.g., working-memory capacity moderating performance when cognitive fit is low) and indirectly (e.g., saliency affecting attention and judgments). The review does not claim exhaustiveness; rather, it documents convergent exemplars and proposes testable mechanisms (e.g., dual-task paradigms) to assess working-memory involvement.
- Bottom-up attention (Type 1): Visual saliency directs attention and can help or hinder decisions. Salient features bias focus and judgments, leading to neglect of base rates or relevant but non-salient information (e.g., Stone et al., 1997; 2003; Schirillo & Stone, 2005). In hurricane forecasts, salient cone boundaries or path lines biased damage judgments; saliency model outputs correlated with performance differences (Padilla, Ruginski et al., 2017; Ruginski et al., 2016). Training allows beneficial use of salient, task-relevant features (Hegarty et al., 2010; Fabrikant et al., 2010). Quantitatively, Stone et al. (1997) found participants would pay about $125 more for improved tires with icon arrays versus text; similar effects for toothpaste (~$0.95 more).
- Visual-spatial biases (Type 1): Encoding choices induce biases (e.g., containment, deterministic construal). Binning or bounded regions can trigger a containment heuristic, as with circular positional-uncertainty displays (McKenzie et al., 2016), and probabilistic intervals can be misread as deterministic ranges (Joslyn & LeClerc, 2013). Anchoring effects occur in error-bar tasks (Belia et al., 2005). High-quality, realistic images increase perceived credibility (McCabe & Castel, 2008; Keehner et al., 2011) but can impair performance relative to simpler displays (Hegarty et al., 2012; St. John et al., 2001). Individuals with lower working-memory capacity are more distracted by seductive images (Sanchez & Wiley, 2006), consistent with the autonomous (Type 1) nature of these biases and their resistance to deliberate override.
- Cognitive fit (Type 2): Misalignment between visualization, schema, and task necessitates effortful mental transformations using working memory, slowing responses and increasing errors. When displays align with tasks/schemata, performance improves (Vessey & Galletta, 1991; Dennis & Carte, 1998; Huang et al., 2006). Working-memory capacity moderated performance when fit was poor in network-disconnect tasks; with good fit, capacity differences vanished (Zhu & Watts, 2010). Primed schemas interacted with encoding (thickness, containment, distance) to influence route choices in network diagrams (Tversky et al., 2012). Error-bar temperature displays led to schema mismatches and deterministic construal (Joslyn & LeClerc, 2013).
- Knowledge-driven processing (Type 1 and/or Type 2): Training can overcome familiarity biases in choosing effective views for emergency management (Shen et al., 2012; Bailey et al., 2007), but visual-spatial biases often persist despite keys/instructions (Joslyn & LeClerc, 2013). Individual differences (health literacy, numeracy, graph literacy) modulate benefits of visual risk displays; natural-frequency icon arrays improve comprehension and decisions, especially for low-numerate individuals (Galesic et al., 2009; Galesic & Garcia-Retamero, 2011; Keller et al., 2009; Okan et al., 2012; 2015). Time pressure manipulations show visualization effects under fast (Type 1) decisions that diminish with more time (Type 2) (Cheong et al., 2016).
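The icon-array findings above rest on presenting risk as natural frequencies (counts out of a concrete reference class, e.g., "9 out of 100 people") rather than as single-event probabilities. A minimal sketch of that conversion, with a hypothetical function name not taken from the reviewed studies:

```python
# Toy illustration (not from the paper): convert a stated probability into
# the whole-icon counts a 10x10 icon array would display -- the
# natural-frequency format that Galesic et al. (2009) and related work
# found especially helpful for low-numerate readers.

def icon_array_counts(probability: float, total: int = 100) -> tuple[int, int]:
    """Return (affected, unaffected) icon counts for a given probability,
    rounded to whole icons."""
    affected = round(probability * total)
    return affected, total - affected

# A "9% risk" rendered as 9 filled icons out of 100:
affected, unaffected = icon_array_counts(0.09, total=100)
print(f"{affected} out of {affected + unaffected} people affected")
```

The design point is that the display fixes the reference class (100 icons), so the reader counts rather than computes, which is one proposed mechanism for why these arrays reduce reliance on effortful Type 2 processing.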
The findings support the central hypothesis that a dual-process framework, integrated with visualization comprehension models, explains how people decide with visualizations. Type 1 processes, particularly bottom-up attention and visual-spatial biases, exert strong early influences and can either facilitate or impair decisions depending on whether saliency and encoding align with task-relevant information. Type 2 processes are recruited when cognitive fit is low, requiring working-memory-intensive transformations to reconcile visualization, schema, and task, which increases time and error rates; when fit is high, decisions are faster and less dependent on working-memory capacity. Knowledge-driven processes interact with these mechanisms: training and expertise can help leverage saliency and reduce mismatches, but visually induced biases can be stubborn and not easily corrected by instructions. The integrated model formalizes how working memory and attention influence encoding, schema matching, inference, and decision stages, offering a bridge across domains (e.g., meteorology, health, finance) and practical guidance for design and evaluation (e.g., saliency analysis, dual-task methods). Remaining open questions include how schemas are selected/matched to displays at a fine-grained level and how to predict task–visualization alignment beyond broad spatial vs. symbolic classifications.
The paper contributes an integrated dual-process cognitive framework for decision making with visualizations, extending Pinker’s comprehension model by incorporating a decision stage and explicit roles for working memory and attention. It identifies four cross-domain findings—bottom-up attentional guidance, visual-spatial biases, cognitive fit effects, and knowledge-driven interactions—as evidence for the framework. Practical recommendations include directing saliency to task-critical information, minimizing unnecessary mental transformations by maximizing cognitive fit, evaluating user processing type via dual-task paradigms, and considering individual differences (numeracy, graph literacy). Future research directions include developing precise theories of schema–display matching, systematically characterizing visual-spatial biases and their susceptibility to intervention, probing expertise under time/WM load (e.g., dual-task designs), and expanding evaluation across visualization types (e.g., uncertainty displays) and tasks.
The review is selective rather than systematic and may not capture all relevant studies or domains. It focuses on static 2D computer-generated visualizations and does not address dynamic/interactive displays. The proposed mechanisms (e.g., schema matching, the locus of knowledge effects) are supported by indirect evidence in several cases, and a comprehensive cross-disciplinary theory of task–visualization alignment is lacking. Distinguishing when knowledge is applied automatically (Type 1) versus via working memory (Type 2) remains difficult. No new empirical data were collected, and some recommendations (e.g., dual-task evaluation) require further validation in visualization contexts.