Using games to understand the mind

Psychology

K. Allen, F. Brändle, et al.

Board, card and video games are intuitive and fun, offering a unique, ecologically valid lens on human cognition: they reveal inductive biases that guide behavior and enable new study of play and intrinsic motivation. This Perspective outlines advantages, drawbacks and recommendations for using games to scale and strengthen cognitive research.
Introduction

The Perspective argues that while classic psychological and cognitive science paradigms prioritize precise control and modelling, they also constrain the kinds of questions that can be asked about real-world cognition. Games offer a complementary approach to expand this repertoire by enabling tests of psychological theories in more ecologically rich settings and by opening avenues to study questions about inductive biases, complex action/state spaces, planning depth, and intrinsic motivation. Because games are designed to challenge abilities and engage interest—and because modern online platforms produce large datasets and analytic tools have advanced—cognitive scientists can leverage games to better understand the mind at scale and in contexts closer to natural behavior. The authors set out to summarize advantages, drawbacks, and practical recommendations for using games as a research platform.

Literature Review

The article synthesizes decades of work on games and cognition, referencing classic laboratory tasks (e.g., two-step decision tasks, n-back, bandits, Towers of London, social coordination) and contrasting them with game-based paradigms (e.g., 4-in-a-row, programmatically generated video games, Sea Hero Quest, Little Alchemy 2, Virtual Tools, Overcooked-like collaborative games). It draws on prior theory about inductive biases (e.g., relational and object-centric biases), research on gamification in education and therapy, and large-scale online experiments. It reviews evidence that game environments can elicit complex behavior reflective of real-world cognition: tool-use learning (Virtual Tools), reliance on object structure in Atari-like games, planning depth in large state spaces (4-in-a-row), collaborative theory of mind in multi-agent games (Overcooked, Codenames), and intrinsically motivated exploration without explicit goals (Little Alchemy 2). It also surveys large-scale citizen science and mobile game datasets (Sea Hero Quest) demonstrating cross-cultural and longitudinal reach, and classic findings on expertise acquisition from games like chess and purpose-built games (Axon). Beyond benefits, the review covers prior concerns about ecological vs. internal validity, variability in prior experience, and the need for validation of game-derived measures, including techniques for model comparison and analysis suitable for complex, high-dimensional game data.

Methodology

This is a Perspective that synthesizes existing empirical findings and methodological practices rather than reporting a single new experiment. The authors: (1) define games for research purposes as intuitive, engineered environments whose main purpose is enjoyment; (2) compare classic laboratory paradigms with game-based tasks to highlight differences in state/action complexity, planning depth, and social interaction; (3) compile case studies demonstrating how games reveal inductive biases, support study of intrinsic motivation, and enable large-scale and longitudinal data collection; (4) delineate pitfalls in experimental design (confounds from game elements, uncontrolled prior experience) and in data collection/analysis (data access, infrastructure, ad hoc metrics, model complexity); and (5) provide practical guidance on choosing between pre-existing versus self-made games, partnering with developers, recruiting diverse participants, and analyzing large-scale game data (database storage, a priori predictions, effect sizes, conditioning on exposure, test levels). They further outline strategic research designs that combine game-based and laboratory experiments in bottom-up or top-down validation pipelines and suggest statistical/computational tools, including sampling-based likelihood estimation and Bayesian optimization, and model class comparisons to identify necessary computational features (e.g., tree search, feature dropping).
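The sampling-based likelihood estimation mentioned above can be sketched concretely. When a cognitive model is specified only as a stochastic simulator (for example, a noisy depth-limited planner) with no closed-form likelihood, the probability of each observed move can be approximated by running the simulator many times and counting matches. Everything below is a hypothetical illustration: the function names, the stubbed planner, and the toy data are assumptions, not the authors' implementation.

```python
import math
import random

def noisy_planner(state, n_actions, depth, noise, rng):
    """Hypothetical stochastic player model: with probability `noise` pick a
    random legal move, otherwise pick the move a depth-limited search prefers.
    The search itself is stubbed out here (deeper search favours action 0)."""
    if rng.random() < noise:
        return rng.randrange(n_actions)
    return 0 if depth > 1 else 1  # stand-in for a real tree search

def estimate_log_likelihood(observed_moves, params, n_sim=2000, seed=0):
    """Monte Carlo estimate of log P(data | params): simulate the model
    n_sim times per observed move and count how often it matches.
    Laplace smoothing avoids log(0) for moves the model never produced."""
    rng = random.Random(seed)
    depth, noise = params
    total = 0.0
    for state, move, n_actions in observed_moves:
        hits = sum(noisy_planner(state, n_actions, depth, noise, rng) == move
                   for _ in range(n_sim))
        total += math.log((hits + 1) / (n_sim + n_actions))
    return total

# Toy data: (state id, chosen action, number of legal actions).
data = [(0, 0, 4), (1, 0, 4), (2, 1, 4)]
ll_deep = estimate_log_likelihood(data, params=(3, 0.1))
ll_shallow = estimate_log_likelihood(data, params=(1, 0.1))
# On this toy data the deep-planner parameters attain the higher log-likelihood.
```

In practice the simulator would be the actual planning model (e.g., a depth-limited tree search over the game's state space), and the estimated likelihood could then feed into the Bayesian optimization over model parameters that the authors also mention.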

Key Findings
  • Games, as intuitive engineered environments, allow researchers to study inductive biases and complex cognition more ecologically than classic tasks while retaining experimental structure. Examples include: Virtual Tools (action-space compression via relational action representations); Atari-like tasks showing reliance on object-based state structure for planning and exploration; and large state-space planning in a 4-in-a-row game with >1.2 million online players showing increased planning depth with expertise and differences between online and lab participants.
  • Games’ intrinsic enjoyment enables studying curiosity, exploration, play, and persistence without explicit external rewards. In Little Alchemy 2, analysis of naturalistic mobile gameplay revealed intrinsically motivated exploration strategies (e.g., empowerment). In Skill Lab, citizen-science recruitment yielded more conscientious engagement than recruitment through paid platforms.
  • Games support unprecedented scale, diversity, and longitudinal measurement: Sea Hero Quest collected navigation data from ~4 million participants across 195 countries, informing national differences in navigation, environmental influences, and personalized diagnostics for Alzheimer’s risk. Purpose-built games like Axon enabled tracking skill acquisition from first exposure, revealing individual differences in initial performance and learning rates.
  • Pitfalls and constraints: potential confounds from game ‘bells and whistles’; heterogeneous prior experience; challenges in data access/infrastructure; risk of ad hoc measures and spurious significance in large datasets; and modelling complexity of high-dimensional, naturalistic tasks.
  • Mitigation strategies: validate findings with controlled experiments or complementary games; build custom games for greater control; partner with developers; predefine hypotheses and measures; ensure predictive and concurrent validity of derived metrics; store and manage data in databases; report effect sizes; condition analyses on exposure/experience; include session ‘test levels’. For modelling, compare classes of algorithms and identify necessary computational features (e.g., tree search and feature dropping) rather than single-model fit.
  • Strategic integration: combine games and classical experiments via bottom-up (generalize lab findings to games) and top-down (validate game-derived mechanisms in simplified experiments) designs to balance internal and external validity and enhance generalization.
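The recommendation to report effect sizes rather than lean on significance alone can be made concrete with a synthetic sketch. At game-scale sample sizes, even a practically negligible difference between two player cohorts comes out statistically significant; the standardized effect size is what makes the practical magnitude visible. The cohorts and numbers below are simulated, not data from any study discussed here.

```python
import math
import random

def cohens_d(a, b):
    """Standardized mean difference between two samples (pooled SD)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    pooled_sd = math.sqrt(((len(a) - 1) * va + (len(b) - 1) * vb)
                          / (len(a) + len(b) - 2))
    return (ma - mb) / pooled_sd

rng = random.Random(42)
n = 100_000  # per-cohort sample size, typical of large game datasets

# Two simulated player cohorts whose true standardized difference is 0.02.
group_a = [rng.gauss(0.02, 1.0) for _ in range(n)]
group_b = [rng.gauss(0.00, 1.0) for _ in range(n)]

d = cohens_d(group_a, group_b)  # estimate of the true d = 0.02
z = d / math.sqrt(2 / n)        # approximate two-sample z statistic
# At this n, |z| typically exceeds 1.96 (p < .05) even though d is tiny,
# so reporting d is what reveals the effect is practically negligible.
```

With n = 100,000 per group the standard error of d is about sqrt(2/n) ≈ 0.0045, so a true d of 0.02 tends to yield z ≈ 4.5: highly "significant", yet far below the conventional threshold for even a small effect (d ≈ 0.2).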

Discussion

The Perspective addresses how to leverage games to overcome limitations of classic laboratory paradigms by enabling the study of cognition in richer, more naturalistic settings while maintaining enough structure for rigorous analysis and modelling. The synthesis shows that intuitive, enjoyable game environments expose inductive biases, support investigation of intrinsic motivation, and facilitate large-scale, diverse, and longitudinal data collection, thereby expanding the scope and robustness of cognitive theories. At the same time, the authors emphasize careful design, validation, and modelling to avoid confounds and ensure that game-derived measures reflect general cognitive principles. By advocating a dual strategy that integrates game-based research with controlled experiments, the article proposes a pathway to reconcile internal and external validity, test the ecological generality of theories, and identify computational features necessary to explain human behavior across contexts.

Conclusion

Understanding the mind benefits from moving beyond exclusively simplified laboratory tasks to include well-designed games as research platforms. Games can elevate ecological validity, scale, and robustness, enabling new questions about inductive biases and intrinsic motivation and allowing longitudinal study of learning and expertise. These gains require mitigating pitfalls associated with complex stimuli, heterogeneous prior experiences, data access, and analytic and modelling challenges. The authors recommend designing games around clear hypotheses, validating findings with complementary experiments, and tailoring computational models to the specific game environments. Games and laboratory experiments should be used in tandem—via bottom-up and top-down strategies—to achieve both internal consistency and generalizability.

Limitations
  • Potential confounds from game-specific incentives, aesthetics, and interface ‘bells and whistles’ that may affect behavior unrelated to the targeted cognitive processes.
  • Uncontrolled variability in participants’ prior experience, gameplay progression, and out-of-session activities, leading to non-representative data and analysis complications.
  • Data access and infrastructure hurdles when collaborating with game companies; academic platforms may lack built-in support for longitudinal tracking and large-scale data storage.
  • Risk of ad hoc or arbitrary measures in complex game datasets; large samples can yield statistically significant but non-generalizable effects without strong theory-driven metrics and validation.
  • Statistical and computational modelling challenges in high-dimensional, naturalistic tasks; need to compare model classes and identify necessary computational features rather than overfitting specific models.
  • Legal and financial risks (e.g., data protection, funding) may be more pronounced than in standard lab studies and require proactive planning with institutional offices.
  • Despite improved ecological validity relative to lab tasks, games still do not perfectly capture natural behavior; further work is needed to benchmark game-based measures against real-world counterparts.