What can mathematical modelling contribute to a sociology of quantification?

A. Saltelli and A. Puy

Explore the untapped potential of mathematical modelling in sociology, as Andrea Saltelli and Arnald Puy investigate how its concepts can enhance the fairness and adequacy of quantification tools. This research delves into sensitivity analysis and auditing techniques that promise to empower political agency through more rigorous and transparent methodologies.
Introduction

The paper asks how mathematical modelling—particularly uncertainty quantification, sensitivity analysis (SA), and sensitivity auditing (SAUD)—can enrich the sociology of quantification by improving methodological soundness, normative adequacy and fairness of numbers. It situates the question within ongoing concerns about the reproducibility crisis, controversies in statistics (“statistics wars”), and the relative neglect of mathematical modelling in sociological analyses of quantification. It frames models as versatile mediators shaped by assumptions, interests and worldviews, and argues that models often operate as metaphors and instruments in governance. The purpose is to translate modelling precepts into an epistemology and hermeneutics of quantification that can guide critical scrutiny and responsible use of numbers in public decision-making.

Literature Review

The study engages strands of scholarship on quantification, including the Foucauldian and Economics of Convention traditions and statactivism, as well as critiques of metrics, surveillance capitalism, and statistical performativity. It recalls debates on the misuse of statistical tests and the reproducibility crisis, and brings in accounts of models as mediators and metaphors. It reviews how quantification can be deployed for governance and technocratic justification (policy-based evidence), contributing to a displacement of attention from reality to model outputs. It draws on Post-Normal Science (PNS) to argue for quality under uncertainty, extended peer communities, and humility in science. Prior work on composite indicators, rankings, and algorithmic harms is discussed to illustrate fragility, performativity, and the politics of numbers.

Methodology

This is a conceptual and normative framework paper. It defines and mobilizes three modelling approaches: (i) uncertainty analysis, which quantifies how uncertainties in inputs and model structures propagate to outputs; (ii) global sensitivity analysis (SA), which apportions output uncertainty to inputs and their interactions (in contrast with more limited local methods); and (iii) sensitivity auditing (SAUD), which extends scrutiny to model framing, structural assumptions, and the socio-political context of model construction and use. The paper proposes translating SA/SAUD into six precepts for broader quantification practices:
  • Mind the assumptions: stress-test models, adopt assumption hunting, ensure transparency, and guard against garbage in, garbage out (GIGO).
  • Model the modelling process: trace decisions across the ‘garden of forking paths’ and explore alternative frames and formulations.
  • Mind the hubris: guard against overcomplexity and the rhetorical authority of large models; use parsimony criteria such as AIC/BIC when possible, and otherwise use uncertainty and sensitivity analysis to gauge complexity (see the AIC/BIC sketch below).
  • Mind the framing: align models with purpose and context, enable non-ritual participation, compare competing frames, and support statactivist tactics of fighting with and against numbers.
  • Mind the consequences: expose ‘chameleon models’, deconstruct socially impactful indicators such as rankings, and apply SA to reveal fragility and inconsistency.
  • Mind the unknowns: acknowledge ignorance, partition uncertainties between data-driven and model-driven components, and avoid quantifying at all costs when evidence is insufficient.
The paper aligns these practices with Sen’s informational basis of judgement in justice (IBJJ) to integrate technical quality with fairness through participatory, extended peer processes; minimal sketches of global SA and of the parsimony check follow.
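
To make the global-SA precept concrete, here is a minimal, self-contained sketch of variance-based global sensitivity analysis using pick-and-freeze Monte Carlo estimators (the Saltelli et al. first-order estimator and Jansen's total-order estimator). The toy model, input ranges, and sample size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Toy model with an interaction: y = x0 + 2*x1 + x0*x2 (x2 has no main effect)."""
    return x[:, 0] + 2.0 * x[:, 1] + x[:, 0] * x[:, 2]

k, n = 3, 100_000                       # number of inputs, base sample size
A = rng.uniform(-1.0, 1.0, (n, k))      # two independent input sample matrices
B = rng.uniform(-1.0, 1.0, (n, k))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))    # total output variance

for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                           # resample only input i
    yABi = model(ABi)
    S1 = np.mean(yB * (yABi - yA)) / var_y        # first-order index (main effect)
    ST = 0.5 * np.mean((yA - yABi) ** 2) / var_y  # total index (incl. interactions)
    print(f"x{i}: S1 = {S1:.2f}, ST = {ST:.2f}")
```

A local, one-at-a-time analysis around the centre of the input space would report x2 as irrelevant; the total index ST instead exposes its interaction with x0, which is exactly the kind of fragile inference the precepts warn against.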
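
For the ‘mind the hubris’ precept, a small sketch compares a simple and an over-parameterized regression with the AIC/BIC criteria mentioned above (Gaussian least-squares form); the data-generating process is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-1, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)    # the true relationship is linear

def information_criteria(degree):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    k = degree + 2                           # polynomial coefficients + noise variance
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, bic

for d in (1, 5):
    aic, bic = information_criteria(d)
    print(f"degree {d}: AIC = {aic:.1f}, BIC = {bic:.1f}")
```

Both criteria penalize the extra parameters, so the simpler model should score lower (better), illustrating that added complexity must buy genuine explanatory power rather than rhetorical authority.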

Key Findings
  • Sensitivity analysis can uphold methodological adequacy by quantifying how uncertainties and interactions in inputs drive output variability, avoiding misleading precision from local analyses and revealing fragile inferences.
  • Sensitivity auditing targets normative adequacy and fairness by interrogating model framing, structural assumptions, interests, and blind spots, thus contextualizing numbers within their socio-political uses.
  • Modelling the modelling process (exploring alternative plausible paths/frames) helps surface volatility in composite indicators and rankings, challenging purported neutrality and supporting public contestation (a re-ranking sketch follows this list).
  • Complexity and model size can undermine relevance and transparency; parsimony and explicit uncertainty are preferable to pseudo-precision and rhetorical authority.
  • Quantification can displace attention from real systems to model outputs and be co-opted for policy justification; SA/SAUD provide tools to resist such technocratic closure by exposing uncertainty and alternative frames.
  • Global SA can be applied to algorithms to assess fairness, e.g., checking whether features proxy for protected attributes even when those attributes are not explicitly included as inputs, thus informing ethical assessments (a proxy-detection sketch follows this list).
  • When the uncertainty intervals around projected outcomes are wider than the differences between policy options, the options may be practically indistinguishable, underscoring the need to acknowledge ignorance and avoid overconfident prescriptions (an interval-overlap sketch follows this list).
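
The volatility of rankings noted above can be demonstrated by re-scoring the same units under alternative plausible weighting and aggregation choices (‘modelling the modelling process’). The unit names, sub-indicator scores, and weights below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
names = [f"country_{c}" for c in "ABCDE"]
scores = rng.random((5, 3))                # 5 units, 3 normalized sub-indicators

def rank(aggregate):
    return np.argsort(np.argsort(-aggregate))   # 0 = best

# Frame 1: equal weights, arithmetic mean (fully compensatory)
r1 = rank(scores.mean(axis=1))
# Frame 2: expert weights favouring the first sub-indicator
r2 = rank(scores @ np.array([0.6, 0.2, 0.2]))
# Frame 3: geometric mean (penalizes unbalanced profiles)
r3 = rank(np.exp(np.log(scores).mean(axis=1)))

for i, name in enumerate(names):
    print(f"{name}: ranks {r1[i]+1}, {r2[i]+1}, {r3[i]+1} across the three frames")
```

If a unit's rank shifts markedly across frames, the ranking's apparent neutrality is an artefact of one modelling choice among several defensible ones.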
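
For the fairness finding, here is a hedged sketch of a sensitivity-style proxy check: the protected attribute is deliberately excluded from the model, and the variance of the model's scores is then apportioned to that attribute (a first-order sensitivity index for a binary factor). All variable names, data, and effect sizes are synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)                    # protected attribute (never a model input)
zip_risk = 0.8 * group + rng.normal(0, 0.5, n)   # feature correlated with the group
income = rng.normal(0, 1, n)                     # feature independent of the group
logit = 1.5 * zip_risk + 0.5 * income
y = rng.random(n) < 1 / (1 + np.exp(-logit))     # outcome driven partly by the proxy

X = np.column_stack([zip_risk, income])          # protected attribute deliberately omitted
clf = LogisticRegression().fit(X, y)
scores = clf.predict_proba(X)[:, 1]

# Share of Var(score) explained by group membership: Var(E[score | group]) / Var(score),
# i.e. a first-order sensitivity index for a discrete factor.
group_means = np.array([scores[group == g].mean() for g in (0, 1)])
S_group = np.var(group_means[group]) / np.var(scores)
print(f"Share of score variance explained by the protected attribute: {S_group:.2f}")
```

A non-negligible share signals that zip_risk transmits the protected attribute into the scores even though the attribute never enters the model.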
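
Finally, the point about overlapping uncertainty intervals can be illustrated with a toy Monte Carlo comparison of two policy options sharing an uncertain parameter; the distributions and effect sizes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
elasticity = rng.normal(0.5, 0.2, n)        # uncertain parameter shared by both options

benefit_A = 10 * elasticity + rng.normal(0, 1, n)
benefit_B = 11 * elasticity + rng.normal(0, 1, n)

for name, benefit in (("A", benefit_A), ("B", benefit_B)):
    lo, hi = np.percentile(benefit, [5, 95])
    print(f"option {name}: 90% interval [{lo:.1f}, {hi:.1f}]")
```

Because the 90% intervals overlap heavily, confidently ranking option B above option A would be the kind of overconfident prescription the paper cautions against.
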
Discussion

By importing SA and SAUD from mathematical modelling into the sociology of quantification, the paper shows how to evaluate both the technical robustness and the normative integrity of numbers. This dual lens addresses the research problem of ensuring that quantification supports democratic deliberation rather than forecloses it through pseudo-precision and policy-based evidence. The approach operationalizes PNS principles—humility, extended peer engagement, and quality under uncertainty—and complements statactivist strategies by offering structured methods to interrogate assumptions, frames, and consequences. Applications to composite indicators, rankings, economic simulations, and algorithmic systems demonstrate how SA/SAUD can reveal fragility, hidden value judgements, and fairness issues, thereby broadening policy discourse and enhancing accountability.

Conclusion

The paper argues that SA ensures methodological adequacy while SAUD addresses normative adequacy and fairness, offering a transferable toolkit for scrutinizing models, metrics, indicators, and algorithms. It calls for modelling literacy beyond the deficit model, the formation of extended peer communities around the informational basis of judgement in justice, and activism focused on mathematical modelling akin to existing movements around statistics and algorithms. To preserve the benefits of mathematical encoding without subjugation to numbers, practitioners should expose uncertainty, explore alternative framings, and refrain from quantifying when evidence is insufficient. Future directions include participatory and distributed forms of modelling and auditing, systematic application of global SA to fairness in AI, and routine use of SA/SAUD to design, compare, and deconstruct composite indicators and policy metrics.

Limitations

The article is a conceptual framework and perspective piece; no new empirical data or case studies are generated or analyzed (data availability: not applicable). Given the diversity of modelling practices and contexts, recommendations may require adaptation and domain-specific validation. The paper illustrates with examples from literature rather than systematic empirical testing, which may limit generalizability across all quantification settings.
