Humanities
Computational philosophy: reflections on the PolyGraphs project
B. Ball, A. Koliousis, et al.
The paper introduces the PolyGraphs project in social epistemology to investigate when and why ignorance persists or is overcome within networks of inquiring rational agents. Using agent-based simulations grounded in economic and philosophical models of information sharing and belief updating, the authors explore how network topology, updating rules (Bayesian vs Jeffrey updating), and trust dynamics affect convergence to true beliefs. The study aims to generate philosophically relevant evidence about group knowledge, polarization, and aggregation of attitudes, and to position computational philosophy with respect to digital humanities and computational social sciences. The broader purpose is to address normative and interpretive questions distinctive to the humanities by means of computational simulations.
The work builds on models of learning and information sharing from economics and philosophy, notably Bala and Goyal (1998) and Zollman (2007, 2010), who showed a tradeoff between accuracy and speed (the Zollman effect) across network densities. It engages with O'Connor and Weatherall (2018), who incorporate trust-based discounting via Jeffrey's rule and demonstrate the possibility of stable polarization; their aggregate results report 40% true consensus, 10% false consensus, and 50% polarization across parameter values. Related opinion dynamics and epistemic network models (e.g., Hegselmann and Krause, Rosenstock et al.) inform the context. The paper also situates itself within broader debates: it sketches a taxonomy distinguishing digital humanities, computational social sciences, and other digital engagements (citing Luhmann and Burghardt, 2022; Roth, 2019; Neilson et al., 2018). It contrasts projects where computational elements primarily support curation/dissemination (e.g., Slave Voyages; Kahn and Bouie, 2021) with projects like PolyGraphs where computation is integral to answering humanities questions. It also contrasts with computational social science aims of empirical description/prediction (e.g., Lazer, 2009; Menczer and Hills, 2020; Weng et al., 2012), emphasizing that PolyGraphs targets normative and interpretive assessment rather than predictive accuracy.
Agent-based simulations model a community of scientists as nodes in a graph sharing evidence along edges. Agents investigate which of two treatments, A or B, is better. Those whose credence that B is better is below 0.5 administer A; because A's effectiveness is known, such actions generate no new evidence about A vs B. Agents who test B may generate evidence and share it with neighbors. Belief updating follows:
- Bayes' rule in Zollman-style models: P(h|e) = P(e|h)P(h)/P(e). Each simulation step consists of (i) running experiments (if any), (ii) sharing results with neighbors, and (iii) updating credences. The process repeats until either all agents believe A is better (so no further relevant evidence is generated) or all have credence > 0.99 that B is better.
- Network structures include both artificial graphs (e.g., complete networks; sparse cycles) and an imported real-world undirected network (EgoFacebook; ~4000 nodes).
- Polarization models (O'Connor and Weatherall, 2018) replace strict Bayesian updating with Jeffrey's rule: P_f(h) = P(h|e)·P_f(e) + P(h|¬e)·P_f(¬e), where P_f denotes the post-update credence. Trust is endogenous via discounting of evidence from dissimilar neighbors: P_f(e) = 1 − min(1, d·m)·(1 − P_i(e)), where P_i(e) is the agent's prior probability of the evidence, d is the absolute credence distance between the two agents, and m is a mistrust multiplier. As d·m approaches or exceeds 1, evidence from that neighbor carries no weight and is effectively ignored, enabling stable polarization.
- Implementation: The authors built Python code to run simulations under both frameworks. They ran multiple trials on complete networks, varying network size (e.g., 16 or 64 nodes) and the evidence parameter e (small values such as 0.001 and 0.01), and varying the mistrust multiplier m > 1 (e.g., m = 1.1, 1.5). Outcomes compared: (i) the proportion of simulations reaching correct consensus (credence > 0.99 in B) and (ii) the number of steps to reach that consensus among successful runs. Statistical comparisons included significance testing of step counts (p < 0.05) between model classes under matched parameters.
- Group belief aggregation on large networks: On EgoFacebook, they tracked every 1000 steps over 100,000 steps the proportion of nodes with credence > 0.99. Two aggregation methods were compared: unweighted voting (one node, one vote) versus structure-sensitive weighted voting (votes weighted by neighborhood size). This tests how aggregation schemes influence assessments of group belief and the speed to supermajority thresholds.
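The simulation loop described above can be sketched in Python. This is a minimal illustration on a complete network, not the PolyGraphs implementation: the parameter names (EPS, N_TRIALS), the sequential order of updates, and the choice of success probabilities (0.5 ± EPS for treatment B) are assumptions made here for concreteness.

```python
import random
from math import comb

EPS = 0.01       # assumed: B succeeds with prob 0.5 + EPS if it truly is better
N_TRIALS = 10    # assumed: experiments per agent per step

def likelihood(k, n, p):
    """Binomial likelihood of k successes in n trials with success prob p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def bayes_update(credence, k, n):
    """Posterior credence that B is better, via Bayes' rule,
    assuming B succeeds with prob 0.5+EPS if better, 0.5-EPS otherwise."""
    like_better = likelihood(k, n, 0.5 + EPS)
    like_worse = likelihood(k, n, 0.5 - EPS)
    num = like_better * credence
    return num / (num + like_worse * (1 - credence))

def jeffrey_update(credence, k, n, distance, m):
    """Jeffrey's rule with mistrust discounting:
    P_f(h) = P(h|e)*P_f(e) + P(h|not-e)*(1 - P_f(e)),
    where P_f(e) = 1 - min(1, d*m)*(1 - P_i(e))."""
    p_e = (likelihood(k, n, 0.5 + EPS) * credence
           + likelihood(k, n, 0.5 - EPS) * (1 - credence))  # P_i(e)
    p_h_given_e = bayes_update(credence, k, n)
    p_h_given_not_e = (credence - p_h_given_e * p_e) / (1 - p_e)
    p_f_e = 1 - min(1.0, distance * m) * (1 - p_e)  # discounted weight on e
    return p_h_given_e * p_f_e + p_h_given_not_e * (1 - p_f_e)

def run_complete_network(n_agents=16, m=None, seed=0, max_steps=100000):
    """One simulation on a complete graph. m=None gives pure Bayesian
    (Zollman-style) updating; m > 1 gives Jeffrey updating with mistrust."""
    rng = random.Random(seed)
    creds = [rng.random() for _ in range(n_agents)]
    for step in range(max_steps):
        # (i) experiment: only agents with credence > 0.5 test B
        results = []
        for c in creds:
            if c > 0.5:
                k = sum(rng.random() < 0.5 + EPS for _ in range(N_TRIALS))
                results.append((c, k))
        # (ii)+(iii) share with all neighbors (complete graph) and update;
        # updates are applied sequentially here, a simplifying assumption
        new_creds = []
        for c in creds:
            for src_cred, k in results:
                if m is None:
                    c = bayes_update(c, k, N_TRIALS)
                else:
                    c = jeffrey_update(c, k, N_TRIALS, abs(c - src_cred), m)
            new_creds.append(c)
        creds = new_creds
        if all(c > 0.99 for c in creds):
            return "true consensus", step + 1
        if all(c < 0.5 for c in creds):
            return "false consensus", step + 1   # no further evidence on B
    return "no consensus", max_steps
```

Note that when distance is 0 the Jeffrey update reduces to the Bayesian one, and when d·m ≥ 1 it returns the prior unchanged, matching the discounting rule described above.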
- Compared under matched parameters, polarization-capable models (with mistrust multiplier m > 1) converged to the correct consensus (B is better) in a smaller percentage of runs than Zollman-style Bayesian models, indicating greater prevalence of ignorance; the effect grows with larger m.
- Among runs that reached correct consensus, polarization models required more steps on average than Bayesian models; for small networks and low e values, the increase in steps was significant (p < 0.05) for m = 1.1 and 1.5.
- The classical Zollman effect is reproduced: sparser networks (e.g., cycles) are more reliable (more often correct) but slower; denser networks are faster but less reliable.
- Structure-sensitive aggregation matters for group belief assessments on large networks. On EgoFacebook, weighted voting (by neighborhood size) caused the majority proportion to grow much faster than unweighted voting; a three-quarters supermajority threshold was achieved in fewer than 10,000 steps with weighting, not achieved in the first 20,000 steps without weighting, and full consensus required many tens of thousands of additional steps.
- Prior literature (O'Connor and Weatherall, 2018) reported aggregate outcomes across parameters of approximately 40% true consensus, 10% false consensus, and 50% polarization, underscoring how trust-based discounting can sustain polarization.
The simulations directly address philosophical questions about how communities of rational agents acquire knowledge or remain ignorant under different interaction structures and updating norms. Allowing trust-based evidence discounting (via Jeffrey's rule) increases both the prevalence and persistence of ignorance, indicating that social mistrust along belief-distance lines can undermine community-level knowledge formation even when evidence is reliable. The results also show that how we operationalize group belief—particularly whether aggregation is structure-sensitive—substantially affects evaluations of when a group counts as believing a proposition, suggesting that network topology and node centrality can legitimately inform group-level attitudes. Situated relative to neighboring fields, PolyGraphs shares modeling and generalization aims with computational social science but diverges in targeting normative and interpretive questions characteristic of the humanities (e.g., whether agents ought to update via Bayes or Jeffrey, whether and how groups should aggregate member attitudes). Validation is thus partly qualitative and judgment-based rather than purely empirical prediction. These findings support treating computational simulations as integral philosophical methods for exploring norms of inquiry and collective epistemology.
PolyGraphs integrates computational simulations into philosophical inquiry to examine how network structure, updating rules, and trust dynamics shape community-level knowledge, ignorance, and group belief. The study reproduces the speed–accuracy tradeoff (Zollman effect), shows that trust-based discounting amplifies ignorance and delays convergence, and demonstrates that structure-sensitive aggregation can materially alter assessments of group attitudes on large networks. The project thereby exemplifies computational humanities: the computational work is central to answering normative and interpretive questions. Future work will scale simulations on larger, directed real-world graphs and develop alternative structure-sensitive measures of group belief, disentangling causal influence from rational authority; it will also further compare models across parameter regimes and deepen the analysis of aggregation norms.
- Detecting structure-sensitivity effects is difficult in small artificial networks; results on such networks may not generalize straightforwardly.
- The real-world EgoFacebook graph used is relatively modest (~4000 nodes), limiting external validity to larger or differently structured networks.
- Prior comparative benchmarks (e.g., polarization rates from O'Connor and Weatherall) are reported in aggregate across parameters, hindering direct parameter-matched comparisons.
- Validation of philosophical models is not straightforwardly empirical; conclusions rely on qualitative predictions and judgment about category application, which may introduce subjectivity.