Fair algorithms for selecting citizens’ assemblies

Computer Science


Bailey Flanigan, Paul Gölz, Anupam Gupta, Brett Hennig, and Ariel D. Procaccia

The authors present selection algorithms that balance descriptive representation with selection probabilities that are as equal as possible. Their method has been used to select more than 40 citizens' assemblies worldwide.

Introduction
The use of citizens' assemblies, in which randomly selected citizens deliberate on policy matters, is rapidly increasing globally. The selection process aims to create a microcosm of society, ensuring both descriptive representation (the assembly reflects the population's demographics) and equal opportunity for participation. These goals are often at odds, however, because participation rates vary across population subgroups. Selection typically proceeds in two stages: a large initial sample is invited, and a smaller panel is then selected from the pool of volunteers who respond. This paper focuses on the second stage, the selection algorithm, which strongly influences who ultimately represents the population. Current algorithms focus primarily on achieving descriptive representation through quota systems, often ignoring the principle of equal selection probability for each individual. The paper argues that equal opportunity is a crucial element of democratic legitimacy in sortition and develops algorithms that account for both representativeness and fairness in selection probabilities.
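To make the quota mechanism concrete, the following minimal Python sketch checks a candidate panel against lower and upper quotas. The feature categories, bounds, and pool entries are hypothetical illustrations, not data from the paper.

```python
# A minimal sketch of a quota check. All features, bounds, and pool
# entries below are hypothetical examples, not the paper's datasets.

from collections import Counter

# Each volunteer is described by feature values (e.g., gender, age bracket).
pool = [
    {"gender": "female", "age": "18-35"},
    {"gender": "male", "age": "36-60"},
    {"gender": "female", "age": "61+"},
    {"gender": "male", "age": "18-35"},
]

# Quotas: for each (feature, value), the panel must contain between
# lo and hi members with that value.
quotas = {
    ("gender", "female"): (1, 2),
    ("gender", "male"): (1, 2),
    ("age", "18-35"): (1, 2),
}

def satisfies_quotas(panel):
    """Return True if the panel meets every lower and upper quota."""
    counts = Counter((f, person[f]) for person in panel for f in person)
    return all(lo <= counts[key] <= hi for key, (lo, hi) in quotas.items())

print(satisfies_quotas([pool[0], pool[1]]))  # True: all three quotas are met
```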
Literature Review
The paper reviews selection algorithms used in practice, noting that all of them focus primarily on satisfying pre-defined quotas to ensure descriptive representation. These algorithms generally fail to address unequal selection probabilities, a crucial aspect of fairness in sortition. Political theorists emphasize equal selection probabilities for reasons of equality of opportunity, democratic equality, and allocative justice. While previous mathematical models have explored reconciling equal probabilities with representativeness, their assumptions are incompatible with current practice. The paper argues that although perfect equality is usually unattainable, striving for maximal fairness, that is, making probabilities as equal as possible subject to the quotas, remains a critical goal.
Methodology
The paper introduces an algorithmic framework for constructing maximally fair selection algorithms. The framework accommodates different ways of quantifying fairness (e.g., the geometric mean or leximin) and yields an optimal algorithm for any fairness measure of a specific functional form. Assigning selection probabilities is cast as a fair resource allocation problem, which allows the framework to draw on established metrics from the fair division literature. Unlike existing algorithms, which determine their output distribution implicitly through myopic heuristics, the proposed framework explicitly computes a maximally fair distribution over quota-satisfying panels and then samples a final panel from it. This two-step approach is optimal by construction: the fairest output distribution is at least as fair as the distribution induced by any other algorithm.

Although finding a maximally fair distribution might seem to require considering every possible panel, the authors show that an optimal portfolio of panels of much smaller size always exists, which simplifies the computational problem. The optimization is tackled via column generation, where each column corresponds to a panel. The paper details the implementation of a specific algorithm, LEXIMIN, which optimizes the leximin fairness measure: it maximizes the lowest selection probability, then the second lowest, and so on. LEXIMIN has been implemented and released as an open-source tool and a public website, increasing its accessibility and real-world applicability.
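The core optimization step can be illustrated with a small sketch, not the authors' implementation. Assuming a fixed portfolio of quota-feasible panels (the toy portfolio below is hypothetical; the paper grows the portfolio via column generation rather than fixing it in advance), the first leximin step, maximizing the minimum selection probability, is a linear program:

```python
# A minimal sketch (under the stated assumptions) of the first leximin
# step: given a fixed portfolio of quota-feasible panels, find the
# distribution over panels that maximizes the minimum selection
# probability of any pool member.

import numpy as np
from scipy.optimize import linprog

n_members = 4
# Each panel is the set of pool-member indices it contains (hypothetical).
portfolio = [{0, 1}, {1, 2}, {2, 3}, {0, 3}]
m = len(portfolio)

# Variables: p_0..p_{m-1} (panel probabilities) and t (min member prob).
# Maximize t  <=>  minimize -t.
c = np.zeros(m + 1)
c[-1] = -1.0

# For each member i: t - sum_{j : i in panel j} p_j <= 0.
A_ub = np.zeros((n_members, m + 1))
for i in range(n_members):
    for j, panel in enumerate(portfolio):
        if i in panel:
            A_ub[i, j] = -1.0
    A_ub[i, -1] = 1.0
b_ub = np.zeros(n_members)

# Panel probabilities must sum to one.
A_eq = np.ones((1, m + 1))
A_eq[0, -1] = 0.0
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * (m + 1))
print("panel distribution:", res.x[:-1])     # here: 0.25 on each panel
print("min selection probability:", res.x[-1])  # here: 0.5
```

In the full algorithm, this step is iterated: once the minimum is fixed, the next-lowest probability is maximized, and column generation adds new quota-feasible panels on demand instead of enumerating all of them.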
Key Findings
The key findings demonstrate that LEXIMIN substantially improves fairness over the existing state-of-the-art algorithm, LEGACY. Evaluations on ten real-world datasets from several sortition organizations show that LEGACY consistently assigns near-zero selection probabilities to some pool members, effectively excluding them from participation. In contrast, LEXIMIN achieves a substantially higher minimum selection probability in all instances except one outlier. The improvement is not limited to a small group: a sizable share of pool members (13-56%, median 46%) receive probabilities from LEGACY that are lower than LEXIMIN's minimum. Comparisons using the Gini coefficient and the geometric mean confirm LEXIMIN's superior fairness across all instances, with substantial improvements in nine of the ten cases. In one instance, 'obf', the Gini coefficient improved by 16 percentage points (from 59% to 43%), highlighting the algorithm's potential for significant impact. LEXIMIN's runtime on consumer hardware is also feasible for practical use.
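For reference, the two fairness metrics reported above can be computed as in the sketch below. The probability vectors are hypothetical illustrations of the qualitative pattern, not the paper's data.

```python
# A minimal sketch of the two fairness metrics, applied to hypothetical
# selection-probability vectors (not the paper's data).

import numpy as np

def gini(probs):
    """Gini coefficient: 0 = perfectly equal, higher = more unequal."""
    p = np.sort(np.asarray(probs, dtype=float))
    n = len(p)
    # Standard rank-weighted formula over the sorted values.
    return 2 * np.sum(np.arange(1, n + 1) * p) / (n * p.sum()) - (n + 1) / n

def geometric_mean(probs):
    """Geometric mean; dragged toward zero by any near-zero probability."""
    p = np.asarray(probs, dtype=float)
    return float(np.exp(np.mean(np.log(p))))

unequal = [0.001, 0.10, 0.40, 0.70]  # LEGACY-like: one member near zero
flatter = [0.25, 0.28, 0.33, 0.35]   # LEXIMIN-like: a more even spread

print(gini(unequal), geometric_mean(unequal))   # high Gini, low geo-mean
print(gini(flatter), geometric_mean(flatter))   # lower Gini, higher geo-mean
```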
Discussion
The paper's findings address the research question by presenting a novel algorithmic framework and a specific algorithm (LEXIMIN) that achieve maximal fairness in citizens’ assembly selection. The results demonstrate the significant shortcomings of existing algorithms in ensuring equal opportunity and highlight the practical feasibility and effectiveness of LEXIMIN. The substantial improvement in fairness metrics underscores the algorithm's contribution to a more inclusive and equitable selection process. The algorithm's real-world deployment and readily available implementation signal its potential for widespread adoption and positive impact on the practice of sortition. The study's contribution extends beyond algorithmic advancements, initiating an exchange between theory and practice that promises to further refine panel selection procedures. The incorporation of a live lottery mechanism in some implementations demonstrates progress toward greater transparency.
Conclusion
This research makes significant contributions to the fair selection of citizens' assemblies. The proposed algorithmic framework provides a general solution for a range of fairness measures, while the implemented LEXIMIN algorithm effectively mitigates the unfairness of previous methods. LEXIMIN's widespread adoption demonstrates its practical value. Future research might extend the live lottery approach so that it comes with rigorous fairness guarantees, and further collaboration between theorists and practitioners can yield improvements in areas such as transparency and accessibility.
Limitations
While the paper demonstrates the effectiveness of LEXIMIN on various real-world datasets, the generalizability of the findings may be limited by the specific contexts and characteristics of the studied assemblies. Further research could explore the algorithm's performance in different settings with varying demographic compositions and quota constraints. Additionally, the paper focuses on the second stage of the selection process, assuming a pre-existing pool of volunteers; potential biases introduced during the initial invitation phase could affect the overall fairness of the selection.