Women are credited less in science than men
M. B. Ross, B. M. Glennon, et al.
Gender differences in observed scientific output are well documented: women both publish and patent less than men, yet the underlying causes remain debated. Prior work has suggested possible drivers, including workplace environment, family responsibilities, differences in positions and supervision within labs, and undervaluation of women's work. This study asks whether differences in the attribution of credit within research teams explain part of the observed output gap. Using new data on research teams and their outputs, the authors investigate whether women are systematically less likely than comparable men to be named as authors or inventors on team-produced research. The context includes historical examples such as Rosalind Franklin's uncredited contribution to the discovery of DNA's structure, raising the question of how often similar, less visible cases occur and how they affect women's careers in science.
The paper synthesizes literature on gender disparities in scientific productivity and recognition. Prior studies document that women publish and patent less than men and explore potential explanations such as workplace climate, greater family responsibilities, differences in lab roles and supervision, and specialization patterns across disciplines. Some recent work argues women’s contributions are not less productive but undervalued. Historical and contemporary narratives (e.g., Rosalind Franklin; concerns around recognition in CRISPR-Cas) illustrate under-recognition of women’s contributions. The authors note selection and measurement issues in conventional bibliometric datasets, which include only named contributors, and the limitations of small-sample case studies. This motivates the use of large-scale administrative data on actual team composition and contributions to assess differential attribution by gender.
The study triangulates evidence from three sources (administrative data, a survey of scientists, and qualitative interviews), described below together with the key measures and statistical methods:
- Administrative team-level data (UMETRICS/IRIS): Deidentified finance and human resources data from 57 campuses across 20 U.S. research-intensive universities (subset of 118 campuses from 36 universities contributing to IRIS) covering 2013–2016 employment on grants. Teams are constructed around continuously paid faculty PIs (2013–2016) and all individuals (faculty, postdocs, graduate students, undergraduates, research staff) paid on PI-associated grants. These team records are linked to outputs—39,426 journal articles (2014–2016, Web of Science) and 7,675 patents (applications 2014–2016, PatentsView)—via deterministic and probabilistic matching using grant numbers, names, affiliations, co-author networks, and identity clusters (ResearcherID/ORCID; inventor IDs). The analytical dataset comprises 128,859 individuals in 9,778 teams.
- Measures of credit: (a) Ever-author rate: indicator for whether an individual is named on at least one article or patent during the analysis period. (b) Attribution rate: probability that an individual is named on a given document produced by their team, defined as actual authorships over potential authorships. Potential authorships are constructed by pairing each team document with all team faculty and with non-faculty who were employed by the team in the prior year, with person-team and person-document weights to avoid over-representation of individuals on many teams/documents. (c) High-impact attribution: attribution conditional on document impact measured by forward citations (as of 2018), modeled as a function of log(citations+1).
- Gender imputation: Names are assigned gender using Ethnea (first name combined with ethnicity inferred from the family name; middle name when needed) and Python Gender Guesser for unresolved cases. Validation against self-reported gender (one institution, 12,867 faculty) and the Survey of Earned Doctorates yielded high precision (>92%). Non-binary identities are not captured.
- Statistical analysis: Ordinary least squares regressions estimate the probability of being named on a document with a gender indicator and sequential controls: publication/application date (year×month), PI status, days worked on team, job title, field, and team fixed effects. Weights equal the inverse of the number of teams per employee times the inverse of potential outputs per team; standard errors are clustered by team and employee. Sub-analyses estimate gender gaps by job title and by field, and examine how gaps vary with citations.
- Survey of scientists: E-mailed to a sampling frame constructed from 2017 public ORCID profiles linked to Web of Science publications (2014–2018); after deduplication, 98,022 profiles met criteria. Gender was imputed for stratified sampling, and 28,000 invitations were sent (to men, women, and gender-ambiguous names). 2,660 responses covered experiences with authorship exclusion and the reasons for it; 2,297 respondents detailed contribution roles using the CRediT (Contributor Roles) taxonomy. Quantitative summaries and t-tests compare men and women.
- Qualitative evidence: 887 open-ended responses about credit allocation experiences; 6 in-depth Zoom interviews (4 women, 2 men) using a neutral protocol that did not introduce gender framing. Thematic coding identified patterns regarding voice, power dynamics, unclear authorship rules, and career impacts.
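The attribution-rate construction described above (actual over potential authorships, with inverse person-team and person-document weights) can be sketched in a few lines of Python. The function and toy data below are hypothetical illustrations of the weighting logic, not the paper's code or data.

```python
from collections import defaultdict

def attribution_rate(pairs):
    """Weighted attribution rate over potential authorships.

    pairs: list of (person, team, document, named) tuples, one per
    potential authorship; named is 1 if the person was actually listed.
    Each pair is weighted by 1/(teams per person) * 1/(documents per team),
    so individuals on many teams, and prolific teams, are not over-counted.
    """
    teams_per_person = defaultdict(set)
    docs_per_team = defaultdict(set)
    for person, team, doc, _ in pairs:
        teams_per_person[person].add(team)
        docs_per_team[team].add(doc)

    num = den = 0.0
    for person, team, doc, named in pairs:
        w = 1.0 / len(teams_per_person[person]) / len(docs_per_team[team])
        num += w * named
        den += w
    return num / den

# Toy example: one team, two documents, two members (names hypothetical).
pairs = [
    ("alice", "t1", "d1", 1), ("alice", "t1", "d2", 0),
    ("bob",   "t1", "d1", 1), ("bob",   "t1", "d2", 1),
]
print(attribution_rate(pairs))  # 0.75
```

Here every pair gets weight 1/1 × 1/2 = 0.5, so the rate is simply the share of potential authorships that were realized (3 of 4); the weights matter once people belong to multiple teams or teams differ in output counts.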
- Workforce vs authorship representation: Women constitute 48.25% of the research workforce but only 34.85% of named authors/inventors on team outputs (Extended Data Table 2).
- Ever-author rate: Overall 16.97% of individuals are ever named; men 21.17% vs women 12.15%.
- Attribution rate (actual over potential authorships): overall ≈3.2%; men 4.23% vs women 2.12% (difference 2.11 pp; t=19.58; P<0.0001).
- Regression-adjusted attribution gaps (fully controlled, with team fixed effects): women are 13.24% less likely than men to be named on articles (−0.4210 pp; t=−6.38; P<0.0001) and 58.40% less likely on patents (−0.7652 pp; t=−10.77; P<0.0001).
- By job title (difference between women's share of potential vs actual authorships): research staff 15.72 pp, faculty 7.09 pp, postdocs 5.51 pp (all P<0.0001). Gaps persist across positions after controls, except for undergraduates.
- By field: large gaps persist, e.g., biology 15.02 pp and physical sciences 14.12 pp (both P<0.0001). After controls, gaps remain in 9 of 13 fields for publications and 8 of 13 for patents.
- High-impact outputs: No significant gender difference at zero citations (P=0.1725), but the gap widens with impact. Each one-log-point increase in citations is associated with a 4.78% relative decrease in women's likelihood of being named (relative to the baseline attribution rate of 3.18%; P<0.0001). At 25 citations, women are 19.97% less likely to be named than men (difference 0.6352 pp; P<0.0001).
- Survey—exclusion from authorship: 42.95% of women vs 37.81% of men report having been excluded from authorship (difference 5.14 pp; t=−2.43; P=0.0151).
- Survey—reasons for non-credit: "Contributions underestimated" was reported by 48.97% of women vs 39.13% of men (difference 9.84 pp; P=0.0036); "discrimination/stereotyping/bias" by 15.46% of women vs 7.67% of men (difference 7.80 pp; P=0.0003). Men more often cited "contribution did not justify authorship" (37.68% of men vs 24.74% of women; difference 12.94 pp; P<0.0001). Differences in responsibilities were cited by 17.53% of excluded women vs 12.63% of men (difference 4.90 pp; P=0.0432).
- Survey—credited contributions (CRediT): Women report slightly more contribution roles on authored papers (mean 6.34 vs 6.11; P=0.0907). Women's rates are significantly higher for data curation (44.38% vs 37.42%; P=0.0008), writing the original draft (52.48% vs 45.73%; P=0.0015), and review and editing (86.18% vs 82.57%; P=0.0205), and marginally higher for conceptualization (68.36% vs 64.99%; P=0.0937). Men are higher only in software (18.31% vs 11.67%; P<0.0001).
- Qualitative themes: Unclear and inconsistently applied authorship rules, PI-driven decisions, power imbalances, and the need to self-advocate. Respondents described career harms from exclusion, particularly for high-impact papers.
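The high-impact pattern above follows the linear-probability form described in the methods, P(named) = b0 + b_f·female + b_c·log(c+1) + b_fc·female·log(c+1). The sketch below uses hypothetical coefficients chosen only to mirror the reported qualitative pattern (no gap at zero citations, a gap that grows with impact); they are not the paper's estimates, apart from b0, which echoes the reported 3.18% baseline.

```python
import math

def naming_prob(female, citations, b0=0.0318, b_f=0.0, b_c=0.004, b_fc=-0.0015):
    """Linear probability of being named on a document.

    b0 approximates the reported baseline attribution rate (3.18%);
    b_f, b_c, and b_fc are illustrative placeholder values.
    """
    log_c = math.log(citations + 1)
    return b0 + b_f * female + b_c * log_c + b_fc * female * log_c

def gender_gap(citations):
    """Men's minus women's predicted probability of being named."""
    return naming_prob(0, citations) - naming_prob(1, citations)

print(gender_gap(0))                    # 0.0: no gap at zero citations
print(gender_gap(25) > gender_gap(0))   # True: the gap widens with impact
```

Because the interaction term b_fc multiplies log(citations+1), any choice of b_f≈0 and b_fc<0 reproduces the paper's qualitative finding: parity at zero citations and a growing shortfall for women on more highly cited documents.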
The findings directly address the research question by showing that women’s lower observed output in publications and patents is partly attributable to lower rates of credit attribution rather than lower contributions. The attribution gap persists after controlling for job title, field, team membership, PI status, time worked, and timing, indicating it is not merely due to compositional differences. The gap is widespread across career stages and fields and is larger for higher-impact outputs, suggesting particularly consequential underrecognition. Survey evidence corroborates administrative findings: women report more frequent exclusion, more often attribute it to underestimation or bias, and report doing at least as many or more contribution types when credited. Qualitative accounts highlight unclear norms, PI discretion, power dynamics, and the necessity—and risks—of self-advocacy. Collectively, these results imply that attribution processes may reinforce gender disparities in scientific careers by limiting recognition, advancement, and retention, especially on pivotal projects.
This paper provides convergent evidence from administrative, survey, and qualitative data that women are systematically less likely than men to receive authorship or inventorship credit for work performed on research teams. The analysis introduces a scalable data infrastructure that links detailed team composition to outputs, enabling measurement of potential vs actual authorships and high-impact attribution. The results suggest that part of the so-called productivity gap reflects differences in attribution, not contributions. Future work should examine mechanisms governing credit allocation within teams, the role of institutional and departmental policies, broader identity dimensions (including non-binary and fluid gender identities), unpaid work, and longitudinal links between attribution, career progression, and attrition. Expansion of the data infrastructure to additional institutions and countries and policy evaluations could inform interventions to improve equity in scientific credit.
- Administrative data come from research-intensive U.S. universities; findings may not generalize to all research settings or institutions where women may be differentially represented.
- Only named and paid contributors are observed; unpaid work is not captured, potentially biasing attribution estimates if unpaid contributions differ by gender.
- Gender is algorithmically imputed (though validated); non-binary and fluid identities are not captured.
- Bibliometric linkage, while carefully constructed, may include match errors and cannot capture unnamed contributors.
- Survey data are limited to authors with public ORCID and Web of Science records and may be subject to selection, self-reporting, and social desirability biases; individuals never named as authors are not represented.
- Measures of potential authorship rely on employment in the prior year for non-faculty; alternative windows yield similar but not identical estimates.