
The transformative power of values-enacted scholarship
N. Agate, R. Kennison, et al.
In this essay, Nicky Agate, Rebecca Kennison, Stacy Konkiel, Christopher P. Long, Jason Rhody, Simone Sacchi, and Penelope Weber propose an approach to research evaluation that aligns scholarly practices with core values, aiming to foster a healthier scholarly ecosystem and reshape the culture of higher education.
Introduction
The authors contend that current methods for evaluating scholarly work in higher education are not only unsustainable but increasingly detrimental. These systems rely heavily on easily quantifiable metrics, such as publication counts and citations, and neglect many of the crucial activities that sustain a thriving scholarly environment. The production of a journal article, for example, depends on the often unacknowledged contributions of reviewers, editors, and other support staff. This overemphasis on quantifiable outputs creates a vicious cycle in which the academy rewards the products of scholarship while failing to sustain the processes that enable their creation. Furthermore, bibliometrics and altmetrics, while intended to measure impact, often flatten and devalue nuanced scholarship, its context, and the wider diversity of scholarly activities. The focus on easily quantifiable metrics overlooks much of what enriches academic and public life, such as public-oriented work and community engagement, which go largely unmeasured and underappreciated. Although the paper's context is primarily the US system, many of its points are relevant internationally. The authors argue that these evaluation systems promote a toxic culture of competition and alienate scholars from their personal and institutional values. To counter this, they propose three levels of intervention, all core to the Humane Metrics in Humanities and Social Sciences (HuMetricsHSS) initiative.
Literature Review
The paper draws on a range of sources to support its claims. Solnit (2014) is cited to highlight the limitations of quantifiable metrics in capturing complex phenomena, and Bergeron et al. (2014) to demonstrate the emphasis on research productivity as the primary factor in academic rewards. The authors also engage with works that expose the limits of traditional bibliometrics and altmetrics in assessing nuance, context, and the broad spectrum of scholarly activities. Additional sources critique the negative consequences of current evaluation practices, including a toxic culture of competition, pressure to publish quickly, and alienation from core values. The impact of the Research Excellence Framework (REF) in the UK is discussed, along with criticisms of altmetrics and works illustrating the negative behaviors and distortions that arise from metrics-driven evaluation. Finally, the paper cites works that critique the neoliberal ideology shaping higher education and the pervasive competition and distorted time constraints of academia.
Methodology
The paper's methodology is primarily based on the experiences and observations of the authors as co-principal investigators of the HuMetricsHSS initiative. They analyze existing evaluation systems, identify their flaws, and propose alternative approaches, supporting their arguments with a combination of qualitative and quantitative examples drawn from existing literature, research studies, and anecdotal evidence. The HuMetricsHSS initiative, detailed in the paper, provides a structured framework built on three interventions: realigning how the academy conceptualizes and assesses academic practice (shifting from output to process); breaking research products down into their constituent processes to showcase activities and contributors obscured by the myth of the lone scholar; and establishing contextualized values within institutions through structured conversations that ground more meaningful evaluation. The paper presents case studies of values-enacted approaches at institutions such as Michigan State University (MSU), through its Cultivating Pathways of Intellectual Leadership (CPIL) initiative, and contrasts the HuMetricsHSS approach with existing initiatives such as the San Francisco Declaration on Research Assessment (DORA) and the Responsible Metrics Initiative. The authors further support their claims through a review of pertinent scholarly literature, highlighting issues such as the mismatch between scholars' values and institutional incentives, the overemphasis on specific types of output, and the undervaluing of crucial scholarly processes like peer review. Personal anecdotes and experiences provide additional contextual detail.
The paper proposes a taxonomy of values-enacted indicators, including Expanded Scope and Deepened Focus Indicators, Vicarious Indicators, and Values-Driven Quantification Indicators, to expand and enrich assessment methodologies.
Key Findings
The paper's central finding is that current scholarly evaluation systems are deeply flawed and contribute to a toxic academic culture. These systems overemphasize easily quantifiable metrics, neglecting the complex and multifaceted nature of scholarly work. This creates perverse incentives, such as citation gaming, and sidelines valuable scholarly activities that do not translate easily into quantifiable metrics. The authors demonstrate the limitations of existing metrics in capturing the breadth and depth of scholarly impact, and highlight the disconnect between the values scholars espouse and the metrics by which they are evaluated, a disconnect that pressures junior scholars to conform to perceived values rather than their own. They find that the obsession with metrics distorts academic time, fosters a culture of competition that undermines collaboration, and alienates scholars from their core values. The current system also disregards the contributions of the many individuals involved in scholarly production (reviewers, editors, and others) and so perpetuates the inaccurate myth of the lone scholar. In response, the paper explores values-enacted approaches. The HuMetricsHSS initiative uses structured conversations to help institutions identify and prioritize their values and to align those values with assessment and reward systems. The CPIL initiative at MSU exemplifies this approach, shifting focus from the "means" of scholarship (teaching, research, service) to its "ends" (sharing knowledge, expanding opportunities, mentorship). The paper proposes a taxonomy of values-enacted indicators that move beyond simple citation counts: Expanded Scope and Deepened Focus Indicators (e.g., syllabus citations), Vicarious Indicators (recognizing contributions to the success of others), and Values-Driven Quantification Indicators (point systems weighting various activities).
The authors use these concepts to illustrate how to develop values-aligned frameworks that provide a more holistic and accurate assessment of scholarly contributions.
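To make the third category concrete, here is a minimal sketch of how a Values-Driven Quantification Indicator might work: a point system that weights a range of scholarly activities rather than counting publications alone. The activity names and weights below are purely illustrative assumptions, not values proposed by the authors.

```python
# Hypothetical point system illustrating a Values-Driven Quantification
# Indicator. Activities and weights are illustrative assumptions only;
# the paper argues each institution should derive its own from its values.
ACTIVITY_WEIGHTS = {
    "article_published": 5,
    "peer_review_completed": 3,   # recognizes unacknowledged process work
    "syllabus_adoption": 4,       # an expanded-scope signal
    "mentee_milestone": 4,        # a vicarious signal (others' success)
    "public_talk": 2,             # public-oriented engagement
}

def values_driven_score(activity_counts: dict) -> int:
    """Sum weighted activity counts; unlisted activities score zero."""
    return sum(
        ACTIVITY_WEIGHTS.get(activity, 0) * count
        for activity, count in activity_counts.items()
    )

portfolio = {
    "article_published": 2,
    "peer_review_completed": 4,
    "mentee_milestone": 1,
}
print(values_driven_score(portfolio))  # 2*5 + 4*3 + 1*4 = 26
```

The design point is that the weighting table, not the counting code, carries the institution's values: an institution prioritizing mentorship would raise the weight on mentee milestones, making the alignment between values and rewards explicit and contestable.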
Discussion
The findings of this paper directly address the research question of how to improve scholarly evaluation methods. The authors argue that a values-enacted approach, focusing on processes and practices rather than solely on outputs, is crucial for creating a healthier and more sustainable academic ecosystem. The significance of these results lies in their potential to transform the culture of higher education by fostering a more collaborative, supportive, and values-driven environment. The proposed values-enacted indicators offer a more nuanced and comprehensive assessment of scholarly impact, moving beyond simplistic metrics that fail to capture the complexity of scholarly contributions. The discussion highlights the need for institutional change, emphasizing the importance of aligning institutional values with assessment practices. The paper's relevance to the field lies in its contribution to ongoing discussions about research evaluation and assessment in the humanities and social sciences. It offers practical recommendations for institutions and scholars seeking to improve the evaluation of research and create a more supportive and meaningful scholarly environment.
Conclusion
This paper advocates for a paradigm shift in how we evaluate scholarly work, moving from a narrow focus on easily measurable outputs to a more holistic assessment that values the processes, practices, and values that underpin high-quality scholarship. The authors' proposed values-enacted approach offers a path toward creating a more just, equitable, and fulfilling academic environment, emphasizing collaborative efforts and a more nuanced understanding of impact. Future research could focus on developing and testing these values-enacted indicators across a wider range of disciplines and institutions, examining the long-term impact of values-aligned evaluation systems on scholarly productivity and overall academic culture.
Limitations
The paper primarily relies on the authors' experiences and observations within the HuMetricsHSS initiative and related projects. While case studies are provided, more extensive quantitative research might strengthen claims regarding the effectiveness of values-enacted approaches on a larger scale. Additionally, the paper's focus is predominantly on the US and European contexts, with the applicability to other academic systems needing further exploration. Furthermore, some of the proposed values-enacted indicators require careful design and implementation to avoid misuse or unintended consequences, particularly Vicarious Indicators.