Introduction
The global health community increasingly recognizes the importance of health research in informing policy and practice, yet many low- and middle-income countries (LMICs) face significant gaps in health research capacity. Health research capacity strengthening (RCS) initiatives aim to address these gaps, but their heterogeneity and complexity make systematic assessment of their effectiveness difficult. Existing evaluations often lack rigor, consistent methodologies, and clear linkages between activities, outputs, and outcomes. This study analyzed existing evaluations of health RCS to identify common indicators, assess their quality, and propose improvements for future evaluations, with the goal of providing robust evidence of value to stakeholders, including funders, research organizations, and research users.
Literature Review
The researchers reviewed literature highlighting the need for health research in all countries and noted persistent gaps in health research production, particularly in LMICs. Several profiles and resources exist for assessing LMIC health research capacity, and various organizations have put forward proposals for strengthening health RCS. However, the heterogeneity and complexity of health RCS initiatives hinder systematic assessment of their effectiveness. The researchers emphasize the need for improved monitoring and evaluation strategies and frameworks, including explicit theories of change to guide the selection and interpretation of indicators.
Methodology
This study employed a qualitative approach. The researchers initially consulted funding agencies to identify available evaluation reports of health RCS initiatives. Using snowball sampling, they collected 54 reports and purposively selected 18 reports from 12 evaluations to maximize diversity. A quality appraisal assessed the clarity of each evaluation's purpose, methodology description, and indicator justification against Development Assistance Committee (DAC) evaluation standards and SMART criteria (specific, measurable, achievable, relevant, time-bound). A systematic framework analysis, guided by the ESSENCE Planning, Monitoring and Evaluation framework, was used to extract information on the indicators and their context. Two researchers independently coded each report and resolved discrepancies through discussion. Findings were iteratively checked with key health RCS evaluation stakeholders.
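To make the dual-coding step concrete, below is a minimal illustrative sketch (in Python) of how extracted indicators might be organized by report, capacity level, and results-chain stage, and how disagreements between two independent coders could be flagged for discussion. The data structure, field names, and example indicator are hypothetical: the study describes a manual framework analysis guided by the ESSENCE framework, not any particular software.

# Illustrative sketch only: data structures and field names are hypothetical,
# not taken from the study or the evaluation reports it analyzed.
from dataclasses import dataclass

@dataclass
class IndicatorRecord:
    report_id: str     # which evaluation report the indicator came from
    level: str         # "individual", "institutional", or "national/international"
    stage: str         # "activity", "output", or "outcome"
    description: str   # indicator text as extracted from the report
    coder: str         # which of the two researchers coded this record

def find_discrepancies(records):
    """Group dual-coded extractions by (report, indicator text) and flag any
    indicator where the two coders assigned different levels or stages, so the
    disagreement can be resolved through discussion."""
    by_key = {}
    for rec in records:
        by_key.setdefault((rec.report_id, rec.description), []).append(rec)
    discrepancies = []
    for key, recs in by_key.items():
        if len({(r.level, r.stage) for r in recs}) > 1:
            discrepancies.append(key)
    return discrepancies

# Example: one hypothetical indicator coded differently by the two researchers
records = [
    IndicatorRecord("R01", "individual", "output",
                    "Number of fellows completing PhD training", "coder_A"),
    IndicatorRecord("R01", "individual", "outcome",
                    "Number of fellows completing PhD training", "coder_B"),
]
print(find_discrepancies(records))  # [('R01', 'Number of fellows completing PhD training')]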
Key Findings
The 12 evaluations varied widely in design and quality. Many lacked baseline data, limiting the assessment of change and its attribution to the health RCS programs, and indicator descriptions and justifications were inconsistent. Indicators mostly captured activities, outputs, or outcomes, with few evaluations explicitly linking them. Individual-level indicators tended to be more quantitative, comparable, and attentive to equity considerations (e.g., gender, nationality, discipline), whereas institutional and national/international indicators were extremely diverse. Although individual evaluations rarely linked activities, outputs, and outcomes, synthesizing across evaluations allowed the researchers to construct potential pathways of change and assemble corresponding indicators. Common individual-level indicators covered research skills training, mentoring, and job outcomes; institutional-level indicators focused on research infrastructure, management capacity, and collaborations; and national/international indicators encompassed stakeholder engagement, research uptake, national research systems, and networking activities. Across evaluations, however, indicator data were rarely disaggregated by equity categories, which the authors highlight as a concern.
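As an illustration of what such an assembled pathway of change might look like, the sketch below (Python, purely hypothetical) strings together an individual-level pathway from training activities to job outcomes, with example indicators attached to each step. The steps and indicators are loosely based on the indicator categories named above and are not reproduced from any specific evaluation report.

# Hypothetical sketch of a "pathway of change" assembled from indicators
# across evaluations; the content is illustrative, not drawn from the reports.
pathway_individual_level = [
    {"stage": "activity",
     "step": "Research skills training and mentoring delivered",
     "indicators": ["number of training sessions held",
                    "number of mentees matched to mentors"]},
    {"stage": "output",
     "step": "Researchers trained",
     "indicators": ["number of trainees completing courses, "
                    "disaggregated by gender and nationality"]},
    {"stage": "outcome",
     "step": "Improved job outcomes",
     "indicators": ["proportion of trainees in research posts "
                    "after a defined follow-up period"]},
]

for step in pathway_individual_level:
    print(f"{step['stage']:>8}: {step['step']}")
    for ind in step["indicators"]:
        print(f"          - {ind}")

Representing a pathway this way makes explicit which links in the chain from activities to outcomes are supported by measured indicators and which remain assumptions.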
Discussion
The findings highlight the lack of rigorous evaluation designs and consistent indicator use across health RCS programs. The wide variability among evaluations underscores the need for standardized methodologies and clearer conceptual frameworks, and the scarcity of data disaggregated by equity categories reveals a crucial gap in monitoring the impact of these initiatives on disadvantaged groups. Limited attention to theories of change hindered assessment of how health RCS initiatives bring about change. The results contribute to the field by providing a comprehensive overview of the indicators used and by suggesting improvements for future evaluations: a more standardized framework and better linkage of indicators within explicit theories of change would greatly enhance understanding of health RCS effectiveness.
Conclusion
This study synthesized existing knowledge on evaluating health RCS programs in LMICs. It emphasizes the need for improved evaluation designs, better indicator measurement, stronger linkages between activities, outputs, and outcomes through theories of change, and greater attention to equity considerations. Future research should focus on developing robust and context-specific evaluation frameworks. Furthermore, investing in prospective indicator measurement and systematic linkage of indicators will be critical for generating robust evidence and justifying investments in health RCS.
Limitations
The study was limited by the availability of evaluation reports: not all funders provided reports, and the analysis was labor-intensive. Although the selected evaluations were diverse, they may not fully represent all health RCS initiatives, and the retrospective nature of most evaluations limits causal inference. Interpretation of indicator data also relied on the narrative descriptions in the reports, which varied in quality and detail.