Introduction
The rise of evidence-based policy has led to increased focus on policy evaluations globally. Governments strive to institutionalize evaluation for more efficient and democratic decision-making. However, even in organizations with high evaluation maturity—where resources are dedicated to producing high-quality evaluations and evaluation use is a priority—research waste, or the underutilization of evaluation findings, remains a significant problem. This research explores the factors contributing to such waste in a high-maturity setting, with the ultimate aim of reducing the misallocation of public funds. While the existing literature identifies numerous facilitators and barriers to evaluation use, much of the evidence is anecdotal, hindering systematic cross-case analysis. This study addresses that gap by systematically examining the use of evaluations within a mature evaluation context, specifically the Policy and Operations Evaluation Department (IOB) of the Dutch Ministry of Foreign Affairs. The Netherlands and IOB rank high in evaluation maturity indices, providing a valuable case study for identifying the key conditions influencing evaluation utilization.
Literature Review
A comprehensive literature review examined factors affecting evaluation use, grouped into seven categories: policymaker involvement, political context, evaluation timing, evaluation report attributes, evaluator characteristics, policymaker characteristics, and organizational characteristics. The review revealed a diverse range of factors but a lack of a consistent research agenda and of empirical evidence. Many studies focus on facilitators rather than barriers, and interactions between factors are often left unexplored. Inconsistent conceptualization and operationalization of factors further complicate cross-study comparison. This study aimed to address these gaps by focusing on a setting of high evaluation maturity to pinpoint critical factors and their interactions.
Methodology
The study focused on 18 evaluations conducted by IOB between 2013 and 2016. Data collection involved reviewing evaluation documentation (Terms of Reference, reports, ministerial responses), interviewing evaluators and policymakers, and administering questionnaires to the relevant policymakers. The researchers employed Qualitative Comparative Analysis (QCA), a method well suited to analyzing complex causal relationships, which allowed them to identify necessary and sufficient conditions for evaluation use. Initial analysis produced a long list of potential factors, but many showed little variation across cases precisely because of IOB's high evaluation maturity (e.g., evaluator credibility, report quality), and factors that were difficult to measure reliably (e.g., feasibility of recommendations, initiative for contact) were excluded. The analysis therefore focused on four key factors:

1. **Political salience:** the perceived importance of the evaluation on the political agenda, as judged by both evaluators and policymakers.
2. **Timing:** whether the evaluation coincided with policy formulation or revision.
3. **Novel knowledge:** whether policymakers reported gaining new knowledge from the evaluation.
4. **Policymaker interest:** active engagement by the main policymakers (presentations, questions, etc.).

Using a crisp-set approach, each factor was coded as present (1) or absent (0) through a detailed calibration process, and the analysis identified necessary and sufficient conditions for both evaluation use and non-use.
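To make the crisp-set logic concrete, here is a minimal sketch in Python of the two tests involved. Real analyses are typically run with dedicated QCA software, and the toy cases below are hypothetical, constructed only to mirror the reported pattern, not IOB's actual coding of the 18 evaluations. The tests themselves are standard: a condition is necessary if every case with the outcome exhibits it, and a combination of conditions is sufficient if every case exhibiting the combination has the outcome.

```python
# Minimal crisp-set QCA sketch. The cases are HYPOTHETICAL toy data,
# invented to mirror the reported pattern, not IOB's actual coding.
from itertools import combinations

CONDITIONS = ["salience", "timing", "novel", "interest"]

# Each row codes the four conditions and the outcome ("use") as 1 or 0.
cases = [
    {"salience": 1, "timing": 1, "novel": 1, "interest": 1, "use": 1},
    {"salience": 0, "timing": 1, "novel": 1, "interest": 1, "use": 1},
    {"salience": 0, "timing": 1, "novel": 0, "interest": 1, "use": 1},
    {"salience": 1, "timing": 1, "novel": 0, "interest": 1, "use": 0},
    {"salience": 0, "timing": 0, "novel": 1, "interest": 1, "use": 0},
    {"salience": 1, "timing": 1, "novel": 1, "interest": 0, "use": 0},
    {"salience": 1, "timing": 0, "novel": 1, "interest": 1, "use": 0},
]

def is_necessary(cond, cases):
    # Necessity: every case with the outcome also shows the condition.
    return all(c[cond] == 1 for c in cases if c["use"] == 1)

def is_sufficient(combo, cases):
    # Sufficiency: every case showing the whole combination has the outcome.
    matching = [c for c in cases if all(c[k] == 1 for k in combo)]
    return bool(matching) and all(c["use"] == 1 for c in matching)

print("Necessary:", [c for c in CONDITIONS if is_necessary(c, cases)])

minimal = []  # smallest sufficient combinations; supersets are skipped
for r in range(1, len(CONDITIONS) + 1):
    for combo in combinations(CONDITIONS, r):
        if any(set(m) <= set(combo) for m in minimal):
            continue
        if is_sufficient(combo, cases):
            minimal.append(combo)
print("Sufficient:", [" * ".join(m) for m in minimal])
```

Run as-is, the toy data reproduce the headline result reported below: timing and policymaker interest come out as necessary, and timing * novel * interest is the only minimal sufficient combination.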
Key Findings
The QCA analysis revealed two necessary conditions for evaluation use: appropriate timing (the evaluation coinciding with policy formulation) and clear policymaker interest. These alone, however, were insufficient. The combination of appropriate timing, clear policymaker interest, and the generation of novel knowledge was sufficient for evaluation use, and it accounted for four of the five evaluations that were instrumentally used. The analysis of non-use revealed a more complex picture, with three distinct paths:

1. Absence of novel knowledge combined with inappropriate timing.
2. Absence of novel knowledge combined with low political salience and a lack of policymaker interest.
3. Inappropriate timing, even in the presence of policymaker interest.

Notably, the analysis suggests that political salience has surprisingly little impact on evaluation use.
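These results can be restated compactly in conventional csQCA Boolean notation (the restatement below is ours, not notation taken from the paper: uppercase marks a condition as present, lowercase as absent, * is AND, + is OR, and => reads "implies"):

```
USE  =>  TIMING * INTEREST                                   (necessity)
TIMING * INTEREST * NOVEL  =>  USE                           (sufficiency)
novel*timing + novel*salience*interest + timing*INTEREST  =>  use   (non-use paths)
```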
Discussion
The findings challenge the notion of a linear, rational approach to evidence-based policy: even in high-maturity settings, evaluation use is not guaranteed. The study's key contribution lies in identifying the interplay between timing, policymaker interest, and the generation of novel knowledge as crucial to successful evaluation utilization. Engaging policymakers in the evaluation process, for instance by encouraging them to suggest questions, emerges as key to maximizing impact. The finding that political salience matters less than previously thought is encouraging, since it suggests that the more controllable factors (timing, knowledge generation, policymaker interest) have the greater influence on utilization.
Conclusion
This study provides valuable insights into the relationship between knowledge production and use in policy evaluation. It highlights the critical role of policymaker involvement and the timing of evaluations. The finding that political salience has a less significant effect than initially thought offers a positive outlook. Further research could explore whether these findings are generalizable across diverse organizational and political contexts, delve deeper into the nature of credible knowledge in various contexts, and consider a broader range of explanatory factors beyond the scope of this study.
Limitations
The study's findings are based on a single case study (IOB) and a relatively small number of evaluations. This limits the generalizability of the results to other organizations and policy fields. The focus was primarily on the perspectives of civil servants, neglecting those of politicians. The complexity of the policy field (development cooperation) might also influence the results. More research is needed across different organizations, policy domains, and political systems to confirm these findings and further refine our understanding of evaluation utilization.