Introduction
The effective use of research knowledge is crucial for addressing educational inequities, yet a persistent gap separates research findings from their application in policy and practice. This gap has driven the development of the 'knowledge field,' which focuses on improving the connection between research and its users. The umbrella term K* encompasses the functions and processes that link knowledge, practice, and policy, with the aim of making research useful, usable, and utilized. Various strategies, including capacity building, are employed to promote K*, and investment in K* training programs requires robust evaluation to ensure resources are allocated efficiently. This scoping review synthesizes existing evaluation methods and outcome indicators for K* training programs in order to identify areas for improvement and to inform the evaluation of a specific K* training program, CREATEd.
Literature Review
The introduction cites several studies to establish the context of knowledge mobilization and the need for capacity building in this area. It draws on Fahim et al. (2023) and Walter et al. (2005), which review the multi-pronged approaches needed to promote research use and highlight capacity building as a crucial strategy. It also references studies calling for increased funding and investment in K* capacity building, and it introduces CREATEd, a program designed to improve equitable relationships between research and practice communities that requires a robust evaluation framework.
Methodology
This scoping review followed Arksey and O'Malley's (2005) framework, encompassing five stages: (1) defining the research question, (2) identifying relevant studies, (3) selecting studies, (4) charting the data, and (5) collating, summarizing, and reporting the results. The review included evaluations of K* training programs for various professionals. Eight electronic databases and Google Scholar were searched, yielding 824 unique resources after duplicate removal; a two-stage screening process, based on title/abstract and then full-text review, resulted in 47 eligible studies. Scientometric analysis (using VOSviewer) mapped the literature, examining publication trends, citation patterns, author/institution/country collaborations, bibliographic coupling, and keyword co-occurrence. Content analysis extracted methodological characteristics (evaluation type and terminology, design, sample size, data collection techniques, and data collection timeline) and the outcomes assessed, which were categorized using the Kirkpatrick four-level model. Finally, the future evaluation approaches suggested by the reviewed studies were extracted and analyzed.
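To illustrate the keyword co-occurrence component of the scientometric analysis, the minimal Python sketch below counts how often pairs of author keywords appear together across included studies. The review itself produced these maps with VOSviewer; the keyword lists here are invented placeholders for illustration only.

from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists for a handful of included studies; the
# actual review generated its co-occurrence maps with VOSviewer.
study_keywords = [
    ["knowledge translation", "capacity building", "implementation science"],
    ["knowledge mobilization", "training", "evaluation"],
    ["implementation science", "training", "evaluation"],
]

# Count how often each pair of keywords appears in the same study,
# i.e. a keyword co-occurrence matrix stored in sparse form.
pair_counts = Counter()
for keywords in study_keywords:
    for pair in combinations(sorted(set(keywords)), 2):
        pair_counts[pair] += 1

# The most frequent pairs hint at thematic clusters in the literature.
for (kw_a, kw_b), count in pair_counts.most_common(5):
    print(f"{kw_a} <-> {kw_b}: {count}")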
Key Findings
Scientometric analysis revealed growth in K* training evaluation publications after 2012, mainly from the health and implementation science fields; Implementation Science published both the most articles and the most highly cited ones. Collaboration was limited, occurring mostly among authors in the US and Canada. Content analysis showed a predominance of process and outcome evaluations relying on surveys and interviews, and small sample sizes were common, limiting statistical power. Mapped against the Kirkpatrick model, most studies assessed trainees' reactions and learning, while fewer evaluated behavior change and downstream results. Limitations frequently cited by the studies' authors included simple evaluative designs, small sample sizes, short-term evaluations, and the lack of curriculum evaluation; many studies proposed stronger designs, larger samples, and long-term follow-up, and contribution analysis was suggested to address attribution challenges in complex interventions. Many studies lacked details on the structure of the training programs themselves or on how evaluations were used to improve the programs.
Discussion
The findings show a clear need for more rigorous evaluations of K* training programs. The limited number of studies and small sample sizes hinder the ability to draw definitive conclusions about program effectiveness. The overreliance on self-report measures raises concerns about bias. The lack of attention to long-term impacts and downstream organizational effects limits the understanding of the overall contribution of K* training. The predominantly health-focused literature suggests a need for broader representation of other fields. The limited collaborations indicate a need for increased networking and knowledge sharing to avoid duplication of effort and maximize the effectiveness of future initiatives. The study highlights how a robust and comprehensive evaluation approach, including a theory of action, logic model, and multiple data sources, is essential for maximizing the impact and understanding of K* training programs.
Conclusion
This scoping review provides a comprehensive overview of the existing literature on K* training program evaluations. It reveals a significant need for more rigorous, well-designed studies with larger sample sizes and longer-term follow-up to comprehensively assess the impact of these interventions. Future studies should consider more robust research designs, including two-group designs and triangulation of data. Contribution analysis and a clearly articulated theory of action can strengthen causal claims about program effectiveness. Furthermore, efforts should be made to expand the scope of research beyond health and to adopt standardized measures for evaluating outcomes.
Limitations
The review may have missed some K* training evaluations due to publication biases and limitations in search terms. The small number of included studies limits the generalizability of the findings. The review did not analyze the structure of training programs or how evaluations were used for program improvement. The focus on English-language publications might exclude relevant non-English-language studies.