Evaluations of training programs to improve capacity in K*: a systematic scoping review of methods applied and outcomes assessed


This study examines how often K* training programs are evaluated, the methods used, and the outcomes assessed, and offers recommendations for future evaluation practice. Conducted by Samantha Shewchuk, James Wallace, and Mia Seibold, it highlights a pressing need for more comprehensive evaluations across professional fields.

Abstract
This paper examines how frequently K* training programs have been evaluated, synthesizes information on the methods and outcome indicators used, and identifies potential future approaches for evaluation. We conducted a systematic scoping review of publications evaluating K* training programs, including formal and informal training programs targeted toward knowledge brokers, researchers, policymakers, practitioners, and community members. Using broad inclusion criteria, eight electronic databases and Google Scholar were systematically searched using Boolean queries. After independent screening, scientometric and content analyses were conducted to map the literature and provide in-depth insights into the methodological characteristics, outcomes assessed, and future evaluation approaches proposed by the authors of the included studies. The Kirkpatrick four-level training evaluation model was used to categorize training outcomes. Of the 824 unique resources identified, 47 were eligible for inclusion in the analysis. The number of published articles increased after 2014, with most conducted in the United States and Canada. Many training evaluations were designed to capture both process and outcome variables. We found that surveys and interviews of trainees were the most commonly used data collection techniques. Downstream organizational impacts that occurred because of the training were evaluated less frequently. Authors of the included studies cited limitations such as the use of simple evaluative designs, small cohorts/sample sizes, lack of long-term follow-up, and an absence of curriculum evaluation activities. This study found that many evaluations of K* training programs were weak, even though the number of training programs (and the evaluations thereof) has increased steadily since 2014. We found a limited number of studies on K* training outside the field of health and few studies that assessed the long-term impacts of training. More evidence from well-designed K* training evaluations is needed, and we encourage future evaluators and program staff to carefully consider their evaluation design and the outcomes they intend to assess.
Publisher
HUMANITIES AND SOCIAL SCIENCES COMMUNICATIONS
Published On
Nov 29, 2023
Authors
Samantha Shewchuk, James Wallace, Mia Seibold
Tags
K* training programs
evaluation frequency
outcome indicators
systematic review
Kirkpatrick model
organizational impact
long-term follow-up