Do we achieve anything by teaching research integrity to starting PhD students?

Education


S. Abdi, S. Fieuws, et al.

Discover how a mandatory 3-hour research integrity course impacted over 1,000 starting PhD students at KU Leuven. Conducted by Shila Abdi, Steffen Fieuws, Benoit Nemery, and Kris Dierickx, this study highlights how formal education in research integrity can foster meaningful conversations and encourage an enduring change in behavior.

Introduction
The paper addresses whether a mandatory 3-hour lecture on research integrity for starting PhD students improves their knowledge, attitudes, and behaviors regarding research integrity. Motivated by evidence of widespread questionable research practices and by the scarcity of evaluations of research integrity training, KU Leuven instituted a compulsory session for first-year PhD students across disciplines. The authors conducted a longitudinal evaluation with a pre-test, an immediate post-test, and a 3-month follow-up, including a control group of Master students who did not receive the intervention. Multivariate models tested the null hypothesis that knowledge, attitude, and behavior scores did not change from baseline.
Literature Review
Prior work indicates concerns about research misconduct and questionable research practices (e.g., Martinson et al. 2005; Fanelli 2009). Evaluations of research integrity education have been scarce and methodologically limited, often focusing on single disciplines, lacking longitudinal data, or using meta-analytic approaches that pool diverse interventions (Antes et al. 2010; Watts et al. 2017; Hendoe et al. 2017; Langlais and Bent 2018). A meta-analysis reported generally positive but variable effect sizes for responsible conduct of research instruction (Watts et al. 2017). Literature on pedagogy suggests that traditional lectures yield poorer long-term retention than active learning methods (Freeman et al. 2014; Ramsden 2003; Ruiz-Primo et al. 2011). The present study contributes by evaluating a single, well-defined lecture-based intervention across multiple disciplines with longitudinal follow-up and a control group.
Methodology
Design: Longitudinal cohort with an intervention group and a non-randomized control group. Timepoints: pre-test (immediately before the lecture), post-test (immediately after), and a 3-month online follow-up. Behavior items were asked at pre-test and follow-up only.

Participants: The intervention group comprised first-year PhD students from all doctoral schools at KU Leuven attending a mandatory 3-hour research integrity lecture, offered four times (Nov 2018; Jan, Mar, and May 2019) to mixed groups of 200–400 students from the biomedical, natural, and social sciences/humanities. The control group comprised Master students from similar disciplines attending regular courses with no research integrity content in Feb–Apr 2019.

Intervention: A single 3-hour, English-language lecture delivered by a panel of five lecturers from different disciplines (including two co-authors). Topics covered elements of research, data management, plagiarism, conflicts of interest, and publication ethics, with some interactive polling.

Instruments: A 36-item draft questionnaire on knowledge, attitude, and behavior was developed from published lists of research misbehaviors and misconduct. Content validity was assessed by six independent experts who rated item relevance on a 4-point scale, with agreement assessed via multi-rater kappa; seven items were removed, yielding 29 items: six multiple-choice knowledge items (total score 0–6), ten attitude items on a 5-point Likert scale (total score 10–50; the first eight reverse-coded so that higher scores reflect more positive attitudes), one top-3 ranking item on reasons for misconduct, seven behavior items on a 4-point Likert scale (summed 5–15), and five yes/no behavior items. A pilot test with ten departmental members assessed usability.

Procedures: A paper-based pre-test (yellow pages with a six-digit code) was completed before the lecture, and a paper-based post-test (pink pages, same code) immediately after; demographics and behavior items were captured at post-test. The follow-up was administered online via LimeSurvey about 3 months later, with up to three reminders and five additional items. Control procedures mirrored the intervention group's timing, with the post-test after approximately 4 hours of regular (non-integrity) lectures; questionnaires were slightly adapted for Master-level phrasing.

Ethics: Approved by the Social and Ethical Affairs Committee of KU Leuven (G-2018 10 1530). Written informed consent was obtained; no incentives were offered; confidentiality was assured.

Data and analysis: Pre- and post-test paper data were entered into Excel, merged with the online follow-up data, and analyzed in SAS 9.4. Multivariate linear models for longitudinal measurements (unstructured covariance; direct likelihood approach) assessed between-group differences in change from pre-test for the continuous outcomes (knowledge, attitude, behavior). Binary items were analyzed via logistic models with generalized estimating equations. All reported p-values are two-sided. A sensitivity analysis excluded one knowledge item on data ownership because the institution lacked an explicit policy on the topic.
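The scoring scheme described above can be sketched in a few lines. This is a minimal illustration: the answer key and item values are hypothetical, and only the score ranges and the reverse-coding rule for the first eight attitude items come from the study.

```python
# Sketch of the questionnaire scoring (hypothetical items; only the
# score ranges and reverse-coding rule are taken from the study).

def knowledge_score(answers, key):
    """Six multiple-choice items: one point per correct answer (0-6)."""
    return sum(1 for a, k in zip(answers, key) if a == k)

def attitude_score(ratings):
    """Ten 5-point Likert items summed to a 10-50 total. The first
    eight items are reverse-coded (1 -> 5, ..., 5 -> 1) so that higher
    totals reflect more positive attitudes toward research integrity."""
    assert len(ratings) == 10 and all(1 <= r <= 5 for r in ratings)
    recoded = [6 - r for r in ratings[:8]] + list(ratings[8:])
    return sum(recoded)

# A respondent who strongly disagrees (1) with the eight negatively
# worded items and strongly agrees (5) with the last two reaches the
# maximum attitude score of 50.
print(attitude_score([1] * 8 + [5] * 2))  # -> 50
```

Reverse-coding before summing is what makes the 10–50 total interpretable in one direction; without it, agreement with negatively worded items would inflate the score.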
Key Findings
Sample and response: 1,039 PhD students completed the pre-test, 920 the post-test, and 560 the follow-up. In the control group, 419 completed the pre-test, 256 the post-test, and 127 the follow-up; 30% completed all three timepoints.

Knowledge: Both groups improved from pre- to post-test, with significantly greater gains in the intervention group (intervention: 3.83 to 4.27; control: 3.59 to 3.75). At the 3-month follow-up, knowledge remained above baseline in both groups, but the between-group difference in change was no longer significant.

Attitude: Both groups improved at post-test and follow-up, with a significantly greater immediate improvement in the intervention group (intervention: 39.68 to 42.99; control: 36.87 to 37.27). The group difference was not sustained at 3 months.

Behavior: On the Likert-scale behavior items, the intervention group showed a significant but small improvement toward better behavior from pre-test to follow-up, whereas the control group showed a significant decrease. For two specific behavior items (e.g., adapting data management plans), both groups improved; the intervention group also showed a notable change on non-typical behavior (reported level = 2). Overall, the behavioral effects suggested a more prolonged impact than the knowledge and attitude effects.

Awareness and application: At 3 months, 93% of intervention participants reported having had conversations about research integrity (primarily with fellow PhD students, 43%; someone outside work, 16%; supervisors, 13%). In addition, 79% reported applying course content, most often related to authorship (24%), data management (22%), and publication (18%).
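The between-group comparisons above amount to contrasting change scores: the intervention group's pre-to-post gain minus the control group's. A minimal sketch using the reported means (a back-of-the-envelope contrast only; the paper fits multivariate longitudinal models with an unstructured covariance, not this simple arithmetic):

```python
# Difference in change scores ("difference-in-differences") computed
# from the reported group means; illustration only, not the paper's model.
pre_int_k, post_int_k = 3.83, 4.27    # intervention group: knowledge means
pre_ctl_k, post_ctl_k = 3.59, 3.75    # control group: knowledge means

extra_gain_knowledge = (post_int_k - pre_int_k) - (post_ctl_k - pre_ctl_k)
print(round(extra_gain_knowledge, 2))  # -> 0.28 of 6 knowledge points

pre_int_a, post_int_a = 39.68, 42.99  # intervention group: attitude means
pre_ctl_a, post_ctl_a = 36.87, 37.27  # control group: attitude means

extra_gain_attitude = (post_int_a - pre_int_a) - (post_ctl_a - pre_ctl_a)
print(round(extra_gain_attitude, 2))  # -> 2.91 on the 10-50 attitude scale
```

This contrast reproduces the direction of the reported effects; establishing significance and handling drop-out require the full models described in the Methodology.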
Discussion
The intervention produced immediate, statistically significant but modest improvements in knowledge and attitudes beyond test effects observed in the control group, aligning with expectations that brief lectures can shift short-term outcomes. Sustained differences between groups at 3 months were not detected for knowledge and attitude, consistent with evidence that traditional lecture-based instruction yields limited long-term retention compared to active learning. Behavioral measures indicated some prolonged positive impact among PhD students relative to controls, suggesting that certain practices might change even when knowledge/attitude gains wane. A key contribution is the inclusion of a contemporaneous control group, revealing test effects wherein controls also improved at post-test. The large, diverse, cross-disciplinary PhD cohort enhances generalizability beyond a single field. Importantly, the course appeared to catalyze dialogues about research integrity and self-reported application of content in practice, potentially influencing research culture beyond measurable scores. These findings support integrating formal instruction with supplemental interactive workshops and supervisor training to foster an institutional climate conducive to research integrity.
Conclusion
A single, mandatory 3-hour research integrity lecture for starting PhD students produced modest immediate gains in knowledge and attitudes beyond test effects, with limited retention at 3 months, but showed indications of longer-lasting behavioral influence. Notably, the course stimulated widespread conversations about integrity and self-reported application of principles in areas such as authorship and data management, suggesting value in shaping a culture of integrity. Future work should: evaluate active learning and multi-session formats; extend follow-up durations; include randomized or better-matched comparison groups; measure objective behavioral outcomes; and broaden content to cover topics such as p-hacking, HARKing, retractions, and publication bias. Integrating PhD and supervisor-focused training may yield greater and more sustained impacts.
Limitations
- Non-randomized design; the control group comprised Master students rather than PhD students who did not receive the intervention.
- The traditional lecture format may limit long-term retention of knowledge and attitudes.
- The questionnaire omitted certain topics (e.g., p-hacking, HARKing, retractions, citation/publication bias, pre-registration), potentially narrowing the measured effects.
- Differential attrition, particularly higher drop-out in the control group, may bias follow-up comparisons; participants who completed follow-up had slightly higher baseline scores.
- Disciplinary composition differed between groups; the proportion of natural-sciences students was high, reflecting the institutional distribution.
- Potential test effects, as controls also improved at post-test.
- Some procedural constraints (e.g., variable timing and length of the control group's lectures) and possible measurement inconsistencies in behavior scoring.
- Attendance at Master courses was not registered, so exact control-group eligibility is unknown.