Self-Explanation Effect of Cognitive Load Theory in Teaching Basic Programming

Computer Science


C. Sandoval-Medina, C. A. Arévalo-Mercado, et al.

Students often struggle with basic programming, contributing to high failure and dropout rates. This study designed, developed, and tested Cognitive Load Theory (CLT)–based instructional materials that leverage effects such as worked examples and self-explanation, finding positive impacts relative to traditional materials in a quasi-experimental study at the Autonomous University of Aguascalientes. The research was conducted by Carlos Sandoval-Medina, Carlos Argelio Arévalo-Mercado, Estela Lizbeth Muñoz-Andrade, and Jaime Muñoz-Arteaga.
Introduction

The paper addresses the persistent difficulty beginners face in learning programming, a challenge reflected in high failure and dropout rates (often around 34%). With growing demand for programmers and the importance of programming skills across domains (industry 4.0, data science, AI), the study examines Cognitive Load Theory (CLT) as an instructional framework to reduce extraneous load and optimize learning. The authors designed and applied CLT-based instructional materials—focused on the self-explanation effect—to support teaching basic C++ programming in the structured paradigm at the Autonomous University of Aguascalientes. A quasi-experimental pre-post study compares the effectiveness of these self-explanation-based materials against traditional classroom examples.

Literature Review

The self-explanation effect has shown benefits across educational domains (Bisra et al., 2018), though limitations exist, especially when learners lack sufficient prior knowledge (Rittle-Johnson & Loehr, 2017). In programming education: Vihavainen et al. (2015) reported improved grades using self-explanation tasks (with/without help options); Yen & Wang (2017) built a self-explanation-based C++ environment with ontology-driven feedback to correct misconceptions; Price et al. (2020) found that explain prompts aid learning but increase time on task. CLT effects like worked examples and completion problems generally yield positive outcomes in programming (Beege et al., 2021; Sands, 2019; Zhi et al., 2019), though the expertise reversal effect and other constraints can reduce effectiveness (Kalyuga, 2007; Moreno, 2006).

Methodology

The study had two phases.

(1) Design and development of CLT-based instructional materials, focusing on self-explanation problems and, for a second treatment, combining them with videos of worked examples. The materials target basic C++ topics (arrays, vectors, and strings): array sorting, nested loops, matrix addition, and string length, reversal, and comparison. Traditional classroom problems were transformed into self-explanation format: students receive a real-life context, faulty program output, and source code with deliberately introduced errors; they must identify the errors, explain them to a "classmate," and propose corrections. Each exercise has three parts: a problem description, the expected output, and the erroneous source code. Worked-example videos were created to combine self-explanation with the worked-example effect, following CLT principles and a five-phase development process aimed at reducing cognitive load. The materials were implemented on a Moodle-based platform using open-ended essay-type questions that allow text or audio explanations; an automatic feedback system presented instructor explanations and suggested solutions immediately after each exercise.

(2) Application and evaluation via a quasi-experimental PRE-POST design in distance education (COVID-19 context) with three first-year Computer Systems Engineering groups taught by the same instructor: Exp1 (n=50) received self-explanation problems; Exp2 (n=49) received self-explanation problems plus worked-example videos; and the Control group (n=45) studied using traditional in-class exercises. Learning outcomes were measured with standardized, calibrated departmental exams administered in the second (PRE) and third (POST) grading periods, covering only arrays, vectors, and strings. PRE and POST used isomorphic problems (examples provided) to enable comparability.

Key Findings

Descriptive statistics: Exp1 improved from a PRE mean of 7.76 (SD=1.92; range 3.60–10.00) to a POST mean of 9.46 (SD=0.74; range 8.00–10.00), a mean increase of 1.7. Exp2 improved from a PRE mean of 5.97 (SD=2.16; range 2.60–10.00) to a POST mean of 8.40 (SD=1.76; range 2.60–10.00), a mean increase of 2.4. Control moved from a PRE mean of 8.10 (SD=2.55; range 0.80–10.00) to a POST mean of 8.73 (SD=1.84; range 3.80–10.00), a mean increase of 0.6.

Paired t-tests (PRE vs POST): Exp1 t=5.80, p<0.001 (mean diff=1.7); Exp2 t=6.72, p<0.001 (mean diff=2.4); Control t=2.49, p=0.017 (mean diff=0.6).

One-way ANOVA: groups differed at both PRE (F=12.849, p<0.001) and POST (F=6.296, p=0.002).

Tukey post hoc, PRE: Exp1 vs Control not significant (p=0.734); Exp2 differed from Control (mean diff=−2.13, p<0.001) and from Exp1 (mean diff=−1.79, p<0.001). POST: Exp1 differed from Exp2 (mean diff=1.06, p=0.002); Exp1 vs Control not significant (p=0.055); Control vs Exp2 not significant (p=0.528).

Histograms show clear POST shifts toward higher scores in both experimental groups (Exp1 concentrated in 8–10; Exp2 shifted from 2–6 to 7–10), while Control scores remained largely in the 9–10 range across both phases.

Discussion

Findings indicate that CLT-based instructional materials, particularly self-explanation tasks, improved learning in arrays and strings compared to traditional materials. Combining self-explanation with worked examples also yielded positive results, although differences in baseline (PRE) scores complicate direct comparisons of effectiveness across experimental groups. The significant PRE-to-POST gains in both experimental groups support the role of self-explanation in promoting germane cognitive load by reinforcing and constructing schemas in long-term memory. Post hoc results show that Exp1's POST average exceeded Exp2's, while Exp2, which started lower at PRE, improved substantially by POST. Overall, the study supports applying CLT effects in programming education to reduce extraneous load and enhance learning outcomes.

Conclusion

Both experimental groups showed statistically significant PRE-to-POST improvements (p<0.001), with average grade increases of 1.7 (Exp1) and 2.4 (Exp2). Results suggest that self-explanation-based instructional materials can improve basic programming learning compared to traditional materials, and that combining self-explanation with worked examples can also be effective. Given the significant PRE differences, the study cannot assert that Exp2 outperformed Exp1 overall. The authors argue that self-explanation helps reinforce existing schemas, offers variability in practice, and can be especially valuable in contexts with few available solved examples. They recommend future face-to-face implementations with rigorous control and real-time prompts, which may yield even better learning outcomes.

Limitations

Distance education (COVID-19) limited control over treatment; participants may have used additional study tools beyond provided materials. Real-time self-explanation prompts were not implemented; asynchronous text/audio explanations may not fully replicate in-person guidance. Some copied responses occurred despite checks; emphasis was placed on process over correctness, with automatic feedback provided. The quasi-experimental design using prearranged cohorts limits generalizability. Differences in baseline group performance (PRE) complicate direct comparisons of treatment effectiveness.
