Improving Assessment of Programming Pattern Knowledge through Code Editing and Revision

Computer Science

S. Nurollahian, A. N. Rafferty, et al.

This study by Sara Nurollahian, Anna N. Rafferty, and Eliane Wiese examines how well code-writing tasks measure students' knowledge of programming patterns and anti-patterns. The findings show that judging knowledge from initial code writing alone can underestimate what students are capable of: many students who first write non-expert code successfully revise it to expert structure when prompted. Combining code writing with editing, revising, and identification tasks yields a richer picture of student knowledge.

Abstract
How well do code-writing tasks measure students’ knowledge of programming patterns and anti-patterns? How can we assess this knowledge more accurately? To explore these questions, we surveyed 328 intermediate CS students and measured their performance on different types of tasks, including writing code, editing someone else’s code, and, if applicable, revising their own alternatively-structured code. Our tasks targeted returning a Boolean expression and using unique code within an if and else. We found that code writing sometimes underestimated student knowledge. For tasks targeting returning a Boolean expression, over 55% of students who initially wrote with non-expert structure successfully revised to expert structure when prompted – even though the prompt did not include guidance on how to improve their code. Further, over 25% of students who initially wrote non-expert code could properly edit someone else’s non-expert code to expert structure. These results show that non-expert code is not a reliable indicator of deep misconceptions about the structure of expert code. Finally, although code writing is correlated with code editing, the relationship is weak: a model with code writing as the sole predictor of code editing explains less than 15% of the variance. Model accuracy improves when we include additional predictors that reflect other facets of knowledge, namely the identification of expert code and selection of expert code as more readable than non-expert code. Together, these results indicate that a combination of code writing, revising, editing, and identification tasks can provide a more accurate assessment of student knowledge of programming patterns than code writing alone.
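To make the targeted patterns concrete, the sketch below is a hypothetical illustration, not taken from the study's materials, of the non-expert and expert structures for the two task types. It is written in Java; the method names, condition, and messages are invented for this example.

class PatternExamples {

    // Pattern 1: returning a Boolean expression.
    // Non-expert structure: an if/else that returns literal true/false.
    static boolean isPassingNonExpert(int score) {
        if (score >= 60) {
            return true;
        } else {
            return false;
        }
    }

    // Expert structure: return the Boolean expression directly.
    static boolean isPassingExpert(int score) {
        return score >= 60;
    }

    // Pattern 2: keeping only unique code inside an if and else.
    // Non-expert structure: the shared greeting is duplicated in both branches.
    static void greetNonExpert(boolean isMember) {
        if (isMember) {
            System.out.println("Welcome!");
            System.out.println("Members get a discount.");
        } else {
            System.out.println("Welcome!");
        }
    }

    // Expert structure: shared code is hoisted out of the branches,
    // so only the branch-specific code remains inside the if.
    static void greetExpert(boolean isMember) {
        System.out.println("Welcome!");
        if (isMember) {
            System.out.println("Members get a discount.");
        }
    }
}

In both pairs the two versions behave identically; the study's tasks concern whether students write, recognize, and can produce the more concise expert structure, not program correctness.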
Publisher
2023 IEEE/ACM 45th International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET)
Published On
May 15, 2023
Authors
Sara Nurollahian, Anna N. Rafferty, Eliane Wiese
Tags
code-writing
students
programming patterns
anti-patterns
knowledge assessment
code revision
performance evaluation