Growth in Knowledge of Programming Patterns: A Comparison Study of CS1 vs. CS2 Students

Computer Science

S. Nurollahian, N. Brown, et al.

This study by Sara Nurollahian, Noelle Brown, Anna N. Rafferty, and Eliane Wiese examines how students' understanding of code structure evolves from introductory to intermediate programming courses, revealing persistent gaps that call for targeted instructional strategies.

Introduction
Effective programming involves not only producing correct output but also writing well-structured, readable code that adheres to expert patterns. Well-structured code is more transparent, understandable, and maintainable. However, teaching students to write such code is challenging due to implicit conventions and the difficulty of providing scalable feedback. This study focuses on how students' knowledge of code structure evolves after one semester of instruction without explicit code-structure focus. The research compares CS1 and CS2 students' understanding of two significant code structures: S1 (returning boolean expressions directly instead of using if statements with literals) and S2 (managing unique vs. repeated code within if and else blocks). These structures are language-independent and use fundamental concepts taught early in CS1. Previous research shows that students frequently violate these patterns. The study aims to determine (RQ1) how CS2 students' knowledge differs from that of CS1 students across various tasks (identification, judgment, comprehension, writing, and editing), and pinpoint areas where students struggle. Additionally (RQ2), it explores how non-writing measures of code structure knowledge predict students' code-writing performance. The expectation is to gain insights into persistent structural problems and areas needing specific instructional interventions.
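To make the two structures concrete, here is a minimal illustrative sketch in Java (the language used in the study); these snippets are the summary's own examples, not items from the survey. For S1, the anti-pattern wraps boolean literals in an if/else, while the expert pattern returns the boolean expression directly:

```java
// Illustrative sketch (not from the survey): the S1 anti-pattern vs. the expert pattern.
class S1Example {
    // Anti-pattern: an if/else statement that only returns boolean literals.
    static boolean isPassing(int score) {
        if (score >= 60) {
            return true;
        } else {
            return false;
        }
    }

    // Expert pattern: return the boolean expression directly.
    static boolean isPassingExpert(int score) {
        return score >= 60;
    }
}
```

Both versions behave identically; the expert pattern simply removes the redundant conditional.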
Literature Review
Existing literature emphasizes the importance of well-structured code for maintainability and understandability, yet highlights the challenges in teaching and assessing this aspect of programming. Automated code analyzers offer one assessment avenue but can be difficult to configure for pedagogical needs and often miss pedagogically crucial anti-patterns. Instructors frequently prioritize functionality over structure, potentially reducing students' motivation to improve code structure. Prior research indicates that students frequently violate structures S1 and S2. De Ruvo et al. (2018) found high rates of S1 violations, suggesting potential knowledge gaps regarding return expressions. Whalley et al. (2011) observed high redundancy in student code for S2, indicating a direct translation of problem specifications. Keuning et al. (2017) pointed to a lack of awareness or concern for structural violations. Wiese et al. (2022) explored the role of student disagreement with experts regarding readability in structural violations. Comparison studies of CS1 vs. CS2 student code quality are limited, with Breuker et al. (2011) and Wicklund & Östlund (2022) offering some insights, but lacking the multi-faceted approach of this study. This study addresses these gaps by employing diverse assessment tasks, aiming to disentangle potential causes of structural violations and understand student progress over time.
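As a concrete, hedged illustration of the S2 redundancy these studies describe (a sketch of the general pattern, not an item from any of the cited surveys), the anti-pattern repeats shared code in both branches, as often happens when a specification is translated line by line:

```java
// Illustrative sketch (not from the cited studies): the S2 anti-pattern vs. the expert pattern.
class S2Example {
    // Anti-pattern: the shared println is repeated in both branches, mirroring a
    // specification such as "print the larger value, then print 'done'".
    static void printLarger(int a, int b) {
        if (a > b) {
            System.out.println(a);
            System.out.println("done");
        } else {
            System.out.println(b);
            System.out.println("done");
        }
    }

    // Expert pattern: only the unique code stays inside the branches; the
    // repeated statement is hoisted out after the if/else.
    static void printLargerExpert(int a, int b) {
        if (a > b) {
            System.out.println(a);
        } else {
            System.out.println(b);
        }
        System.out.println("done");
    }
}
```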
Methodology
To investigate student progress, 354 students (149 CS1 and 205 CS2) from three undergraduate CS courses at the University of Utah participated in an online survey. The survey, a modified version of the RICE survey (Wiese et al., 2019), assessed multiple facets of students' code structure knowledge through five tasks related to structures S1 and S2:

1. **Identification of Expert Pattern:** Students chose the best-styled code block from several options.
2. **Judgment of Readability:** Students selected the most readable code block.
3. **Code Comprehension:** Students predicted the output of given code snippets (expert pattern and anti-pattern).
4. **Code Writing:** Students wrote code based on given specifications.
5. **Code Editing:** Students edited anti-pattern code to improve style.

The survey was conducted in Java, the primary language of the courses. The order of tasks was randomized to minimize learning effects. Students received extra credit for participation. The writing and editing tasks were assessed for both functionality and adherence to expert patterns, with qualitative coding for finer-grained analysis. Data analysis included Chi-square tests, logistic regressions, and other relevant statistical methods to compare CS1 and CS2 students' performance and to examine predictors of code-writing performance.
Key Findings
The study yielded several key findings.

**RQ1: CS1 vs. CS2 Performance:**
* CS2 students significantly outperformed CS1 students in identifying expert patterns, judging readability, and editing for both S1 and S2.
* However, CS2 students showed superior code writing only for S1 and superior comprehension only for S2.
* Overall student performance was far below ceiling across most tasks, except for comprehension of the S1 expert pattern, indicating a significant need for further instructional support.
* For S1, a higher percentage of students returned boolean literals using if statements rather than using the expert pattern of returning boolean expressions.
* For S2, a significant portion of students failed to write functional code, often due to difficulties in handling string lengths.
* A significant number of students failed to identify or edit S2 anti-patterns.

**RQ2: Predictors of Writing Performance:**
* Successful code editing significantly predicted writing structure for both S1 and S2.
* For S1, readability judgment was also a significant predictor, along with student level.
* For S2, comprehension of the expert pattern was a significant predictor, in addition to editing.
* Exploratory analysis for S1 showed that a combined score across the identification, readability judgment, and comprehension tasks significantly predicted writing structure; this was not observed for S2.

A large proportion of students who could not determine the output of nested-if statements nevertheless selected that code as the most readable for S2. These findings highlight the complexity of code comprehension and its relation to readability. The differences in performance between S1 and S2 suggest that different approaches to teaching these structures may be necessary. For S1, a higher proportion of students selected anti-patterns as more readable, indicating a possible gap in understanding the advantages of the expert pattern. For S2, the challenges were more pronounced in writing and editing, pointing towards potential difficulties in managing code logic and refactoring. The study provides evidence for task-dependent skills, where success in one aspect does not necessarily translate to others: for example, identifying the expert pattern did not guarantee selecting it as the most readable.
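To make the nested-if and string-length findings concrete, the following hypothetical Java sketch (the survey items themselves are not reproduced in this summary, so the task and names are assumed) shows an S2-style anti-pattern with nested ifs and a repeated statement, next to a flatter version that keeps the shared code in one place:

```java
// Hypothetical sketch (not the actual survey item): an S2-style anti-pattern
// with nested ifs and a statement repeated in every branch.
class S2EditingSketch {
    static String compareLengths(String s1, String s2) {
        if (s1.length() > s2.length()) {
            System.out.println("lengths compared");
            return "first";
        } else {
            if (s1.length() < s2.length()) {
                System.out.println("lengths compared");
                return "second";
            } else {
                System.out.println("lengths compared");
                return "same";
            }
        }
    }

    // A flatter expert-style version: the shared statement appears once, and the
    // unique outcomes sit in an else-if chain instead of nested ifs.
    static String compareLengthsExpert(String s1, String s2) {
        System.out.println("lengths compared");
        if (s1.length() > s2.length()) {
            return "first";
        } else if (s1.length() < s2.length()) {
            return "second";
        }
        return "same";
    }
}
```

Tracing the nested version requires keeping track of which branch is active at each level, which is consistent with the finding that students who could not determine the output of such code still sometimes judged it the most readable.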
Discussion
This study's findings address the research questions by showing distinct differences in CS1 and CS2 students’ code structure knowledge across various tasks. The consistently low performance across tasks, even for CS2 students, reveals a substantial gap in students' understanding of and ability to apply expert patterns. The varying patterns of improvement across structures S1 and S2 suggest that a uniform intervention might not be effective. The relationship between code editing success and writing performance underlines the importance of incorporating code review and refactoring activities into the curriculum. The findings regarding readability and comprehension as predictors of writing ability provide insights into the cognitive processes involved in writing well-structured code, suggesting that addressing comprehension gaps and fostering agreement on readability standards are crucial aspects of instruction. The results also challenge the hypothesis that lack of motivation alone fully explains structural violations; students struggled even when explicitly asked to edit code for style. The study's findings emphasize the need for differentiated instruction targeting specific code structures and task-dependent skills. For example, students might benefit from code tracing activities for the S1 pattern, while pre-coding planning and refactoring exercises may be more appropriate for S2.
Conclusion
This study provides valuable insights into the growth of students' code structure knowledge across introductory and intermediate programming courses. The findings reveal significant gaps in understanding and applying expert code patterns, despite some improvement between CS1 and CS2. The varied performance across different tasks and structures highlights the need for more targeted instructional approaches. Future research could investigate the effectiveness of specific interventions designed to address the identified weaknesses, potentially tailoring instruction based on individual student needs and preferred learning styles. Further research focusing on larger samples across different institutions and programming languages would strengthen the generalizability of the findings.
Limitations
The study's limitations include the use of extra credit to incentivize participation, which may have affected the effort some students invested. The lack of information on students' prior programming experience could affect the interpretation of results. The survey setting, while controlling for various factors, might not fully capture the authenticity of real-world programming tasks. The tasks may also have imposed different cognitive loads on students at different skill levels, despite the instructors confirming that the tasks were appropriate. Finally, the lack of external validation for the survey and the reliance on instructor agreement regarding expert patterns could affect the overall findings.