Introduction
Programming education is increasingly important in today's technological landscape, equipping students with skills for efficient task management and preparing them for a data-driven workforce. Yet programming education also presents persistent challenges, making it essential to explore effective learning strategies. The rise of AI, particularly ChatGPT, offers a potential means of addressing these challenges: ChatGPT supports programming tasks through prompts, which guide the interaction and elicit relevant information. While the potential of ChatGPT for programming learning is widely recognized, empirical research on how specifically designed prompts affect learning outcomes remains limited. This study addresses that gap by investigating the differential effects of prompt-based learning (PbL) versus unprompted learning (UL) on students' programming behaviors, interaction quality, and perceptions of ChatGPT.
Literature Review
The literature highlights the growing importance of programming education amid AI's rapid development and the many benefits students derive from AI resources. ChatGPT, with its natural language processing capabilities, has emerged as a potentially valuable tool in programming education, offering personalized explanations and assistance. Its effectiveness, however, varies with how it is used. Studies demonstrate ChatGPT's potential for assisting with programming tasks such as code generation, debugging, and explanation, and other work highlights its adaptability and accessibility for students with varying programming skills. Alongside these benefits, concerns persist about overreliance and superficial learning. Prompt-based learning (PbL) has therefore emerged as a promising method for improving interaction with large language models and guiding students toward more effective learning.
Methodology
This quasi-experimental study employed a mixed-methods approach with 30 college students randomly assigned to either a prompt-based learning (PbL) or an unprompted learning (UL) group. Both groups followed the same curriculum using the ChatGPT Next platform with the gpt-3.5-turbo model. The PbL group received structured prompts designed around the "ChatGPT Prompt Engineering for Developers" course, incorporating techniques such as delimiters and structured output requests. Data collection comprised computer screen-capture videos (recorded during the final 90-minute project session), ChatGPT platform logs, and pre- and post-intervention surveys. Data analysis included clickstream analysis of the video data, lag-sequential analysis of programming behaviors, coding of the ChatGPT interaction logs with Ouyang and Dai's knowledge-inquiry framework, and independent t-tests and descriptive analysis of the survey data, which measured perceived usefulness, ease of use, behavioral intention, and attitude toward ChatGPT, informed by the Technology Acceptance Model (TAM).
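To make the prompt design concrete, the sketch below shows what a delimiter-based, structured-output prompt in the style of the "ChatGPT Prompt Engineering for Developers" course might look like when sent to gpt-3.5-turbo. It is an illustrative example only, not the study's actual prompt material; it assumes the openai Python client (v1+) and an invented code snippet.

```python
# Illustrative sketch of a delimiter + structured-output prompt, in the style taught by
# the "ChatGPT Prompt Engineering for Developers" course. The wording and helper code
# below are assumptions for demonstration, not the study's actual prompt material.
from openai import OpenAI  # assumes the openai>=1.0 Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

student_code = """
def average(nums):
    return sum(nums) / len(nums)
"""

prompt = f"""
You will be given a Python snippet delimited by <code> tags.
Work through it step by step before answering:
1. Explain what the code does.
2. Identify edge cases or bugs (for example, empty input).
3. Suggest a corrected version.

Return your answer as JSON with the keys "explanation", "issues", and "fixed_code".

<code>{student_code}</code>
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the model reported in the study
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
print(response.choices[0].message.content)
```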
Key Findings
The study revealed significant differences between the PbL and UL groups.

**Programming Behaviors:** Lag-sequential analysis showed that PbL students predominantly coded in Python and debugged to validate their work (CP→DP, Yule's Q = 0.83) and copied and pasted code from ChatGPT into PyCharm before debugging (CPC→DP, Yule's Q = 0.74). UL students, by contrast, primarily pasted Python code from PyCharm into ChatGPT and asked new questions (PPC→ANQ, Yule's Q = 0.94) and frequently copied code from ChatGPT into PyCharm and adjusted it (CPC→CP, Yule's Q = 0.90).

**Interaction Quality:** Epistemic network analysis (ENA) of the ChatGPT interactions showed that PbL students independently posed more medium-level questions and received more accurate feedback than their UL counterparts. UL students engaged in more superficial-level interactions, though they also received accurate feedback. A Mann-Whitney test indicated a statistically significant difference in communicative patterns between the PbL and UL groups (U = 54.00, p = 0.00, r = 0.58).

**Perceptions:** The pre-test showed that UL students held more favorable initial perceptions of ChatGPT's ease of use. The post-test revealed that PbL students reported higher mean scores for perceived usefulness, ease of use, behavioral intention to use, and attitude toward using ChatGPT, with a statistically significant difference for attitude (t(28) = -2.26, p = 0.03). Students in the PbL group found structured output and delimiters most helpful for organizing solutions and improving learning efficiency, and they also highly valued prompts that encouraged the model to work through problems thoughtfully.
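For readers unfamiliar with the transition statistics above, Yule's Q is a standard association measure in lag-sequential analysis, computed from a 2×2 table of whether one behavior is or is not followed by another. A minimal sketch with hypothetical counts (the study's raw transition tables are not reproduced here):

```python
# Minimal sketch of Yule's Q for a behavior transition such as CP -> DP.
# The counts below are hypothetical and chosen only to illustrate the calculation.

def yules_q(a: int, b: int, c: int, d: int) -> float:
    """Yule's Q for a 2x2 transition table.

    a: CP followed by DP
    b: CP followed by any other behavior
    c: any other behavior followed by DP
    d: transitions involving neither CP nor DP
    """
    return (a * d - b * c) / (a * d + b * c)

# Hypothetical counts yielding a value comparable to those reported for the PbL group.
print(round(yules_q(a=30, b=10, c=14, d=50), 2))  # 0.83
```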
Discussion
The findings support the hypothesis that prompt-based learning enhances the effectiveness of ChatGPT in programming education. PbL fostered a more active and iterative learning process, aligning with constructivist learning principles. The deeper engagement with code, feedback, and problem-solving within PbL suggests a more meaningful and effective learning experience than the more superficial approach observed in the UL group. The positive impact on perceptions aligns with the Technology Acceptance Model, indicating that the structured prompts helped bridge the gap between learners' expectations and ChatGPT's actual performance. The improved attitudes towards using ChatGPT suggest that the PbL approach makes the tool more user-friendly and effective. The use of structured prompts guided students to focus on essential skills and knowledge.
Conclusion
This study demonstrates the significant impact of structured prompts on enhancing the learning experience with ChatGPT in programming education. The results suggest a more effective and engaging learning environment with improved learning outcomes and positive attitudes toward the AI tool. Future research should investigate the effectiveness of different prompt variations and explore the integration of prompt-based methodologies at various stages of programming education. Longitudinal studies to track the long-term effects of prompt usage are also necessary.
Limitations
The study's generalizability is limited by the single prompt version used and by the specific context of the course. The relatively short duration of the intervention may not capture the long-term impact of prompt-based learning, and the small sample size further limits generalization to other populations and contexts.