L2 writer engagement with automated written corrective feedback provided by ChatGPT: A mixed-method multiple case study

Linguistics and Languages

D. Yan and S. Zhang

This mixed-method study by Da Yan and Shuxian Zhang explores how L2 writers interact with ChatGPT as a provider of automated written corrective feedback. Discover how language proficiency, technological competence, and affect shape engagement in this innovative learning environment.

Introduction
Student engagement is crucial for effective learning, particularly in second language (L2) writing, where feedback plays a vital role. While research on written corrective feedback (WCF) has shifted towards understanding how students process feedback, insights that span multiple dimensions of engagement remain limited. Automated WCF (AWCF) has gained popularity, and recent advances in generative artificial intelligence (GAI) tools such as ChatGPT could revolutionize AWCF in L2 pedagogy. However, student engagement with this interactive learning environment is under-researched. This study addresses that gap by investigating L2 writers’ behavioral, cognitive, and affective engagement with ChatGPT-generated AWCF, aiming to provide conceptual and pedagogical insights for educators and researchers.
Literature Review
The literature review examines the history and impact of AWCF. While AWCF offers advantages such as a reduced teacher burden and greater student involvement in revision, its effectiveness relative to human feedback remains debated. Traditional corpus-based AWCF systems are often criticized for the limited accuracy and comprehensiveness of their feedback. AI-based systems such as Grammarly and QuillBot, however, have shown significant improvement over corpus-based systems. ChatGPT, a conversational GAI chatbot, demonstrates further potential for AWCF: it outperforms existing AI solutions in grammatical error correction and offers interactive feedback generation and iterative responses to user inquiries. Despite this potential, drawbacks are noted, including hallucination and the higher AI literacy it demands of learners. The study builds on existing frameworks of student engagement (behavioral, affective, and cognitive) and adapts them to the context of GAI-generated feedback, emphasizing the role of technological competence alongside language proficiency in shaping engagement.
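As a concrete illustration of the interactive feedback elicitation described above, the sketch below scripts a comparable corrective-feedback request with OpenAI's Python SDK. This is an illustration only: the study's participants worked in the ChatGPT chat interface, not through the API, and the model name, prompt wording, and sample sentence here are invented, not drawn from the study.

    # Illustrative sketch: eliciting corrective feedback programmatically.
    # Assumptions: model name, prompt text, and draft sentence are invented.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    draft = "Yesterday I have went to the library for prepare my essay."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=[
            {"role": "system",
             "content": "You are an L2 writing tutor. Identify grammatical "
                        "errors, explain each one briefly, and suggest a "
                        "corrected version. Do not rewrite beyond the errors."},
            {"role": "user",
             "content": f"Please give corrective feedback on: {draft}"},
        ],
    )
    print(response.choices[0].message.content)

The iterative responses noted above would simply extend the messages list with the model's reply and a new user turn, continuing the dialogue.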
Methodology
This study employed a mixed-method multiple case study design. Four L2 writers with varying language proficiency and technological competence were purposefully selected from an undergraduate EFL program. Data were collected over five weeks through multiple methods: weekly reflective learning journals (covering prompt-writing experiences, feedback processing, and revision), classroom observations (coded for metacognitive and cognitive learning strategies using a scheme adapted from Sonnenberg and Bannert (2015)), and post-session interviews. Quantitative analysis involved quantified document analysis of the learning journals and worksheets, measuring time spent, prompt frequency, feedback retention, and revision accuracy. Lag sequential analysis (LSA) in GSEQ 5.1 was used to examine patterns of metacognitive and cognitive strategy use during feedback processing. Qualitative interview data underwent thematic analysis. Data triangulation was used to ensure trustworthiness and validity, and Cohen's kappa and Fleiss' kappa indicated good inter-rater reliability for the coding.
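For readers unfamiliar with the statistics named above, the sketch below illustrates two of them in Python: Cohen's kappa for two coders, and the lag-1 adjusted residuals that lag sequential analysis reports (the core statistic in GSEQ, following Bakeman and Gottman). The strategy codes and event stream are invented for illustration; the study itself used GSEQ 5.1, not this script.

    # Minimal sketch, with invented data: inter-rater reliability and
    # lag-1 sequential analysis via adjusted residuals.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    # Two coders labeling the same 10 observed events (invented codes).
    coder_a = ["plan", "monitor", "elaborate", "monitor", "evaluate",
               "plan", "elaborate", "monitor", "evaluate", "plan"]
    coder_b = ["plan", "monitor", "elaborate", "orient", "evaluate",
               "plan", "elaborate", "monitor", "evaluate", "plan"]
    print(f"Cohen's kappa: {cohen_kappa_score(coder_a, coder_b):.2f}")

    # Lag-1 sequential analysis on one coded event stream.
    stream = coder_a
    codes = sorted(set(stream))
    idx = {c: i for i, c in enumerate(codes)}
    k = len(codes)

    # Observed transition counts: rows = current code, cols = next code.
    obs = np.zeros((k, k))
    for prev, nxt in zip(stream, stream[1:]):
        obs[idx[prev], idx[nxt]] += 1

    n = obs.sum()
    row = obs.sum(axis=1, keepdims=True)
    col = obs.sum(axis=0, keepdims=True)
    exp = row @ col / n  # expected counts if transitions were random

    # Adjusted residuals; |z| > 1.96 flags a transition occurring
    # significantly more (or less) often than chance at p < .05.
    z = (obs - exp) / np.sqrt(exp * (1 - row / n) * (1 - col / n))
    for i, a in enumerate(codes):
        for j, b in enumerate(codes):
            if abs(z[i, j]) > 1.96:
                print(f"{a} -> {b}: z = {z[i, j]:.2f}")

In this toy stream, the evaluate-to-plan transition comes out significant, which is the kind of sequential pattern LSA is designed to surface in learners' strategy use.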
Key Findings
Behavioral engagement revealed active involvement in feedback-seeking and revision. High-proficiency learners displayed more sophisticated prompt refinement strategies, while high technological competence was linked to more feedback elicitation attempts. In revision operations, high-proficiency learners achieved higher correct revision rates and used more substitutions, whereas low-proficiency learners showed higher rates of incorrect revisions and deletions. Cognitive engagement analysis via LSA revealed varied metacognitive regulatory skills: high-proficiency learners effectively integrated metacognitive monitoring and evaluation with cognitive processes, while low-proficiency learners lacked this integration. Interestingly, high technological competence seemed to compensate for weaker metacognitive skills by facilitating intensive interaction with ChatGPT. Affective engagement was largely positive, with participants appreciating ChatGPT's reliability and finding its negative feedback easier to accept than feedback from humans. However, participants noted the time and mental effort that seeking and processing feedback demanded.
Discussion
The findings highlight the influence of language proficiency and technological competence on engagement with ChatGPT-generated AWCF. Higher proficiency correlated with more sophisticated feedback-seeking strategies and higher-quality revisions, while technological competence shaped the intensity of interaction with ChatGPT. The findings point to a need for metacognitive instruction to support effective feedback processing, particularly for lower-proficiency learners. The positive affective responses demonstrate ChatGPT's potential to create an engaging learning environment, although the mental effort required should be addressed through pedagogical strategies. Notably, the correct revision rate observed here exceeds those reported in earlier studies of Grammarly, likely owing to ChatGPT's superior grammatical error correction and its interactive features.
Conclusion
This study demonstrates ChatGPT's potential as a powerful and engaging AWCF tool. However, effective implementation requires teacher scaffolding for prompt writing and feedback processing, fostering students' metacognitive skills, and a balanced perspective on GAI's role in education, recognizing limitations in current students' AI literacy. Future research should focus on larger-scale studies, longitudinal investigations, inclusion of peer and teacher feedback, and exploration of diverse writing genres.
Limitations
The small sample size limits the generalizability of the findings. The five-week duration and self-regulated learning approach might not fully capture long-term effects or the impact of collaborative learning. The study focused solely on ChatGPT-generated feedback, excluding peer or teacher feedback. Future research should address these limitations.