
ChatGPT and the digitisation of writing
X. Zhao, A. Cox, L. Cai
This study by Xin Zhao, Andrew Cox, and Liang Cai examines how students in higher education use ChatGPT in their writing processes. By exploring the interplay between digital tools, ethical concerns, and individual writing challenges, the research opens up discussions about the future of AI literacy in academia.
~3 min • Beginner • English
Introduction
The paper situates the rapid uptake of ChatGPT within the longer trajectory of AI in education and the broader digitisation of writing (e.g., search, recommendation, transcription, translation, grammar checking, plagiarism detection). The launch of ChatGPT in November 2022 disrupted gradual change, creating excitement and concern, especially around academic integrity. The study focuses on how generative AI is changing how students write, because writing is central to learning and assessment. The research questions are: 1) How did postgraduate students approach using ChatGPT and other digital writing tools for writing tasks in the summer of 2023? 2) What do students consider the benefits and problems of ChatGPT's use? 3) What are the strengths and weaknesses in student generative AI literacy?
Literature Review
The digitisation of writing: Writing has undergone a long-term digitisation through word processors, spelling/grammar/style checkers, connectivity, and now generative AI, affecting largely mental processes that are hard to observe. A growing ecosystem of AI-powered writing assistants has emerged. Godwin-Jones (2022) identifies four types: automatic writing evaluation (AWE), automatic written corrective feedback (AWCF), translation tools, and text generation tools. Students also use tools for search, summarisation (e.g., Scholarcy, iris.ai), literature reviewing (e.g., ResearchRabbit, Connected Papers), and referencing (e.g., EndNote, Zotero). Writing is iterative (prewriting, planning, drafting, revising, editing), and tools are used across stages (e.g., rephrasing tools like Wordtune to break blocks; translation tools across reading and drafting). Use is complex and task-specific.
Generative AI: ChatGPT gained users rapidly and sparked controversy in education, unlike earlier tools tacitly accepted (e.g., Grammarly, Google Translate). It can summarise, outline, draft in different styles/lengths, and check grammar/spelling, but raises informational and ethical issues: hallucinations, lack of sources, outdated knowledge cutoffs; bias and lack of transparency; potential for misinformation and content overload; unequal access; IP concerns; exploitative data labelling labor; environmental impact (summarised in Table 1).
AI literacy: AI literacy has been framed as competencies to evaluate AI, collaborate with it, and use it effectively (Long and Magerko, 2020), including understanding what AI is/does, how it works, how it should be used (ethics), and perceptions. Ridley and Pawlick-Potts (2021) emphasise algorithmic literacy. Much prior work predates ChatGPT. Given generative AI’s ability to create content from short prompts, the authors propose an updated generative AI literacy model with five headings: 1) Pragmatic understanding (tool selection, effective use including prompt strategies and iteration, critical interpretation of outputs for accuracy, currency, citeability, and bias), 2) Safety understanding (privacy, impacts on learning and social connection), 3) Reflective understanding (assessing and managing impacts in education), 4) Socio-ethical understanding (IPR, misinformation/disinformation, exploitative creation processes, equity of access, power of Big Tech), and 5) Contextual understanding (appropriate use in context, transparency about use).
Generative AI in education: Early studies show students generally positive and quick to adopt generative AI for brainstorming, personalised assistance, summarising, and literature support, while noting concerns about accuracy, transparency, privacy, over-reliance, equity, and employment impacts. Staff express more concerns (cheating, reduced critical thinking/creativity, diminished writing skills, authenticity/voice, agency), yet acknowledge workplace relevance and early-stage benefits.
Methodology
Qualitative interpretivist design using semi-structured interviews and observation. Participants: 23 students at a British university (diverse nationalities: UK, USA, China, Japan, Saudi Arabia, India, Thailand, Greece, Malaysia; postgraduate taught and research), all undertaking academic tasks (e.g., dissertations/theses). Recruitment targeted students using digital tools for writing. Data collection: Summer 2023, before the university issued its AI policy. Procedure: Participants demonstrated their academic writing process and explained tool use; interviews queried tools used, experiences with ChatGPT, and concerns (privacy, inclusivity, accessibility, bias, ethics, impact on education). Analysis: Thematic Analysis (Braun & Clarke, 2006). Ethics: University of Sheffield approval; informed consent; anonymisation.
Key Findings
- Students routinely used a wide range of digital tools (many AI-enabled) across the writing process, selecting specific tools for specific tasks (e.g., Grammarly, Quillbot, Wordtune, translation tools, referencing managers, summarisation tools).
- ChatGPT was used at multiple stages, though for many it was early days and use was limited. Reported uses included: understanding complex concepts and assignment briefs; summarising readings; suggesting structures/outlines; overcoming writer’s block and getting words on the page; rephrasing and grammar checking; occasionally searching for literature; support for non-academic writing (e.g., job applications), coding, and specific conversions (e.g., to LaTeX).
- Central unique value: clarifying assignment requirements and aligning structure/content to the brief (students provided the brief and asked for explanations or checks against requirements).
- Use patterns were highly individualised, aligned to perceived personal weaknesses (e.g., generating analogies for comprehension; iterative summarisation of full articles into bullet points; prompt-based paraphrasing and feedback to boost argument building).
- Interaction style sometimes resembled human dialogue, offering immediate responses and reducing need to ask tutors/peers, raising questions about impacts on the social dimension of learning.
- Perceived benefits: increased efficiency/productivity; time saving; stress reduction.
- Main concerns: informational unreliability and need to fact-check; fear of plagiarism or AI-detection accusations; loss of authorial voice or text that "sounds auto-generated"; dependency on tools and reduced independent thinking/motivation; some privacy awareness.
- Limited recognition of broader societal/ethical issues (bias, sustainability, exploitative labor) even when prompted; participants desired clearer institutional policies and guidance.
- Generative AI literacy (Table 3): Students could select tools for tasks and applied them to individual needs; prompt engineering generally not sophisticated; awareness of accuracy issues but less about bias; some privacy awareness and reflection on impacts on learning/social aspects; weak socio-ethical understanding; strong desire for contextual guidance from institutions.
Discussion
Findings show ChatGPT entering an already digitised and tool-rich writing ecosystem. At the time of study, ChatGPT complemented rather than replaced other tools, with a distinctive role in clarifying briefs, aligning output to requirements, and structuring ideas.

Students framed usage in terms of efficiency, time saving, and stress reduction, often justifying use as not changing learning fundamentally but speeding it up—potentially underestimating learning trade-offs. Use was reflexive and iterative but individualized, likely influenced by limited institutional guidance, leading to varied boundaries around appropriate use. Major worries centered on information unreliability, plagiarism/AI detection risk, and dependence on technology, with some students noting potential negative impacts on independent thinking. Societal and ethical concerns were rarely foregrounded, indicating a gap in socio-ethical literacy.

The prominence of ChatGPT has made the digitisation of writing more visible and controversial, creating an opportunity for educators to engage students in critical discussions about digital writing practices and to develop AI literacy systematically. Using the proposed generative AI literacy framework, strengths include pragmatic tool selection and emerging reflective awareness; weaknesses include limited prompt sophistication, underdeveloped bias awareness, and minimal appreciation of broader social impacts. Institutions and instructors can leverage these insights to provide clearer policies, explicit guidance on appropriate use, and targeted literacy development.
Conclusion
This study provides early empirical evidence on how students integrate ChatGPT within a broader patchwork of digital writing tools and proposes a generative AI literacy framework to assess capabilities and guide support. Contributions include: 1) documenting nuanced, individualized patterns of generative AI use across writing stages; 2) identifying perceived benefits (efficiency, stress reduction) and primary concerns (accuracy, plagiarism detection, dependence); 3) diagnosing strengths and gaps in generative AI literacy (tool selection vs. prompt sophistication, bias awareness, and socio-ethical understanding); and 4) offering a framework to inform institutional policy, educator development, and student support. Future research should examine impacts of different model versions and tools, track evolving practices as policies and technologies mature, and evaluate interventions designed to strengthen pragmatic, reflective, and socio-ethical dimensions of AI literacy.
Limitations
- Tool versions: Most participants used the free version of ChatGPT (3.5); only a few used the paid version (4). The study did not examine differences between versions in detail.
- Tool scope: ChatGPT was the main generative AI tool in use at the time; other tools’ roles relative to ChatGPT were not fully explored as technologies rapidly evolve.
- Context and sample: Single institutional context with 23 participants limits generalisability; reliance on self-reported practices alongside observations may miss invisible cognitive aspects of writing.
- Incomplete societal lens: Participants’ limited awareness of socio-ethical issues constrained depth of analysis of these dimensions.