Jonathan FROME
NUS College
Frome, J. (2024, December 3). Content analysis of student AI use in a first-year writing course [Paper presentation]. Higher Education Conference in Singapore (HECS) 2024, National University of Singapore. https://blog.nus.edu.sg/hecs/hecs2024-jfrome/
SUB-THEME
Opportunities from Generative AI
KEYWORDS
Generative AI, undergraduate, AI-assisted writing, content analysis
CATEGORY
Paper Presentation
EXTENDED ABSTRACT
The take-home essay has traditionally served as a reliable proxy for evaluating student writing skills. The rise of Generative AI (GenAI), however, has led to concerns that the take-home essay may no longer be a valid assessment tool. If instructors cannot determine whether a student or GenAI completed an assignment, such assignments may fail to demonstrate whether students have achieved the intended course learning outcomes. This concern is widespread among educators who rely on essays for assessment. For instance, Cardon et al.’s (2023) survey of over 300 communication instructors found broad agreement that GenAI will increase plagiarism, reduce critical thinking, diminish writing skills, and make student assessment difficult. These fears often stem from intuitions about student behaviour, such as the belief that “students just want the tool’s output without engaging in the actual [writing] process” (Chang et al., 2023). The speed with which GenAI can produce relatively high-quality essays has led some to suggest that university writing might shift to a model where “young writers will [try] to craft something meaningful and precise from the rough block of generic text that AI has provided them” (Moore, 2023).
Yet we cannot determine whether these concerns are justified because of a critical gap in the literature: the lack of research on how students actually use GenAI tools. Although instructors have strong intuitions about the effects of allowing students to use GenAI for writing assignments, few of these intuitions are evidence-based. We simply know very little about how students use GenAI in their coursework. While some instructors are beginning to incorporate GenAI into classroom activities, the primary concerns revolve around its use outside the classroom, which could undermine the effectiveness of essay writing for skill-building and assessment.
This study aims to address this knowledge gap by exploring the following questions: How do students actually use GenAI tools for writing assignments when allowed to do so? How does their use relate to the primary concerns expressed by instructors? And what implications does this relationship have for designing college writing courses?
In this study, students in a first-year writing class were allowed to use ChatGPT freely for their coursework, provided they shared links to their chat transcripts. The chats were downloaded, formatted into a spreadsheet, and analysed as pairs of user prompts and ChatGPT outputs. Over 600 pairs of prompts and outputs were collected and coded to understand how students used ChatGPT to complete their assignments. The coding categories were based on academic writing as a process involving discrete activities: reading and analysing sources, generating ideas, drafting, revising content, and revising form. Additional categories were added inductively during the coding process.
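The unit of analysis described above, a user prompt paired with the ChatGPT output it elicited, can be sketched in code. The following is a minimal illustration only, assuming transcripts are exported as ordered lists of role-tagged messages; the message format and example content are assumptions for illustration, not the study’s actual data pipeline.

```python
def pair_turns(messages):
    """Group an alternating chat transcript into (prompt, output) pairs,
    the coding unit used in the content analysis."""
    pairs = []
    pending_prompt = None
    for msg in messages:
        if msg["role"] == "user":
            # A new user turn becomes the pending prompt.
            pending_prompt = msg["content"]
        elif msg["role"] == "assistant" and pending_prompt is not None:
            # Close the pair when the model responds.
            pairs.append((pending_prompt, msg["content"]))
            pending_prompt = None
    return pairs

# Illustrative transcript (invented content, not student data)
transcript = [
    {"role": "user", "content": "What does this sentence from the reading mean?"},
    {"role": "assistant", "content": "The sentence is arguing that..."},
    {"role": "user", "content": "Suggest a clearer phrasing of my paragraph."},
    {"role": "assistant", "content": "Here is one possible revision..."},
]

pairs = pair_turns(transcript)
print(len(pairs))  # 2
```

Each resulting pair is one row of the coding spreadsheet, to which a coder then assigns a category such as reading aid, idea generation, drafting, or revision.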
The most serious concerns among instructors included fears that students would “offload” important writing activities (Watkins, 2024) to GenAI, such as active reading, thesis generation, and initial drafting. Such use could undermine the pedagogical value of assignments. Our findings suggest these concerns are supported only to a limited extent. Students were more likely to use GenAI as a reading aid (e.g., clarifying specific sentences) than as a substitute for active reading (e.g., summarising entire texts). Additionally, students used GenAI more often for revising their drafts than for generating initial drafts.
These preliminary results suggest that in the context of take-home essays, the most salient instructor concerns about GenAI use are not entirely borne out. The stereotype that students will use GenAI to write essays for them was not supported, at least for the observed participants (though different students and assignments might yield different results). The findings also underscore the importance of considering specific course learning outcomes when evaluating the disruptive potential of GenAI.
More fundamentally, this study provides an evidence-based account of how students use GenAI for writing assignments, which is crucial for developing more effective teaching strategies. Understanding student use of GenAI allows educators to design assignments that enhance learning and integrate GenAI into courses in ways that support, rather than undermine, critical thinking and writing skills.
REFERENCES
Cardon, P., Fleischmann, C., Aritz, J., Logemann, M., & Heidewald, J. (2023). The challenges and opportunities of AI-assisted writing: Developing AI literacy for the AI age. Business and Professional Communication Quarterly, 86(3), 257–295. https://doi.org/10.1177/23294906231176517
Chang, D. H., Lin, M. P.-C., Hajian, S., & Wang, Q. Q. (2023). Educational design principles of using AI chatbot that supports self-regulated learning in education: Goal setting, feedback, and personalization. Sustainability, 15(17), 12921. https://doi.org/10.3390/su151712921
Moore, A. (2023, June 25). Is there any point still teaching academic writing in the AI age? Times Higher Education. https://www.timeshighereducation.com/blog/there-any-point-still-teaching-academic-writing-ai-age
Watkins, M. (2024). Automated aid or offloading close reading? Student perspectives on AI reading assistants. https://uen.pressbooks.pub/teachingandgenerativeai/chapter/automated-aid-or-offloading-close-reading-student-perspectives-on-ai-reading-assistants/