Unessay—Gateway to Future Higher Education (HE) Assessments in an AI World?

Chitra SABAPATHY
Centre for English Language Communication (CELC)

elccs@nus.edu.sg

 

Sabapathy, C. (2023). Unessay—Gateway to future higher education (HE) assessments in an AI world? [Paper presentation]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/unessay-gateway-to-future-higher-education-he-assessments-in-an-ai-world/

SUB-THEME

AI and Education 

 

KEYWORDS

Unessay, higher education, AI, student autonomy, multimedia, oral communication

 

CATEGORY

Paper Presentation 

 

ABSTRACT

The rapid advancement of generative artificial intelligence (AI) has led educators to discover that their assessments (e.g., Kung et al., 2022) and pedagogies are vulnerable to it. However, AI should not be viewed solely as a means of facilitating cheating, particularly since tools like ChatGPT have become integrated into students’ lives. Instead of focusing on prohibitions or strictly monitoring for academic dishonesty, it would be beneficial to explore ways to embrace and utilise these technologies in education (Dawson, 2020) and to design assessments that represent the “future realities” of respective disciplines. This presentation highlights the potential benefits of adopting the “unessay” as an alternative pedagogical approach in higher education. The unessay offers students a degree of freedom, necessitates ownership, and fuels passion (Jakopak et al., 2019), creativity, critical thinking, and interdisciplinary understanding as individuals articulate their ideas, beliefs, and identities. Students are afforded the autonomy to select their own topic within a specific subject area and to determine their preferred method of presentation, provided that it is both captivating and impactful (O’Donnell, 2012). By granting students autonomy, fostering creativity, and encouraging critical thinking beyond conventional academic norms, the unessay not only equips them with the essential skills required to navigate an AI-driven future but also offers them the freedom to explore alternative modes of expression (Nave, 2021). This approach engenders motivation and investment in their academic work. It also compels students to consider the intended audience, choose appropriate rhetorical strategies, and synthesise information effectively.
This is evidenced in previous studies of how students have used the unessay in unique ways in history classes (Guiliano, 2022; Irwin, 2022; Neuhaus, 2022), the histology of organ cells (Wood & Stringham, 2022), computer programming (Aycock et al., 2019), writing (Jakopak et al., 2019; Sullivan, 2015), and applied cognitive psychology (Goodman, 2022). In CS2101 “Effective Communication for Computing Professionals”, the assignment task encouraged students to apply Gibbs’ Reflective Cycle: describing unique experiences, reflecting on feelings, evaluating and analysing those experiences, and concluding with a future plan. The assignment departed from traditional written reflection essays, allowing students to use AI and innovative multimedia formats such as videos, podcasts, and infographics to express their insights and learning. The effectiveness of the “unessay” strategy was assessed through an anonymous end-of-course survey, which gathered quantitative and qualitative feedback from approximately 50 students enrolled in the course as well as from the tutors who taught on it. The data provided insights into how students engaged with the “unessay” strategy, their perceptions of its effectiveness, and the tutors’ perceptions of using this strategy in the course. This presentation aims to facilitate discussion and reflection on the unessay concept and how it could be integrated into higher education (HE) assessment, serving as a potential gateway to a more diverse and inclusive assessment framework.

 

REFERENCES

Aycock, J., Wright, H., Hildebrandt, J., Kenny, D., Lefebvre, N., Lin, M., Mamaclay, M., Sayson, S., Stewart, A., & Yuen, A. (2019). Adapting the “Unessay” for use in computer science. Proceedings of the 24th Western Canadian Conference on Computing Education, 1–6.

Dawson, P. (2020). Cognitive offloading and assessment. In M. Bearman, P. Dawson, R. Ajjawi, J. Tai, & D. Boud (Eds.), Re-imagining University Assessment in a Digital World (pp. 37-48). Springer International Publishing.

Goodman, S. G. (2022). Just as long as it’s not an essay: The unessay as a tool for engagement in a cognitive psychology course. Teaching of Psychology, 0(0), 1–5. https://doi.org/10.1177/00986283221110542

Guiliano, J. (2022). The unessay as native-centered history and pedagogy. Teaching History: A Journal of Methods, 47(1), 6-12. https://doi.org/10.33043/TH.47.1.6-12

Irwin, R. (2022). The un-essay, and teaching in a time of monsters. Teaching History: A Journal of Methods, 47(1), 13-25. https://doi.org/10.33043/TH.47.1.13-25

Jakopak, R. P., Monteith, K. L., & Merkle, B. G. (2019). Writing science: Improving understanding and communication skills with the “unessay.” Bulletin of the Ecological Society of America, 100(4), 1–5. https://doi.org/10.1002/bes2.1610

Nave, L. (2021). Universal design for learning UDL in online environments: The HOW of learning. Journal of Developmental Education, 44(3), 34-35. http://www.jstor.org/stable/45381118

Neuhaus, J. (2022). Introduction to the Fall 2022 special issue: Using the unessay to teach history. Teaching History: A Journal of Methods, 47(1), 2-5. https://doi.org/10.33043/TH.47.1.2-5

O’Donnell, D. P. (2012, September 4). The unessay. Daniel Paul O’Donnell. http://people.uleth.ca/~daniel.odonnell/Teaching/the-unessay

Sullivan, P. (2015). The UnEssay: Making room for creativity in the composition classroom. College Composition and Communication, 67(1), 6-34. http://www.jstor.org/stable/24633867

Wood, J. L., & Stringham, N. (2022). The UnEssay project as an enriching alternative to practical exams in pre-professional and graduate education. Journal of Biological Education, 1–8. https://doi.org/10.1080/00219266.2022.2047098

 

The Other Benefits of Making AI-resistant Assessments

Olivier LEFEBVRE1,2
1Department of Civil and Environmental Engineering
2NUS Teaching Academy

ceelop@nus.edu.sg

 

Lefebvre, O. (2023). The other benefits of making AI-resistant assessments [Paper presentation]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/the-other-benefits-of-making-ai-resistant-assessments/ 

SUB-THEME

AI and Education 

 

KEYWORDS

AI chatbots, AI-resistant assessments, authentic assessments

 

CATEGORY

Paper Presentation 

 

ABSTRACT

Over the past year, the world has witnessed growing concern about the performance of artificial intelligence (AI) chatbots improving at a rate much faster than our ability to comprehend all the implications, leading to questions on whether we should slow down or even halt the “race to god-like AI”1. In the academic world, concerns mostly relate to how to assess students in this new day and age, in which ChatGPT has been found to pass entry exams in fields as varied as medicine, law, and business (Wilde, 2023). Such legitimate concerns have drawn diverse responses from universities around the world, from banning AI chatbots altogether, as at the French university Sciences Po (Sciences Po, 2023), to providing guidelines and recommendations for staff and students, the choice made by NUS in our interim policy guidelines (NUS Libraries, n.d.).

 

The risks of plagiarism and other acts of academic dishonesty are real, but this is not the first time we have faced such issues. At the peak of the COVID-19 pandemic, students were asked to take their exams from home, and in many cases the simple conversion of a pen-and-paper exam into a digital one, without rethinking the entire assessment, led to a rising number of plagiarism and other cheating cases. As with the COVID-19 pandemic, can we once again rethink our assessments, not only to proof them against AI abuse but also to take this opportunity to deliver more meaningful assessments, better aligned with the skills our students need in this day and age (Mimirinis, 2019)? Instead of banning ChatGPT and the like, should we simply acknowledge that AI is here to stay and design assessments that test higher-order thinking skills, allowing us at the same time to distinguish between students who engage in surface learning and those who have achieved a genuinely deep understanding of the topic? Such exams would constitute a form of authentic assessment, recreating the conditions in which students will apply their knowledge in their professional environment (Shand, 2020).

 

In this talk, I will present some general guidelines on the kinds of exams that can both test students for higher-order thinking skills and resist AI chatbots. Real examples will be provided, in which students are asked to:

  • Deliver a critical analysis of a scientific paper
  • Interpret graphs or images
  • Solve ill-defined and complex problems

 

I will show how well (or not) these exams resist ChatGPT and compare the AI output to that of real (anonymised) students across a range of performance levels (excellent, average, marginal). I will conclude with the limitations, e.g., the risk of increasing the exam’s difficulty by too large a margin, making it hard for weaker students to perform reasonably well.

 

ENDNOTE

  1. Refer to https://www.ft.com/content/03895dc4-a3b7-481e-95cc-336a524f2ac2 for details.

 

REFERENCES

Mimirinis, M. (2019). Qualitative differences in academics’ conceptions of e-assessment. Assessment & Evaluation in Higher Education, 44(2), 233-48. http://dx.doi.org/10.1080/02602938.2018.1493087

NUS Libraries. (n.d.). Academic integrity essentials. https://libguides.nus.edu.sg/new2nus/acadintegrity

Sciences Po (2023, January 27). Sciences Po bans the use of ChatGPT without transparent referencing. https://newsroom.sciencespo.fr/sciences-po-bans-the-use-of-chatgpt/

Shand, G. (2020). Bringing OSCEs into the 21st century: Why internet access is a requirement for assessment validity. Medical Teacher, 42(4), 469-71. http://dx.doi.org/10.1080/0142159X.2019.1693527

Wilde, J. (2023, January 27). ChatGPT passes medical, law, and business exams. Morning Brew. https://www.morningbrew.com/daily/stories/2023/01/26/chatgpt-passes-medical-law-business-exams

 

Teaching Augmentative Uses of ChatGPT and Other Generative AI Tools

Jonathan Y. H. SIM
Department of Philosophy, Faculty of Arts and Social Sciences (FASS)

jyhsim@nus.edu.sg

 

Sim, J. Y. H. (2023). Teaching augmentative uses of ChatGPT and other generative AI tools [Paper presentation]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/teaching-augmentative-uses-of-chatgpt-and-other-generative-ai-tools/

SUB-THEME

AI and Education 

 

KEYWORDS

ChatGPT, generative AI, philosophy of technology, AI augmentation

 

CATEGORY

Paper Presentation 

 

ABSTRACT

Since the rise of generative artificial intelligence (GenAI) tools like ChatGPT, educators have expressed concerns that students may misuse them by growing too reliant on them or by using them to take shortcuts in their learning, thus undermining important learning objectives that we set for them.

 

Such concerns are not new in the history of technology. Socrates was one of the first to voice concerns about how the invention of writing would be detrimental to people’s memories:

“[Writing] will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.” (Phaedrus, 274b-277a)

 

Common to these complaints is the fear that new technologies will replace existing human processes—as a substitutive tool—leading to a deterioration or loss of certain human abilities. But this is not the only approach to technology—we can also use these tools in an augmentative way to enhance existing human abilities and processes (Szathmáry et al., 2018). While we may not have memories as strong as the ancients did, writing has since augmented our thinking abilities, allowing us to easily record, recall, transmit, evaluate, analyse, and synthesise far more information than before.

 

This augmentative approach can also be applied to GenAI tools, like ChatGPT. 19.1% of my students (n=351) found ways to use ChatGPT as an augmentative tool rather than as a substitutive tool:

  • As an idea generator or a sounding board to help develop ideas before working on an assignment
  • As a learning resource to teach/explain concepts or clarify confusions
  • As a tool to improve their expression

 

Admittedly, it can be difficult for non-savvy users to think of augmentative uses. Students are commonly exposed to substitutive applications of ChatGPT in learning, and 65.5% of students did not think skills were required to use it well.

 

How can educators encourage effective augmentative uses of GenAI tools? I believe there are three learning objectives we should focus on:

 

(1) Cultivate a collaborative mindset working with GenAI. Knowing how to talk is not the same as knowing how to work well in a team. Learners must feel comfortable and empowered working with GenAI as a collaborative partner if they are to use it as an augmentative tool. One approach is to incorporate activities that involve collaborating with GenAI. In my course, students work alongside ChatGPT to develop evaluation criteria for ride-sharing services, seeking feedback from it while also evaluating its feedback.

 

(2) Develop critical questioning skills. Learners need to learn how to scrutinise GenAI output, as the content may be inaccurate or shallow. In the same tutorial, students were challenged to find flaws in ChatGPT’s suggestions and to find areas where they could improve the quality of its output. The exercise helped them recognise that an AI’s answer is far from perfect, and that they cannot accept a seemingly well-written piece of work as the final answer. Human intervention and scrutiny are still necessary, as the AI’s work is, at best, a draft suggestion.

 

(3) Master the art of prompting. The quality of AI output depends on the quality and clarity of the instructions given to it. Learners need to hone their ability to articulate their requirements well. Later in the same tutorial, students were given a prompt for ChatGPT to generate a pitch. They were then tasked with identifying shortcomings in the output and producing better prompts to overcome those issues.

 

After the tutorial, many students reported newfound confidence and competency in utilising ChatGPT (n=351):

Table 1
Students’ perception of ChatGPT competency before and after tutorial

“I considered myself very competent in using ChatGPT”

                          Before Tutorial    After Tutorial
                          (Average 2.76)     (Average 3.71)
5 – Strongly Agree             5.41%             15.67%
4                             22.79%             47.29%
3                             27.07%             29.91%
2                             31.91%              6.84%
1 – Strongly Disagree         12.82%              0.28%

 

Table 2
Students’ perceptions of the tutorial’s effectiveness

(A) “The tutorial taught me how to effectively collaborate and work with an AI for work.” (Average 4.19)
(B) “The tutorial taught me how to effectively critique and evaluate AI-generated output so that I don’t take the answers for granted.” (Average 4.34)
(C) “The tutorial taught me how to design better prompts to get better results.” (Average 4.38)
(D) “I believe the skills taught in Tutorial 4 are useful for me when I go out to work.” (Average 4.28)

                           (A)       (B)       (C)       (D)
5 – Strongly Agree       30.77%    40.46%    43.87%    38.46%
4                        58.69%    53.56%    50.43%    52.71%
3                         9.69%     5.70%     5.41%     7.41%
2                         0.85%     0.28%     0.28%     1.42%
1 – Strongly Disagree     0%        0%        0%        0%

 

Overall, students had positive experiences learning this new approach to AI. They felt empowered and even optimistic about their future—knowledge of how to use AI in an augmentative way opens doors that previously seemed too distant. In one case, a social science major shared how he felt so empowered by the tutorial that he took on a coding internship despite being new to coding. He used ChatGPT to learn how to code, which helped him handle coding projects at work. This augmentative approach allowed him not only to produce solutions but also to evaluate them much faster than if he had worked on his own.

 

I firmly believe that teaching students how to augment their learning with GenAI tools holds immense potential in empowering our students for the future.

 

REFERENCES

Szathmáry, E., et al. (2018). Artificial or augmented intelligence? The ethical and societal implications. In J. W. Vasbinder, B. Gulyás, & J. W. H. Sim (Eds.), Grand challenges for science in the 21st century. World Scientific.

Plato. (1952). Phaedrus (R. Hackforth, Trans.). Cambridge University Press.

 

Harnessing the Potential of Generative AI in Medical Undergraduate Education Across Different Disciplines—Comparative Study on Performance of ChatGPT in Physiology and Biochemistry Modified Essay Questions

W. A. Nathasha Vihangi LUKE1*, LEE Seow Chong2, Kenneth BAN2, Amanda WONG1, CHEN Zhi Xiong1,3, LEE Shuh Shing3, Reshma TANEJA1,
Dujeepa SAMARASEKARA3, Celestial T. YAP1

1Department of Physiology, Yong Loo Lin School of Medicine (YLLSOM)
2Department of Biochemistry, YLLSOM
3Centre for Medical Education, YLLSOM

*nathasha@nus.edu.sg

 

Luke, W. A. N. V., Lee, S. C., Ban, K., Wong, A., Chen, Z. X., Lee, S. S., Taneja, R., Samarasekara, D., & Yap, C. T. (2023). Harnessing the potential of generative AI in medical undergraduate education across different disciplines—comparative study on performance of ChatGPT in physiology and biochemistry modified essay questions [Paper presentation]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/harnessing-the-potential-of-generative-ai-in-medical-undergraduate-education-across-different-disciplines-comparative-study-on-performance-of-chatgpt-in-physiology-and-biochemistry-modified-es/ 
 

SUB-THEME

AI and Education

 

KEYWORDS

Generative AI, artificial intelligence, large language models, physiology, biochemistry

 

CATEGORY

Paper Presentations

 

INTRODUCTION & JUSTIFICATION

Revolutions in generative artificial intelligence (AI) have led to profound discussions of its potential implications across various disciplines in education. ChatGPT passing the United States Medical Licensing Examination (Kung et al., 2023) and excelling in other discipline-specific examinations (Subramani et al., 2023) displayed its potential to revolutionise medical education. The capabilities and limitations of this technology across disciplines should be identified to promote the optimal use of such models in medical education. This study evaluated the performance of ChatGPT, a large language model (LLM) by OpenAI powered by GPT-3.5, on modified essay questions (MEQs) in physiology and biochemistry for medical undergraduates.

 

METHODOLOGY

Modified essay questions (MEQs) extracted from physiology and biochemistry tutorials and case-based learning scenarios were input into ChatGPT. Answers were generated for 44 MEQs in physiology and 43 MEQs in biochemistry. Each response was graded independently by two examiners, guided by a marking scheme. In addition, the examiners rated the answers on concordance, accuracy, language, organisation, and information, and provided qualitative comments. Descriptive statistics, including the mean, standard deviation, and variance, were calculated for the average scores and for subgroups according to Bloom’s taxonomy. A single-factor ANOVA was conducted to test the subgroups for statistically significant differences.
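
The subgroup comparison described above can be sketched as follows. This is an illustrative sketch only: the scores, group sizes, and the hand-rolled `anova_f` helper are hypothetical and are not the study’s data or code (in practice, a statistics package would also report the p-value).

```python
# Illustrative sketch: descriptive statistics and a single-factor (one-way)
# ANOVA across score subgroups, using hypothetical data.
from statistics import mean, stdev

def anova_f(groups):
    """Return the one-way ANOVA F-statistic for a list of score groups."""
    all_scores = [x for g in groups for x in g]
    grand_mean = mean(all_scores)
    # Between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of scores around their own group mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical per-question scores (out of 100) by Bloom's taxonomy level
lower_order = [90, 100, 95, 100, 85, 90]
higher_order = [60, 70, 50, 65, 55, 60]

for name, g in [("lower-order", lower_order), ("higher-order", higher_order)]:
    print(f"{name}: mean={mean(g):.1f}, SD={stdev(g):.1f}")

f_stat = anova_f([lower_order, higher_order])
print(f"F = {f_stat:.2f}")  # compare against the critical F value for p < 0.05
```

With two subgroups, this F-test is equivalent to a two-sample t-test (F = t²); the study’s per-question scores would replace the hypothetical lists.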

 

RESULTS

ChatGPT’s answers (n = 44) obtained a mean score of 74.7 (SD 25.96) in physiology. Sixteen of the 44 answers (36.3%) scored 90 marks or above out of 100, and 13/44 (29.5%) obtained full marks. There was a statistically significant difference in mean scores between the higher-order and lower-order questions of Bloom’s taxonomy (p < 0.05). Qualitative comments commended ChatGPT’s strength in producing exemplary answers to most questions in physiology, mostly excelling in lower-order questions. Deficiencies were noted in applying physiological concepts in a clinical context.

 

The mean score for biochemistry was 59.3 (SD 26.9). Only 2/43 answers (4.6%) obtained full marks, while 7/43 (16.3%) scored 90 marks or above. There was no statistically significant difference between the scores for higher- and lower-order questions of Bloom’s taxonomy. The examiners’ comments highlighted that answers lacked relevant information and contained faulty explanations of concepts, and that outputs demonstrated breadth but not the depth expected.


Figure 1. Distribution of scores.

 

CONCLUSIONS AND RECOMMENDATIONS

Overall, our study demonstrates the differential performance of ChatGPT across the two subjects. ChatGPT performed with a high degree of accuracy on most physiology questions, particularly excelling in lower-order questions of Bloom’s taxonomy. Generative AI answers in biochemistry scored lower: examiners commented that they demonstrated less precision and specificity and lacked depth in their explanations.

 

The performance of language models largely depends on the availability of training data; hence the efficacy may vary across subject areas. The differential performance highlights the need for future iterations of LLMs to receive subject and domain-specific training to enhance performance.

 

This study further demonstrates the potential of generative AI technology in medical education. Educators should be aware of the abilities and limitations of generative AI in different disciplines and revise learning tools accordingly to ensure integrity. Efforts should be made to integrate this technology into learning pedagogies when possible.

 

The performance of ChatGPT on MEQs highlights the potential of generative AI as an educational tool for students. However, this study indicates that the current technology should not be recommended as a sole resource, but rather as a supplementary tool alongside other learning resources. In addition, students should take the differential performance across subjects into consideration when determining the extent to which this technology is incorporated into their learning.

 

REFERENCES

Kung, T. H., Cheatham, M., Medenilla, A., Sillos, C., De Leon, L., Elepaño, C., Madriaga, M., Aggabao, R., Diaz-Candido, G., Maningo, J., & Tseng, V. (2023). Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health, 2(2), e0000198. https://doi.org/10.1371/journal.pdig.0000198

Subramani, M., Jaleel, I., & Krishna Mohan, S. (2023). Evaluating the performance of ChatGPT in medical physiology university examination of phase I MBBS. Advances in Physiology Education, 47(2), 270–71. https://doi.org/10.1152/advan.00036.2023

 

Doing But Not Creating: A Theoretical Study of the Implications of ChatGPT on Paradigmatic Learning Processes

Koki MANDAI1, Mark Jun Hao TAN1, Suman PADHI1, and Kuin Tian PANG1,2,3 

1Yale-NUS College
2Bioprocessing Technology Institute, Agency for Science, Technology, and Research (A*STAR), Singapore
3School of Chemistry, Chemical Engineering, and Biotechnology, Nanyang Technological University (NTU), Singapore

*m.koki@u.yale-nus.edu.sg

 

Mandai, K., Tan, M. J. H., Padhi, S., & Pang, K. T. (2023). Doing but not creating: A theoretical study of the implications of ChatGPT on paradigmatic learning processes [Paper presentation]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/doing-but-not-creating-a-theoretical-study-of-the-implications-of-chatgpt-on-paradigmatic-learning-processes/

SUB-THEME

AI and Education

 

KEYWORDS

AI, artificial intelligence, education, ChatGPT, learning, technology

 

CATEGORY

Paper Presentation 

 

CHATGPT AND LEARNING FRAMEWORKS

Introduction

Since the recent release of ChatGPT, developed by OpenAI, multiple sectors have been affected, and educational institutions have been impacted more deeply than most (Dwivedi et al., 2023; Eke, 2023; Rudolph et al., 2023). Following the sub-theme of “AI and Education”, we conduct a systematic investigation into the educational uses of ChatGPT and its quality as a tool for learning, teaching, and assessment, mainly in higher education. The research is carried out through comprehensive reviews of the literature on the current and future educational landscape and on ChatGPT’s methodology and function, with major educational theories serving as the main component in constructing the evaluative criteria. Findings will be presented via a paper presentation.

 

Theoretical Foundations and Knowledge Gaps

Current literature on the intersections of education and artificial intelligence (AI) consists of variegated and isolated critiques of how AI impacts segments of the educational process. For instance, there is a large focus on the general benefits or harms in education (Baidoo-Anu & Ansah, 2023; Dwivedi et al., 2023; Mhlanga, 2023), rather than discussion of the specific levels of learning that students and teachers encounter. Furthermore, there seems to be a lack of analysis of the fundamental change in, and reconsideration of, the meaning of education that may occur with the introduction of AI. The situation can be described as a Manichean dichotomy: one side argues for expected enhancements and improved efficiency in education (Ray, 2023; Rudolph et al., 2023), while the other warns of the risk of losing knowledge, creativity, and the basis of future development (Chomsky et al., 2023; Dwivedi et al., 2023; Krügel et al., 2022, 2023).

 

By referring to John Dewey’s model of reflective thought and action for the micro-scale analysis (Dewey, 1986; Gutek, 2005; Miettinen, 2000) and a revision of Bloom’s taxonomy for the macro-scale analysis (Elsayed, 2023; Forehand, 2005; Kegan, 1977; Seddon, 1978), we consider the potential impact of ChatGPT across progressive levels of learning and the activities associated with them. These models were chosen mainly for their hierarchical frameworks, which are easier to apply in evaluation than other models; this does not imply that they are superior. The evaluative criteria we aim to construct are intended to be comprehensive, so our research provides a possible base for future improvement. We also incorporate insights from perspectives beyond educational theory, such as policy and philosophy, drawing on the diverse backgrounds of our research team.

 

Purpose and Significance of the Present Study

This study sought to answer questions regarding the viability of ChatGPT as an educational tool, its proposed benefits and harms, and potential obstacles educators may face in its uptake, as well as relevant safeguards against those obstacles.

 

Furthermore, we suggest a possible base for a new theoretical framework in which ChatGPT is explicitly integrated with standard educational hierarchies, in order to provide better instruction to educators and students. This study also aims to establish a baseline for policy considerations on ChatGPT as an educational tool that can either ameliorate or deteriorate learning. On that basis, ChatGPT could be adopted by educational institutions with accompanying policies that can be considered and amended in governmental legislatures for wider educational use.

 

Potential Findings/Implications

The existing literature suggests that, in keeping with intuitions regarding higher-level learning, ChatGPT itself is limited in what it can do: it is only able to process lower- to mid-level learning comprising repetitive actions like remembering, understanding, applying, and analysing (Dwivedi et al., 2023; Elsayed, 2023). Some literature also positions ChatGPT as less directly useful in higher-level processes such as evaluation and the creation of new knowledge, and suggests it can even hinder them (Crawford et al., 2023; Rudolph et al., 2023). Even within the lower-level processes, there is considerable concern that overreliance will dull learners’ abilities (Halaweh, 2023; Ray, 2023). Yet under the lens of the educational theories applied in this paper, there seems to be a possibility that ChatGPT may be able to assist higher-order skills such as creativity and related knowledge acquisition. As the net benefit of ChatGPT on education may depend on external factors we have yet to take into account, such as the field of study, the personality of the user, and the learning environment, further research is required to determine its optimal use in education. Still, this attempt may be one of the first steps towards constructing evaluative criteria for a new era of education with AI.

 

REFERENCES

Baidoo-Anu, D., & Ansah, L. O. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. SSRN. https://ssrn.com/abstract=4337484

Chomsky, N., Roberts, I., & Watumull, J. (2023, March 8). Noam Chomsky: The false promise of ChatGPT. The New York Times. https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html

Crawford, J., Cowling, M., & Allen, K. (2023). Leadership is needed for ethical ChatGPT: Character, assessment, and learning using artificial intelligence (AI). Journal of University Teaching & Learning Practice, 20(3). https://doi.org/10.53761/1.20.3.02

Dewey, J. (1986). Experience and education. The Educational Forum, 50(3), 241-52. https://doi.org/10.1080/00131728609335764

Dwivedi, Y. K. et al. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 1-63. https://doi.org/10.1016/j.ijinfomgt.2023.102642

Eke, D. O. (2023). ChatGPT and the rise of generative AI: Threat to academic integrity? Journal of Responsible Technology, 13, 1-4, https://doi.org/10.1016/j.jrt.2023.100060

Elsayed, S. (2023). Towards mitigating ChatGPT’s negative impact on education: Optimizing question design through Bloom’s taxonomy. https://doi.org/10.48550/arXiv.2304.08176

Forehand, M. (2005). Bloom’s taxonomy: Original and revised. In M. Orey (Ed.), Emerging perspectives on learning, teaching, and technology. http://projects.coe.uga.edu/epltt/

Gutek, G. L. (2005). Jacques Maritain and John Dewey on education: A reconsideration. Educational Horizons, 83(4), 247–63. http://www.jstor.org/stable/42925953

Halaweh, M. (2023). ChatGPT in education: Strategies for responsible implementation. Contemporary Educational Technology, 15(2), ep421. https://doi.org/10.30935/cedtech/13036

Kegan, D. L. (1977). Using Bloom’s cognitive taxonomy for curriculum planning and evaluation in nontraditional educational settings. The Journal of Higher Education, 48(1), 63–77. https://doi.org/10.2307/1979174

Krügel, S., Ostermaier, A., & Uhl, M. (2022). Zombies in the loop? Humans trust untrustworthy AI-advisors for ethical decisions. Philosophy & Technology, 35, 17. https://doi.org/10.1007/s13347-022-00511-9

Krügel, S., Ostermaier, A., & Uhl, M. (2023). ChatGPT’s inconsistent moral advice influences users’ judgment. Scientific Reports, 13, 4569. https://doi.org/10.1038/s41598-023-31341-0

Mhlanga, D. (2023). Open AI in education, the responsible and ethical use of ChatGPT towards lifelong learning. SSRN. https://ssrn.com/abstract=4354422

Miettinen, R. (2000). The concept of experiential learning and John Dewey’s theory of reflective thought and action. International Journal of Lifelong Education, 19(1), 54-72. https://doi.org/10.1080/026013700293458

Ray, P. P. (2023). ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems, 3, 121-154. https://doi.org/10.1016/j.iotcps.2023.04.003

Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning & Teaching, 6(1), 1-22. https://journals.sfu.ca/jalt/index.php/jalt/article/view/689

Seddon, G. M. (1978). The properties of Bloom’s Taxonomy of educational objectives for the cognitive domain. Review of Educational Research, 48(2), 303–23. https://doi.org/10.2307/1170087

 

Does AI-generated Writing Differ from Human Writing in Style? A Literature Survey

Feng CAO
Centre for English Language and Communication (CELC)

elccf@nus.edu.sg

 

Cao, F. (2023). Does AI-generated writing differ from human writing in style? A literature survey [Lightning talk]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/does-ai-generated-writing-differ-from-human-writing-in-style-a-literature-survey/

 

SUB-THEME

AI and Education

 

KEYWORDS

AI-generated writing, human writing, ChatGPT, style, linguistic features

 

CATEGORY

Lightning Talks

 

ABSTRACT

Artificial intelligence (AI) has witnessed significant advancements recently, leading to the emergence of AI-generated writing. This new form of writing has sparked interest and debate, raising questions about how it differs from traditional human writing. One popular AI tool which has been attracting much attention since 2022 is ChatGPT, which has been used to create texts in many domains. In this preliminary survey of literature, I aim to review studies which compare the writing generated by ChatGPT with human writing to explore the rhetorical and linguistic differences in style.

 

This literature survey focuses on the most widely used databases: Google Scholar, Scopus, and Web of Science. An initial search in these databases using key terms such as “AI-generated writing”, “human writing”, and “ChatGPT” returned over 400 items relevant to the topic. I skimmed through the titles and abstracts, and sometimes the full texts to assess their relevance to the research question. Irrelevant items and duplicates were excluded, and only the most pertinent sources were further analysed.
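The screening step described above (pooling hits from several databases, then excluding duplicates) can be sketched programmatically. The records, field names, and matching rule below are hypothetical, purely to illustrate the idea of deduplicating search results by DOI or, failing that, by a normalised title:

```python
# Deduplicate search results pooled from several databases.
# Records are matched by DOI when available, otherwise by a
# normalised title, so the same paper indexed in Google Scholar,
# Scopus, and Web of Science is kept only once.

def normalise_title(title):
    """Lowercase and strip punctuation/whitespace so near-identical titles match."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def deduplicate(records):
    seen = set()
    unique = []
    for rec in records:
        key = rec.get("doi") or normalise_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Invented example records (not actual search results):
hits = [
    {"title": "ChatGPT vs Human Writing", "doi": "10.1000/x1"},   # e.g. Scopus
    {"title": "ChatGPT vs human writing.", "doi": "10.1000/x1"},  # same DOI, Web of Science
    {"title": "AI Text in Medical Abstracts", "doi": None},       # e.g. Google Scholar
    {"title": "AI text in medical abstracts", "doi": None},       # title-only duplicate
]
print(len(deduplicate(hits)))  # prints 2
```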

 

The preliminary analysis showed that AI-generated writing differed from human writing in a number of genres and disciplines, for example, medical abstracts and case reports, business correspondence, restaurant reviews, and academic essays. Regarding content creation, the literature shows that AI is capable of generating highly readable medical abstracts and case reports which are almost indistinguishable from human writing. However, expert reviewers also reported a few key limitations, such as inaccuracies in content and fictitious citations.

 

In terms of tone and voice, the analysis reveals that human writing differs from AI-generated writing in that it evokes emotions and resonates with readers on a personal level. Human writers bring their life experiences, cultural background, and empathy into their work, enabling them to convey complex emotions, capture nuances, and engage readers’ emotions. AI-generated writing, by contrast, typically lacks the emotional depth and intuition present in human writing.

 

In terms of linguistic features, the literature indicates that AI-generated writing tends to employ longer sentences than human writing, while the latter is likely to employ more diverse vocabulary and expressions. In addition, AI-generated writing adopts a more formal register, whereas human writing is more likely to use an informal register, marked, for example, by the frequent use of personal pronouns.
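The reported differences in sentence length and lexical diversity can be operationalised with simple stylometric measures. The sketch below uses two invented sample passages (not data from the surveyed studies) to show how such metrics are computed:

```python
import re

def avg_sentence_length(text):
    """Mean number of words per sentence (sentences split on . ! ?)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return len(text.split()) / len(sentences)

def type_token_ratio(text):
    """Distinct words / total words: a rough measure of lexical diversity."""
    tokens = [w.strip(".,!?").lower() for w in text.split()]
    return len(set(tokens)) / len(tokens)

# Invented passages for illustration only:
human = "I loved the soup. It reminded me of my grandmother's kitchen!"
ai = "The soup was flavourful and the soup was served at an appropriate temperature."

print(avg_sentence_length(human), avg_sentence_length(ai))
print(type_token_ratio(human), type_token_ratio(ai))
```

On these toy passages, the AI-like text shows longer sentences and a lower type-token ratio, mirroring the direction of the differences reported in the literature.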

 

In short, this survey of the literature provides an initial overview of some key differences between AI-generated writing and human writing. While AI models like ChatGPT have made remarkable advances in mimicking human writing, they still lack the distinct characteristics that make human writing unique and emotionally resonant. Understanding these differences is vital for harnessing the potential of AI-generated writing while mitigating potential risks and challenges. In the field of language education, a better understanding of the differences between AI- and human writing may help teachers and novice writers to better utilise AI tools for developing academic writing skills and publishing. At the same time, by addressing ethical concerns and nurturing human creativity alongside AI capabilities, teachers and learners can navigate the evolving landscape of AI-generated writing, and leverage it to enhance human expression and communication in a responsible and inclusive manner.

 

Exploring Activity-based Instructional Approaches to Develop Students’ Understanding of the Ethical Implications of ICT

Alex MITCHELL1*, Weiyu ZHANG1, Jingyi XIE1, Bimlesh WADHWA2, and Eric KERR3

1Department of Communications and New Media, Faculty of Arts and Social Sciences
2Department of Computer Science, School of Computing
3Tembusu College and Asia Research Institute (ARI)

*1alexm@nus.edu.sg

 

Mitchell, A., Zhang, W., Xie, J., Wadhwa, B., & Kerr, E. (2023). Exploring activity-based instructional approaches to develop students’ understanding of the ethical implications of ICT [Paper presentation]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/exploring-activity-based-instructional-approaches-to-develop-students-understanding-of-the-ethical-implications-of-ict/

SUB-THEME

AI and Education 

 

KEYWORDS

IT ethics education, technology design, educational strategies, activity-based instruction

 

CATEGORY

Paper Presentation 

 

ABSTRACT

Information and communications technology (ICT) such as artificial intelligence (AI) offers tremendous opportunities to benefit society, but raises concerns over potential harm to social good. While ICT education has focused on advancing technologies, there is less emphasis on embedding ethical considerations in the learning of ICT. There is increasing public concern over the unethical consequences of ICT development and usage, particularly given the recent widespread adoption of AI-based tools such as ChatGPT. This suggests a need for the educational community “to renew its emphasis on nurturing the ability to recognize and engage with ethical issues emerging in relation to AI” (Borenstein & Howard, 2021) and ICT more generally. This paper presentation describes our exploration of activity-based instructional approaches to help students gain a better understanding of the ethical implications of ICT.

 

Current approaches to ICT ethics education can be categorised into three groups: ethical guidelines, fairness toolkits, and activity-based approaches (Zhang, 2022). Using ethical guidelines as a starting point for ICT ethics education can be problematic, as current guidelines tend to take an action-restricting, checkbox-based approach, making them inherently limiting and hard to adapt to specific situations (Hagendorff, 2020). In addition, students often find ethics education dry and hard to apply when education emphasises philosophical principles without accounting for real-life complexities. Similarly, fairness toolkits have limitations in terms of adaptability, and, if poorly designed, “could engender false confidence in flawed algorithms” (Lee & Singh, 2021). Activity-based co-design approaches, such as design fiction and speculative design (Baumer et al., 2020; Pierce, 2021), offer an alternative to more traditional approaches, and address the call for AI ethics education to move beyond approaches grounded in instructionism (Holmes et al., 2022).

 

This paper explores the effectiveness of activity-based ethics education strategies across various ICT-related courses. Specifically, an exploratory study was carried out using the Value Cards game (Shen et al., 2021) and co-design sessions based on the Timelines design activity (Wong & Nguyen, 2021). Acknowledging “the importance of having interdisciplinary teams who create AI ethics content and potentially teach it” (Borenstein & Howard, 2021), we included courses from the Department of Communications and New Media, the Department of Computer Science, and Tembusu College at NUS. More than 120 students from the courses NM2209 “Social Psychology of New Media”, NMC5322 “Interactive Media Marketing Strategies”, CS3240 “Interaction Design”, and UTC1102 “Fakes” participated in the study. All four courses include at least one session that grapples with ethical issues in developing or using technology such as AI. For NM2209 and UTC1102, value cards were deployed to explore the implications of AI-generated content (see Figure 1); for CS3240, adapted value cards were used to discuss dark patterns such as nudges (see Figure 2); and for NMC5322, we used the Timelines design activity (see Figure 3) to explore the impact of various ICTs, such as AI, gamification, and the metaverse, on interactive marketing.

Figure 1. Examples of value cards used in NM2209.

 

Figure 2. Examples of value cards used in CS3240.

 


Figure 3. Students engaged in the Timelines activity in NMC5322.

 

Students answered a survey about their ethics perception and awareness before and after participating in the activities. In addition, a subset of the students took part in a focus group soon after the courses ended.

 

In our presentation, we will share our insights from the use of these two approaches, highlighting the challenges we faced and the strengths of each activity. We will also provide suggestions both for how these approaches can be improved, and what educators can do more broadly to overcome the limitations of current approaches to ICT ethics education.

 

ACKNOWLEDGEMENTS

This project is supported by the NUS Centre of Development for Teaching and Learning Teaching Enhancement Grant (TEG) “Exploring Instructional Approaches to Develop Students’ Ethical Mindset for a Better Understanding of the Ethical and Social Implications of Technology.”

 

REFERENCES

Baumer, E. P. S., Blythe, M., & Tanenbaum, T. J. (2020). Evaluating design fiction: The right tool for the job. Proceedings of the 2020 ACM Designing Interactive Systems Conference, 1901–13. https://doi.org/10.1145/3357236.3395464

Borenstein, J., & Howard, A. (2021). Emerging challenges in AI and the need for AI ethics education. AI and Ethics, 1(1), 61–65. https://doi.org/10.1007/s43681-020-00002-7

Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8

Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S. B., Santos, C., Rodrigo, M. T., Cukurova, M., Bittencourt, I. I., & Koedinger, K. R. (2022). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 32(3), 504–26. https://doi.org/10.1007/s40593-021-00239-1

Lee, M. S. A., & Singh, J. (2021). The landscape and gaps in open source fairness toolkits. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–13. https://doi.org/10.1145/3411764.3445261

Pierce, J. (2021). In tension with progression: Grasping the frictional tendencies of speculative, critical, and other alternative designs. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–19. https://doi.org/10.1145/3411764.3445406

Shen, H., Deng, W. H., Chattopadhyay, A., Wu, Z. S., Wang, X., & Zhu, H. (2021). Value cards: An educational toolkit for teaching social impacts of machine learning through deliberation. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 850–61.

Wong, R. Y., & Nguyen, T. (2021). Timelines: A world-building activity for values advocacy. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–15. https://doi.org/10.1145/3411764.3445447

Zhang, W. (2022). Civic AI education: Developing a deliberative framework. Proceedings of the 4th Annual Symposium on HCI Education (EduCHI ’22), April 30-May 1, 2022.

 

Creating Teaching Videos Using AI-generated Voices

David CHEW
Department of Statistics and Data Science, Faculty of Science (FOS)

david.chew@nus.edu.sg

 

Chew, D. (2023). Creating teaching videos using AI-generated voices [Paper presentation]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/creating-teaching-videos-using-ai-generated-voices/

SUB-THEME

AI and Education 

 

KEYWORDS

Technology-enhanced learning, AI voices, videos, blended learning,  transferability

 

CATEGORY

Paper Presentation 

 

ABSTRACT

The advent of artificial intelligence (AI) presents a remarkable opportunity for various industries, and education is no exception. Within the realm of educational technology, a promising opportunity has emerged with the use of AI voices to create teaching videos. This innovative approach harnesses the power of AI to enhance educational content and delivery methods, revolutionising the way knowledge is imparted to learners.

 

In this talk, I describe an effort to use AI-generated voices to create teaching videos for the course ST2334 “Probability and Statistics”. ST2334 has an enrolment of 800 students every semester and is offered in a blended learning format. Each week, students view, at their own time, 30 to 40 minutes’ worth of pre-recorded videos before attending a “live” lecture delivered by the course coordinator. As the course is taught by different faculty members in different semesters, it was decided that the pre-recorded videos would be made with a “neutral” voice. The AI voice software Descript was then used to create them.

 

There are several ways you can use Descript.

(A) Use it as a video recorder cum editor

  • Record your teaching videos using your own voice.
  • Import the videos into Descript. Voice narration is automatically transcribed into text and aligned to the audio, making it easy to edit your videos in a word processor-like environment (Figure 1). Instead of working with sound waves (as in many other video editors), the user works on the script directly: deleting words automatically removes the associated video footage.
  • If you would like to replace (the audio of) a mispronounced or wrongly chosen word, you can select that word, correct it, and have it replaced using a trained AI voice that sounds exactly like you.
  • Annotations/animations can be timed to coincide with text easily.
Figure 1. The Descript interface. Annotations/animations can be timed to sync with words (See blue arrows).
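Descript derives such timings from the transcribed audio itself. As a rough, hypothetical illustration of the underlying idea of cueing annotations to points in a narration script, the sketch below estimates when each sentence begins, assuming a constant speaking rate of 150 words per minute (an assumed figure, not Descript’s method):

```python
WORDS_PER_MINUTE = 150  # assumed average narration pace

def sentence_start_times(script):
    """Estimate the start time (in seconds) of each sentence in a script,
    so an annotation can be cued when its sentence begins."""
    seconds_per_word = 60.0 / WORDS_PER_MINUTE
    t = 0.0
    timings = []
    for sentence in script:
        timings.append((round(t, 1), sentence))
        t += len(sentence.split()) * seconds_per_word
    return timings

# Invented narration script for illustration:
script = [
    "A random variable assigns a number to each outcome.",
    "Its distribution tells us how likely each value is.",
]
for start, sentence in sentence_start_times(script):
    print(f"{start:5.1f}s  {sentence}")
```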

 

(B) Use it to construct your videos from scratch using an AI voice

  • Import your slides/videos into Descript.
  • Overlay the slides/videos with AI voices by typing out a script.
  • You may use (i) a stock AI voice, or (ii) train and use an AI voice that sounds exactly like you.

 

Here are some advantages of using an AI voice software like Descript:

  • The videos can be edited easily in the future, much like how one can easily edit a Word document or a PowerPoint file. Slides can be replaced, the script can be edited and audio regenerated easily in Descript.
  • The videos are easily transferable. Colleagues taking over the course do not have to record new videos using their own voice, but can easily reuse these videos since they are made with a “neutral” stock voice. They can also choose to train and use their own AI voice.

 

The use of an AI voice to produce teaching videos holds tremendous potential. This technology is heavily utilised by podcast content creators. There are many aspects of harnessing AI that educators can learn from such content creators to produce teaching videos that are engaging and accessible to students.

 

REFERENCES

Descript (2020). Introducing Descript [Video]. https://youtu.be/Bl9wqNe5J8U

Descript (2022). Descript Storyboard: Preview & Demo [Video]. https://youtu.be/P7SfbmsEK24

 

 

Harnessing the Power of ChatGPT for Assessment Question Generation: Five Tips for Medical Educators

Inthrani Raja INDRAN*, Priya PARANTHAMAN, and Nurulhuda MUSTAFA

Department of Pharmacology,
Yong Loo Lin School of Medicine (YLLSoM)

*phciri@nus.edu.sg

 

Indran, I. R., Paranthaman, P., & Mustafa, N. (2023). Harnessing the power of ChatGPT for assessment question generation: Five tips for medical educators [Lightning talk]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/harnessing-the-power-of-chatgpt-for-assessment-question-generation-five-tips-for-medical-educators/ 

SUB-THEME

AI and Education 

 

KEYWORDS

AI, ChatGPT, questions, medical assessment

 

CATEGORY

Lightning Talks 

 

INTRODUCTION

Developing diverse and high-quality assessment questions for the medical curriculum is a complex and time-intensive task, as the questions often require clinically relevant scenarios aligned to the learning outcomes (Al-Rukban, 2006; Palmer & Devitt, 2007). The emergence of artificial intelligence (AI)-driven large language models (LLMs) presents an unprecedented opportunity to explore how AI can be harnessed to optimise and automate these complex tasks for educators (OpenAI, 2023). It also provides an opportunity for students to use LLMs to create practice questions and further their understanding of the concepts they wish to test.

 

AIMS & METHODS

This study aims to establish a dependable set of practical pointers that enable educators to tap the ability of LLMs, like ChatGPT, to, first, enhance question generation in healthcare professions education, using multiple-choice questions (MCQs) as an illustrative example, and second, generate diverse clinical scenarios for teaching and learning purposes. Finally, we hope that our experiences will encourage more educators to explore and adopt AI tools such as ChatGPT with greater ease, especially those with limited prior experience.

 

To generate diverse, high-quality clinical scenario MCQs, we outlined core medical concepts and identified essential keywords to integrate into the instruction stem. The text inputs were iteratively refined until we developed instruction prompts that generated questions of the desired quality. Following question generation, domain experts reviewed the questions for content accuracy and overall relevance, identifying any potential flaws in the question stem. This process of soliciting feedback and implementing refinements enabled us to continuously enhance the prompts and the quality of the questions generated. By prioritising expert review, we established a necessary validation process for the MCQs prior to their formal implementation.

 

THE FIVE TIPS

We consolidated the following tips to effectively harness the power of ChatGPT for assessment question generation.

 

Tip 1: Define the Objective and Select the Appropriate Model

Determine the purpose of question generation and choose an AI model based on your needs and access. Choose ChatGPT 4.0 over 3.5 for greater accuracy and concept integration; note that ChatGPT 4.0 requires a subscription. Activate the beta features in “Settings” and use the “Browse with Bing” mode to retrieve information beyond the model’s training cut-off, and install plugins for improved AI performance.

 

Tip 2: Optimise Prompt Design

When refining the stem design for question generation, there are several important considerations. Firstly, be specific in your instructions by emphasising key concepts, question types, quantity, and the answer format. Clearly state any guidelines or rules you want the model to follow. Focus on core concepts and keywords relevant to the discipline to build the instruction stem. Experiment with vocabulary to optimise question quality.
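To make Tip 2 concrete, here is a hypothetical sketch of an instruction-stem builder. The wording, parameters, and example topic are illustrative only, not the prompts used in the study; the point is that concepts, question type, quantity, answer format, and rules are all stated explicitly:

```python
def build_mcq_prompt(topic, keywords, n_questions=5, bloom_level="application"):
    """Assemble an instruction stem that states the key concepts,
    question type, quantity, answer format, and rules explicitly."""
    return (
        f"Generate {n_questions} single-best-answer multiple choice questions "
        f"on {topic} for medical students.\n"
        f"Each question must be a clinically relevant vignette targeting the "
        f"'{bloom_level}' level of Bloom's taxonomy.\n"
        f"Incorporate these key concepts: {', '.join(keywords)}.\n"
        "Rules: provide options A-E, exactly one correct answer, "
        "and a one-line explanation for the correct option."
    )

# Hypothetical example topic and keywords:
prompt = build_mcq_prompt(
    topic="beta-blocker pharmacology",
    keywords=["receptor selectivity", "contraindications", "adverse effects"],
    n_questions=3,
)
print(prompt)
```

The resulting stem can then be pasted into ChatGPT (or sent via an API) and iteratively refined, as described above.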

 

Tip 3: Build Diverse Authentic Scenarios

Develop a range of relevant clinical vignettes to broaden the scope of scenarios that can be used to assess students.

 

Tip 4: Calibrate Assessment Difficulty

Incorporate the principles of Bloom’s Taxonomy when developing assessment questions to test different cognitive skills, ranging from basic knowledge recall to complex analysis, enhancing question diversity.

 

Tip 5: Work Around Limitations

Be mindful that ChatGPT is trained on limited data and can generate factually inaccurate information. Despite diverse training, it does not possess the nuanced understanding of a medical expert, which can affect the quality of the questions it generates; human validation is necessary to address any factual inaccuracies. AI data collection also risks misuse, privacy breaches, and bias amplification, which can lead to misguided outcomes.

 

CONCLUSION

AI-assisted question generation is an iterative process, and these tips can offer any healthcare professions educator valuable guidance in automating the generation of good-quality assessment questions. Furthermore, students can leverage this technology for self-directed learning, creating and verifying their own practice questions and strengthening their understanding of medical concepts (Touissi et al., 2022). While this paper primarily demonstrates the use of ChatGPT in generating MCQs, we believe the approach can be extended to various other question types. It is also important to remember that AI augments, but does not replace, human expertise (Ali et al., 2023; Rahsepar et al., 2023). Domain experts are needed to ensure quality, accuracy, and relevance.

 

REFERENCES 

OpenAI. (2023).

Al-Rukban, M. O. (2006). Guidelines for the construction of multiple choice questions tests. J Family Community Med, 13(3), 125-33. https://www.ncbi.nlm.nih.gov/pubmed/23012132

Ali, R., Tang, O. Y., Connolly, I. D., Fridley, J. S., Shin, J. H., Zadnik Sullivan, P. L., Cielo, D., Oyelese, A. A., Doberstein, C. E., Telfeian, A. E., Gokaslan, Z. L., & Asaad, W. F. (2023). Performance of ChatGPT, GPT-4, and Google Bard on a Neurosurgery Oral Boards Preparation Question Bank. Neurosurgery. https://doi.org/10.1227/neu.0000000000002551

Palmer, E. J., & Devitt, P. G. (2007). Assessment of higher order cognitive skills in undergraduate education: modified essay or multiple choice questions? Research paper. BMC Med Educ, 7, 49. https://doi.org/10.1186/1472-6920-7-49

Rahsepar, A. A., Tavakoli, N., Kim, G. H. J., Hassani, C., Abtin, F., & Bedayat, A. (2023). How AI responds to common lung cancer questions: ChatGPT vs Google Bard. Radiology, 307(5), e230922. https://doi.org/10.1148/radiol.230922

Touissi, Y., Hjiej, G., Hajjioui, A., Ibrahimi, A., & Fourtassi, M. (2022). Does developing multiple-choice questions improve medical students’ learning? A systematic review. Med Educ Online, 27(1), 2005505. https://doi.org/10.1080/10872981.2021.2005505

 

Investigating Students’ Perception and Use of ChatGPT as a Learning Tool to Develop English Writing Skills: A Survey Analysis

Jonathan PHAN* and Jessie TENG
Centre for English Language Communication (CELC)

*jonathanphan@nus.edu.sg

 

Phan, J., & Teng, J. (2023). Investigating students’ perception and use of ChatGPT as a learning tool to develop English writing skills: A survey analysis [Paper presentation]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/investigating-students-perception-and-use-of-chatgpt-as-a-learning-tool-to-develop-english-writing-skills-a-survey-analysis/

SUB-THEME

AI and Education 

 

KEYWORDS

AI-assisted education, ChatGPT, English language communication, higher education, writing

 

CATEGORY

Paper Presentation 

 

ABSTRACT

ChatGPT, an artificial intelligence (AI) chatbot and Large Language Model (LLM) developed by OpenAI, has garnered significant attention worldwide since its release for public use in November 2022. In the field of higher education, there is considerable enthusiasm regarding the potential use of ChatGPT for innovating AI-assisted education. Advocates propose utilising this AI tool to enhance students’ learning experiences and reduce teacher workload (Baker et al., 2019; Zhai, 2022). However, some educational institutions view its use as potentially detrimental to the teaching and learning process due to its disruptive nature. Concerns include the possibility of “amplify[ing] laziness and counteracting learners’ interest to conduct their own investigations and come to their own conclusions or solutions” (Kasneci et al., 2023, p. 7), and “increased instances of plagiarism” (Looi & Wong, 2023). Consequently, some higher education institutions in various countries have banned or restricted the use of AI tools after students used ChatGPT to plagiarise (Cassidy, 2023; CGTN, 2023; Reuters, 2023; Sankaran, 2023). In response, some educators propose creating AI-resistant assessments to combat student plagiarism, while others suggest providing resources and proper guidance for students to use ChatGPT judiciously and responsibly (Rudolph et al., 2023).

 

As universities work to develop policies to address the use of AI tools, particularly ChatGPT, by both teachers and students within the academic context, they need to consider both the teachers’ and the students’ perspectives on the matter. However, given the novelty of this research topic, studies on the use of ChatGPT are not only scarce, but they have primarily focused on the pedagogical implications of AI tools from the teacher’s perspective. To address the lack of studies on students’ perspective, this study seeks to examine the perceptions and use of ChatGPT as a learning tool by higher education students.

To examine students’ perceptions of using ChatGPT as a learning tool to develop English academic writing skills, a survey questionnaire was administered to students enrolled in an undergraduate English language communication course at a local university. The questionnaire consisted of 34 five-point Likert-scale questions and two open-ended questions on participants’ views of ChatGPT and their use of it in their learning. One expected finding is that students are aware of how ChatGPT can be used; a more interesting finding is that students are also aware that ChatGPT gives misleading answers. In addition, a number of students disagreed that using ChatGPT was an efficient way of doing their assignments. Nevertheless, many use it for paraphrasing, generating ideas, and improving their general knowledge. As such, some students do feel helped by ChatGPT as a learning tool, although not every participant thinks it should be allowed in higher education.
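Agreement levels of the kind reported above can be summarised straightforwardly from Likert-coded responses. The responses below are fabricated for illustration and are not the study’s data:

```python
from collections import Counter

# 5-point Likert coding: 1 = strongly disagree ... 5 = strongly agree
def percent_agree(responses):
    """Share of respondents choosing 'agree' (4) or 'strongly agree' (5)."""
    counts = Counter(responses)
    return 100.0 * (counts[4] + counts[5]) / len(responses)

# Fabricated responses to one hypothetical item, e.g.
# "Using ChatGPT is an efficient way of doing my assignments."
item = [2, 3, 4, 5, 2, 4, 1, 3, 4, 2]
print(f"{percent_agree(item):.0f}% agree")  # prints "40% agree"
```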

 

It is hoped that the findings of this study can serve as a point of reference for educators in developing course materials and assessments so as to promote the effective use of ChatGPT in higher education.

 

 

REFERENCES

Baker, T., Smith, L., & Anissa, N. (2019). Educ-AI-tion rebooted? Exploring the future of artificial intelligence in schools and colleges. Nesta Foundation. https://www.nesta.org.uk/report/education-rebooted/

Cassidy, C. (2023, January 10). Australian universities to return to ‘pen and paper’ exams after students caught using AI to write essays. The Guardian. https://www.theguardian.com/australia-news/2023/jan/10/universities-to-return-to-pen-and-paper-exams-after-students-caught-using-ai-to-write-essays

CGTN. (2023, February 19). University of Hong Kong issues interim ban on ChatGPT, AI-based tools. CGTN. https://news.cgtn.com/news/2023-02-19/University-of-Hong-Kong-issues-interim-ban-on-ChatGPT-AI-based-tools-1hxWzqgcMxy/index.html

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., …Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274

Looi, C. K., & Wong, L. H. (2023, February 7). Commentary: ChatGPT can disrupt education, but it need not be all bad. Here’s how NIE is using it to train teachers. TODAY. https://www.todayonline.com/commentary/commentary-chatgpt-can-disrupt-education-it-need-not-be-all-bad-heres-how-nie-using-it-train-teachers-2102386

Reuters. (2023, January 28). Top French university bans use of ChatGPT to prevent plagiarism. Reuters. https://www.reuters.com/technology/top-french-university-bans-use-chatgpt-prevent-plagiarism-2023-01-27/

Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning & Teaching, 6(1), 1-22. https://doi.org/10.37074/jalt.2023.6.1.9

Sankaran, V. (2023, April 10). Japanese universities become latest to restrict use of ChatGPT. The Independent. https://www.independent.co.uk/tech/japanese-universities-chatgpt-use-restrict-b2317060.html

Zhai, X. (2022). ChatGPT user experience: Implications for education. SSRN. https://dx.doi.org/10.2139/ssrn.4312418

 
