Teaching Augmentative Uses of ChatGPT and Other Generative AI Tools

Jonathan Y. H. SIM
Department of Philosophy, Faculty of Arts and Social Sciences (FASS)

jyhsim@nus.edu.sg

 

Sim, J. Y. H. (2023). Teaching augmentative uses of ChatGPT and other generative AI tools [Paper presentation]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/teaching-augmentative-uses-of-chatgpt-and-other-generative-ai-tools/

SUB-THEME

AI and Education 

 

KEYWORDS

ChatGPT, generative AI, philosophy of technology, AI augmentation

 

CATEGORY

Paper Presentation 

 

ABSTRACT

Since the rise of generative artificial intelligence (GenAI) tools like ChatGPT, educators have expressed concerns that students may misuse these tools by growing too reliant on them or by using them to take shortcuts in their learning, thereby undermining important learning objectives that we set for them.

 

Such concerns are not new in the history of technology. Socrates was one of the first to voice concerns about how the invention of writing would be detrimental to people’s memories:

“[Writing] will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.” (Phaedrus, 274b-277a)

 

Common to these complaints is the fear that new technologies will replace existing human processes—as a substitutive tool—leading to a deterioration or loss of certain human abilities. This is not the only approach to technology—we can also use these tools in an augmentative way to enhance existing human abilities and processes (Szathmary et al., 2018). While we may not have memories as strong as the ancients did, writing has since augmented our thinking abilities, allowing us to easily record, recall, transmit, evaluate, analyse, and synthesise far more information than before.

 

This augmentative approach can also be applied to GenAI tools, like ChatGPT. 19.1% of my students (n=351) found ways to use ChatGPT as an augmentative tool rather than as a substitutive tool:

  • As an idea generator or a sounding board to help develop ideas before working on an assignment
  • As a learning resource to teach/explain concepts or clarify confusions
  • As a tool to improve their expression

 

Admittedly, it can be difficult for non-savvy users to think of augmentative uses. Students are commonly exposed to substitutive applications of ChatGPT in learning, and 65.5% of students did not think skills were required to use it well.

 

How can educators encourage effective augmentative uses of GenAI tools? I believe there are three learning objectives we should focus on:

 

(1) Cultivate a collaborative mindset working with GenAI. Knowing how to talk is not the same as knowing how to work well in a team. Learners must feel comfortable and empowered working with GenAI as a collaborative partner if they are to use it as an augmentative tool. One approach is to incorporate activities that involve collaborating with GenAI. In my course, students work alongside ChatGPT to develop evaluation criteria for ride-sharing services, seeking feedback from it while also evaluating that feedback.

 

(2) Develop critical questioning skills. Learners need to learn how to scrutinise GenAI output, as the content may be inaccurate or shallow. In the same tutorial, students were challenged to find flaws in ChatGPT’s suggestions and to identify areas where they could improve the quality of its output. The exercise helped them to recognise that an AI’s answer is far from perfect, and that they cannot take a seemingly well-written piece of work as the final answer. Human intervention and scrutiny remain necessary, as the AI’s work is, at best, a draft suggestion.

 

(3) Master the art of prompting. The quality of AI output depends on the quality and clarity of the instructions given to it. Learners need to hone their ability to articulate their requirements well. Later in the same tutorial, students were given a prompt for ChatGPT to generate a pitch. They were then tasked with identifying shortcomings in the output and producing better prompts to overcome those issues.
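To make this exercise concrete, the before-and-after comparison of prompts can be scripted. The sketch below is a minimal illustration assuming the OpenAI Python client and an API key in the environment; the model name, prompts, and the generate_pitch helper are hypothetical stand-ins, not the actual tutorial materials.

    # Sketch of the prompt-iteration exercise: compare an underspecified prompt
    # with a refined one and inspect how the output changes. Assumes the OpenAI
    # Python client is installed and OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()

    def generate_pitch(prompt):
        """Return the model's response to a single user prompt."""
        response = client.chat.completions.create(
            model="gpt-4",  # hypothetical choice; any chat model would do
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # First attempt: vague prompt, typically yields a generic pitch.
    vague = "Write a pitch for a ride-sharing service."

    # Refined attempt: specifies audience, length, and constraints that address
    # the issues students identified when critiquing the first output.
    refined = (
        "Write a 150-word pitch for a ride-sharing service aimed at university "
        "students. Emphasise affordability and safety, avoid unverifiable "
        "claims, and end with a concrete call to action."
    )

    for label, prompt in [("vague", vague), ("refined", refined)]:
        print(f"--- {label} prompt ---")
        print(generate_pitch(prompt))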

 

After the tutorial, many students reported newfound confidence and competency in utilising ChatGPT (n=351):

Table 1
Students’ perception of ChatGPT competency before and after tutorial

Statement: "I considered myself very competent in using ChatGPT."

Rating                  Before Tutorial (Average 2.76)    After Tutorial (Average 3.71)
5 – Strongly Agree      5.41%                             15.67%
4                       22.79%                            47.29%
3                       27.07%                            29.91%
2                       31.91%                            6.84%
1 – Strongly Disagree   12.82%                            0.28%

 

Table 2
Students’ perception of the tutorial’s effectiveness (n = 351)

Statements rated on a five-point scale (5 = Strongly Agree, 1 = Strongly Disagree):
(A) The tutorial taught me how to effectively collaborate and work with an AI for work. (Average 4.19)
(B) The tutorial taught me how to effectively critique and evaluate AI generated output so that I don’t take the answers for granted. (Average 4.34)
(C) The tutorial taught me how to design better prompts to get better results. (Average 4.38)
(D) I believe the skills taught in Tutorial 4 are useful for me when I go out to work. (Average 4.28)

Rating                  (A)       (B)       (C)       (D)
5 – Strongly Agree      30.77%    40.46%    43.87%    38.46%
4                       58.69%    53.56%    50.43%    52.71%
3                       9.69%     5.70%     5.41%     7.41%
2                       0.85%     0.28%     0.28%     1.42%
1 – Strongly Disagree   0%        0%        0%        0%
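As a quick consistency check, the averages reported in Tables 1 and 2 are simply the five-point ratings weighted by the response percentages. The short sketch below reproduces the Table 1 averages from the figures above; the weighted_mean helper is illustrative, not part of the original analysis.

    # Weighted-mean check of the averages reported in Table 1.
    # Keys are Likert ratings; values are percentages of respondents (n = 351).
    before = {5: 5.41, 4: 22.79, 3: 27.07, 2: 31.91, 1: 12.82}
    after = {5: 15.67, 4: 47.29, 3: 29.91, 2: 6.84, 1: 0.28}

    def weighted_mean(dist):
        """Mean rating, weighting each rating by its share of responses."""
        total = sum(dist.values())  # ~100; small rounding drift is expected
        return sum(rating * pct for rating, pct in dist.items()) / total

    print(round(weighted_mean(before), 2))  # 2.76
    print(round(weighted_mean(after), 2))   # 3.71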

 

Overall, students had positive experiences learning this new approach to AI. They felt empowered and even optimistic about their future: knowing how to use AI in an augmentative way opens doors to opportunities that previously seemed out of reach. In one case, a social science major shared how he felt so empowered by the tutorial that he took on a coding internship despite being new to coding. He used ChatGPT to learn how to code, which enabled him to handle coding projects at work. This augmentative approach allowed him not only to produce solutions but also to evaluate them much faster than if he had worked on his own.

 

I firmly believe that teaching students how to augment their learning with GenAI tools holds immense potential in empowering our students for the future.

 

REFERENCES

Szathmary, E., et al. (2018). Artificial or augmented intelligence? The ethical and societal implications. In J. W. Vasbinder, B. Gulyas, & J. W. H. Sim (Eds.), Grand Challenges for Science in the 21st Century. World Scientific.

Plato. (1952). Phaedrus (R. Hackforth, Trans.). Cambridge University Press.

 

Doing But Not Creating: A Theoretical Study of the Implications of ChatGPT on Paradigmatic Learning Processes

Koki MANDAI1, Mark Jun Hao TAN1, Suman PADHI1, and Kuin Tian PANG1,2,3 

1*Yale-NUS College
2Bioprocessing Technology Institute, Agency for Science, Technology, and Research (A*STAR), Singapore
3School of Chemistry, Chemical Engineering, and Biotechnology, Nanyang Technological University (NTU), Singapore

*m.koki@u.yale-nus.edu.sg

 

Mandai, K., Tan, M. J. H., Padhi, S., & Pang, K. T. (2023). Doing but not creating: A theoretical study of the implications of ChatGPT on paradigmatic learning processes [Paper presentation]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/doing-but-not-creating-a-theoretical-study-of-the-implications-of-chatgpt-on-paradigmatic-learning-processes/

SUB-THEME

AI and Education

 

KEYWORDS

AI, artificial intelligence, education, ChatGPT, learning, technology

 

CATEGORY

Paper Presentation 

 

CHATGPT AND LEARNING FRAMEWORKS

Introduction

Since the recent release of ChatGPT, developed by OpenAI, multiple sectors have been affected by it, and educational institutions have been impacted more deeply than most other fields (Dwivedi et al., 2023; Eke, 2023; Rudolph et al., 2023). Following the sub-theme of “AI and Education”, we conduct a systematic investigation into the educational uses of ChatGPT and its quality as a tool for learning, teaching, and assessing, mainly in higher education. The research is carried out through comprehensive literature reviews of the current and future educational landscape and of ChatGPT’s methodology and function, with major educational theories serving as the main component in constructing the evaluative criteria. Findings will be presented via a paper presentation.

 

Theoretical Foundations and Knowledge Gaps

Current literature on the intersections of education and artificial intelligence (AI) consists of variegated and isolated critiques of how AI impacts segments of the educational process. For instance, there is a large focus on the general benefits or harms in education (Baidoo-Anu & Ansah, 2023; Dwivedi et al., 2023; Mhlanga, 2023), rather than discussion of specific levels of learning that students and teachers encounter. Furthermore, there seems to be a lack of analysis on the fundamental change and reconsideration of the meaning of education that may occur due to the introduction of AI. The situation can be described as a Manichean dichotomy, as one side argues for the expected enhancements and improved efficiency in education (Ray, 2023; Rudolph et al., 2023), while the other side argues for the risks of losing knowledge/creativity and the basis of future development (Chomsky, 2023; Dwivedi et al., 2023; Krügel et al., 2022, 2023).

 

By referring to John Dewey’s reflective thought and action model for the micro-scale analysis (Dewey, 1986; Gutek, 2005; Miettinen, 2000) and a revision of Bloom’s taxonomy for the macro-scale analysis (Elsayed, 2023; Forehand, 2005; Kegan, 1977; Seddon, 1978), we consider the potential impact of ChatGPT over progressive levels of learning and the associated activities therein. These models were chosen mainly for their hierarchical structure, which makes them easier to apply in evaluation than other models; this does not imply that they are superior to others. The evaluative criteria we aim to construct are intended to be comprehensive, so our research provides a possible base for future improvements. Moreover, drawing on the diverse backgrounds of our research team, we also incorporate insights that are not limited to educational theory, such as from the fields of policy and philosophy.

 

Purpose and Significance of the Present Study

This study sought to answer questions regarding the viability of ChatGPT as an educational tool, its proposed benefits and harms, and potential obstacles educators may face in its uptake, as well as relevant safeguards against those obstacles.

 

Furthermore, we suggest a possible base for a new theoretical framework in which ChatGPT is explicitly integrated with standard educational hierarchies, in order to provide better instruction to educators and students. This study also aims to establish a safe baseline for policy considerations on ChatGPT as an educational tool, whether it ultimately ameliorates or deteriorates learning. With such a baseline, ChatGPT could be adopted in educational institutions with accompanying developmental policies that can be considered and amended in governmental legislatures for wider educational use.

 

Potential Findings/Implications

The expectations from the existing literature suggest that, in keeping with intuitions regarding higher-level learning, ChatGPT itself appears limited in what it can do: it is only able to process lower- to mid-level learning comprising repetitive actions like remembering, understanding, applying, and analysing (Dwivedi, 2023; Elsayed, 2023). Some literature also positions ChatGPT as less directly useful in higher-level processes such as evaluation and the creation of new knowledge, and suggests it may even hinder them (Crawford, 2023; Rudolph, 2023). Even within the lower-level processes, there is considerable concern that overreliance will dull learners’ abilities (Halaweh, 2023; Ray, 2023). Yet under the lens of the educational theories applied in this paper, there seems to be a possibility that ChatGPT may be able to assist higher-order skills such as creativity and related knowledge acquisition. As the net benefit of ChatGPT on education may depend on external factors that we have yet to take into account, such as the field of education, the personality of the user, and the environment, further research is required to determine its optimal usage in education. Still, this attempt may be one of the first steps towards constructing evaluative criteria for the new era of education with AI.

 

REFERENCES

Baidoo-Anu, D. & Ansah, L. O. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. SSRN. https://ssrn.com/abstract=4337484

Chomsky, N., et al. (2023, March 8). Noam Chomsky: The false promise of ChatGPT. The New York Times. www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html

Crawford, J., Cowling, M., & Allen, K. (2023). Leadership is needed for ethical ChatGPT: Character, assessment, and learning using artificial intelligence (AI). Journal of University Teaching & Learning Practice, 20(3). https://doi.org/10.53761/1.20.3.02

Dewey, J. (1986). Experience and education. The Educational Forum, 50(3), 241-52. https://doi.org/10.1080/00131728609335764

Dwivedi, Y. K. et al. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 1-63. https://doi.org/10.1016/j.ijinfomgt.2023.102642

Eke, D. O. (2023). ChatGPT and the rise of generative AI: Threat to academic integrity? Journal of Responsible Technology, 13, 1-4, https://doi.org/10.1016/j.jrt.2023.100060

Elsayed, S. (2023). Towards mitigating ChatGPT’s negative impact on education: Optimizing question design through Bloom’s taxonomy. https://doi.org/10.48550/arXiv.2304.08176

Forehand, M. (2005). Bloom’s taxonomy: Original and revised. In M. Orey (Ed.), Emerging perspectives on learning, teaching, and technology. http://projects.coe.uga.edu/epltt/

Gutek, G. L. (2005). Jacques Maritain and John Dewey on education: A reconsideration. Educational Horizons, 83(4), 247–63. http://www.jstor.org/stable/42925953

Halaweh, M. (2023). ChatGPT in education: Strategies for responsible implementation. Contemporary Educational Technology, 15(2), ep421. https://doi.org/10.30935/cedtech/13036

Kegan, D. L. (1977). Using Bloom’s cognitive taxonomy for curriculum planning and evaluation in nontraditional educational settings. The Journal of Higher Education, 48(1), 63–77. https://doi.org/10.2307/1979174

Krügel, S., Ostermaier, A., & Uhl, M. (2022). Zombies in the loop? Humans trust untrustworthy AI-advisors for ethical decisions. Philosophy & Technology, 35, 17. https://doi.org/10.1007/s13347-022-00511-9

Krügel, S., Ostermaier, A., & Uhl, M. (2023). ChatGPT’s inconsistent moral advice influences users’ judgment. Scientific Reports, 13, 4569. https://doi.org/10.1038/s41598-023-31341-0

Mhlanga, D. (2023). Open AI in education, the responsible and ethical use of ChatGPT towards lifelong learning. SSRN. https://ssrn.com/abstract=4354422

Miettinen, R. (2000). The concept of experiential learning and John Dewey’s theory of reflective thought and action, International Journal of Lifelong Education, 19(1), 54-72. https://doi.org/10.1080/026013700293458

Ray, P. P. (2023). ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems, 3, 121-154, https://doi.org/10.1016/j.iotcps.2023.04.003

Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning & Teaching, 6(1), 1-22. https://journals.sfu.ca/jalt/index.php/jalt/article/view/689

Seddon, G. M. (1978). The properties of Bloom’s Taxonomy of educational objectives for the cognitive domain. Review of Educational Research, 48(2), 303–23. https://doi.org/10.2307/1170087

 

Does AI-generated Writing Differ from Human Writing in Style? A Literature Survey

Feng CAO
Centre for English Language and Communication (CELC)

elccf@nus.edu.sg

 

Cao, F. (2023). Does AI-generated writing differ from human writing in style? A literature survey [Lightning talk]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/does-ai-generated-writing-differ-from-human-writing-in-style-a-literature-survey/

 

SUB-THEME

AI and Education

 

KEYWORDS

AI-generated writing, human writing, ChatGPT, style, linguistic features

 

CATEGORY

Lightning Talks

 

ABSTRACT

Artificial intelligence (AI) has witnessed significant advancements recently, leading to the emergence of AI-generated writing. This new form of writing has sparked interest and debate, raising questions about how it differs from traditional human writing. One popular AI tool which has been attracting much attention since 2022 is ChatGPT, which has been used to create texts in many domains. In this preliminary survey of literature, I aim to review studies which compare the writing generated by ChatGPT with human writing to explore the rhetorical and linguistic differences in style.

 

This literature survey focuses on the most widely used databases: Google Scholar, Scopus, and Web of Science. An initial search in these databases using key terms such as “AI-generated writing”, “human writing”, and “ChatGPT” returned over 400 items relevant to the topic. I skimmed through the titles and abstracts, and sometimes the full texts to assess their relevance to the research question. Irrelevant items and duplicates were excluded, and only the most pertinent sources were further analysed.

 

The preliminary analysis showed that AI-generated writing differed from human writing across a number of genres and disciplines, for example, medical abstracts and case reports, business correspondence, restaurant reviews, and academic essays. Regarding content creation, the literature shows that AI is capable of generating highly readable medical abstracts and case reports which are almost indistinguishable from human writing. However, a few key limitations, such as inaccuracies in content and fictitious citations, were also reported by expert reviewers.

 

In terms of tone and voice, the analysis reveals that human writing differs from AI-generated writing by evoking emotions and resonating with readers on a personal level. Human writers bring their life experiences, cultural background, and empathy into their work, enabling them to convey complex emotions, capture nuances, and engage readers’ emotions. AI-generated writing, however, typically lacks the emotional depth and intuition present in human writing.

 

In terms of linguistic features, the literature indicates that AI-generated writing tends to employ longer sentences than human writing, while the latter is likely to employ more diverse vocabulary and expressions. In addition, AI-generated writing adopts a more formal register, whereas human writing is more likely to use an informal register, for instance through the frequent use of personal pronouns.
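Such stylometric contrasts are typically operationalised with simple measures like mean sentence length and type-token ratio. The sketch below illustrates one way such a comparison could be computed; it is a minimal illustration under those assumptions, not the method used in any of the surveyed studies, and the sample texts are invented.

    # Rough stylometric comparison: mean sentence length and type-token ratio
    # (a crude measure of vocabulary diversity). The sample texts are invented.
    import re

    def mean_sentence_length(text):
        """Average number of words per sentence."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        return sum(len(s.split()) for s in sentences) / len(sentences)

    def type_token_ratio(text):
        """Distinct words divided by total words (higher = more varied)."""
        words = re.findall(r"[a-z']+", text.lower())
        return len(set(words)) / len(words)

    human_text = "I loved this place. The noodles? Absolutely delightful."
    ai_text = ("The restaurant offers a wide selection of dishes that are "
               "prepared with fresh ingredients and served in a pleasant "
               "atmosphere for every guest.")

    for label, text in [("human", human_text), ("AI", ai_text)]:
        print(label, round(mean_sentence_length(text), 1),
              round(type_token_ratio(text), 2))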

 

In short, this survey of the literature provides an initial overview of some key differences between AI-generated writing and human writing. While AI models like ChatGPT have made remarkable advances in mimicking human writing, they still lack the distinct characteristics that make human writing unique and emotionally resonant. Understanding these differences is vital for harnessing the potential of AI-generated writing while mitigating potential risks and challenges. In the field of language education, a better understanding of the differences between AI- and human writing may help teachers and novice writers to better utilise AI tools for developing academic writing skills and publishing. At the same time, by addressing ethical concerns and nurturing human creativity alongside AI capabilities, teachers and learners can navigate the evolving landscape of AI-generated writing, and leverage it to enhance human expression and communication in a responsible and inclusive manner.

 

Harnessing the Power of ChatGPT for Assessment Question Generation: Five Tips for Medical Educators

Inthrani Raja INDRAN*, Priya PARANTHAMAN, and Nurulhuda MUSTAFA

Department of Pharmacology,
Yong Loo Lin School of Medicine (YLLSoM)

*phciri@nus.edu.sg

 

Indran, I. R., Paranthaman, P., & Mustafa, N. (2023). Harnessing the power of ChatGPT for assessment question generation: Five tips for medical educators [Lightning talk]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/harnessing-the-power-of-chatgpt-for-assessment-question-generation-five-tips-for-medical-educators/ 

SUB-THEME

AI and Education 

 

KEYWORDS

AI, ChatGPT, questions, medical assessment

 

CATEGORY

Lightning Talks 

 

INTRODUCTION

Developing diverse and high-quality assessment questions for the medical curriculum is a complex and time-intensive task, as they often require the incorporation of clinically relevant scenarios which are aligned to the learning outcomes (Al-Rukban, 2006; Palmer & Devitt, 2007). The emergence of artificial intelligence (AI)-driven large language models (LLMs) has presented an unprecedented opportunity to explore how AI can be harnessed to optimise and automate these complex tasks for educators (OpenAI, 2023). It also provides an opportunity for students to use the LLMs to help create practice questions and further their understanding of the concepts they wish to test.

 

AIMS & METHODS

This study aims to establish a dependable set of practical pointers that enable educators to tap on the ability of LLMs such as ChatGPT to, first, enhance question generation in healthcare professions education, using multiple choice questions (MCQs) as an illustrative example, and second, generate diverse clinical scenarios for teaching and learning purposes. Lastly, we hope that our experiences will encourage more educators to explore and access AI tools such as ChatGPT with greater ease, especially those with limited prior experience.

 

To generate diverse, high-quality clinical scenario MCQs, we outlined core medical concepts and identified essential keywords for integration into the instruction stem. The text inputs were iteratively refined and fine-tuned until we developed instruction prompts that could help us generate questions of a desirable quality. Following question generation, respective domain experts reviewed them for content accuracy and overall relevance, identifying any potential flags in the question stem. This process of soliciting feedback and implementing refinements enabled us to continuously enhance the prompts and the quality of the questions generated. By prioritising expert review, we established a necessary validation process for the MCQs prior to their formal implementation.

 

THE FIVE TIPS

We consolidated the following tips to effectively harness the power of ChatGPT for assessment question generation.

 

Tip 1: Define the Objective and Select the Appropriate Model

Determine the purpose of question generation and choose the appropriate AI model based on your needs and access. Choose ChatGPT 4.0 over 3.5 for greater accuracy and concept integration, noting that ChatGPT 4.0 requires a subscription. Activate the beta features in “Settings”, use the “Browse with Bing” mode to retrieve information beyond the model’s training cut-off, and install plugins for improved AI performance.

 

Tip 2: Optimise Prompt Design

When refining the stem design for question generation, there are several important considerations. Firstly, be specific in your instructions by emphasising key concepts, question types, quantity, and the answer format. Clearly state any guidelines or rules you want the model to follow. Focus on core concepts and keywords relevant to the discipline to build the instruction stem. Experiment with vocabulary to optimise question quality.

 

Tip 3: Build Diverse Authentic Scenarios

Develop a range of relevant clinical vignettes to broaden the scope of scenarios that can be used to assess students.

 

Tip 4: Calibrate Assessment Difficulty

Incorporate the principles of Bloom’s Taxonomy when developing assessment questions to test different cognitive skills, ranging from basic knowledge recall to complex analysis, enhancing question diversity.
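Putting Tips 2 to 4 together, an instruction stem can combine explicit rules and an answer format, a clinical vignette requirement, and a target Bloom’s level. The sketch below is illustrative only; the topic, keywords, and wording are hypothetical examples rather than the validated prompts developed in this study.

    # Illustrative instruction stem combining Tips 2-4: explicit rules and an
    # answer format (Tip 2), a clinical vignette requirement (Tip 3), and a
    # target Bloom's taxonomy level (Tip 4). Topic and keywords are examples only.
    topic = "beta-blockers in the management of hypertension"
    concepts = ", ".join(["mechanism of action", "contraindications", "adverse effects"])

    prompt = f"""You are assisting a pharmacology educator.
    Write 3 single-best-answer multiple choice questions on {topic}.
    Each question must:
    - be framed as a short clinical vignette,
    - target the 'Apply' level of Bloom's taxonomy,
    - cover at least one of these concepts: {concepts},
    - have exactly 5 options (A-E) with only one correct answer,
    - end with the correct answer and a one-sentence explanation.
    Do not reuse the same clinical scenario across questions."""

    print(prompt)  # paste into ChatGPT, then have a domain expert review the output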

 

Tip 5: Work Around Limitations

Be mindful that ChatGPT is trained on limited data and can generate factually inaccurate information. Despite diverse training, ChatGPT does not possess the nuanced understanding of a medical expert, which can impact the quality of the questions it generates. Human validation is necessary to address any factual inaccuracies that may arise. AI data collection risks misuse, privacy breaches, and bias amplification, leading to misguided outcomes.

 

CONCLUSION

AI-assisted question generation is an iterative process, and these tips can provide any healthcare professions educator with valuable guidance in automating the generation of good-quality assessment questions. Furthermore, students can leverage this technology for self-directed learning, creating and verifying their practice questions and strengthening their understanding of medical concepts (Touissi et al., 2022). While this paper primarily demonstrates the use of ChatGPT in generating MCQs, we believe that the approach can be extended to various other question types. It is also important to remember that though AI augments, it does not replace human expertise (Ali et al., 2023; Rahsepar et al., 2023). Domain experts are needed to ensure quality, accuracy, and relevance.

 

REFERENCES 

OpenAI. (2023).

Al-Rukban, M. O. (2006). Guidelines for the construction of multiple choice questions tests. J Family Community Med, 13(3), 125-33. https://www.ncbi.nlm.nih.gov/pubmed/23012132

Ali, R., Tang, O. Y., Connolly, I. D., Fridley, J. S., Shin, J. H., Zadnik Sullivan, P. L., Cielo, D., Oyelese, A. A., Doberstein, C. E., Telfeian, A. E., Gokaslan, Z. L., & Asaad, W. F. (2023). Performance of ChatGPT, GPT-4, and Google Bard on a Neurosurgery Oral Boards Preparation Question Bank. Neurosurgery. https://doi.org/10.1227/neu.0000000000002551

Palmer, E. J., & Devitt, P. G. (2007). Assessment of higher order cognitive skills in undergraduate education: modified essay or multiple choice questions? Research paper. BMC Med Educ, 7, 49. https://doi.org/10.1186/1472-6920-7-49

Rahsepar, A. A., Tavakoli, N., Kim, G. H. J., Hassani, C., Abtin, F., & Bedayat, A. (2023). How AI responds to common lung cancer questions: ChatGPT vs Google Bard. Radiology, 307(5), e230922. https://doi.org/10.1148/radiol.230922

Touissi, Y., Hjiej, G., Hajjioui, A., Ibrahimi, A., & Fourtassi, M. (2022). Does developing multiple-choice questions improve medical students’ learning? A systematic review. Med Educ Online, 27(1), 2005505. https://doi.org/10.1080/10872981.2021.2005505

 

Investigating Students’ Perception and Use of ChatGPT as a Learning Tool to Develop English Writing Skills: A Survey Analysis

Jonathan PHAN* and Jessie TENG
Centre for English Language Communication (CELC)

*jonathanphan@nus.edu.sg

 

Phan, J., & Teng, J. (2023). Investigating students’ perception and use of ChatGPT as a learning tool to develop English writing skills: A survey analysis [Paper presentation]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/investigating-students-perception-and-use-of-chatgpt-as-a-learning-tool-to-develop-english-writing-skills-a-survey-analysis/

SUB-THEME

AI and Education 

 

KEYWORDS

AI-assisted education, ChatGPT, English language communication, higher education, writing

 

CATEGORY

Paper Presentation 

 

ABSTRACT

ChatGPT, an artificial intelligence (AI) chatbot and Large Language Model (LLM) developed by OpenAI, has garnered significant attention worldwide since its release for public use in November 2022. In the field of higher education, there is considerable enthusiasm regarding the potential use of ChatGPT for innovating AI-assisted education. Advocates propose utilising this AI tool to enhance students’ learning experiences and reduce teacher workload (Baker et al., 2019; Zhai, 2022). However, some educational institutions view its use as potentially detrimental to the teaching and learning process due to its disruptive nature. Concerns include the possibility of “amplify[ing] laziness and counteracting learners’ interest to conduct their own investigations and come to their own conclusions or solutions” (Kasneci et al., 2023, p. 7), and “increased instances of plagiarism” (Looi & Wong, 2023). Consequently, some higher educational institutions in various countries have banned or restricted the use of AI tools due to students’ use of ChatGPT to plagiarise (Cassidy, 2023; CGTN, 2023; Reuters, 2023; Sankaran, 2023). As a response, some educators propose creating AI-resistant assessments to combat student plagiarism, while others suggest providing resources and proper guidance for students to use ChatGPT judiciously and responsibly (Rudolph et al., 2023).

 

As universities work to develop policies to address the use of AI tools, particularly ChatGPT, by both teachers and students within the academic context, they need to consider both the teachers’ and the students’ perspectives on the matter. However, given the novelty of this research topic, studies on the use of ChatGPT are not only scarce, but they have primarily focused on the pedagogical implications of AI tools from the teacher’s perspective. To address the lack of studies on students’ perspective, this study seeks to examine the perceptions and use of ChatGPT as a learning tool by higher education students.

To examine students’ perceptions of using ChatGPT as a learning tool to develop English academic writing skills, a survey questionnaire was administered to students enrolled in an undergraduate English language communication course at a local university. The questionnaire consisted of 34 five-point Likert scale questions and two open-ended questions on participants’ views on ChatGPT and their use of ChatGPT in their learning. One expected finding is that students are aware of how ChatGPT can be used, while an interesting finding is that students are also aware that ChatGPT gives misleading answers. In addition, a number of students disagreed that using ChatGPT was an efficient way of doing their assignments. Nevertheless, many use it for paraphrasing, generating ideas, and improving their general knowledge. As such, some students do feel helped by ChatGPT as a learning tool, although not every participant thinks it should be allowed in higher education.

 

It is hoped that the findings of this study can serve as a point of reference for educators in developing course materials and assessments so as to promote the effective use of ChatGPT in higher education.

 

 

REFERENCES

Baker, T., Smith, L., & Anissa, N. (2019). Educ-AI-tion rebooted? Exploring the future of artificial intelligence in schools and colleges. Nesta Foundation. https://www.nesta.org.uk/report/education-rebooted/

Cassidy, C. (2023, January 10). Australian universities to return to ‘pen and paper’ exams after students caught using AI to write essays. The Guardian. https://www.theguardian.com/australia-news/2023/jan/10/universities-to-return-to-pen-and-paper-exams-after-students-caught-using-ai-to-write-essays

CGTN. (2023, February 19). University of Hong Kong issues interim ban on ChatGPT, AI-based tools. CGTN. https://news.cgtn.com/news/2023-02-19/University-of-Hong-Kong-issues-interim-ban-on-ChatGPT-AI-based-tools-1hxWzqgcMxy/index.html

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., …Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274

Looi, C. K., & Wong, L. H. (2023, February 7). Commentary: ChatGPT can disrupt education, but it need not be all bad. Here’s how NIE is using it to train teachers. TODAY. https://www.todayonline.com/commentary/commentary-chatgpt-can-disrupt-education-it-need-not-be-all-bad-heres-how-nie-using-it-train-teachers-2102386

Reuters. (2023, January 28). Top French university bans use of ChatGPT to prevent plagiarism. Reuters. https://www.reuters.com/technology/top-french-university-bans-use-chatgpt-prevent-plagiarism-2023-01-27/

Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1). https://doi.org/10.37074/jalt.2023.6.1.9

Sankaran, V. (2023, April 10). Japanese universities become latest to restrict use of ChatGPT. The Independent. https://www.independent.co.uk/tech/japanese-universities-chatgpt-use-restrict-b2317060.html

Zhai, X. (2023). ChatGPT user experience: Implications for education. SSRN. https://dx.doi.org/10.2139/ssrn.4312418

 

Fostering AI Literacy: Human-agency-oriented Approach to AI Usage in Higher Education

Jodie LUU and Jungyoung KIM
Centre for English Language Communication (CELC)
jodieluu@nus.edu.sg

 

Luu, T. H. L., & Kim, J. Y. (2023). Fostering AI literacy: Human-agency-oriented approach to AI usage in higher education [Lightning talk]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/fostering-ai-literacy-human-agency-oriented-approach-to-ai-usage-in-higher-education/ 

 

SUB-THEME

AI and Education

 

KEYWORDS

ChatGPT, AI literacy, critical thinking, human agency, human-AI interaction

 

CATEGORY

Lightning Talks

 

ABSTRACT

From providing learning analytics essential to personalised education to conducting automated assessments and grading, technology powered by artificial intelligence (AI) has been gradually transforming the education sector. However, it is the pivotal open access to ChatGPT, a powerful AI chatbot built with OpenAI’s large language models (LLMs) such as GPT-4 and its predecessors (Marr, 2023), that has given rise to the question of how to harness the potential of AI while maintaining the integrity and ethos of education.

 

In response to ChatGPT and its equivalents’ capability of producing comprehensive content based on well-crafted prompts, higher education institutions worldwide have started to devise policies for AI-generated content. In NUS, a timely interim policy for the use of AI in teaching and learning was first circulated in February 2023. The policy’s focus on mandating self-declaration seems to suggest that the moral compass of an AI user plays a key role. Considering the fast-paced advancement and integration of AI in various sectors, it could be argued that learners need both a moral compass and AI literacy to navigate and harness the potential of AI tools.

 

The emerging literature on AI in education has highlighted the need to develop AI literacy across all age groups and professions (Taguma et al., 2021; Ng et al., 2022; Cardon et al., 2023; Long et al., 2023; Su & Yang, 2023). As proposed by Kong et al. (2021), “AI literacy includes three components: AI concepts, using AI concepts for evaluation, and using AI concepts for understanding the real world through problem solving” (p. 2). In the context of human-AI interaction, AI is said to manifest machine agency, which could be understood as the algorithms’ ability to process a large amount of data, learn from the analysis, adapt, and evolve to support decision-making and problem solving (Kaplan & Haenlein, 2019; Kang & Lou, 2022). Informed by Williams et al.’s (2021) conception of agency that acknowledges the consideration of context, consequences, or implications of human actions (in addition to rationality and autonomy), human agency, on the other hand, could be seen as the ability to make intentional, reasoned, contextualised and ethical decisions when it comes to AI-powered activities, be it for school, work, or leisure.

 

Following these discussions, our Lightning Talk will discuss how we can reframe AI usage in higher education while fostering AI literacy based on the notion of human agency within the context of human-AI interaction. In doing so, we will draw on results from an anonymous poll on the use of ChatGPT conducted in Semester 2 AY2022/23 (with students enrolled in the course ES2660 “Communicating in the Information Age”) and two case studies of how the teaching team handled written works flagged positive by GPTZero, an AI detection tool. Ultimately, we would like to suggest that cultivating AI literacy among students, along with an awareness of the role of human agency in a technology-driven world, is imperative. At the practical level, AI literacy development needs to move beyond mandating self-declaration to include engaging learners in dialogue and integrating AI tools such as ChatGPT into learning activities where human-AI interaction can be experienced and human agency negotiated.

 

REFERENCES

Cardon, P. W., Fleischmann, C., Aritz, J., Logemann, M., & Heidewald, J. (2023). The challenges and opportunities of AI-assisted writing: Developing AI literacy for the AI age. Business and Professional Communication Quarterly, 232949062311765. https://doi.org/10.1177/23294906231176517

Kang, H., & Lou, C. (2022). AI agency vs. human agency: understanding human–AI interactions on TikTok and their implications for user engagement. Journal of Computer-Mediated Communication, 27(5). https://doi.org/10.1093/jcmc/zmac014

Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15–25. https://doi.org/10.1016/j.bushor.2018.08.004

Kong, S. C., Cheung, W. W. L., & Zhang, G. (2021). Evaluation of an artificial intelligence literacy course for university students with diverse study backgrounds. Computers & Education: Artificial Intelligence, 2, 100026. https://doi.org/10.1016/j.caeai.2021.100026

Long, D., Roberts, J., Magerko, B., Holstein, K., DiPaola, D., & Martin, F. (2023). AI Literacy: Finding Common Threads between Education, Design, Policy, and Explainability. https://doi.org/10.1145/3544549.3573808

Marr, B. (2023, May 19). A short history of ChatGPT: How we got to where we are today. Forbes. https://www.forbes.com/sites/bernardmarr/2023/05/19/a-short-history-of-chatgpt-how-we-got-to-where-we-are-today/?sh=5f1e3e13674f

Ng, D. T. K., Lee, M. G., Tan, R. J. Y., Hu, X., Downie, J. S., & Chu, S. K. W. (2022). A review of AI teaching and learning from 2000 to 2020. Education and Information Technologies. https://doi.org/10.1007/s10639-022-11491-w

Su, J., & Yang, W. (2023). Artificial Intelligence (AI) literacy in early childhood education: An intervention study in Hong Kong. Interactive Learning Environments, 1–15. https://doi.org/10.1080/10494820.2023.2217864

Taguma, M., Feron, E., & Lim, M. H. (2021, July 5). Education and AI: Preparing for the future & AI, attitudes and values. In Future of Education and Skills 2030: Conceptual Learning Framework. Organisation for Economic Co-operation and Development. https://www.oecd.org/education/2030/E2030%20Position%20Paper%20(05.04.2018).pdf

Williams, R. A., Gantt, E. E., & Fischer, L. (2021). Agency: What does it mean to be a human being? Frontiers in Psychology, 12. https://doi.org/10.3389/fpsyg.2021.693077

 

Can ChatGPT be a Teaching Tool to Promote Learning and Scientific Inquiry Skills?

Amanda Huee-Ping WONG*, Swapna Haresh TECKWANI, and Ivan Cherh Chiet LOW*
Department of Physiology, Yong Loo Lin School of Medicine (YLLSoM), NUS

*phsilcc@nus.edu.sg, phswhpa@nus.edu.sg

 

Wong, A. H. P., Teckwani, S. H., & Low, I. C. C. (2023). Can ChatGPT be a teaching tool to promote learning and scientific inquiry skills? [Paper presentation]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/can-chatgpt-be-a-teaching-tool-to-promote-learning-and-scientific-inquiry-skills/

SUB-THEME

AI and Education 

 

KEYWORDS

ChatGPT, large language model, scientific inquiry, teaching tool, student learning

 

CATEGORY

Paper Presentation 

 

ABSTRACT

Introduction: Advancements in technology, especially in the artificial intelligence (AI) sphere, have brought about a noticeable paradigm shift in the educational landscape of the 21st century. Since its public release in November 2022, ChatGPT (Generative Pre-trained Transformer) garnered more than one million subscribers within a week (Baidoo-Anu & Ansah, 2023). The introduction of large language model (LLM) tools, such as ChatGPT, into the education field has resulted in the use of information and communication technologies as a tool for improving teaching and learning (Opara, 2023). Educators have the opportunity to incorporate ChatGPT as part of a diversified teaching tool to achieve a more interesting and innovative teaching and learning experience (Yu, 2023). Along similar lines, we incorporated ChatGPT as a learning tool in the tutorial of a scientific inquiry course in an attempt to promote student learning and scientific inquiry skills. In this study, we compared the effectiveness of a ChatGPT-based tutorial with conventional tutorials in promoting the achievement of learning outcomes (LO) and scientific inquiry.

 

Methods: In the tutorial sessions of HSI2002 “Inquiry into Current Sporting Beliefs and Practices”, students were tasked with providing evidence-based evaluation and critiques of selected sporting issues and practices. In one of the three tutorials, ChatGPT was incorporated as a learning tool, whereby students performed their inquiry on ChatGPT’s responses to specific prompts related to the course content. In the other tutorials, students were required to provide their critique based on pre-reading materials in the form of journal articles. Students were required to submit an assignment report after each tutorial, which was used for analysis in this study. Specifically, student assignments were analysed using two sets of rubrics designed to assess (1) the achievement of LOs at different levels of Bloom’s taxonomy, and (2) scientific inquiry skills (Seeratan et al., 2020). One-way ANOVA was used to determine the statistical significance of score differences among the three tutorials.
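For illustration, the statistical comparison described above can be reproduced with a standard one-way ANOVA call. The sketch below assumes the total rubric scores for each tutorial are held in simple lists; the scores shown are placeholders, not the study’s data.

    # One-way ANOVA comparing total rubric scores across the three tutorials.
    # The scores below are placeholders, not the study's data.
    from scipy import stats

    chatgpt_tutorial = [12, 11, 13, 10, 12, 11, 13, 12, 10, 11]
    conventional_tutorial_1 = [11, 12, 10, 12, 11, 13, 11, 10, 12, 11]
    conventional_tutorial_2 = [12, 10, 11, 12, 13, 11, 10, 12, 11, 12]

    f_stat, p_value = stats.f_oneway(
        chatgpt_tutorial, conventional_tutorial_1, conventional_tutorial_2
    )
    # With these placeholder scores, p > 0.05, i.e. no significant difference.
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")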

 

Results: Assignments from 10 out of 40 students have been scored to date. Preliminary analysis revealed that the overall scores for each tutorial (with and without ChatGPT) were comparable (p = 0.245, Figure 1).

Figure 1. Total scores (mean ± SD; maximum score of 16) of student assignments (n = 10) from the different tutorials.

Mean scores for the student responses according to each rubric factor, namely the three desired learning outcomes according to Bloom’s taxonomy level (Understand, Analyse, and Evaluate) and scientific inquiry, were comparable across the different tutorials (Table 1). Interestingly, we observed a trend that scientific inquiry skills were enhanced in the ChatGPT-based tutorial (p = 0.083). However, further analysis of the remaining 30 students needs to be conducted to substantiate this observation.

Table 1. Mean scores for student responses according to each rubric factor across the different tutorials.

 

Conclusion: This study showcases another approach to meaningfully harness AI technology, specifically ChatGPT, to support student learning in a scientific inquiry course. Our preliminary data revealed that the tutorial leveraging on ChatGPT as a teaching tool was comparable to conventional case-based tutorials in promoting learning outcomes and scientific inquiry skills. Future completion of our data analysis may reveal further interesting insights, with the potential of this novel strategy surpassing traditional approaches of teaching and learning. As learners are faced with ever-evolving technologies, integrating generative AI tools in the classroom serves as a platform to teach students how to use this technology constructively and safely, thus preparing them to thrive in an AI-dominated work environment upon graduation.

 

REFERENCES

Baidoo-Anu, D., & Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Social Science Research Network. http://dx.doi.org/10.2139/ssrn.4337484

Opara, E. C. (2023). ChatGPT for teaching, learning and research: Prospects and challenges. Global Academic Journal of Humanities and Social Sciences, 5(2), 33-40. https://ssrn.com/abstract=4375470

Seeratan, K. L., McElhaney, K. W., Mislevy, J., McGhee, R., Jr, Conger, D., & Long, M. C. (2020). Measuring students’ ability to engage in scientific inquiry: A new instrument to assess data analysis, explanation, and argumentation. Educational Assessment, 25(2), 112–35. https://doi.org/10.1080/10627197.2020.1756253

Yu, H. (2023). Reflection on whether ChatGPT should be banned by academia from the perspective of education and teaching. Frontiers in Psychology, 14, 1181712. https://doi.org/10.3389/fpsyg.2023.1181712

 

ChatGPT and Teacher Education/Development

Aileen LAM Wanli
Centre for English Language Communication (CELC)
aileenlam@nus.edu.sg

 

Lam, A. W. (2023). ChatGPT and teacher education/development [Lightning talk]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/chatgpt-and-teacher-education-development/

 

SUB-THEME

AI and Education 

 

KEYWORDS

AI generative software, ChatGPT, teacher education, teacher development

 

CATEGORY

Lightning Talks 

 

ABSTRACT

ChatGPT by OpenAI and other variations of generative artificial intelligence (AI) software, such as Bard by Google, Bing AI chat by Microsoft, Ernie by Baidu, and Tong Yi Qian Wen by Alibaba, have been viewed with both optimism and suspicion. Their capabilities in understanding user requests, accessing a comprehensive data bank, and generating natural and appropriate responses using human-like language are significant (Lund & Wang, 2023). This ability to perform complex tasks has led educators to argue over benefits such as the ability to provide ‘personalised and interactive learning’ and ‘ongoing feedback’, as opposed to limitations in the accuracy of answers provided, promotion of biases, and privacy issues (Baidoo-Anu & Owusu Ansah, 2023). There is also a common worry that AI generative software would lead to more instances of cheating and plagiarism (King & ChatGPT, 2023) and, by extension, affect students’ learning when they take shortcuts. Yet, from all angles, this technology is here to stay.

Universities such as the University of Hong Kong and those in Japan have reacted by banning or restricting students’ use of ChatGPT (Universities in Japan, 2023; Yau & Chan, 2023), while others such as Yale University and Princeton have issued AI guidelines for students and faculty in response to its rising popularity (Gorelick & Mcdonald, 2023; Hartman-Sigall, 2023). The industry and even the civil service have also taken an interest in this technology, with a team from Open Government Products (OGP) integrating ChatGPT into Microsoft Word for public officers in Singapore to use for research and writing (Chia, 2023). With constant advancements and improvements, the possibilities are endless for education and industry alike. Though some tech companies like Apple, Samsung, and Amazon, as well as financial institutions like JPMorgan Chase, Citigroup, and Goldman Sachs, have banned the use of AI generative software, citing data concerns, cybersecurity risks, accountability, and legal consequences (Ray, 2023; Uche, 2023; Nelson, 2023; Cawley, 2023), others like Lazada and Bain & Company have embraced the technology and looked into ways to integrate AI generative software into their systems in a more secure manner, with efficiency as the end goal (Yordan, 2023; Bain & Company, 2023). Certain sectors have also begun to explore the role of AI generative software such as ChatGPT in areas such as global warming (Biswas, 2023a), public health (Biswas, 2023b), and healthcare research (Sallam, 2023).

Given the advantages that the industry already recognises, this lightning talk focuses on AI generative software in education but shifts the focus from students to tutors, exploring the possibilities of AI generative software in teacher education (Trust et al., 2023; Rahman & Watanobe, 2023) as well as teacher development, especially for new tutors who may need help in the formative stages of their teaching careers or those moving into new domains. Beyond exploring ChatGPT’s support for pedagogical knowledge such as teaching skills and classroom management, student assessment/evaluation, and personalised learning support, this talk will look into possible support for tutor-student communication, creative thinking/multimodal approaches, as well as subject-specific tutor development.

 

REFERENCES

Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. SSRN. https://dx.doi.org/10.2139/ssrn.4337484

Bain & Company. (2023, 21 February). Bain & Company announces services alliance with OpenAI to help enterprise clients identify and realize the full potential and maximum value of AI [Press release]. https://www.bain.com/about/media-center/press-releases/2023/bain–company-announces-services-alliance-with-openai-to-help-enterprise-clients-identify-and-realize-the-full-potential-and-maximum-value-of-ai/

Biswas, S. S. (2023a). Potential use of chat GPT in global warming. Annals of Biomedical Engineering, 51(6), 1126-27. https://doi.org/10.1007/s10439-023-03171-8

Biswas, S. S. (2023b). Role of chat GPT in public health. Annals of Biomedical Engineering, 51(5), 868-69. https://doi.org/10.1007/s10439-023-03172-7

Cawley, C. (2023, 13 June). From Apple to Samsung, these companies (and a few countries) are prohibiting the use of generative AI platforms like ChatGPT. Tech.co. https://tech.co/news/tech-companies-banning-generative-ai

Chia, O. (2023, 14 February). Civil servants to soon use ChatGPT to help with research, speech writing. The Straits Times. https://www.straitstimes.com/tech/civil-servants-to-soon-use-chatgpt-to-help-with-research-speech-writing

Gorelick, E. & Mcdonald, A. (2023, 12 February). University leaders issue AI guidance in response to growing popularity of ChatGPT. Yale Daily News. https://yaledailynews.com/blog/2023/02/12/university-leaders-issue-ai-guidance-in-response-to-growing-popularity-of-chatgpt/

Hartman-Sigall, J. (2023, 25 January). University declines to ban ChatGPT, releases faculty guidance for its usage. The Daily Princetonian. https://www.dailyprincetonian.com/article/2023/01/university-declines-ban-chatgpt-releases-faculty-guidance-for-usage

King, M. R., & ChatGPT (2023). A conversation on artificial intelligence, chatbots, and plagiarism in higher education. Cellular and Molecular Bioengineering, 16(1), 1-2. https://doi.org/10.1007/s12195-022-00754-8

Lund, B. D., & Wang, T. (2023). Chatting about ChatGPT: how may AI and GPT impact academia and libraries? Library Hi Tech News, 40(3), 26-29. https://dx.doi.org/10.2139/ssrn.4333415

Nelson, F. (2023, 16 June). Many Companies Are Banning ChatGPT. This Is why. Science Alert. https://www.sciencealert.com/many-companies-are-banning-chatgpt-this-is-why

Rahman, M. M., & Watanobe, Y. (2023). ChatGPT for education and research: Opportunities, threats, and strategies. Applied Sciences, 13(9), 5783. https://doi.org/10.3390/app13095783

Ray, S. (2023, 19 May). Apple joins a growing list of companies cracking down on use of ChatGPT by staffers—here’s why. Forbes. https://www.forbes.com/sites/siladityaray/2023/05/19/apple-joins-a-growing-list-of-companies-cracking-down-on-use-of-chatgpt-by-staffers-heres-why/?sh=2169888f28ff

Sallam, M. (2023). ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. Healthcare, 11(6), 887. https://doi.org/10.3390/healthcare11060887

Trust, T., Whalen, J., & Mouza, C. (2023). Editorial: ChatGPT: Challenges, opportunities, and implications for teacher education. Contemporary Issues in Technology and Teacher Education, 23(1), 1-23. https://citejournal.org/volume-23/issue-1-23/editorial/editorial-chatgpt-challenges-opportunities-and-implications-for-teacher-education

Uche, A. (2023, 26 June). 5 Reasons Why Companies Are Banning ChatGPT. Make Use Of. https://www.makeuseof.com/reasons-why-companies-banning-chatgpt/

Universities in Japan restrict students’ use of ChatGPT. (2023, 10 Apr). The Straits Times. https://www.straitstimes.com/asia/east-asia/universities-in-japan-restrict-students-use-of-chatgpt

Yau, C., & Chan, K. (2023, 17 February). University of Hong Kong temporarily bans students from using ChatGPT, other AI-based tools for coursework. South China Morning Post. https://www.scmp.com/news/hong-kong/education/article/3210650/university-hong-kong-temporarily-bans-students-using-chatgpt-other-ai-based-tools-coursework

Yordan, J. (2023, 25 May). Lazada launches ChatGPT-powered chatbot. TechInAsia. https://www.techinasia.com/lazada-launches-ecommerce-ai-chatbot-powered-chatgpt

 
