Unessay—Gateway to Future Higher Education (HE) Assessments in an AI World?

Chitra SABAPATHY
Centre for English Language Communication (CELC)

elccs@nus.edu.sg

 

Sabapathy, C. (2023). Unessay—Gateway to future higher education (HE) assessments in an AI world? [Paper presentation]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/unessay-gateway-to-future-higher-education-he-assessments-in-an-ai-world/

SUB-THEME

AI and Education 

 

KEYWORDS

Unessay, higher education, AI, student autonomy, multimedia, oral communication

 

CATEGORY

Paper Presentation 

 

ABSTRACT

The rapid advancement of generative artificial intelligence (AI) has led educators to discover that their assessments (e.g., Kung et al., 2022) and pedagogies are vulnerable to it. However, it is important to recognise that AI should not be viewed solely as a means of facilitating cheating, particularly since tools like ChatGPT have become integrated into students’ lives. Instead of focusing on prohibitions or strictly monitoring for academic dishonesty, it would be beneficial to explore ways to embrace and utilise these technologies in education (Dawson, 2020) and to design assessments that represent the “future realities” of respective disciplines. This presentation highlights the potential benefits of adopting the “unessay” as an alternative pedagogical approach in higher education. The unessay offers students a degree of freedom, necessitates ownership, and fuels passion (Jakopak et al., 2019), creativity, critical thinking, and interdisciplinary understanding as individuals articulate their ideas, beliefs, and identities. Students are afforded the autonomy to select their own topic within a specific subject area and to determine their preferred method of presentation, provided that it is both captivating and impactful (O’Donnell, 2012). By granting students autonomy, fostering creativity, and encouraging critical thinking beyond conventional academic norms, the unessay not only equips them with the essential skills required to navigate an AI-driven future but also offers them the freedom to explore alternative modes of expression (Nave, 2021). This approach engenders motivation and investment in their academic work. It also compels students to consider the intended audience, choose appropriate rhetorical strategies, and synthesise information effectively. Previous studies evidence this, showing how students have used the unessay in distinctive ways in history classes (Guiliano, 2022; Irwin, 2022; Neuhaus, 2022), the histology of organ cells (Wood & Stringham, 2022), computer programming (Aycock et al., 2019), writing (Jakopak et al., 2019; Sullivan, 2015), and applied cognitive psychology (Goodman, 2022). In CS2101 “Effective Communication for Computing Professionals”, the assignment task encouraged students to apply Gibbs’ Reflective Cycle, which involves describing unique experiences, reflecting on feelings, evaluating and analysing those experiences, and concluding with a future plan. This assignment departed from traditional written reflection essays, allowing students to use AI and innovative multimedia formats such as videos, podcasts, and infographics to express their insights and learning. Drawing from the implementation of the “unessay” strategy, its effectiveness as a teaching approach was assessed through an anonymous end-of-course survey. This survey incorporated both quantitative and qualitative feedback gathered from approximately 50 students enrolled in the course, as well as from the tutors who taught it. The data provided insights into how students engaged with the “unessay” strategy, their perceptions of its effectiveness, and the tutors’ perceptions of using this strategy in the course. This presentation aims to facilitate discussions and reflections on the unessay concept and how it could be integrated into higher education (HE) assessment, serving as a potential gateway to a more diverse and inclusive assessment framework.

 

REFERENCES

Aycock, J., Wright, H., Hildebrandt, J., Kenny, D., Lefebvre, N., Lin, M., Mamaclay, M., Sayson, S., Stewart, A., & Yuen, A. (2019). Adapting the “Unessay” for use in computer science. Proceedings of the 24th Western Canadian Conference on Computing Education, 1–6.

Dawson, P. (2020). Cognitive offloading and assessment. In M. Bearman, P. Dawson, R. Ajjawi, J. Tai, & D. Boud (Eds.), Re-imagining University Assessment in a Digital World (pp. 37-48). Springer International Publishing.

Goodman, S. G. (2022). Just as long as it’s not an essay: The unessay as a tool for engagement in a cognitive psychology course. Teaching of Psychology, 0(0), 1–5. https://doi.org/10.1177/00986283221110542

Guiliano, J. (2022). The unessay as native-centered history and pedagogy. Teaching History: A Journal of Methods, 47(1), 6-12. https://doi.org/10.33043/TH.47.1.6-12

Irwin, R. (2022). The un-essay, and teaching in a time of monsters. Teaching History: A Journal of Methods, 47(1), 13-25. https://doi.org/10.33043/TH.47.1.13-25

Jakopak, R. P., Monteith, K. L., & Merkle, B. G. (2019). Writing science: Improving understanding and communication skills with the “unessay.” Bulletin of the Ecological Society of America, 100(4), 1–5. https://doi.org/10.1002/bes2.1610

Nave, L. (2021). Universal design for learning UDL in online environments: The HOW of learning. Journal of Developmental Education, 44(3), 34-35. http://www.jstor.org/stable/45381118

Neuhaus, J. (2022). Introduction to the Fall 2022 Special Issue: Using the unessay to teach history. Teaching History: A Journal of Methods, 47(1), 2-5. https://doi.org/10.33043/TH.47.1.2-5

O’Donnell, D. P. (2012, September 4). The unessay. Daniel Paul O’Donnell. http://people.uleth.ca/~daniel.odonnell/Teaching/the-unessay

Sullivan, P. (2015). The UnEssay: Making room for creativity in the composition classroom. College Composition and Communication, 67(1), 6-34. http://www.jstor.org/stable/24633867

Wood, J. L., & Stringham, N. (2022). The UnEssay project as an enriching alternative to practical exams in pre-professional and graduate education. Journal of Biological Education, 1–8. https://doi.org/10.1080/00219266.2022.2047098

 

Doing But Not Creating: A Theoretical Study of the Implications of ChatGPT on Paradigmatic Learning Processes

Koki MANDAI1*, Mark Jun Hao TAN1, Suman PADHI1, and Kuin Tian PANG1,2,3 

1Yale-NUS College
2Bioprocessing Technology Institute, Agency for Science, Technology, and Research (A*STAR), Singapore
3School of Chemistry, Chemical Engineering, and Biotechnology, Nanyang Technological University (NTU), Singapore

*m.koki@u.yale-nus.edu.sg

 

Mandai, K., Tan, M. J. H., Padhi, S., & Pang, K. T. (2023). Doing but not creating: A theoretical study of the implications of ChatGPT on paradigmatic learning processes [Paper presentation]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/doing-but-not-creating-a-theoretical-study-of-the-implications-of-chatgpt-on-paradigmatic-learning-processes/

SUB-THEME

AI and Education

 

KEYWORDS

AI, artificial intelligence, education, ChatGPT, learning, technology

 

CATEGORY

Paper Presentation 

 

CHATGPT AND LEARNING FRAMEWORKS

Introduction

Since the recent release of ChatGPT, developed by OpenAI, multiple sectors have been affected by it, and educational institutions have been impacted more deeply than most other fields (Dwivedi et al., 2023; Eke, 2023; Rudolph et al., 2023). Following the sub-theme of “AI and Education”, we conduct a systematic investigation into the educational uses of ChatGPT and its quality as a tool for learning, teaching, and assessing, mainly in higher education. The research is carried out through comprehensive literature reviews of the current and future educational landscape and of ChatGPT’s methodology and function, with major educational theories applied as the main component in constructing the evaluative criteria. Findings will be presented via a paper presentation.

 

Theoretical Foundations and Knowledge Gaps

Current literature on the intersections of education and artificial intelligence (AI) consists of variegated and isolated critiques of how AI impacts segments of the educational process. For instance, there is a large focus on the general benefits or harms in education (Baidoo-Anu & Ansah, 2023; Dwivedi et al., 2023; Mhlanga, 2023), rather than discussion of the specific levels of learning that students and teachers encounter. Furthermore, there seems to be a lack of analysis of the fundamental change in, and reconsideration of, the meaning of education that may occur with the introduction of AI. The situation can be described as a Manichean dichotomy: one side argues for the expected enhancements and improved efficiency in education (Ray, 2023; Rudolph et al., 2023), while the other warns of the risks of losing knowledge, creativity, and the basis of future development (Chomsky, 2023; Dwivedi et al., 2023; Krügel et al., 2022, 2023).

 

By referring to John Dewey’s reflective thought and action model for the micro-scale analysis (Dewey, 1986; Gutek, 2005; Miettinen, 2000) and a revision of Bloom’s taxonomy for the macro-scale analysis (Elsayed, 2023; Forehand, 2005; Kegan, 1977; Seddon, 1978), we consider the potential impact of ChatGPT over progressive levels of learning and the activities associated with each. These models were chosen mainly for their hierarchical frameworks, which allow for easier application in evaluation than other models; this does not imply that they are superior to others. The evaluative criteria we aim to construct will be comprehensive, so what our research provides is a possible base for future improvements. Moreover, we incorporate insights from perspectives beyond educational theory, such as policy and philosophy, drawing on the diverse backgrounds of our research team.

 

Purpose and Significance of the Present Study

This study sought to answer questions regarding the viability of ChatGPT as an educational tool, its proposed benefits and harms, and potential obstacles educators may face in its uptake, as well as relevant safeguards against those obstacles.

 

Furthermore, we suggest a possible base for a new theoretical framework in which ChatGPT is explicitly integrated with standard educational hierarchies, in order to provide better guidance to educators and students. This study aims to establish a careful baseline for policy considerations on ChatGPT as an educational tool that can either enhance or degrade learning. On this basis, ChatGPT could be adopted in educational institutions with accompanying developmental policies, to be considered and amended by governmental legislatures for wider educational use.

 

Potential Findings/Implications

Expectations from the existing literature suggest that, in keeping with intuitions regarding higher-level learning, ChatGPT itself seems limited to doing; that is, it can only process lower- to mid-level learning comprising repetitive actions such as remembering, understanding, applying, and analysing (Dwivedi et al., 2023; Elsayed, 2023). Some literature also positions ChatGPT as less directly useful in higher-level processes such as evaluation and the creation of new knowledge, and even suggests that it can hinder them (Crawford et al., 2023; Rudolph et al., 2023). Even within the lower-level processes, there is considerable concern that overreliance will dull learners (Halaweh, 2023; Ray, 2023). Yet through the lens of the educational theories applied in this paper, there seems to be a possibility that ChatGPT may be able to assist higher-order skills such as creativity and the related acquisition of knowledge. As the net benefit of ChatGPT for education may depend on external factors we have yet to account for, such as the field of study, the personality of the user, and the learning environment, further research is required to determine its optimal use in education. Still, this attempt may be one of the first steps towards constructing evaluative criteria for a new era of education with AI.

 

REFERENCES

Baidoo-Anu, D. & Ansah, L. O. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. SSRN. https://ssrn.com/abstract=4337484

Crawford, J., Cowling, M., & Allen, K. (2023). Leadership is needed for ethical ChatGPT: Character, assessment, and learning using artificial intelligence (AI). Journal of University Teaching & Learning Practice, 20(3). https://doi.org/10.53761/1.20.3.02

Chomsky, N., et al. (2023, March 8). Noam Chomsky: The false promise of ChatGPT. The New York Times. https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html

Dewey, J. (1986). Experience and education. The Educational Forum, 50(3), 241-52. https://doi.org/10.1080/00131728609335764

Dwivedi, Y. K. et al. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 1-63. https://doi.org/10.1016/j.ijinfomgt.2023.102642

Eke, D. O. (2023). ChatGPT and the rise of generative AI: Threat to academic integrity? Journal of Responsible Technology, 13, 1-4, https://doi.org/10.1016/j.jrt.2023.100060

Elsayed, S. (2023). Towards mitigating ChatGPT’s negative impact on education: Optimizing question design through Bloom’s taxonomy. https://doi.org/10.48550/arXiv.2304.08176

Forehand, M. (2005). Bloom’s taxonomy: Original and revised. In M. Orey (Ed.), Emerging perspectives on learning, teaching, and technology. http://projects.coe.uga.edu/epltt/

Gutek, G. L. (2005). Jacques Maritain and John Dewey on education: A reconsideration. Educational Horizons, 83(4), 247–63. http://www.jstor.org/stable/42925953

Halaweh, M. (2023). ChatGPT in education: Strategies for responsible implementation. Contemporary Educational Technology, 15(2), ep421. https://doi.org/10.30935/cedtech/13036

Kegan, D. L. (1977). Using Bloom’s cognitive taxonomy for curriculum planning and evaluation in nontraditional educational settings. The Journal of Higher Education, 48(1), 63–77. https://doi.org/10.2307/1979174

Krügel, S., Ostermaier, A., & Uhl, M. (2022). Zombies in the loop? Humans trust untrustworthy AI-advisors for ethical decisions. Philosophy & Technology, 35, 17. https://doi.org/10.1007/s13347-022-00511-9

Krügel, S., Ostermaier, A., & Uhl, M. (2023). ChatGPT’s inconsistent moral advice influences users’ judgment. Scientific Reports, 13, 4569. https://doi.org/10.1038/s41598-023-31341-0

Mhlanga, D. (2023). Open AI in education, the responsible and ethical use of ChatGPT towards lifelong learning. SSRN. https://ssrn.com/abstract=4354422

Miettinen, R. (2000). The concept of experiential learning and John Dewey’s theory of reflective thought and action. International Journal of Lifelong Education, 19(1), 54-72. https://doi.org/10.1080/026013700293458

Ray, P. P. (2023). ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems, 3, 121-154, https://doi.org/10.1016/j.iotcps.2023.04.003

Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning & Teaching, 6(1), 1-22. https://journals.sfu.ca/jalt/index.php/jalt/article/view/689

Seddon, G. M. (1978). The properties of Bloom’s Taxonomy of educational objectives for the cognitive domain. Review of Educational Research, 48(2), 303–23. https://doi.org/10.2307/1170087

 

Harnessing the Power of ChatGPT for Assessment Question Generation: Five Tips for Medical Educators

Inthrani Raja INDRAN*, Priya PARANTHAMAN, and Nurulhuda MUSTAFA

Department of Pharmacology,
Yong Loo Lin School of Medicine (YLLSoM)

*phciri@nus.edu.sg

 

Indran, I. R., Paranthaman, P., & Mustafa, N. (2023). Harnessing the power of ChatGPT for assessment question generation: Five tips for medical educators [Lightning talk]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/harnessing-the-power-of-chatgpt-for-assessment-question-generation-five-tips-for-medical-educators/ 

SUB-THEME

AI and Education 

 

KEYWORDS

AI, ChatGPT, questions, medical assessment

 

CATEGORY

Lightning Talks 

 

INTRODUCTION

Developing diverse and high-quality assessment questions for the medical curriculum is a complex and time-intensive task, as such questions often require the incorporation of clinically relevant scenarios aligned with the learning outcomes (Al-Rukban, 2006; Palmer & Devitt, 2007). The emergence of artificial intelligence (AI)-driven large language models (LLMs) presents an unprecedented opportunity to explore how AI can be harnessed to optimise and automate these complex tasks for educators (OpenAI, 2023). It also provides an opportunity for students to use LLMs to help create practice questions and further their understanding of the concepts they wish to test.

 

AIMS & METHODS

This study aims to establish a dependable set of practical pointers that enable educators to tap the ability of LLMs, such as ChatGPT, to, firstly, enhance question generation in healthcare professions education, using multiple-choice questions (MCQs) as an illustrative example, and secondly, to generate diverse clinical scenarios for teaching and learning purposes. Lastly, we hope that our experiences will encourage more educators, especially those with limited prior experience, to explore and use AI tools such as ChatGPT with greater ease.

 

To generate diverse, high-quality clinical scenario MCQs, we outlined core medical concepts and identified essential keywords for integration into the instruction stem. The text inputs were iteratively refined until we developed instruction prompts that could generate questions of the desired quality. Following question generation, the respective domain experts reviewed the questions for content accuracy and overall relevance, identifying any potential flaws in the question stem. This process of soliciting feedback and implementing refinements enabled us to continuously enhance the prompts and the quality of the questions generated. By prioritising expert review, we established a necessary validation process for the MCQs prior to their formal implementation.
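To make this workflow concrete, the sketch below is our own minimal, hypothetical illustration rather than part of the study’s materials: assuming the OpenAI Python SDK (v1.x) and an API key in the environment, it builds an instruction stem from core concepts and keywords, requests draft MCQs from ChatGPT, and writes them out for domain-expert review. The model name, prompt wording, and helper names are illustrative assumptions only.

```python
# Hedged sketch of the generate-and-review workflow described above.
# Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the environment;
# the prompt wording, model choice, and file handling are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def build_instruction_stem(topic: str, keywords: list[str], n_questions: int = 5) -> str:
    """Combine core concepts and essential keywords into an instruction stem."""
    return (
        f"Write {n_questions} single-best-answer multiple-choice questions on {topic} "
        "for undergraduate medical students. Each question must use a clinically "
        "relevant vignette, have five options (A-E), indicate the correct answer, "
        "and give a one-sentence explanation. Integrate these keywords: "
        + ", ".join(keywords) + "."
    )


def generate_mcqs(stem: str) -> str:
    """Send the instruction stem to ChatGPT and return the raw draft questions."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative choice; see Tip 1 on model selection
        messages=[{"role": "user", "content": stem}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    stem = build_instruction_stem(
        topic="beta-blocker pharmacology",
        keywords=["receptor selectivity", "contraindications", "adverse effects"],
    )
    draft = generate_mcqs(stem)
    # Drafts go to a file for domain-expert review before any formal use.
    with open("draft_mcqs_for_review.txt", "w", encoding="utf-8") as f:
        f.write(draft)
```

Keeping the stem construction in a separate function mirrors the iterative refinement described above: the wording can be adjusted and re-run without changing the rest of the pipeline, and every draft still passes through expert review.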

 

THE FIVE TIPS

We consolidated the following tips to effectively harness the power of ChatGPT for assessment question generation.

 

Tip 1: Define the Objective and Select the Appropriate Model

Determine the purpose of question generation and choose the appropriate AI model based on your needs and access. Choose ChatGPT 4.0 over 3.5 for greater accuracy and concept integration; note that ChatGPT 4.0 requires a subscription. Activate the beta features under “Settings”, use the “Browse with Bing” mode to retrieve information beyond the model’s training cut-off, and install plugins for improved AI performance.

 

Tip 2: Optimise Prompt Design

When refining the instruction stem for question generation, there are several important considerations. First, be specific in your instructions by stating the key concepts, question types, quantity, and answer format, and clearly state any guidelines or rules you want the model to follow. Build the instruction stem around core concepts and keywords relevant to the discipline, and experiment with the vocabulary to optimise question quality.

 

Tip 3: Build Diverse Authentic Scenarios

Develop a range of relevant clinical vignettes to broaden the scope of scenarios that can be used to assess students.

 

Tip 4: Calibrate Assessment Difficulty

Incorporate the principles of Bloom’s Taxonomy when developing assessment questions to test different cognitive skills, ranging from basic knowledge recall to complex analysis, enhancing question diversity.
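One possible way to operationalise this calibration (our illustrative sketch, not the authors’ instrument) is to keep a small mapping from Bloom’s levels to prompt qualifiers and append the relevant qualifier to the instruction stem from the earlier sketch; the level names and wording below are assumptions.

```python
# Illustrative mapping from Bloom's levels to difficulty qualifiers that can be
# appended to the instruction stem built in the earlier sketch.
BLOOM_QUALIFIERS = {
    "remember": "Test recall of definitions, classifications, and drug classes only.",
    "understand": "Require the student to explain the mechanism behind the vignette.",
    "apply": "Require selection of the most appropriate management for the scenario.",
    "analyse": "Require interpretation of investigations or interacting co-morbidities.",
    "evaluate": "Require justification of one management option over another.",
}


def calibrate_stem(stem: str, level: str) -> str:
    """Append the qualifier for the requested cognitive level to the stem."""
    return f"{stem} {BLOOM_QUALIFIERS[level]}"
```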

 

Tip 5: Work Around Limitations

Be mindful that ChatGPT is trained on limited data and can generate factually inaccurate information. Despite diverse training, ChatGPT does not possess the nuanced understanding of a medical expert, which can impact the quality of the questions it generates. Human validation is necessary to address any factual inaccuracies that may arise. AI data collection risks misuse, privacy breaches, and bias amplification, leading to misguided outcomes.

 

CONCLUSION

AI-assisted question generation is an iterative process, and these tips can provide any healthcare professions educator with valuable guidance in automating the generation of good-quality assessment questions. Furthermore, students can leverage this technology for self-directed learning, creating and verifying their own practice questions and strengthening their understanding of medical concepts (Touissi et al., 2022). While this paper primarily demonstrates the use of ChatGPT in generating MCQs, we believe that the approach can be extended to various other question types. It is also important to remember that although AI augments human expertise, it does not replace it (Ali et al., 2023; Rahsepar et al., 2023). Domain experts are needed to ensure quality, accuracy, and relevance.

 

REFERENCES 

OpenAI. (2023).

Al-Rukban, M. O. (2006). Guidelines for the construction of multiple choice questions tests. Journal of Family and Community Medicine, 13(3), 125-133. https://www.ncbi.nlm.nih.gov/pubmed/23012132

Ali, R., Tang, O. Y., Connolly, I. D., Fridley, J. S., Shin, J. H., Zadnik Sullivan, P. L., Cielo, D., Oyelese, A. A., Doberstein, C. E., Telfeian, A. E., Gokaslan, Z. L., & Asaad, W. F. (2023). Performance of ChatGPT, GPT-4, and Google Bard on a Neurosurgery Oral Boards Preparation Question Bank. Neurosurgery. https://doi.org/10.1227/neu.0000000000002551

Palmer, E. J., & Devitt, P. G. (2007). Assessment of higher order cognitive skills in undergraduate education: Modified essay or multiple choice questions? Research paper. BMC Medical Education, 7, 49. https://doi.org/10.1186/1472-6920-7-49

Rahsepar, A. A., Tavakoli, N., Kim, G. H. J., Hassani, C., Abtin, F., & Bedayat, A. (2023). How AI responds to common lung cancer questions: ChatGPT vs Google Bard. Radiology, 307(5), e230922. https://doi.org/10.1148/radiol.230922

Touissi, Y., Hjiej, G., Hajjioui, A., Ibrahimi, A., & Fourtassi, M. (2022). Does developing multiple-choice questions improve medical students’ learning? A systematic review. Medical Education Online, 27(1), 2005505. https://doi.org/10.1080/10872981.2021.2005505

 
