The Other Benefits of Making AI-resistant Assessments

Olivier LEFEBVRE1,2
1Department of Civil and Environmental Engineering
2NUS Teaching Academy

ceelop@nus.edu.sg

 

Lefebvre, O. (2023). The other benefits of making AI-resistant assessments [Paper presentation]. In Higher Education Campus Conference (HECC) 2023, 7 December, National University of Singapore. https://blog.nus.edu.sg/hecc2023proceedings/the-other-benefits-of-making-ai-resistant-assessments/ 

SUB-THEME

AI and Education 

 

KEYWORDS

AI chatbots, AI-resistant assessments, authentic assessments

 

CATEGORY

Paper Presentation 

 

ABSTRACT

Over the past year, the world has witnessed growing concern over the rise in performance of artificial intelligence (AI) chatbots at a rate much faster than our own ability to comprehend all the implications, leading to questions on whether we should slow down or even halt the “race to god-like AI”1. In the academic world, concerns mostly relate to how to assess students in this new day and age, where ChatGPT has been found to pass entry exams in fields as varied as medicine, law, and business (Wilde, 2023). Such legitimate concerns have resulted in diverse responses from universities around the world, from banning AI chatbots altogether, as at the French university Sciences Po (Sciences Po, 2023), to providing guidelines and recommendations for staff and students, the choice made by NUS in our interim policy guidelines (NUS Libraries, n.d.).

 

The plagiarism issues and risks of other acts of academic dishonesty are real, but this is not the first time we have had to face them. At the peak of the COVID-19 pandemic, students were asked to take their exams from home, and in many cases the simple conversion of a pen-and-paper exam into a digital one, without rethinking the entire assessment, led to a rise in plagiarism and other cheating cases. As with the COVID-19 pandemic, can we once again rethink our assessments, not only proofing them against abuse of AI but also taking this opportunity to deliver more meaningful assessments, better aligned with the skills that our students need in this day and age (Mimirinis, 2019)? Instead of banning ChatGPT and the like, should we acknowledge that AI is here to stay, and design assessments that test higher-order thinking skills, allowing us at the same time to distinguish between students who engage in surface learning and those who have achieved a real deep understanding of the topic? Such exams would constitute a form of authentic assessment, recreating the conditions under which students will apply their knowledge in their professional environment (Shand, 2020).

 

In this talk, I will present some general guidelines on what kinds of exams can both test students on higher-order thinking skills and resist AI chatbots. Real examples will be provided, where students are asked to:

  • Deliver a critical analysis of a scientific paper
  • Interpret graphs or images
  • Solve ill-defined and complex problems

 

I will show how well (or not) these exams resist ChatGPT and compare the AI output to that of real (anonymised) students across a range of performance levels (excellent, average, marginal). I will conclude with the limitations, e.g., the risk of increasing the difficulty of the exam by too large a margin, making it hard for weaker students to perform reasonably well.

 

ENDNOTE

  1. Refer to https://www.ft.com/content/03895dc4-a3b7-481e-95cc-336a524f2ac2 for details.

 

REFERENCES

Mimirinis, M. (2019). Qualitative differences in academics’ conceptions of e-assessment. Assessment & Evaluation in Higher Education, 44(2), 233–248. http://dx.doi.org/10.1080/02602938.2018.1493087

NUS Libraries (n.d.). Academic integrity essentials. https://libguides.nus.edu.sg/new2nus/acadintegrity

Sciences Po (2023, January 27). Sciences Po bans the use of ChatGPT without transparent referencing. https://newsroom.sciencespo.fr/sciences-po-bans-the-use-of-chatgpt/

Shand, G. (2020). Bringing OSCEs into the 21st century: Why internet access is a requirement for assessment validity. Medical Teacher, 42(4), 469–471. http://dx.doi.org/10.1080/0142159X.2019.1693527

Wilde, J. (2023, January 27). ChatGPT passes medical, law, and business exams. Morning Brew. https://www.morningbrew.com/daily/stories/2023/01/26/chatgpt-passes-medical-law-business-exams
