MCQs with a Twist: Using Python to Get More Out of LumiNUS Assessments

Hong Anh LE, Department of Biological Sciences/Special Programme in Science (SPS), Faculty of Science, and
Chammika UDALAGAMA, Department of Physics/SPS, Faculty of Science

In this blog post, Anh and Chammika share their experience implementing Unrestricted, Collaborative Open-Book Assessments (UCOAs).

Photo courtesy of Christina Morillo from Pexels

Overview

Assessments are inevitable in teaching and learning. They tend not to be welcomed by students (or by some instructors), and are often seen as a ‘necessary evil’. Yet assessments can offer opportunities for learning beyond simply checking whether students have retained module-related content: they can reveal areas of weakness, allow students to rectify misconceptions, and refine their understanding of the material. Assessments can also motivate and direct learning (Wormald, Schoeman, Somasunderam, & Penn, 2009).

Unfortunately, assessments do not always fulfil their potential as learning opportunities, due in part to a lack of immediate feedback (Gilley & Clarkston, 2014). Marking an exam can take weeks, by which time students may well have forgotten where they struggled, and they may not be motivated to read the feedback if there is no tangible incentive to improve. Our need for an assessment activity that enhances students’ learning through feedback prompted us to explore alternative assessment formats.

Unrestricted, Collaborative Open-Book Assessments (UCOAs)

We explored an assessment format we like to call the Unrestricted, Collaborative Open-Book Assessment (UCOA), which combines benefits from a few more established formats. UCOAs allow students both to discuss with classmates (hence “Collaborative”; Kapitanoff, 2009) and to use any available materials, including those from the internet (hence “Unrestricted” and “Open-Book”; Ardiansyah, Ujihanti, Aryanti, & Meirani, 2018). Sounds strange!? Perhaps, but UCOAs may have substantial potential to enhance learning, given the benefits associated with both collaborative testing and open-book assessments. For instance, the collaborative aspect ensures that students receive immediate feedback when they discuss their answers with classmates. Having to harmonise different views within a group also requires students to hone their interpersonal and critical-thinking skills (Kapitanoff, 2009; Sandahl, 2010). The unrestricted, open-book option can also be considered authentic, in that students get to exercise their resourcefulness the way they might in a real-world situation (Moore, 2018).

The Twist: Randomisation of Numerical Values in LumiNUS Assessments

We wanted to use UCOAs for a modestly large class (GEH1033 “How the Ocean Works”, with 150 students). With such numbers, practicality dictated the use of multiple-choice questions (MCQs). However, unless we as educators formulate the questions carefully, MCQs can be a poor form of assessment that fails to activate higher-order, deep learning (Xu, Kauer, & Tupy, 2016).

To overcome these hurdles, we individualised carefully crafted MCQs using randomised numerical values and randomised option orders. In this scheme, the whole class sees the same MCQs, but each student sees different numbers (e.g. one student sees the question “What is the sum of 5 and 2?”, while another sees “What is the sum of 10 and 8?”). This individualisation of questions lets us reap the benefits of UCOAs while discouraging cheating.

To date, we have generated a large number of these individualised MCQs using the Python programming language, and deployed them on LumiNUS, the university’s learning management system.
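To give a concrete sense of the approach, here is a minimal Python sketch of such a generator. It is illustrative rather than our production code: the question template echoes the toy “sum” example above, and the helper name make_variant and the matriculation-style student IDs are assumptions made for the example.

```python
import random

def make_variant(rng):
    """Build one individualised variant of a template MCQ."""
    a, b = rng.randint(1, 20), rng.randint(1, 20)
    correct = a + b
    # Distractors model plausible slips (subtraction, off-by-one); using a
    # set guards against accidental duplicates of the correct answer.
    distractors = {a - b, correct + 1, correct - 1}
    distractors.discard(correct)
    options = [correct, *distractors]
    rng.shuffle(options)  # randomised option order, as described above
    return {
        "question": f"What is the sum of {a} and {b}?",
        "options": options,
        "answer_index": options.index(correct),
    }

# Seeding the generator with each student's ID (hypothetical IDs here) makes
# every student's paper different yet exactly reproducible.
for student_id in ("A0000001", "A0000002", "A0000003"):
    print(student_id, make_variant(random.Random(student_id)))
```

Because each variant is derived from a per-student seed, it can be regenerated exactly when marking or auditing; the resulting bank of questions can then be transferred into LumiNUS.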

What do the Students Have to Say?

We asked the students taking GEH1033 to give their thoughts about UCOAs and randomised MCQs, and 75 students responded.

Figure 1. Students provide feedback about UCOAs and MCQs.

Among the respondents, as shown in Figure 1, the majority believed UCOAs would benefit learning (Question 2) and mitigate test anxiety (Question 1). While over 75% of the respondents had no concerns about UCOAs (Question 3), the rest highlighted potential problems related to cheating, free-riding, and the bell curve. These concerns likely contributed to the preference for randomly generated values over fixed ones, reported by over 70% of the respondents (Question 4).

Coming Up

As part of a current project supported by a Teaching Enhancement Grant (TEG), we hope to develop a platform that will allow everyone in the NUS community to use such randomised MCQs in UCOAs for their own teaching via LumiNUS.

 

References

Ardiansyah, W., Ujihanti, M., Aryanti, N., & Meirani, W. (2018). Formative assessment. Holistics Journal, 10(19), 19–27. Retrieved from https://www.jurnal.polsri.ac.id/index.php/holistic/article/view/973/746

Gilley, B. H., & Clarkston, B. (2014). Collaborative testing: Evidence of learning in a controlled in-class study of undergraduate students. Journal of College Science Teaching, 43(3), 83–91. https://www.jstor.org/stable/43632038

Kapitanoff, S. H. (2009). Collaborative testing: Cognitive and interpersonal processes related to enhanced test performance. Active Learning in Higher Education, 10(1), 56–70. https://doi.org/10.1177/1469787408100195

Moore, C. P. (2018). Adding authenticity to controlled conditions assessment: Introduction of an online, open book, essay based exam. International Journal of Educational Technology in Higher Education, 15(1). https://doi.org/10.1186/s41239-018-0108-z

Sandahl, S. S. (2010). Collaborative testing as a learning strategy in nursing education. Nursing Education Perspectives, 31(3), 142–147. https://doi.org/10.1043/1536-5026-31.3.142

Wormald, B. W., Schoeman, S., Somasunderam, A., & Penn, M. (2009). Assessment drives learning: An unavoidable truth? Anatomical Sciences Education, 2(5), 199–204. https://doi.org/10.1002/ase.102

Xu, X., Kauer, S., & Tupy, S. (2016). Multiple-choice questions: Tips for optimizing assessment in-seat and online. Scholarship of Teaching and Learning in Psychology, 2(2), 147–158. https://doi.org/10.1037/stl0000062


Hong Anh LE is a third-year undergraduate studying Life Sciences and a senior of the Special Programme in Science (SPS, sps.nus.edu.sg). Beyond her academic focus on molecular biology and biophysics, Anh is interested in programming and in finding effective learning methods.

Anh can be reached at honganh.le17@sps.nus.edu.sg.

Chammika UDALAGAMA is a senior lecturer in the Department of Physics, the Special Programme in Science (SPS), and the Science Communication Team at the Faculty of Science (FoS). Chammika teaches both undergraduate and graduate courses on topics ranging from quantum mechanics and Python programming to ocean dynamics. He is very interested in using technology to enhance teaching, learning, and assessment.

At present, Chammika, Anh, and M. Hema PRASHAAD (also from SPS) are developing several IT tools with the support of a Teaching Enhancement Grant (TEG).

Chammika can be reached at chammika@nus.edu.sg.


 
