Does Grading an Assignment Matter for Student Engagement – A Case Study in an Interdisciplinary Course with Science and Humanities

LIU Mei Hui1 and Stephen En Rong TAY2

1Department of Food Science and Technology, College of Humanities and Sciences, NUS
2Department of the Built Environment, College of Design and Engineering (CDE), NUS

fstlmh@nus.edu.sg; stephen.tay@nus.edu.sg

 

Liu, M. H., & Tay, S. E. R. (2024). Does grading an assignment matter for student engagement: A case study in an interdisciplinary course with science and humanities [Paper presentation]. In Higher Education Conference in Singapore (HECS) 2024, 3 December, National University of Singapore. https://blog.nus.edu.sg/hecs/hecs2024-liu-tay/

SUB-THEME

Opportunities from Engaging Communities

 

KEYWORDS

Interdisciplinarity, peer learning, student-generated questions, assessment, feedback

 

CATEGORY

Paper Presentation 

 

INTRODUCTION

The Scientific Inquiry II (SI2) course, HSI2007 “Deconstructing Food”, has previously employed scenario-based student-generated questions and answers (sb-SGQA) to encourage interdisciplinary learning (Tay & Liu, 2023). In the activity, students develop questions and answers based on the learning objectives, contextualised to community examples beyond the classroom. This contextualisation to a scenario supports authentic assessment (Wiggins, 1990). To further increase student engagement, the sb-SGQA activity was converted into a graded assignment in AY2023/24 Semester 1. This change was motivated by literature reporting that a graded assignment motivates students in their learning, specifically as an extrinsic motivator in which students are incentivised to work towards a reward (i.e. a good grade) (Docan, 2006; Harlen et al., 2002; Schinske & Tanner, 2014). Hence, this study aims to answer the following questions:

  1. Does the graded sb-SGQA improve student performance, evidenced through a comparison of the continual assessment marks between the graded and ungraded cohorts?
  2. What are students’ perceptions of the sb-SGQA approach from both the graded and ungraded cohorts?

METHODOLOGY

The graded sb-SGQA (20% weightage) was adopted in AY2023/24 Semester 1, and results were compared with data from AY2022/23 Semester 2, when the sb-SGQA was not graded. Two continual assessment (CA) components that were present in both cohorts, an MCQ Quiz (20% weightage) and an Individual Essay (20% weightage), were analysed. Numerical data were analysed with JASP, an open-source statistical package (Love et al., 2019).
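For readers who prefer a scripted workflow, the kind of two-cohort comparison run in JASP can be sketched in a few lines of Python. The test choice (Welch's independent-samples t-test), file name, and column names below are illustrative assumptions, not the exact analysis settings used in this study.

```python
# Illustrative sketch (not the actual analysis) of comparing CA scores
# between the graded and ungraded cohorts with an independent-samples t-test.
# The file name and column names are assumptions for illustration.
import pandas as pd
from scipy import stats

scores = pd.read_csv("ca_scores.csv")  # hypothetical file: one row per student

graded = scores.loc[scores["cohort"] == "graded", "mcq_quiz"]
ungraded = scores.loc[scores["cohort"] == "ungraded", "mcq_quiz"]

# Welch's variant does not assume equal variances across the two cohorts
t_stat, p_value = stats.ttest_ind(graded, ungraded, equal_var=False)
print(f"MCQ Quiz: t = {t_stat:.2f}, p = {p_value:.3f}")
```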

RESULTS

In Figure 1, students analysed and discussed how meals served to students in the East and the West differ, while Figure 2 demonstrates how students employed content from an online community for a case study. Through these questions, students demonstrated their grasp of concepts in nutrition, food microbiology (e.g., fermented foods), and health-related information.

HECS2024-a20-Fig1

Figure 1. Example of students’ work analysing meals in other communities

 

HECS2024-a20-Fig2

Figure 2. Student work in question-and-answer generation through engaging the digital community.


 

When the CA scores were analysed, a statistically significant difference was observed for the MCQ Quiz but not for the Individual Essay (refer to Table 1). This could be attributed to the open-ended nature of the Individual Essay, which requires competencies in articulating ideas and positioning one's views; these demands may have masked the effect of the graded sb-SGQA.

Table 1
Score comparisons for MCQ Quiz, Individual Essay, and CA across the graded (n=102) and ungraded (n=184) cohorts

HECS2024-a20-Table1

 

Table 2 presents student feedback on the sb-SGQA approach. The majority of students in both the graded and ungraded cohorts shared that the sb-SGQA helped with their learning. Though the activity was challenging, students enjoyed it and recommended it for future courses. The qualitative feedback (refer to Table 3) revealed that Humanities and Science students appreciated how their diverse views could be incorporated through the sb-SGQA (Humanities 1, Humanities 3, Science 3). The sb-SGQA also pushes students to reflect more deeply on the course materials in order to develop meaningful questions and answers, thus aiding their learning (Humanities 2, Science 1). Students appreciated the contextualisation of the learning objectives to community examples (Humanities 4, Science 2). The approach was also used by students to integrate topics taught across the entire course, allowing them to appreciate the course as a whole (Science 4). Similar themes were observed in the ungraded cohort.

 

Table 2
Student feedback from the graded (left) and ungraded (right) cohorts, separated by “/”. Responses are represented as percentages and were obtained from 102 respondents in the graded cohort and 120 respondents in the ungraded cohort. The modal responses are bolded for emphasis

HECS2024-a20-Table2

 

Table 3
Qualitative feedback from Humanities and Science students in the graded cohort

HECS2024-a20-Table3

CONCLUSION AND SIGNIFICANCE

The change to a graded assignment increased students’ performance in the MCQ Quiz segment but not the Individual Essay segment. Student perceptions of the approach were generally positive across both the graded and ungraded cohorts. The results suggest that students’ perceived value of a learning activity may not depend solely on whether the activity is graded. The significance of this study lies in how sb-SGQA can support community engagement in the creation of case studies without incurring software or hardware costs.

REFERENCES

Docan, T. N. (2006). Positive and negative incentives in the classroom: An analysis of grading systems and student motivation. Journal of the Scholarship of Teaching and Learning, 6, 21-40. https://scholarworks.iu.edu/journals/index.php/josotl/article/view/1668/1666

Harlen, W., Crick, R. D., Broadfoot, P., Daugherty, R., Gardner, J., James, M., & Stobart, G. (2002). A systematic review of the impact of summative assessment and tests on students’ motivation for learning. https://dspace.stir.ac.uk/bitstream/1893/19607/1/SysRevImpSummativeAssessment2002.pdf

Love, J., Selker, R., Marsman, M., Jamil, T., Dropmann, D., Verhagen, J., Ly, A., Gronau, Q. F., Šmíra, M., Epskamp, S., Matzke, D., Wild, A., Knight, P., Rouder, J. N., Morey, R. D., & Wagenmakers, E.-J. (2019). JASP: Graphical statistical software for common statistical designs. Journal of Statistical Software, 88(2), 1–17. https://doi.org/10.18637/jss.v088.i02

Schinske, J., & Tanner, K. (2014). Teaching more by grading less (or differently). CBE Life Sciences Education, 13(2), 159–166. https://doi.org/10.1187/cbe.cbe-14-03-0054

Tay, S. E. R., & Liu, M. H. (2023, December 7). Exploratory implementation of scenario-based student-generated questions for students from the humanities and sciences in a scientific inquiry course. Higher Education Campus Conference (HECC) 2023, Singapore. https://blog.nus.edu.sg/hecc2023proceedings/exploratory-implementation-of-scenario-based-student-generated-questions-for-students-from-the-humanities-and-sciences-in-a-scientific-inquiry-course/

Wiggins, G. (1990). The case for authentic assessment. Practical Assessment, Research and Evaluation, 2, 1-3. https://doi.org/10.7275/ffb1-mm19

Whisper AI: Enhancing Feedback on Oral Assessments and Facilitating Research and Analysis

Muzzammil Yassin

Centre for Language Studies (CLS), Faculty of Arts and Social Sciences (FASS)

clsmmy@nus.edu.sg

 

Muzzammil Yassin (2024). Whisper AI: Enhancing feedback on oral assessments and facilitating research and analysis [Paper presentation]. In Higher Education Conference in Singapore (HECS) 2024, 3 December, National University of Singapore. https://blog.nus.edu.sg/hecs/hecs2024-m-yassin/

SUB-THEME

Opportunities from Generative AI

 

KEYWORDS

Whisper AI, feedback, speaking assessments, speech corpus, transcribing

 

CATEGORY

Paper Presentation 

 

INTRODUCTION

Feedback in its various forms plays an integral role in learning a foreign language. In the Arabic Studies Programme at the Centre for Language Studies (CLS), oral assessments have traditionally involved either face-to-face interviews or presentations delivered by learners; the latter may be done “live” or submitted as a recording. Feedback on these assessments has typically been scarce and, when provided upon a student's request, tends to be general, with little detail or correction. Furthermore, unlike written assignments, the data from oral assignments is usually not collected or stored. Thus, when students receive feedback, they rarely have the opportunity to hear what they said, how they said it, or to visualise it.

 

This presentation, based on action research, explores the potential of using Whisper from OpenAI to enhance the feedback mechanism for oral assignments in language classrooms. The usage of AI discussed here falls within the detect-diagnose-act framework (Molenaar, 2022). Over AY 2023/24, Whisper AI was used to transcribe a relatively large amount of spoken data in a bid to provide enhanced feedback (Figures 1 and 2). This facilitated the analysis of students’ linguistic output to identify areas for improvement in pronunciation, grammatical structures, idiomaticity, and vocabulary choice. Research has demonstrated the usefulness of transcripts for students in the feedback process (Lynch, 2001, 2007). Beyond enabling enhanced feedback, transcribing the spoken data allows it to be stored and compiled into a corpus, which subsequently allows for reflection, error analysis, and data-driven design of activities. Thus, utilising Whisper on data collected from oral assignments can help facilitate research in the long run.

 

HECS2024-a105-Fig1
Figure 1. Sample of the code run on Google Colab
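The exact code in Figure 1 is not reproduced in this text. A minimal sketch of a comparable Colab cell, assuming the open-source openai-whisper package and an illustrative file name, might look like the following:

```python
# Minimal sketch of transcribing a single oral assessment recording.
# Assumes openai-whisper is installed in the Colab session
# (e.g. pip install openai-whisper); model size and file name are illustrative.
import whisper

model = whisper.load_model("medium")  # larger models trade speed for accuracy
result = model.transcribe("student_recording.mp3", language="ar")
print(result["text"])
```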

 

HECS2024-a105-Fig2
Figure 2. Sample of the code run on Google Colab on multiple audio files
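Likewise, the batch run shown in Figure 2 could be approximated by looping over a folder of recordings and saving one transcript per file; the folder and file names below are assumptions for illustration:

```python
# Sketch of batch transcription over a folder of recordings (one transcript
# per audio file). Folder and file names are illustrative assumptions.
import glob
import os
import whisper

model = whisper.load_model("medium")
os.makedirs("transcripts", exist_ok=True)

for audio_path in sorted(glob.glob("recordings/*.mp3")):
    result = model.transcribe(audio_path, language="ar")
    out_name = os.path.splitext(os.path.basename(audio_path))[0] + ".txt"
    with open(os.path.join("transcripts", out_name), "w", encoding="utf-8") as f:
        f.write(result["text"])
```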

 

OUTCOMES

This intervention has an impact on two aspects: the feedback provided to the learner, and the data compiled for further reflection and research.

 

Regarding the first, students are provided with a feedback table containing the transcript of their presentation or oral interview. This is usually done in the second half of the semester, after the oral assessment has been conducted. Written comments are provided alongside the transcript. Students are presented with this feedback table during a consultation session, held face-to-face or over Zoom. The feedback contained within the table is discussed and suggestions are provided on how students can improve their speaking proficiency. If required, excerpts from the recording can be played. This allows students to hear what they said and to visualise it as well. Students take note of the errors made and seek further clarification if necessary. The written comments and the transcript are theirs to keep for future reference.
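One way such a feedback table could be assembled (a sketch of an assumed workflow, not the exact format used in the course) is to export Whisper's time-stamped segments into a spreadsheet with an empty comments column for the teacher to fill in:

```python
# Sketch: export Whisper's time-stamped segments to a CSV feedback table
# with a blank "Comments" column for teacher annotations. File names are
# illustrative assumptions.
import csv
import whisper

model = whisper.load_model("medium")
result = model.transcribe("student_recording.mp3", language="ar")

with open("feedback_table.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Start (s)", "End (s)", "Student's utterance", "Comments"])
    for seg in result["segments"]:
        writer.writerow([round(seg["start"], 1), round(seg["end"], 1),
                         seg["text"].strip(), ""])
```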

 

Student feedback from AY 2023/24 shows positive comments regarding the feedback provided throughout the semester, a large part of which was feedback on oral assessments enhanced with the help of Whisper AI (Figures 3 and 4). Such feedback also plays a role in further developing students’ speaking skills (Al Jahromi, 2020).

HECS2024-a105-Fig3
Figure 3. Sample feedback on an oral assessment question with areas for improvement colour coded

 

HECS2024-a105-Fig4
Figure 4. Sample feedback table with areas for improvement colour coded

 

The other area which benefits from this intervention using Whisper AI relates to collecting and compiling spoken data. This would be extremely useful for reflection and research, especially when the data is collected on a longitudinal basis, from the beginner course, LAR1201 “Arabic 1”, to the most advanced course, LAR4202 “Arabic 6”. Such a corpus would allow teachers to document part of the development of learners’ speaking abilities. Such rich and varied transcribed data, along with the audio recordings, has the potential to contribute to a better understanding of language acquisition and to a data-driven design of teaching materials.
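As a sketch of how such a longitudinal corpus might be organised (the metadata fields, course codes shown, and file layout are assumptions for illustration), each transcript could be appended to a simple corpus file together with minimal metadata:

```python
# Sketch: append one anonymised transcript, with minimal metadata, to a
# JSONL corpus file. Field names and paths are illustrative assumptions.
import json

def add_to_corpus(transcript_text, course_code, semester, task_type,
                  corpus_path="arabic_speech_corpus.jsonl"):
    """Store a transcript with metadata for later reflection and research."""
    record = {
        "course": course_code,    # e.g. "LAR1201" through "LAR4202"
        "semester": semester,     # e.g. "AY2023/24 Semester 1"
        "task": task_type,        # e.g. "oral interview" or "presentation"
        "transcript": transcript_text,
    }
    with open(corpus_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example usage with placeholder values
add_to_corpus("...", "LAR1201", "AY2023/24 Semester 1", "presentation")
```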

 

The following table provides a brief ‘before’ and ‘after’ overview of my teaching practice with regard to providing feedback on oral assessments.

Table 1
Overview of my teaching practice and the quality of feedback given on oral assessments ‘before’ and ‘after’ the application of Whisper AI

HECS2024-a105-Table1

 

In summary, Whisper AI has helped to fill a gap in providing more detailed feedback on oral assessments to students of the CLS Arabic Studies Programme. In addition to the benefit of helping students ‘notice’ and ‘visualise’ (Lynch, 2001) their speech, the data from such interventions can be used to create activities aimed at correcting common errors in learners’ spoken and written production. Technology is a force multiplier that enhances the learning experience when used appropriately. The augmentative approach to using AI demonstrated above seeks to benefit both the learner and the teacher, in line with what the detect-diagnose-act framework advocates (Molenaar, 2022).

 

REFERENCES

Al Jahromi, D. (2020). Can teacher and peer formative feedback enhance L2 university students’ oral presentation skills? In S. Hidri (Ed.), Changing Language Assessment. Palgrave Macmillan. https://doi.org/10.1007/978-3-030-42269-1_5

Lynch, T. (2001). Seeing what they meant: transcribing as a route to noticing. ELT Journal, 55(2), 124–132. https://doi.org/10.1093/elt/55.2.124

Lynch, T. (2007). Learning from the transcripts of an oral communication task. ELT Journal, 61(4), 311–320. https://doi.org/10.1093/elt/ccm050

Molenaar, I. (2022). Towards hybrid human-AI learning technologies. European Journal of Education, 57, 632–645. https://doi.org/10.1111/ejed.12527

Stillwell, C., Curabba, B., Alexander, K., Kidd, A., Kim, E., Stone, P., & Wyle, C. (2010). Students transcribing tasks: Noticing fluency, accuracy, and complexity. ELT Journal, 64(4), 445–455. https://doi.org/10.1093/elt/ccp081

 

 
