LIU Mei Hui1 and Stephen En Rong TAY2
1Department of Food Science and Technology, College of Humanities and Sciences, NUS
2Department of the Built Environment, College of Design and Engineering (CDE), NUS
fstlmh@nus.edu.sg; stephen.tay@nus.edu.sg
Liu, M. H., & Tay, S. E. R. (2024). Does grading an assignment matter for student engagement: A case study in an interdisciplinary course with science and humanities [Paper presentation]. In Higher Education Conference in Singapore (HECS) 2024, 3 December, National University of Singapore. https://blog.nus.edu.sg/hecs/hecs2024-liu-tay/
SUB-THEME
Opportunities from Engaging Communities
KEYWORDS
Interdisciplinarity, peer learning, student-generated questions, assessment, feedback
CATEGORY
Paper Presentation
INTRODUCTION
The Scientific Inquiry II (SI2) course – HSI2007 “Deconstructing Food” – previously employed scenario-based student-generated questions and answers (sb-SGQA) to encourage interdisciplinary learning (Tay & Liu, 2023). In the activity, students were tasked with developing questions and answers based on the learning objectives, contextualised to community examples beyond the classroom. Such contextualisation to a scenario supports authentic assessment (Wiggins, 1990). To further increase student engagement with the activity, the sb-SGQA was changed to a graded assignment in AY2023/24 Semester 1. This change was motivated by literature reporting that graded assignments motivate students in their learning, specifically as an extrinsic motivator in which students are incentivised to work towards a reward (i.e., a good grade) (Docan, 2006; Harlen et al., 2002; Schinske & Tanner, 2014). Hence, this study aims to answer the following questions:
- Does the graded sb-SGQA improve student performance, evidenced through a comparison of the continual assessment marks between the graded and ungraded cohorts?
- What are students’ perceptions of the sb-SGQA approach from both the graded and ungraded cohorts?
METHODOLOGY
The graded sb-SGQA (20% weightage) was adopted in AY2023/24 Semester 1, and results were compared with data from AY2022/23 Semester 2, when the sb-SGQA was ungraded. Two continual assessment (CA) components, an MCQ Quiz (20% weightage) and an Individual Essay (20% weightage), were analysed, as these were the components present in both cohorts. Numerical data were analysed with JASP, an open-source statistical package (Love et al., 2019).
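The between-cohort comparison reported here was run in JASP; for readers who prefer a scriptable equivalent, the same style of comparison (a two-sample t-test allowing unequal variances, i.e. Welch's test) can be sketched in Python using only the standard library. The scores below are hypothetical placeholders, not the study's data.

```python
from statistics import mean, variance
import math

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom
    (Welch-Satterthwaite), for comparing two cohorts with possibly
    unequal variances and sample sizes."""
    ma, mb = mean(a), mean(b)
    va, vb = variance(a), variance(b)  # sample variances (n-1 denominator)
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb            # squared standard error of the difference
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical MCQ Quiz scores (out of 20) for illustration only
graded = [14, 15, 16, 13, 17, 15, 16]
ungraded = [12, 14, 13, 15, 12, 13, 14]
t, df = welch_t(graded, ungraded)
print(f"t = {t:.2f}, df = {df:.1f}")
```

In practice, a package such as SciPy (`scipy.stats.ttest_ind` with `equal_var=False`) would also report the p-value; the sketch above only illustrates the test statistic underlying the cohort comparison.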
RESULTS
In Figure 1, students analysed and discussed how meals served to students in the East and the West differ, while Figure 2 demonstrates how students employed content from the online community for a case study. Through these questions, students demonstrated understanding of concepts in nutrition, food microbiology (e.g., fermented foods), and health-related information.
Figure 1. Example of students’ work analysing meals in other communities.
Figure 2. Student work in question-and-answer generation through engaging the digital community.
When the CA scores were analysed, a statistically significant difference was observed for the MCQ Quiz but not for the Individual Essay (refer to Table 1). This could be attributed to the open-ended nature of the Individual Essay, which requires competencies in articulating ideas and positioning one’s views; these demands may have masked any effect of grading.
Table 1
Score comparisons for MCQ Quiz, Individual Essay, and CA across the graded (n=102) and ungraded (n=184) cohorts
Table 2 presents student feedback on the sb-SGQA approach. The majority of students in both the graded and ungraded cohorts shared that the sb-SGQA helped with their learning. Though the activity was challenging, students enjoyed it and recommended it for future courses. The qualitative feedback (refer to Table 3) revealed that Humanities and Science students appreciated how their diverse views could be incorporated through the sb-SGQA (Humanities 1, Humanities 3, Science 3). The sb-SGQA also required students to reflect more deeply on the course materials to develop meaningful questions and answers, thus aiding their learning (Humanities 2, Science 1). Students appreciated the contextualisation of the learning objectives to community examples (Humanities 4, Science 2). Students also used the approach to integrate topics taught across the entire course, allowing them to appreciate the course as a whole (Science 4). The themes were similar in the ungraded cohort.
Table 2
Student feedback from the graded (left) and ungraded (right) cohorts, separated by “/”. Responses are presented as percentages, obtained from 102 respondents in the graded cohort and 120 respondents in the ungraded cohort. The modes are bolded for emphasis
Table 3
Qualitative feedback from Humanities and Science students in the graded cohort
CONCLUSION AND SIGNIFICANCE
The change to a graded assignment improved students’ performance in the MCQ Quiz segment but not in the Individual Essay segment. Student perceptions of the approach were generally positive across both the graded and ungraded cohorts. The results suggest that students’ perceived value of a learning activity may not depend solely on whether the activity is graded. The significance of this study lies in how the sb-SGQA can support community engagement in the creation of case studies without incurring software and hardware costs.
REFERENCES
Docan, T. N. (2006). Positive and negative incentives in the classroom: An analysis of grading systems and student motivation. Journal of the Scholarship of Teaching and Learning, 6(2), 21–40. https://scholarworks.iu.edu/journals/index.php/josotl/article/view/1668/1666
Harlen, W., Crick, R. D., Broadfoot, P., Daugherty, R., Gardner, J., James, M., & Stobart, G. (2002). A systematic review of the impact of summative assessment and tests on students’ motivation for learning. https://dspace.stir.ac.uk/bitstream/1893/19607/1/SysRevImpSummativeAssessment2002.pdf
Love, J., Selker, R., Marsman, M., Jamil, T., Dropmann, D., Verhagen, J., Ly, A., Gronau, Q. F., Šmíra, M., Epskamp, S., Matzke, D., Wild, A., Knight, P., Rouder, J. N., Morey, R. D., & Wagenmakers, E.-J. (2019). JASP: Graphical statistical software for common statistical designs. Journal of Statistical Software, 88(2), 1–17. https://doi.org/10.18637/jss.v088.i02
Schinske, J., & Tanner, K. (2014). Teaching more by grading less (or differently). CBE Life Sciences Education, 13(2), 159–166. https://doi.org/10.1187/cbe.cbe-14-03-0054
Tay, S. E. R., & Liu, M. H. (2023, December 7). Exploratory implementation of scenario-based student-generated questions for students from the humanities and sciences in a scientific inquiry course [Paper presentation]. Higher Education Campus Conference (HECC) 2023, Singapore. https://blog.nus.edu.sg/hecc2023proceedings/exploratory-implementation-of-scenario-based-student-generated-questions-for-students-from-the-humanities-and-sciences-in-a-scientific-inquiry-course/
Wiggins, G. (1990). The case for authentic assessment. Practical Assessment, Research and Evaluation, 2, 1-3. https://doi.org/10.7275/ffb1-mm19