Mark GAN
Centre for Development of Teaching & Learning (CDTL)
In this month’s Special Column, we consider some sources and examples of validated student learning questionnaires that you can adopt or adapt for use as pre- and post-tests.
Gan, M. J. S. (2022, November 29). Examples of validated student learning questionnaires for adoption or adaptation. Teaching Connections. https://blog.nus.edu.sg/teachingconnections/2022/11/30/examples-of-validated-student-learning-questionnaires-for-adoption-or-adaptation/
When you consider measuring students’ learning with pre- and post-tests, one measurement tool that comes to mind is the survey scale (a set of items measuring a construct, attribute, or trait of the learner or of learning), more commonly called a ‘questionnaire’ (a set of closed-ended or open-ended questions/items for data collection). Rather than creating your own instrument, you can readily adopt or adapt one of the many validated and relevant surveys already available. Consider the following sources (most are accessible from NUS Libraries):
- Health and Psychosocial Instruments (HaPI)–a bibliographic database that provides comprehensive, accurate information about behavioural and psychosocial measurement tools used in research across diverse disciplines and professions (searchable under NUS Libraries–Databases Search).
- APA PsycInfo–a database that indexes many thousands of published research studies which have utilized testing instruments (searchable under NUS Libraries–Databases Search).
- ERIC–an education database that provides extensive access to education-related literature (searchable under NUS Libraries–Databases Search).
- Practical Assessment, Research and Evaluation (PARE)–an online journal providing access to refereed articles that can have a positive impact on assessment, research, and evaluation.
- The Test Collection at ETS–a commercial database of more than 25,000 tests and other measurement devices, most of which were created by authors outside ETS.
- Mental Measurements Yearbook (MMY)–a commercial series of timely, consumer-oriented test reviews, providing evaluative information to promote and encourage informed test selection.
How do you select or identify potential scales to incorporate into a survey measuring students’ learning outcomes? Here are six steps (Johnson & Morgan, 2016) for evaluating an instrument that you have read about or that a colleague has recommended:
- Clarify the purpose of the instrument.
- Consider the fit of your context with the instrument.
- Take a look at what the research says about the use of this instrument.
- Obtain review copies of the instrument (plus the scoring, reporting and interpretation protocol) from the author(s). Note that some instruments are free to use for educational purposes if permission is sought and granted by the authors.
- Make a summary of the strengths and weaknesses (validity and reliability) of the instrument (a minimal reliability sketch follows this list).
- Decide (with the help of colleagues with survey expertise) on the use of the instrument for your students.
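To make the validity-and-reliability step concrete, here is a minimal sketch (in Python, with invented pilot data on a 5-point Likert scale) of how one might estimate a scale’s internal consistency using Cronbach’s alpha; the item names and responses here are hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: rows are respondents, columns are items of one scale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical pilot responses: 6 students x 4 items, 5-point Likert scale
pilot = pd.DataFrame({
    "item1": [4, 5, 3, 4, 2, 5],
    "item2": [4, 4, 3, 5, 2, 4],
    "item3": [3, 5, 2, 4, 3, 5],
    "item4": [4, 4, 3, 4, 2, 5],
})
print(f"Cronbach's alpha = {cronbach_alpha(pilot):.2f}")
```

As a common rule of thumb, alpha values of about .70 or higher are often treated as acceptable for research use, though what counts as adequate depends on how the scores will be used.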
Here are four examples of instruments which you may find useful for measuring students’ learning outcomes in different contexts (a brief scoring sketch follows the list):
- Study Process Questionnaire (SPQ)–a widely used measure of learning approach, proposed to have three orientations: surface, deep, and achieving, each with an underlying motive and strategy (Biggs, 1987; see the shortened 18-item SPQ by Fox, McManus, & Winder, 2001).
- Motivated Strategies for Learning Questionnaire (MSLQ)–assesses students’ motivational orientations and their use of different learning strategies in a university course (Pintrich, 1991; Pintrich, Smith, Garcia, & McKeachie, 1993).
- First Year Experience Questionnaire (FYEQ)–measures undergraduate student engagement, including online, self-managed, peer, and student-staff engagement (see Krause & Coates, 2008).
- Interdisciplinary Understanding Questionnaire (IUQ)–assesses students’ interdisciplinary understanding in relation to knowledge of different disciplinary paradigms, knowledge of interdisciplinarity, reflection skills, critical reflection skills, communication skills, and collaboration skills (see Schijf et al., 2022).
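As an illustration of how instruments like these are typically scored, the sketch below averages Likert-scale responses into subscale scores. The item-to-subscale mapping is invented for illustration and is not the published SPQ key; always consult the instrument’s manual for the actual item assignments and any reverse-scored items.

```python
import numpy as np
import pandas as pd

# Invented item-to-subscale mapping, for illustration only (not the published SPQ key)
subscales = {
    "deep":      ["q1", "q4", "q7"],
    "surface":   ["q2", "q5", "q8"],
    "achieving": ["q3", "q6", "q9"],
}

# Simulated responses: 20 students answering 9 items on a 5-point scale
rng = np.random.default_rng(seed=0)
responses = pd.DataFrame(
    rng.integers(1, 6, size=(20, 9)),
    columns=[f"q{i}" for i in range(1, 10)],
)

# Each student's subscale score is the mean of that subscale's items
scores = pd.DataFrame(
    {name: responses[items].mean(axis=1) for name, items in subscales.items()}
)
print(scores.describe().loc[["mean", "std"]])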
Do keep in mind that each type of measure (e.g., self-report questionnaires, observations, teacher ratings) has its strengths and weaknesses. When collecting evidence of student learning, it is important to combine instruments so that the data can be triangulated as learning unfolds over students’ studies.
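Returning to the pre- and post-test framing above, here is a minimal sketch of one common analysis: a paired-samples t-test on scale scores from the same students before and after a course. The scores are invented for illustration; in practice you would also check the test’s assumptions and report an effect size alongside the p-value.

```python
import numpy as np
from scipy import stats

# Hypothetical mean scale scores for the same 8 students, pre- and post-course
pre  = np.array([2.8, 3.1, 2.5, 3.4, 2.9, 3.0, 2.6, 3.2])
post = np.array([3.4, 3.6, 2.9, 3.9, 3.1, 3.5, 3.0, 3.6])

gain = post - pre
t_stat, p_value = stats.ttest_rel(post, pre)  # paired-samples t-test
cohens_d = gain.mean() / gain.std(ddof=1)     # effect size for paired data
print(f"mean gain = {gain.mean():.2f}, t = {t_stat:.2f}, "
      f"p = {p_value:.3f}, d = {cohens_d:.2f}")
```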
References
Bandura, A. (2006). Guide for constructing self-efficacy scales. In F. Pajares & T. Urdan (Eds.), Adolescence and education, Vol. 4: Self-efficacy beliefs of adolescents (pp. 307-337). Information Age Publishing.
Biggs, J. B. (1987). Student approaches to learning. Australian Council for Educational Research.
Fox, R. A., McManus, I. C., & Winder, B. C. (2001). The shortened Study Process Questionnaire: An investigation of its structure and longitudinal stability using confirmatory factor analysis. British Journal of Educational Psychology, 71(4), 511-530. https://doi.org/10.1348/000709901158659
Johnson, R. L., & Morgan, G. B. (2016). Survey scales: A guide to development, analysis, and reporting. Guilford Publications.
Krause, K. L., & Coates, H. (2008). Students’ engagement in first-year university. Assessment & Evaluation in Higher Education, 33(5), 493-505. https://doi.org/10.1080/02602930701698892
Pintrich, P. R. (1991). A manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ). National Center for Research to Improve Postsecondary Teaching and Learning. https://files.eric.ed.gov/fulltext/ED338122.pdf
Pintrich, P. R., Smith, D. A., Garcia, T., & McKeachie, W. J. (1993). Reliability and predictive validity of the Motivated Strategies for Learning Questionnaire (MSLQ). Educational and Psychological Measurement, 53(3), 801-813. https://doi.org/10.1177/0013164493053003024
Schijf, J. E., van der Werf, G. P., & Jansen, E. P. (2022). Measuring interdisciplinary understanding in higher education. European Journal of Higher Education, 1-19. https://doi.org/10.1080/21568235.2022.2058045
Mark GAN is an Associate Director at the Centre for Development of Teaching and Learning (CDTL), NUS. He has been involved in a wide variety of higher education initiatives and programmes to enhance the professional development of staff, such as courses on developing a Teaching Portfolio and writing teaching inquiry grants. His research interests include feedback and assessment, and the impact of academic development work on teaching and learning. Mark has a PhD in Education from the University of Auckland, supervised by Professor John Hattie. Mark can be reached at mark.gan@nus.edu.sg.