Technology in Pedagogy, No. 17, May 2013
Written by Kiruthika Ragupathi
Online learning and educational apps seem to be the new buzzwords in education. The advent of MOOCs, educational applications (apps) and online lectures delivered via iTunesU, Coursera and TED looks set to bring about a paradigm shift in modern pedagogy. Yet it is always important to be mindful of the educational principles that underpin good (and sound) pedagogy, says Erle Lim, an Associate Professor at the Department of Medicine, Yong Loo Lin School of Medicine, National University of Singapore. As educators, it is important to ask, “Are we engaging our students?”, and more importantly, “Are we teaching students to curate knowledge rather than just acquire lots of meaningless facts?”
Assessments and high-stakes examinations are therefore important to determine if students are learning (and applying what they learn). Despite the healthy skepticism about these new-fangled ideas, we need to ask ourselves if we should embrace technology and better utilize smart devices and online tools to fully engage our students and test their ability to apply what they have learned, rather than just regurgitate “rote” knowledge. In this session, A/P Lim discussed the potential for online assessments – how to use them, and when not to.
A/P Lim started the session with a brief introduction to online assessments, and then highlighted the benefits and problems associated with using online assessments.
Assessment + computer + network = online assessment.
Benefits of online assessments
- Instant and detailed feedback – how students perform, and how top-end students compare with bottom-end students
- Flexibility of location and time – students can log on and take the exam at any time and place; it is important, however, to differentiate between formative and summative assessments
- Multimedia – incorporating multimedia objects makes assessments more lively
- Enables interactivity – e.g. blogs and forums
- Checks for academic dishonesty – essay answers can be automatically submitted to platforms such as Turnitin or iThenticate to check for plagiarism
- Lower long-term costs
- Instant feedback/instant marking
- Reliability (machine vs. human marking) – scoring is consistent
- Impartiality (machine vs. human)
- Greater storage efficiency – digital versus hard-copy exam scripts
- Able to distribute multiple versions of the exam
- Evaluate individual vs. group performance – how an individual has scored relative to the cohort
- Report-generating capability makes it possible to identify problem areas in learning
- Allows question styles to be mixed and matched within an exam
Disadvantages of online assessments
- Online assessments can be expensive to establish
- They are not suitable for all assessment types
- Cool is not necessarily good: just because something is new and easily available does not mean it is the best option; sometimes established methods are better
- There is potential for academic dishonesty and plagiarism; even with Turnitin, it is possible to tweak an answer so that it is not detected
- Online assessments indicate only whether answers are “right” or “wrong”, and not necessarily how students arrived at those answers
- There is potential for technical glitches, so every possible problem has to be anticipated
The Modified Essay Question – an evolving scenario
The questions in a written examination can be constructed in different ways, e.g. short answer questions (SAQs) or essay questions. However, both SAQs and essays are difficult to mark online. The essay question was therefore adapted into the Modified Essay Question (MEQ), which replicates the clinical encounter and assesses clinical problem-solving skills. The clinical case is presented as a chronological sequence of items in an evolving scenario. After each item a decision is required, and the student is not allowed to preview the subsequent item until that decision has been made. MEQs test higher-order cognitive skills, problem-solving and reasoning ability, rather than factual recall and rote learning, and are generally context-dependent.
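The session did not describe any particular software, but the sequential reveal that defines an MEQ can be thought of as a small state machine. The sketch below is an illustrative model only (written in Python; the class and prompts are hypothetical and not part of the original presentation): each item is revealed only after the previous decision has been committed, and committed answers cannot be revisited.

```python
# Illustrative sketch only: a minimal model of MEQ-style sequential delivery.
# Each item in the evolving case is revealed only after the previous decision
# has been committed, and committed answers cannot be changed.

class MEQScenario:
    def __init__(self, items):
        self.items = items      # question prompts, in the order the case evolves
        self.answers = []       # locked-in decisions, one per item already seen

    def current_item(self):
        """Return the next unanswered item; later items remain hidden."""
        if len(self.answers) < len(self.items):
            return self.items[len(self.answers)]
        return None             # scenario complete

    def submit(self, decision):
        """Lock in a decision; earlier responses cannot be revisited or changed."""
        if self.current_item() is None:
            raise RuntimeError("Scenario already completed")
        self.answers.append(decision)

# Hypothetical three-step evolving case (prompts and decisions are placeholders)
meq = MEQScenario([
    "A patient presents with chest pain. What is your next step?",
    "The ECG shows ST elevation. What is your management?",
    "The patient deteriorates. What do you do now?",
])
for decision in ["Order an ECG", "Activate the cath lab", "Begin resuscitation"]:
    print(meq.current_item())
    meq.submit(decision)
```

A real delivery platform would add timing, persistence and marking, but this one-way constraint is the core of the MEQ format.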
How useful is the MEQ
- Measures all levels of Buckwalter’s cognitive abilities: recall or recognition of isolated information, data interpretation, and problem solving;
- Measures five of Bloom’s six levels of Cognitive Processing: Knowledge, Comprehension, Analysis, Synthesis, and Evaluation;
- Construct and content validity;
- Dependable reliability coefficients;
- Correlates well with subsequent clinical performance
- Allows students to think in a completely new way, with firm pedagogical underpinnings
Challenges and limitations of using MEQs
- Questions often end up testing recall of knowledge rather than higher-order skills
- Structurally flawed compared with MCQs.
- MEQ re-marking tends to yield lower scores than those awarded by the original, discipline-based expert markers.
- The MEQ can thus fail to achieve its primary purpose of assessing higher cognitive skills.
Points to consider when planning a good test/examination
- Valid: The test measures what it is supposed to measure
- Reliable: (a) the same student should be able to score the same mark even if he/she had taken the test at a different time; (b) the test scores what it is supposed to score (a sketch of one common reliability coefficient follows this list)
- Objective: different markers should mark the same script to the same standard and award the same mark
- Comprehensive: tests what one needs to know
- Simple and fair: (a) the language is clear and the questions unambiguous; (b) tests an appropriate level of knowledge
- Scoreable: the mark distribution is fair
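On the “reliable” criterion: reliability is usually quantified as a coefficient (the MEQ discussion above refers to “dependable reliability coefficients”), although the session did not say how such coefficients are computed. As an illustration only, the hypothetical sketch below computes Cronbach’s alpha, one common internal-consistency measure, from an item-by-student mark matrix.

```python
# Illustrative sketch only: Cronbach's alpha as one common reliability coefficient.
# scores is a list of per-student lists, one mark per question item.
# Population variance is used throughout for simplicity.
from statistics import pvariance

def cronbach_alpha(scores):
    k = len(scores[0])                                       # number of items
    item_vars = [pvariance([s[i] for s in scores]) for i in range(k)]
    total_var = pvariance([sum(s) for s in scores])          # variance of total marks
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical marks for four students on a three-item MEQ
marks = [[4, 3, 5], [2, 2, 3], [5, 4, 5], [3, 3, 4]]
print(round(cronbach_alpha(marks), 2))   # values closer to 1 indicate more consistent scoring
```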
How to set Good MEQs?
- Understand the objectives of the assessment and be familiar with curriculum materials that relate to learning outcomes.
- Determine expected standards: what do you expect the candidate to know? It is important that there is a clear alignment between what students have learned and what they are being tested on. Always test what has been taught; do not test beyond students’ level of understanding.
- It is also a good idea to involve peers when setting MEQs and to get colleagues to try the questions out. This will enable you to determine whether the time allotted is adequate and to assess the adequacy of the mark distribution. Get comments and criticisms from your peers.
- Do not set questions in a silo. Forming MEQ committees is advisable; ensure a good distribution of specialists on the committee (e.g., paediatric and adult specialists, subspecialty groups).
- Provide sufficient realistic clinical and contextual information, thereby creating authenticity in the cases.
- Mix the components of the online assessment in order to increase the discriminant value of the examination.
- The design of the assessment should be contextual and sequential, with enough time to think. Because the data expand sequentially, students should not be allowed to return to their previous responses to change answers.
How to set FAIR MEQs
- Good quality images
- Data for interpretation:
  - Information must be fairly presented: don’t overwhelm the candidates
  - Choose relevant/reasonable tests: avoid esoteric tests (if possible) and do not give unnecessary data to interpret
  - If a test is essential to the MEQ but students are not expected to know how to interpret it, it can be used to teach – i.e. give them the report rather than leaving them to interpret the raw data (e.g., a CT brain image showing an intracranial haemorrhage (ICH))
- Keep to the curriculum
Q & A Session
Following the presentation by A/P Erle Lim, a lively discussion ensued; listed below are some questions from the Q & A session.
Q: Why do you find essay questions difficult to mark online? I have had the very opposite experience. Maybe your system is different. If your question is sufficiently clear, students will be able to cope, and I don’t find it difficult to mark.
EL: There are advantages and disadvantages. One advantage is that you don’t have to read bad handwriting.
Q: You talked about doing the assessment anytime and anywhere. How do we know whether students are doing it in groups or by themselves?
EL: I am sure there are settings that can be used to control how the assessment is taken.
Q: How do you go about the bell curve?
EL: We do get a decent bell curve, though it is not necessarily even. We accept the marks as they are, and for the School of Medicine we do not apply a bell curve. Individual exams are not tweaked; that is only done at an overall level.