
Conducting assessments online: How can you support your students?

In a previous post, I focused on the options that teachers may consider for conducting assessments online. While those options are important, it is far more important to start thinking about students and their well-being. The current COVID-19 outbreak is a difficult time for our students too, who may be dealing with a great deal of anxiety and stress. While we need to ensure the quality of our courses and assessments, we also need to give our students the best possible opportunity to complete their courses.

Therefore, when designing your assessments, it may be good to think about:

  • How you can minimise additional anxiety for students in these difficult times.
    Recognise that students may be facing challenging personal circumstances while working from home. Particularly when entire families are working/learning from home during this period, it may be challenging for them even to find a quiet spot to take the exam or attend your live lectures. Students may also have periods of illness during reading weeks, revision and/or examination periods.
  • What you can do to offer flexibility in your assessments while still maintaining accountability. For example, you could:
    • Be flexible with your deadlines.
    • Extend the duration of your assessments, or provide more time to complete them.
    • Offer multiple attempts for students to complete an assessment. You could then take the best attempt, or the average of the best two attempts.
    • Give choice in the topic, method, criteria, weighting or timing of assessments.
    • Give freedom and autonomy on the assessment format (essay/paper, poster, presentation, video, infographic).
    • Provide a second chance for students. Have students take a time-constrained closed-book online exam, then follow it up with a second copy of the same exam as an automated take-home online assessment. The student may use class notes, the text, or any other available reference materials, except checking with their peers or others. A composite grade is then calculated as the in-class exam score plus half the points missed on the in-class exam multiplied by the percentage score on the take-home exam; see the sketch after this list. (Adapted from “Better testing for better learning” by Murray, 1990, in College Teaching.)
  • Group work and/or projects. When using group projects as your assessment, think about whether you can offer flexibility in letting students work in a group or alone. Working in groups during this period may be stressful, particularly when students' grades depend on their peers' work as well.
  • How students may experience varying levels of digital literacies and internet connectivity, while completing their assessment tasks.
  • How you can provide a robust assessment method but still be able to meet the needs of students with diverse needs, including reasonable adjustment.
  • Finally, and most importantly, how your assessments can offer an equitable solution for both you and your students.
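To make Murray's composite-grade formula concrete, here is a minimal sketch in Python (the function name and the 0–1 scaling of the take-home percentage are my own choices, not from Murray):

```python
def composite_grade(in_class_score: float, max_score: float,
                    take_home_pct: float) -> float:
    """Composite grade per Murray (1990): the in-class exam score plus
    half the points missed on the in-class exam, scaled by the take-home
    exam score (take_home_pct as a fraction, 0.0-1.0)."""
    points_missed = max_score - in_class_score
    return in_class_score + 0.5 * points_missed * take_home_pct

# Example: 70/100 on the closed-book exam, then 80% on the take-home
# follow-up -> 70 + 0.5 * 30 * 0.8 = 82.0
print(composite_grade(70, 100, 0.8))
```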
[This post is a reproduction from my other blog on Towards Open Learning]

Conducting assessments online: What options do you have?

The COVID-19 pandemic has thrown many faculty members into limbo, particularly where assessments are concerned. While most have managed online teaching to some extent, either by recording their lectures or conducting live webinars, many are still grappling with how to conduct assessments online. In this post, I put together some options that you may wish to consider, be it using your campus learning management system (LMS) [examples include Moodle, Canvas, Blackboard, LumiNUS] or other open platforms available to you.

When developing your online assessments, some important aspects for you to consider include:

  • Establishing the purpose of your assessment, and what you hope to achieve
  • Aligning your assessments with the learning outcomes of your course
  • Communicating your expectations and requirements clearly to your students

More importantly, remember that online examinations are quite different from your usual face-to-face examinations. Simply converting your current face-to-face examinations into an online format can be a recipe for failure, as online exams do not take place in a confined, invigilated environment. So it is important to ask yourself the following questions when planning to conduct online assessments:

  • How do you make the assessment process meaningful to both you and your students?
  • Should the assessment be synchronous (real-time) or asynchronous (delayed)? Group or individual?
  • Is securing the assessment necessary? If so, at what level?
  • How do you plan to provide high quality feedback and opportunities for your students to act on feedback?

In this post, I list some online assessment options that you can use. I have also included some examples and available tools that you can consider.

Traditional assignments online
Traditional assignments can take the form of essays, case studies, article reviews, proposal writing or report writing, which students submit to you for review. Most LMSs have an Assignments tool or a Files Submission tool through which students can submit these traditional assignments online. If you are using Microsoft Teams or Google Classroom, you can similarly use their Assignments features to collect files. Additionally, you can use essay-type (short answer) questions within the Quiz tool for students to submit essays.
Whatever the tool, be transparent about the marking criteria so that students know what is expected of them, and be specific about the allocated marks and word limits. Finally, be sure to offer individualised, relevant feedback along with a summary of common mistakes for the entire class.

Quizzes (automated online assessment)
Online quizzes containing multiple-choice questions, multiple-response questions, fill-in-the-blanks, true/false, and matching questions can be developed to assess students' learning or their prior knowledge. Such automated quizzes can also be embedded within video lectures or micro-lectures to test students' learning; these are generally referred to as in-video quizzes.

Timed online assessment
If you are considering conducting your mid-semester and final exams in an online format, you can design assessments that are time-constrained. To minimise cheating in such exams, you should also consider the following (a sketch of question randomisation and personalisation follows this list):

  • randomisation of the questions and of the options within your MCQs
  • personalisation of questions (e.g., numerical values are personalised for individual students; questions are selected from a large pool of questions of the same difficulty level)
  • sequencing of questions, structured so that students cannot return to a previous question or section
  • requiring students to provide a short justification (rationale) for each MCQ (sometimes referred to as two-tier MCQs).
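Most LMS quiz tools implement randomisation and personalisation for you; the sketch below merely illustrates the idea outside any particular LMS (the question-dict format, function names and seeding scheme are hypothetical):

```python
import random

def personalised_quiz(student_id: str, question_pool: list[dict], n: int = 10) -> list[dict]:
    """Draw a per-student quiz. Seeding the RNG with the student ID gives
    each student a reproducible but different question selection and
    option order. Questions are dicts like {"stem": ..., "options": [...]}."""
    rng = random.Random(student_id)           # deterministic per student
    chosen = rng.sample(question_pool, n)     # random subset of the pool
    return [dict(q, options=rng.sample(q["options"], len(q["options"])))
            for q in chosen]                  # shuffled copies; pool untouched

def personalised_value(student_id: str, low: float, high: float) -> float:
    """Personalise a numerical value in a calculation question."""
    return round(random.Random(student_id + ":v1").uniform(low, high), 2)
```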

Online interaction
The Forum or Discussion tool, blogs and wikis facilitate asynchronous interaction. You can use these tools to monitor student learning via contributions to online forums, chats, blogs and wikis. These contributions can take the form of reading summaries, or of collaborative learning where students work in small groups to provide critical peer review of each other's work.

Group assessments online 
Students can work in groups to create online presentations, project artefacts and upload their presentations for you to review or to be reviewed by peers or both.

Critical reflection and meta-cognition  
You can consider the use of electronic portfolios, online journals, logs, diaries, blogs, wikis, or embedded reflective activities. For any of these, you can use peer and self-assessment to assess the critical reflection pieces.

Online oral examinations
One-on-one or small-group oral examinations can be conducted via any video conferencing tool such as Skype, Microsoft Teams, Zoom or Google Duo. Additionally, you can have students role-play or participate in debates via these platforms to assess their learning. You can use the recording function in these tools in case you would like to review the sessions at a later date.

Getting students to submit assessment questions
You can get students to create and submit assessment questions online for each topic or course. An online quiz with a two-part short answer/essay question can be used to get students to (1) create and input their assessment question, and (2) explain what is being assessed, why it is important for student learning, and how it relates to the learning outcome(s) of the topic or course. Alternatively, you can get students to submit their assessment questions on the forum, and have conversations with peers and instructors about the strengths and weaknesses of each question via the forum.

Take-home quizzes on reading assignments
Set a take-home quiz for every reading assignment, with roughly one question per page of the text and the order of the questions following the order of the pages. The test questions (stems) should be “objective” in a fairly literal sense, and the answers (options) quite specific; to answer a question, students should need to do little more than find the right sentence or paragraph and read it with a certain degree of understanding.
(extracted from “Tests that also teach” by Williams, 1988, in American Philosophical Association Newsletter on Teaching Philosophy)

Group multiple-choice test
When taking a multiple-choice test, students may consult with their peers if they wish; but each student must finally complete the online assessment individually, and is graded individually.
(adapted from “Better testing for better learning” by Murray, 1990, in College Teaching)

Paired testing
This assessment consists of a series of thirty-question exams, each in two parts. The first set of fifteen questions is taken individually by each student (this can be a timed quiz). The second set of fifteen questions is assigned to student teams of two; teams may discuss each test item but are not required to. Finally, each student turns in an individual answer to the quiz.
(extracted from “Peer-mediated testing: The effects of an alternative testing procedure in higher education” by Hendrickson, J. M., Brady, M. P., and Algozzine, B., 1987, Educational and Psychological Research)

This blog post is adapted from a resource guide on designing effective online assessments that I developed some years back, and which I use for conducting workshops on designing online assessments at my campus.

[This post is a reproduction from my other blog on Towards Open Learning]

Virtually Vygotsky: Using Technology to Scaffold Student Learning

Technology in Pedagogy, No. 20, April 2014
Written by Kiruthika Ragupathi


Introduction

What is scaffolding? How does it help learning? Can technology be used? Does it work? And who is Vygotsky? These were the questions that Adrian Lee, a senior lecturer in the Department of Chemistry at the National University of Singapore, set out to answer in this session on “Virtually Vygotsky: Using Technology to Scaffold Student Learning”. Through this session, Adrian showcased technologies that can be used before, during and after class to provide appropriate scaffolding to students.

Scaffolding, he says, can come in a variety of forms: increasing engagement, providing alternative learning strategies, resolving learning bottlenecks, and (paradoxically) taking away support to allow students to master material, among other things. The Zone of Proximal Development (ZPD) underpins some of the ideas of constructivism, and according to Vygotsky (1978), “the actual developmental level characterizes mental development retrospectively, while the Zone of Proximal Development characterizes mental development prospectively.” Vygotsky believed that when a student is at the ZPD for a particular task, providing the appropriate guidance (scaffolding) will give the student enough of a “boost” to achieve the task.

[Figure: the Zone of Proximal Development]

The term ‘scaffolding’, coined by Bruner, was developed as a metaphor to describe the type of assistance offered by a teacher or more competent peer to support learning. In the process of scaffolding, the teacher helps the student master a task or concept that the student is initially unable to grasp independently, offering assistance only with those skills that are beyond the student’s capability. Once the student masters the task with the benefit of scaffolding, the scaffolding can be removed and the student will be able to complete the task again on their own. What is of great importance, therefore, is enabling the student to complete as much of the task as possible unassisted (Wood et al., 1976). In the constructivist approach, the area of guidance grows as we teach the subject matter and as the student’s mastery of the subject concepts changes. The model of instructional scaffolding can be illustrated as below:

[Figure: the model of instructional scaffolding]

Adrian also emphasized the seven principles for good practice in undergraduate education by Chickering and Gamson (1987), which are still incredibly pertinent to today’s teaching:

  1. Encourages contacts between faculty and students
  2. Develops reciprocity and cooperation among students
  3. Uses active learning techniques
  4. Gives prompt feedback
  5. Emphasizes time on task
  6. Communicates high expectations
  7. Respects diverse talents and ways of learning

It is these principles, he said, that shape his teaching and help him decide when and how to provide the much-needed scaffolding for his students’ learning.

Note:
For more information on how to use IVLE to implement the 7 principles of effective teaching, please see: http://www.cdtl.nus.edu.sg/staff/guides/ivle-tip-sheet-2.pdf

Scaffolding student learning through technology

Adrian illustrated some technologies that he uses to scaffold student learning, some of which he highlighted were available within IVLE. The first five items are those that are used outside the classroom while the last item is a technology that is used inside the classroom.

1. Lesson plan
He explained that he uses the IVLE lesson plan to connect all the instructional resources for the module, and as a one-stop location for students to access these resources. He uses a topic-based approach rather than the commonly adopted weekly approach, as the topic approach can easily act as an index of his lecture notes and remains accurate from year to year, being independent of changing national holidays.

Five to six different concepts are covered each week. Each topic in the lesson plan usually consists of:

  • Play-lists – to allow students to access the online lectures in the recommended sequence before the face-to-face lessons
  • Weblinks – to provide additional materials, wherever necessary, to enhance student understanding
  • Online quizzes – to test student understanding of concepts. Each quiz consists of about 10 MCQ or “fill-in-the-blank” questions
  • Multimedia tutorials – to support the various class exercises through tutorials that discuss the concepts
  • Spreadsheets – to enable students to work out problems, thereby boosting understanding through interactive visualizations

A sample lesson plan for one topic is shown below:

[Figure: a sample lesson plan for one topic]

2. Online quizzes

Online quizzes are mainly used to understand what students don't know. Students watch lecture videos before class (a flipped-classroom approach) and take a short quiz on the lecture content. Each quiz question is an MCQ or "fill-in-the-blank" question, but it also requires students to provide a rationale for their chosen answer. Each student is given 5 attempts: when a student gets a question wrong, feedback is provided along with a hint pointing to the right answer, so students are generally able to get full marks within the allowed 5 attempts. The rationale students provide for each question gives insights into what they don't know. Adrian explained that his focus is mainly on students' first attempt at the quiz, as this is a good gauge of their understanding. He uses this information to fine-tune his lectures, pick out discriminatory questions and address student misconceptions.
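If your quiz tool can export attempt data, this kind of first-attempt analysis is straightforward. A minimal sketch with pandas, assuming a hypothetical export with student, question, attempt and correct columns:

```python
import pandas as pd

# Hypothetical export: one row per (student, question, attempt)
attempts = pd.DataFrame({
    "student":  ["s1", "s1", "s2", "s2", "s3", "s3"],
    "question": ["Q1", "Q2", "Q1", "Q2", "Q1", "Q2"],
    "attempt":  [1, 1, 1, 1, 1, 1],
    "correct":  [1, 0, 0, 0, 1, 1],
})

# Keep first attempts only, as Adrian does, then compute the fraction of
# students answering each question correctly on attempt 1. Low values
# flag discriminatory questions and likely misconceptions.
first = attempts[attempts["attempt"] == 1]
print(first.groupby("question")["correct"].mean().sort_values())
```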

Samples of the online quiz, designed using the IVLE Assessment tool, are illustrated below:

[Figure: sample IVLE online quiz questions]

3. Interactive visualizations

Adrian uses Excel spreadsheets to design interactive visualizations. For each question appearing on the online quiz (discussed above), he provides students with at least one related interactive visualization. Students can interact with these visualizations while attempting the online quizzes, and are able to see the changes that occur when they alter the values provided in the spreadsheets.

[Figure: a sample interactive visualization]
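Adrian builds these in Excel; as a rough Python analogue of the same idea (the first-order decay example and all names are my own, not from the session), an interactive plot with a slider might look like this:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

t = np.linspace(0, 10, 200)
fig, ax = plt.subplots()
plt.subplots_adjust(bottom=0.25)            # leave room for the slider
line, = ax.plot(t, np.exp(-0.5 * t))
ax.set_xlabel("time"); ax.set_ylabel("concentration")

# Students drag the slider to vary the rate constant k and watch the curve.
k_slider = Slider(plt.axes([0.2, 0.1, 0.6, 0.03]), "k", 0.1, 2.0, valinit=0.5)

def update(_):
    line.set_ydata(np.exp(-k_slider.val * t))
    fig.canvas.draw_idle()

k_slider.on_changed(update)
plt.show()
```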

4. Peer assessment

Peer assessment is an important component that can be used to enhance student learning. Adrian uses peer assessment to have students assess their peers' essays, and provides a grading rubric that students can use as a guide while marking. Finally, he makes it a point to return the peers' feedback to the individual students. Each student marks at least 3 essays, which allows students to compare others' work with their own. His class being a heterogeneous one prompted him to moderate student grades, taking into account easy-graders and strict-graders (a sketch of one common moderation approach follows below). In addition, he also gets students to mark their own essay after having marked their peers' essays; this acts as a reflective element for their own learning.

Thus, with peer assessment, students get an opportunity to observe their peers' learning processes and gain a more detailed knowledge of their classmates' work. This fosters increased responsibility, encouraging students to be fair and accurate when assessing their peers' essays, while also aiding self-assessment of their own work.
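The session write-up does not say exactly how Adrian moderates for easy- and strict-graders; one common approach is z-score moderation, sketched below under that assumption:

```python
from statistics import mean, stdev

def moderate_marker(grades: list[float], target_mean: float, target_sd: float) -> list[float]:
    """Map one marker's grades onto the cohort's mean and spread, damping
    the effect of easy- and strict-graders."""
    m = mean(grades)
    s = stdev(grades) if len(grades) > 1 else target_sd
    if s == 0:                                  # marker gave identical grades
        return [target_mean] * len(grades)
    return [target_mean + target_sd * (g - m) / s for g in grades]

# Example: a strict grader (marks around 55) and an easy grader (around 80)
cohort = [55, 60, 50, 80, 85, 78]
tm, ts = mean(cohort), stdev(cohort)
print(moderate_marker([55, 60, 50], tm, ts))    # strict grader, shifted up
print(moderate_marker([80, 85, 78], tm, ts))    # easy grader, shifted down
```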

Adrian also suggested the use of TeamMates, an online peer evaluation tool used by some of his colleagues in the Department of Chemistry, though he acknowledged that he has yet to try it personally (refer to Technology in Pedagogy Series, No. 18, on “Leveraging peer feedback”).

5. Online tutorials

Adrian uses Camtasia Studio to create his online tutorials. These video tutorials guide students through some of the main ideas discussed for a particular topic and allow them to work on their homework assignments without needing to attend his lectures. Other tools available at NUS for creating such online tutorials and/or lectures are:

  • Adobe Presenter (Breeze) and Adobe Presenter Video Creator,
  • Camtasia Relay, and
  • Ink2Go.

6. Learner response systems

Learner response systems can be used for formative assessment to guide teaching: to measure what students are thinking and address it immediately in class. They can be used to check students' prior knowledge, probe their current understanding, and uncover student misconceptions. They can also provide feedback to instructors about their students' understanding, and to students about their own understanding. These learner response systems fall into the category of "taking technology into the classroom". The two options that Adrian uses for this purpose are clickers and questionSMS.

Some ways in which Adrian uses learner responses systems to integrate with the idea of scaffolding:

  1. Get students to answer the questions on an individual basis based on what is being discussed in class.
  2. Ask a question based on the lecture notes, but on a topic/concept that has not been discussed in great detail.
  3. Students can work in groups of 3-4 to answer the questions posed, and come back with a group answer.
  4. Get students to answer individually, and then work in groups to analyse the questions posed. Finally, get students to answer the question again individually.

The distribution of answers is then displayed immediately to the entire class. If the results show that a significant number of students chose wrong answers to a question, the teacher can revisit or clarify the points he or she just made in class; if most students chose the correct answer, the teacher can simply move on to another topic.
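A minimal sketch of that move-on-or-revisit decision (the 70% threshold is an illustrative choice, not from the session):

```python
from collections import Counter

def summarise_responses(responses: list[str], correct: str, threshold: float = 0.7):
    """Tally clicker responses and flag whether to revisit the concept."""
    counts = Counter(responses)
    share_correct = counts[correct] / len(responses)
    decision = "move on" if share_correct >= threshold else "revisit"
    return counts, share_correct, decision

counts, share, decision = summarise_responses(
    ["A", "B", "A", "C", "A", "A", "B"], correct="A")
print(counts, f"{share:.0%}", decision)   # 4/7 correct (57%) -> revisit
```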

Q & A session

Following the presentation by Adrian, participants had the following questions:

Q: Do you use the online quizzes as a formative or summative tool? If so, how does it provide scaffolding?
AL: I use them for formative assessment purposes. If you want students to tell you what they don't know, then you cannot have a summative assessment; we need formative assessment. First, I allow students 5 attempts, and as there are only 4 choices to each MCQ, students should be able to get 100% by the fifth attempt. Second, I award only a participatory grade, so there is no pressure on students to get it all right. The first attempt requires students to key in their rationale for choosing the answer; based on the rationale, I am able to determine whether students understand important concepts, and pinpoint topics where students need further guidance.

Q: How do you decide on what type of questions to use in an online quiz, without understanding students' prior knowledge?
AL: In designing the MCQ questions, experience with handling the course in previous years definitely helps. If it is the first time, then use multiple-tier MCQs (with rationale) or free-text answers. Based on the answers that students give, and after a year's worth of experience, we would then be able to design better MCQs.

Q: When you use many technology tools in your course, do students complain?
AL: During my first class, I explain to my students why I use certain tools, and how they can benefit by using those tools.

Q: How much support is there for learner response systems in the literature?
AL: Current research has good support for the use of learner response systems. It really depends on how you use them.

Q: When you ask MCQ questions using learner response systems, can students share their answers with their peers?
AL: Students are developing a learning community. Although community learning helps, at times individual learning is equally important. Hence I would use all approaches: single answer; single answer after discussion with peers; group answer.

Q: How often do you use clicker questions in each lecture?
AL: I use about 1 or 2 questions per lecture. Students seem to appreciate the use of such questions in the classroom. I allow about 1-2 minutes for students to answer each question.


Suggestions from group discussion

Following the Q & A session, Adrian posed the following questions for participants to discuss in groups, reflecting on how they employ instructional scaffolding in their own classrooms and disciplines:

  1. Do you use technology to scaffold your teaching?
  2. How do you employ scaffolding in your teaching?
  3. Why use technology to provide scaffolding?
  4. How does scaffolding support best practice in your classroom?

Adrian asked participants to ponder how they get students to cycle through their learning, and to keep a record of their learning and progress. A summary of the group discussion is given below:

Online quizzes and feedback:

  • Pre-lecture reading quiz
  • Online survey forms: used more as surveys than as quizzes, to collect mid-term feedback from students.

Peer assessment / Peer feedback:

  • An online form was used for collecting peer feedback on group work, using a grade (high, moderate, low) along with qualitative comments giving the reasons for that grade. Participants agreed that merely knowing that there is peer assessment encourages better behaviour in students.
  • Participants also felt that the use of peer feedback moderates group behaviours, improves group dynamics, enhances students' reasoning and rationalizing abilities, and helps avoid free-rider problems in group work.
  • It was also shared that, to get students to reflect on their own learning and to improve group dynamics, it is important to get students to sit down as a team and explain to each other what their contributions are. Generally it was felt that peer assessments are reasonably accurate, though weaker students tend to feel they are better than they are, while stronger students feel that they can do better. Once they talk to each other, weaker students tend to shed this natural bias and grade more accurately.

Learner response systems:

Participants also shared other learner response tools that could be used besides clickers and questionSMS; all of these tools can be used from any online platform and/or mobile device.

Other useful ways of scaffolding:

  • Facebook (FB) groups can be used for peer learning and peer review: students can comment and discuss, and quiet students are more empowered to participate in the discussions, particularly since the FB space is one that students are familiar with and comfortable using (refer to Technology in Pedagogy Series, No. 1 on Facebook for Teaching and Learning).
  • Wikis / blogs can be used to get students to teach others as well as learn from each other (refer to Technology in Pedagogy Series, No. 2 on The Lunchtime Guide to Student Blogging and Technology in Pedagogy Series, No. 5 on Wikis for Participatory Learning). However, it was noted that some help and guidance is needed to get students to use wikis.
  • YouTube videos to explain concepts
  • Google docs to do peer work. (refer to Technology in Pedagogy Series, No. 3 on Google Docs and the Lonely Craft of Writing)

References

Chickering, A. W., & Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education. American Association for Higher Education Bulletin, 39, 3–7.

Ragupathi, K. (2010). Using IVLE to implement the 7 principles of effective teaching. http://www.cdtl.nus.edu.sg/staff/guides/ivle-tip-sheet-2.pdf

Vygotsky, L. S. (1978). Mind in Society: Development of Higher Psychological Processes, Harvard University Press, 86–87.

Wood, D., Bruner, J. S., & Ross, G. (1976). The role of tutoring in problem solving. Journal of Child Psychology and Psychiatry, 17, 89–100.

How shall we know them? Learning through assessment

Technology in Pedagogy, No. 19, March 2014
Written by Kiruthika Ragupathi

The vast majority of us learn [1]; the question, of course, is what we learn. Simon Watts, an associate professor in the Department of Chemistry, Faculty of Science, National University of Singapore, believes that all aspects of what we do as teachers should facilitate student achievement of learning outcomes; this includes both 'teaching' and 'assessment' activities. Of course, achieving learning outcomes by reviewing a final exam one has just failed is clerically a little inconvenient, but in learning terms the assessment would have achieved one of its primary objectives.

Having spent time working on learning and personal support systems in higher education in both Oxford and New Zealand, A/P Watts talked about his abiding fascination with the way people learn and think [2], and how they communicate and work in groups. Thus it came as no surprise when he started the session by elaborating on his pedagogic research, which is centered around culture [3,4]: the NUS culture, particularly the student-staff culture. The paradigm for the work is behaviours, attitudes and customs, though he is sure that there will be other external drivers that may reinforce these. Without understanding the prevailing culture, it is very difficult to study the learning processes, says A/P Watts, particularly when the cultural paradigm here in Singapore is so interesting.

A learning process characterized by memorizing and rote learning, a "low-level learning culture", does not give a deep grasp of the subject matter, and will affect the ability to process the full breadth and implications of the material concerned [5]. It has been said that this culture is common amongst many Singaporean students. However, A/P Watts proposes that the learning process be treated like a tango (a dance where one partner leads and the other follows), and that we, as learning facilitators, have a duty to lead this dance. His hypothesis is that the current culture is a result of this tango.

In this session, A/P Watts discussed the initial development of two applications that facilitate student learning:

  • pre-laboratory tests
  • secure Modified Essay Question (MEQ)

The IVLE Assessment tool was used to administer both these tests. The development of these applications shows how IVLE can be used to help students achieve planned modular learning outcomes, and to examine how staff facilitate higher-order student learning.

Pre-laboratory tests

The pre-laboratory tests were designed for CM2192, a compulsory module in Chemistry. As facilitators, we often face the challenge of teaching students who simply don't read at all or do not know how to read the textbook/scripts; they seem to read as if they need to memorize everything. This is a huge problem when teaching in the laboratory, since time is limited for any kind of theory discussion. Hence the use of pre-tests, which it is hoped will help students focus their reading and help them prepare for and understand the most important parts of the material before performing the experiments.

Therefore, the desired learning outcomes of the pre-lab tests designed by A/P Watts and the Chemistry Teaching Team were safety, familiarity with the script, and subject familiarity: not necessarily requiring in-depth knowledge of the subject, but the basic knowledge needed for the laboratory exercise. The laboratory exercises were designed such that half were based on analytical chemistry and the other half on physical chemistry. The focused reading of the scripts in preparation for the pre-tests forces students to think about and understand what they are about to do, and also provides a framework for studying key concepts for the laboratory exercises. These tests act like "door wardens" with an outcome of pass or fail: any student unable to pass the test within 2 attempts is not allowed to take the practical, subtly informing students that preparation is needed.

The teaching team drafted a total of 15 unique questions per practical, reflecting the desired learning outcomes. Although each member of the team chose the form of their questions, there were effectively two categories: questions that were yes/no or true/false (i.e., a 50% chance of being right in a blind guess), and more difficult questions: "fill in the blanks" or "which of the following is FALSE or TRUE? (1 of 4)". Each pre-lab test has 6 questions, of which students need to get 5 correct to pass. The IVLE Assessment tool was used to design the pre-lab tests, with every student taking the pre-lab test before every practical exercise. Each test is open for 60 minutes, available only and unique to the batch of students taking that particular laboratory exercise.
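Because the two question categories differ sharply in blind-guess odds, a quick calculation (my own illustration, not from the session) shows how question type interacts with the pass mark of 5 out of 6:

```python
from math import comb

def pass_by_guessing(n: int = 6, need: int = 5, p: float = 0.5) -> float:
    """Probability of passing (at least `need` of `n` correct) by blind
    guessing, where p is the chance of guessing one question right."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

print(f"{pass_by_guessing(p=0.5):.1%}")   # all T/F questions: ~10.9%
print(f"{pass_by_guessing(p=0.25):.1%}")  # all 1-of-4 questions: ~0.5%
```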

The questions that A/P Watts asked himself were:

  • Is this technique in this application effective for student learning and in the effective use of staff time?
  • Do question types used influence pre-lab scores?
  • Are there marked differences between the analytical and physical chemistry halves of the course?
  • Do students "learn"? (and not merely learn the questions or the assessment method)
  • Is there a link between pre-lab tests and final assessments?

From this experiment, it was fairly clear that staff time was reduced considerably, from 35 hours per semester (based on individual tests twice for each practical each week) to about 20 hours per semester (of which 70% is spent manually downloading marks and 20% re-setting tests). With further IVLE development, it is estimated that this might take only 5 hours per semester.

It also became apparent that students had more difficulty with analytical than with physical chemistry questions. However, it was also noted that the physical chemistry section had more of the "50% chance" T/F questions. Some participants proposed that negative marking on the T/F question type could take care of this issue, while others suggested getting students to key in the rationale for choosing a particular option, or to justify their choice over the other options.

A/P Watts recognized that it was difficult to gauge the effectiveness of student learning, though students have reported that these pre-lab tests helped them better understand and focus on the experiment.

Pedagogical advantages that pre-tests offer

A pre-test measures the learning gained through pre-lectures, readings or scripts, and establishes students' prior knowledge before they participate in an activity. It should be noted that such pre-tests can be used not only in the laboratory setting but also for any activity that requires prior preparation from students.

Reasons for having pre-tests with focused readings/ video lectures are:

  • They help quantify the knowledge attained and whether the desired learning outcomes were achieved by students with diverse learning styles and varied preparation. More specifically, the tests indicate how well students are prepared for the learning activities and how they are learning.
  • The focused readings / video lectures force students to think about and understand what they are about to do, and also provide them with a framework for studying key concepts for tests.
  • They should also promote the curiosity and imagination to predict the outcomes of experiments or activities, while also promoting good reading/listening comprehension strategies such as previewing, re-reading, making connections, and self-questioning before, during and after reading the scripts.
  • It is hoped that they also improve students’ participation and engagement during learning activities/laboratory exercises.
  • The data collected from the tests may enable facilitators to target students requiring extra help and will also help in identifying teaching and learning methods that need to be changed or developed.

Modified Essay Question (MEQ)

Modified Essay Questions (MEQs) are often included in assessments to test higher-order cognitive skills, as the more commonly used multiple-choice questions (MCQs) are generally regarded as testing knowledge recall only. An MEQ is thus a compromise between an MCQ and an essay, and sits between these two test instruments in its ability to test higher cognitive skills and in the ease of marking to a consistent standard.

A/P Watts used MEQs as part of the final examination (50% of the assessment weighting) for the module CM4282 (Energy Resources). The desired learning outcomes were: subject familiarity; the ability to quantify abstract problems; the ability to use simple models; and lateral and logical reasoning.

For developing the MEQs, he again used the IVLE Assessment Tool, but now in a secure environment. Students were allowed to take the test in a secure mode while being able to access specific documents (a Singapore Statistical and Information Pack and a Singapore government policy paper:  Singapore National Climate Change Strategy).

The MEQs were unique in that they employed a "no turning back" design. To illustrate, take for example an MEQ with a four-part question: students move from Q1a → Q1b → Q1c → Q1d. The student needs the answer to (for example) Q1a to answer Q1b, and so on. Hence, if a student gets Q1a wrong, he/she will be disadvantaged in the questions that follow which depend on the Q1a answer. In this situation, it would not be fair to penalize the student in questions 1b, 1c and 1d for the error in 1a. Hence, after a student answers Q1a, they are given the correct answer to 1a; they cannot go back and correct their original answer, but can only proceed to the next question, now with a good 1a answer to build on. A sample MEQ is given below:

[Figure: a sample MEQ]
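A minimal sketch of the "no turning back" flow described above; this illustrates the design only, not the IVLE implementation (the dict format for parts is hypothetical):

```python
def run_meq(parts: list[dict]) -> list[str]:
    """Administer a multi-part MEQ: after each part, reveal the model
    answer so later parts can build on a good answer, but never allow a
    student to revise an earlier response."""
    responses = []
    for part in parts:                                # strictly forward-only
        responses.append(input(part["prompt"] + " > "))
        print("Model answer:", part["answer"])        # carry this forward
    return responses

# Example with a two-part question, Q1a feeding Q1b:
# run_meq([{"prompt": "Q1a: ...", "answer": "..."},
#          {"prompt": "Q1b (using the Q1a answer): ...", "answer": "..."}])
```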

The research questions that A/P Watts asked himself were:

  • Is this technique in this application effective for student learning and in the effective use of staff time?
  • How does using a computer affect student performance [6]?
  • How does “no turning back” affect student performance?
  • Do students “learn” from the exam?

The MEQ allowed staff to mark the final exam in 4 hours, mainly due to the structure of the IVLE assessment, thereby reducing staff time. Students indicated that it was the hardest module they had done, but also the most enjoyable, and they had suggestions for improvement. However, students also felt that the MEQ's "no turning back" structure prevented them from getting an overview of the full set of questions, and hence did not allow them to plan at the beginning of the exam before attempting the questions. Taking this feedback into consideration, A/P Watts has decided to allow students access to a soft copy of the paper during the exam. As a follow-up, he also felt the need, at least until IVLE gains "drawing" input abilities, to use the MEQ format more for course-work exams rather than only for final examinations.

A/P Watts ended his presentation and opened up the discussion posing questions and seeking feedback and ideas from participants.  Some of the questions that were posed include:

  • How do we test absolute student learning and how do we know what they are learning?
  • Participants’ thoughts on the “no turning back” structure

This stimulated a lively discussion with participants sharing their experiences in the laboratories and in the use of assessments.

Q: Most of the time in laboratory classes, 2% of the unprepared students take up 98% of staff time. So with the use of the pre-tests, does the preparation of the students match the pre-test scores?
SW: More students were prepared, but unfortunately students were still focused on getting the results that they need to get, and generally do not come with probing questions to be discussed during the lab. Often we see students start to think and prepare for the labs only when they start writing their reports; the pre-lab questionnaire is a way to get students to start thinking about it beforehand.

Comment from participants: You mentioned that students' analytical scores improve over the term. Could this be because they are involved in the lab work before they learn a concept, so that by the end of term their understanding of the concepts is better?

Q: Is it a lot of work to run the pre-laboratory tests?
SW: Combining and changing questions for each set of pre-lab tests is automatic, and opening up tests to different groups of students does not require much time. The time needed is for the preparation of the questions, which is a team effort; as mentioned earlier, time is saved in marking them, and in getting students better prepared for the laboratory exercises.

Q: You only allow them two attempts at the pre-lab test. Will this disadvantage the students?
SW: Yes, only two attempts are allowed, as we do not have a big pool of questions to recycle. However, when students fail both attempts, this alerts the academic who runs the practical session to assist if there is a genuine problem. Students who do not make the effort to attempt the questions will not be allowed to do the practical.

Q: Wouldn't changing numerical values allow you to create a large pool of questions?
SW: Yes, that is entirely possible, and something I badly want to do. I have not found an easy way to do this within the IVLE system, but I will be exploring ways to make that happen with the IVLE team.

Q: Are the students asking more qualitative questions during the lab after the pre-lab? Many students just give you answers that they got from their seniors' reports.
SW: Not necessarily. I asked students questions about their results during the lab and did not really get better qualitative answers. Some will just give you answers from their seniors' reports.

The discussion then moved on to improving the laboratory learning experience and making it engaging and fun. One participant mentioned that it might be good to have at least one deliberately flawed practical in the first year, as a way of training students in the lab as a practical skill. Others felt that practical work is progressive, and that it is important to address the average NUS student: it is not that their attitudes are bad, but they need some encouragement to progress over the years. Participants also noted other factors that play a part: curriculum design, validation by external bodies, and the breadth of coverage in the first and second year. They also felt the need to design for the common student without dumbing things down: get the structure right and get students to move a little out of their comfort zone, while keeping teaching team colleagues on board. There were suggestions to supplement the current pre-lab video lectures with virtual practicals using Java applets. As teachers, we aim to foster an inquisitive nature in our students, although often one asks observational questions first.

Others highlighted that if we consider culture to be a problem (i.e., if seniors are passing lab materials and reports to juniors), why are we as facilitators not riding on that and using it to our advantage? While grappling with a lot of things, we teach skills, but we also need to teach attitudes. Having talked to a lot of students: they want to learn, but not at the expense of their grades; so try to make the system such that we encourage them along the way.

Another issue discussed was the training of Graduate Teaching Assistants (GTAs), as sometimes lab technicians and GTAs might give away the correct results and get students to repeat the exercise, rather than help them probe further.

Finally, the discussion moved to MEQs, particularly their use for course-work tests rather than only for the final exams. It was largely agreed that using essay-type questions for large classes, or resorting to MCQs, was not the way to enhance student learning. Hence, MEQs would be a good option to consider, particularly if easy questions are designed for the first few tests and more difficult questions are introduced as the semester proceeds. This should show an improvement in the assessment, attributable to students' growing familiarity with both the content and the process over the semester.

 

References

  1. Basic cognitive theory (e.g. Piaget), for example: Piaget, J. [1951] The Psychology of Intelligence. Routledge and Kegan Paul, London.
  2. Scheffer, B. K., Rubenfeld, M. G. [2001] Critical thinking: what is it and how do we teach it? In Current Issues in Nursing, 6e, J. M. Dochterman and H. K. Grace (eds.), Mosby, St. Louis.
  3. Baker, D., Taylor, P. C. S. [1995] The effect of culture on the learning of science in non-western countries: the results of an integrated research review. Int. J. Sci. Educ., 17, 695–704.
  4. Bishop, R., Berryman, M. [2006] Culture Speaks: Cultural Relationships and Classroom Learning. Huia, Wellington, NZ.
  5. Biggs, J. B. [2003] Teaching for Quality Learning at University, 2e. Buckingham: Open University Press.
  6. Sloan & McGinnis, (1978); Russell & Wei (2004); Powers, et al., (1994); Mogey (2010); Burke and Cizek (2006); Russell and Haney (1997); Mogey et al., (2010), etc.

Online Assessments

Technology in Pedagogy, No. 17, May 2013
Written by Kiruthika Ragupathi

Online learning and educational apps seem to be the new buzzwords in education. The advent of MOOCs, educational applications (apps) and online lectures delivered via iTunesU, Coursera and TED looks set to bring about a paradigm shift in modern pedagogy. Yet it is always important to be mindful of the educational principles that underpin good (and sound) pedagogy, says Erle Lim, an Associate Professor at the Department of Medicine, Yong Loo Lin School of Medicine, National University of Singapore. As educators, it is important to ask, "Are we engaging our students?", and more importantly, "Are we teaching students to curate knowledge rather than just acquire lots of meaningless facts?"

Assessments and high-stakes examinations are therefore important to determine if students are learning (and applying what they learn). Despite the healthy skepticism about these new-fangled ideas, we need to ask ourselves if we should embrace technology and better utilize smart devices and online tools to fully engage our students and test their ability to apply what they have learned, rather than just regurgitate “rote” knowledge. In this session, A/P Lim discussed the potential for online assessments – how to use them, and when not to.

A/P Lim started the session with a brief introduction to online assessments, and then highlighted the benefits and problems associated with using online assessments.

Assessment + computer + network = online assessment.

Benefits of online assessments

  • Instant and detailed feedback – how students perform, and how top-end students perform in relation to bottom-end students
  • Flexibility of location and time – students can log on and take the exam at any time (it is important here to differentiate between formative and summative assessments)
  • Multimedia – exams become more lively when multimedia objects are incorporated
  • Enables interactivity – blogs, forums
  • Academic honesty checks – essay answers can be automatically submitted to platforms like Turnitin or iThenticate to check for plagiarism
  • Lower long-term costs
  • Instant feedback/instant marking
  • Reliability (machine vs. human marking) – scoring is impartial
  • Impartiality (machine vs. human)
  • Greater storage efficiency – digital versus hard-copy exam scripts
  • Ability to distribute multiple versions of the exam
  • Evaluation of individual vs. group performance – how one individual has scored vs. the cohort
  • Report-generating capability – allows instructors to identify problem areas in learning
  • Ability to mix and match question styles in the exams

Disadvantages of online assessments

  • Online assessments can be expensive to establish
  • They are not suitable for all assessment types
  • Cool is not necessarily good: just because something is new and easily available does not make it the best; sometimes established methods are better
  • There is potential for academic dishonesty and plagiarism; even with Turnitin, it is possible to tweak an answer to avoid detection
  • Online assessments give only the "right" and "wrong" answers, and not necessarily insight into how students arrived at them
  • There is potential for glitches, so every problem has to be envisaged in advance

The Modified Essay Question – an evolving scenario

The questions in a written examination can be constructed in different ways, e.g., as short answer questions (SAQs) or essay questions. However, both SAQs and essays are difficult to mark online. The essay question was therefore modified into the "Modified Essay Question" (MEQ), which replicates the clinical encounter and assesses clinical problem-solving skills. The clinical case is presented as a chronological sequence of items in an evolving scenario. After each item a decision is required, and the student is not allowed to preview the subsequent item until that decision has been made. MEQs test higher-order cognitive skills, problem-solving and reasoning ability, rather than factual recall and rote learning, and are generally context-dependent.

How useful is the MEQ

  • Measures all levels of Buckwalter’s cognitive abilities: recall or recognition of isolated information, data interpretation, and problem solving;
  • Measures all of Bloom’s 5 levels of Cognitive Processing: Knowledge, Comprehension, Analysis, Synthesis, and Evaluation;
  • Construct and content validity;
  • Dependable reliability coefficients;
  • Correlate well with subsequent clinical performance
  • Allows students to think in a completely new way, with firm pedagogical underpinnings

Challenges and limitations of using MEQs

  • Many MEQ questions end up testing simple recall of knowledge
  • MEQs can be structurally flawed compared with MCQs
  • On re-marking, MEQs have yielded lower scores than were awarded by the original, discipline-based expert markers
  • In such cases, the MEQ failed to achieve its primary purpose of assessing higher cognitive skills

Points to consider when planning a good test/examination

  • Valid: the test measures what it is supposed to measure
  • Reliable: (a) the same student should be able to score the same mark even if he/she had taken the test at a different time; (b) it scores what it is supposed to score
  • Objective: different markers should mark the same script to the same standard and award the same mark
  • Comprehensive: tests what one needs to know
  • Simple and fair: (a) clear language and unambiguous questions; (b) tests an appropriate level of knowledge
  • Scoreable: fair mark distribution

How to set Good MEQs?

  • Understand the objectives of the assessment and be familiar with curriculum materials that relate to learning outcomes.
  • Determine expected standards: what do you expect the candidate to know? It is important that there is clear alignment between what students have learned and what they are being tested on. Always test what has been taught; do not test beyond students' level of understanding.
  • It is also a good idea to involve peers when setting MEQs and to get colleagues to try the questions out. This will enable you to determine whether the allotted time is adequate and to assess the adequacy of the mark distribution. Get comments and criticisms from your peers.
  • Do not set questions in a silo. Forming MEQ committees is advisable; ensure a good distribution of specialists in the committee (e.g., paediatric and adult, subspecialty groups).
  • Provide sufficient realistic clinical and contextual information, thereby creating authenticity in the cases.
  • Vary the components of the online assessment in order to increase the discriminant value of the examination.
  • The design of the assessment should be contextual, sequential, with enough time to think. Due to the use of sequentially expanding data, students should not be allowed to return to their previous responses in order to change answers.

How to set FAIR MEQs

  • Good quality images
  • Data for interpretation
  • Information must be fairly presented: don't overwhelm the candidates
  • Choose relevant/reasonable tests: no esoteric tests (if possible), and don't give unnecessary data to interpret
  • If a test is essential to the MEQ but students are not expected to know how to interpret it, use it to teach, i.e., give them the report rather than leaving them to interpret it themselves (e.g., a CT brain image showing an intracranial haemorrhage, ICH)
  • Keep to the curriculum

Q & A Session

Following the presentation by A/P Erle Lim, a lively discussion ensued and listed below are some questions from the subsequent Q & A session.

Q: Why do you find essay questions difficult to mark online? I have had the very opposite experience. Maybe your system is different. If your question is sufficiently clear, students will be able to cope, and I don't find it difficult to mark.
EL: There are advantages and disadvantages. One advantage is that you don't have to read bad handwriting.
Q: You talked about doing the assessment anytime and anywhere. How do we know if students are doing it in groups or by themselves?
EL: I am sure there are some settings to be able to control the assessment.
Q: How do you go about the bell curve?
EL: We do get a decent bell curve, though it is not necessarily even. We accept the marks as they are; at the School of Medicine, we do not force marks onto a bell curve. Individual exams are not tweaked; moderation is only done at an overall level.