How shall we know them? Learning through assessment

Technology in Pedagogy, No. 19, March 2014
Written by Kiruthika Ragupathi

The vast majority of us learn [1]; the question, of course, is what we learn. Simon Watts, an associate professor in the Department of Chemistry, Faculty of Science, National University of Singapore, believes that all aspects of what we do as teachers should facilitate student achievement of learning outcomes – this includes both ‘teaching’ and ‘assessment’ activities. Of course, achieving a learning outcome by reviewing a final exam one has just failed is administratively a little inconvenient, but in learning terms the assessment would still have achieved one of its primary objectives.

Having spent time working on learning and personal support systems in higher education in both Oxford and New Zealand, A/P Watts talked about his abiding fascination with the way people learn and think [2], and how they communicate and work in groups. It therefore came as no surprise when he started the session by elaborating on his pedagogic research, which is centered around culture [3,4] – the NUS culture, particularly the student-staff culture. The paradigm for the work is behaviours, attitudes and customs, though he is sure there are other external drivers that may reinforce these. Without understanding the prevailing culture, it is very difficult to study the learning processes, says A/P Watts, particularly when the cultural paradigm here in Singapore is so interesting.

A learning process characterized by memorizing and rote learning – “a low level learning culture” – does not give a deep grasp of the subject matter, and affects the ability to process the full breadth and implications of the material concerned [5]. It has been said that this culture is common amongst many Singaporean students. However, A/P Watts proposes that the learning process be treated like a Tango (a dance where one partner leads and the other follows), and that we, as learning facilitators, have a duty to lead this dance. His hypothesis is that the current culture is a result of this Tango.

In this session, A/P Watts discussed the initial development of two applications that facilitate student learning:

  • pre-laboratory tests
  • secure Modified Essay Question (MEQ)

The IVLE assessment tool was used to administer both tests. The development of these applications explores how IVLE can be used to help students achieve planned modular learning outcomes, and also examines how staff facilitate higher-order student learning.

Pre-laboratory tests

The pre-laboratory tests were designed for CM2192, a compulsory module in Chemistry. As facilitators, we often face the challenge of teaching students who simply don’t read at all, or who do not know how to read the textbook and laboratory scripts and feel as if they need to memorize everything. This is a huge problem when teaching in the laboratory, since time is limited for any kind of theory discussion. Hence the use of pre-tests, which it is hoped will help students focus their reading, prepare, and understand the most important parts of the material before performing the experiments.

The desired learning outcomes of the pre-lab tests designed by A/P Watts and the Chemistry Teaching Team were therefore safety, familiarity with the script, and subject familiarity – not an in-depth knowledge of the subject, but the basic knowledge needed for the laboratory exercise. The laboratory exercises were designed such that half were based on analytical chemistry and the other half on physical chemistry. The focused reading of the scripts in preparation for the pre-tests forces students to think about and understand what they are about to do, and also provides a framework for studying key concepts for the laboratory exercises. These tests act like “door wardens” with an outcome of pass or fail. Any student unable to pass the test within two attempts is not allowed to take the practical, subtly informing students that preparation is needed.

The teaching team drafted a total of 15 unique questions per practical reflecting the desired learning outcomes. Although each member of the team chose the form of their questions, there were effectively two categories: questions that were yes/no or true/false (i.e. a 50% chance of being right in a blind guess), and more difficult questions such as “fill in the blanks” or “which one of the following four statements is FALSE (or TRUE)?”. Each pre-lab test has 6 questions, of which students need to get 5 correct to pass. The IVLE Assessment tool was used to build the pre-lab tests, with every student taking a pre-lab test before every practical exercise. Each test is open for 60 minutes, and is only available to, and unique to, the batch of students taking that particular laboratory exercise.
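
To get a feel for why this threshold works as a “door warden”, it helps to look at the odds of passing by blind guessing. The short Python sketch below is purely illustrative: it assumes a hypothetical test made up of four true/false questions (50% guessing chance each) and two one-of-four questions (25% each) – a mix not specified in the session – and computes the probability of getting at least five of the six right by chance alone.

    from itertools import product

    # Hypothetical mix for one six-question pre-lab test: four true/false items
    # (guess probability 0.5) and two one-of-four items (guess probability 0.25).
    # The real mix varied between practicals; these numbers are for illustration.
    guess_p = [0.5, 0.5, 0.5, 0.5, 0.25, 0.25]

    def pass_probability(probs, need_correct=5):
        """Probability of getting at least `need_correct` answers right by pure guessing."""
        total = 0.0
        for outcome in product([0, 1], repeat=len(probs)):   # every right/wrong pattern
            if sum(outcome) >= need_correct:
                p = 1.0
                for correct, q in zip(outcome, probs):
                    p *= q if correct else (1 - q)
                total += p
        return total

    print(f"Chance of passing by blind guessing: {pass_probability(guess_p):.1%}")

Under that assumed mix, the chance of scraping through by guessing alone is under 5%, which is what makes the pass/fail gate meaningful.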

The questions that A/P Watts asked himself were:

  • Is this technique in this application effective for student learning and in the effective use of staff time?
  • Do question types used influence pre-lab scores?
  • Are there marked differences between the analytical and physical chemistry halves of the course?
  • Do students “learn”? (not what they learn from the questions or assessment methods)
  • Is there a link between pre-lab tests and final assessments?

From this experiment, it was fairly clear that staff time was reduced considerably, from 35 hours per semester (based on individual tests administered twice for each practical each week) to about 20 hours per semester (of which 70% is spent manually downloading marks and 20% re-setting tests). With further development of the IVLE applications, it is estimated that this might take only 5 hours per semester.

It also became apparent that students had more difficulty with analytical chemistry questions than with physical chemistry questions. However, it was also noted that the physical chemistry section had more of the “50% chance” T/F questions. Some participants proposed that negative marking could address this issue even with the T/F question type, while others suggested getting students to key in the rationale for a particular choice, or to justify their choice over the other options.

A/P Watts recognized that it was difficult to gauge the effectiveness of student learning, although students have reported that these pre-lab tests helped them better understand and focus on the experiment.

Pedagogical advantages that pre-tests offer

A pre-test measures the learning gained through pre-lectures, readings or scripts, and gauges students’ prior knowledge before they participate in an activity. It should be noted that such pre-tests can be used not only in the laboratory setting but also for any activity that requires students to prepare before participating.

Reasons for having pre-tests with focused readings/video lectures are:

  • They help quantify the knowledge attained in class and whether the desired learning outcomes were achieved by students with diverse learning styles and varied preparation. More specifically, the tests indicate how well students are prepared for the learning activities and how they are learning.
  • The focused readings / video lectures force students to think about and understand what they are about to do, and also provide them with a framework for studying key concepts for tests.
  • They should also promote the curiosity and imagination to predict the outcomes of experiments or activities, while also promoting good reading/listening comprehension strategies such as previewing, re-reading, making connections, and self-questioning before, during and after reading the scripts.
  • It is hoped that they also improve students’ participation and engagement during learning activities/laboratory exercises.
  • The data collected from the tests may enable facilitators to target students requiring extra help and will also help in identifying teaching and learning methods that need to be changed or developed.

Modified Essay Question (MEQ)

Modified Essay Questions (MEQs) are often included in assessments to test higher-order cognitive skills, as the more commonly used multiple-choice questions (MCQs) are generally regarded as testing knowledge recall only. An MEQ is thus a compromise between the MCQ and the essay, sitting between these two instruments in its ability to test higher cognitive skills and in the ease of marking to a consistent standard.

A/P Watts used MEQs as part of the final examination (50% of the assessment balance) for the module CM4282 – Energy Resources. The desired learning outcomes were: subject familiarity; the ability to quantify abstract problems; the ability to use simple models; and lateral and logical reasoning.

To develop the MEQs, he again used the IVLE Assessment Tool, but this time in a secure environment. Students took the test in secure mode while still being able to access specific documents (a Singapore Statistical and Information Pack and a Singapore government policy paper, the Singapore National Climate Change Strategy).

The MEQs were unique in that they employed a “no turning back” design. To illustrate, take an MEQ with a four-part question: students move from Q1a → Q1b → Q1c → Q1d. The student needs the answer to (for example) Q1a to answer Q1b, and so on. Hence, a student who gets Q1a wrong will be disadvantaged in the questions that follow which depend on the Q1a answer, and it would not be fair to penalize that student in questions 1b, 1c and 1d for the error in 1a. Therefore, after students answer Q1a they are given the correct answer to 1a; they cannot go back and correct their original answer, and can only proceed to the next question, but they now have a good 1a answer with which to proceed. A sample of an MEQ used is given below:

MEQ - Sample
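
The “no turning back” flow is easiest to see as a sequence. The Python sketch below is only an illustration of the idea (the question text, answers and function names are invented, and this is not IVLE’s implementation): each part is locked once submitted, its model answer is revealed, and the student carries that answer forward to the next part.

    # Illustrative sketch of a "no turning back" MEQ chain; not IVLE's actual code.
    # Each part builds on the previous one, so the model answer is revealed after
    # submission and earlier parts can never be revisited or corrected.

    meq_parts = [  # hypothetical four-part question
        {"id": "Q1a", "prompt": "Estimate Singapore's annual electricity demand.", "model_answer": "..."},
        {"id": "Q1b", "prompt": "Using the Q1a figure, estimate per-capita demand.", "model_answer": "..."},
        {"id": "Q1c", "prompt": "Using the Q1b figure, size a solar installation.", "model_answer": "..."},
        {"id": "Q1d", "prompt": "Comment on the feasibility of the Q1c installation.", "model_answer": "..."},
    ]

    def run_meq(parts, get_student_answer):
        """Collect answers strictly in order, revealing each model answer before moving on."""
        submitted = []
        for part in parts:
            answer = get_student_answer(part["id"], part["prompt"])
            submitted.append((part["id"], answer))               # locked in; cannot be edited later
            print(f'{part["id"]} model answer: {part["model_answer"]}')  # student proceeds with a good answer
        return submitted

    # Example usage with keyboard input:
    # run_meq(meq_parts, lambda qid, prompt: input(f"{qid}: {prompt}\n> "))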

The research questions that A/P Watts asked himself were:

  • Is this technique in this application effective for student learning and in the effective use of staff time?
  • How does using a computer affect student performance [6]?
  • How does “no turning back” affect student performance?
  • Do students “learn” from the exam?

The MEQ allowed the staff to mark the final exam in 4 hours, mainly due to the structure of the IVLE assessment, thereby reducing staff time. Students indicated that it was the hardest module but the most enjoyable they had done, and they had suggestions for improvement. However, students also felt that the MEQ ‘no turning back’ structure did not give them an overview of the full set of questions, and hence did not allow them to plan at the beginning of the exam before attempting the questions. Taking this feedback into consideration, A/P Watts has decided to allow students access to a soft copy of the paper during the exam. He also felt that, until IVLE has “drawing” input capabilities, the approach is better used for course-work exams rather than only for final examinations.

A/P Watts ended his presentation and opened up the discussion by posing questions and seeking feedback and ideas from participants. Some of the questions posed include:

  • How do we test absolute student learning and how do we know what they are learning?
  • Participants’ thoughts on the “no turning back” structure

This stimulated a lively discussion with participants sharing their experiences in the laboratories and in the use of assessments.

Q: Most of the time in the laboratory classes, 2% of the unprepared students take up 98% of staff time. So with the use of the pre-tests, does the preparation of the students match the pre-test scores?
SW: More students were prepared, but unfortunately students were still focused on getting the results they need, and generally did not come with probing questions to be discussed during the lab. Often we see students start to think and prepare for the labs only when they start writing their reports; the pre-lab questionnaire is a way to get students to start thinking about it beforehand.

Comment from participants: You mentioned that students’ analytical scores improve over the term – would this be because they are involved in the lab work before they learn a concept, and hence by the end of the term their understanding of the concepts is better?

Q: Is it a lot of work to set up the pre-laboratory tests?
SW: Selecting a combination of questions and changing them for each set of pre-lab tests is automatic, and opening up tests to different groups of students does not require much time. The time needed is for the preparation of the questions, which is a team effort; as mentioned earlier, time is saved in marking and in getting students better prepared for the laboratory exercises.

Q: You only allow students two attempts at the pre-lab test. Will this disadvantage them?
SW: Yes, only two attempts are allowed, as we do not have a big pool of questions to recycle. When students fail both attempts, this alerts the academic who runs the practical session to assist if there is a genuine problem. However, students who do not make the effort to attempt the questions will not be allowed to do the practical.

Q: Wouldn’t changing numerical values allow you to create a large pool of questions?
SW: Yes, that is entirely possible, and something I badly want to do. I have not found an easy way to do this within the IVLE system, but I will be exploring ways to make that happen with the IVLE team.

Q: Are the students asking more qualitative questions during the lab after the pre-lab test? Many students just give you answers that they got from their seniors’ reports.
SW: Not necessarily. I asked students questions about their results during the lab and did not really get better qualitative answers. Some will just give you answers from their seniors’ reports.

The discussion then moved on to improving the laboratory learning experience and making it engaging and fun. One participant suggested that at least one of the first-year practicals could be a deliberately flawed practical, as a way of training students in the lab as a practical skill; others felt that practical work is progressive and that it is important to address the average NUS student – it is not that their attitudes are bad, but they need some encouragement to progress over the years. Participants also noted that other factors play a part: curriculum design, validation by external bodies, and breadth of coverage in the first and second years. They also felt the need to think of the common student without dumbing things down – get the structure right and move students a little out of their comfort zone, while keeping teaching team colleagues on board. There were suggestions to supplement the current video lectures before the labs with virtual practicals using Java applets. As teachers we aim to facilitate an inquisitive nature in our students, although often one asks observational questions first.

Others asked why, if we consider culture to be a problem (for example, seniors passing lab materials and reports to juniors), we as facilitators are not riding on that culture and using it to our advantage. While grappling with many things, we teach skills, but we also need to teach attitudes. Having talked to many students, it is clear that they want to learn, but not at the expense of their grades, so the system should be designed to encourage them along the way.

Another issue discussed was the training of Graduate Teaching Assistants (GTAs): lab technicians and GTAs sometimes give away the correct results and get students to repeat the exercise, rather than helping them probe further.

Finally the discussion moved to MEQs, particularly their use for course-work tests rather than only for final exams. It was largely agreed that using essay-type questions for large classes, or resorting to MCQs, was not the way to enhance student learning. MEQs would therefore be a good option to consider, particularly if easier questions could be designed for the first few tests and more difficult questions introduced as the semester proceeds. Any improvement in assessment performance could then be due to students’ growing familiarity with both the content and the process over the semester.

 

References

  1. Basic cognitive theory (e.g. Piaget); for example: Piaget, J. [1951] The Psychology of Intelligence. Routledge & Kegan Paul, London.
  2. Scheffer, B. K. & Rubenfeld, M. G. [2001] Critical thinking: what is it and how do we teach it? In Current Issues in Nursing, 6e, J. M. Dochterman and H. K. Grace (eds.), Mosby, St. Louis.
  3. Baker, D. & Taylor, P. C. S. [1995] The effect of culture on the learning of science in non‐western countries: the results of an integrated research review. Int. J. Sci. Educ. 17, 695-704.
  4. Bishop, R. & Berryman, M. [2006] Culture Speaks: Cultural Relationships and Classroom Learning. Huia, Wellington, NZ.
  5. Biggs, J. B. [2003] Teaching for Quality Learning at University, 2e. Open University Press, Buckingham.
  6. Sloan & McGinnis (1978); Russell & Wei (2004); Powers et al. (1994); Mogey (2010); Burke & Cizek (2006); Russell & Haney (1997); Mogey et al. (2010), etc.

Leveraging Peer Feedback


Technology in Pedagogy, No. 18, February 2014
Written by Kiruthika Ragupathi

Using peer feedback among students is a useful tool in education. Feedback from peers is usually available more speedily than instructor feedback and is given in a language that students can easily relate to; effectively conveyed, it helps them review and question their own beliefs. Giving peer feedback requires active engagement as well as a critical understanding of what an assessment task demands and of the criteria used to assess and grade work.

Peer feedback can communicate peer expectations in team assignments, says Damith Rajapakse, a senior lecturer in the Department of Computer Science, School of Computing, National University of Singapore. He says instructors can also use peer evaluations to reward or penalize team members based on their contribution levels, and peer feedback can easily supplement instructor feedback on student performance. In this session, Damith introduced a tool that he and his team developed to manage peer feedback and peer evaluations in class.

Reasons for using Peer Feedback

Damith started the sharing session by highlighting a study by Norman Vaughan from Mount Royal University, Canada, on the perceived value of peer assessment feedback. The students in that study were asked to rate the value of peer assessment feedback before and after a course, and this was followed up by asking for students’ perceptions of the value of teacher assessment feedback. It was reported that an emphasis on formative peer feedback affected students’ perceptions of the value of instructor assessment. The results highlight that students’ participation in a peer-feedback activity creates a win-win situation for both instructors and students, with students coming to value their teacher’s feedback.

When Damith started getting students in his large classes to work in teams, he saw the importance and usefulness of peer feedback. When working in teams, students who slack rarely participate and usually contribute very little to the project work, yet these students are often not penalized enough and ride on the other members’ work to attain grades higher than they deserve. To give each student the grade he or she really deserves, Damith felt the need for a system that allows students to easily give peer feedback and enables teachers to access the information effortlessly. Secondly, when there are a number of deliverables from students every week, instructors do not have enough time to give feedback immediately, particularly on student presentations.

These two situations prompted his team to conceptualize and develop an online system, TEAMMATES (http://teammatesOnline.info), for managing peer feedback. An online system makes the collection and maintenance of feedback easy and effective. He shared that the system is currently used by over 50 universities, has over 7000 users, and can be freely accessed.

TEAMMATES information

A video tour of the TEAMMATES system can be accessed from the TEAMMATES home page.

Damith detailed four main features of the TEAMMATES system, and how it was useful for his classes:

1. Peer evaluation / feedback for student teams

When students work on team projects, he found TEAMMATES particularly useful for collecting feedback from peers working on the same project. Students first estimate their peers’ participation and contribution to the project and provide anonymous feedback to their team members. Second, they complete a self-evaluation of their own performance in the system, which allows them to compare their own evaluation with their team’s perception of their contribution. Finally, they also provide confidential feedback and comments on their peers to the instructor. This enables the instructor to easily identify problem teams and to moderate the scores; it also gives instructors an opportunity to intervene at an early stage, while students get the time needed to amend their behaviour before it is too late. Since the comments from peers are made transparent and open to all team members, students take ownership and responsibility. All of this enables instructors to penalise under-performing students with more conviction and to reward deserving students with confidence.
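
To make the “claimed versus perceived” comparison concrete, here is a minimal sketch of one way such a check could be done. It is not TEAMMATES’ actual scaling algorithm, and the names and numbers are invented; it simply contrasts what each student claims with the average contribution their teammates perceive, and flags large gaps for the instructor to review.

    # Hypothetical peer-evaluation data for one team, on an "equal share = 100" scale.
    # This is an illustration only, not TEAMMATES' actual data model or algorithm.
    claimed = {"Alice": 110, "Bob": 120, "Chitra": 100}          # self-claimed contribution
    perceived_by_peers = {                                       # what the other two members estimated
        "Alice": [120, 110],
        "Bob": [70, 80],
        "Chitra": [110, 110],
    }

    def contribution_gaps(claimed, perceived, flag_threshold=20):
        """Compare claimed contribution with the peer average and flag large mismatches."""
        report = {}
        for student, claim in claimed.items():
            peer_avg = sum(perceived[student]) / len(perceived[student])
            gap = claim - peer_avg
            report[student] = {"peer_avg": peer_avg, "gap": gap, "flag": abs(gap) >= flag_threshold}
        return report

    for student, row in contribution_gaps(claimed, perceived_by_peers).items():
        marker = "  <-- review" if row["flag"] else ""
        print(f'{student}: peers see {row["peer_avg"]:.0f}, gap {row["gap"]:+.0f}{marker}')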

To give participants a sense of the system, Damith then demonstrated its usage from both an instructor’s perspective and a student’s perspective. Once logged in to TEAMMATES, an instructor can easily create a course and enrol students by copying from an existing spreadsheet. Instructors can then assign the time periods during which students can give feedback, and the system follows up with automatic email reminders. Students log in to the system through the link provided in the email and key in feedback on their own participation and contribution and on their peers’, taking the team dynamics into consideration.

2. Flexibility to create other feedback paths

As an instructor, you have the flexibility to determine your own set of questions, feedback paths, and visibility levels within the system. The real flexibility lies in allowing the instructor to pick any question from the set for students to give feedback on their peers.

The feedback paths can be chosen so as to allow feedback to be provided:
(a) between students in the same course,
(b) for various instructors in the course,
(c) amongst other teams within the course

The visibility levels are also flexible: the instructor can choose from various options, such as whether students can see the feedback or comment, whether students can see the author of the comment, and so on.
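
One way to picture this per-question flexibility is as a small configuration object that bundles each question with its feedback path and visibility settings. The sketch below is hypothetical (the field names and course code are invented, not TEAMMATES’ actual API); it only illustrates how path and visibility can vary question by question within one session.

    # Hypothetical sketch of a feedback session: each question carries its own
    # feedback path (who gives feedback to whom) and visibility settings.
    # Field names are illustrative, not TEAMMATES' actual data model.
    feedback_session = {
        "course": "CS2103",                      # invented course code
        "questions": [
            {
                "text": "Comment on your teammate's contribution to the project.",
                "path": {"giver": "students", "recipient": "own team members"},
                "recipient_can_see_feedback": True,   # recipient reads the comment
                "recipient_can_see_author": False,    # but not who wrote it
            },
            {
                "text": "What is still unclear after this week's lectures?",
                "path": {"giver": "students", "recipient": "instructors"},
                "recipient_can_see_feedback": True,
                "recipient_can_see_author": False,
            },
        ],
    }

    for q in feedback_session["questions"]:
        p = q["path"]
        print(f'{p["giver"]} -> {p["recipient"]}: "{q["text"]}"')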

3. Closed-loop feedback

A closed-loop feedback system not only allows instructors to receive anonymous feedback from students, but also enables them to respond to the particular student, likewise anonymously. Students thus receive personalized responses quickly. Instructors are also able to easily track whether their students are reading the comments, so appropriate intervention is possible when necessary. Depending on the questions chosen, instructors can better understand student misconceptions and unclear areas.

4. A repository of student information

The last feature is not so much related to peer feedback, but it allows an instructor to easily maintain a record of the students he or she has taught so far. This gives instructors an easy way to collect feedback from past students and to contact them, for example for guest lectures.

Pedagogical advantages that Peer Feedback offers

Damith highlighted the following advantages that he values and that prompted him to start developing a student peer evaluation tool:

1. Provide early and frequent feedback

Peer feedback enables students to gain initial feedback on their work at an early stage, not only in a timely manner but also more frequently, allowing them to respond to the feedback in future assignments. Providing early, frequent, and incremental feedback in a non-threatening environment can play an effective formative role in students’ personal development.

2. Formative first, summative second

Peer feedback is about students providing constructive comments on a peer’s work; it does not involve awarding of marks but is a formative step prior to submission of a piece of work. Such feedback can help students to recognize and rectify gaps between peer/instructor expectations and their own performance. TEAMMATES is designed for both formative and summative purposes, but places greater emphasis on the formative.

3. Shifts responsibility from the teacher to the students

When students have a larger responsibility, they are more involved in the assessment process. Not only do students take a closer look at the performance of their peers, but they are also constantly reminded of their own performance and are likely to use that as a frame of reference for improving.

4. Develop self-reflection skills in our students

Peer feedback develops critical reflection skills and the ability to give constructive feedback to peers. Students can better engage with assessment criteria and internalise them for application in their own work. It also develops skills such as making informed judgments, self-evaluation, critical thinking, analysing learning outcomes and formulating constructive feedback for peers.

5. Introduce diversity in teams

Based on the system scores, the instructor can better understand student strengths and weaknesses in terms of team dynamics, knowledge, skills and attitude. This enables faculty to introduce diversity into teams based on student capabilities and contributions in team projects. The more diverse a team, the greater the benefit for each student, as peers learn to depend on each other in a positive way across a variety of learning tasks.

Summary of Feedback / Suggestions from the Discussion

Following Damith’s presentation, participants got into a lively discussion and asked him how they could start using the system. Listed below are some questions from the subsequent Q&A session.

Q: How does the system quantify the scores – the positives (+) and the negatives (-)?
DR: The system does not quantify the scores in absolute terms; the values are scaled internally and used for comparison, acting more as a red flag to faculty. The values displayed are therefore relative, and as an instructor you need to look at the relative proportions and watch for mismatches.

Q: Are students honest, or do they overplay their own contributions?
DR: Usually students underplay their team members’ performance but overplay their own contributions and participation. Therefore, it is normal to have a gap between the contribution level ‘claimed’ by a student and the level ‘perceived’ by peers, and this needs to be taken into consideration when grading.

Q: Are there unintended consequences? Can this lead to unhappiness amongst team members?
DR: The situation is not as lenient as before. The students who do more work generally like the system, as they have their own voice and are able to report the actual scenario back to the instructors. Since the comments are visible to all, students need to take on ownership and responsibility.

Q: What if slackers don’t give feedback?
DR: Sometimes this does happen. A student who doesn’t give feedback gets a reminder from the system, and as an instructor you can also give a gentle nudge. If a student still does not respond and is marked down by other team members for his or her participation, this acts as a sort of confirmation.

Q: Does the system provide criteria that students can refer to before giving a score?
DR: Yes, this can be done. Rubrics can be included as part of the question itself, i.e. at the question level.

Q: Do you plan to integrate it with the NUS learning management system?
DR: Since the system caters to many other universities and schools, there are no plans to integrate it with the NUS learning management system.

Q: Is the use of TEAMMATES free?
DR: Yes, you can register for a TEAMMATES account at http://teammatesOnline.info. No installation is required: just get an account and you can start using the system right away! However, students would need to use their Google accounts to use the TEAMMATES system.

 
