Why multiple-choice questions are better than I originally thought.
My relationship with assessment and feedback could be described as a love-hate one. Hate because assessment has always seemed to end up becoming something far more complicated than it needs to be (think Life after Levels, which in reality became Levels after Levels). Hate because people so often disagree on the terminology of assessment, which leads to a lack of overall clarity and coherence in how it is undertaken. Hate because of the hours spent writing out feedback comments that were read once and then left to melt into oblivion, forgotten and ineffective in actually moving students forwards.
However, I am currently in a love phase. Love because of the inevitable increase in the number of conversations being generated by the current curriculum evolution (revolution?). Love because of the clarity with which a number of people are currently writing about assessment and feedback, offering practical ways in which these tools can be used for the effective progression of our students. Love because I am seeing the impact that simple, efficient methods are having on the progression of my students.
What was the catalyst for this? Well, it was actually a Maths book! Mark McCourt’s book Teaching for Mastery provided the spark for my current enjoyment. In it, he promotes the use of tightly designed multiple choice questions (MCQs) as a diagnostic method for identifying misconceptions among students. Along with Mark Enser’s very helpful reminder that feedback should be about improving the student, not the piece of work (Teach like Nobody’s Watching), this approach has stimulated a seismic shift in the way that I approach assessment and feedback.
MCQs are often maligned as a basic method for testing simple knowledge recall. Although this can often be the case, they can also be so much more than that. When MCQs are well designed, all answers are useful answers – the wrong answers revealing to you the misconceptions that a student holds.
For example, here is a question from one of our recently introduced core knowledge tests used at the end of every KS4 topic:
- Study Figure 1 (a plate tectonic map). Which of the following is the location of a constructive plate boundary?
| Option | Answer |
|---|---|
| A | The Eurasian plate |
| B | Hawaii |
| C | The boundary between the Pacific and North American plates |
| D | The boundary between the South American and African plates |
If a student answers A, it shows that they have not understood that plates in and of themselves are not constructive or destructive; it is the boundaries between plates that matter. Answer B reveals a similar misconception, but tied to a specific location and the hazards it experiences. Finally, if they answer C, it shows that they do not have a strong enough grasp of the differences between the types of plate boundary. Only D, the boundary between the South American and African plates, is correct. This depth of understanding of what students do not know, as opposed to what they do know, has had a transformative effect on how I feed back to students.
Once the questions have been well designed, undertaking the whole-class feedback that Adam Boxer has recently advocated (blog post here) becomes a relatively straightforward process. After students have taken the knowledge test and self-assessed it (saving me time), I go through their results, noting the questions where multiple students share the same misconception, and very quickly identify the specific concepts and ideas that require re-teaching or building in, as Boxer suggests.
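For anyone who records class results in a spreadsheet or script, the tallying step above can be automated. Below is a minimal sketch in Python; the question labels, student data, misconception descriptions, and the threshold of three students are all hypothetical, chosen only to mirror the plate-boundary example:

```python
# A minimal sketch (hypothetical data and threshold) of tallying class
# answers on an MCQ test to surface shared misconceptions for
# whole-class feedback.
from collections import Counter

# Map each distractor to the misconception it is designed to reveal
# ("Q1" is the plate-boundary question from this post).
MISCONCEPTIONS = {
    ("Q1", "A"): "thinks whole plates, not boundaries, are constructive",
    ("Q1", "B"): "links 'constructive' to a location's hazards",
    ("Q1", "C"): "confuses constructive and conservative boundaries",
}
CORRECT = {"Q1": "D"}

def misconceptions_to_reteach(responses, threshold=3):
    """Return misconceptions chosen by at least `threshold` students.

    `responses` maps student name -> {question: chosen option}.
    """
    tally = Counter()
    for answers in responses.values():
        for question, option in answers.items():
            if option != CORRECT[question]:
                tally[(question, option)] += 1
    return [
        MISCONCEPTIONS[key]
        for key, count in tally.items()
        if count >= threshold and key in MISCONCEPTIONS
    ]

# Example: three students picked A on Q1, so that misconception is flagged
# for re-teaching; the lone C answer falls below the threshold.
class_responses = {
    "s1": {"Q1": "A"}, "s2": {"Q1": "A"}, "s3": {"Q1": "A"},
    "s4": {"Q1": "D"}, "s5": {"Q1": "C"},
}
print(misconceptions_to_reteach(class_responses))
```

The output here would be the single misconception shared by three students, which tells you exactly which idea to re-teach to the whole class.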
Of course, this is only one method of feeding back to students, and it would lose its effectiveness if it were the only approach used. But just by making this simple adjustment to how I assess students, I have found that there are fewer gaps in students' knowledge, and that my feedback is becoming less about improving the piece of work and more about improving the student and their knowledge, to the benefit of all involved.