Is there a place for multiple-choice in English? Part I


This is a blog about my experience of using multiple-choice questions as a form of formative assessment in English. It will be split into three parts. This initial post lays out my present thinking on the format and how I think it could be of use to teachers. In the next few days I will post an explanation of exactly how I implemented the assessments within a cycle of teaching, before finally concluding with my reflections on their impact and suggestions for further development.

Whilst the focus of my experimentation has been in English – using multiple choice to guide my teaching of Of Mice and Men to a middle set year 11 class – I hope that some of the insights I will be sharing will be of use to other curriculum areas, particularly those essay-based subjects where there is a lot of reading to do and a lot of knowledge to acquire, and which may not initially appear to lend themselves to the multiple-choice format.

Much of my thinking here has been inspired by the chapter on assessment in Leverage Leadership, and by recent blog posts on the subject by Daisy Christodoulou http://bit.ly/17y37nP and Harry Fletcher-Wood http://bit.ly/HnELmq. Through these readings and my own subsequent reflections, I have become increasingly convinced that multiple choice can offer teachers an important, perhaps even necessary, method for providing ongoing formative data that can reliably inform short- and medium-term planning.

As I have already suggested, English is not usually associated with the use of multiple choice as a means of assessing understanding, at least in this country, where I have certainly not seen it employed in any meaningful way. I suppose I can recall instances where I have seen the format used to crudely test basic understanding, such as in a short quiz at the beginning of a lesson to recap events in a novel or the names of key characters and settings. But this blog is not interested in that kind of use of the format; instead, it's about something far more complex and nuanced, something that I am only beginning to really understand and appreciate for myself.

In America, where state schools regularly use multiple-choice questions as part of their standardised reading assessments, students who study English have far more exposure to the format than students in this country. Students here, by contrast, tend to have their reading assessed through extended written responses – a situation that is only going to become more common in light of the recent national curriculum developments. Now, I am certainly not in any way proposing that one model of assessment is inherently better than the other, or that I favour a move away from extended writing. I don't. As it happens, I think students should be encouraged to write at length and be given every opportunity to do so in a meaningful way across the curriculum.

What I am interested in exploring and learning from the American system is the way that the multiple-choice assessment model can provide teachers in this country with a robust formative vehicle, ensuring that when students do come to their extended writing they are more likely to flourish because they have the requisite deep knowledge and understanding that the skill of writing relies upon. Ultimately, you have to know a great deal about a book, a poem, a text or a topic in order to truly be able to synthesise, evaluate and analyse in your writing – the criteria usually found in the highest bands of reading assessment mark schemes.

Now, if you had asked me last year what I thought about the use of multiple-choice assessment in general, let alone in English, I would probably have baulked, thinking either that the format was too simplistic to be in any way useful, or that it simply was not compatible with the demands of gauging ability or progress in English. After all, both the English GCSE and A level exams test reading via the extended essay, so what possible benefit could the format hold for English teachers outside of this? This model of assessment – one that sees knowledge and understanding expressed through the medium of the written word – lies at the heart of the ethos of many an English curriculum. It explains why most schools run some kind of half-termly key assessment task: an extended essay that assesses reading or writing, which is then levelled, with the results duly entered onto a spreadsheet.

But this model of assessment seems to me to have a number of significant flaws. Firstly, and most importantly, it is a lag measure. In other words, it gives summative information about student achievement and progress after instruction has ended, when it is simply too late to do anything about it. The class has moved on to the next text or unit of work, with their scores or levels firmly in the rear-view mirror, disappearing into nothingness. No doubt some schools – those not overly bogged down in the mire of propping up year 11 outcomes – probably do use this data more wisely, either to adapt medium- and long-term planning or to intervene for whole classes, groups or individuals. I'm guessing that this is probably not the norm, though.

Yet I think that even this kind of reflective and pragmatic approach is in itself problematic, particularly with the vagaries of levels factored into the equation. Either the data used to inform intervention is erroneous – from my experience it is hard to ensure the consistent application of levels, particularly in large departments – or the skills descriptors themselves are too generic and cloudy to offer meaningful information that can inform further instruction. The intervention ends up addressing everything by repeating what didn't work the first time, or it focuses on the wrong thing. Rarely is the result of this kind of remedial action retested, particularly at KS3, where there is rarely the time or the will. I am surely not alone in thinking that these half-term key assessments are often more for us and our accountability systems than they are for the students who will have to live with the results of our teaching for the rest of their lives – for better or for worse.

And here is where I am starting to think that multiple choice might provide a much-needed helping hand, giving us English teachers the means of increasing the chances that when our students do come to write extended responses they write much better because they know what they are writing about: they have the necessary knowledge and understanding to access those higher band reading criteria. Whilst I seem to have been focusing exclusively here on using multiple-choice questions for developing reading, in truth, because there is such an intrinsic relationship between knowledge about something (the reading) and the ability to express that understanding clearly and effectively (the writing), the format is ultimately likely to develop both skills simultaneously – Joe Kirby's metaphor of the double helix remains pertinent: http://bit.ly/148KSRd

My next blog will explore the set-up and practicalities of how I have been trying to implement multiple-choice questions in my teaching and offer some of my initial findings.

Published by Phil Stock

Deputy Headteacher, Teaching, learning and assessment. Interested in education, spending time with my family and running - all views are my own. @joeybagstock

14 thoughts on “Is there a place for multiple-choice in English? Part I”

  1. Ahhh, the ease of marking reduced to Option A, B, C. I have given it as part 1 of an assessment to my Year 7 and 8 classes, and part 2 is an extended writing piece based on the correct choices. Exam week next week – results will be interesting.

      1. So Yr 7 & 8 completed their tests (A, B, C) and were then asked to use them to write an extended piece. Whilst the marking was easy and the children did well circling answers, it proved nothing other than that they can read and identify skills. When it came to using the skills (prepared for in advance), the writing lacked imagination and flair, which I suppose is the danger of automaton tick-box questions. Does it work for English Lit? Yes, if they need to know sequential aspects of plot; it doesn’t if they are required to write empathetically. But I work in SE Asia, so tick-box and fact is preferred over skills and empathy. Art and Music are off the timetable in favour of extra Maths and Languages. All hail the production line of future fodder.

      2. Thanks very much for the update. Sorry to hear that yours was a less than fruitful approach. I think I am having a little more joy, though it is still very early days. My next blog will explain my methods and discuss some initial findings, which may be of interest. Thanks again for sharing your experiences. Let me know if anything changes.

    1. Peter, thank you for leaving a question. Whilst I am familiar with the concept of SOLO, I do not feel I know it well enough to fully understand your meaning. Do you mean in terms of stations around a room, with different activities for students working at different stages of understanding? If so, then I think what I am trying to get at is arriving at a detailed appreciation of the depth of student understanding and misconception on a regular basis, with reliable results and ease of practical implementation. If you could articulate your thoughts in this regard, I would be very grateful. Generally, I am a little distrustful of skills taxonomies. Though I agree with the principle of being able to work through different phases of understanding and application, in practice the knowledge part (the deep understanding) is passed over too quickly in the desire to get to the expression part – in this case the written outcome. In my experience students often do not have the sound basis of understanding – period, theme, context, authorial craft – to write in a sophisticated way. I’d be interested to read your thoughts. Phil

  2. Interested to read the next post, Phil – I would like to broaden my assessment strategies.

    I would like to learn more after hearing from Harry Fletcher-Wood about hinge questions.

    Debbie 🙂
