Everything Now: resisting the urge to implement too much too soon

There are so many good ideas in education at the moment – knowledge organisers, whole class feedback, multiple-choice questions, low stakes quizzing, dual coding, etc. – that it is hard to keep up. I’m on board with almost all of these ideas and approaches, and in this enlightened, evidence-based age in which we live, it feels good to finally be doing the right thing!

And yet, I wonder whether we may be in danger of repeating some of the mistakes of the past. I don’t mean we risk returning to the dark days of learning styles, multiple intelligences, unfounded taxonomies and pyramids of this and that. Thankfully, I think those days are long gone. I’m more thinking that, as a profession, we still tend to rush towards implementing each and every new idea that comes along without engaging in any real process of critical evaluation. We’ve eschewed some of the guff from the past, but I am not sure we have learnt how to handle research evidence in a disciplined way, and as a consequence we risk creating future brain gyms.

It seems to me that we are still of the mindset that when we see something new, particularly something that conforms to our biases, our eyes light up and we want to get it up and running in our classrooms as quickly as possible. This is probably why so many good ideas get implemented so badly: we don’t allow ourselves the time and space to think about how they are going to work, if at all, in our contexts. As Mark Enser points out in this excellent post, what start off as promising interventions or sensible ways of managing workload run the risk of getting bastardised into something less effective and even more time-consuming.

Dylan Wiliam and Graham Nuthall identify the two main threats to effective implementation: lack of practical guidance and/or lack of theoretical understanding. For Wiliam, ‘Teachers will not take up attractive sounding ideas, albeit based on extensive research, if these are presented as general principles which leave entirely to them the task of translating them into everyday practice.’ Indeed. And for Nuthall, ‘in most cases, there is a description of what to do and how to do it, but no description of why it might work. There is no explanation of the underlying learning principles.’ Again, this strikes a chord.

I would add to this a third threat: time. In my last post, I provided some advice on how to use mini whiteboards more effectively in classrooms. The post was not well read (to be fair, they never are!), which was not really a surprise. It’s not a sexy topic and most people already know how to use whiteboards well, don’t they? Maybe; maybe not. The reason I wrote the blog was because what I see time and time again is ineffective use of mini whiteboards in lessons. Too often, there appears to be a conceptual misunderstanding of their purpose, or a lack of expertise and confidence in their practical application. More time spent working on this simple strategy would probably lead to its better use as a teaching tool. But we are always searching for something new.

Knowledge organisers are another case in point. You only need to type the phrase into Google to see a huge disparity in what people think they are for and how they are using them with their students. I may be wrong, but I would imagine that up and down the country a lot of time and effort has gone into generating knowledge organisers, but not so much care and attention into working out exactly how they should be used with students. Do they even work? I think they are excellent, but do we actually know if they make a difference to outcomes? Alex Quigley poses similarly troubling questions about a range of other current ideas in this thought-provoking piece.

I should stress here that I don’t see myself as sitting above any of this. I’m not scoffing at others putting into practice things they read about on Twitter or learn about at conferences. Most of it is excellent and seems eminently sensible. I am just the same as everyone else. If I see someone share something that I think sounds good, and if that thing is grounded in some kind of evidence, then I am inclined to agree with it and want to bring it into my classroom and across my school. The risk of not doing something that sounds so right is often enough of an impetus to make me want to act.

It is only in the last couple of years that I have not only learnt the value of stepping back and thinking things through, but also, importantly, developed the discipline to resist acting immediately. Often the pressures of getting results and wanting to do well by your students – whether as a class teacher or a school leader – can make it very difficult not to try new ideas and approaches. But resist we must. If we don’t allow ourselves the time to properly understand the theory and practice of a new idea, and the time to turn that theory into practice, then even the best ideas will likely fail.

Which leads me to evaluation – quite possibly the biggest thing missing from most school improvement activity, whether at the classroom or school level. I’m a huge advocate of helping to turn research evidence into practical action, but I am increasingly mindful of the need to try to evaluate the impact of any changes we make to our practice, however hard or imperfect that might be. If we don’t properly consider the impact of the changes we introduce in our classrooms and our schools, we will never know what is worth doing and what is best left alone.

Whereas earlier in my career we tended to implement an idea from the DfE or the senior management team without any real kind of evaluation of its impact, we now tend to implement an idea from research or cognitive science without any real kind of evaluation. I’m inclined to think that these ideas, often helpfully distilled by popular educationists or other bloggers, are far superior to those of the days of yore, but I still think we need to hold them up to the light through the process of evaluation. Findings from fields such as cognitive science are really only the first stage of the evidence process – the bit that often takes place in the lab, or uses undergraduates and inauthentic learning materials. Whilst this is hugely important and valuable, there is another important stage, and that is the evaluations we set up in our own contexts using some variation of this simple formula: does intervention X work in context Y under Z conditions?

If we cannot answer a question like this, should we really be implementing something into our classroom or our school?

Thanks for reading.

References:

Black, P. J. & Wiliam, D. (1998) Inside the Black Box: Raising Standards Through Classroom Assessment

Nuthall, G. (2007) The Hidden Lives of Learners