Evaluating CPD: hard but not impossible

[Header image*]

At a time of shrinking budgets, there is a need for reliable formative and summative feedback about the efficacy of professional learning. It is not acceptable to assume that a school’s CPD programme, however well intentioned or well received, should necessarily continue. If it is not having an impact on student outcomes, whether in the narrowest sense of achievement or more broadly across other competencies, then it has, at the very least, to be called into question. It may well be that other forms of professional learning are more effective, or perhaps, as some would argue, that no CPD at all would have more impact, freeing up busy teachers to plan and mark better. If you have no way of knowing, then you may be wasting valuable time and resources on the wrong thing.

The problem is that whilst we may well agree that evaluating the impact of training on student outcomes is important, it is far from straightforward to measure this impact in a robust and efficient way. I know how hard it is because we have spent the past few years trying to figure out how to do evaluation better. I don’t think we have cracked it – far from it – but with the support of organisations like the fantastic Teacher Development Trust, we are getting closer to understanding what successful evaluation looks like and how to align our systems and practices so they are congruent with the content and aims of our professional learning.

There are a number of theoretical models for evaluating professional development, all of which have benefits and flaws. Kirkpatrick’s (1959, 1977, 1978) model from the world of business offers four types of evaluation. Despite criticisms – such as its failure to consider the wider cultural factors of the organisation and its assumptions about causality between the levels – it provides a useful framework for thinking about what should go into effective evaluation. Likewise, although it runs counter to what we know about effective CPD, namely having a clear sense of intended outcomes, Scriven’s (1972) notion of goal-free evaluation also has its place, making room within the evaluation process for identifying a range of impact outcomes, whether originally intended or not.

My favourite model for evaluating CPD, however, is Guskey’s (2000) hierarchy of five levels of impact. In this model the five levels are arranged hierarchically, each one increasing in complexity. The final two levels – including the last, which looks at the impact of professional learning on student outcomes – are the hardest to achieve, which no doubt explains why so many schools, including my own, have not done them terribly well. In many respects Guskey’s model bears similarities to Kirkpatrick’s framework, but crucially it adds an extra level of evaluation – one that looks at impact at an organisational level – which is useful for making sure that the aims of a school’s CPD programme are not undermined elsewhere by its culture or systems.

In the rest of this post, I will briefly outline each of the five levels in Guskey’s model and then explain the practices we are currently undertaking within each to improve the evaluation of our professional development. This is very much still a work in progress, so any feedback would help us make further refinements.

  1. Reaction quality – evaluates how staff feel about the quality of their professional learning

In many respects, this area of evaluation is quite soft: basing evaluation on whether participants liked or disliked specific activities, rather than objectively evaluating their impact where it counts, has rightly been challenged as weak. I do think, however, that it is still important to include some element of qualitative staff feedback within the overall evaluation process, particularly if suggestions can be acted upon easily to increase buy-in.

To this end, we send out a reaction quality survey after every short-form CPD session. It has only two sections. The first asks participants to evaluate the extent to which session objectives have been met, whilst the second invites more ‘goal-free’ reaction feedback by asking what was learned and what participants would like to see included or amended in future sessions.

[Image: reaction quality survey]

  2. Learning evaluation – measures knowledge, skills and attitudes acquired through training

This aspect of evaluation is linked in with our appraisal process. I have already written about the changes to our appraisal this year, which have gone down well so far, with enhancements to follow after feedback. Essentially, all teachers, classroom support staff and non-teaching staff identify two main goals: the first is a subject (or department/role) target orientated towards developing a specific aspect of pedagogy, practice or knowledge; the second is a learning question, allowing for enquiries into the more nebulous and complex aspects of improvement that lie at the heart of our daily practice.

The subject goal is supported by departments or teams during their fortnightly subject CPD time. For instance, a couple of science teachers seeking to improve their modelling might work together using IRIS lesson observation equipment, or a group of religious studies teachers might run seminars during department pedagogy time on the knowledge required to teach their new specifications. The enquiry question is supported by the wider CPD programme, the bulk of which takes place in learning communities that are selected during the appraisal process and aim to provide the necessary input and ongoing support.

The evaluation itself comes in two parts. The first is a professional audit, which we instigated for the first time last year and will revisit in the summer term to see the extent to which knowledge has changed. The second part is built into the appraisal process, where, through a combination of a learning journal, voluntary targeted observations and professional dialogue, colleagues can demonstrate the new knowledge and insights they have acquired in their department training or through participation in their learning community.

The model is based upon a number of sources, including the helpful lesson study enquiry cycle put together by the Teacher Development Trust. Both the interim and annual appraisals provide opportunities for meaningful discussions about individual development, as well as for the evaluation of individual and aggregated professional learning. This is not so much about holding individuals to account as about fostering an ethos of continual improvement and gaining insight into which training adds value and which doesn’t.

  3. Organisational evaluation – assesses the support and ethos of the organisation

This third level of evaluation in Guskey’s model represents the missing part of Kirkpatrick’s framework – evaluation of school ethos and support for CPD. As Guskey observes, it would be ridiculous for an individual teacher or group of teachers to receive high quality training that they understand in theory, agree with in principle but cannot put into practice because of ‘organizational practices that are incompatible with implementation efforts’.

The problem, however, with assessing the support and ethos across a whole school, and evaluating whether it is aligned with the objectives and content of the professional learning programme, is that it requires an objective, external voice – the ‘critical friend’ cited in recent reports into effective teaching and professional development. Fortunately, we are members of the Teacher Development Trust, and one of the benefits of membership of their network is a regular external audit of CPD. Unlike other brands of external judgement, this one is supportive and helpful – in the summative sense, but more importantly in the formative one. This post from TeacherToolkit provides a useful insight into one school’s experience of the TDT audit.

The audit is split into seven categories, with three levels of award – Gold, Silver and Bronze – for elements within these categories. In assessing the overall quality of professional learning, it canvasses the views of all members of teaching and non-teaching staff. This is done via a pre-visit survey and then through extended interviews with a cross-section of staff during the day of the evaluation, which is peer reviewed with another member of the network. What I particularly like about the TDT audit is the way it provides rigorous external feedback on what is working and what requires improvement. There is no spurious judgement, but rather crucial feedback about what staff think of their own school’s CPD and a cool appraisal of whether or not its culture and practices enable new learning to be enacted.

  4. Behaviour evaluation – focuses on changes in behaviours as a result of training received

Professional development cannot really be considered successful if the day-to-day behaviours of teachers have not changed. As we all know, this usually takes a great deal of time. Even small changes in practice, such as trying to avoid talking whilst students are working, can take a great deal of practice and feedback. Focused observations are a useful support in this process and can be requested by individuals who want feedback on how their behaviours have changed and what they may wish to change in the future. These observations are agreed at the outset and are purely developmental.

Perhaps the most reliable and useful source of ongoing evaluation of a teacher’s behavioural change in the classroom is the students themselves. Next year, we intend to introduce student evaluations, which again are not designed to catch staff out but rather to give teachers useful feedback on the one or two identified areas of change that they have been deliberately working on, for either their subject goal or their learning question. It was too soon to introduce these this year, particularly as we wanted to be careful about making sure that student evaluations are embraced, not feared.

  5. Results evaluation – assesses the impact of professional development on outcomes

At the outset of the appraisal in early October, teachers identify specific classes, groups of students and aspects of their classroom teaching or their students’ learning that they want to change as a consequence of their professional learning. This identification of outcomes is a structured and supported process, which not only looks back at previous examined and non-examined results, but also looks forward to future curriculum and timetable challenges. We no longer set arbitrary performance targets, but do seek to establish clearly-defined outcomes in relation to student learning. Again, the TDT resources have proven a very useful guide.

The intention for this year is to look closely at the impact of bespoke department and school-wide professional development on specified student outcomes. There may be some mileage in considering this in the aggregate too, but we are very much aware that much of the nuance is lost in such a process. In time it may be possible to align the goals of individual classroom contexts more closely with those at department or whole-school level, but that is something for the future. This is by no means a flawless approach, but it does get much closer to evaluating the thread between teacher growth and student achievement. David Weston, Chief Executive of the Teacher Development Trust, provides a different, more immediate way of building evaluation into professional development with this wonderful worked example of a group of science teachers working on a common problem.

As I have already stressed, ours is still very much a work in progress. I do think, however, that we are much further along in understanding the importance of evaluation in relation to professional development, and what this might look like in practice.

Thanks for reading.

References:

Bates, R. (2004) ‘A critical analysis of evaluation practice: the Kirkpatrick model and the principle of beneficence’

Creemers, B., Kyriakides, L. and Antoniou, P. (2013) Teacher Professional Development for Improving Quality of Teaching

Guskey, T. (2000) Evaluating Professional Development

Scriven, M. (1991) ‘Prose and Cons about Goal-Free Evaluation’

* image adapted from: http://www.growthengineering.co.uk/why-public-recognition-motivates-us/

 


Appraisal: down but maybe not quite out!


So, it’s that time of the school year when teachers dust off their performance management paperwork, remind themselves of the targets set 12 months previously, and then cobble together some ‘evidence’ to meet them. In some schools this is a routine, perfunctory process, a bit time-consuming and inconvenient, but nevertheless relatively benign; in others it is just as time-consuming and inconvenient, but with a lot more stress, with exam performance targets under close scrutiny and pay awards in the balance. In either case, the whole process is a monumental waste of time.

In recent weeks two very different responses to the future of annual appraisal have emerged. For some, the whole process is so flawed, broken and inefficient that the only logical course of action is to get rid of it completely. Jack Marwood’s post on the subject is instructive here. At the other end of the spectrum are those who also see the process as flawed, broken and inefficient, but not necessarily terminally so. For these, a more humane, purposeful and impactful appraisal procedure is possible – one that balances the needs of the individual teacher with the needs of the students in the school. Whilst I can certainly see the appeal of jettisoning the behemoth that is performance management, I think there is still hope: appraisal can be done better.

Appraisal and professional growth

This week we took our first significant step towards building a better appraisal model. We believe the changes we have introduced will, over time, help to develop teachers and improve the quality of teaching and learning in the school. By taking out the deeply flawed and reductive measure of exam performance, and shifting the emphasis towards disciplined self-enquiry, we have begun to see teachers setting more meaningful, focused and impactful objectives for themselves. Marrying these identified goals to provision from the school professional development programme is, we think, much more rigorous and much more likely to bring about change in the classroom.

Every teacher and classroom-based staff member identifies two professional learning goals – one relating to their subject pedagogy and framed as a target; the other more enquiry-based and framed as a question. Both objectives are informed by reflection on current practice coupled with anticipation of future challenges. A number of tools have been created to guide this enquiry process, which include looking at a broad range of student outcome data (assessment, book learning, survey results) as well as more evaluative teacher reflection information. The introduction of a learning journal knits the whole process together; it is where all ongoing professional development activity will be recorded, whether wider reading, CPD session summaries, planning ideas or reflection notes. At the review stage we want the conversation to be about lessons learned in understanding teaching and learning, not crude interrogations of decontextualised numerical data.

Perhaps the other important change to the way we are developing appraisal is giving it the time and respect it deserves. I have written before about our new Wednesday afternoon Professional Growth programme, where we have two hours of enshrined CPD every week. This structure gives us the scope to invest in getting professional learning right. Last week we set aside some of our two-hour training slot to afford staff the time and space to think carefully about their development and what they need to focus on to improve and make a difference to the students they teach or support. We also used yesterday as an INSET day so that the vast majority of staff could have a sustained period of time to discuss their professional learning – to look closely at what has gone before to better plan for what lies ahead.


Subject pedagogy goal

This objective is very much focused on developing an aspect of the teaching craft. It is highly specific, both in terms of the actual aspect of pedagogy identified and in relation to the stated student outcomes that will follow as a result of any change in teacher behaviour. Last year we introduced lesson study into the school through the fantastic Teacher Development Trust. The process of setting an enquiry question at the heart of the lesson study model greatly informed the way we are framing subject pedagogy targets. We want to get much better at concentrating our efforts where they are most required, and these kinds of focused goals do just that. They also help us to measure the impact of our professional development programme on student outcomes, by evaluating the impact of individual training plans and looking at the cumulative effect of those plans across the whole school.


Research and Enquiry Question

Unlike the subject pedagogy goal, which focuses more on improvements to the art of teaching, the enquiry question is geared towards reaching a better understanding of student learning. The school has three main focuses in relation to how students learn: metacognition, short- and long-term memory, and feedback. Enquiry questions are set in light of one of these three overarching themes and reflect the convergence of individual teacher need and whole-school priority. The theme inherent in the question determines the learning community that the teacher is part of for the rest of the year – an iterative process that begins with a research overview, wider reading and group discussion before moving towards collaborative planning and individual ongoing enquiry supported by a lead learner. Accountability is not so much about providing a definitive answer to the question, but rather about demonstrating that the question has prompted a deeper understanding of the underlying issues and how they might be addressed.


This is by no means a perfect model – far from it. It will obviously take a few years to refine the process, and we must make sure that we continue to provide the time necessary throughout the year for meaningful conversations about the impact of professional learning on what happens in the classroom. Gone must be the days of meeting once a year to set crude performance targets that everyone forgets about until 12 months down the line. We are already thinking about affording the interim review the same status as the annual review by giving over another INSET day to evaluating progress and adjusting development plans accordingly.

Appraisal directly linked to unreliable performance outcomes does not work – it breeds a culture of fear and inertia, when what we want is continual professional learning that leads to one or two informed, intentional changes aimed where the need is greatest. We hope our model is moving closer in this direction.

Thanks for reading.