At a time of shrinking budgets, there is a need for reliable formative and summative feedback about the efficacy of professional learning. It is not acceptable to assume that, however well intentioned or well received a school’s CPD programme is, it is necessarily right for it to continue. If it is not having an impact on student outcomes, whether in the narrowest sense of achievement or more broadly across other competencies, then it has, at the very least, to be called into question. It may well be that other forms of professional learning are more effective, or perhaps, as some would argue, that no CPD would have more impact, freeing up busy teachers to plan and mark better. If you have no way of knowing, then you may be wasting valuable time and resources on the wrong thing.
The problem is that whilst we may well agree that evaluating the impact of training on student outcomes is important, it is far from straightforward to measure this impact in a robust and efficient way. I know how hard it is because we have spent the past few years trying to figure out how to do evaluation better. I don’t think we have cracked it – far from it – but with the support of organisations like the fantastic Teacher Development Trust, we are getting closer to understanding what successful evaluation looks like and how to align our systems and practices so they are congruent with the content and aims of our professional learning.
There are a number of theoretical models for evaluating professional development, all of which have benefits and flaws. Kirkpatrick’s (1959, 1977, 1978) model from the world of business offers four types of evaluation. Despite criticisms, such as its failure to consider the wider cultural factors of the organisation and its assumptions about causality between the levels, it provides a useful framework for thinking about what should go into effective evaluation. Likewise, although it runs counter to what we know about effective CPD, namely having a clear sense of intended outcomes, Scriven’s (1972) notion of goal-free evaluation also has its place, allowing within the evaluation process a place for identifying a range of impact outcomes, whether originally intended or not.
My favourite model for evaluating CPD, however, is Guskey’s (2000) hierarchy of five levels of impact. In this model the five levels are arranged hierarchically with each one increasing in complexity. The final two levels – including the last one which looks at the impact of professional learning on student outcomes – are the hardest to achieve, which no doubt explains why so many schools, including my own, have not done them terribly well. In many respects Guskey’s model bears similarities to Kirkpatrick’s framework, but crucially it adds a fifth level of evaluation, one that looks at the impact at an organisational level, which is useful for trying to make sure that the aims of a school’s CPD programmes are not undermined elsewhere by its culture or systems.
In the rest of this post, I will briefly outline each of the five levels in Guskey’s model and then explain what practices we are currently undertaking within each to improve the evaluation of our professional development. This is very much still a work in progress, so any feedback would help us make further refinements.
- Reaction quality – evaluates how staff feel about the quality of their professional learning
In many respects, this area of evaluation is quite soft: basing evaluation on whether participants liked or disliked specific activities, rather than objectively evaluating the impact where it counts, has rightfully been challenged as weak. I do think, however, that it is still important to include some element of qualitative staff feedback within the overall evaluation process, particularly if suggestions can be acted upon easily to increase buy-in.
To this end, we send out reaction quality surveys after every short-form CPD session. Each survey has only two sections. The first asks participants to evaluate the extent to which session objectives have been met, whilst the second invites more ‘goal free’ reaction feedback by asking what was learned and what participants would like to see included or amended in future sessions.
- Learning evaluation – measures knowledge, skills and attitudes acquired through training
This aspect of evaluation is linked in with our appraisal process. I have already written about the changes to our appraisal this year, which have gone down well so far, with enhancements to follow after feedback. Essentially, all teachers, classroom support staff and non-teaching staff identify two main goals: the first is a subject (or department/role) target orientated towards developing a specific aspect of pedagogy, practice or knowledge, whilst the second is a learning question, allowing for enquiry into the more nebulous and complex aspects of improvement that lie at the heart of our daily practice.
The subject goal is supported by departments or teams during their fortnightly subject CPD time. For instance, a couple of science teachers seeking to improve their modelling might work together using IRIS lesson observation equipment, or a group of religious studies teachers might run seminars during department pedagogy time on the knowledge required to teach their new specifications. The enquiry question is supported by the wider CPD programme, the bulk of which takes place in learning communities that are selected during the appraisal process and aim to provide the necessary input and ongoing support.
The evaluation itself comes in two parts. The first is a professional audit, which we instigated for the first time last year and will be revisited in the summer term to see the extent to which knowledge has changed. The second part is built into the appraisal process, where through a combination of a learning journal, voluntary targeted observations and professional dialogue colleagues can demonstrate the new knowledge and insights they have acquired in their department training or through participation in their learning community.
The model is based upon a number of sources, including the helpful lesson study enquiry cycle put together by the Teacher Development Trust. Both interim and annual appraisals provide opportunities for meaningful discussions about individual development, as well as for the evaluation of individual and aggregated professional learning. This is not so much about holding individuals to account, but rather a means of fostering an ethos of continual improvement and gaining insight into what training adds value and what doesn’t.
- Organisational evaluation – assesses the support and ethos of the organisation
This third level of evaluation in Guskey’s model represents the missing part of Kirkpatrick’s framework – evaluation of school ethos and support for CPD. As Guskey observes, it would be ridiculous for an individual teacher or group of teachers to receive high quality training that they understand in theory, agree with in principle but cannot put into practice because of ‘organizational practices that are incompatible with implementation efforts’.
The problem, however, with assessing the support and ethos across a whole school, and evaluating whether it is aligned with the objectives and content of the professional learning programme, is that it requires an objective, external voice – the ‘critical friend’ cited in recent reports into effective teaching and professional development. Fortunately, we are members of the Teacher Development Trust, and one of the benefits of membership of their network is the regular external audit of CPD. Unlike other brands of external judgement, this one is supportive and helpful – in the summative sense, but even more so in the formative. This post from TeacherToolkit provides a useful insight into one school’s experience of the TDT audit.
The audit is split into seven categories, with three levels of award for elements within these categories – Gold, Silver and Bronze. In assessing the overall quality of professional learning, it canvasses the views of all members of teaching and non-teaching staff. This is done via a pre-visit survey and then through extended interviews with a cross-section of staff during the day of the evaluation, which is peer reviewed with another member of the network. What I particularly like about the TDT audit is the way it provides rigorous external feedback on what is working and what requires improvement. There is no spurious judgement, but rather crucial feedback about what staff think of their own school’s CPD and a cool appraisal of whether or not its culture and practices enable new learning to be enacted.
- Behaviour evaluation – focuses on changes in behaviours as a result of training received
Professional development cannot really be considered successful if the day-to-day behaviours of teachers have not changed. As we all know, this usually takes a great deal of time. Even small changes in practice, such as trying to avoid talking whilst students are working, can take a great deal of practice and feedback. Focused observations are a useful support in this process and can be requested by individuals who want feedback on how their behaviours have changed and what they may wish to consider changing in the future. These observations are agreed at the outset and are purely developmental.
Perhaps the most reliable and useful source of ongoing evaluation of a teacher’s behavioural change in the classroom is the students themselves. Next year, we intend to introduce student evaluations, which again are not designed to catch staff out but rather to give teachers useful feedback on the one or two identified areas of change that they have been deliberately working on, for either their subject goal or their learning question. It was too soon to introduce this year, particularly as we wanted to be careful about how we make sure that student evaluations are embraced, not feared.
- Results evaluation – assesses the impact of professional development on outcomes
At the outset of the appraisal in early October, teachers identify specific classes, groups of students and aspects of their classroom teaching or their students’ learning that they want to change as a consequence of their professional learning. This identification of outcomes is a structured and supported process, which not only looks back at previous examined and non-examined results, but also looks forward to future curriculum and timetable challenges. We no longer set arbitrary performance targets, but do seek to establish clearly-defined outcomes in relation to student learning. Again, the TDT resources have proven a very useful guide.
The intention for this year is to look closely at the impact of bespoke department and school-wide professional development on specified student outcomes. There may be some mileage in considering this in the aggregate too, but we are very much aware that much of the nuance is lost in such a process. It may be possible in the future to align the goals of individual classroom contexts more closely with those at department or whole-school level, but this is very much something for the future. This is by no means a flawless approach, but it does get much closer to evaluating the thread between teacher growth and student achievement. David Weston, Chief Executive of the Teacher Development Trust, provides a different, more immediate way of building evaluation into professional development with his wonderful worked example of a group of science teachers working on a common problem.
As I have already stressed, ours is still very much a work in progress. I do think, however, that we are much further along in understanding the importance of evaluation in relation to professional development, and what this might look like in practice.
Thanks for reading.
Bates, R. (2004) ‘A critical analysis of evaluation practice: the Kirkpatrick model and the principle of beneficence’
Creemers, B., Kyriakides, L. and Antoniou, P. (2013) Teacher Professional Development for Improving Quality of Teaching
Guskey, T. (2000) Evaluating Professional Development
Scriven, M. (1991) ‘Pros and Cons about Goal-Free Evaluation’
* image adapted from: http://www.growthengineering.co.uk/why-public-recognition-motivates-us/