Disciplined enquiry, or how to get better at getting better


How do you know what to do to improve your teaching? And if you can identify what you need to do to get better, how do you know whether what you are doing to improve is actually making a difference where it really matters: in developing your students’ learning?

I think there are probably five main sources available to teachers to help them identify areas for their improvement. These are the data on their students’ outcomes, feedback from their colleagues, feedback from their students, research evidence into what works and where, and, finally, their reflections about their practice.

Each of these sources can be extremely useful, providing teachers with valuable insights into where they might need to focus. Equally, they can all be very unhelpful, giving unreliable feedback on areas of strength and weakness, particularly where limitations and nuances are not fully understood, or where potential improvement tools are used as performance measures.

Perhaps the best approach is to take a number of these sources of feedback together, increasing the likelihood of identifying genuine areas for improvement. In subsequent posts, I hope to outline a framework that harnesses these feedback mechanisms into a clear and systematic structure, but for now I want to focus on exploring just one means of self-improvement: getting better at being you.

In many respects, you are both the best source of feedback and the worst source of feedback; you can be wise and foolish in equal measure! The problem is that, whilst you are undoubtedly the one who spends the most time with your students and the one who thinks the most carefully about how to help them improve, you are also extremely prone to bias and flawed thinking, which can make it hard for you to trust your judgements, especially in relation to developing your own practice.

Others have written extensively about human fallibility and the dangers of trusting instinct. Daniel Kahneman’s Thinking, Fast and Slow, David Didau’s What If Everything You Knew About Education Was Wrong? and David McRaney’s You Are Not So Smart all provide excellent insights into how we humans routinely get things wrong. It is clear, then, that we need to understand and respect our cognitive limitations and avoid thinking we know what works just because it feels right. Instinct is not enough. That said, I believe we can be useful sources of feedback in relation to improving our own teaching, particularly if we can learn how to reduce the impact of our biases and get better at being more objective.

What is disciplined enquiry?

Honing the skills of restrained reflection is the hallmark of a disciplined enquirer, and disciplined enquiry is what I have come to think is probably the best way we can grow and develop as a profession. Like many terms in education, disciplined enquiry means lots of different things to lots of different people. For me, it represents the intersection between the science and the craft of teaching, and involves a systematic approach that encourages teachers to ‘think hard’ about their improvement and to make use of the best available evidence to inform their decision-making. My definition of a disciplined enquirer tries to capture this complexity:

A disciplined enquirer draws upon internal and external experience – they operate as both subject and object in relation to improving their own practice. Through a systematic framework, a disciplined enquirer develops the ability to limit the impact of bias, whilst learning how to become more attuned to interpreting the complexity of the classroom, such as appreciating the role of emotions, the impact of actions and the nature of relationships. Over time, and through deliberate noticing, they become increasingly sensitive to patterns of behaviour, learning how to react better in the moment and how to make better decisions in the future.

Understanding how we make decisions

Perhaps the first step to becoming a disciplined enquirer is to recognise the nature of decision-making itself. Kahneman’s model of system one and system two thinking is instructive here. System one thinking describes the way we use mental shortcuts to quickly make sense of complex phenomena and to give us the appearance of coherence and control, whereas the system two model uses a more methodical and analytical approach to decision-making, where we take our time to review and weigh up choices. The trade-off between the two modes is time and effort. The result is that busy teachers come to rely more and more on quick, instinctive system one thinking over the slower, more deliberate system two model, which can lead to mistakes.

As well as understanding how we make decisions and how we react to given situations, a disciplined enquirer needs to appreciate the way that we gain insights in the first place, since it is the opening up of new ways of seeing that we are ultimately looking for in order to help us improve our practice. It seems to me that if we know the conditions under which we are more likely to learn something new, whether about our teaching, our students’ learning or any other aspect of the classroom environment, then we are better able to take steps to recreate these conditions and harness them when they manifest.

In Seeing What Others Don’t, Gary Klein uses a triple-path model to illustrate the ways in which we commonly reach such new insights. Klein’s model challenges the widely held notion of eureka moments, where inspiration or epiphany follows long periods of gestation. From studying decision-making in naturalistic conditions, Klein suggests there are three main triggers that typically lead to new insights – contradiction, connection, and creative desperation. These triggers, working on their own or in combination, shift or supplant the existing anchors that we ordinarily rely upon to make decisions. An anchor is a belief or story that gives us a sense of coherence and informs the decisions that we make, often without us even realising.


In some respects, Klein’s anchors resemble the idea of mental shortcuts, or heuristics, in Kahneman’s model of system one thinking. The anchor and the heuristic both guide action, usually subconsciously, and both can prevent us from seeing things clearly. Whilst we need heuristics (or anchors) to make our daily lives manageable – getting from A to B, for instance, without endlessly checking the route – for more complex decision making, such as that which constitutes classroom teaching, they can often lead us to make mistakes or develop false notions of what works. Disciplined enquiry should therefore seek to find ways to engage system two thinking, and to consciously trigger the cultivation of better anchors to help us improve our decision-making.

There are a number of steps that can help achieve this end. The diagram below gives an idea of what this might look like in practice. None of the suggestions is a panacea – it is surprisingly difficult to shift our thinking in relation to our deeply held values and beliefs – but they are an attempt to provide some sense of how we could get better not only at making decisions, but also at being aware of the reasons why we are making those decisions in the first place. The goal for disciplined enquiry is, then, to try to find ways to override system one intuition and activate system two consideration.


Identifying inconsistency

One example Klein uses to illustrate the trigger of identifying inconsistency is the case of an American police officer who, whilst following a new car, is struck by the strange behaviour of the man in the passenger seat. The car is otherwise being driven normally, but the officer notices the passenger appear to stub a cigarette out on the seat. What he witnesses is at odds with his understanding of what people normally do when riding as passengers in new cars. As a result, he decides to pull the car over – an action that leads to an arrest, when it turns out that the car has in fact been stolen.

There are several ways a disciplined enquirer can set out to deliberately create this kind of inconsistency of thought – the sort of cognitive dissonance that might lead to a useful new insight into an aspect of pedagogy. One obvious way is to actively seek out alternative views or dissenting voices. Rather than always being surrounded by likeminded opinions, whether online or in the staffroom, teachers wishing to improve their practice should spend time listening to the views of those with contrary positions. This approach helps to avoid groupthink and fosters the kind of self-questioning that might shed light on an area of practice previously hidden.

Spotting coincidence

Unlike the trigger of identifying inconsistency, the trigger of spotting coincidence is about looking for similarities and patterns between phenomena and using these revealed relationships to build new insights. One of Klein’s examples of how spotting coincidence can change understanding and lead to meaningful changes in practice involves the American physician Michael Gottlieb. After noticing connections between the symptoms of a number of his homosexual patients in the early 1980s, Gottlieb began to realise that what he was dealing with was something very different from, and far more important than, anything he had previously experienced. His insights led him to publish the first announcement of the AIDS epidemic.

There are two crucial aspects of this story in respect of disciplined enquiry. The first is that Gottlieb’s insight didn’t happen overnight. It was a slow process over a long period of time, involving the gradual noticing of patterns that could not initially be attributed to something already known. Too often we teachers try to make too many changes to our practice too quickly, without understanding or assessing their impact. The second important point is the way Gottlieb retained his focus – he didn’t just notice something once, think it was interesting and then move on; instead he relentlessly pursued an emerging pattern, consciously noting down his observations, until he could formulate them into something more concrete and usable.

One of the key things that leads to developing new insights is thus a combination of time and deliberate attention: being alive to the possibility that two or three things that have something in common may lead to something more meaningful – or they may not. As the name suggests, disciplined enquiry involves disciplined focus, something so often overlooked in education in the scramble to share untested best practice. It is far better to isolate one or two variables in the classroom and look to notice their impact on student learning than to proceed on a whim.

Escaping an impasse

Perhaps the most poignant story in Klein’s book is that of a group of smokejumpers who were parachuted into the hills of Montana in 1949 in an attempt to control a large forest fire that was spreading quickly. The firefighters were soon caught in the fire themselves as it moved swiftly up the grassy hillside. The men tried to outrun the fire, but sadly only two of the original 15 made it to the top. The other 13 could not run fast enough and were consumed by the onrushing flames.

One of the two men to survive was Wagner Dodge who, like the others, initially tried to outrun the flames, but, unlike the others, realised that this wasn’t going to work and that unless he did something different he would die. His quick-thinking insight was to set fire to a patch of grass ahead of him, thus creating an area of safety where he could stand once the fire was deprived of its fuel. In a moment of literal life-and-death decision-making, Dodge had arrived at a creative solution that had unfortunately passed his friends by. Out of desperation, Dodge had discarded his intuition (to run) and thought hard about a radical solution (to cut off the fire’s fuel source).

Obviously, as important as teaching is, it is not really a profession that rests on life-or-death decisions. That said, there are aspects of the story of the Montana smokejumpers, in particular the counterintuitive actions of Wagner Dodge, that a disciplined enquirer can learn from in an effort to increase their chances of generating new insights. Foremost amongst those lessons is the way that a fixed condition – in this case the fire sweeping up the hillside – forced Dodge to focus on the other variables open to him. It may be that self-imposed limitations, such as deadlines, parameters for recording reflections or routines of practice, rather than stifle thinking, may actually encourage new ways of seeing. Being forced to consider all possibilities, including rejecting existing ideas and beliefs, could enhance our ability to make greater sense of student interaction or learning. After all, the famous Pomodoro Technique is largely predicated on the notion that short bursts of focused, time-bound thinking produce much better results than longer, drawn-out periods of study.

Disciplined enquiry is not easy and does make demands on what is already a very demanding job. That said, if there is a framework and culture that supports disciplined enquiry and makes the systematic study of one or two areas of improvement routine, then I think it could be a powerful means of both individual teacher and whole school improvement. What this framework might look like will be the subject of my next post.


Evaluating CPD: hard but not impossible


At a time of shrinking budgets, there is a need for reliable formative and summative feedback about the efficacy of professional learning. It is not acceptable to assume that, however well intentioned or well received a school’s CPD programme is, it is necessarily right for it to continue. If it is not having an impact on student outcomes, whether in the narrowest sense of achievement or more broadly across other competencies, then it has, at the very least, to be called into question. It may well be that other forms of professional learning are more effective, or perhaps, as some would argue, that no CPD would have more impact, freeing up busy teachers to plan and mark better. If you have no way of knowing, then you may be wasting valuable time and resources on the wrong thing.

The problem is that whilst we may well agree that evaluating the impact of training on student outcomes is important, it is far from straightforward to measure this impact in a robust and efficient way. I know how hard it is because we have spent the past few years trying to figure out how to do evaluation better. I don’t think we have cracked it – far from it – but with the support of organisations like the fantastic Teacher Development Trust, we are getting closer to understanding what successful evaluation looks like and how to align our systems and practices so they are congruent with the content and aims of our professional learning.

There are a number of theoretical models for evaluating professional development, all of which have benefits and flaws. Kirkpatrick’s (1959, 1977, 1978) model from the world of business offers four types of evaluation. Despite criticisms, such as its failure to consider the wider cultural factors of the organisation and its assumptions about causality between the levels, it provides a useful framework for thinking about what should go into effective evaluation. Likewise, although it runs counter to what we know about effective CPD, namely having a clear sense of intended outcomes, Scriven’s (1972) notion of goal-free evaluation also has its place, allowing the evaluation process to identify a range of impact outcomes, whether originally intended or not.

My favourite model for evaluating CPD, however, is Guskey’s (2000) hierarchy of five levels of impact. In this model the five levels are arranged hierarchically, with each one increasing in complexity. The final two levels – including the last one, which looks at the impact of professional learning on student outcomes – are the hardest to achieve, which no doubt explains why so many schools, including my own, have not done them terribly well. In many respects Guskey’s model bears similarities to Kirkpatrick’s framework, but crucially it adds an extra level of evaluation, one that looks at impact at an organisational level, which is useful for trying to make sure that the aims of a school’s CPD programmes are not undermined elsewhere by its culture or systems.

In the rest of this post, I will briefly outline each of the five levels in Guskey’s model and then explain what practices we are currently undertaking within each to improve the evaluation of our professional development. This is very much still a work in progress, so any feedback received would help us make further refinements moving forward.

  1. Reaction quality – evaluates how staff feel about the quality of their professional learning

In many respects, this area of evaluation is quite soft: basing evaluation on whether participants liked or disliked specific activities, rather than objectively evaluating impact where it counts, has been rightfully challenged as weak. I do think, however, that it is still important to include some element of qualitative staff feedback within the overall evaluation process, particularly if suggestions can be acted upon easily to increase buy-in.

To this end, we send out reaction quality surveys after every short-form CPD session. The survey has only two sections. The first asks participants to evaluate the extent to which session objectives have been met, whilst the second invites more ‘goal-free’ reaction feedback by asking about what was learned and what participants would like to see included or amended in future sessions.


  2. Learning evaluation – measures knowledge, skills and attitudes acquired through training

This aspect of evaluation is linked in with our appraisal process. I have already written about the changes to our appraisal this year, which have gone down well so far, with enhancements to follow after feedback. Essentially, all teachers, classroom support staff and non-teaching staff identify two main goals: the first is a subject (or department/role) target orientated towards developing a specific aspect of pedagogy, practice or knowledge, whilst the second is a learning question, allowing for enquiries into the more nebulous and complex aspects of improvement that lie at the heart of our daily practice.

The subject goal is supported by departments or teams during their fortnightly subject CPD time. For instance, a couple of science teachers seeking to improve their modelling might work together using IRIS lesson observation equipment, or a group of religious studies teachers might run seminars during department pedagogy time on the knowledge required to teach their new specifications. The enquiry question is supported by the wider CPD programme, the bulk of which takes place in learning communities that are selected during the appraisal process and aim to provide the necessary input and ongoing support.

The evaluation itself comes in two parts. The first is a professional audit, which we instigated for the first time last year and which will be revisited in the summer term to see the extent to which knowledge has changed. The second part is built into the appraisal process, where, through a combination of a learning journal, voluntary targeted observations and professional dialogue, colleagues can demonstrate the new knowledge and insights they have acquired in their department training or through participation in their learning community.

The model is based upon a number of sources, including the helpful lesson study enquiry cycle put together by the Teacher Development Trust. Both interim and annual appraisals provide opportunities for meaningful discussions about individual development, as well as for the evaluation of individual and aggregated professional learning. This is not so much about holding individuals to account, but rather a means of fostering an ethos of continual improvement and gaining insight into what training adds value and what doesn’t.

  3. Organisational evaluation – assesses the support and ethos of the organisation

This third level of evaluation in Guskey’s model represents the missing part of Kirkpatrick’s framework – evaluation of school ethos and support for CPD. As Guskey observes, it would be ridiculous for an individual teacher or group of teachers to receive high quality training that they understand in theory and agree with in principle, but cannot put into practice because of ‘organizational practices that are incompatible with implementation efforts’.

The problem, however, with assessing the support and ethos across a whole school, and evaluating whether it is aligned with the objectives and content of the professional learning programme, is that it requires an objective, external voice – the ‘critical friend’ cited in recent reports into effective teaching and professional development. Fortunately, we are members of the Teacher Development Trust, and one of the benefits of membership of their network is the regular external audit of CPD. Unlike other brands of external judgement, this one is supportive and helpful – both in the summative and, moreover, in the formative sense. This post from TeacherToolkit provides a useful insight into one school’s experience of the TDT audit.

The audit is split into seven categories, with three levels of award for elements within these categories – Gold, Silver and Bronze. In assessing the overall quality of professional learning, it canvasses the views of all members of teaching and non-teaching staff. This is done via a pre-visit survey and then through extended interviews with a cross-section of staff during the day of the evaluation, which is peer reviewed with another member of the network. What I particularly like about the TDT audit is the way it provides rigorous external feedback on what is working and what requires improvement. There is no spurious judgement, but rather crucial feedback about what staff think about their own school’s CPD and a cool appraisal of whether or not its culture and practices enable new learning to be enacted.

  4. Behaviour evaluation – focuses on changes in behaviours as a result of training received

Professional development cannot really be considered to have been successful if the day-to-day behaviours of teachers have not changed. As we all know, this usually takes a great deal of time. Even small changes in practice, such as trying to avoid talking whilst students are working, can take a great deal of practice and feedback. Focused observations are a useful support in this process and can be requested by individuals who want to gain feedback on how their behaviours have changed and what they may wish to consider changing in the future. These observations are agreed at the outset and are purely developmental.

Perhaps the most reliable and useful source of ongoing evaluation of a teacher’s behavioural change in the classroom is the students themselves. Next year, we intend to introduce student evaluations, which again are not designed to catch staff out, but rather to gain useful feedback for teachers with regards to the one or two identified areas of change that they have been deliberately working on, for either their subject goal or their learning question. It was too soon to introduce this year, particularly as we wanted to be careful about how we make sure that student evaluations are embraced, not feared.

  5. Results evaluation – assesses the impact of professional development on outcomes

At the outset of the appraisal in early October, teachers identify specific classes, groups of students and aspects of their classroom teaching or their students’ learning that they want to change as a consequence of their professional learning. This identification of outcomes is a structured and supported process, which not only looks back at previous examined and non-examined results, but also looks forward to future curriculum and timetable challenges. We no longer set arbitrary performance targets, but do seek to establish clearly-defined outcomes in relation to student learning. Again, the TDT resources have proven a very useful guide.

The intention for this year is to look closely at the impact of bespoke department and school-wide professional development on specified student outcomes. There may be some mileage in considering this in the aggregate too, but we are very much aware that much of the nuance is lost in such a process. It may be possible in the future to more closely align the goals of individual classroom contexts to those at department or whole-school level, but this is very much something for the future. This is by no means a flawless approach, but it does get much closer to evaluating the thread between teacher growth and student achievement. David Weston, Chief Executive of the Teacher Development Trust, provides a different, more immediate way of building evaluation into professional development with this wonderful worked example of a group of science teachers working on a common problem.

As I have already stressed, ours is still very much a work in progress. I do think, however, that we are much further along in understanding the importance of evaluation in relation to professional development, and what this might look like in practice.

Thanks for reading.

References:

Bates, R. (2004) ‘A critical analysis of evaluation practice: the Kirkpatrick model and the principle of beneficence’

Creemers, B., Kyriakides, L. and Antoniou, P. (2013) Teacher Professional Development for Improving Quality of Teaching

Guskey, T. (2000) Evaluating Professional Development

Scriven, M. (1991) ‘Prose and Cons about Goal-Free Evaluation’


 

What Makes Great Training? 10 ideas for developing subject knowledge and pedagogy


The need to improve the quality of professional training for teachers is, I think, becoming increasingly well understood. In a time of shrinking budgets and teacher shortages, improving professional development has in some ways become as much about teacher recruitment and retention as it is about improving student outcomes.

Recent publications have provided clarity about where leaders should target their efforts to improve in-school professional learning. The 2014 Sutton Trust report into Great Teaching, for instance, outlines the benefits for student outcomes of teachers who are well versed in their subject:

‘the most effective teachers have deep knowledge of the subjects they teach, and when teachers’ knowledge falls below a certain level it is a significant impediment to students’ learning. As well as a strong understanding of the material being taught, teachers must also understand the ways students think about the content, be able to evaluate the thinking behind students’ own methods, and identify common misconceptions.’

The need to focus school CPD on developing subject pedagogy alongside more generic forms of training is also a feature of the more recent Teacher Development Trust review, Developing Great Teaching:

the findings from this review indicate the importance of focussing on generic and subject specific pedagogy, so it will be important to consider how subject expertise in particular can be developed alongside more generic aspects as part of CPDL.

In many respects, it should not come as any great surprise that the greatest impact on student outcomes is likely to come from a teacher who knows their subject well and knows how to teach its nuances and challenges to different learners at different stages of their development. The heavy focus on developing ‘generic’ skills was wrong and imbalanced.

As with most things related to teacher development, however, just knowing what to do can be a far cry from actually being able to put it into practice. I know a lot of teachers, myself included, who broadly understand how to improve aspects of pedagogy, such as giving explanations, honing questioning or improving modelling, but who are not always able to do so themselves. Implementing the how often proves more difficult than understanding the why.

In a similar vein, I suspect some school leaders understand the need to focus CPD efforts on developing subject pedagogy, but have not yet figured out how to do it effectively. I know from my own experience how hard it is to make a more subject-specific model of professional learning work. Time is a significant factor, but so are levels of expertise, particularly, for example, amongst some heads of department.

For years, I suspect that many subject leaders have not really been responsible for shaping the professional development of their teams. Just turning over that responsibility – particularly at a time of considerable change in exam syllabuses and assessment – is unlikely to bring about any significant change in the quality of professional learning. This responsibility is significant, especially for less experienced colleagues or colleagues schooled in genericism.

Last September, we changed our curriculum, which meant we could enshrine two hours of professional development a week. The majority of that time – around 40+ hours – is dedicated to improving subject knowledge and subject pedagogy. Whilst this is fantastic, just making more time available was only ever a part of the answer to reversing the failings of the past. We want departments to be in a position to continually develop a better understanding of their subject’s unique demands, so we need to provide them with the tools and guidance to make this happen, which takes time and careful planning.

10 Ideas for improving subject knowledge and pedagogy

In this post I offer 10 ideas about the kinds of activities and resources that we have looked to introduce to help teachers and departments develop their subject knowledge and subject pedagogy. It is not an exhaustive list, but I hope it gives a few pointers about where to start the process of developing subject-specific CPD, or where further improvements can be made for those already in a strong position, such as Durrington School in West Sussex.

  1. Presentations and seminars 

Giving presentations or running seminars on particular areas of strength is an excellent way of sharing the responsibility for developing subject knowledge across a department and, moreover, of improving the ability of individuals to present to adult learners. Some of our departments have developed their own subject knowledge audits to identify strengths and design seminar schedules across the course of the year. In some of these sessions there has been pre-reading, or post-seminar activities, such as discussion groups or lesson and curriculum planning sessions. It is our intention to have audits for every subject, partly to pinpoint training needs, but also to help identify and, in turn, circulate expertise more easily, particularly across larger departments.

  2. Subject knowledge audits

Identifying the spread of knowledge in a department is an important step in planning for the development of individual teachers and making sure the needs of the students are successfully met. Subject audits also provide an excellent means of identifying existing areas of expertise, which can be harnessed for the benefit of others. Threshold concepts might be a good way to audit knowledge, but whatever methodology is used it is important that the subject knowledge requirements identified are genuine. In some subjects, like English, PGCE audits like this  or this can be quite vague and unhelpful. Rating degrees of confidence with teaching Victorian literature, for instance, is not the same as auditing the books I have read on the subject or posing questions that pinpoint the concepts or historical details I know. This kind of audit is, I would argue, much more helpful for identifying gaps in knowledge or for throwing up important misconceptions.

  3. Lesson Study

Whilst Lesson Study is often conducted by teachers from across a range of different subject areas, in many respects it makes more sense for three teachers from the same subject to get together to investigate a subject-specific research enquiry question. Peter Dudley, one of the architects of introducing this form of professional development into this country, certainly sees its benefits. Writing about the ‘learning points’ of groups working on pedagogical content knowledge, he notes how:

…LS group members are held [to account] by the level of detail required in their planning and analysis discussions, [which] forces even tiny differences of view about practice or content to become exposed.

Lesson Study: Professional Learning for Our Time

If you have not yet looked into lesson study, this document is a great introduction to the format and how to implement it in your school. The Teacher Development Trust and their network of schools across the country provide considerable guidance and ongoing support with implementing Lesson Study as part of an annual membership. I really cannot recommend membership of the TDT enough.

  4. Wider reading

Conducting wider reading or research takes time. Reading books, articles, reports and websites or blogs should therefore be seen as an entirely legitimate and justifiable professional learning activity. It may be that time is required to read a set text, or to research recent developments in a subject area. Departments could pay for membership of their professional body and, as a result, receive publications and journals containing valuable advice, links and networking opportunities. Academic and specialist journals are also available online, and local universities often have subscriptions and electronic access to periodicals. I wonder how many departments meet to discuss the ideas in a chapter from a textbook, or share their thoughts around a poem. These may seem like frivolous activities, but such reading eats up a lot of teachers’ time outside of school, and collaborative discussions like these can help fuel debate, identify student misconceptions and lead to a shared approach to explaining difficult concepts to children.

  5. Online courses

Online seminar courses and programmes offer an excellent way for teachers to connect with professional learning communities, including some of the most prestigious university departments and academics in the world. There are a number of different online courses, which are perfect for matching up subject specific needs with personalised learning programmes. Many of the courses are free and those that do charge are relatively inexpensive given the quantity and quality of the material provided. It would be entirely possible for both individuals and small groups of teachers to follow the same online programme, or listen and discuss a particular lecture. Mark Miller has written a good post about how he listens to a lot of wider reading on his way to work in the car.

  6. University links

As Michael Young illustrates so well in Knowledge and the Future School, it is important for subject disciplines to stay connected with their learned communities. It is these communities, namely university departments, subject associations and professional bodies, that link classroom practice to current university research and help make sure that teachers have access to cutting-edge insights into their subjects and the ways in which these can be taught. These connections can take different forms according to the nature of the subject, but in each case they help keep teachers abreast of current developments in their field, which, in turn, makes sure that students’ learning is at the forefront of knowledge both past and present. It should be perfectly acceptable for teachers or members of a department to use department or INSET time to visit a university library and research information unavailable elsewhere.

  7. Visits, exhibitions and public lectures

Visits to exhibitions, galleries or museums are often the only way for teachers to develop aspects of their subject expertise, perhaps by seeing important works first hand or learning about how an idea, style or period is represented in different formats. Public lectures by leading academics or subject experts are also a useful means of enhancing professional knowledge. Whilst it is more economical and desirable for speakers to address entire departments, this is not always possible to arrange. This post by Harry Fletcher-Wood goes into more detail about why these kinds of visits are an important part of staff development.

  8. School collaboration

The same principles of external and local collaboration should be encouraged across networks of local schools. It may well be the case that individuals or whole departments in nearby schools and sixth form colleges have specific expertise that can be utilised for the benefit of all. As with the harnessing of university expertise, local teacher knowledge and understanding can be purchased or shared as part of a reciprocal arrangement. This could take the form of developing subject knowledge, or sharing specific insights and approaches gained from individuals working closely with examination boards or subject associations. In some instances, particularly in small similar departments and faculties, it may be beneficial to pair up colleagues with similar training needs for collaborative work.

  9. Leverage coaching

If you are lucky enough to have lesson observation equipment like IRIS Connect, then you have a fantastic tool that can help you to develop a shared understanding of effective subject pedagogy. There are two main applications of the lesson observation equipment that can make a difference in supporting a department’s work on developing their understanding of effective subject pedagogy. The first is to develop a bank of masterclass videos illustrating different pedagogical techniques, contextualised within the subject and produced by members of the department. The group facility on IRIS Connect is a fantastic way to discuss points of teaching and to keep examples for posterity, such as an optimal explanation of tragedy for use with future trainees. The second is subject-specific coaching: lesson observation equipment, such as the Discovery Kit option of IRIS, provides the ideal means for short leverage coaching sessions, which could be a regular feature of departmental time. These posts, again from Harry Fletcher-Wood, are a wonderful primer on the methodology.

  10. Subject-specific external providers

There are a number of providers of subject-specific training courses and development opportunities. Below is a short list of some of the main providers of subject-specific training. Departments may wish to invite teachers who have been on external training to feed back to the rest of the department, or to colleagues who would benefit from the information or approaches shared; this acts as a further layer of professional development. Subject professional associations offer another way of finding out about high quality subject-specific professional development opportunities. Often the websites or professional journals of these associations provide details of current courses on offer, and discounts are available for members.

Some providers of subject specific knowledge and pedagogy:

  • SSAT

http://www.ssatuk.co.uk/cpd/

  • The Prince’s Teaching Institute

http://www.princes-ti.org.uk/what-we-do/teacher-subject-days/

  • Science Learning Network

https://www.sciencelearningcentres.org.uk/

I have written elsewhere of the impact of reviewing student learning as a whole-department activity, either as part of a learning review or through joint planning and assessment via a collaborative teaching cycle. Both of these are great subject-specific development activities, which I hope to write about again in the future.

Here is a useful link to a list of subject associations.

Thanks for reading.

 

Collaborative teaching cycles: from scrutinising learning to understanding learning


I’m not really a big fan of the practice of book scrutiny.

What is book scrutiny?

Book scrutiny usually involves a head of department or key stage co-ordinator collecting a sample of students’ books from across the teachers of a year group and evaluating the quality of student progress against some form of rubric or checklist. The evaluator completes a summative analysis and, depending on time and school context, provides formative feedback to the teachers concerned. In some instances this might be a ‘well done, good job’, but at other times it may be more of a ‘this or that was missing’ or ‘that was not completed in such and such a way’. In both cases, I am not sure anything of much value is actually achieved.

I understand why schools use such an approach; until fairly recently we did too. Book scrutiny represents a way of ensuring equality of provision by identifying areas for improvement, such as marking or the quality of activities. It sort of makes sense. The problem is that it doesn’t work, even in the most benign of school cultures. If we put aside for now the false premise that learning can be seen in books any more clearly than it can be seen in lessons, book scrutiny is still an epic fail because it doesn’t achieve what it sets out to do: namely, to bring about high quality learning for all students. It might help to identify in-department variation, but it is unlikely to do anything about it. Compliance alone rarely does.


Book scrutiny carried out in this manner represents a process- and accountability-driven model of school development. All that has happened is that one dubious proxy of learning (lesson observation) has been replaced by another (book scrutiny). This top-down, or rather middle-down, approach is more likely to lead to alienation and the creation of corrosive professional relationships, because the focus is on compliance with overt performance rather than on professional development through collaboration. In such a model teachers become isolated, and collected wisdom and expertise is marginalised. Worse, understanding student learning or the issues of busy teachers is more of an afterthought.

Obviously, it is important for subject leaders to have an understanding of the quality of teaching and learning in their department. But any HOD worth his or her salt would probably already know this without the need to trawl through a set of books. Instead of checking learning in abstraction under a system of compliance, I would rather engender a process of collaboration and openness where the focus was directed solely towards improving student learning by looking at student learning. I would prefer a mechanism that facilitates teachers engaging with the messiness of the classroom experience: sharing ideas about what worked, what didn’t, what explanation was effective, what tasks were or were not successful.

From scrutinising learning to understanding learning

It is my contention that teaching and learning cycles may offer such a means of developing collaborative teacher inquiry – it is a model that lends itself to facilitating teachers working together, where the leader is within the process of understanding student learning, rather than sat outside evaluating it without the context or nuance necessary to see the bigger picture. As I will outline below, at the heart of this process is student learning, whether in the books themselves, or more likely the books in conjunction with discussions, reflections and questions of the teacher who was there at the heart of the process. This ethos of trust and sharing must surely be better than a purely compliance model.

The teaching sequence for independence has been well documented by David Didau, whose five-part series on the phases building towards independence remains a must-read. This is not really the place for discussing the nature of teaching cycles in and of themselves, but rather their use as a tool for the professional development of subject pedagogy. Suffice to say, at my school we have developed our own version of the teaching cycle and have been working with departments on what it might look like in their subject areas. There are a number of differences from David’s model, which I will try to write about in due course.


For me, opportunities for meaningful professional collaboration arise at two distinct points of a teaching cycle: in the initial planning for learning phase, and then again in a subsequent review of that learning (or performance) at the end. Both are strategic points where teachers can learn a lot from working together. The planning stage represents the chance to share likely misconceptions, discuss and refine effective and efficient explanations and circulate wisdom or innovation. Reviewing the relative strengths and weaknesses of different interpretations of a cycle within a department allows for new insights to be discussed, codified and stored for future use, and for teachers and subject leaders to see different ways of teaching broadly similar objectives.

Central to both the planning and the review stage is, of course, actual student work. Over time we intend to build up stores of exemplar material that not only help to set and define what achievement looks like, but also provide a powerful lens through which to understand the processes that go into creating it. This may be in the form of writing, or it may be a video clip of a performance, a model or an artefact. Seeing what other teachers are achieving with their students is, I think, much more likely to lead to a rise in attainment than simply receiving the ‘results’ of an abstract tick-box exercise, irrespective of how deftly this may be handled. In this process of collaboration there is the chance for teachers to explain the context, challenge each other and enter into a dialogue that gets a little closer to understanding ‘what works.’


Next steps

We are lucky to be entering a phase of our school development where we have more time for ongoing professional learning. From September we will have two hours of enshrined CPD each week – our school will close early on a Wednesday and all students will be off site during this time. These two-hour sessions will largely alternate between two different forms of Professional Growth – subject knowledge and pedagogy (department time) and inquiry and reflection (wider, bespoke CPD). We plan on having departments run at least one collaborative teaching and learning cycle per year for each year group they teach, using subject pedagogy time. We will see how it goes this year and review the process next summer. Whilst I doubt it will be perfect, I think it has the potential to be a much more powerful form of active professional development than the static model of process and compliance inherent in the term ‘scrutiny’.

We’ll see.

Thanks for reading.

The Elements of Language – Lessons learned


It has been interesting to read the recent online discussions between David Didau and Daisy Christodoulou about the merits and pitfalls of different assessment models. Many of the issues they raise are ones that anyone who has invested time in creating an alternative to National Curriculum Levels has almost certainly encountered for themselves. This is probably even more the case for those working in schools that have piloted these approaches and seen flaws emerge that were not necessarily apparent from the outset. It is easy to envisage an alternative to levels, but perhaps harder to make it work in practice.

This post is about my current thinking in relation to assessment at KS3. It reflects the specific context of my school and the types of challenges and opportunities that we face in the months and years ahead. I wrote the last of my two previous posts on our English assessment model, The Elements of Language, about a year ago and since then my thinking has moved forward quite a bit, partly as a result of our experiences to date, but more as a consequence of us moving towards a significantly enhanced CPD programme next year, which will include substantial and enshrined professional development every week. This significant investment of time should enable departments to collaborate on planning, share their understanding and interpretation of assessment data and get the chance to look closely at student work together – the actual results of what happens in the classroom.

The current beliefs that underpin my approach to assessment can be summed up as follows:

  • performance descriptors are often too vague and unreliable for drawing useful inferences
  • performance descriptors can often mask student underachievement and gaps in learning
  • specific statements of the learning to be mastered organised in a logical sequence are generally more useful
  • in some subjects it is hard to reduce certain aspects of achievement down to a manageable number of specific statements about learning
  • in practice, a mastery approach to assessment can be time-consuming for teachers to implement and can detract from planning better lessons
  • threshold concepts are a useful way of mapping out transformational pathways to achievement for both students and teachers alike
  • most assessment should primarily aim to inform the next steps, whether in the classroom or more widely across a department or year group
  • any inferences drawn from assessment should be acted upon as quickly as possible
  • assessment can be a useful means of ensuring students learn and make necessary progress, with the caveat that learning takes time and progress does not look the same in every subject
  • looking at and discussing actual student work with colleagues is a powerful way of understanding the impact of classroom teaching and reaching a shared understanding of what success looks like and how to get there
  • assessment is more robust if it draws upon a range of different forms and provides multiple opportunities for that learning to be demonstrated e.g. MCQ, essay, short answers

If, as Dylan Wiliam suggests in the comments at the bottom of Daisy’s recent blog, ‘an assessment is nothing more, or less, than a procedure for making inferences’, then it is wise to make sure that whatever is used in place of levels ensures these inferences are as reliable as possible and are acted upon as quickly as is necessary. I think that what I am proposing here achieves both these ambitions and, perhaps more importantly, provides a means through which subject professionals can engage in meaningful discussions about student learning, where gaps or misconceptions can be identified and appropriate action can be taken.

Learning from past mistakes

On reflection, I made several errors in my earlier iterations of the Elements of Language. My first mistake was to include knowledge acquisition within the overall assessment framework – knowledge and vocabulary were distinct thresholds of the reading and writing Elements respectively. Whilst I am still very much committed to the centrality of knowledge development, I can see that there are probably better, more robust ways of assessing students’ acquisition of it. Broadly, I am working on the idea that in English – and perhaps other humanities subjects such as history and religious studies – there should be a core knowledge component. This component would be assessed at strategic points throughout the year, using an efficient format such as multiple choice that provides accurate formative data on whether students have learnt the requisite knowledge or not. I suppose this is a variation on the principle of knowledge organisers, though in my thinking the notion of core knowledge would probably be a bit more detailed, as well as closely linked to a systematic programme of vocabulary instruction. There will be more on what I mean by this over the coming months.

My second error was to place the notion of mastery too much at the forefront of the assessment framework – the ‘rubric’ seen by parents, teachers and students articulated what was to be learned in a very explicit way. I now believe that it is probably better for any overarching framework to contain more generalised articulations of the different thresholds (see example below), so that it is clear what stages of transformational learning students need to pass through in order to achieve genuine mastery, say with regards to developing an ability to control writing or adopting an academic voice. More specific items of learning to be mastered are, I think, better served sitting behind these threshold definitions, encoded as objectives but acting more as standards to be achieved by the end of each academic year. It is possible in my revised model to have different sets of standards depending on where students are at the beginning of the year, thus ensuring rigorous objectives are well matched to different starting points. I should stress that I really only see the notion of standards applying to maths and English at KS3, which have the time, resources and sense of urgency needed to secure core competences.

From threshold concepts to classroom teaching

In my proposed assessment model, specific to-be-learned items would be drawn from the threshold objectives (which, remember, are operating as standards) and these individual learning items would be pursued relentlessly by each teacher until an agreed level of mastery is achieved, at or around the 80% figure. In this model the threshold concepts have effectively been broken down into objectives, which have then been mapped out across the units of work for the year. These objectives, or standards, would be assessed in a holistic way only once or twice every year – suitable periods of time in which inferences about long-term learning are more likely to be valid.

On a day-to-day basis the standards across a unit of work would be reduced down yet further into specific learning items that would need to be mastered across a sequence of lessons. This sequence is a manifestation of our version of a teaching and learning cycle, one which we are introducing next year and that I will try to blog about in due course. Bodil Isaksen is right when she explains how the lesson is the wrong unit of learning. I think it is far better to see learning planned across longer periods of time, rather than in discrete one-off lessons where there is insufficient time to properly introduce, deconstruct, revisit or assess in a meaningful way. For me, the notion of a sequence or teaching and learning cycle feeds directly into the collaborative subject-based CPD we are planning for next year. Departments will be able to regularly review the relative strengths and weaknesses of a teaching sequence, and teachers will be able to get closer to understanding how their students learn.

Worked example:

Below is a copy of my revised Elements of Language for writing, where you will notice I have reduced the number of threshold concepts from five to four and slightly reconfigured some of the others.

[Image: the revised Elements of Language for writing]

In the document below you will notice how the Year 7 standards (where, as I suggested above, there might be some students who work to different standards in accordance with their starting point) have been drawn from the overarching threshold definition into coded objectives mapped out across the year’s units of work. Some of the information has obviously been simplified here for illustrative purposes.

[Image: Year 7 writing standards mapped as coded objectives across the year’s units of work]

The document below outlines how the codified objectives across units are then broken down even further into specific to-be-learned items across a teaching and learning cycle.

[Image: unit objectives broken down into specific to-be-learned items across a teaching and learning cycle]

I think this model – where the unifying idea of threshold concepts is used to inform a mastery approach in the classroom – has the potential to be a very powerful driver of learning, particularly as it will be wedded to systematic review by departments working collaboratively to better understand student learning.

In truth, there is a lot more to this assessment model than I have explained, especially around its implementation and wider application in other subject areas. I am, however, mindful of the length of this post, so if anyone wants to ask me a question in the comments below or tweet me a query, I would be more than happy to go into more detail.

Thanks for reading.