Principles of Great Assessment #1: Assessment Design


This is the first in a short series of posts on our school’s emerging principles of assessment, which are split into three categories – principles of assessment design; principles of ethics and fairness; and principles for improving reliability and validity. My hope in sharing these principles is to help others develop greater assessment literacy, and to gain constructive feedback on our work so that we can improve and refine our model in the future.

In putting together these assessment principles and an accompanying CPD programme aimed at middle leaders, I have drawn heavily on a number of writers and speakers on assessment, notably Dylan Wiliam, Daniel Koretz, Daisy Christodoulou, Rob Coe and Stuart Kime. All of them have a great ability to convey difficult concepts (I only got a C grade in maths, after all) in a clear, accessible and, most importantly, practical way. I would very much recommend following up their work to deepen your understanding of what truly makes great assessment.

1. Align assessments with the curriculum


In many respects, this first principle seems pretty obvious. I doubt many teachers deliberately set out to create and administer assessments that are not aligned with their curriculum. And yet, for a myriad of reasons, this does seem to happen, with the result that students sit assessments that do not directly sample the content and skills of the intended curriculum. In these cases the results achieved, and any inferences drawn from them, are largely meaningless. If the assessment is not assessing the things that were supposed to have been taught, it is almost certainly a waste of time – not only for the students sitting the test, but for the teachers marking it as well.

Several factors can affect the extent to which an assessment is aligned with the curriculum and are important considerations for those responsible for setting assessments. The first is the issue of accountability. Where accountability is unreasonably high and a culture of fear exists, those writing assessments might be tempted to narrow down the focus to cover the ‘most important’ or ‘most visible’ knowledge and skills that drive that accountability. In such cases, assessment ceases to provide any useful inferences about knowledge and understanding.

Assessment can also become detached from the curriculum when that curriculum is not delineated clearly enough from the outset. If there is not a coherent, well-sequenced articulation of the knowledge and skills that students are to learn, then any assessment will always be misaligned, however hard someone tries to make it valid. A clear, well-structured and shared understanding of the intended curriculum is vital for the enacted curriculum to be successful, and for any assessment of individual and collective attainment to be purposeful.

A final explanation for the divorce of curriculum from assessment is the knowledge and understanding of the person writing the assessment in the first place. To write an assessment that can produce valid inferences requires a solid understanding of the curriculum aims, as well as of the most valid and reliable means of assessing them. Speaking for myself, I know that I have got a lot better at writing assessments that are properly aligned with the curriculum the more I have understood the links between the two and how to go about bridging them.

2. Define the purpose of an assessment first

Depending on how you view it, there are essentially two main functions of assessment. The first, and probably the most important, is as a formative tool to support teaching and learning in the classroom. An example might be a teacher setting a diagnostic test at the beginning of a new unit to find out what students already know so that teaching can be adapted accordingly. Formative assessment, or responsive teaching, is an integral part of teaching and learning and should be used to identify potential gaps in understanding, or misconceptions, that can subsequently be addressed.

The second main function of assessment is summative. Whereas examination bodies certify student achievement, in the school context the functions of summative assessment might include assigning students to different groupings based upon perceived attainment, providing inferences to support the reporting of progress home to parents, or the identification of areas of underperformance in need of further support. Dylan Wiliam separates out this accountability function from the summative process, calling it the ‘evaluative’ purpose.

Whether the assessment is designed to support summative or formative inferences is not really the point. What matters is that the purpose or function of the assessment, and the inferences it is intended to produce, are made clear to and understood by everyone involved. In this sense, the function of the assessment determines its form. A class test intended to diagnose student understanding of recently taught material will likely look very different from a larger-scale summative assessment designed to draw inferences about whether knowledge and skills have been learnt over a longer period of time. Form therefore follows function.

3. Include items that test understanding across the construct continuum

Many of us think about assessment in the reductive terms of specific questions or units, as if performance on question 1 of Paper 2 were actually a thing worthy of study in and of itself. Assessment should be about approximating student competence in the constructs of the curriculum. A construct can be defined as the abstract conception of a trait or characteristic, such as mathematical or reading ability. Direct constructs, like height and weight, are tangible physical traits measured using verifiable methods and stated units of measurement. Unfortunately for us teachers, most educational assessment assesses indirect constructs that cannot be measured in such easily understood units. Instead, they are estimated from responses to questions that we think indicate competency, and that stand in for the thing we cannot measure directly.

Within many indirect constructs, such as writing or reading ability, there is likely to be a continuum of achievement. So within the construct of reading, for instance, some students will be able to read with greater fluency and/or understanding than others. A good summative assessment therefore needs to differentiate between these differing levels of performance and, through the questions set, define what it means to be at the top, middle or bottom of that continuum. In this light, one of the functions of assessment has to be estimating the position of learners on that continuum. We need to know this to evaluate the relative impact or efficacy of our curricula, and to understand how our students are progressing within them.


4. Include items that reflect the types of construct knowledge

Some of the assessments we use do not adequately reflect the range of knowledge and skills of the subjects they are assessing. Perhaps the format of terminal examinations has had too much negative influence on the way we think about our subjects and design assessments for them. In my first few years of teaching, I experienced considerable cognitive dissonance between my understanding of English and the way it was conceived of within the profession. I knew my own education was based on reading lots of books, and then lots more books about those books, but everything I was confronted with as a new teacher – schemes of work, the literacy strategy, the national curriculum, exam papers – led me to believe that I should really be thinking of English in terms of skills like inference, deduction and analysis.

English is certainly not alone here, with history, geography and religious studies all suffering from a similar identity crisis. This widespread misconception of what constitutes expertise and how that expertise is gained probably explains, at least in part, why so many schools have been unable to envisage a viable alternative to levels. Like me, many of the people responsible for creating something new have themselves been infected by errors from the past and have found it difficult to see clearly that one of the big problems with levels was the way they misrepresented the very nature of subjects. And if you don’t fully understand or appreciate what progression looks like in your subject, any assessment you design will be flawed.

Daisy Christodoulou’s Making Good Progress is a helpful corrective, in particular her deliberate practice model of skill acquisition, which is extremely useful in explaining the manner in which different types of declarative and procedural knowledge can go into perfecting a more complex overarching skill. Similarly, Michael Fordham’s many posts on substantive and disciplinary knowledge, and how these might be mapped on to a history progression model are both interesting and instructive. Kris Boulton’s series of posts (inspired by some of Michael’s previous thinking) are also well worth a look. They consider the extent to which different subjects contain more substantive or disciplinary knowledge, and are useful points of reference for those seeking to understand how best to conceive of their subject and, in turn, design assessments that assess the range of underlying forms of knowledge.


5. Use the most appropriate format for the purpose of the assessment

The format of an assessment should be determined by its purpose. Typically, subjects are associated with certain formats: in English, essay tasks are quite common, whilst in maths and science, short exercises with right and wrong answers are more the norm. But as Dylan Wiliam suggests, although ‘it is common for different kinds of approaches to be associated with different subjects…there is no reason why this should be so.’ Wiliam draws a useful distinction between two modes of assessment: a marks for style approach (English, history, PE, art, etc.), where students gain marks for how well they complete a task, and a degree of difficulty approach (maths, science), where students gain marks for how far they progress in a task. It is entirely possible for subjects like English to employ degree of difficulty assessment tasks, such as multiple choice questions, and for maths to set marks for style assessments, as this example of comparative judgement in maths clearly demonstrates.


In most cases, the purpose of assessment in the classroom will be formative, and so designed to facilitate improvements to student learning. In such instances, where the final skill has not yet been perfected but is still very much a work in progress, it is unlikely that the optimal interim assessment format will be the same as the final assessment format. For example, a teacher who sets out to teach her students to construct well-written, logical and well-supported essays by the end of the year is unlikely to set essays every time she wants to infer her students’ progress towards that end goal. Instead, she will probably set short comprehension questions to check their understanding of the content that will go into the essay, or administer tests on their ability to deploy sequencing vocabulary effectively. In each of these cases, the assessment reflects the inferences about student understanding the teacher is trying to draw, without confusing or conflating them with other things.

In the next post, I will outline our principles of assessment in relation to ethics and fairness. As I have repeatedly made clear, my intention is to help contribute towards a better understanding of assessment within the profession. I welcome anyone who wants to comment on our principles, or to critique anything that I have written, since this will help me to get a better understanding of assessment myself, and make sure the assessments that we ask our students to sit are as purposeful as possible.

Thanks for reading.

 

 

Principles of Great Assessment: Increasing the Signal and Reducing the Noise


After the government abolished National Curriculum levels, there was a great deal of initial rejoicing from both primary and secondary teachers about the death of a flawed system of assessment. Many, including myself, delighted in the freedom afforded to schools to design their own assessment systems anew. At the time I had already been working on a model of assessment for KS3 English – the Elements of Assessment – and believed that the new freedoms were a positive step in improving the use of assessment in schools.

Whilst I still think that the decision to abolish levels was correct, I am no longer quite so sure about the manner and timing in which they were removed. Since picking up responsibility for assessment across the school, I have come to realise just how damaging it was for schools to have to invent their own alternatives to levels without anywhere near enough assessment expertise to do so well. Inevitably, many schools simply recreated levels under a different name, or retreated into the misguided safety of the flight path approach.

I would like to think that our current KS3 assessment model, the Elements of Expectation, has the potential to be a genuine improvement on National Curriculum levels, supporting learning and providing reliable summative feedback on student progress at sensible points in the calendar. Even though it is in its third year, however, it is still not quite right. One of the things that I think is holding us back is our lack of assessment literacy. I am probably one of the more informed staff members on assessment, but most of what I know has been self-taught from reading some books and hearing a few people talk.

This year, in an effort to do something about this situation and to finally get our KS3 model closer to what we want, we have run some extensive professional development on assessment. Originally, I had intended to send some colleagues to Evidence Based Education’s inaugural Assessment Academy. It looks superb and represents an excellent opportunity to learn much more about assessment. But when it became clear budget constraints would make this difficult, we decided to set up and run our own in-house version: not as good (obviously) and inevitably rough around the edges, but good enough, I think, for our KS3 Co-ordinators and heads of subjects to develop the expertise they need to improve their use of assessment with our students.

The CPD is iterative and runs throughout the course of the year. So far, we have established a set of assessment principles that we will use to guide the way we design, administer and interpret assessments in the future. In the main, these principles apply to the use of medium to large-scale assessments, where the inferences drawn will be used to inform relatively big decisions, such as proposed intervention, student groupings, predictions, reporting progress, etc. Assessment as a learning event is pretty well understood by most of our teachers and is already a feature of many of our classrooms, so our focus is more on improving the validity and reliability of our summative inferences.

I thought it might be useful and timely to share these principles over a series of posts, especially as a lot of people still seem to be struggling, like us, to create something better and more sustainable than levels. The release of Daisy Christodoulou’s book Making Good Progress has undoubtedly been a great and timely help, and I intend it to provide some impetus to our sessions going forward, as we look to turn some of the theory we covered before Christmas into something practical and useful. This excellent little resource from Evidence Based Education is an indication of some of the fantastic work out there on improving assessment literacy. I hope I can add a little more in my next few posts.

If we are going to take the time and the trouble to get our students to sit assessments, then we want to make sure that the information they yield is as reliable and valid as possible, and that we don’t ask our assessments to do too much. The first in my series of posts will be on our principles of assessment design, with the other two on ethics and fairness and then, finally, reliability and validity.

All constructive feedback welcome!

The Elements of Language: what we are using in place of levels


In my last post I blogged about our department’s plans for a new KS3 English curriculum, which we are looking to phase in gradually starting this coming September.

This curriculum change is part of a wider set of reforms, in part a response to the shifting national picture, but in the main the result of a desire to transform the reading and writing competences of the students at our school. Changing the texts and sequence in which they are studied is a necessary first step, but this alone will not lead to significant rises in attainment if that content is not well taught or if there are not robust methods of assessment to purposefully guide instruction or to meaningfully evaluate its impact.

And so to the subject of this post: the nature of the model of assessment that we will be using to drive our ambitious plans forwards. It is not perfect, but what I am convinced about is that despite its inevitable shortcomings, it will prove to be a much better method of assessment than the ambiguous and imprecise system of levels that we are currently using. It will support learning, rather than distort it.

Formative and Summative

There are essentially two strands to this assessment model. One is concerned with measuring students’ progress over time (summative); the other, the more important, is a tool to support the class teacher in their ongoing understanding of student learning (formative). Michael Tidd is excellent on this distinction. The first supports the reporting process; the second supports the learners. Under this new assessment framework there will be one extended reading and writing assessment at the end of each year, which will take the form of an examination.

From these assessments students will be given an overall percentage for their performance over the two parts, which will then be compared against their starting point and their target for the end of the year. Regrettably, we think a baseline test is necessary. Whilst I sympathise with the valid arguments about retesting students at the beginning of year 7, we want to fully understand exactly what is behind the normalised numbers we will be receiving from our feeder schools. I appreciate this is not ideal, but for us, as I hope you will see, it is necessary: we want to know what our students can and cannot do so we can adapt our subsequent instruction accordingly.

The Elements of Language

The Elements of Language (see below) is the terminology that will allow us to articulate what we actually mean when we talk about effective reading and writing. Divided into 10 elements – five for reading and five for writing, with corresponding assessment objectives – each element is embodied by a single word. So, for example, for writing there are AO2 Control and AO3 Style, whilst for reading there are AO6 Knowledge and AO7 Interpretation. Together the Elements of Language define our notion of literacy and provide a genuine vehicle for a cross-curricular focus on developing reading and writing – a shared language for talking about literacy and a practical means for understanding what it looks like.

[Image: The Elements of Language]

The Elements of Reading and Writing

The Elements of Language are divided into the Elements of Reading and the Elements of Writing. Each element has a corresponding Assessment Objective and four stages of progression (see below). Within these four stages there are three clearly defined statements about the knowledge and understanding required for mastery. As much as possible we have tried to avoid vague skills definitions, which are unhelpfully imprecise, particularly as a means of helping students to understand next steps and of guiding future instruction. This was more difficult to achieve with the Elements of Reading, which use some evaluative terminology in order to avoid an overwhelming number of specific statements.

[Image: the four stages of progression within each element]

Assessment that drives learning

The creation of these three distinct objectives within each stage of progression is deliberate. It is designed to enable every unit we teach to work on one specific aspect of each overarching objective (or element), and to carry out this coverage in a coherent, systematic and rigorous manner. Across each term (we will run termly units) teachers will focus on teaching ten specific areas for improvement, along with responding to learners’ needs as required. A simple tracker like the one below will help the teacher maintain a firm grasp of whether students are learning the different objectives or not. Students will receive a 0 if they fail to meet the objective criteria at all, a 1 if they partially meet them and a 2 if they fully meet them. These judgements will be made at the discretion of the individual teacher; they will not be tied to a specific piece of work.

[Image: example termly tracker]

Because there are ten Elements of Language, and an ongoing monitoring system that makes a 0–2 judgement, students’ progress can easily be converted into percentages, both individually and per objective. We believe this highly visual and transparent approach will give the teacher a clearer and more specific set of information to act upon, to inform their planning and to respond to the needs of their learners. It will also allow the co-ordinator to see if there are patterns of underachievement and if intervention is required. The specificity of our statements makes the understanding of English, and how to get better at it, much clearer: students either demonstrate an understanding and application of a particular element or they don’t. This information will be available to teachers across the curriculum, particularly in the essay-based subjects, as part of a shared planning model.
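As a rough sketch of the arithmetic involved – the student names, objective labels and marks below are entirely hypothetical, not our actual tracker data – the 0–2 judgements convert to percentages like this:

```python
# Hypothetical sketch of the tracker arithmetic: each student receives a
# 0, 1 or 2 against each objective; percentages are then calculated both
# per student and per objective. All names and marks are illustrative.

def tracker_percentages(scores):
    """scores maps student -> {objective: 0 | 1 | 2}.

    Returns (per_student, per_objective) percentage dictionaries.
    """
    # Each objective carries a maximum of 2 marks.
    per_student = {
        student: 100 * sum(marks.values()) / (2 * len(marks))
        for student, marks in scores.items()
    }
    objectives = {obj for marks in scores.values() for obj in marks}
    per_objective = {
        obj: 100 * sum(marks.get(obj, 0) for marks in scores.values())
             / (2 * len(scores))
        for obj in objectives
    }
    return per_student, per_objective

scores = {
    "Alia": {"AO1 Vocabulary": 2, "AO2 Control": 1, "AO6 Knowledge": 2},
    "Ben":  {"AO1 Vocabulary": 1, "AO2 Control": 0, "AO6 Knowledge": 1},
}
per_student, per_objective = tracker_percentages(scores)
print(per_student)    # Alia has 5 of 6 available marks; Ben has 2 of 6
print(per_objective)  # AO2 Control: 1 of 4 available marks across the class
```

The per-objective figures are what would surface the patterns of underachievement mentioned above, while the per-student figures feed the individual progress picture.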

Interim and end of year assessment

Each term our students will complete one extended reading and one extended writing task, as well as a contextualised speaking assignment. Both extended writing tasks will be redrafted multiple times using the gallery critique model in an effort to establish a culture of excellence. Students’ work will receive regular, specific feedback; it will improve accordingly, along with their levels of motivation and self-perception. This work will not be graded. We are completely doing away with the notion of a half-term assessment or APP task, believing instead that there are better ways to assess ongoing knowledge and skill acquisition (see below), and that real progress takes a longer period of time to manifest – namely a year, or perhaps even longer.

The Use of Multiple Choice Questions

I have already blogged here and here about the benefits of the multiple choice format, primarily as a means of informing teaching, but also as an effective method of managing the demands of marking – a real problem for so many teachers. As I have already outlined, the only extended writing subject to specific assessment will be the end of year examination. Termly pieces will be produced but not judged in isolation. Rather, they will be used to evaluate whether a particular strand of an assessment objective has been met and whether re-teaching or consolidation is required. Learning will need to be shown to be secure, as opposed to being performed in a one-off piece.

Across a term, multiple choice assessments will test the extent to which the focused elements have been learnt, or are on their way to being learnt. Some aspects of reading and writing are easier to test in this format than others. The Elements of Language that perhaps lend themselves best to multiple choice are Vocabulary (AO1), Control (AO2), Style (AO3), Knowledge (AO6) and Interpretation (AO7). Just to be clear, I am not suggesting these tests in and of themselves prove learning has occurred. They don’t. They provide an indication of the learning process and, most importantly, a reliable guide for future instruction. Every class will sit these assessments and the results will be used by individual teachers, as well as across the department to inform joint planning.

Limited, inconsistent, competent, good and exceptional

The end of year assessments (along with the year 7 baseline test) will be marked using our new KS3 mark scheme (see example for reading below). This mark scheme is broken into five different standards of performance, which we have termed ‘limited’, ‘inconsistent’, ‘competent’, ‘good’ and ‘exceptional’. These different standards – as much as humanly possible – match the four incremental phases of development within the separate Elements of Reading and Writing. I am aware that this system runs the risk of the ‘adverb problem’ as highlighted by Daisy Christodoulou here. I have wrestled with this conundrum for a while now: what is the best way to effectively judge a holistic piece of extended writing where different aspects (or elements) of English are synthesised? This mark scheme is my attempt at a response.

[Image: example KS3 reading mark scheme]

Whilst I am not completely sure that it fully resolves the dilemma, I hope the way the standards are articulated at each level, and the relative specificity of the individual objectives, will make the marking clearer and more reliable. Obviously, robust standardisation and moderation procedures will also be necessary, as will exemplification at each standard. And this is exactly what we intend to do: exemplify what we mean by ‘exceptional’, ‘good’ and so on. To do this we plan to take the most accomplished student in the year above and use their exam response to set the standard for ‘exceptional’, which we can then rework downwards for ‘good’, ‘competent’, ‘inconsistent’ and ‘limited’. When a better response is produced, this will become the new ‘exceptional’, thus ensuring the bar for what we expect from our students is always rising.

As with the termly tracker, at each of the five stages there are up to 2 marks available: 0 for not met, 1 for partially met and 2 for fully met. Again, as with the ongoing monitoring, the end of year assessments will be converted into percentages by combining the raw reading and writing marks. These final percentages will produce a transparent measure showing the extent to which progress has or has not been made. At this stage we are not fully decided upon what would represent a realistic, yet challenging, percentage target for the year. I expect it will be something like 10–15%, though this will most probably become clearer once we have implemented the assessment model and refined its workings.
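To illustrate the conversion described above – the raw marks, paper maxima and 10-point target here are assumed purely for the example, not our agreed figures:

```python
# Illustrative sketch: combining raw reading and writing marks into a single
# overall percentage, then measuring progress against a baseline.
# All numbers below are assumed for the example.

def overall_percentage(reading, writing, max_reading, max_writing):
    """Combine raw reading and writing marks into one percentage."""
    return 100 * (reading + writing) / (max_reading + max_writing)

baseline = overall_percentage(14, 12, 20, 20)      # year 7 baseline test: 65.0
end_of_year = overall_percentage(17, 16, 20, 20)   # end of year exam: 82.5
progress = end_of_year - baseline                  # 17.5 percentage points

# A hypothetical annual target of 10 percentage points
met_target = progress >= 10
print(f"{baseline}% -> {end_of_year}%: progress {progress}, met = {met_target}")
```

A simple difference of percentages like this is the most transparent option, which is presumably why the model favours it over anything weighted or scaled.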

A note on starting points

‘Exceptional’ is, of course, what we would like all of our students to be by the end of KS3. If they achieved the criteria that we have laid out, then we truly would have instigated a step change. Yet we are realistic enough to know that this will not be possible for all, at least in the short term, and perhaps ever. To this end we want to make it clear that the minimum we expect of our students is that they become ‘good’ readers and writers, particularly those who come in at or around the normalised mark of 100 – what is deemed to constitute ‘secondary ready’. In our eyes this pretty much equates to our scale of ‘competent’, which is why all those who come into our school at secondary ready will follow the second assessment pathway, marked ‘competent’. Those below this will follow the greyed-out area labelled ‘inconsistent’. There are no criteria for ‘limited’, since by its very definition ‘limited’ implies a considerable lack of requisite knowledge and understanding. We don’t need to define this.

And that is pretty much our new assessment model. It is still in draft format, so I’m sure there will be some glaring errors, typos, omissions and the like. We will also be making amendments and tweaks over the coming months.

We feel that we have come up with a model of assessment that is right for the students of our school and one that will actually help drive improvement, not get in the way of it.

I hope it is of use in some way elsewhere.

Now is the time for English curriculum redesign


This post is about the draft of our new KS3 English curriculum and the rationale behind its construction. My next post will explain how this curriculum fits in with the method of assessment we have devised to replace the largely ineffectual SATS levels. It will effectively form the third part of my series about the use of multiple-choice questions in English, and will describe how we intend to use the format within a holistic system of assessment.

Like others, for a long while I have wanted to make a step change to KS3, knowing that this represents the best way to raise attainment. Intervention work, early entry or the deployment of the most experienced teachers with exam classes are all very well and are often necessary means of helping students achieve, but they often lead to artificial, short-term gains and in many cases effectively paper over the underlying issues. Too often the continual and disproportionate demand of examination success leaves little resource to focus on the root cause of student underachievement. Until now, that is: national changes to exam structure and assessment measures have made it wise for us to take the time to make our key stage three curricula fit for purpose.

Despite what sometimes feels like an overwhelming amount of change and uncertainty, now really does feel like an exciting and perhaps even defining moment for the future direction of the subject: a chance to shape, particularly at KS3, what we teach our students, along with the freedom to assess that learning in the manner we see fit. In this regard, we can act as professionals who understand our subject and the students that we teach. I intend to take advantage of this opportunity-cum-imperative to create an ambitious curriculum, one that will inspire our students and provide them with the knowledge, skills and cultural understanding necessary to achieve success in their lives – up to and beyond their examinations.

This is not simply about choosing a bunch of hard books – though, as you will see below, the texts chosen are certainly challenging – but more a matter of doing what is right for our students, raising expectations through the roof and, as much as humanly possible, creating a level playing field with those who enjoy more privilege. As I suspect is the case elsewhere, at our school the best English students – the ones who have a ‘natural’ ability to write fluently and who appreciate the underlying concepts and intentions in texts – are the ones who read most widely and deeply. Our most able students are thus often the ones who have got there in spite of their schooling, not because of it, and for whom reading challenging books for pleasure is normalised within the home environment. This has to become the case for all our students.

I am clearly not alone in believing that now really is an exciting opportunity for curriculum redesign. Only this morning Alex Quigley brilliantly explained why 2014 holds many reasons to be educationally cheerful. Indeed, in recent weeks and months I have read and been inspired by a number of posts exploring different organising principles for new KS3 English curricula, including Alex Quigley’s ‘universal language’ of the story, Joe Kirby’s model of interleaving and revisiting cultural texts, and David Didau’s thematic and sequential curriculum that stretches back and forward across time. All of these (and more) have helped me to devise what I believe is an inspiring and rigorous curriculum.

Here, then, is the draft version of our new KS3 curriculum.

[Image: draft KS3 curriculum overview]

Of course, designing a new curriculum is only part of what is needed to raise attainment. Making sure that the chosen texts are taught effectively, and that colleagues are well supported and feel confident enough to teach them well, is equally, if not more, important. It would be naïve not to expect some considerable anxiety around teaching works like The Odyssey or spending a term on sonnets with year 7. This is why we intend to invest heavily in providing supportive wider reading material and creating opportunities for joint planning sessions, in a similar vein to the lesson study model.

We have not arrived at this curriculum overnight. Neither do we expect to begin teaching all of these texts from next September. Over the coming weeks we will agree upon the best way forward, making sure that what we implement is manageable and that it really does lead to a step change. I should perhaps make it clear that a lot of the structures and systems that will facilitate the delivery of our curriculum are already in place from previous initiatives. It is also worth noting that we have a supportive headteacher and work in a school where creative and bold solutions to problems are encouraged. I realise that this is not the case for all.

To help make some of the nuances of the draft a little clearer, I have summarised some of the thinking behind the choices taken and provided further explanations of the supporting structures in place.

The Reading lounge

One of the main resources our department has at its disposal is a Reading Lounge, a bright, funky space reserved solely for English lessons. Whilst we would prefer a vibrant library (space is at a premium), having the Reading Lounge at the bottom of the English corridor enables us to ensure time is dedicated to reading for pleasure. Once a cycle, year 7 and year 8 pupils will read modern stories that are in some way in dialogue with the texts in the taught curriculum. This approach will enable our students to get the best of both worlds: exposure to important, brilliantly written texts of cultural value, and access to exciting contemporary fiction from authors they will already be familiar with. The Reading Lounge texts are in bold italics, and in year 9 these choices give way to books to take home and read.

Unitisation

It has become increasingly clear to me that the idea of having a new topic or focus each half term is flawed. For many years this had been our approach. We would try to cram a lot into each six- or seven-week block and then rush through an assessment in the last couple of days of term, the very time when students were least able to produce their best work. We would then dutifully mark and level these assessments and enter the results on a spreadsheet, where they would remain until report time. A monumental waste of time!

Since September we have been experimenting with termly units in years 7 and 8. Although in its infancy, this ‘less is more’ approach appears to be helping deepen our students’ understanding, as well as providing teachers with the flexibility to respond to their students’ needs. Without the pressure of constantly having to move on to the next unit or get the assessment done in time, teachers are better able to respond to the learning needs of their classes and reteach material if necessary.

This past year we have also placed a much heavier emphasis on the process of redrafting. Influenced by some of the ideas in Ron Berger’s excellent ‘An Ethic of Excellence’, our curriculum will give our students the time and space necessary to produce their very best work and to be inspired by their own excellence. How redrafting fits into our wider system of assessment will be addressed in my next post.

Setting

This year, for the first time, we have started to set from year 7. Whilst I understand the arguments around mixed ability and, in principle, subscribe to the idealism of its intentions, in practice it is no longer tenable given the growing chasm in the ability profile of our incoming year 7 cohort. We were finding that at KS3 the most able were not consistently being stretched and the least able were not being sufficiently supported. On the curriculum draft the different numbers in brackets signify our four new sets, which are spread across three bands. As you can see, in some cases we feel it is appropriate for students to study different texts, though we believe that all will be challenged by what we have chosen.

Cultural capital

Whilst this term is bandied around a lot, for me it perfectly captures what I have experienced in my time as a teacher. I really believe that a lack of cultural capital is one of the most significant reasons why our students do not excel in English in the way they do in Maths and Science. I also firmly believe that cultural capital has a value beyond economic terms (see the comments at the end of Joe Kirby’s recent post on how to plan a knowledge unit for a debate around this issue).

The texts and periods we have chosen will provide a solid understanding of the journey of English literature and the development of our present identity. The selection is far from exhaustive, and we are painfully aware that in order to achieve other aims, such as redrafting and an emphasis on explicit grammar teaching, we have had to sacrifice a great deal. Some of this material, such as Frankenstein and American fiction, will resurface in year 10. We have also tried to provide some balance in terms of race and gender. I’m sure for some it will still seem too elitist.

Whilst our students achieve very good English results, they are far from being expert writers and readers, and they could do much better. They are well supported in year 10 and particularly year 11, and make very good progress because they work hard and the exam is relatively formulaic. Many would flounder if the exam asked questions in a different format, or if it relied upon responses to more challenging material. Many of our students also struggle to make the transition to A level, and almost all find it incredibly difficult to deal with unseen material. Even our brightest students – those who apply for Oxford, Cambridge or medical degrees – are often let down in their applications by their inability to express themselves coherently in written form.

Our new curriculum is therefore the first step towards developing more articulate, genuinely independent writers and thinkers. We want our students not to be disadvantaged by their background, and to enjoy as much chance of success as those who attend the very best schools in the country.

This will not happen overnight.