The Elements of Language: what we are using in place of levels


In my last post I blogged about our department’s plans for a new KS3 English curriculum, which we are looking to phase in gradually starting this coming September.

This curriculum change is part of a wider set of reforms, in part a response to the shifting national picture, but in the main the result of a desire to transform the reading and writing competences of the students at our school. Changing the texts and sequence in which they are studied is a necessary first step, but this alone will not lead to significant rises in attainment if that content is not well taught or if there are not robust methods of assessment to purposefully guide instruction or to meaningfully evaluate its impact.

And so to the subject of this post: the model of assessment that we will be using to drive our ambitious plans forwards. It is not perfect, but I am convinced that, despite its inevitable shortcomings, it will prove to be a much better method of assessment than the ambiguous and imprecise system of levels we are currently using. It will support learning, rather than distort it.

Formative and Summative

There are essentially two strands to this assessment model. One is concerned with measuring the progress of students over time (summative); the other, the more important, is a tool to support the class teacher in their ongoing understanding of student learning (formative). Michael Tidd is excellent on this distinction. The first supports the reporting process; the second supports the learners. Under this new assessment framework there will be one extended reading and writing assessment at the end of each year, which will take the form of an examination.

From these assessments students will be given an overall percentage for their performance across the two parts, which will then be compared against their starting point and their target for the end of the year. Regrettably, we think a baseline test is necessary. Whilst I sympathise with the valid arguments against retesting students at the beginning of year 7, we want to fully understand exactly what is behind the normalised numbers we will be receiving from our feeder schools. I appreciate this is not ideal, but for us, as I hope you will see, it is necessary: we want to know what our students can and cannot do so we can adapt our subsequent instruction accordingly.

The Elements of Language

The Elements of Language (see below) is the terminology that will allow us to articulate what we actually mean when we talk about effective reading and writing. Divided into ten elements – five for reading and five for writing, with corresponding assessment objectives – each element is embodied by a single word. So, for example, for writing there is AO2 Control and AO3 Style, whilst for reading AO6 Knowledge and AO7 Interpretation. Together The Elements of Language define our notion of literacy and provide a genuine vehicle for a cross-curricular focus on developing reading and writing – a shared language for talking about literacy and a practical means for understanding what it looks like.

[Image: The Elements of Language]

The Elements of Reading and Writing

The Elements of Language are divided into The Elements of Reading and The Elements of Writing. Each element has a corresponding Assessment Objective and four stages of progression (see below). Within each of these four stages there are three clearly defined statements about the knowledge and understanding required for mastery. As much as possible we have tried to avoid vague skills definitions, which are unhelpfully imprecise, particularly as a means of helping students to understand next steps and of guiding future instruction. This was more difficult to achieve with The Elements of Reading, which use some evaluative terminology in order to avoid an overwhelming number of specific statements.

[Image: The Elements of Reading and Writing, with four stages of progression]
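For anyone who thinks about these things in more structural terms, the shape of a single element can be sketched roughly as follows. This is a purely illustrative Python sketch: the statement text is a placeholder, not our actual descriptors, which sit in the grids pictured above and below.

```python
# Illustrative shape only: the statement text is a placeholder, not our real wording.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Stage:
    number: int            # 1 (earliest) to 4 (most advanced) stage of progression
    statements: list[str]  # three clearly defined knowledge/understanding statements

@dataclass
class Element:
    name: str                  # e.g. "Control"
    assessment_objective: str  # e.g. "AO2"
    strand: str                # "Reading" or "Writing"
    stages: list[Stage]        # the four stages of progression

control = Element(
    name="Control",
    assessment_objective="AO2",
    strand="Writing",
    stages=[
        Stage(number=n, statements=[f"placeholder statement {i} for stage {n}"
                                    for i in (1, 2, 3)])
        for n in (1, 2, 3, 4)
    ],
)
```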

Assessment that drives learning

The creation of these three distinct objectives within each stage of progression is deliberate. It is designed to enable every unit we teach to work on one specific aspect from each overarching objective (or element) and to carry out this coverage in a coherent, systematic and rigorous manner. Across each term (we will run termly units) teachers will be focusing on teaching ten specific areas for improvement, along with responding to the learners’ needs as required. A simple tracker like the one below will help the teacher to maintain a firm grasp of whether students are learning the different objectives or not. Students will receive a 0 if they fail to meet the objective criteria at all, a 1 if they partially meet them and a 2 if they fully meet them. These judgements will be made at the discretion of the individual teacher; they will not be tied to a specific piece of work.

[Image: termly objective tracker]

Because there are ten Elements of Language, and an ongoing monitoring system that makes a 0-2 judgement against each, students’ progress can easily be converted into percentages, both individually and per objective. We believe this highly visual and transparent approach will give the teacher a clearer and more specific set of information they can act upon to inform their planning and to respond to the needs of their learners. It will also allow the co-ordinator to see if there are patterns of underachievement and if intervention is required. The specificity of our statements makes the understanding of English and how to get better at it much clearer: the students either demonstrate an understanding and application of a particular element or they don’t. This information will be available to teachers across the curriculum, particularly in the essay-based subjects, as part of a shared planning model.
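To make the arithmetic concrete, here is a minimal sketch of how those 0-2 judgements could be turned into percentages, per student and per objective. It is purely illustrative: the student names, scores and thresholds are invented for the example.

```python
# Illustrative only: student names and scores are invented.
# One 0-2 judgement per element: 0 = not met, 1 = partially met, 2 = fully met.
scores = {
    "Student A": [2, 1, 2, 0, 1, 2, 2, 1, 0, 2],
    "Student B": [1, 1, 0, 0, 2, 1, 1, 0, 1, 1],
}

NUM_ELEMENTS = 10
MAX_MARK = 2

# Individual percentage: marks gained out of a possible 20.
for name, marks in scores.items():
    print(f"{name}: {100 * sum(marks) / (NUM_ELEMENTS * MAX_MARK):.0f}%")

# Per-objective percentage across the class, to spot patterns of underachievement.
for i in range(NUM_ELEMENTS):
    gained = sum(marks[i] for marks in scores.values())
    print(f"Objective {i + 1}: {100 * gained / (MAX_MARK * len(scores)):.0f}%")
```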

Interim and end of year assessment

Each term our students will complete one extended reading and one extended writing task, as well as a contextualised speaking assignment. Both extended tasks will be redrafted multiple times using the gallery critique model in an effort to establish a culture of excellence. Students’ work will receive regular, specific feedback; it will improve accordingly, along with their levels of motivation and self-perception. This work will not be graded. We are completely doing away with the notion of a half-term assessment or APP task, believing instead that there are better ways to assess ongoing knowledge and skill acquisition (see below) and that real progress takes a longer period of time to manifest – namely a year or perhaps even longer.

The Use of Multiple Choice Questions

I have already blogged here and here about the benefits of the multiple choice format, primarily as a means of informing teaching, but also as an effective method of managing the demands of marking – a real problem for so many, many teachers. As I have already outlined, the only extended writing that will be subject to specific assessment will be the end of year examination. Termly pieces will be produced, but they will not be judged in isolation. Rather, they will be used to evaluate whether a particular strand of an assessment objective has been met and whether re-teaching or consolidation is required. Learning will need to be shown to be secure, as opposed to being performed in a one-off piece.

Across a term, multiple-choice assessment will test the extent to which the focused elements have been learnt, or are on their way to being learnt. Some aspects of reading and writing are easier to test using this format than others. The Elements of Language that perhaps lend themselves best to multiple choice are Vocabulary (AO1), Control (AO2), Style (AO3), Knowledge (AO6) and Interpretation (AO7). Just to be clear, I am not suggesting these tests in and of themselves prove learning has occurred. They don’t. They provide an indication of the learning process and, most importantly, they provide a reliable guide for future instruction. Every class will sit these assessments and results will be used by individual teachers, as well as across the department, to inform joint planning.
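Purely as an illustration of how such results might feed back into planning, the sketch below tags each question with the element it tests and flags elements that may need revisiting. The questions, answers and the 70% threshold are all invented for the example; they are not part of our actual tests.

```python
# Illustrative only: question tags, answers and the 70% threshold are invented.
from collections import defaultdict

# question id -> (element tested, answered correctly?)
answers = {
    "Q1": ("AO1 Vocabulary", True),
    "Q2": ("AO1 Vocabulary", False),
    "Q3": ("AO2 Control", True),
    "Q4": ("AO6 Knowledge", True),
    "Q5": ("AO6 Knowledge", False),
}

asked = defaultdict(int)
correct = defaultdict(int)
for element, is_correct in answers.values():
    asked[element] += 1
    correct[element] += int(is_correct)

for element in asked:
    pct = 100 * correct[element] / asked[element]
    flag = " <- consider re-teaching" if pct < 70 else ""
    print(f"{element}: {pct:.0f}% correct{flag}")
```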

Limited, inconsistent, competent, good and exceptional

The end of year assessments (along with the year 7 baseline test) will be marked using our new KS3 mark scheme (see example for reading below). This mark scheme is broken into five different standards of performance, which we have termed ‘limited’, ‘inconsistent’, ‘competent’, ‘good’ and ‘exceptional’. These different standards – as much as humanly possible – match the four incremental phases of development within the separate Elements of Reading and Writing. I am aware that this system runs the risk of the ‘adverb problem’ as highlighted by Daisy Christodoulou here. I have wrestled with this conundrum for a while now: what is the best way to effectively judge a holistic piece of extended writing where different aspects (or elements) of English are synthesised? This mark scheme is my attempt at a response.

[Image: KS3 reading mark scheme]

Whilst I am not completely sure that it fully resolves the dilemma, I hope the way the standards are articulated at each level, and the relative specificity of the individual objectives, will make the marking clearer and more reliable. Obviously, robust standardisation and moderation procedures will also be necessary, as will exemplification at each standard. And this is exactly what we intend to do: exemplify what we mean by ‘exceptional’, ‘good’ and so on. To do this we plan to take the most accomplished student in the year above and use their exam response to set the standard for ‘exceptional’, which we can then rework downwards for ‘good’, ‘competent’, ‘inconsistent’ and ‘limited’. When a better response is produced this will become the new ‘exceptional’, thus ensuring the bar for what we expect from our students is always rising.

As with the termly tracker, at each of the five standards there are 2 marks available: 0 for not met, 1 for partially met and 2 for fully met. Again, as with the ongoing monitoring, the end of year assessments will be converted into percentages by combining the raw reading and writing marks. These final percentages will produce a transparent measure that will show the extent to which progress has or has not been made. At this stage we are not fully decided upon what would represent a realistic, yet challenging, percentage target for the year. I expect it will be something like 10-15%, though this will most probably become clearer once we have implemented the assessment model and refined its workings.
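As a worked illustration of that conversion, the sketch below assumes, purely for the sake of the example, one 0-2 judgement per standard on each paper and an invented baseline figure; the real weighting may well differ.

```python
# Illustrative only: the marks and baseline below are invented.
# Assumes each paper receives one 0-2 judgement at each of the five standards.
reading_marks = [2, 2, 1, 1, 0]   # out of 10
writing_marks = [2, 1, 1, 0, 0]   # out of 10

raw_total = sum(reading_marks) + sum(writing_marks)   # out of 20
percentage = 100 * raw_total / 20

baseline = 35                      # percentage from the year 7 baseline test
progress = percentage - baseline   # judged against a target of roughly 10-15 points

print(f"End of year: {percentage:.0f}% (progress since baseline: {progress:+.0f} points)")
```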

A note on starting points

‘Exceptional’ is, of course, what we would like all of our students to be by the end of KS3. If they achieved the criteria that we have laid out then we truly would have instigated a step change. Yet we are realistic enough to know that this will not be possible for all, at least in the short term, perhaps even ever. To this end we want to make it clear that the minimum we expect our students to be is ‘good’ readers and writers, particularly those who come in at or around the normalised mark of 100 – what is deemed to constitute ‘secondary ready’. In our eyes this pretty much equates to our scale of ‘competent’. And this is why all those who come into our school as ‘secondary ready’ will follow the second assessment pathway marked ‘competent’. Those below this will follow the greyed-out area labelled ‘inconsistent’. There are no criteria for ‘limited’, since by its very definition ‘limited’ implies a considerable lack of requisite knowledge and understanding. We don’t need to define this.

And that is pretty much our new assessment model. It is still in draft format, so I’m sure there will be some glaring errors, typos, omissions and the like. We will also be making amendments and tweaks over the coming months.

We feel that we have come up with a model of assessment that is right for the students of our school and one that will actually help drive improvement, not get in the way of it.

I hope it is of use in some way elsewhere.
