Visual Learning: using graphics to teach complex literary terms


I have always tried to pay attention to the way that I present material to my students. Don’t get me wrong, I am not interested in style over substance, and I certainly don’t spend hours labouring away over every resource that I use in class. If there is a quicker, equally effective way of teaching something, then I will take it. I’m not a masochist.

Most of my resources now are paper-copy quizzes for retrieval practice and elaboration, many of which have proved very effective at A level. I try to use the board as much as possible, whether to post the all-important learning objective, model writing, record the unfolding of the lesson to ease the pressure on working memories, or to explain tricky ideas or concepts more fully, often with an accompanying visual.

The problem is that I am a terrible artist. Unlike the wonderfully talented Oliver Caviglioli, whose illustrations and generosity are first class, my drawings are sad and pathetic. I would love to be a great illustrator, but I can barely write legibly, let alone draw anything beyond a stick man! I remember a couple of years ago I drew a picture of a horse for a poetry lesson, and the final product looked more like a pregnant camel with IBS than the thoroughbred I’d intended.

Fortunately, in the age of the Internet and PowerPoint (sorry, Jo), I have some pretty decent tools at my disposal to help me make up for my artistic deficiency. As I have become increasingly aware of the power of combining words and images to boost student learning, I have spent more of my time thinking about how images, in particular graphical representations, can be used to help with my teaching, such as in my explanations of complex literary concepts.

One of these troublesome concepts that seems to crop up whenever I teach Christina Rossetti’s ‘Goblin Market’ is allegory. ‘Goblin Market’ is a narrative poem with a familiar story: a young girl tempted into sin, her subsequent loss of innocence, and her salvation through sacrifice. Most people reading it will recognise the allegory of the Fall of Man. There are one or two differences in the poem – there is no Adam, only horny and grotesque Goblin men, and the saviour is a woman, not the Son of God – but the overarching parallels are pretty clear.

The problem is that when it comes to explaining the concept of allegory in and of itself – in other words, outside the context of a specific example – students really struggle. No matter how hard I try to explain allegory clearly, with examples and analogies aplenty, they just don’t seem to fully get it. Now, you might be tempted to say that I should look to hone my explanation. Trust me on this one: I have honed it to within an inch of its life. There is simply no room for any more honing.

So, this year I thought I’d take a different tack and invest a bit of time producing a graphical representation to sit alongside my verbal explanation. I don’t have any hard evidence to show that what I have done has been any more successful than usual. It seems to have made a difference, with more students being able to explain the concept than before, but then again this may well be a case of confirmation bias. Or brighter students. Or chance.

As you can see below, the slide I have used in the past to explain allegory is pretty contemptible – an overarching definition which I expand and exemplify, with a bulleted breakdown of the two main types, political allegory and the allegory of ideas. There are even a couple of token images thrown in, which I am not really convinced add any real value.

[Image: original slide explaining allegory]

My next effort is, I think, a real improvement. The graphical representations make the points of comparison in an allegory between Text A (‘Goblin Market’) and Text B (the story of the Fall of Man) much clearer, and they have the added advantage of being able to highlight where the biblical comparison breaks down, in that some pretty big parts of the Bible story are missing from Rossetti’s poem, such as the presence of God.

[Image: revised slide comparing ‘Goblin Market’ with the story of the Fall of Man]

I then attempted to flesh out this initial explanation with an amended version of my original effort. This time I added a relational dimension to my diagram which enabled me to visualise the difference between allegory and other related literary concepts, such as fable and parable. The trouble was that whilst I had made some visual links between genres clear, I had lost the power of the previous graphic to embody the workings of allegory itself.

[Image: amended slide relating allegory to other genres such as fable and parable]

My final version therefore combines the best elements of my previous attempts, including the graphical embodiment of the concept of allegory, the relational links to other genres and better images to exemplify the different forms of allegory. The visual cues and graphical representations, along with my honed explanation, seem to have been much more successful in shifting my students’ understanding of allegory. At least, I hope that is the case.

[Image: final combined slide]

Allegory is not the only literary concept I have attempted to represent graphically in this way. I hope to blog about others in the future, so watch this space.

Thanks for reading.


Principles of Great Assessment #2 Validity and Fairness

[Image: draft of our assessment principles]

This is the second of a three-part series on the principles of great assessment. In my last post I focused on some principles of assessment design. This post outlines the principles that relate to ideas of validity and fairness.* As I have repeatedly stressed, I do not consider myself to be an expert in the field of assessment, so I am more than happy to accept constructive feedback to help me learn and to improve upon the understanding of assessment that we have already developed as a school. My hope is that these posts will help others to learn a bit more about assessment, and will help the assessments that students sit to be as purposeful and supportive of their learning as possible.

So, here are my principles of great assessment 6-10.

6. Regularly review assessments in light of student responses

Validity in assessment is extremely important. For Daniel Koretz it is ‘the single most important criterion for evaluating achievement testing.’ Often when teachers talk about an assessment being valid or invalid, they are using the term incorrectly. In assessment, validity means something very different to what it means in everyday language. Validity is not a property of a test, but rather of the inferences that an assessment is designed to produce. As Lee Cronbach observes, ‘One validates not a test but an interpretation of data arising from a specified procedure’ (Cronbach, 1971).

There is therefore no such thing as a valid or invalid assessment. A maths assessment with a high reading age might be considered to provide valid inferences for students with a high reading age, but invalid inferences for students with low reading ages. The same test can therefore provide both valid and invalid inferences depending on its intended purpose, which links back to the second assessment principle: the purpose of the assessment must be set and agreed from the outset. Validity is thus specific to particular uses in particular contexts and is not an ‘all or nothing’ judgement but rather a matter of degree and application.

If you understand that validity applies to the inferences that assessments provide, then you should be able to appreciate why it is so important to make sure that an assessment provides inferences about student achievement that are as valid as possible, particularly when there are significant consequences attached for the students taking it, like attainment grouping. There are two main threats to achieving this validity: construct under-representation and construct irrelevance. Construct under-representation refers to when a measure fails to capture important aspects of the construct, whilst construct irrelevance refers to when a measure is influenced by things other than the construct itself, as in the example of the high reading age in a maths assessment.

There are a number of practical steps that teachers can take to help reduce these threats to validity and, in turn, to increase the validity of the inferences provided by their assessments. Some are fairly obvious and can be implemented with little difficulty, whilst others require a bit more technical know-how and/or a well-designed systematic approach that provides teachers with the time and space needed to design and review their assessments on a regular basis.

Here are some practical steps educators can take:

Review assessment items collaboratively before a new assessment is sat

Badly constructed assessment items create noise and can lead to students guessing the answer. Where possible, it is therefore worth spending some time and effort upfront, reviewing items in a forthcoming summative assessment before they go live so that any glaring errors around the wording can be amended, and any unnecessary information can be removed. Aside from making that assessment more likely to generate valid inferences, such an approach has the added advantage of training those less confident in assessment design in some of the ways of making assessments better and more fit for purpose. In an ideal world, an important assessment would be piloted first to provide some indication of issues with items, and of the likely spread of results across an ability profile. This will not always be possible.

Check questions for cues and contextual nudges

Another closely linked problem, and another potential threat to validity, is flawed question phrasing that inadvertently reveals the answer, or provides students with enough contextual cueing to narrow down their responses to a particular semantic or grammatical fit. In the example item from a PE assessment below, for instance, the phrasing of the question, namely the grammatical construction of the words and phrases around the gaps, makes anaerobic and aerobic more likely candidates for the correct answer. They are adjectives which precede nouns, whilst the rest of the options are all nouns and would sound odd to a native speaker – a noun followed by a noun. A student might select anaerobic and aerobic, not because they necessarily know the correct answer, but because they sound correct in accordance with the syntactical cues provided. This is a threat to validity in that the inference is perhaps more about grammatical knowledge than about understanding of bodily processes.

Example: The PE department have designed an end of unit assessment to check students’ understanding of respiratory systems. It includes the following types of item.

Task: use two of the following words to complete the passage below

Anaerobic, Energy, Circulation, Metabolism, Aerobic 

When the body is at rest this is ______ respiration. As you exercise you breathe harder and deeper and the heart beats faster to get oxygen to the muscles. When exercising very hard, the heart cannot get enough oxygen to the muscles. Respiration becomes _______.

Interrogate questions for construct irrelevance

If the purpose of an assessment has been clearly established from the outset and that assessment has been clearly aligned to the constructs within the curriculum, then a group of subject professionals working together should be able to identify items where things other than the construct are being assessed. Obvious examples are high reading ages that get in the way of assessments of mathematical or scientific ability, but sometimes the problem can be harder to detect, as with the example below. To some, this item might seem fairly innocuous, but on closer inspection it becomes clear that it is not assessing vocabulary knowledge as purported, but rather spelling ability. Whilst it may be desirable for students to spell words correctly, inferences about word knowledge would not be possible from an assessment with these kinds of items in it.

Example: The English department designs an assessment to measure students’ vocabulary skills. The assessment consists of 40 items like the following:

Task: In all of the ________________ of packing into a new house, Sandra forgot about washing the baby.

  1. Excitement
  2. Excetmint
  3. Excitemant
  4. Excitmint

7. Standardise assessments that lead to important decisions

Teachers generally understand the importance of making sure that students sit final examinations in an exam hall under the same conditions as everyone else taking the test. Mock examinations tend to replicate these conditions, because teachers and school leaders want the inferences provided by them to be as valid and fair as possible. For all manner of reasons, though, this insistence on standardised conditions for test takers is less rigorously adhered to lower down the school, even though some of the decisions based upon such tests in years 7 and 8 arguably carry much more significance for students than any terminal examination.

I know that I have been guilty of not properly understanding the importance of standardising test conditions. On more than one occasion I have set an end of unit or term assessment as a cover activity, thinking that it was ideal work because it would take students the whole lesson to complete and they would need to work in silence. I hadn’t appreciated how assessment is a bit more complicated than that, even for something like an end of unit test. I hadn’t considered, for instance, that it mattered whether students got the full hour, or more likely 50 minutes if the test was set by a cover supervisor who had to spend valuable time settling the class. I hadn’t taken on board that it would make a difference if my class sat the assessment in the afternoon, and the class next door completed theirs bright and early in the morning.

It may well be that my students would have scored exactly the same whether or not I was present, whether they sat the test in the morning or in the afternoon, or whether they had 50 minutes or the full hour. The point is that I could not be sure, and that if one or more of my students would have scored significantly higher (or lower) under different circumstances, then their results would have provided invalid inferences about their understanding. If they were then placed in a higher or lower group as a result, or I reported home to their parents some erroneous information about their test scores, which possibly affected their motivation or self-efficacy, then you could suggest that I had acted unethically.

8. Important decisions are made on the basis of more than one assessment

Imagine you are looking to recruit a new head of science. Now imagine the even more unlikely scenario that you have received a strong field of applicants, which, I appreciate, in the current recruitment climate is a bit of a stretch of the imagination. With such a strong field for such an important post, a school would be unlikely to make any decision on whom to appoint based upon the inferences provided by one single measure, such as an application letter, a taught lesson or an interview. More likely, they would triangulate all these different inferences about each candidate’s suitability for the role when making their decision, and even then they would cross their fingers that they had made the right choice.

A similar principle is at work when making important decisions on the back of student assessment results, such as which group to place students in the following term, which individuals need additional support, or how much progress, if any, to report home to parents. In each of these cases, as with the head of science example, it would be wise to draw upon multiple inferences in order to make a more informed decision. This is not to advocate an exponential increase in the number of tests students sit, but rather to recognise that when the stakes are high, it is important to make sure the information we use is as valid as possible. Cross-referencing examinations is one way of achieving this, particularly given the practical difficulties of standardising assessments previously discussed.

9. Timing of assessment is determined by purpose and professional judgement

The purpose of an assessment informs its timing. Whilst this makes perfect sense in the abstract, in practice there are many challenges to making it happen. In Principled Assessment Design, Dylan Wiliam notes how it is relatively straightforward to create assessments which are highly sensitive to instruction if what is taught is not hard to teach and learn. For example, if all I wanted to teach my students in English was vocabulary, and I set up a test that assessed them on the 20 or so words that I had recently taught them, it would be highly likely that the test would show rapid improvements in their understanding of these words. But as we all know, teaching is about much more than learning a few words. It involves complex cognitive processes and vast webs of interconnected knowledge, all of which take a considerable amount of time to teach, and in turn to assess.


It seems that the distinction between learning and performance is becoming increasingly well understood, though perhaps in terms of curriculum and assessment its widespread application to the classroom is taking longer to take hold. The reality for many established schools is that it is difficult to construct a coherent curriculum, assessment and pedagogical model across a whole school that embraces the full implications of the difference between learning and performance. It is hard enough to get some colleagues to fully appreciate the distinction, and its many nuances, so indoctrinated are they by years of the wrong kind of impetus. Added to this, whilst there is general agreement that assessing performance can be unhelpful and misleading, there is no real consensus on the optimal time to assess for learning. We know that assessing soon after teaching is flawed, but not exactly when to capture longer-term learning. Compromise is probably inevitable.

What all this means in practical terms is that schools have to work within their localised constraints, including issues of timetabling, levels of understanding amongst staff and, crucially, the time and resources to enact the theory once it is known and understood. Teacher workload must also be taken into account when deciding upon the timing of assessments, recognising certain pinch points in the year and building a coherent assessment timetable that respects the division between learning and performance, builds in opportunities to respond to (perceived) gaps in understanding and spreads out the emotional and physical demands on staff and students. Not easy, at all.

10. Identify the range of evidence required to support inferences about achievement

Tim Oates’ oft-quoted advice to avoid assessing ‘everything that moves, just the key concepts’ is important to bear in mind, not just for those responsible for assessment, but also for those who design the curricula with which those assessments are aligned. Despite the freedoms afforded by liberation from levels and the greater autonomy possible with academy status, many of us have still found it hard to narrow down what we teach to what is manageable and most important. We find it difficult in practice to sacrifice breadth in the interests of depth, particularly where we feel passionately that so much is important for students to learn. I know it has taken several years for our curriculum leaders to truly reconcile themselves to the need to strip out some content and focus on teaching the most important material to mastery.

Once these ‘key concepts’ have been isolated and agreed, the next step is to make sure that any assessments cover the breadth and depth required to gain valid inferences about student achievement of them. I think the diagram below, which I used in my previous blog, is helpful in illustrating how assessment designers should be guided by both the types of knowledge and skills that exist within the construct (the vertical axis) and the levels of achievement across each component, i.e. the continuum (the horizontal axis). This will likely look very different in some subjects, but it nevertheless provides a useful conceptual framework for thinking about the breadth and depth of items required to support valid inferences about levels of attainment of the key concepts.

[Diagram: types of knowledge and skills within the construct (vertical axis) against a continuum of achievement (horizontal axis)]

My next post, which I must admit I am dreading writing and releasing for public consumption, will focus on trying to articulate a set of principles around the very thorny and complicated area of assessment reliability. I think I am going to need a couple of weeks or so to make sure that I do it justice!

Thanks for reading!

 

* I am aware the numbering of the principles on the image does not match the numbering in my post. That’s because the image is a draft document.

 

Principles of Great Assessment #1 Assessment Design


This is the first in a short series of posts on our school’s emerging principles of assessment, which are split into three categories – principles of assessment design; principles of ethics and fairness; and principles for improving reliability and validity. My hope in sharing these principles of assessment is to help others develop greater assessment literacy, and to gain constructive feedback on our work to help us improve and refine our model in the future.

In putting together these assessment principles and an accompanying CPD programme aimed at middle leaders, I have drawn heavily on a number of writers and speakers on assessment, notably Dylan Wiliam, Daniel Koretz, Daisy Christodoulou, Rob Coe and Stuart Kime. All of them have a great ability to convey difficult concepts (I only got a C grade in maths, after all) in a clear, accessible and, most importantly, practical way. I would very much recommend following up their work to deepen your understanding of what truly makes great assessment.

1. Align assessments with the curriculum


In many respects, this first principle seems pretty obvious. I doubt many teachers deliberately set out to create and administer assessments that are not aligned with their curriculum. And yet, for a myriad of different reasons, this does seem to happen, with the result that students sit assessments that are not directly sampling the content and skills of the intended curriculum. In these cases the results achieved, and the ability to draw any useful inferences from them, are largely redundant. If the assessment is not assessing the things that were supposed to have been taught, it is almost certainly a waste of time – not only for the students sitting the test, but for the teachers marking it as well.

Several factors can affect the extent to which an assessment is aligned with the curriculum and are important considerations for those responsible for setting assessments. The first is the issue of accountability. Where accountability is unreasonably high and a culture of fear exists, those writing assessments might be tempted to narrow down the focus to cover the ‘most important’ or ‘most visible’ knowledge and skills that drive that accountability. In such cases, assessment ceases to provide any useful inferences about knowledge and understanding.

Assessment can also become detached from the curriculum when that curriculum is not delineated clearly enough from the outset. If there is not a coherent, well-sequenced articulation of the knowledge and skills that students are to learn, then any assessment will always be misaligned, however hard someone tries to make the inferences it produces valid. A clear, well-structured and shared understanding of the intended curriculum is vital for the enacted curriculum to be successful, and for any assessment of individual and collective attainment to be purposeful.

A final explanation for the divorce of curriculum from assessment is the knowledge and understanding of the person writing the assessment in the first place. To write an assessment that can produce valid inferences requires a solid understanding of the curriculum aims, as well as of the most valid and reliable means of assessing them. Speaking for myself, I know that I have got a lot better at writing assessments that are properly aligned with the curriculum the more I have understood the links between the two and how to go about bridging them.

2. Define the purpose of an assessment first

 Depending on how you view it, there are essentially two main functions of assessment. The first, and probably most important, purpose is as a formative tool to support teaching and learning in the classroom. Examples might include a teacher setting a diagnostic test at the beginning of a new unit to find out what students already know so their teaching can be adapted accordingly. Formative assessment, or responsive teaching, is an integral part of teaching and learning and should be used to identify potential gaps in understanding or misconceptions that can be subsequently addressed.

The second main function of assessment is summative. Whereas examination bodies certify student achievement, in the school context the functions of summative assessment might include assigning students to different groupings based upon perceived attainment, providing inferences to support the reporting of progress home to parents, or the identification of areas of underperformance in need of further support. Dylan Wiliam separates out this accountability function from the summative process, calling it the ‘evaluative’ purpose.

Whether the assessment is designed to support summative or formative inferences is not really the point. What matters here is that the purpose or function of the assessment is made clear from the outset and that the inferences the assessment is intended to produce are widely understood by all. In this sense, the function of the assessment determines its form. A class test intended to diagnose student understanding of recently taught material will likely look very different from a larger-scale summative assessment designed to draw inferences about whether knowledge and skills have been learnt over a longer period of time. Form therefore follows function.

3. Include items that test understanding across the construct continuum

Many of us think about assessment in the reductive terms of specific questions or units, as if performance on question 1 of Paper 2 were actually a thing worthy of study in and of itself. Assessment should be about approximating student competence in the constructs of the curriculum. A construct can be defined as the abstract conception of a trait or characteristic, such as mathematical or reading ability. Direct constructs are tangible physical traits, like height and weight, which can be measured using verifiable methods and stated units of measurement. Unfortunately for us teachers, most educational assessment assesses indirect constructs that cannot be measured in such easily understood units. Instead, they are estimated from questions that we think indicate competency, and that stand in for the thing that we cannot measure directly.

Within many indirect constructs, such as writing or reading ability, there is likely to be a continuum of achievement. So within the construct of reading, for instance, some students will be able to read with greater fluency and/or understanding than others. A good summative assessment therefore needs to differentiate between these differing levels of performance and, through the questions set, define what it means to be at the top, middle or bottom of that continuum. In this light, one of the functions of assessment has to be estimating the position of learners on a continuum. We need to know this to evaluate the relative impact or efficacy of our curricula, and to understand how our students are progressing within them.

[Diagram: a continuum of achievement within a construct]

4. Include items that reflect the types of construct knowledge

Some of the assessments we use do not adequately reflect the range of knowledge and skills of the subjects they are assessing. Perhaps the format of terminal examinations has had too much negative influence on the way we think about our subjects and design assessments for them. In my first few years of teaching, I experienced considerable cognitive dissonance between my understanding of English and the way that it was conceived of within the profession. I knew my own education was based on reading lots of books, and then lots more books about those books, but everything I was confronted with as a new teacher – schemes of work, the literacy strategy, the national curriculum, exam papers – led me to believe that I should really be thinking of English in terms of skills like inference, deduction and analysis.

English is certainly not alone here, with history, geography and religious studies all suffering from a similar identity crisis. This widespread misconception of what constitutes expertise and how that expertise is gained probably explains, at least in part, why so many schools have been unable to envisage a viable alternative to levels. Like me, many of the people responsible for creating something new have themselves been infected by errors from the past and have found it difficult to see clearly that one of the big problems with levels was the way they misrepresented the very nature of subjects. And if you don’t fully understand or appreciate what progression looks like in your subject, any assessment you design will be flawed.

Daisy Christodoulou’s Making Good Progress is a helpful corrective, in particular her deliberate practice model of skill acquisition, which is extremely useful in explaining the manner in which different types of declarative and procedural knowledge can go into perfecting a more complex overarching skill. Similarly, Michael Fordham’s many posts on substantive and disciplinary knowledge, and how these might be mapped on to a history progression model are both interesting and instructive. Kris Boulton’s series of posts (inspired by some of Michael’s previous thinking) are also well worth a look. They consider the extent to which different subjects contain more substantive or disciplinary knowledge, and are useful points of reference for those seeking to understand how best to conceive of their subject and, in turn, design assessments that assess the range of underlying forms of knowledge.

[Diagram: types of construct knowledge against a continuum of achievement]

5. Use the most appropriate format for the purpose of the assessment

The format of an assessment should be determined by its purpose. Typically, subjects are associated with certain formats. So, in English, essay tasks are quite common, whilst in maths and science, short exercises with right and wrong answers are more the norm. But as Dylan Wiliam suggests, although ‘it is common for different kinds of approaches to be associated with different subjects…there is no reason why this should be so.’ Wiliam draws a useful distinction between two modes of assessment: a marks for style approach (English, history, PE, Art, etc.), where students gain marks for how well they complete a task, and a degree of difficulty approach (maths, science), where students gain marks for how far they progress in a task. It is entirely possible for subjects like English to employ degree of difficulty assessment tasks, such as multiple-choice questions, and for maths to set marks for style assessments, as this example of comparative judgement in maths clearly demonstrates.

[Image: example of comparative judgement in maths]

In most cases, the purpose of assessment in the classroom will be formative and so designed to facilitate improvements to student learning. In such instances, where the final skill has not yet been perfected but is still very much a work in progress, it is unlikely that the optimal interim assessment format will be the same as the final assessment format. For example, a teacher who sets out to teach her students to construct well written, logical and well supported essays by the end of the year is unlikely to set essays every time she wants to infer her students’ progress towards that desired end goal. Instead, she will probably set short comprehension questions to check their understanding of the content that will go into the essay, or administer tests on their ability to deploy sequencing vocabulary effectively. In each of these cases, the assessment reflects the inferences about student understanding the teacher is trying to draw, without confusing or conflating them with other things.

In the next post, I will outline our principles of assessment in relation to ethics and fairness. As I have repeatedly made clear, my intention is to help contribute towards a better understanding of assessment within the profession. I welcome anyone who wants to comment on our principles, or to critique anything that I have written, since this will help me to get a better understanding of assessment myself, and make sure the assessments that we ask our students to sit are as purposeful as possible.

Thanks for reading.

 

 

Principles of Great Assessment: Increasing the Signal and Reducing the Noise


After the government abolished National Curriculum levels, there was a great deal of initial rejoicing from both primary and secondary teachers about the death of a flawed system of assessment. Many, including myself, delighted in the freedom afforded to schools to design their own assessment systems anew. At the time I had already been working on a model of assessment for KS3 English – the Elements of Assessment – and believed that the new freedoms were a positive step in improving the use of assessment in schools.

Whilst I still think that the decision to abolish levels was correct, I am no longer quite so sure about the manner and timing in which they were removed. Since picking up responsibility for assessment across the school, I have come to realise just how damaging it was for schools to have to invent their own alternatives to levels without anywhere near enough assessment expertise to do so well. Inevitably, many schools simply recreated levels under a different name, or retreated into the misguided safety of the flight path approach.

I would like to think that our current KS3 assessment model, the Elements of Expectation, has the potential to be a genuine improvement on National Curriculum levels, supporting learning and providing reliable summative feedback on student progress at sensible points in the calendar. Even though it is in its third year, however, it is still not quite right. One of the things that I think is holding us back is our lack of assessment literacy. I am probably one of the more informed staff members on assessment, but most of what I know has been self-taught from reading some books and hearing a few people talk.

This year, in an effort to do something about this situation and to finally get our KS3 model closer to what we want, we have run some extensive professional development on assessment. Originally, I had intended to send some colleagues to Evidence Based Education’s inaugural Assessment Academy. It looks superb and represents an excellent opportunity to learn much more about assessment. But when it became clear budget constraints would make this difficult, we decided to set up and run our own in-house version: not as good (obviously) and inevitably rough around the edges, but good enough, I think, for our KS3 Co-ordinators and heads of subjects to develop the expertise they need to improve their use of assessment with our students.

The CPD is iterative and runs throughout the course of the year. So far, we have established a set of assessment principles that we will use to guide the way we design, administer and interpret assessments in the future. In the main, these principles apply to the use of medium to large-scale assessments, where the inferences drawn will be used to inform relatively big decisions, such as proposed intervention, student groupings, predictions, reporting progress, etc. Assessment as a learning event is pretty well understood by most of our teachers and is already a feature of many of our classrooms, so our focus is more on improving the validity and reliability of our summative inferences.

I thought it might be useful and timely to share these principles over a series of posts, especially as a lot of people still seem to be struggling, like us, to create something better and more sustainable than levels. The release of Daisy Christodoulou’s book Making Good Progress has undoubtedly been a great and timely help, and I intend it to provide some impetus to our sessions going forward, as we look to turn some of the theory we covered before Christmas into something practical and useful. This excellent little resource from Evidence Based Education is an indication of some of the fantastic work out there on improving assessment literacy. I hope I can add a little more in my next few posts.

If we are going to take the time and the trouble to get our students to sit assessments, then we want to make sure that the information is as reliable and valid as possible, and that we don’t try and ask our assessments to do too much. The first in my series of blogs will be on our principles of assessment design, with the other two on ethics and fairness and then, finally, reliability and validity.

All constructive feedback welcome!

The Future of Assessment for Learning


Making Good Progress is an important book and should be required reading for anyone involved in designing, administering or interpreting assessments involving children. Given the significant changes to the assessment and reporting landscape at every level, notably in the secondary context at KS3, this book is a timely read, and for my money it is the most helpful guide to designing effective formative and summative assessment models currently available to teachers.

I’ve heard Daisy speak at various education events over the years, and it is interesting to see how many of these individual talks have fed into the development of this book. Making Good Progress is a coherent and highly convincing argument for re-evaluating our existing understanding and approach to formative assessment and for moving away from the widespread practice of using formative assessment for summative purposes.

Life after Levels

From what I can tell, schools have responded to the abolition of levels in three main ways. The first is business as usual, maintaining the use of levels – and thus ignoring the manifold problems associated with their misapplication – or recreating levels under another name. Such amended approaches appear to recognise the flaws of levels and offer something different, but in reality too often they end up simply representing the same thing, changing numbers to letters or to something else equally fatuous. In many respects our first iteration after levels – the Elements of Assessment – fell foul of some of these same mistakes.

The second response to life after levels is the mastery-inspired model of assessment. In this approach subjects identify learning objectives for a student to master over the course of a year. This approach, which usually includes mapping out these myriad goals on a spreadsheet, appears more attractive in theory – what is to be learned is clearly articulated and not bundled up into a grade or prose descriptor – but in practice can prove equally unreliable and particularly unwieldy to maintain. Often the micro goals are watered down versions of the final assessment, not carefully broken down components of complex skills.

The final approach is the popular flight path model. This comes in various forms, but generally tends to focus on working backwards from GCSE grades to provide a clear ‘path’ from year 7 to year 11. I can understand the allure of this, and appreciate how such a model appears to offer school leaders a neat and tidy solution to levels. The problem is that learning is not this straightforward, and introducing the language of GCSE at year 7 seems to me to entirely miss the point of what assessment can and should be at this point of a child’s education – some five years before any terminal exam is to be sat!

As you read Daisy’s fantastic book, it becomes clear how all of these approaches to assessment are in one way or another fundamentally flawed: none of them really address the two underlying problems that ultimately did for levels, namely the tendency for interim (or formative) assessment to always look like the final task, and for assessment to happily double up for formative and summative purposes. Making Good Progress destroys these widely held beliefs, albeit in the kind and sympathetic manner of a former teacher who understands how all this mess came to pass.

Generic Skill versus Deliberate Practice

In chapter five Daisy takes up what, from my experience, is the biggest barrier to improvement in the use of assessment in schools: how teachers conceive of their subjects in the first place. Daisy carefully unpicks the misconception that initial tests should reflect the same format as the final assessment. She outlines two very different methods of skill acquisition that account for how interim assessments are constructed – the generic skill method (where skills are seen as transferable and practised in a form close to their final version) and the deliberate practice method (where practice is deliberate and focused, and may look different in nature to the final version).

In the generic skill model, an interim assessment, such as a test of reading ability in English, will look very similar to the final assessment of reading at the end of the course – an essay or an extended piece of analysis in a GCSE exam, for instance. This approach, however, completely misunderstands how students learn large and complex domains like reading, and prevents the interim assessment from being used formatively because it bundles up the many different facets of the domain and hides them in vague prose descriptors.

The alternative to this, Daisy calls the deliberate practice model. Informed by the work of Anders Ericsson, this view of skill acquisition respects the limitations of working memory and recognises how complex skills are learnt by breaking down the whole skill into its constituent parts in an effort to build up the mental models that enable expertise. In this model very few, if any, practice tasks look like the final assessment. Sports coaches and music teachers have long understood the importance of this method, isolating specific areas of their domain for deliberate practice. As Daisy notes: ‘The aim of performance is to use mental models. The aim of learning is to create them.’

These two distinct approaches to skill development have significant consequences for the design and implementation of assessment in the classroom. If you are a history teacher and you teach in accordance with the generic model of skill acquisition, you will tend to set your students essays when you want to check their understanding of historical enquiry. You may get the illusion of progress through your summative judgements – an emerging student might appear to become a secure student from one assessment to the next – but neither you, nor your students, will really be any the wiser about what, if anything, has improved or, more to the point, what needs to be improved in the future.

Another history teacher might share the same desire to teach her students to write coherent historical essays. This teacher, however, knows this is an incredibly complex skill that requires sophisticated mental models underpinned by a breadth and depth of historical knowledge. She isolates these specific areas and targets them for dedicated practice. When she checks for understanding, she sets tests that reflect these micro components, such as a timeline task to show students’ understanding of chronology, or a series of multiple-choice questions designed to ascertain their understanding of causality. Extended writing comes later, when the mental models are secure. For now, the results from the tasks provide useful, precise formative feedback.

Koretz and Wiliam

For much of the book, Daisy draws on the work of Daniel Koretz and Dylan Wiliam to support her arguments. Koretz’s Measuring Up is another great book, which outlines the design and purpose of standardised testing and how to interpret examination results in a sensible way. Wiliam’s work is equally instructive, in particular his SSAT pamphlet Principled Assessment Design, which is a helpful technical guide for school leaders on designing reliable and valid school assessments.

Making Good Progress complements both these other works, and together the three books tell you everything you need to know about how to construct valid, reliable and ethical assessments. Like Koretz and Wiliam, Daisy considers the key technical assessment concepts of reliability and validity, and similarly exposes the uses and abuses of assessment, which she does in such a way that makes the need to assess better seem urgent and necessary. What it also offers, however, in particular through the deliberate practice paradigm, is the means through which to improve assessment and to link it to a coherent progression model of learning.

If I had one minor criticism of Making Good Progress, it would be that the closing chapters that outline this coherent model of curriculum and assessment are perhaps a little idealistic. Whilst the arguments for more widespread use of textbooks to support a coherent model of progression are sound, and the idea of creating banks of subject-specific diagnostic questions for formative assessment purposes makes complete sense, the chances of either of these things happening any time soon seem to me rather remote. Both require significant agreement amongst teachers on the nature of their disciplines, some kind of consensus around skill acquisition (as Daisy notes herself, the generic skill method is pervasive) and for schools to work together systematically. Oh, and stacks of investment too. None of these things seems likely in the current education climate.

One much bigger criticism of the book, which I really must take Daisy to task about, is that it was not written several years earlier. Whilst I get that it may have taken her a while to formulate her ideas, and perhaps a good few months more to write them out, it still seems pretty remiss of her not to have co-ordinated better with the DfE. Had Making Good Progress been published in 2013, when the abolition of National Curriculum levels was first announced (perhaps in a Waterstones 3 for 2 offer with Koretz and Wiliam), then I think that I, along with a number of other teachers, would not have wasted quite so much time and effort floundering around in the dark, trying to design something better than what went before, but often failing miserably.

Making Good Progress is a truly great read, and though its ostensible focus is on improving the use of formative assessment in schools, it covers a great deal of other ground in order to lay out the evidence to support the arguments. I enjoyed Daisy’s book immensely and commend it to anyone in the profession in any way involved with assessment, which is pretty much everyone!