Why didn’t you tell me? 5 things I wish I had been told sooner

Like many others, there are things I have learned in recent years that it would have been really helpful to have been told about earlier on in my career. Knowing about the relative ineffectiveness of marking stacks of books, the power of retrieval practice and the importance of background knowledge, for instance, would have all helped me be a much better teacher.
But whilst insights like these are crucial to improving learning and managing workload, they are not my focus here. Implementing the principles of retrieval practice, for instance, requires a great deal of strategic thought and collaboration. Instead, I wanted to share a few simple things before the start of the new term that I wish someone had taken me to one side and explained – things I think teachers can take on board relatively easily to improve their teaching.

1. Don’t talk over students whilst they work

Others have written eloquently and in detail about the theoretical reasoning why this is such a bad idea, but in essence it should be pretty obvious to all of us anyway. We can all think of situations where we are trying to concentrate on something and somebody is talking in the background. I hate, for example, the incessant messages given out on trains when you are trying to read. You either ignore the message (and maybe your station) or you get distracted from your book to listen to some tedious automated announcement.

Unless it is critical to the task, once your students are working, just leave them to it. However helpful you might think you are being – clarifying your instructions, giving time warnings, providing further examples, etc. – you are not. You are getting in the way of their learning and being annoying!

2. The whiteboard is your friend: use it!

My handwriting is dreadful. Think a doctor’s scrawl after a twelve-hour shift. Writing on the board was one of the main anxieties I had coming into the profession; PowerPoint seemed ready-made for me. And yet, I have come to realise that the whiteboard is in fact the most underused, underrated and most utterly brilliant tool at our disposal. If it were up to me, I would rip out all the ‘interactive’ boards in my school and replace them with good old-fashioned whiteboards. Relying too heavily on prepared slides restricts our ability to respond to learners’ needs and runs the risk of turning us into presenters.

Whiteboards allow you to do all of the following and more:

  • record your instructions
  • model and exemplify work
  • track the lesson
  • write down key vocabulary
  • provide prompts for writing
  • provide cues for oral contributions
  • break down tricky concepts in stages
  • sketch little diagrams to explain abstract concepts
  • mock up how you want students to present their work

3. Resist the urge to constantly help 

It is soooooo tempting, when you set your class off on a task, to dash from desk to desk to attend to the poor souls who have put their hands up to signal their confusion. I see it all the time: almost as soon as a class has been told what to do, the teacher scours the room, looking for students to ‘help’. It’s almost as if we need to justify ourselves by crouching down next to a desk with a pen in our hand and a battery of examples at the ready.

And yet most of the time, we are probably not really helping at all. At least not in the long term, where we are inadvertently creating a culture of dependency. If students really do need our help immediately after we have set them a task, then either our instructions were unclear or the task we set was too hard. Both are ultimately undesirable, and both warrant something other than manic firefighting, such as repeating instructions to the class or modelling examples for all.

4. Don’t try and squeeze things into the end of a lesson

I really loved Columbo – the scruffy, laconic detective with the dirty mac and the habit of using an apparent aside to checkmate the criminal. The ‘just one more thing’ strategy worked for Columbo but it has never worked for me, and I doubt it works for you either. You know the situation: there are still a couple of minutes left in the lesson, and you really want to finish your point, or share one more quick example. You think it will help, but it never really does. No one is listening; minds are elsewhere. Less is always more, and the surest way to create a chaotic ending to your lesson is to try and shoehorn in one final task.

5. Try to avoid saying daft things to motivate

Whilst you may be sceptical of some of the more extravagant claims made about Growth Mindset – I know I am – you’d have to be pretty cynical to entirely dismiss the idea that what we say to students and how we say it can have a significant impact on their self-conception. Praising left, right and centre for even the most modest of responses – or even for just responding – cannot help anyone. Lavish praise sets such a low bar for achievement, and from my experience students know they are being patronised. In a similar vein, spur-of-the-moment comments designed to motivate, such as ‘top set students don’t behave like that’ or ‘A grade students really should know this’, are unhelpful and damaging. Be alert to any coded messages in your motivational asides and reprimands.

I did have a much longer list of titbits to share, but I figured I would heed my own advice and stop here.

Thanks for reading.


Quietly confident (thanks to the new A levels!)

 


Next week my year 13 class sit their first literature exam – two short analytical essays on Hamlet, and a comparison of A Doll’s House and Christina Rossetti poetry. For the first time in a long while – perhaps ever – I have not run any one-to-one sessions or taught any additional after-school revision classes. My students have not written hundreds of essays, or emailed me constantly in my holidays with questions or additional work to mark.

And yet, by Jove, I think they are ready.

Obviously, time will tell, and I am aware of the hubris I am inviting by publicly asserting my confidence in their readiness. It may well be that Kris will underperform, or that Rose will not fulfil her potential. In either eventuality, however, I don’t think I will feel any regret about my teaching or the approach that I have taken. They are all ready; I don’t think there is anything more I could have done!

Things have not always been this way, though, and I have not always felt quite so calm at this time of year. There are probably two reasons why I am feeling sanguine. The first is experience. This is my 13th A2 class, and with each passing year I become a little less caught up in exam season frenzy. I care a great deal about my students, but I care much more about my own children. I do what I can with the time I have available, which has decreased since I became a dad – and I get more tired these days.

The second, arguably more significant reason for my relative confidence is, believe it or not, down to the linear nature of the new examinations, and, in particular, our school’s decision not to bother with any interim AS exams. For maybe the first time in my career – I had two year 11 classes, a year 12 class and a year 13 group in my NQT year! – I have been able to teach the curriculum properly and with fidelity to the principles of how students learn best.

Most years I pick up exam classes and have the (dubious) pleasure of preparing students for exams in only a few months’ time. There are usually stacks of poems to learn and lots of coursework to get through. What I believe about student learning goes out the window, in favour of short-term performance wins. Even with year 12, I am often unable to teach like a research champion because of the reductive nature of unit assessment.

Last year, I wrote of the joy I was experiencing with the greater freedoms afforded by linearity, and this has only continued since. I have been able to properly embed a range of strategies and for once feel like, along with the reduction in the number of texts on the syllabus, there is enough time to properly explore texts, as well as get meaningfully into contextual factors, different theatrical interpretations and theoretical approaches.

Knowledge

Take Hamlet. Under the previous modular system, in one term there would only be enough time to read the text together once as a class, simultaneously trying to get to grips with characters, events and emerging themes, whilst also analysing key passages and relating ideas to contextual details. Talk about cognitive overload.

This time, and with my present year 12 class too, I have been able to read the play multiple times and to watch several different interpretations. On each sweep, I have been able to focus on particular things: character, plot and basic ideas first time round; close analysis of key scenes the next; wider interpretations and theoretical readings in later readings. We finished the course at Easter, and have been revisiting ever since.

Spacing and Interleaving

As well as being able to return to the texts multiple times, the new linear A level has provided opportunities to space out readings and interleave them with other content. So, for example, after reading Hamlet for plot and character, we were able to study some Rossetti poems and make a start on the coursework. Returning to each set text – with frequent quizzing in between – seems to have strengthened student understanding.

Quizzing

Without the pressure of rushing through lots of content – or worse, missing out swathes – there has been time to build in systematic quizzing. At the start of every lesson I am able to test students on their knowledge and understanding, creating regular retrieval practice as well as opportunities for valuable formative assessment. Crucially, I have had the time to address any misconceptions and explain things again if necessary.

Deliberate Practice

By far the biggest impact the new two-year A Level has had on my teaching is the time it has provided for developing the quality of students’ writing. For quite a while now, I have been delaying getting students to write. Long gone are the days of reading a couple of scenes or a few chapters and then manufacturing an exam-style question just so students get to do an essay. It’s a written subject, so there must be lots of extended writing, right?

Actually, no. As the experience of the last few years has shown me – particularly with my current cohort – endless essay writing does not maketh the literature student. What it does maketh is a mountain of substandard work for the downtrodden teacher who has to then dutifully mark it, often to little or no avail. Whilst they were in year 12, I hardly set my students any essays, focusing instead on developing their knowledge base and engaging in deliberate practice of specific sentence types, such as thesis statements.

Only in the last few months have my class been writing whole essays. What has struck me is how quickly their essays have developed. Usually, it would be quite a while before I would see an uplift in style, argument and depth of analysis, but this year, my students have made much more progress much more quickly. I genuinely think that knowing more about the texts has increased their confidence and allowed them to articulate themselves more coherently. The depth of their arguments is noticeable.

Final word

I don’t want to overplay things. I am certainly not suggesting my students will get extraordinary results because of anything extraordinary that I have done. Some will do very well; some will do as expected; others may end up disappointed. ‘Twas ever thus.

What I think, and hope, is different this time, is that my students will have got their results without having to complete endless mock examinations, come back every week after school for weeks on end, or knock out an unrealistic number of essays. I also think that a lot more of what they have learnt will last beyond the exam, which I am not sure I can say, hand on heart, has always been the case.

More than anything, though, the changes to specification and linearity have meant that I have been able to teach in a way that is efficient and sustainable, for my students and for me. Much of their success will come down to how well they have applied themselves and, of course, to how well things go on the day itself. These things are largely beyond my control, and whilst I will naturally be disappointed for any that underachieve, I will not have any regrets about how well I have prepared them.

I have done my best for other people’s children, without having had to sacrifice valuable time with my own.

This is what teaching should be like for all teachers, whether parents or not.

 


 

 

Principles of Great Assessment #3: Reliability

This is the third and final post in my three-part series on the principles of great assessment. In the first post I focused on the principles of assessment design, and in the second on principles relating to issues of fairness and equality. This final post attempts to get to grips with principles relating to reliability and to making assessments provide useful information about student attainment. I have been putting off this post because, whilst I recognise how important reliability is in assessment, I know how hard it is to get to grips with, let alone explain to others. I have tried my best to synthesise the words and ideas of others. I hope it helps lead to the better use of assessment in schools.

Here are my principles of great assessment 11-16

11. Define standards through questions set

The choice of questions set in an assessment is important, as the questions ultimately define the standard expected, even in cases where the prose descriptors appear secure. Where there is variation between the rigour of the questions set by teachers, problems occur and inaccurate inferences are likely to be drawn. The following example from Dylan Wiliam, albeit extreme, illustrates this relationship between questions and standards.

Task: add punctuation to the following sentence to make it grammatically correct

John where Paul had had had had had had had had had had had a clearer meaning.

This question could feasibly be set to assess students’ understanding of grammar – in particular, their knowledge of how commas and apostrophes are used to clarify meaning – which on the surface seems a relatively tight and definitive statement. Obviously, no right-minded teacher would ever set such an absurdly difficult example, which most of us, including English teachers, would struggle to answer correctly*. But what it highlights are the problems that can arise when teachers deploy their own understanding of the required standards independently.

A teacher setting the above question would clearly have sky-high expectations of their students’ grammatical understanding, or supreme confidence in their own teaching! More realistically, a question assessing students’ grammatical ability would look more like the example below, which requires far less grammatical understanding.

Task: add punctuation to the following sentence to make it grammatically correct

John went to the beach with his towel his bucket his swimming trunks and his spade.

All this is yet more reason why summative assessments should be standardised. It simply cannot be that the questions some students face demand significantly greater knowledge and understanding than those faced by others who have been taught the same curriculum. The questions used in tests of this nature should be agreed upfront and aligned with the curriculum to remain stable each year. This is, of course, really difficult in practice: teachers may start teaching to the test, and thus invalidate the inferences from the assessment, or the questions set one year are not of the same standard as those set previously, thus making year-on-year comparisons difficult.

12. Define standards through exemplar pupil work

As well as defining standards through questions, standards can also be defined through student work. Using examples of work to exemplify standards is far better than defining those same expectations through the abstraction of rubrics. As we have seen, not only do rubrics tend to create artificial distinctions between levels of performance, but the descriptions of these performances are more often than not meaningless in isolation. One person’s notion of detailed and developed analysis can easily be another’s highly sophisticated and insightful evaluation. As Hamlet tells Polonius, they are just ‘words, words, words’. They only mean something when they are applied to examples.

Whether we like it or not, we all carry mental models of what constitutes excellence in our subject. A history teacher knows when she sees a great piece of historical enquiry; she doesn’t need a set of performance descriptors to tell her it demonstrates sound understanding of the important causes and effects explained in a coherent way. She knows excellence because she has seen it before and it looked similar. Perversely, performance descriptors could actually lead her to lower the mark she awards, particularly if they are too formulaic and reductive, which seems to be the problem with KS2 mark schemes: the work includes all the prescribed functional elements, but the overall piece is not fluent, engaging or ambitious.

Likewise, the same history teacher knows when something has fallen short of what is required because it is not as good as the examples she has seen before that did, the ones that shape the mental model she carries of what is good. On their own rubrics really don’t tell us much, and though we may think they are objective, in reality we are still drawing upon our mental models whenever we make judgements. Even when the performance descriptors appear specific, they are never as specific as an actual question being asked, which ultimately always defines the standard.

If objective judgement using rubrics is a mirage, we are better off spending our time developing mental models of what constitutes the good, the bad and the ugly through exemplar work, rather than misunderstanding abstract prose descriptors. We should also look to shift emphasis towards the kinds of assessment formats that acknowledge the nature of human judgement, namely that all judgements are comparisons of one thing with another (Laming, 2004). In short, we should probably include comparative judgement in our assessment portfolio to draw reliable judgements about student achievement and make the intangible tangible.

13.  Share understanding of different standards of achievement

Standardisation has been a staple of subject meetings for years. In the days of National Curriculum Levels and the National Literacy Strategy, English teachers would pore over numerous examples of levelled reading and writing responses. At GCSE and A level in other subjects, I am sure many department meetings have been given over to discussing the relative standards of bits of student work. From my experience, these meetings are often a complete waste of time. Not only do teachers rarely agree on why one piece of writing with poor syntax and grammar should gain a level 5, but we rarely alter our marking after the event anyway. Those that are generous remain generous, and those that are stingier continue to hold back from assigning the higher marks.

The main problem with these kinds of meetings is their reliance on rubrics and performance descriptors, which as we have seen fail to pin down a common understanding of achievement. The other problem is that they fail to acknowledge the fundamental nature of human judgement, namely that we are relativist rather than absolutist in our evaluation. Since we are probably never going to fully agree on standards of achievement, such as the quality of one essay over another, we are probably better off looking at lots of different examples of quality and comparing their relative strengths and weaknesses directly, rather than diluting the process by recourse to nebulous mark schemes.

Out of these kinds of standardisation meetings, with teachers judging a cohort’s work together, can come authentic forms of exemplified student achievement – ones that have been formed by a collective comparative voice, rather than by a well-intentioned individual attempting to reduce the irreducible to a series of simplistic statements. Software like No More Marking is increasingly streamlining the whole process, and the nature of the approach itself lends itself much better to year-on-year standards being maintained with more accuracy. Comparative judgement is not fully formed just yet but, as today’s report into the recent KS2 trial shows, there is considerable promise for the future.

14.  Analyse effectiveness of assessment items

As we have established, a good assessment should distinguish between different levels of attainment across the construct continuum. This means we would expect an assessment to include some questions that most students could answer, and others that only those with the deepest understanding could answer correctly. Obviously, there will always be idiosyncrasies. Some weaker students sometimes know the answer to more challenging questions, and likewise some stronger students do not always know the answer to the simpler questions. This is the nature of assessing from a wide domain.

What we should be concerned about in terms of making our assessments as valid and reliable as possible, however, is whether, in the main, the items on the test truly discriminate across the construct continuum. A good assessment should contain harder questions that discriminate students with stronger knowledge and understanding. If that is not the case then something probably needs to change, either in the wording of the items or in realigning teacher understanding of what constitutes item difficulty.

How to calculate the difficulty of assessment items:

Step one: rank items in order of perceived difficulty (as best you can!)

Step two: work out the average mark per item by dividing the total marks awarded for each item by the number of students.

Step three: for items worth more than 1 mark, divide the average score per item by the number of marks available for it.

Step four: all item scores should now be a value between 0 and 1. High values indicate the item is relatively accessible, whilst low values indicate the item is more difficult.

This is the formula in Excel to identify the average score of an individual item:

=SUM(B3:B8)/(COUNT(B3:B8)*B9)

On an assessment with a large cohort of students, we would expect to see a general trend of average scores going down as item difficulty increases, i.e. a lower percentage of students answering them correctly. Whilst it would be normal to expect some anomalies – after all, ranking items on perceived difficulty is not an exact science and is ultimately relative to what students know – any significant variations would probably be worth a closer look.
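If you would rather script the calculation than build a spreadsheet, the four steps above can be sketched in a few lines of Python (the marks below are invented purely for illustration):

```python
def facility(item_scores, max_marks):
    """Facility index for one item: average mark divided by marks available.
    Values near 1 mean most students scored well (an accessible item);
    values near 0 mean few did (a difficult item)."""
    return sum(item_scores) / (len(item_scores) * max_marks)

# Six students' marks on a 3-mark item
print(facility([3, 2, 3, 1, 0, 2], 3))  # 11/18 ≈ 0.61
```

This is the same arithmetic as the Excel formula above: the total marks awarded on the item, divided by the number of students multiplied by the marks available.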

How to calculate item discrimination

There are different ways of measuring the extent to which an item distinguishes between more and less able students. Perhaps the easiest of these uses the discrimination index.

Step One: Select two groups of students from your assessment results – one with higher test scores and one with lower test scores. This can either be a split right down the middle, or sample at both extremes, so one group in the top third of total results, and one group in the bottom third.

Step Two: Subtract the sum of the low test score group’s marks on the item from the sum of the high test score group’s marks, then divide the result by the number of students in the high score group multiplied by the marks available for the question.

This is the formula to use in Excel:

=(SUM(B5:B7)-SUM(B8:B10))/(COUNT(B5:B7)*B11)

The discrimination index is essentially the percentage of students in the high test score group who answer the item correctly minus the percentage of students in the low test score group who do so. It operates on a range between -1 and +1, with values close to +1 indicating the item discriminates well between high and low ability students for the construct being assessed.

Values near zero suggest that the item does not discriminate between high and low ability students, whilst values near -1 suggest that the item is quite often answered correctly by students who do the worst on the assessment as a whole and conversely incorrectly by those who score the best results on the overall assessment. These are therefore probably not great items.
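The two steps above can also be sketched in Python. Here, with invented marks, the high and low groups are the top and bottom thirds of a class of nine on a 3-mark item:

```python
def discrimination_index(high_group, low_group, max_marks):
    """Discrimination index for one item, on a scale of -1 to +1:
    the gap between the high- and low-scoring groups' performance.
    Both groups must contain the same number of students."""
    n = len(high_group)
    return (sum(high_group) - sum(low_group)) / (n * max_marks)

# Marks on a 3-mark item for the top third and bottom third of the class
print(discrimination_index([3, 3, 2], [1, 0, 1], 3))  # (8 - 2) / 9 ≈ 0.67
```

A value close to +1, as here, suggests the item separates stronger from weaker students well; a value near 0 or below would flag the item for a closer look.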

15.  Increase assessment reliability (but not at the expense of validity)


Reliability in assessment is about consistency of measurement over time, place and context. The analogy often used is to a pair of weighing scales. When someone steps on a pair of scales, whether in the bathroom or the kitchen, they expect the measurement of their weight to be consistent from one reading to the next, particularly if their diet is constant. This is the same as reliability in assessment: the extent to which a test produces consistent outcomes each time it is sat. In the same way you wouldn’t want your scales to add or take away a few pounds every time you weigh in, you wouldn’t want a test to produce wildly different results every time you sat it, especially if nothing had changed in your weight or your intelligence.

The problem is that it is impossible to create a completely reliable assessment, particularly if we want to assess things that we value, like the quality of extended written responses, which as we have already discussed can be very subjective, and we don’t want our students to sit hundreds of hours’ worth of tests. We can increase reliability, but it often comes at a price, such as in validity (assessing the things that we believe represent the construct), or in time, which is finite and could be used for other things, like teaching.

What is reliability?

There are two ways of looking at the reliability of an assessment – the reliability of the test itself, or the reliability of the judgements being made by the judges. Reliability can be calculated by comparing two sets of scores for a single assessment (such as rater scores with comparative judgement) or two scores from two tests that assess the same construct. Once we have these two sets of scores, it is possible to work out how similar the results are by using a statistic called the reliability coefficient.

The reliability coefficient is the numerical index used to talk about reliability. It ranges from 0 to 1. A number closer to 1 indicates a high degree of reliability, whereas a low number suggests some error in the assessment design, or more likely one of the factors identified in the Ofqual list below. Reliability is generally considered good or acceptable if the reliability coefficient is around 0.80, though as Rob Coe points out (see below), even national examinations, with all their statistical know-how and manpower, only get as high as 0.93 – and that was just the one GCSE subject!

How to identify the reliability of an assessment

There are four main ways to identify the reliability of an assessment, each with its own advantages and disadvantages, and each requiring a different level of confidence with statistics and spreadsheets. The four main methods used are:

  • Test–retest reliability
  • Parallel forms reliability
  • Split-half reliability
  • Internal-consistency (Cronbach’s alpha)

Test-retest reliability

This approach involves setting the same assessment with the same students at different points in time, such as at the beginning and end of a term. The correlation between the results each student gets on each sitting of the same test should provide a reliability coefficient. There are two significant problems with this approach, however. Firstly, there is the problem of sensitivity to instruction. It is likely that students would have learnt something between the first and second administrations of the test, which might invalidate the inferences that can be drawn and threaten any attempt to work out a reliability score.

The other, arguably more significant, issue relates to levels of student motivation. I am guessing that most students would not really welcome sitting the same test on two separate occasions, particularly if the second assessment is soon after the first, which would need to happen in order to reduce threats to validity and reliability. Any changes to how students approach the second assessment will considerably affect the reliability score and probably make the exercise a complete waste of time.

Parallel forms reliability

One way round these problems is to design a parallel forms assessment. This is basically where one assessment is made up of two equal parts (parallel A and parallel B), with the second half (parallel B) performing the function of the second assessment in the test-retest approach outlined above. As with test-retest, correlations between student results from the parallel A and parallel B parts of the test can provide a reliability figure. The problem now is that, in reality, it is difficult to create two sections of an assessment of equal challenge. As we have considered, challenge lies in the choice of a question, and even the very best assessment designers don’t really know how difficult an item is until real students have actually tried answering it.

Split-half reliability

Perhaps the best way to work out the reliability of a class assessment, and the one favoured by Dylan Wiliam, is the split-half reliability model. Rather than waste time attempting the almost impossible – creating two forms of the same assessment of equal difficulty – this approach skirts round the problem by dividing a single assessment in half and treating each half as a separate test.

There are different ways the assessment can be divided in half, such as a straight split down the middle or separating out the odd and even numbered items. Whatever method is used, the reliability coefficient is worked out in the same way: by correlating the scores on the two parts and then taking account of the fact that each part is only half the test by applying the Spearman-Brown formula**. This provides a reasonable estimate of the reliability of an assessment, which is probably good enough for school-based assessment.

The formula for applying Spearman-Brown in Excel is a little beyond the scope of my understanding. Fortunately, there are a lot of tools available on the Internet that make it possible to work out reliability scores using Spearman-Brown’s formula. The process involves downloading a spreadsheet and then inputting your test scores into cells containing pre-programmed formulas. The best of these is, unsurprisingly, from Dylan Wiliam himself, which is available to download here. Rather handily, Dylan also includes some super clear instructions on how to use the tool. Whilst there are other spreadsheets available elsewhere that perform this and other functions, they are not as clean and intuitive as this one.

Internal-consistency reliability (Cronbach’s alpha)


At this point, I should point out that I am fast approaching the limits of my understanding in relation to assessment, particularly with regard to the use of statistics. Nevertheless, I think I have managed to get my head around internal-consistency reliability enough to use some of the tools available to work out the reliability of an assessment using Cronbach’s alpha. In statistics, Cronbach’s alpha is used as an estimate of the reliability of a psychometric test. It provides an estimate of internal-consistency reliability and helps to show whether or not all the items in an assessment are assessing the same construct. Unlike the easier to use – and understand – split-half reliability, Cronbach’s alpha looks at the average value of all possible split-half estimates, rather than just the one split that was made.

It uses this formula:

α = (k ⁄ (k − 1)) × (1 − Σσᵢ² ⁄ σₓ²)

where k is the number of items, σᵢ² is the variance of each individual item and σₓ² is the variance of students’ total scores.

If, like most people, you find this formula intimidating and unfathomable, seek out one of the many online spreadsheets set up with Cronbach’s alpha, ready for you to enter your own assessment data into the cells. Probably the most straightforward of these can be found here. It is produced by Professor Glenn Fulcher and allows you to enter assessment results for any items with a mark of up to 7. The accompanying instructions are quite easy for the layman to follow.
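For those who would rather see where the numbers come from than take the formula on trust, here is a rough Python sketch of the calculation: the sum of the individual item variances is compared with the variance of students’ total scores, scaled by the number of items. The marks are invented purely for illustration.

```python
# Illustrative sketch only: Cronbach's alpha for invented item-level marks.

def variance(xs):
    # Population variance (dividing by n); sample variance works too,
    # provided the same convention is used throughout
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / n

def cronbach_alpha(scores):
    k = len(scores[0])                 # number of items
    items = list(zip(*scores))         # one tuple of marks per item
    item_var_sum = sum(variance(item) for item in items)
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

scores = [  # each row: one student's marks on a four-item test (invented)
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [1, 2, 2, 1],
    [4, 4, 5, 4],
    [2, 3, 2, 2],
]
alpha = cronbach_alpha(scores)
print(round(alpha, 2))
```

If the items all pull in the same direction – students who do well on one item tend to do well on the others – the total-score variance dwarfs the summed item variances and alpha approaches 1.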

Make sure everyone understands the limitations of assessment

Given that no school assessment which measures the things we value or involves any element of human judgement is ever likely to be completely reliable, the time has probably come to be more honest about this with the people most impacted by summative tests, namely the students and their parents. The problem is that in reality this is incredibly hard to do. As Rob Coe jokes, can anyone imagine a teacher telling a parent that their child’s progress, say an old NC level 5, is accurate to a degree of plus or minus one level? Most teachers probably haven’t even heard of the standard error of measurement, let alone understand its impact on assessment practice enough to explain it to a bewildered parent.
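To make the point concrete, the sketch below shows the basic arithmetic behind the standard error of measurement. The standard deviation, reliability figure and mark are all invented; the point is simply that a respectable-sounding reliability of 0.8 still turns a single mark into a surprisingly wide range.

```python
# Illustrative sketch only: standard error of measurement, invented figures.
sd = 12            # standard deviation of scores on the test (assumed)
reliability = 0.8  # reliability coefficient of the test (assumed)

sem = sd * (1 - reliability) ** 0.5   # standard error of measurement

score = 63                            # one student's observed mark (invented)
# Roughly 95% of the time, the 'true' score lies within about 2 SEMs
low, high = score - 2 * sem, score + 2 * sem
print(f"A mark of {score} is best read as somewhere between {low:.0f} and {high:.0f}")
```

This is exactly the kind of interval that could sit alongside a reported grade, rather than presenting a single number as gospel.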

The US education system seems rather more advanced than ours in relation to reporting issues of error and uncertainty in assessment to parents. This is a consequence of the Standards for Educational and Psychological Testing (1999). These lay out the extent to which measurement uncertainty must be reported to stakeholders, which US courts follow in their rulings and test administrators account for in their supplementary technical guides.

A 2010 report commissioned by Ofqual into the way assessment agencies in the US report uncertainty information when publishing the results of their assessments showed an impressive degree of transparency in relation to sharing issues of test score reliability. Whilst the report notes that parents are not always directly given information about assessment error and uncertainty, the information is always readily available to those who want it – providing, of course, they can understand it!

‘Whether in numbers, graphics, or words, and whether on score reports, in interpretive guidelines (sometimes, the concept is explained in an “interpretive guide for parents”), or in technical manuals, the concept of score imprecision is communicated. For tests with items scored subjectively, such as written answers, it is common, too, to report some measure of inter-rater reliability in a technical manual.’

To my knowledge we don’t really have anything like this level of transparency in our system, but I think there are a number of things we can probably learn from the US about how to be smarter in sharing with students and parents the complexity of assessment and the inferences it can and cannot provide us with. I am not suggesting that the example below is realistic for an individual school to replicate, but I like the way that it at least signals the scope for grade variation by including confidence intervals in each of its assessment scores.

[Image: example US score report with confidence intervals attached to each assessment score]

There is clearly much we need to do to educate ourselves about assessment, and then we may be better placed to educate those who are most affected by the tests that we set.

The work starts now.

*  The answer to the question is: John, where Paul had had ‘had’, had had ‘had had’. ‘Had had’ had had a clearer meaning.

** The Spearman–Brown prediction formula, also known as the Spearman–Brown prophecy formula, is a formula relating psychometric reliability to test length and used by psychometricians to predict the reliability of a test after changing the test length.

 

Visual Learning: using graphics to teach complex literary terms

[Image: ‘Three Types of Learners in eLearning’ graphic]

I have always tried to pay attention to the way that I present material to my students. Don’t get me wrong, I am not interested in style over substance, and I certainly don’t spend hours labouring away over every resource that I use in class. If there is a quicker, equally effective way of teaching something, then I will take it. I’m not a masochist.

Most of my resources now are paper-copy quizzes for retrieval practice and elaboration, many of which have proved very effective at A level. I try to use the board as much as possible, whether to post the all-important learning objective, to model writing, to record the unfolding of the lesson to ease the pressure on working memories, or as a means of explaining tricky ideas or concepts more fully, often with an accompanying visual.

The problem is that I am a terrible artist. Unlike the wonderfully talented Oliver Caviglioli, whose illustrations and generosity are first class, my drawings are sad and pathetic. I would love to be a great illustrator, but I can barely write legibly, let alone draw anything beyond a stick man! I remember a couple of years ago I drew a picture of a horse for a poetry lesson, and the final product looked more like a pregnant camel with IBS than the thoroughbred I’d intended.

Fortunately, in the age of the Internet and Powerpoint (sorry, Jo), I have some pretty decent tools at my disposal to help me make up for my artistic deficiency. As I have become increasingly aware of the power of combining words and images in boosting student learning, I have spent more of my time thinking about how images, in particular graphical representations, can be used to help with my teaching, such as in my explanations of complex literary concepts.

One of these troublesome concepts, which seems to crop up whenever I teach Christina Rossetti’s ‘Goblin Market’, is allegory. ‘Goblin Market’ is a narrative poem with a familiar story: a young girl tempted into sin; her subsequent loss of innocence before salvation through sacrifice. Most people reading it will get the allegory of the story of the Fall of Man. There are one or two differences in the poem – there is no Adam, only horny and grotesque goblin men, and the saviour is a woman, not the Son of God – but the overarching parallels are pretty clear.

The problem comes when explaining the concept of allegory in and of itself – in other words, outside the context of the specific example. Students really struggle with it. No matter how hard I try to explain allegory clearly, with examples and analogies aplenty, students just don’t seem to fully get it. Now, you might be tempted to say that I should look to hone my explanation. Trust me on this one: I have honed it to within an inch of its life. There is simply no room for any more honing.

So, this year I thought I’d take a different tack and invest a bit of time producing a graphical representation to sit alongside my verbal explanation. I don’t have any hard evidence to show that what I have done has been any more successful than usual. It seems to have made a difference, with more students being able to explain the concept than before, but then again this may well be a case of confirmation bias. Or brighter students. Or chance.

As you can see below, the slide I have used in the past to explain allegory is pretty contemptible – an overarching definition which I expand and exemplify, with a bulleted breakdown of the two main types, political allegory and the allegory of ideas. There are even a couple of token images thrown in, which I am not really convinced add any real value.

[Image: original slide explaining allegory]

My next effort is, I think, a real improvement. The graphical representations make the points of comparison in an allegory between Text A (‘Goblin Market’) and Text B (the story of the Fall of Man) much clearer, and they have the added advantage of highlighting where the biblical comparison breaks down, in that some pretty big parts of the Bible story are missing from Rossetti’s poem, such as the presence of God.

[Image: revised slide mapping ‘Goblin Market’ against the story of the Fall of Man]

I then attempted to flesh out this initial explanation with an amended version of my original effort. This time I added a relational dimension to my diagram which enabled me to visualise the difference between allegory and other related literary concepts, such as fable and parable. The trouble was that whilst I had made some visual links between genres clear, I had lost the power of the previous graphic to embody the workings of allegory itself.

[Image: slide relating allegory to other genres, such as fable and parable]

My final version therefore combines the best elements of my previous attempts, including the graphical embodiment of the concept of allegory, the relational links to other genres and better images to exemplify the different forms of allegory. The visual cues and graphical representations, along with my honed explanation, seem to have been much more successful in shifting my students’ understanding of allegory. At least, I hope that is the case.

[Image: final slide combining the graphical embodiment of allegory with links to related genres]

Allegory is not the only literary concept I have attempted to represent graphically this way. I hope to blog about others in the future, so watch this space.

Thanks for reading.

Principles of Great Assessment #2 Validity and Fairness


This is the second of a three-part series on the principles of great assessment. In my last post I focused on some principles of assessment design. This post outlines the principles that relate to ideas of validity and fairness.* As I have repeatedly stressed, I do not consider myself an expert in the field of assessment, so I am more than happy to accept constructive feedback to help me learn and to improve upon the understanding of assessment that we have already developed as a school. My hope is that these posts will help others to learn a bit more about assessment, and that the assessments students sit will be as purposeful and supportive of their learning as possible.

So, here are my principles of great assessment 6-10.

6. Regularly review assessments in light of student responses

Validity in assessment is extremely important. For Daniel Koretz it is ‘the single most important criterion for evaluating achievement testing.’ Often when teachers talk about an assessment being valid or invalid, they are using the term incorrectly. In assessment, validity means something very different from what it means in everyday language. Validity is not a property of a test, but rather of the inferences that an assessment is designed to produce. As Lee Cronbach observes, ‘One validates not a test but an interpretation of data arising from a specified procedure’ (Cronbach, 1971).

There is therefore no such thing as a valid or invalid assessment. A maths assessment with a high reading age might be considered to provide valid inferences for students with a high reading age, but invalid inferences for students with low reading ages. The same test can therefore provide both valid and invalid inferences depending on its intended purpose, which links back to the second assessment principle: the purpose of the assessment must be set and agreed from the outset. Validity is thus specific to particular uses in particular contexts and is not an ‘all or nothing’ judgement but rather a matter of degree and application.

If you understand that validity applies to the inferences that assessments provide, then you should be able to appreciate why it is so important to make sure that an assessment gives inferences about student achievement that are as valid as possible, particularly when there are significant consequences attached for the students taking it, like attainment grouping. There are two main threats to achieving this validity: construct under-representation and construct irrelevance. Construct under-representation refers to when a measure fails to capture important aspects of the construct, whilst construct irrelevance refers to when a measure is influenced by things other than the construct, e.g. the high reading age in a maths assessment mentioned above.

There are a number of practical steps that teachers can take to help reduce these threats to validity and, in turn, to increase the validity of the inferences provided by their assessments. Some are fairly obvious and can be implemented with little difficulty, whilst others require a bit more technical know-how and/or a well-designed systematic approach that provides teachers with the time and space needed to design and review their assessments on a regular basis.

Here are some practical steps educators can take:

Review assessment items collaboratively before a new assessment is sat

Badly constructed assessment items create noise and can lead to students guessing the answer. Where possible, it is therefore worth spending some time and effort upfront reviewing items in a forthcoming summative assessment before they go live, so that any glaring errors in the wording can be amended and any unnecessary information removed. Aside from making the assessment more likely to generate valid inferences, such an approach has the added advantage of training those less confident in assessment design in some of the ways of making assessments better and more fit for purpose. In an ideal world, an important assessment would be piloted first to provide some indication of issues with items, and of the likely spread of results across an ability profile. This will not always be possible.

Check questions for cues and contextual nudges

Another closely linked problem, and another potential threat to validity, is flawed question phrasing that inadvertently reveals the answer, or provides students with enough contextual cueing to narrow down their responses to a particular semantic or grammatical fit. In the example item from a PE assessment below, for instance, the phrasing of the question – namely the grammatical construction of the words and phrases around the gaps – makes ‘anaerobic’ and ‘aerobic’ the more likely candidates for the correct answer. They are adjectives which precede nouns, whilst the rest of the options are all nouns and would sound odd to a native speaker – a noun followed by a noun. A student might select ‘anaerobic’ and ‘aerobic’ not because they necessarily know the correct answer, but because the words sound correct according to the syntactical cues provided. This is a threat to validity in that the inference is perhaps more about grammatical knowledge than understanding of bodily processes.

Example: The PE department have designed an end of unit assessment to check students’ understanding of respiratory systems. It includes the following types of item.

Task: use two of the following words to complete the passage below

Anaerobic, Energy, Circulation, Metabolism, Aerobic 

When the body is at rest this is ______ respiration. As you exercise you breathe harder and deeper and the heart beats faster to get oxygen to the muscles. When exercising very hard, the heart cannot get enough oxygen to the muscles. Respiration becomes _______.

Interrogate questions for construct irrelevance

If the purpose of an assessment has been clearly established from the outset and that assessment has been clearly aligned to the constructs within the curriculum, then a group of subject professionals working together should be able to identify items where things other than the construct are being assessed. Obvious examples are high reading ages that get in the way of assessments of mathematical or scientific ability, but sometimes the problem is harder to detect, as with the example below. To some, this item might seem fairly innocuous, but on closer inspection it becomes clear that it is not assessing vocabulary knowledge as purported, but rather spelling ability. Whilst it may be desirable for students to spell words correctly, inferences about word knowledge would not be possible from an assessment with these kinds of items in it.

Example: The English department designs an assessment to measure students’ vocabulary skills. The assessment consists of 40 items like the following:

Task: In all of the ________________ of packing into a new house, Sandra forgot about washing the baby.

  1. Excitement
  2. Excetmint
  3. Excitemant
  4. Excitmint

7. Standardise assessments that lead to important decisions

Teachers generally understand the importance of making sure that students sit final examinations in an exam hall under the same conditions as everyone else taking the test. Mock examinations tend to replicate these conditions, because teachers and school leaders want the inferences provided by them to be as valid and fair as possible. For all manner of reasons, though, this insistence on standardised conditions for test takers is less rigorously adhered to lower down the school, even though some of the decisions based upon such tests in Years 7 and 8 arguably carry much more significance for students than any terminal examination.

I know that I have been guilty of not properly understanding the importance of standardising test conditions. On more than one occasion I have set an end of unit or term assessment as a cover activity, thinking that it was ideal work because it would take students the whole lesson to complete and they would need to work in silence. I hadn’t appreciated that assessment is a bit more complicated than that, even for something like an end of unit test. I hadn’t considered, for instance, that it mattered whether students got the full hour, or more likely 50 minutes if the test was set by a cover supervisor who had to spend valuable time settling the class. I hadn’t taken on board that it would make a difference if my class sat the assessment in the afternoon, and the class next door completed theirs bright and early in the morning.

It may well be that my students would have scored exactly the same whether or not I was present, whether they sat the test in the morning or in the afternoon, or whether they had 50 minutes or the full hour. The point is that I could not be sure, and that if one or more of my students would have scored significantly higher (or lower) under different circumstances, then their results would have provided invalid inferences about their understanding. If they were then placed in a higher or lower group as a result, or I reported home to their parents some erroneous information about their test scores, which possibly affected their motivation or self-efficacy, then you could suggest that I had acted unethically.

8. Important decisions are made on the basis of more than one assessment

Imagine you are looking to recruit a new head of science. Now imagine the even more unlikely scenario that you have received a strong field of applicants – which, I appreciate, in the current recruitment climate is a bit of a stretch of the imagination. With such a strong field for such an important post, a school would be unlikely to decide whom to appoint based upon the inferences provided by one single measure, such as an application letter, a taught lesson or an interview. More likely, they would triangulate all these different inferences about each candidate’s suitability for the role when making their decision, and even then they would be crossing their fingers that they had made the right choice.

A similar principle is at work when making important decisions on the back of student assessment results, such as which group to place a student in the following term, which individuals need additional support, or how much progress, if any, to report home to parents. In each of these cases, as with the head of science example, it would be wise to draw upon multiple inferences in order to make a more informed decision. This is not to advocate an exponential increase in the number of tests students sit, but rather to recognise that when the stakes are high, it is important to make sure the information we use is as valid as possible. Cross-referencing assessments is one way of achieving this, particularly given the practical difficulties of standardising assessments previously discussed.

9. Timing of assessment is determined by purpose and professional judgement

The purpose of an assessment informs its timing. Whilst this makes perfect sense in the abstract, in practice there are many challenges to making it happen. In Principled Assessment Design, Dylan Wiliam notes that it is relatively straightforward to create assessments which are highly sensitive to instruction if what is taught is not hard to teach and learn. For example, if all I wanted to teach my students in English was vocabulary, and I set up a test that assessed them on the 20 or so words I had recently taught them, it would be highly likely that the test would show rapid improvements in their understanding of those words. But as we all know, teaching is about much more than learning a few words. It involves complex cognitive processes and vast webs of interconnected knowledge, all of which take a considerable amount of time to teach, and in turn to assess.


It seems that the distinction between learning and performance is becoming increasingly well understood, though in terms of curriculum and assessment its widespread application to the classroom is taking longer to take hold. The reality for many established schools is that it is difficult to construct a coherent curriculum, assessment and pedagogical model across a whole school that embraces the full implications of the difference between learning and performance. It is hard enough to get some colleagues to fully appreciate the distinction, and its many nuances, so indoctrinated are they by years of the wrong kind of impetus. Added to this, whilst there is general agreement that assessing performance can be unhelpful and misleading, there is no real consensus on the optimal time to assess for learning. We know that assessing soon after teaching is flawed, but not exactly when to capture longer-term learning. Compromise is probably inevitable.

What all this means in practical terms is that schools have to work within their localised constraints, including issues of timetabling, levels of understanding amongst staff and, crucially, the time and resources to enact the theory once it is known and understood. Teacher workload must also be taken into account when deciding upon the timing of assessments: recognising certain pinch points in the year and building a coherent assessment timetable that respects the division between learning and performance, builds in opportunities to respond to (perceived) gaps in understanding and spreads out the emotional and physical demands on staff and students. Not easy, at all.

10. Identify the range of evidence required to support inferences about achievement

Tim Oates’ oft-quoted advice to avoid assessing ‘everything that moves, just the key concepts’ is important to bear in mind, not just for those responsible for assessment, but also for those who design the curricula with which those assessments are aligned. Despite the freedoms afforded by the liberation from levels and the greater autonomy possible with academy status, many of us have still found it hard to narrow down what we teach to what is manageable and most important. We find it difficult in practice to sacrifice breadth in the interests of depth, particularly where we feel passionately that so much is important for students to learn. I know it has taken several years for our curriculum leaders to truly reconcile themselves to the need to strip out some content and focus on teaching the most important material to mastery.

Once these ‘key concepts’ have been isolated and agreed, the next step is to make sure that any assessments cover the breadth and depth required to gain valid inferences about student achievement of them. I think the diagram below, which I used in my previous blog, is helpful in illustrating how assessment designers should be guided by both the types of knowledge and skills that exist within the construct (the vertical axis) and the levels of achievement across each component, i.e. the continuum (the horizontal axis). This will likely look very different in some subjects, but it nevertheless provides a useful conceptual framework for thinking about the breadth and depth of items required to support valid inferences about levels of attainment of the key concepts.

[Image: diagram mapping types of knowledge and skills against levels of achievement]

My next post, which I must admit I am dreading writing and releasing for public consumption, will focus on trying to articulate a set of principles around the very thorny and complicated area of assessment reliability. I think I am going to need a couple of weeks or so to make sure that I do it justice!

Thanks for reading!

 

* I am aware the numbering of the principles on the image does not match the numbering in my post. That’s because the image is a draft document.