2020 - an examiner’s view from the sidelines

It is difficult to know where to start with the fallout from the A level results and the presumably similar impending car crash at GCSE (although six more changes may have happened by the time you read this). Having quietly fumed at some of my own students' results, and at how our large college in a poorer postcode seemed to have been treated, it didn't seem worth adding to the noise. But as a senior examiner there is something that may be worth saying, particularly to counter the notion that the system is always unfair, and the assumption that grades are not normally decided by some sort of algorithm.


Yes - there is an Awarding Algorithm

One common misconception is that examiners give grades. They do not. Examiners give each question, and therefore each paper, a mark. The examiner's job is to follow the standard they have been given and mark each script; in effect, examiners rank order the candidates. Only once this is done are grades awarded, and this is the statistical bit. On a large paper we can be fairly confident that cohorts will perform much the same from year to year. On my paper, for instance, there are over 8,000 candidates each year. If, say, 15% got an A last year, we would expect the A boundary to be set so that roughly the same percentage get an A this year. This keeps the standards comparable. That is fine when we are dealing with large numbers, but on smaller papers the statistics are less reliable. Some years ago I ran a small paper that only 300-400 students nationally sat; there we relied more on professional judgement in setting boundaries, and it was pointless to worry about percentages. So yes, there is always a grading algorithm, and yes, smaller numbers do allow more discretion to depart from it.
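The large-paper case above can be sketched as a simple calculation. This is a hypothetical illustration only, not the actual awarding process: the function name, the toy cohort and the idea of simply picking the mark held by the top 15% of the rank order are all my assumptions here. Real awarding combines statistics like this with examiner judgement on sample scripts at candidate boundaries.

```python
# Sketch: set a grade boundary so that roughly the same percentage of the
# cohort gets the grade as last year. Hypothetical data and function name.

def suggest_boundary(marks: list[int], target_pct: float) -> int:
    """Return the mark at which roughly target_pct% of candidates
    score at or above it (a candidate grade boundary)."""
    ranked = sorted(marks, reverse=True)  # rank order the cohort, best first
    cutoff_index = round(len(ranked) * target_pct / 100) - 1
    cutoff_index = max(0, min(cutoff_index, len(ranked) - 1))
    return ranked[cutoff_index]  # lowest mark still inside the top target_pct

# Toy cohort of 20 candidates; aim for ~15% at grade A (the top 3 scripts).
cohort = [82, 78, 75, 71, 70, 68, 66, 65, 63, 61,
          60, 58, 55, 53, 50, 48, 45, 41, 38, 30]
print(suggest_boundary(cohort, 15))  # prints 75, the 3rd-ranked script's mark
```

With 8,000+ candidates the percentage at each boundary is stable enough for this to work; with a 300-400 candidate paper, one strong or weak year shifts the percentages so much that judgement has to take over.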


Difference - Examiners mark 'blind' and produce real data at an individual level

However, unlike the computer-generated results of 2020, the grades this algorithm awards are not based on one or two limited pieces of data. Each candidate will have sat three exams made up of various questions, and during the course of their A levels a candidate's work may have been divided up and seen by many examiners - certainly by at least three independently standardised examiners.

Even more importantly, examiners do this 'blind': there is absolutely no information about the candidate on the script as seen. They do not know the school, or the candidate's gender, ethnicity or anything else. As teachers it has been hard for us to rank order and grade students precisely because we know them; we may be more prone to giving the benefit of the doubt than we care to admit. By operating on a centre-by-centre basis, the Ofqual algorithm made judgements about the quality of institutions, not individuals. And by allowing very small centres to keep their teacher-assessed grades while insisting that overall standards must remain largely the same, it was inevitable that larger centres (e.g. sixth form colleges and FE colleges, which are more likely to serve larger, poorer areas) would carry the weight of the adjustment. It may well be that 70-80% of grades were more or less right, but that is no consolation if you are in the downgraded 20%.


Difference - Standardisation normally happens first with quality control throughout

The government has quite rightly said that there needs to be some standardisation or moderation of teacher grades. For examiners, however, standardisation happens at the start of the process: we mark some scripts together, and the senior team explain the criteria before checking that examiners have understood them. In 2020, teachers in effect marked all their scripts without any guidance, and only then was an attempt made to standardise. The horse had bolted before the stable door was shut. I wonder whether schools and colleges could instead have submitted data and had some initial feedback, almost as a dialogue, before final grades were agreed; exam boards and Ofqual had two months between submission and publication of results.

In addition, for examiners there is quality control throughout a normal examination session, with random sampling of marking. At the end of the process there is an agreed procedure for students to challenge results and have papers reviewed by a senior examiner; where a university place is at stake, the candidate's exam is reviewed within two weeks of results day. Normally by now, three days after A level results, I would already have done some reviews. This year we still do not know what the process will be.


And in 2021?

I am not for one minute suggesting that the examination system is perfect. It isn't, and much of my work each summer involves spotting and trying to sort out the things that are going wrong, hopefully before results day but occasionally afterwards. Whilst the senior team and I do not enjoy contacting examiners to tell them they can no longer mark because they are significantly out of tolerance, we do this for the sake of the overall standard and to ensure that as many students as possible get the right marks.


We hope to be back in action as examiners next summer. We will be human and will make some mistakes but I think we can guarantee that there will be significantly less chaos than this year.
