Two things worth noting that the article doesn't go into...<p>1)
<i>"The year-on-year rise in proportion of students achieving A or A+ grades was much higher at independent schools than state comprehensives"</i><p><a href="https://www.theguardian.com/education/2020/aug/13/england-a-level-downgrades-hit-pupils-from-disadvantaged-areas-hardest" rel="nofollow">https://www.theguardian.com/education/2020/aug/13/england-a-...</a><p>Essentially, students lucky enough to be in private education were more likely to see their grades go up or stay the same, while students in free state education were more likely to see their grades go down.<p>This has been explained away as a byproduct of a sensible algorithm, but considering the government's track record and private-school backgrounds (20 of the UK's Prime Ministers have been educated at Eton, arguably the country's most exclusive private school) it's hard not to suspect this was intentional and class-motivated.<p>2)
<i>"Appears that Ofqual's algorithm caused today's A-level chaos. Ofqual chair Roger Taylor, also chairs the Centre for Data Ethics & Innovation (CDEI). Cummings' fave AI consultants - Faculty, have some juicy contracts with CDEI. And Faculty's COO Richard Sargeant is on CDEI board"</i><p><a href="https://twitter.com/milesking10/status/1293886007771893762?s=21" rel="nofollow">https://twitter.com/milesking10/status/1293886007771893762?s...</a><p>(Ofqual is the regulator that oversees exams and is responsible for this situation; it reports to Gavin Williamson, the government's Education Secretary. Cummings is a controversial special adviser to the Prime Minister, and Faculty is a firm he used when he was leading one of the two primary pro-Brexit campaigns before the referendum. Subjective statement: Cummings is a vile, evil man.)
There's no good answer here: the unfairness is an inevitable side effect of lockdown, along with people dying due to stopped cancer treatments and a host of other issues.<p>Using teachers' predicted grades is stupid: teachers use predictions in whatever way they think will best motivate their students. If a student is lazy, they'll under-predict to shock them into revising more. If a student is not very confident, they'll over-predict to encourage them.<p>Unless the students are actually allowed to take an exam, all methods are unfair. Using mock results is at least tangentially related to the students' abilities.
"In Scotland the accusations of unfairness prompted a switch to using teachers' predicted grades. These predictions were collected in England too - but were discounted as being the deciding factor, because they were so generous that it would have meant a huge increase in top grades, up to 38%. There were also doubts about the consistency and fairness of predictions and whether the cautious and realistic could have lost out to the ambitiously optimistic."<p>As always, the question when using an algorithm or test is, 'compared to what?'
This was so foreseeable when they announced the plans.<p>One possible solution could have been to run the same statistical estimation game and then offer each student a choice: take your estimated grades, or sit an exam. If you take the exam, you get whatever you get there; it might come out lower or higher than the estimate. This would have ensured that only those who are confident in themselves, and whose estimated grades fall short of their goals, would go for the tests. And since these are outliers already, there would be far fewer of them than the general population, making it easier to organise socially distanced exams for them.<p>Of course nothing is easy at scale and many details would need to be ironed out, but that's what we keep the Ministry of Education for.
I've been listening to talk radio here in the UK, and it's a constant stream of disappointed kids/teachers/parents phoning in. Some of the stories are horrendous: people expecting AAA getting CCD and similar. It seems unlikely that three teachers would all predict you top marks and all be wrong, but that's what you end up with on results day.<p>But about the algorithm, it seems to be an impossible task. You cannot satisfy the requirement that the gross statistics stay roughly the same, i.e. little grade inflation on average, while also identifying talented individuals.<p>The outline of the algorithm seems to be that you take the gross stats for earlier years from a school, then take the teachers' guesses about who will do well, and fit a distribution of grades around that. This is probably what most people would do if forced into such an exercise. And you can add wrinkles to it, like a bit of grade inflation, adjustment for overconfident predictions, and so on.<p>But you can't get the piece of information that you really need, which is a mix of signal and noise about the true distribution of the kids' abilities. If you have a talented year in a low-performing school, you'll never know. And vice versa.<p>The year before I finished the IB, two kids got the top 45/45. That's normally achieved by something like 1 in 400 kids worldwide, and it was a class of maybe 30 kids. That would never have been awarded without examinations.<p>In addition, the other presumed goal, of having the grades as close to the predictions as possible, gives the algo designer an incentive well known from machine learning: guess safely. Squish the distribution at the edges; that way you're not off by much. This is bad for people who should be getting As, because many will get Bs. And it's good at the other end, though I don't know where the lump ends (C or D?).
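That outline can be sketched in a few lines. The following is a hypothetical toy model, not Ofqual's published method; the data, names, and midpoint-percentile mapping are all invented for illustration. Each student's teacher-assigned rank is mapped onto the school's historical grade distribution:

```python
# Toy sketch of the outline above: map teacher-assigned ranks onto a
# school's historical grade distribution. Hypothetical data and function
# names; this is NOT Ofqual's actual model.

def assign_grades(ranked_students, historical_counts):
    """ranked_students: student names, best first (the teacher's ranking).
    historical_counts: {grade: count} from past cohorts, listed in
    descending grade order (e.g. "A" before "B")."""
    total = sum(historical_counts.values())
    # Cumulative share of past students at or above each grade.
    boundaries = []
    cumulative = 0.0
    for grade, count in historical_counts.items():
        cumulative += count / total
        boundaries.append((cumulative, grade))
    grades = {}
    for i, student in enumerate(ranked_students):
        percentile = (i + 0.5) / len(ranked_students)  # midpoint rank
        for upper, grade in boundaries:
            if percentile <= upper:
                grades[student] = grade
                break
    return grades

# A school that historically produced only two As out of twenty results:
history = {"A": 2, "B": 5, "C": 8, "D": 4, "E": 1}
ranking = ["Ana", "Ben", "Cal", "Dee", "Eve"]
print(assign_grades(ranking, history))
# → {'Ana': 'A', 'Ben': 'B', 'Cal': 'C', 'Dee': 'C', 'Eve': 'D'}
```

Note how the historical distribution acts as a hard cap: even a genuinely exceptional cohort can never earn more As than the school's past results allow, which is exactly the "talented year in a low-performing school" failure mode described above.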
Couldn't the UK just have held a regular exam, with the social distancing measures implemented? The exams were held in many European countries with these measures, nothing bad happened, and the students received a fair examination for university.
An obvious point not discussed is that the exams system was already biased to begin with. Previous years were already unfair as great kids at bad schools have much fewer chances than less great kids for which the parents can pay a great school. England especially is awful at achieving a fair system (that gives everyone a fair chance at good grades) because schools are so stratified - by design of funding rules, etc.
Are we to believe that teacher's predictions are less fraught with bias and inequity?<p>(<a href="https://www.bbc.com/news/education-53776938" rel="nofollow">https://www.bbc.com/news/education-53776938</a>)
Can someone explain to me the significance of A-level grades? If I understand Wikipedia correctly, students apply to universities using predicted grades anyway, and receive offers conditional on the final grades. How about just making all offers unconditional? And if you are not going to university, does anyone actually care about your A-level grades?
“There have been two key pieces of information used to produce estimated grades: how students have been ranked in ability and how well their school or college has performed in exams in recent years.”<p>“there was no direct connection between an individual's prior achievement and their predicted grade.”<p>Who wrote this algorithm and can we see it?
<a href="https://www.ucas.com/file/292726/download?token=wswAnzge" rel="nofollow">https://www.ucas.com/file/292726/download?token=wswAnzge</a><p>"In 2019, 21% (31,220) of accepted 18 year old applicants met or exceeded their predicted grades, a decrease of 3 percentage points. In addition, 43.2% of accepted applicants had a difference of three or more A level grades – an increase of 5 percentage points (7,190 applicants more) since 2018."<p>Teacher predicted grades are almost useless.<p>Now Scotland has caved to political pressure and wants to inflate grades<p><a href="https://www.bbc.co.uk/news/uk-scotland-scotland-politics-53723734" rel="nofollow">https://www.bbc.co.uk/news/uk-scotland-scotland-politics-537...</a><p>"However, these grades, taken overall, would represent a significant improvement on previous years - including a jump of 20 percentage points in the pass rate for pupils from the most deprived areas."<p>"If the results had purely been based on the estimates from teachers, pass rates at grades A-C would have increased by 10.4 percentage points for National 5, by 14 percentage points for Higher and by 13.4 percentage points for Advanced Higher.<p>These estimated results would have led to a higher annual change than had ever been seen before in Scottish exam results if they had not been moderated by the SQA."<p><a href="https://www.bbc.co.uk/news/uk-scotland-53636296" rel="nofollow">https://www.bbc.co.uk/news/uk-scotland-53636296</a><p>A really difficult situation when what you have to work from is already inaccurate.
You can curve the scores on a meaningless exam and give the impression that some have learned and some have not, or I guess, you can skip the exam entirely and go to the end product.<p>Pirsig would have something to say here.
While these events are terrible, I am pleased that the decisions made by Ofqual are open to judicial review—and that the Good Law Project are considering issuing a claim to force Ofqual to fix this mess.
So is "controversial" the same as unfair or bad? Journalists often use "controversial" to discredit something that is not in accordance with their ideology. Controversial politicians (which politician isn't controversial?), controversial university professors, controversial speeches, controversial thoughts...
A big pile of 'meh'. They are trying to make a statistical guess based on priors because the pandemic prevented the exams from being sat. They are going to get things wrong. If they have an appeals process for results that are completely out of line (e.g. a passing student getting a failing grade), then that's the best you can hope for given the circumstances.<p>In the big picture, the good students are going to get good marks, average students are going to get average marks, and bad students will get bad marks. Again, it is what it is.