Andrew Laird argues that the recent exam debacle shouldn’t cause us to turn our back on the positive power of data analytics. A version of this article was originally published in the MJ on 4th September 2020.
From the ‘mutant algorithm’ which caused so much controversy over exam results – to the ‘algorithm’ in the new Christopher Nolan film, Tenet, which may or may not (no spoiler) have destroyed the entire world!
It has definitely been a tough few weeks for algorithms.
Let’s look at the exam results algorithm. This was not a ‘mutant’ algorithm which independently devised a dastardly plot to rob young people of their exam grades. It was a relatively simple equation which did exactly what its human designers told it to do. It took into account pupils’ historic performance, but it also took into account the historical performance of the school itself, which had the effect of pushing down the grades of talented pupils in disadvantaged schools and inflating the grades of poorly performing pupils in high-achieving schools. It probably did a pretty good job of predicting the overall spread of grades across the country – but it was ferociously unfair at an individual level.
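To make that mechanism concrete, here is a deliberately over-simplified sketch in Python. It is not Ofqual’s actual standardisation model – the weighting and the numbers are invented – but it shows how blending an individual’s prior attainment with their school’s historic average drags outliers towards the school mean.

```python
# A deliberately simplified illustration of why school-level standardisation
# is unfair to individuals. The weights and numbers below are invented; this
# is not Ofqual's actual model.

def adjusted_grade(pupil_prior: float, school_mean: float, weight: float = 0.6) -> float:
    """Blend a pupil's own prior attainment with their school's historic
    average, giving `weight` to the school. A higher weight means more pull
    towards the school's past results."""
    return weight * school_mean + (1 - weight) * pupil_prior

# A talented pupil (predicted grade 8) at a historically low-scoring school...
print(adjusted_grade(pupil_prior=8.0, school_mean=4.5))   # ~5.9 – pulled down

# ...and a weaker pupil (predicted grade 4) at a high-scoring school.
print(adjusted_grade(pupil_prior=4.0, school_mean=7.5))   # ~6.1 – pushed up

# Averaged over thousands of pupils the national spread can look plausible,
# but at the level of each individual the result can be badly wrong.
```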
Setting aside the school bias, common-sense fairness says you should not apply a macro-level trend to an individual in something as important, and as variable from pupil to pupil, as exam results. Anyone who has ever sat GCSEs/O-levels or A-levels will know that a huge amount can happen between mocks and the actual exams – some children work harder, some don’t. If my final results had been based on my mock exams, I would be in a very different place right now – not necessarily a worse place, but definitely different.
Back to the algorithm. As we all know, it was ditched and replaced with teacher-predicted grades. All good? Well, what we have ended up doing is simply replacing algorithmic bias with potential human bias. There is subjectivity involved in teacher-predicted grades, and no human being can be expected to exclude all bias (unconscious or otherwise) from such a process. It’s not as if teachers wanted this. It is grossly unfair on teachers to suddenly have their predicted grades rubber-stamped as final results and to now be left agonising over whether they got it right.
Moving away from exam results, the real danger is in the potential backlash against using data, algorithms and analytics more generally.
Driven by the exam fiasco, some councils have started scrapping the use of algorithms in areas such as benefit fraud. Just to be clear, these algorithms weren’t entrusted with the final decision – their job was simply to flag suspicious claims so that staff could look into them in more detail. Benefits teams on their own have no chance of searching through the vast amounts of data needed to spot a suspicious case – so those councils may now have to live with more fraud.
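To illustrate the difference between flagging and deciding, here is a hypothetical sketch – the fields, thresholds and scoring rule are invented, not any council’s actual system. The code only ranks claims and refers the unusual ones to a caseworker; it never refuses a claim itself.

```python
# A hypothetical sketch of "flag, don't decide": a simple rule scores benefit
# claims and surfaces the most unusual ones for a human caseworker to review.
# The fields and thresholds are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    declared_income: float
    estimated_income: float   # e.g. from other data the council already holds
    previous_flags: int

def risk_score(claim: Claim) -> float:
    """Crude score: a large gap between declared and estimated income, plus a
    history of earlier flags, raises the score. No decision is made here."""
    income_gap = max(0.0, claim.estimated_income - claim.declared_income)
    return income_gap / 1000.0 + 0.5 * claim.previous_flags

def flag_for_review(claims: list[Claim], threshold: float = 2.0) -> list[Claim]:
    """Return the claims a human should look at; everything else passes untouched."""
    return [c for c in claims if risk_score(c) >= threshold]

claims = [
    Claim("A1", declared_income=6_000, estimated_income=6_200, previous_flags=0),
    Claim("B2", declared_income=5_000, estimated_income=9_500, previous_flags=1),
]
for c in flag_for_review(claims):
    print(f"Refer {c.claim_id} to a caseworker (score {risk_score(c):.1f})")
```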
Research from the Cardiff Data Justice Lab (CDJL) found that some councils have stopped using data analytics to help predict which children are at risk of neglect and abuse. Again, the ‘algorithm’ wasn’t making any decisions without human input. It was simply doing what no human could possibly do – analysing the huge amounts of data available which, when sliced, diced and drilled into, can give a pretty good indication of whether a family is reaching crisis point. Human social workers could then focus their efforts on cases where children may be at risk, as opposed to those cases potentially going unnoticed until further down the line, when something really bad happens.
A key problem with the exam results algorithm was that it made the final decision without any human moderation of individual results. In sensitive public services such as exams, benefits or social care, as a general rule, the data and the algorithms which try to make sense of it should guide, not decide. It is also essential that their use and impact can be explained to the people the ultimate decision affects. It cannot be a black box – it must be as transparent as possible, and final decisions on things as critical as benefits and social care must be ‘owned’ by a real person. The job of the data, and any associated algorithms or analytical processes, is to augment the human professional’s work and ensure that all relevant information is taken into account.
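One way to picture ‘guide, not decide’ is as a constraint built into the system itself. The sketch below is purely illustrative – the names and fields are invented – but it captures the idea that nothing becomes a decision until a named person signs it off, and that a recommendation with no stated reasons is rejected as a black box.

```python
# A hypothetical sketch of "guide, not decide": the analytics layer can only
# produce a recommendation with reasons; it becomes a decision only when a
# named, accountable person signs it off. Names and fields are invented.

from dataclasses import dataclass

@dataclass
class Recommendation:
    case_id: str
    suggestion: str
    reasons: list[str]          # must be explainable to the person affected

@dataclass
class Decision:
    recommendation: Recommendation
    decided_by: str             # a real, accountable person
    outcome: str
    rationale: str

def finalise(rec: Recommendation, decided_by: str, outcome: str, rationale: str) -> Decision:
    """Turn a recommendation into a decision only if it is owned and explainable."""
    if not decided_by:
        raise ValueError("A decision must be owned by a named person")
    if not rec.reasons:
        raise ValueError("A recommendation without reasons is a black box")
    return Decision(rec, decided_by, outcome, rationale)
```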
Many people object to the use of data and algorithms for understandable reasons, particularly when their use is not transparent. There is no doubt that data can hold historical biases driven by the type of data collected and from whom it is collected. There are inevitably some human assumptions involved at the start in defining the ‘question’ to be answered.
But rather than turning our back on algorithms and the positive power of data, the task is to make the algorithms and analytical processes better and the data they use more balanced and properly representative.
Both the amount of information on how we live our lives and the computing power to help make sense of it are increasing exponentially. Using algorithms and other analytical processes to help make sense of it all is not something to be feared.
Used with transparency and accountability, this can help make public services fairer and make sure that scarce public resources are focused in the right places.