
Intelligent decision making: what can we learn from the A-level algorithm failure?

In August this year, machine bias dominated the headlines as a ‘mutant algorithm’ – so named by Prime Minister Boris Johnson – wreaked havoc upon A-Level exam results.

UK students, unable to sit their exams due to COVID-19, received computer-allocated grades that, for almost 40% of students,1 were lower than predicted. Many of these students consequently missed out on university places. Coupled with the fact that the UK government capped university growth at 5% for the 2020/21 intake,2 the resulting situation was an unmitigated disaster for many who had hoped to go on to further education this year.

After a public outcry, students were eventually awarded the grades that their tutors had predicted rather than the computer-generated assessment, but we have a lot to learn from the situation.

What happened and what went wrong?

Predicting human performance using technology requires multiple sets of data and an extremely complicated algorithm, so there was considerable room for error from the outset.

According to Dr Daan Kolkman, Research Fellow at Eindhoven University of Technology, “Several people identified issues with Ofqual’s algorithm from a technical report that was released by the UK government. Among other things, experts criticized the low accuracy of the algorithm and lack of uncertainty bounds for the resulting grades. Meanwhile, public outcry centred on the algorithm’s unfair results. For instance, if no one from your school has gotten the highest grade in the past three years, it’s extremely unlikely—if not impossible—for anyone from your school to attain that grade this year.”3

This was the result of what is known as artificial intelligence (AI) or machine bias, where an algorithm produces results that are systemically prejudiced due to erroneous assumptions in the machine learning process.4
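To make the mechanism concrete, here is a minimal Python sketch of distribution-based grade allocation in general. It is an illustration only, not Ofqual’s published model; the grade shares and cohort are invented. It shows how ranking students onto a school’s historical grade distribution turns past results into a hard ceiling:

# Minimal sketch (illustrative only, not Ofqual's actual model) of
# allocating grades from a school's historical distribution: students
# are ranked, then mapped onto the grades the school achieved in
# previous years.

GRADES = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst

def allocate_grades(ranked_students, historical_distribution):
    """Assign grades to students (best first) in proportion to the
    school's historical share of each grade."""
    n = len(ranked_students)
    allocated = {}
    start = 0
    for grade in GRADES:
        share = historical_distribution.get(grade, 0.0)
        count = round(share * n)
        for student in ranked_students[start:start + count]:
            allocated[student] = grade
        start += count
    # Any students left over after rounding get the lowest grade.
    for student in ranked_students[start:]:
        allocated[student] = GRADES[-1]
    return allocated

# A school where no one achieved A* in recent years:
history = {"A*": 0.0, "A": 0.10, "B": 0.30, "C": 0.40, "D": 0.15, "E": 0.05}
cohort = [f"student_{i}" for i in range(20)]  # ranked best to worst

print(allocate_grades(cohort, history))
# Even the top-ranked student cannot receive an A*, because the
# historical share of A* grades at this school is zero.

However well a student performs, the allocation can never award a grade the school has not achieved before, which is exactly the unfairness Kolkman describes.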

“In selecting which computational model to use, Ofqual decided that the standardisation model would place more weight on the statistical data available (such as the school’s historical performance) rather than the teacher’s estimated scores for pupils,” explains Laurie Clarke for the New Statesman. “[This was] because the latter would ostensibly lead to an intolerable level of grade inflation and more disparities in fairness between different schools.”5

The algorithm also gave more weight to teachers’ predicted grades for classes of 15 or fewer pupils, meaning that students at smaller schools were more likely to benefit from grade inflation than those in larger cohorts. In reality, this bias meant that pupils studying at independent schools, where classes are generally smaller, had an unfair advantage:

“This approach reinforces existing inequalities, as one analysis showed that the proportion of A* and A’s awarded to independent (fee-paying) schools rose by 4.7 percentage points—more than double the rate for state comprehensive schools,”6 says Kolkman.
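A simplified sketch of this kind of class-size weighting might look like the following. The linear taper and the thresholds of 5 and 15 pupils are our assumptions for illustration, not Ofqual’s published formula:

# Illustrative sketch (our simplification, not Ofqual's published
# formula) of weighting teacher predictions by class size: small
# classes lean on the teacher's estimate, large classes on the
# statistical model. The exact cut-offs here are assumed.

def blended_score(teacher_score, statistical_score, class_size,
                  small=5, large=15):
    """Linearly taper the weight on the teacher's estimate from 1.0
    (very small classes) down to 0.0 (classes of `large` or more)."""
    if class_size <= small:
        w_teacher = 1.0
    elif class_size >= large:
        w_teacher = 0.0
    else:
        w_teacher = (large - class_size) / (large - small)
    return w_teacher * teacher_score + (1 - w_teacher) * statistical_score

# Two students with the same teacher prediction (an A, scored 80) whose
# school's statistics suggest a B (scored 70):
print(blended_score(80, 70, class_size=8))   # 77.0: closer to the teacher's A
print(blended_score(80, 70, class_size=30))  # 70.0: statistics dominate

Two students with identical teacher predictions can thus receive different grades purely because of how many classmates they have, which is how smaller (often independent) school cohorts gained their advantage.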

What can data analysts learn from Ofqual’s grading algorithm? The role of a critical audience

The A-Level results fiasco was not the first high-profile case of AI failure. In the US, an algorithm used to assess the risk of criminals reoffending was found to be biased against black offenders,7 and in 2015, AI used in Amazon’s hiring programme was found to be biased against women.8

In cases like these, where the use of AI has a direct, negative impact on the public, the decisions are often reconsidered once they are scrutinised by a ‘critical audience’, a concept we can also apply in business.

When conducting a data analysis, consultation and transparency are essential to ensure that the data and algorithms are robust.

“[In the case of the A-Levels] it would have helped to include the voices of parents and children at universities and the teachers, and of education specialists and experts and statisticians and data scientists throughout that process,” says Jeni Tennison, vice president of the Open Data Institute. “Part of the problem here is that these issues came out only after the grades were given to students, when we could have been having these discussions and been examining the algorithm and understanding the implications of it much, much earlier.”

Can you trust your data?

Being able to trust your data is a critical factor. However, although 61% of CEOs see building trust as a top-three priority for their organisation, just 35% of IT decision-makers have a high level of trust in their organisations’ analytics.9

“In today’s environment, trust isn’t just about the quality of an organisation’s brands, products, services and people. It’s also about the trustworthiness of the data and analytics that are powering its technology,” says Brad Fisher, Global Head of KPMG Lighthouse Data.

Poor-quality, non-objective, or insufficient volumes of data will result in inaccurate predictions, a problem colloquially referred to as ‘garbage in, garbage out’.

In a recent report, KPMG also warn that if we are to trust our data, we must have measures in place to regulate its quality:

“Amid the unprecedented use of [data and analytics] to drive decisions and the growing human-machine workforce, the governance of machines must become more strategic and integrated with governance of the entire enterprise”10

Data should inform – not dictate

If there is one thing to be learnt from the A-Level algorithm, it is that data should inform decisions – not dictate them.

Although we have more access to data than ever before, the challenge lies in knowing what to do with it. Technology plays a key role in the construction of information and knowledge to support interorganisational decision-making and value creation – but we must be strategic and critical in our application.

However, businesses can only do this if they are able to effectively analyse their data.

Incredibly, it is estimated that between 60% and 73% of all data within an enterprise goes unused for analytics11 – and this is often due to a lack of data analysis skills.

This data literacy gap is impacting organisations’ ability to thrive in our data-driven world. According to a recent report from Accenture, despite nearly all employees recognising data as an asset, only 25 percent believe they are fully prepared to use data effectively, and just 21 percent feel confident in their data literacy skills.12

Our Online Master of Business Administration (MBA) with Data Analytics can help you to fill this gap and secure your future in big data.

By studying business management with a specialism in data, you will gain a suite of highly transferable skills, plus the confidence to make informed, technology-driven decisions that drive growth and productivity in your organisation.

For more information on how we can help you achieve your career goals, please visit our course information page or call UK: 0800 032 1180 or Intl: +44 (0)115 941 8419.

References

  1. COUGHLAN, S (2020) Why did the A-level algorithm say no? (Online) Available at: <https://www.bbc.co.uk/news/education-53787203> [Accessed 24.09.20]
  2. KOLKMAN, D (2020) “F**k the algorithm”?: What the world can learn from the UK’s A-level grading fiasco (Online) Available at: <https://blogs.lse.ac.uk/impactofsocialsciences/2020/08/26/fk-the-algorithm-what-the-world-can-learn-from-the-uks-a-level-grading-fiasco/> [Accessed 24.09.20]
  3. KOLKMAN, D (2020) “F**k the algorithm”?: What the world can learn from the UK’s A-level grading fiasco (Online) Available at: <https://blogs.lse.ac.uk/impactofsocialsciences/2020/08/26/fk-the-algorithm-what-the-world-can-learn-from-the-uks-a-level-grading-fiasco/> [Accessed 24.09.20]
  4. ROUSE, M (2020) Machine learning bias (AI bias) (Online) Available at: <https://searchenterpriseai.techtarget.com/definition/machine-learning-bias-algorithm-bias-or-AI-bias> [Accessed 24.09.20]
  5. CLARKE, L (2020) How the A-level results algorithm was fatally flawed (Online) Available at: <https://tech.newstatesman.com/public-sector/how-the-a-level-results-algorithm-was-fatally-flawed> [Accessed 24.09.20]
  6. KOLKMAN, D (2020) “F**k the algorithm”?: What the world can learn from the UK’s A-level grading fiasco (Online) Available at: <https://blogs.lse.ac.uk/impactofsocialsciences/2020/08/26/fk-the-algorithm-what-the-world-can-learn-from-the-uks-a-level-grading-fiasco/> [Accessed 24.09.20]
  7. ANGWIN, J. ET AL. (2016) Machine Bias (Online) Available at: <https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing> [Accessed 24.09.20]
  8. SHIN, T (2020) Real-life Examples of Discriminating Artificial Intelligence (Online) Available at: <https://towardsdatascience.com/real-life-examples-of-discriminating-artificial-intelligence-cae395a90070> [Accessed 24.09.20]
  9. KPMG International Data & Analytics (2018) Guardians of trust: Who is responsible for trusted analytics in the digital age? (Online) Available at: <https://assets.kpmg/content/dam/kpmg/xx/pdf/2018/02/guardians-of-trust.pdf> [Accessed 24.09.20]
  10. KPMG International Data & Analytics (2018) Guardians of trust: Who is responsible for trusted analytics in the digital age? (Online) Available at: <https://assets.kpmg/content/dam/kpmg/xx/pdf/2018/02/guardians-of-trust.pdf> [Accessed 24.09.20]
  11. GUALTIERI, M (2016) Hadoop Is Data’s Darling For A Reason (Online) Available at: <https://go.forrester.com/blogs/hadoop-is-datas-darling-for-a-reason/> [Accessed 25.09.20]
  12. ACCENTURE ET AL. (2020) The Human Impact of Data Literacy (Online) Available at: <https://thedataliteracyproject.org/files/downloads/Qlik_Accenture_Human_Impact_of_Data_Literacy.pdf> [Accessed 25.09.20]