I once believed that technology was neutral, but learning about algorithmic bias has changed my view. Algorithms are not objective: they can reinforce social inequalities and quietly aggravate discrimination based on gender, race, skin colour, and behaviour.

First, algorithmic bias stems from data bias. Friedman (1996) pointed out that all technical systems carry social values, because data comes from human society and society itself is unequal. Buolamwini and Gebru (2018) showed that commercial facial recognition systems achieved up to 99% accuracy for lighter-skinned men but as low as 65% for darker-skinned women. This is not because machines choose to discriminate: darker-skinned women were underrepresented in the training data, so the models could not make accurate judgements about them. The machines only reinforced the bias already present in society.
*image from: https://time.com/5520558/artificial-intelligence-racial-gender-bias/ [Accessed: 29 Nov 2025].
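To make the data-imbalance point concrete, here is a minimal Python sketch of the kind of disaggregated evaluation Buolamwini and Gebru perform: instead of reporting one overall accuracy figure, it reports accuracy for each demographic subgroup separately. The labels and group names below are invented purely for illustration and are not taken from the Gender Shades dataset.

```python
# A minimal sketch of disaggregated evaluation: report accuracy per subgroup
# rather than one aggregate number. All data below is invented for illustration.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return classification accuracy for each demographic subgroup."""
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical labels and predictions for two subgroups.
y_true = [1, 1, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 1, 1, 0, 1, 0, 1, 1, 0]
groups = ["lighter_male"] * 5 + ["darker_female"] * 5

print(accuracy_by_group(y_true, y_pred, groups))
# {'lighter_male': 1.0, 'darker_female': 0.4}
# A gap like this is exactly the disparity that a single overall
# accuracy figure would hide.
```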

Second, selective coding by technology companies is another key source of bias. Hall (1980) reminded us that the media encodes values from the outset, making it hard for audiences to recognise the underlying ideology. Algorithmic recommendation works as exactly this kind of hidden power. Noble (2018) further argued that algorithms are not just mathematical logic; they embody strategic choices made by platforms in pursuit of commercial interests. In 2025, the French equality watchdog found that Meta’s job advertising algorithm showed mechanic roles more often to men and childcare roles more often to women, creating indirect gender discrimination (The Guardian, 2025). This was not an accident. It was a value-laden choice by the system about who becomes visible.
*image from: https://en.acatech.de/allgemein/the-future-council-of-the-federal-chancellor-discusses-impulses-for-germany-as-an-innovation-hub/ [Accessed: 29 Nov 2025].
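An audit of this kind of skew can be sketched in a few lines of Python: given a log of ad impressions, compute the share of each gender group that a particular job ad was shown to. The log entries and category names below are hypothetical and have nothing to do with Meta’s actual system.

```python
# Toy audit sketch: measure how impressions of each job ad were split
# between gender groups. The impression log is invented for illustration.
from collections import Counter

impressions = [
    ("mechanic", "man"), ("mechanic", "man"), ("mechanic", "man"),
    ("mechanic", "woman"),
    ("childcare", "woman"), ("childcare", "woman"), ("childcare", "woman"),
    ("childcare", "man"),
]

def exposure_rates(impressions, ad):
    """Share of impressions for `ad` that went to each gender group."""
    shown = Counter(gender for a, gender in impressions if a == ad)
    total = sum(shown.values())
    return {gender: count / total for gender, count in shown.items()}

for ad in ("mechanic", "childcare"):
    print(ad, exposure_rates(impressions, ad))
# Heavily skewed shares signal the indirect discrimination described above,
# even though no rule in the system mentions gender explicitly.
```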

Third, algorithmic bias also arises from feedback loops in which social inequality is reproduced and exacerbated. O’Neil (2017) emphasised that the danger of algorithms lies not in the errors themselves but in the way these errors grow under automation and scale. A typical example is the COMPAS risk assessment model used in the United States. ProPublica (Angwin et al., 2016) found that black defendants were far more likely to be wrongly labelled high risk, while white defendants were more likely to be wrongly labelled low risk. This feeds a vicious cycle: police patrol black communities more heavily, recorded crime rates in those communities rise, and the model then treats them as even more dangerous. Ultimately, the algorithm turned these judgements into seemingly objective scores and made social bias look like scientific truth.
*screenshot from: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing [Accessed: 29 Nov 2025].
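The feedback loop described above can be made concrete with a tiny simulation (all numbers are invented assumptions, not real crime data): two areas have identical underlying incident rates, but the patrol is always sent to the area with more recorded incidents, and incidents are only recorded where the patrol is present. A small initial gap in the data is enough to send nearly all attention, and therefore all new data, to one area.

```python
# Minimal feedback-loop simulation under invented assumptions.
import random
random.seed(0)

true_rate = [0.5, 0.5]     # both areas have the same underlying incident rate
recorded = [3, 2]          # a small initial gap in *recorded* incidents
patrol_log = []

for day in range(1000):
    # send the single patrol to the area with more recorded incidents so far
    area = 0 if recorded[0] >= recorded[1] else 1
    patrol_log.append(area)
    # an incident is only recorded where the patrol actually is
    if random.random() < true_rate[area]:
        recorded[area] += 1

print(recorded)                    # roughly [500, 2]: the recorded gap explodes
print(patrol_log.count(0) / 1000)  # 1.0: every patrol went to area 0
```

Even though the two areas are identical by construction, the data the system generates ends up “proving” that one of them is far more dangerous, which is the mechanism O’Neil and ProPublica describe.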
Overall, the answer to “why do algorithms become biased” lies not in the technology itself but in social structures. Algorithms are not an independent intelligence; they are a re-encoding of human data, decisions, and values. When this technology is applied to commercial optimisation or content distribution, the underlying bias intensifies. Understanding algorithmic bias requires examining how technology and power interact: who designs the algorithms, who benefits from their bias, and who bears the costs.
To reduce algorithmic bias, we need more transparent data sources, stricter model checks, and more diverse training data. What an algorithm becomes depends on the values we put into it and on whether we are willing to question it. Avoiding algorithmic bias is, at its core, about avoiding bias in our real lives.
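As a sketch of what one “stricter model check” could look like in practice, the snippet below compares false positive rates (people wrongly flagged as high risk) across groups, which is the disparity ProPublica measured for COMPAS; a large gap would be a reason to stop and investigate before deployment. The labels and group names are hypothetical.

```python
# Hedged sketch of a pre-deployment fairness check: compare false positive
# rates across groups. All labels below are hypothetical.
def false_positive_rate(y_true, y_pred):
    """Share of truly negative cases that were wrongly predicted positive."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return fp / negatives if negatives else 0.0

def fpr_by_group(y_true, y_pred, groups):
    """False positive rate computed separately for each group."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = false_positive_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx])
    return rates

y_true = [0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 1, 0, 0, 1, 0, 1]
groups = ["A"] * 5 + ["B"] * 5

print(fpr_by_group(y_true, y_pred, groups))
# {'A': 0.5, 'B': 0.25}: group A is wrongly flagged twice as often,
# a red flag that should trigger a review of the data and the model.
```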
References:
Angwin, J., Larson, J., Mattu, S. and Kirchner, L. (2016) ‘Machine Bias’, ProPublica. Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing [Accessed: 29 Nov 2025].
Buolamwini, J. and Gebru, T. (2018) ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, Proceedings of Machine Learning Research, 81, pp. 1–15. Available at: https://proceedings.mlr.press/v81/buolamwini18a.html [Accessed: 29 Nov 2025].
Friedman, B. (1996) ‘Value-sensitive design’, Interactions, 3(6), pp. 17–23. Available at: https://dl.acm.org/doi/pdf/10.1145/242485.242493 [Accessed: 29 Nov 2025].
Featured image: https://www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency [Accessed: 29 Nov 2025].
Hall, S. (1980) ‘Encoding/decoding’, in Hall, S. et al. (eds) Culture, Media, Language. London: Hutchinson, pp. 128–138.
Noble, S.U. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
O’Neil, C. (2017) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown.
The Guardian (2025) ‘Facebook job ads algorithm is sexist, French equality watchdog rules’. Available at: https://www.theguardian.com/world/2025/nov/05/facebook-job-ads-algorithm-is-sexist-french-equality-watchdog-rules [Accessed: 29 Nov 2025].
