Data Discrimination: Unmasking Algorithmic Bias in AI

Recent advances in AI of all kinds have taken the world by storm and will undoubtedly change many aspects of how we live. As we embrace the promise of AI, however, a disconcerting reality has emerged: pervasive, often hidden biases against minority communities are deeply ingrained in these algorithms.

While AI is celebrated for its potential to enhance efficiency, productivity, and innovation, it is crucial to acknowledge that it is far from infallible. In fact, AI systems have demonstrated a disturbing tendency to perpetuate, and in some cases exacerbate, the biases that have plagued societies for generations.

A quick history lesson is in order here. A 1956 summer workshop at Dartmouth College in New Hampshire is widely seen as the starting point for the development of AI. Only a hundred or so people in the entire world were working on the field at the time, and because the institutional racism, sexism, elitism, and so on that we see today were even more prevalent back then, all of the attendees at Dartmouth were white males.

An AI model is trained by analysing and spotting patterns in its data, so when that data set is wildly unrepresentative, the model can only produce unrepresentative results.
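
To make that mechanism concrete, here is a minimal sketch in Python, using entirely made-up toy data and scikit-learn rather than any real system: a simple classifier is trained on a pool dominated by one group, and its accuracy collapses for the under-represented group.

```python
# A minimal, self-contained sketch (toy data only, not any real system)
# of how a model fitted to a pool dominated by one group learns that
# group's pattern and essentially guesses for the under-represented one.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, w):
    # Each group has its own (hypothetical) relationship between
    # features and the outcome we are trying to predict.
    X = rng.normal(size=(n, 2))
    y = (X @ np.array(w) > 0).astype(int)
    return X, y

# Training pool: 950 examples from group A, only 50 from group B.
Xa, ya = make_group(950, w=[1.0, 1.0])
Xb, yb = make_group(50, w=[1.0, -1.0])
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# On balanced held-out data, the model works for A and guesses for B.
for name, w in [("group A", [1.0, 1.0]), ("group B", [1.0, -1.0])]:
    X_test, y_test = make_group(2000, w)
    print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
```

Run it and group A scores near-perfectly while group B hovers around coin-flip accuracy, even though both groups are equally predictable in principle; the model simply never saw enough of group B to learn its pattern.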

Putting this into today's context, under 14% of AI researchers in the US, at present the only Western nation that is a real player in the global AI race, are women, and around 67% of the AI workforce is white. On two major fronts, then, the field itself is unrepresentative, and that skew carries through to the data pools from which AI models learn.

A well-known example of this was Microsoft's "Tay" chatbot, launched in 2016, which lasted just 16 hours online before being pulled, having quickly learned racism, misogyny, and other irrational hatreds from an average day on Twitter.

Subtler examples of algorithms that shape our lives on a daily basis include those employed by credit card companies, healthcare providers, and companies screening job applications.

One particular case on the job applications side was the hiring algorithm Amazon began developing in 2014. The idea was that you could feed in the hundred, or however many, applications received for a particular position and it would come back with the top five, one or all of whom would then be hired. By the very next year, however, Amazon had reportedly discovered that the tool was systematically downgrading female candidates, penalising CVs that mentioned the word "women's", and the project was eventually abandoned.
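
That failure mode is easy to reproduce in miniature. The sketch below is purely hypothetical, with invented CVs and labels rather than Amazon's actual system or data, but it shows how a ranker trained to imitate past hiring decisions from a male-dominated workforce can attach a negative weight to a gendered word.

```python
# A hypothetical miniature of the failure mode described above, not
# Amazon's actual code or data: a ranker trained on past (biased)
# hiring decisions learns to penalise a gendered token.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented training CVs, labelled 1 if the candidate was hired.
cvs = [
    "captain chess club, python developer",
    "python developer, rugby team",
    "captain women's chess club, python developer",
    "women's coding society, python developer",
]
hired = [1, 1, 0, 0]  # the historical decisions the model imitates

vec = CountVectorizer()
X = vec.fit_transform(cvs)
ranker = LogisticRegression().fit(X, hired)

# The learned weight on the token "women" comes out negative: the model
# has encoded the historical bias as if it were a signal of quality.
weights = dict(zip(vec.get_feature_names_out(), ranker.coef_[0]))
print("weight for 'women':", round(weights["women"], 2))
```

Nothing in the pipeline is told to discriminate; the bias arrives entirely through the labels, which is exactly why it is so hard to spot from the outside.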

It has been widely reported that healthcare providers in the States have also utilised AI, and that the African-American community has been unfairly impacted by it. Hospitals serving an estimated 200 million Americans decided to use an AI algorithm to try to predict which patients would require extra medical attention in the future by analysing their historic healthcare costs. Because less money is spent on Black patients with the same level of need, a result of unequal access to healthcare, the algorithm incorrectly concluded that they were less likely to need treatment in future.
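
The proxy problem is simple enough to simulate. The numbers below are assumed for illustration, not taken from any study: two groups with identical medical need, but roughly 30% less historically spent on one of them, fed into a rule that refers the costliest 10% of patients for extra care.

```python
# A simplified simulation (assumed numbers, not real patient data) of
# ranking patients by historic *cost* when the real target is *need*.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

need = rng.gamma(shape=2.0, scale=1.0, size=n)  # true medical need
group_b = rng.random(n) < 0.5                   # half the patients
# Identical need, but historically ~30% less is spent on group B.
cost = need * np.where(group_b, 0.7, 1.0)

# The "algorithm": refer the 10% of patients with the highest cost.
referred = cost >= np.quantile(cost, 0.90)

# Among equally high-need patients, group B is referred far less often.
high_need = need >= np.quantile(need, 0.90)
for name, mask in [("group A", ~group_b), ("group B", group_b)]:
    rate = referred[high_need & mask].mean()
    print(f"{name}: share of high-need patients referred = {rate:.2f}")
```

In this toy version, nearly every high-need patient in group A is referred while roughly half of the equally sick patients in group B are missed, purely because spending was used as a stand-in for sickness.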

Facial recognition technology is another extremely influential and potentially dangerous setting in which biased AI can have awful real-world consequences, and it is far more widespread than you might think.

China is an extreme example, though recent UK deployments of facial recognition technology are taking us closer to that end of the spectrum than ever before. There, this type of technology is used in almost every facet of life.

Whether you are riding public transport, doing your weekly shop at the supermarket (yes, you can also pay just by scanning your face), or simply entering your block of flats, facial recognition is what allows you to do these things.

Chinese citizens are tracked by facial recognition technology at all times, and the Chinese Communist Party has been open about this. Citizens have been told that they each have a "social credit score" that rises and falls with their behaviour. Get caught jaywalking or playing your music too loudly on public transport, for example, and your options for purchasing transport tickets in future could be limited.

To take this one step further, if you are found saying something negative about the Communist Party, both your own score and those of your close family and friends will be affected by association.

This all sounds very dystopian, and we in the West love to decry these policies as something straight out of a George Orwell novel. In reality, however, our economies run on much the same scoring systems; they are simply operated by private enterprises, and it is the very algorithms we have been discussing that determine their outcomes.

The difference between China and the West on this issue is that China is being more transparent about it and is running it at a state level, rather than involving the private sector.

So, what can be done?

We realise we may sound a bit like a broken record when it comes to tackling these multinational, existential issues, in that our answer is once again regulation, but regulation absolutely is the way forward. It is the only mechanism we have for controlling large-scale, influential industries, and the current GDPR rules in the UK and EU are at least providing some consumer protection against the misuse of data.

The principle GDPR is based on is that individuals have rights of access, control, and accountability over how their data is used. This is not the case in the US, where no comprehensive federal equivalent exists and private enterprises have almost unfettered access to people's data. That data is then used to profile individuals and to build the data sets from which these algorithms learn, perpetuating these biases even further.

GDPR laws, or more advanced versions thereof, should be implemented across the Western world if private enterprises are to continue to hold power in the form of their algorithms. There is no other way of protecting consumers, and it must be done quickly if the AI revolution is to be the beneficial force it can be.
