As our lives become increasingly centered on technology, algorithms and AI are supposed to make things easier. But are they really?
Algorithms decide almost everything we see on the Internet and social media. From what’s trending on Twitter to which marketing strategies turn a profit, algorithms play a role. We take for granted that the systems filling our social media feeds with content and advertisements are fair and unbiased, but in reality, they’re far from it.
In a nutshell, algorithms are programs that break information down mathematically and use it to analyze data statistically. However, as wonderful and sophisticated as algorithms are built to be, there is no mathematical code for fairness, as Wired pointed out in this article.
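To make that concrete, here is a hypothetical toy scoring function. Everything in it, the features, the weights, and the loan-approval framing, is invented for illustration and not drawn from any real system. The point is that the math itself is just arithmetic; every fairness property, good or bad, comes from human choices about which inputs to use and how to weight them.

```python
# A hypothetical loan-scoring "algorithm": just arithmetic on a few features.
# The features and weights are invented for illustration. Nothing in the math
# encodes fairness; fairness (or unfairness) comes from the human choices baked in.

def loan_score(income: float, years_at_address: float, zip_default_rate: float) -> float:
    """Return a score in [0, 1]; higher means 'approve'."""
    score = 0.5
    score += 0.3 * min(income / 100_000, 1.0)        # reward higher income
    score += 0.1 * min(years_at_address / 10, 1.0)   # reward residential stability
    score -= 0.4 * zip_default_rate                  # penalize the applicant's neighborhood
    return max(0.0, min(1.0, score))

# The last feature looks like neutral statistics, but if historical default
# rates track segregated housing patterns, the formula quietly penalizes
# applicants for where they live.
print(loan_score(income=60_000, years_at_address=4, zip_default_rate=0.30))
```

Notice that the neighborhood penalty is exactly the kind of “purely technical” choice described in the redlining story below.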
Bias and racism have sadly been entrenched in our society longer than any of us have been around. So, unfortunately, there’s no stopping mathematical algorithms from being built with the same biases and prejudices their developers already hold.
The story of unfair algorithms predates Facebook or Twitter, though. Wired reported that throughout the late ’60s and ’70s, insurance companies used technical algorithms and statistics to redline inner-city neighborhoods that housed minorities. These companies hid behind the complicated mathematics of the algorithms, claiming that the choice was a purely technical one involving no moral judgment.
Beginning to sound familiar? You may remember that during the height of the 2016 election season, Facebook claimed it couldn’t prevent fake news from running rampant across the platform. Instead, the site’s algorithms were to blame, and stopping the spread supposedly wasn’t Facebook’s responsibility.
It’s hard to argue with solid data and numbers, even if those numbers justify discrimination. In a ProPublica report, Julia Angwin found many disparities in the algorithms used by the criminal justice system to complete risk assessments on defendants. The scores given to inmates often informed decisions about release from prison. However, like the algorithms used by insurance companies years prior, these risk assessments were far from perfect. In fact, Angwin found that the formula unfairly flagged black defendants as future criminals at double the rate of white defendants.
These risk assessment scores reflect poorly on black defendants and create a feedback loop: defendants are labeled ‘higher risk’, more police patrol black neighborhoods, more black people get arrested because more police are patrolling, and the cycle continues. All of it is propped up by the little mathematical algorithm that determines a defendant’s risk upon release.
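A minimal simulation can show how such a loop sustains itself. The numbers below are entirely made up; the sketch just assumes two neighborhoods with identical underlying offense rates and a “data-driven” policy that allocates patrols in proportion to past arrests.

```python
import random

# A made-up simulation of the feedback loop described above. Two neighborhoods
# have the SAME underlying offense rate, but B starts out with twice the
# patrols. Arrests scale with patrol presence, and patrols are reallocated in
# proportion to arrest counts, so the initial disparity sustains itself
# indefinitely while looking "data-driven".

random.seed(0)
TRUE_OFFENSE_RATE = 0.05              # identical in both neighborhoods
patrols = {"A": 10, "B": 20}          # B starts with more policing
arrests = {"A": 0, "B": 0}

for year in range(10):
    for hood in patrols:
        observed = patrols[hood] * 100  # each patrol observes 100 people
        arrests[hood] += sum(random.random() < TRUE_OFFENSE_RATE
                             for _ in range(observed))
    total = arrests["A"] + arrests["B"]
    # Reallocate patrols based on arrest data -- the "objective" step.
    patrols = {h: max(1, round(30 * arrests[h] / total)) for h in arrests}

# B racks up roughly twice the arrests of A despite identical behavior.
print(arrests)
```

The arrest counts look like hard evidence that neighborhood B is riskier, when in fact they only measure where the police were looking.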
So, clearly, even though algorithms are created with the claim that they’re backed by mathematics and data, that doesn’t necessarily mean they’re right. Social media platforms take a similar stance. How bad can it really be? Well, definitely not great. The algorithms that fill our Facebook, Twitter, Instagram, and even TikTok feeds all come with bias too.
Buzzfeed published an article that shed light on how TikTok’s recommendation algorithm is prompting some worry. When AI researcher Marc Faddoul created a new TikTok account, he noticed a strange pattern emerge: the suggested accounts to follow all looked very similar. If a user followed a white man with a beard, only white men with beards appeared in TikTok’s recommendations. If a user followed a black woman, only black women would appear. Follow a woman wearing a hijab and, you guessed it, TikTok would recommend other women who also wear hijabs.
While this was a casual experiment, it shows that the collaborative filtering algorithm TikTok employs is less than perfect, leading to echo chambers and racially homogeneous feeds. While it may not be the intention of the algorithm, it can reproduce bias that is already present in people’s behavior, Faddoul explained to Buzzfeed. This can lead to a lack of diversity in people’s feeds and make it harder for minority creators to gain popularity.
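For a sense of the mechanism, here is a minimal sketch of user-based collaborative filtering. TikTok’s real system is proprietary and far more complex, and the accounts and follow data below are invented. Notice that the code never looks at race or appearance; it only counts overlapping follows. Yet if existing follow patterns cluster along demographic lines, the recommendations inherit that clustering automatically.

```python
from collections import Counter

# Invented follow data: existing users' follows happen to cluster
# along demographic lines, as in Faddoul's experiment.
follows = {
    "user1": {"bearded_guy_1", "bearded_guy_2"},
    "user2": {"bearded_guy_1", "bearded_guy_3"},
    "user3": {"hijabi_creator_1", "hijabi_creator_2"},
    "user4": {"hijabi_creator_1", "hijabi_creator_3"},
}

def recommend(my_follows: set, k: int = 3) -> list:
    """Suggest accounts followed by users whose follows overlap with mine."""
    scores = Counter()
    for their_follows in follows.values():
        overlap = len(my_follows & their_follows)
        if overlap:
            for account in their_follows - my_follows:
                scores[account] += overlap  # weight by how similar the user is
    return [account for account, _ in scores.most_common(k)]

# A new account that follows one bearded creator gets... more bearded creators.
print(recommend({"bearded_guy_1"}))      # ['bearded_guy_2', 'bearded_guy_3']
print(recommend({"hijabi_creator_1"}))   # ['hijabi_creator_2', 'hijabi_creator_3']
```

The algorithm is race-blind by construction, but because it optimizes for similarity to past behavior, it faithfully reproduces whatever segregation that behavior contains.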
This isn’t a singular case, either. In one instance, Twitter tried to combat hate speech across the social media site. A seemingly noble act. The idea is that AI and algorithms will be able to flag racist remarks far faster than any human, making the platform safer and more enjoyable for all users. It didn’t exactly turn out as planned, though.
In fact, Vox reported that while trying to mend the problem of hate speech on Twitter, the AI and algorithms it employed amplified racial bias. Research showed that tweets by African Americans were one and a half times more likely to get flagged than tweets by white users. The problem is that mathematical equations don’t understand social context. They also can’t escape the faults of our society. Thus, using an algorithm to decipher whether an offensive term is actually being used offensively is more difficult than Twitter originally thought.
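To illustrate the failure mode (this is not Twitter’s actual system, which is not public), here is a toy context-blind flagger. The word list and tweets are placeholders, and the dialect labels are hand-assigned. Because the check matches words without regard to speaker or intent, in-group reclaimed usage gets flagged just like hostile usage, and the false-positive rate on benign tweets diverges across groups.

```python
# A toy context-blind flagger, not Twitter's real system. The word list and
# tweets are placeholders (no real slurs). It shows how matching words with
# no social context yields unequal false-positive rates across dialect groups.

FLAGGED_TERMS = {"<reclaimed-term>"}  # hostile in some contexts, in-group in others

def is_flagged(tweet: str) -> bool:
    return any(term in tweet.lower().split() for term in FLAGGED_TERMS)

# (text, dialect_group, actually_offensive) -- hand-labeled toy data
tweets = [
    ("<reclaimed-term> we out here", "AAE", False),         # friendly in-group usage
    ("good morning everyone", "AAE", False),
    ("you are a <reclaimed-term>", "white-aligned", True),  # hostile usage
    ("lovely weather today", "white-aligned", False),
]

for group in ("AAE", "white-aligned"):
    benign = [t for t, g, off in tweets if g == group and not off]
    false_pos = sum(is_flagged(t) for t in benign)
    print(f"{group}: {false_pos}/{len(benign)} benign tweets wrongly flagged")
```

On this toy data the flagger wrongly flags half of the benign AAE tweets and none of the benign white-aligned ones, a miniature version of the disparity the research found.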
While we like to think artificial intelligence and algorithms are impartial and unbiased, most of the time this isn’t the case. They learn from their creators, humans, and if the humans who create an algorithm have biases, guess what? The product of their creation does too.
It’s easy to believe that a machine driven by data and math sees the world fairly. Computers are supposed to be accurate. That’s why you use a calculator to check your less-than-perfect math skills. Computers are always learning, but there are some things computers shouldn’t be taught: racism. It’s not that algorithms or AI are inherently racist or discriminatory, either. But they perpetuate whatever biased human assumptions the people who made them held. When creating AI and algorithms, programmers need to account for these issues. Instead of trying to fix a biased algorithm with more algorithms, maybe take a step back to understand the role of human error in the situation.
There’s no doubt that algorithms will continue to run the world. We’re there. The technology isn’t leaving. But becoming aware of the problems algorithms inherit from human behavior is important, and hopefully in the future we can change them. We can begin to fill the void of algorithmic fairness by hiring more people of color to build these systems. But to truly change the way AI and algorithms work, there are much deeper problems entangled in society that must be addressed. We need to train these machines to reflect the progress we aim to make as a society, instead of training them to uphold the past norms we’re trying to escape.