The world is changing, and we’re relying more and more on Artificial Intelligence (AI) to help us make decisions. Today, algorithms are a part of our everyday life, and whether we want it or not, our reality is shaped by invisible, intelligent systems. AI can be an efficient tool for data-driven predictions, and it is increasingly used for recruitment. But what is the outcome if we feed a system biased data? Well, we get an unfair AI that inadvertently discriminates against women and minorities.
Risk of Discrimination
Personal data has become one of the most valuable commodities of the last decade because it is an effective tool for predicting future behavior. But according to a report written for the Anti-discrimination department of the Council of Europe, algorithmic decision-making carries many risks of discrimination. Today, we can feed large amounts of data into algorithms and let them analyze it to make predictions. This is also called “profiling”, and it is now used as a legitimate ground for making important decisions about people.
But the lack of moral and ethical guidelines has made room for systems that discriminate and reinforce social inequality. And even though software can’t be biased in itself, if we feed it data from today’s (unequal) society, the algorithm will predict a future that reflects that society, making the rich richer and the poor poorer.
“The data showed that black defendants were twice as likely to be incorrectly labeled as higher risk than white defendants.”
When AI Reinforces Social Inequality
There have been several examples of systems that have produced unfair and even discriminatory results. And it is crucial to understand that there is always potential for bias to creep in over time, because who is checking whether the system is still doing well after a couple of years?
One example is from the public sector, where a system actually reinforced social inequality: the notorious “Correctional Offender Management Profiling for Alternative Sanctions” (COMPAS), which was used to predict whether defendants would commit a violent crime again. A great idea in theory, but when the results were analyzed, the algorithm repeatedly discriminated against one group: black defendants were incorrectly labeled as likely to re-offend at twice the rate of white defendants, even though their actual re-offense rates didn’t justify it. This skew wasn’t a simple programming error; it follows from the mathematics of risk scoring itself. A group of Stanford researchers have since shown that “It’s actually impossible for a risk score to satisfy both fairness criteria at the same time”, that is, a score can’t be equally accurate for both groups and produce equal rates of incorrect “high risk” labels when the groups’ underlying re-offense rates differ. Thus, creating a fair algorithm is not as easy as just providing data.
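To see why, here is a minimal arithmetic sketch in Python. The numbers are invented for illustration (they are not COMPAS’s actual figures), but the relationship between them follows directly from the definitions of the confusion-matrix terms: if a score is equally precise for two groups whose underlying re-offense rates differ, the share of non-re-offenders wrongly labeled high risk cannot be equal.

```python
# Toy arithmetic sketch (hypothetical numbers, not COMPAS data) showing why a
# risk score that is equally "precise" for two groups must produce different
# false positive rates when the groups' underlying re-offense rates differ.

def false_positive_rate(base_rate, ppv, tpr):
    """FPR implied by a given base rate, precision (PPV) and recall (TPR).

    Follows from the definitions of the confusion-matrix terms:
        FPR = TPR * base_rate / (1 - base_rate) * (1 - PPV) / PPV
    """
    return tpr * base_rate / (1 - base_rate) * (1 - ppv) / ppv

# Same score quality for both groups: 60% of "high risk" labels are correct
# (PPV) and 70% of actual re-offenders are caught (TPR) ...
ppv, tpr = 0.60, 0.70

# ... but the groups have different underlying re-offense rates (assumed here).
for group, base_rate in [("group A", 0.50), ("group B", 0.30)]:
    fpr = false_positive_rate(base_rate, ppv, tpr)
    print(f"{group}: base rate {base_rate:.0%} -> "
          f"{fpr:.0%} of non-re-offenders are wrongly labeled high risk")

# Output: group A ~47%, group B ~20%. Equal precision forces unequal
# false positive rates, so both fairness criteria cannot hold at once.
```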
Another well-known example is from the private sector. Amazon stopped using an AI system for screening job applicants because it was biased against women. It became clear that the algorithm wasn’t rating candidates in a gender-neutral way. Instead, the results were based on historical training data, and since that data was dominated by successful male applicants, the system taught itself that male candidates were preferred.
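To make the mechanism concrete, here is a deliberately simplified sketch with invented resumes and outcomes, nothing like Amazon’s actual model: a scorer that only learns which words co-occur with past hires ends up rewarding or penalizing gender-coded words purely because of who happened to be hired before.

```python
# Toy sketch (not Amazon's system) of how a naive resume scorer trained on
# historical hires can learn a gender bias. Resumes and hiring outcomes
# below are invented for illustration.

from collections import Counter

# Historical training data: past applicants and whether they were hired.
# Most past hires are men, so "men's" activities dominate the hired pile.
history = [
    ("captain men's rugby team, java developer", True),
    ("men's chess club, python developer", True),
    ("java developer, cloud certification", True),
    ("captain women's rugby team, java developer", False),
    ("women's coding society, python developer", True),
    ("self-taught developer, retail experience", False),
]

hired_words = Counter()
rejected_words = Counter()
for resume, hired in history:
    words = resume.replace(",", "").split()
    (hired_words if hired else rejected_words).update(words)

def score(resume):
    """Score = how often each word appeared among hires vs. rejections."""
    words = resume.replace(",", "").split()
    return sum(hired_words[w] - rejected_words[w] for w in words)

# Two new candidates with identical skills, differing only in one word:
print(score("captain men's rugby team, python developer"))    # higher score
print(score("captain women's rugby team, python developer"))  # lower score
```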
Validating AI Tools
With AI playing a more important role in organizations, it’s becoming clear that we need moral and ethical guidelines. Transparency and continuous testing will be key to building predictive systems that don’t reproduce or even amplify discrimination. Diverse, cross-disciplinary teams also lead to better evaluation: having software developers, recruiters, marketing specialists, and mathematicians all involved in the development, sharing their different perspectives, increases the chance of creating a less biased system. One simple starting point for that kind of testing is sketched below.
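Here is a minimal, hypothetical example (the data, the metric, and the 0.8 threshold are assumptions for illustration, not Tengai’s actual process) that compares how often a screening model recommends candidates from different groups:

```python
# Minimal sketch of one possible bias check (hypothetical data): compare the
# rate at which a screening model recommends candidates from each group,
# an adverse-impact ("four-fifths rule") style comparison.

def selection_rates(decisions):
    """decisions: list of (group, recommended) pairs -> rate per group."""
    totals, selected = {}, {}
    for group, recommended in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if recommended else 0)
    return {g: selected[g] / totals[g] for g in totals}

# Invented screening outcomes for illustration.
decisions = [("women", True), ("women", False), ("women", False),
             ("men", True), ("men", True), ("men", False)]

rates = selection_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates)                         # {'women': 0.33..., 'men': 0.66...}
print(f"impact ratio: {ratio:.2f}")  # 0.50 -> below 0.8, flag for review
```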
Keep a lookout here on the blog to read about how Tengai is developing fair software to give all candidates the same opportunity, in an interview with our CTO Vanja Tuvfesson, who has been working with questions like equality, diversity, and inclusion to change the perception of who can be a developer.