Is AI the New Form of Discrimination?

Adheesh Soni
3 min read · Nov 29, 2024


Introduction: When Technology Mirrors Our Flaws

I used to think of Artificial Intelligence as the ultimate problem solver. After all, how could something built on math and logic possibly discriminate? It’s fair, impartial, and data-driven — or so I thought. But one day, I stumbled upon a news article about an algorithm used in a school district to allocate resources. Instead of ensuring equal distribution, it systematically favoured schools in wealthier neighbourhoods over those in underprivileged areas.

That’s when it hit me: AI isn’t just math; it reflects our choices, biases, and priorities. And sometimes, it amplifies these issues in ways we never expected.

So, is AI becoming the new form of discrimination? Let’s unpack this question together.

1. How AI Inherits Bias: The Invisible Problem

AI systems learn from data. If the data is biased, the AI doesn’t know better — it simply learns the bias. This means the very tools we trust to make fair decisions can end up perpetuating discrimination.

Real-World Example: Hiring Bias

Take AI-powered hiring tools. These systems often analyze historical hiring data to predict who would make a good candidate. But if the company’s past hires were predominantly male, the AI might learn to favour male applicants. That’s exactly what happened with Amazon’s hiring algorithm — it started penalizing resumes with terms like “women’s chess club captain.”
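
To see how little it takes, here's a toy sketch in Python (using scikit-learn, with entirely invented data; this is not Amazon's actual system). A model trained on skewed hiring history ends up assigning a heavy negative weight to a gendered proxy feature:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Feature 1: genuine skill, the thing we actually want to hire on.
skill = rng.normal(0, 1, n)
# Feature 2: a proxy for gender, e.g. "resume mentions a women's club".
womens_club = rng.integers(0, 2, n)

# Historical decisions were driven by skill, but past recruiters also
# (unfairly) passed over many candidates flagged by the proxy.
hired = (skill + rng.normal(0, 0.5, n) - 1.2 * womens_club > 0).astype(int)

X = np.column_stack([skill, womens_club])
model = LogisticRegression().fit(X, hired)

# The model faithfully reproduces the bias: the proxy feature gets a
# large negative weight, i.e. those resumes are penalized.
print(f"skill weight:       {model.coef_[0][0]:+.2f}")
print(f"womens_club weight: {model.coef_[0][1]:+.2f}")
```

Nobody told the model to discriminate. The history did.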

The Bigger Picture

AI bias often feels invisible because it operates quietly in the background. Unlike overt human discrimination, it hides behind a facade of neutrality. But its impacts can be just as harmful.

2. High-Stakes Decisions: AI in Justice and Healthcare

Some of the most troubling examples of AI bias come from high-stakes fields like criminal justice and healthcare, where biased decisions can have life-changing consequences.

Criminal Justice

Predictive policing algorithms, which analyze past crime data to forecast where crimes might occur, often direct officers back to historically over-policed communities. This creates a vicious cycle: heavier patrols produce more recorded crime, which justifies even heavier patrols, while crime in other neighbourhoods goes under-recorded.
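
Here's a tiny simulation of that cycle, with all numbers invented. Both neighbourhoods have identical true crime rates; the only difference is a small initial skew in the records, which the patrol policy then amplifies:

```python
# Two neighbourhoods with the SAME underlying crime rate.
true_rate = [0.10, 0.10]
recorded = [60.0, 40.0]  # historical records: A starts slightly over-policed

for year in range(10):
    # Naive policy: concentrate patrols where recorded crime is highest.
    hot = 0 if recorded[0] > recorded[1] else 1
    patrols = [20, 20]
    patrols[hot] += 60
    # You mostly record crime where you patrol, so the skew compounds.
    for i in (0, 1):
        recorded[i] += patrols[i] * true_rate[i]

print("recorded crime after 10 years:", [round(r) for r in recorded])
# Neighbourhood A ends up with far more recorded crime than B despite
# identical true rates. The data now "confirms" the biased allocation.
```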

Did you know? ProPublica’s 2016 investigation of COMPAS, a widely used recidivism prediction tool in the U.S., found it was almost twice as likely to falsely flag Black defendants as high-risk compared to white defendants.
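
The metric behind that finding is the false positive rate: how often people who did not reoffend were still flagged as high-risk. Here's a sketch of that kind of audit, run on a handful of fabricated records:

```python
# Each record: (group, flagged_high_risk, actually_reoffended).
# These rows are fabricated purely to show the arithmetic of the audit.
records = [
    ("A", True,  False), ("A", True,  False), ("A", True,  True), ("A", False, False),
    ("B", True,  False), ("B", False, False), ("B", False, False), ("B", True,  True),
]

for group in ("A", "B"):
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    false_positives = [r for r in non_reoffenders if r[1]]
    fpr = len(false_positives) / len(non_reoffenders)
    print(f"group {group}: false positive rate = {fpr:.0%}")
# Group A is falsely flagged twice as often as group B here, which is
# the shape of the disparity the study reported.
```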

Healthcare

In healthcare, algorithms have been shown to underestimate the severity of illness in Black patients. Why? One widely cited study found that a risk-scoring tool used past healthcare spending as a proxy for medical need; because the system has historically spent less on Black patients at the same level of need, the algorithm scored them as healthier than they actually were. The result is unequal treatment and outcomes for marginalized communities.
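
Here's a stripped-down illustration of that proxy problem, with every patient and number invented:

```python
# Each patient: (id, true_severity on a 0-10 scale, access to care 0-1).
patients = [
    ("P1", 9, 0.5),   # very sick, limited access -> low historical spending
    ("P2", 6, 1.0),
    ("P3", 4, 1.0),
    ("P4", 8, 0.5),   # very sick, limited access -> low historical spending
]

# Past spending roughly tracks severity * access, so it understates
# need for patients who could not access care in the first place.
by_spending = sorted(patients, key=lambda p: p[1] * p[2], reverse=True)
by_severity = sorted(patients, key=lambda p: p[1], reverse=True)

print("ranked by spending proxy:", [p[0] for p in by_spending])
print("ranked by true severity: ", [p[0] for p in by_severity])
# P4 is the second-sickest patient but falls to last place when the
# proxy, rather than actual need, drives the ranking.
```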

3. Why This Matters: The Automation of Inequality

In my opinion, the scariest thing about AI bias is how easily it scales. Unlike human discrimination, which might be limited to one person or situation, an algorithm’s decisions can impact millions at once.

A Thought-Provoking Question

Imagine applying for a mortgage, a job, or even emergency healthcare, only to be denied because of an algorithm’s biased prediction. How would you even know? Unlike human decision-makers, algorithms don’t explain themselves.

Supporting Data

According to the World Economic Forum, algorithms are now responsible for decisions affecting more than 60% of the global workforce. That’s a staggering amount of influence, and if we’re not careful, it could deepen existing inequalities.

4. What Can Be Done: Towards Fairer AI Systems

Fixing AI bias is no easy task, but there are steps we can take to make these systems fairer and more accountable.

Diverse Data

The first step is diversifying the datasets used to train AI. For example, facial recognition systems trained on more inclusive datasets have shown significant improvements in accuracy across all demographic groups.
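
Part of making this work is measuring the right thing: reporting accuracy per demographic group rather than a single aggregate number, because an overall score can hide a badly underperforming slice. A sketch with invented labels:

```python
# Each row: (group, true_label, predicted_label). All values invented.
data = [
    ("group_1", 1, 1), ("group_1", 0, 0), ("group_1", 1, 1), ("group_1", 0, 0),
    ("group_2", 1, 0), ("group_2", 0, 0), ("group_2", 1, 0), ("group_2", 0, 1),
]

overall = sum(t == p for _, t, p in data) / len(data)
print(f"overall accuracy: {overall:.0%}")

for g in ("group_1", "group_2"):
    rows = [(t, p) for grp, t, p in data if grp == g]
    acc = sum(t == p for t, p in rows) / len(rows)
    print(f"{g} accuracy: {acc:.0%}")
# The headline number looks passable, but one group sits at 25%:
# exactly the kind of gap that aggregate reporting conceals.
```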

Transparency

We need to demand transparency from AI developers. Explainable AI systems, which show how and why decisions are made, can help identify and correct biased patterns.
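
As a small taste of what this looks like in practice, here's a sketch using permutation importance from scikit-learn, a model-agnostic way to ask which features a trained model actually leans on. Reusing the invented hiring data from the earlier sketch, the gendered proxy stands out:

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
skill = rng.normal(0, 1, 5000)
womens_club = rng.integers(0, 2, 5000)   # gendered proxy feature
hired = (skill + rng.normal(0, 0.5, 5000) - 1.2 * womens_club > 0).astype(int)

X = np.column_stack([skill, womens_club])
model = LogisticRegression().fit(X, hired)

result = permutation_importance(model, X, hired, n_repeats=10, random_state=0)
for name, imp in zip(["skill", "womens_club"], result.importances_mean):
    # A materially nonzero importance on the proxy feature is exactly
    # the red flag an audit should surface.
    print(f"{name:12s} importance = {imp:.3f}")
```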

Regulation

Governments and organizations must step in to regulate high-stakes AI systems. Policies like the European Union’s AI Act aim to ensure accountability in areas like healthcare, policing, and employment.

Conclusion: A New Era of Responsibility

What I’ve learned is that AI isn’t inherently good or bad — it’s a tool. But like any tool, its impact depends on how we use it. If we ignore its potential for harm, we risk creating a world where discrimination becomes automated, invisible, and harder to fight.

So, is AI the new form of discrimination? In some ways, yes. But it doesn’t have to be. With the right awareness, effort, and accountability, we can build systems that don’t just mirror our world — they improve it.


Written by Adheesh Soni

Painting the journey of a leader, with the soul of an unconventional artist.