Good morning, everyone!
You’ve just launched a new AI project. You’ve invested time, resources, and heart into making it the next big breakthrough. But then, like many before it, the project fails, joining the staggering 80% of AI initiatives that do not succeed, according to research from the RAND Corporation.
That's double the failure rate of non-AI tech projects, mainly due to misunderstood objectives, insufficient data, over-reliance on shiny new technologies instead of solving real-world problems, and, most importantly, not following our newsletter.
As we explore the world of “biases” in AI, it's also worth noting that the very term "bias" carries a negative connotation (mainly because of its historical usage). Indeed, we have our own bias against the word "bias"; isn't that ironic? However, in AI, biases are not inherently good or bad; they simply represent a tendency that influences a model's decision-making. Whether that influence is positive or negative depends entirely on how those biases are managed.
Today, we will explore the world of bias in AI—one of the many challenges that can make or break an AI system. Whether you're deeply entrenched in AI development or guiding a company's AI strategy from above, understanding bias is key to creating accurate, ethical, and effective systems.
Removing all biases is not a solution or a fix. Every algorithm relies on some biases, and some of them are precisely what make the system work.
What Exactly Is Bias?
The word "bias" isn't new—it's been used in psychology, statistics, and everyday language for decades. Bias in AI refers to the skewing of an algorithm’s outputs due to bias in data or the influence of the human engineers who develop it. AI systems learn from the data we feed them, and if that data is tainted with human biases—no matter how subtle—those biases become part of the model’s knowledge. And here's where things get tricky: biases can stem from everywhere—data collection, algorithmic design, human oversight, or even the project's objectives.
What makes bias particularly problematic is that it often goes unnoticed. It typically reveals itself only after the model has been in use for a while, and by the time it becomes clear, the damage may already be done.
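To make that concrete, here's a minimal sketch in Python (the data and numbers are entirely hypothetical): a naive "model" that simply memorises historical rates will faithfully reproduce whatever skew its training data contains.

```python
# Hypothetical hiring records as (group, hired) pairs. Group "A" was hired far
# more often in the past, purely as an artifact of how the data was collected.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 20 + [("B", 0)] * 80

# "Training": learn the historical hire rate for each group.
stats = {}
for group, hired in history:
    hires, total = stats.get(group, (0, 0))
    stats[group] = (hires + hired, total + 1)

# "Prediction": the learned rates simply echo the skew in the data.
for group in sorted(stats):
    hires, total = stats[group]
    print(f"Predicted hire probability for group {group}: {hires / total:.2f}")
# Group A: 0.80, Group B: 0.20 -- the data's skew has become the model's bias.
```

Nothing in that toy example "decides" to discriminate; it just compresses the data it was given, which is exactly how real models absorb bias.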
The Good
Not all biases are harmful. Sometimes biases are actually what we want the model to learn. Think about your favourite music streaming service. How does it always recommend that perfect next song? It's all about bias—the system is biased toward your personal preferences, constantly learning what you like and what you don’t.
The same is true for ChatGPT’s memory feature. As you interact with it, the system saves your preferences, remembers them, and tailors future interactions accordingly.
In healthcare, biases can save lives. AI systems trained on patient data may "bias" their predictions toward certain risk factors, detecting dangerous conditions faster than human doctors can. In these cases, bias isn’t a flaw—it’s a life-saving tool.
The Bad
Here’s where things get messier. Biases can reinforce existing inequalities, and with powerful models, the consequences can be exponentially worse.
Take Google's Gemini image generator. Initially, the system was criticized for predominantly producing images of white people, so the developers corrected this by introducing a bias for diversity. But the correction went too far. Soon, users generating images of historical figures, like America's founding fathers, began receiving depictions of black men even when contextually inappropriate. What started as a well-intentioned fix spiralled into a distortion of history.
And it's not just image generation. Even everyday interactions with AI systems, like ChatGPT, reveal subtle biases. Have you ever noticed how hard it is to avoid bullet-point answers? The feedback from users—preferring concise, list-based answers—has skewed the model’s output. It’s a subtle bias that shapes your experience in ways you may not even realize.
… And the Ugly
Bias becomes genuinely "ugly" at the extremes, whether a system is loaded with too many corrective biases or left with too few guardrails. Let's revisit the Amazon AI recruitment tool incident from 2018. Trained on past hiring data, the system began favoring male candidates for technical roles because the resumes it learned from were predominantly from men. The model wasn't consciously discriminating; it simply reflected the world it had been trained to navigate. Despite attempts to "fix" the bias, it was so deeply embedded that Amazon eventually scrapped the entire project.
And then there’s Microsoft’s Tay, a chatbot that went from innocent banter to spewing racist slurs in less than 24 hours. Why? Tay learned from user interactions, and without guardrails in place, it absorbed the biases of the worst parts of the internet. This was bias unchecked—an AI gone rogue.
[Image: a softened example of the type of slurs Tay tweeted.]
The Necessary Bias: Why We Can’t Live Without It
Here’s the catch: bias is not just inevitable—it’s essential. Without bias, AI models wouldn’t learn. It’s through recognizing patterns (which are biases) that AI becomes capable of making predictions, whether it’s about the weather or who’s likely to buy your product.
However, the challenge lies in managing bias effectively. As humans, we have our own biases when designing systems (most of which we are unaware of). From the data we choose to use to the problems we prioritize solving, we are constantly shaping the biases of our AI systems.
As tech leaders, our job is to ensure our teams recognize and mitigate harmful biases while leveraging the ones that drive better outcomes. The goal is not to eliminate bias; it's to manage it. A well-balanced system can be both practical and ethical, and that's the sweet spot we aim for.
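For the "recognize" part, here's a hedged sketch of one simple audit, again on hypothetical data: compare selection rates across groups and flag large gaps. The 0.8 threshold below (the "four-fifths rule") is just one common heuristic, not a definitive fairness test.

```python
# Compare selection rates across groups and flag gaps below a threshold.
# The decisions, group names, and 0.8 threshold are illustrative only.

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> selection rate per group."""
    counts = {}
    for group, selected in decisions:
        picked, total = counts.get(group, (0, 0))
        counts[group] = (picked + int(selected), total + 1)
    return {group: picked / total for group, (picked, total) in counts.items()}

# Hypothetical screening decisions produced by a model.
decisions = (
    [("group_x", True)] * 60 + [("group_x", False)] * 40
    + [("group_y", True)] * 25 + [("group_y", False)] * 75
)

rates = selection_rates(decisions)
baseline = max(rates.values())
for group, rate in sorted(rates.items()):
    status = "FLAG for review" if rate / baseline < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f} ({status})")
```

A flag like this isn't a verdict; in practice you'd also look at error rates, calibration, and the surrounding context before deciding whether a gap reflects a harmful bias or a legitimate one.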
So, next time you work on an AI project, remember that bias isn’t just a flaw. It’s a tool. But like any tool, how you wield it makes all the difference.