Artificial intelligence is creeping in everywhere: our phones, our cars, our Netflix recommendations, our daily lives… and even our hiring processes. To be fair, it really does make life easier every day!
But surprise (or not): this great technological innovation has a big problem. It is biased, and not just a little. Why? Because it is mostly designed by men, using databases that reflect existing inequalities. The result? Algorithms that recycle clichés older than the Minitel…
When AI gets sexist without breaking a sweat
AI algorithms are not born sexist; no, we teach them our biases. Feed them skewed data and they do an XXL copy-paste, applying those biases as absolute truths. And then we act surprised at the damage!
“Sorry, you don’t exist”
Take facial recognition: in 2024, a study conducted by NIST (the National Institute of Standards and Technology) revealed that these systems were up to 10 times more likely to make mistakes on female and non-white faces. Great. Are you a Black woman? The algorithm might flat-out deny your existence. Convenient, especially if we start using it for border control or suspect identification.
“Sorry, Aya, but Steven is a better profile”
In 2018, Amazon's attempt at AI recruiting hit a wall… Its AI had quickly figured out that in tech, male resumes had historically been more successful, so it naturally blacklisted resumes containing the word "women's." The subtlety of a jackhammer.
And in 2024, history repeats itself: a Harvard study found that AI systems still favor male applications 85% of the time, because they are trained on ultra-gendered hiring histories. Congratulations, guys: you have a new ally in your career!
“Alexa, be nice and obey”
Why do almost all virtual assistants have a female voice? Easy: because a soft, feminine tone is considered more "pleasant." And so we end up with AIs perpetuating the image of the helpful, docile woman.
Until 2019, these assistants even responded to sexist jokes and insults with “thank you for the compliment.” Fortunately, after a big scandal, Apple, Google, and Amazon finally adjusted their settings!
Why are we still here? Spoiler: Tech has a diversity problem
Quick question: Who designs these AIs?
Answer: 88% men (AI Now Institute study, 2025).
Translation: diversity is not their strong suit. And inevitably, when you code among buddies who all share the same background, you don't always think to check whether the AI works well for other profiles.
And then there's the big trend of "we don't really know how the algorithm works, but let's trust it." A convenient excuse to look the other way! AI models are often black boxes, meaning that when they go wrong, it can take years to understand why.
How to prevent AI from becoming the perfect “Mansplainer”?
Fortunately, we can still fix it! Well, easier said than done, and there’s a lot of work ahead, but it’s up to us to roll up our sleeves and make things change!
Technical solutions
There are technical solutions to minimize biases, but keep in mind that there is no magic wand for now: when we try to fix a social bias with tech… we sometimes create another one! SMOTE, covered below, is a case in point.
Give it a fairer playing field
An AI is like a student: if we only give it biased examples, it will take them as absolute truth. To avoid this, we need to enrich its learning environment with more diverse and better-balanced data.
Data augmentation
So that AI stops believing that an "engineer" is necessarily a white man in a suit, we can enrich its training databases by artificially creating more diversity (a minimal code sketch follows the list below). This can be done through:
- Geometric transformations (rotation, image cropping)
- Color modifications (contrast, brightness)
- Synthetic data generation with neural networks like GANs (Generative Adversarial Networks)
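To make the first two bullets concrete, here is a minimal sketch using the torchvision library (its transform classes are real APIs; the assumption that we are augmenting face images is ours):

```python
# Minimal data-augmentation sketch with torchvision.
# Assumption: we are training on face images (e.g. for recognition).
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),    # geometric: small rotations
    transforms.RandomResizedCrop(size=224),   # geometric: random cropping
    transforms.ColorJitter(brightness=0.3,    # color: brightness shifts
                           contrast=0.3),     #        and contrast shifts
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
])
# Applied on the fly during training: each epoch sees slightly different
# variants of every image, widening what the model treats as "normal."
```

GAN-based generation (the third bullet) is heavier machinery: a generator network learns to synthesize brand-new, realistic examples of under-represented profiles, which can then be mixed into the training set.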
Rebalancing datasets
Currently, the datasets on which AIs are trained resemble a class where some students have 100 times more lessons than others. The result? AI always favors the same profiles. To correct this, we can:
- Undersample over-represented classes to reduce their influence.
- Oversample under-represented classes to better balance learning. A method like SMOTE (Synthetic Minority Over-sampling Technique) creates new synthetic examples by interpolating between existing minority samples (see the sketch right after this list).
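Both moves take a few lines with the imbalanced-learn library (SMOTE and RandomUnderSampler are its real classes; the dataset below is a synthetic toy for illustration):

```python
# Rebalancing sketch with imbalanced-learn, on a deliberately skewed toy dataset.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

# Toy dataset: roughly 90% majority class, 10% minority class.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print(Counter(y))  # heavily skewed, e.g. Counter({0: 897, 1: 103})

# Oversampling: SMOTE synthesizes new minority points by interpolating
# between existing minority neighbors.
X_over, y_over = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y_over))  # both classes now equally represented

# Undersampling: randomly drop majority examples instead.
X_under, y_under = RandomUnderSampler(random_state=0).fit_resample(X, y)
print(Counter(y_under))  # both classes equal, but less data overall
```

And remember the caveat from earlier: SMOTE's interpolated examples are only as good as the minority data they come from, so it can smuggle in new distortions of its own.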
Database audits and monitoring
Like a teacher reviewing homework, we need to regularly check what AI is learning. Automated analyses can detect hidden imbalances and biases, and tracking dashboards allow real-time adjustments to data diversity.
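Such an audit can start very small. An illustrative sketch with pandas (the library calls are real; the file name and the "gender" and "hired" columns are hypothetical placeholders):

```python
# Toy dataset audit: who is represented, and how are outcomes distributed?
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical training data

# Representation: what share of examples does each group get?
print(df["gender"].value_counts(normalize=True))

# Outcomes: does the positive-label rate differ across groups?
print(df.groupby("gender")["hired"].mean())
```

Two lines of groupby will not catch every hidden bias, but they do catch the embarrassing ones, and they are cheap enough to run on every dataset refresh.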
Reprogram AI so it stops making snap judgments
If we let an AI learn without supervision, it will mainly amplify existing inequalities. Fortunately, there are techniques to impose safeguards and prevent it from becoming a 1950s recruiter.
Fairness-Aware Learning
The idea is to integrate fairness directly into its learning. Several approaches exist (the first is sketched in code after this list):
- Reweighing: Adjusting data weights to prevent AI from favoring certain profiles simply because they are more numerous in historical data.
- Adversarial debiasing: A bit like an integrated bias detector, this method trains the AI to "forget" certain discriminatory characteristics (gender, origin, etc.) by pitting two neural networks against each other, one of which is tasked with spotting the bias.
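For reweighing, a hedged sketch in the spirit of Kamiran and Calders: each (group, label) combination gets a weight that makes group and label look statistically independent to the model. The file and column names are hypothetical:

```python
# Reweighing sketch: weight = P(group) * P(label) / P(group, label).
import pandas as pd

df = pd.read_csv("hiring_history.csv")  # hypothetical historical data

# Marginal distributions of the sensitive attribute and the label.
p_group = df["gender"].value_counts(normalize=True)
p_label = df["hired"].value_counts(normalize=True)
# Observed joint distribution of (group, label) pairs.
p_joint = df.groupby(["gender", "hired"]).size() / len(df)

# Over-represented combinations (e.g. men + hired) get weights below 1,
# under-represented ones get weights above 1.
weights = df.apply(
    lambda row: p_group[row["gender"]] * p_label[row["hired"]]
    / p_joint[(row["gender"], row["hired"])],
    axis=1,
)

# Most scikit-learn estimators accept these directly:
# model.fit(X, y, sample_weight=weights)
```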
Model regularization
We can impose constraints on algorithms to limit how much they lean on certain sensitive variables. The goal? Prevent the AI from clinging to historical trends that push it to discriminate.
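One way to picture such a constraint, sketched here in PyTorch for a binary classifier (the penalty and all names are illustrative, not a standard API): add a term to the loss that grows when average predictions drift apart between groups.

```python
# Fairness-regularized loss sketch (demographic-parity-style penalty).
import torch
import torch.nn.functional as F

def loss_with_fairness(pred, target, group, lam=1.0):
    """pred: predicted probabilities in [0, 1]; target: 0/1 floats;
    group: 0/1 tensor marking a sensitive attribute (illustrative)."""
    task_loss = F.binary_cross_entropy(pred, target)
    # Penalty: gap between the mean prediction of each group.
    gap = (pred[group == 1].mean() - pred[group == 0].mean()).abs()
    # lam sets the trade-off between raw accuracy and fairness.
    return task_loss + lam * gap
```

The design choice here is the trade-off knob: crank lam up and the model is pushed toward equal treatment between groups, at some cost in raw predictive accuracy.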
Post-processing
Even after an AI has made a decision, its biases can still be corrected (the second approach is sketched in code after this list):
- Score recalibration: Adjusting final evaluations to prevent systematic under-evaluation of a category of candidates.
- Adjusting decision thresholds: Modifying acceptance criteria to ensure fairness across different categories.
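The second bullet fits in a few lines of NumPy. A hedged sketch (function names and numbers are ours, purely illustrative): pick a separate cutoff per group so that each group is accepted at roughly the same rate.

```python
# Per-group decision thresholds for comparable acceptance rates.
import numpy as np

def group_thresholds(scores, groups, target_rate=0.3):
    """Pick, per group, the score cutoff that accepts roughly
    the same fraction (target_rate) of that group's candidates."""
    return {
        g: np.quantile(scores[groups == g], 1 - target_rate)
        for g in np.unique(groups)
    }

def decide(scores, groups, thresholds):
    # Accept a candidate when their score clears their own group's cutoff.
    return np.array([s >= thresholds[g] for s, g in zip(scores, groups)])

scores = np.array([0.62, 0.48, 0.71, 0.55])  # toy model scores
groups = np.array(["A", "A", "B", "B"])      # toy group labels
print(decide(scores, groups, group_thresholds(scores, groups, 0.5)))
```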
Human solutions
Fighting AI's gender biases is not just a technical problem. It's also a human issue, and it starts with the design of algorithms. An AI reflects the people who program it: if the teams behind it remain homogeneous, the biases will remain deeply ingrained.
Diversifying Human Teams
Currently, artificial intelligence is primarily developed by white men from similar academic backgrounds. This results in AIs that consider male candidates more qualified or struggle to recognize female and non-white faces. Integrating more women and minorities into the tech industry allows for diverse perspectives and helps anticipate biases from the outset. And let’s be clear: this isn’t just an ethical issue—it’s about efficiency. An AI that better understands the diversity of the real world will inevitably be more relevant and effective!
Raising Awareness and Providing Training
An algorithm is just a tool. But if those who design it aren’t aware of the biases they introduce, that’s where the problems begin. Engineers and data scientists need to be trained on algorithmic biases, understand how they arise, and learn how to correct them. Because coding without questioning biases is like cooking without tasting the food—you risk serving up a disaster. That’s why it’s essential to integrate dedicated modules on AI ethics and fairness into training programs. Without this, we’ll continue building technologies that reinforce inequalities instead of addressing them.
Holding Companies Accountable
Big tech companies need to go beyond empty rhetoric and implement real inclusion policies and bias monitoring. This means conducting regular AI audits, establishing ethics committees with independent experts, and setting clear diversity goals in recruitment. Because if we keep waiting for things to change on their own, we’ll just end up with algorithms explaining to us that inequalities are, after all, “mathematically logical.”
Bottom line? If we want AI to be truly intelligent, we need to surround it with people who don’t all think alike!