22 March 2022
4 min read

Jim Stolze and Nicolas Lierman in conversation: The problem of biased algorithms

In this 5-part series, two innovation heavyweights go head-to-head to discuss the current and future state of affairs. AI entrepreneur and author Jim Stolze and MultiMinds’ Head of Innovation Nicolas Lierman have an in-depth conversation on innovation and technology. Part 3: are algorithms biased? And how can we resolve that?

Many philosophers rejoice in the rise of AI. Finally, they have something useful to think about. Ethical dilemmas concerning the development of AI technology, for example. And no wonder: AI poses massive ethical questions. So companies are turning to ethicists to deliberate on the development of this technology. AI entrepreneur Jim Stolze and Nicolas Lierman discuss one of the most pressing ethical issues in AI today: biased algorithms.

Jim, in a contribution to the Dutch newspaper De Volkskrant, you argue that tech companies only have ethicists on their payroll for show. Could you elaborate on that?

Jim Stolze: “In the article, I compare AI to those trick mirrors that distort reality, making you look cartoonish. If AI training data isn’t handled carefully, algorithms can distort reality in just the same way. AI companies are aware of this danger, but most of them are going about it all wrong. The ethical experts they hire are not involved from the start but called in afterwards to add a superficial ethical gloss. I call it ‘ethics washing’. They’re toothless watchdogs.”

So what would be a better approach to ethical reflection in the digital world?

Jim: “I think we need independent supervision to challenge the companies. Who collected the data? What’s the goal of the algorithm? Who built it, and did they do a thorough job with the data? These questions are critical and should be judged by independent experts.”

Nicolas, do you agree that we need independent ethical supervision?

Nicolas Lierman: “I agree with Jim that ethics in big tech is often more about PR than genuine reflection. But I don’t think an external committee will solve the problem. The issues are rooted in the fact that the people building the algorithms are a very homogeneous group: white, heterosexual men.”

“This is a problem in engineering in general, but it’s amplified dramatically in AI. Adding yet another ethics committee drawn from the same homogeneous group won’t change much. I believe the answer lies in promoting diversity in tech companies. We’re fighting hard for the inclusion of minorities almost everywhere, but the tech world is still lagging behind.”

Jim: “It’s true that algorithms often reflect society, problems and biases included. Diversity is certainly an issue. This is exactly the point of the mirror metaphor. One example illustrates it perfectly. A university recently experimented with a facial recognition algorithm to grant access to its buildings. The system worked perfectly … for white people.”

“There was one black professor who couldn’t get in. It turned out the algorithm had been trained almost exclusively on images of white faces, so it could only reliably recognise white people. There are plenty of examples of AI chatbots making racist or sexist remarks, because that’s what they pick up from human interactions.”
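The mechanism behind that failure is easy to reproduce. Below is a minimal, hypothetical sketch (synthetic data and simulated groups, not the university’s actual system): a classifier trained on a set dominated by one group scores noticeably worse on the group it barely saw.

```python
# Toy demonstration of training-data imbalance; everything here is
# synthetic and the "groups" are simulated, not real demographics.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Fake 2-D "face embeddings"; each group has its own distribution
    # and its own true decision boundary.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Training set: 950 samples from group A, only 50 from group B.
X_a, y_a = make_group(950, shift=0.0)
X_b, y_b = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Balanced held-out sets: the model does well on the majority group
# and markedly worse on the group it barely saw during training.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(1000, shift)
    print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
```

The remedy follows the same logic: rebalance or augment the training data rather than blame the model.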

Nicolas: “I saw another example recently of a vision API that had to describe profile pictures used on resumes. The most common description for men was ‘professional’. For women, it was ‘smiling’. The engineers recreate their own worldview in the algorithms.”
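That kind of disparity is straightforward to surface once you log the API’s output per group. A hedged sketch of such an audit, with made-up labels standing in for real API responses:

```python
# Tally the descriptions a vision API returns per group and compare the
# most frequent ones. The labels below are invented for illustration.
from collections import Counter

labels_by_group = {
    "men":   ["professional", "suit", "professional", "smiling", "professional"],
    "women": ["smiling", "smiling", "professional", "hair", "smiling"],
}

for group, labels in labels_by_group.items():
    top, count = Counter(labels).most_common(1)[0]
    print(f"{group}: most frequent label = '{top}' ({count}/{len(labels)})")
```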

Don’t these biases also teach us something about ourselves?

Jim: “Precisely. We should use this to our advantage. We are confronted with biases we may not have been aware of. If we see a bias in our data, it should be an opportunity to reflect. Amazon’s HR department used an algorithm to help preselect candidates based on their resumes. It turned out the algorithm was biased against women.”

“What did Amazon do? They just decided to kill the project. Such a missed opportunity! What they should have done was go back to the drawing board and figure out why the algorithm favoured resumes of male candidates over those of female candidates. It’s a learning process. You shouldn’t get mad at the mirror for having a bad hair day. Fix the problem instead!”
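Going back to the drawing board starts with measurement. One common first check, sketched here with hypothetical outcomes rather than Amazon’s actual data, is to compare selection rates across groups, for instance against the “four-fifths” rule of thumb used in US hiring audits:

```python
# Compare a screening model's selection rates by gender. The outcomes are
# made up; the four-fifths threshold is a common rule of thumb in US
# hiring audits, not anything Amazon published.
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

# 1 = shortlisted, 0 = rejected (hypothetical model outputs).
shortlisted = {
    "male":   [1, 1, 0, 1, 1, 0, 1, 1, 0, 1],
    "female": [0, 1, 0, 0, 1, 0, 0, 0, 1, 0],
}

rates = {group: selection_rate(d) for group, d in shortlisted.items()}
ratio = rates["female"] / rates["male"]
print(rates, f"impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("Potential adverse impact: inspect the features and training data.")
```

A ratio well below 0.8 doesn’t explain the bias on its own, but it tells you where to start digging into features and training data.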
