Navigating AI bias: How to be aware of and limit bias

24/4/2024
PMO & Governance
AI
Product & Tools
Erik David Johnson
Chief AI Officer at Delegate
Aki Antman
CEO & Founder of Sulava
Peter Kestenholz
Founder - Projectum

Accept there is a bias

Just as everyone reading this text has bias based on their knowledge, experience, and opinions, AI models will inevitably have bias based on the data they have been fed.

For example, if you ask a generative AI model to create an illustration of a doctor, the result will likely be a white man. This is how doctors have typically been depicted in Western history and culture – and thus in the data on which the model is built. Likewise, if you ask an AI to hire people similar to those already on your payroll, it will carry forward whatever biases your current hiring reflects.

In other words, when biases, such as those related to skin color, sexual orientation, or education level, are hidden in the input we feed artificial intelligence, they manifest in the output. The risk, therefore, is not only that AI can perpetuate biases and stereotypes but that it can also reinforce them.

The key is to be very aware that bias always exists and take the necessary precautions.

It IS possible to limit AI bias

Essentially, as a user of AI tools, you need to be critical of your sources – just as you should be when gathering information on the internet. Information, and we as humans, will always be biased to some extent. That is the case with your newspaper – if you still read one – and with your search engines and social media, which tailor content to your (and often commercial partners') preferences.

In fact, generative AI will often be less biased overall because it has no inherent interest in promoting commercial agendas or opinions. Additionally, even as an ordinary user without programming skills, you can instruct AI models on what you want and do not want, which is much more complicated in, for example, a Google search. In the same way, you can ask the model to be transparent about the sources behind its output.

Additionally, guardrails or content filters can help establish ethical boundaries and ensure that AI models are not misused – for example, to provide a recipe for making a bomb, as the classic example goes. But this is not without challenges. What if you ask your AI assistant for a good horse-meat recipe? Should the guardrail kick in and explain that eating these animals is universally wrong, simply because that is the norm in Western culture, when it is not necessarily the case in other parts of the world?
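To make the idea concrete, here is a minimal sketch of how such a guardrail might triage prompts. Real content filters use trained classifiers and policy models rather than keyword lists; the deny-list, the flag-list, and the three outcomes below are purely illustrative assumptions, not any vendor's actual implementation.

```python
# Illustrative guardrail sketch: triage a prompt into one of three outcomes.
# Production systems use ML classifiers, not keyword matching.

BLOCKED_TOPICS = {"build a bomb", "make explosives"}   # hypothetical hard deny-list
CULTURALLY_SENSITIVE = {"horse meat", "horse-meat"}    # hypothetical: answer, but add context

def apply_guardrail(prompt: str) -> str:
    """Return 'blocked', 'flagged', or 'allowed' for a user prompt."""
    text = prompt.lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return "blocked"   # refuse outright
    if any(topic in text for topic in CULTURALLY_SENSITIVE):
        return "flagged"   # answer, but note the cultural framing
    return "allowed"
```

The three-way split mirrors the dilemma in the text: some requests should be refused outright, while culturally relative ones are better answered with added context than blocked.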

AI as your assistant

When starting a conversation with a large language model, it is about setting the context: being specific, and priming and framing your question. That way, the response or solution will be less prone to bias. At the same time, we should be open about the fact that what an AI model produces is not necessarily the complete truth.

That's one of the reasons why, at least for now, we should see our AI tools as assistants. They're welcome to think independently and make suggestions, but fundamentally, they need to be trained in the organization's values, ways of working, communicating, etc. Just like a new employee needs a good leader.

And when the AI does something wrong, it is important to report it so the model's behavior can be adjusted and continuously fine-tuned, making it better and better at handling and minimizing its bias.

“It's a good thing that we're discussing bias in AI, but it is even better that we can actually control it. I think we have an upside in being better at managing bias today than we have been able to do for the last 10 years in technology.”
Peter Charquero Kestenholz, Founder, Head of Innovation & AI at Projectum

This post is part of a communal effort in The Digital Neighborhood and was originally posted by Delegate.

AUTHOR
Erik David Johnson
Chief AI Officer at Delegate

With a background in AI research, Erik David Johnson is a researcher and speaker on artificial intelligence and language technology. He's passionate about demystifying AI and putting it to good use in organizations.

AUTHOR
Aki Antman
CEO & Founder of Sulava

President of AI & Copilot at The Digital Neighborhood

AUTHOR
Peter Kestenholz
Founder - Projectum

Peter Kestenholz is a successful entrepreneur and business leader with 20 years of experience founding and growing the company Projectum. Peter has been a recognized Microsoft MVP for 13 years straight, a Fast Track Architect for the second year in a row, and is a member of the Forbes Technology Council.
