The Role of Bias in AI

Artificial Intelligence (AI) has evolved so significantly over the past few years that we are now witnessing its use across all areas – from healthcare to criminal justice. But how can we be sure that these systems are making fair and unbiased decisions?

The role of bias in AI has long been an important debate. One question is whether AI’s decisions can be less biased than humans’, given that human decision-making is shaped by personal and societal experience. But then, wouldn’t AI systems risk making those biases even worse?

So we asked industry experts to share their knowledge and shed some light on the topic.

 

What does AI bias mean?

For Cibe Sridharan, Senior Data Scientist at Providence India, AI bias is a form of prejudice or discrimination that we inherently assume while solving a particular use case.

Ansgar Koene, Global AI Ethics and Regulatory Leader at EY, adds that bias becomes an issue when an AI system generates outcomes that differ between parties, but the difference is not based on anything that would actually make sense for the task at hand.

Indeed, a distinction is needed here. Not everyone should get the same kind of response, because for some people a particular response is genuinely more appropriate than for others. Bias arises when the difference in responses has no justification in the task being performed. For instance, a system might respond differently to men and women applying for a job when there is no rational reason for any difference between the two.

When we talk about AI bias, then, the question is whether the differentiation we see in the outcomes has a clear justification within the task. Trying to build a completely unbiased AI without being clear about what we mean by bias is senseless: it amounts to asking for a system that just gives us randomly distributed outputs. And that’s not what we want!

We want the system to differentiate, but we want that differentiation to be based on what is relevant to the particular task.

 

The types of AI biases

According to Cibe, the most commonly occurring biases are:

  • Population bias,
  • Gender bias,
  • Automation bias.

Population bias occurs when the data is not sampled correctly, so that the sample is implicitly skewed toward one particular group of targets.

Gender bias occurs when outcomes differ unjustifiably between gender groups, such as men and women.

Automation bias occurs when we rely on an automated system’s outputs even though the system is prone to errors.
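To make the first of these concrete, here is a minimal sketch of how population bias might be surfaced in practice: compare each group’s share of a training sample against its share of a reference population. The group names, numbers, and helper functions below are hypothetical illustrations, not something the interviewees prescribe.

```python
from collections import Counter

def group_shares(labels):
    """Return each group's share of the sample."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def sampling_gaps(sample_labels, reference_shares):
    """Per-group difference between sample share and reference share.
    A large positive or negative gap suggests the sample over- or
    under-represents that group, i.e. a possible population bias."""
    shares = group_shares(sample_labels)
    return {g: shares.get(g, 0.0) - ref for g, ref in reference_shares.items()}

# Hypothetical example: women make up 50% of the target population
# but only 20% of the training sample.
sample = ["male"] * 800 + ["female"] * 200
reference = {"male": 0.50, "female": 0.50}
print(sampling_gaps(sample, reference))
# -> roughly {'male': 0.3, 'female': -0.3}: the sample is heavily skewed toward men
```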

Ansgar emphasizes that there is a question as to whether we are looking at bias against individuals or against demographic populations. In other words, are we trying to ensure that people in one demographic get the same kinds of outcomes as people in another, or that individuals whose inputs are equivalent relative to the task get the same kinds of outputs?

There are many issues to consider, such as existing societal and economic differences – access to higher education, for instance, or socio-economic class. Should we take these into account or not? There are also different views on where the bias comes from. Is it bias that stems from prior assumptions, because we are choosing input parameters that favor one group over another? Or is it bias that comes out of historic data, so that we are reproducing past differences in outcomes?

There is also the question of how we formulate the task to begin with. This shapes the bias, the people affected, and the performance of the system.

There is quite a range of types of bias, and that is one of the challenges. We cannot just reach for a single technical fix, because not all of these issues are technical. Bias has to do with the fact that our current society is itself biased, so if we simply automate the way we already do things, we build even more bias into the system.

 

The impact of AI bias

For Ansgar, this really comes down to the question of how we are using this kind of technology.

The impact can be significant when these systems are applied to hiring or access to education, where they shape people’s lives and futures. The same goes for the public sector – for instance, when governments use AI systems to assess whether someone is likely to be committing benefit fraud or should qualify for certain benefits.

Cibe also points out that gender bias is extremely important in many domains. For instance, female buyers’ propensities come out comparatively different from male buyers’, and this is usually due to a gender bias.

Hence, the level of significance is tied to the impact of the application the system is used for. The issue is thinking through what we are doing in context – not just the immediate outcomes, but the broader outcomes of how people will be affected by the system.

Ansgar also underlines that our current way of doing things, through human decision-making, is not free of bias either. The question, then, is whether it is better to continue with the current system – where bias in a human decision-maker can at least be remedied by reassigning them – or to change things.

On the other hand, once an AI system is introduced, bias becomes easier to observe, and if you put effort into mitigating unintended bias, it can be minimized. You then need to weigh whether using the AI is more beneficial than continuing with the human system.

 

Is human intervention vital to limit AI bias?

Human oversight is very important because AI systems have no actual understanding of the world.

Indeed, Ansgar points out that they can only assess the data they receive, and that data is far more limited than your own lived experience of engaging with the world as a human being.

We have systems that cannot empathize or understand what it means for certain decisions to be applied to people. Hence, Cibe believes that human intervention is needed to limit AI bias: manual intervention will always be required at each step to keep the AI application’s behavior free from bias.

On the other hand, Ansgar continues, these systems cannot hold a grudge the way a human can. This is why you should use the two together: let the technology provide data-driven analysis, but keep humans in the loop to assess what the actual implications are.

Human oversight is therefore vital, but it is also important to recognize its limitations, especially for systems running at scale that work properly 95% of the time but occasionally fail. Humans are very bad at continually tracking something that is deemed to work fine and then catching the rare case where it doesn’t.

What we want is to combine AI systems, with their ability to identify patterns in data, with a non-AI safety system that can flag when the AI is behaving in strange ways. Humans need access to the system’s outcomes so they can mitigate its weaknesses and provide a safeguard around the non-linear properties of AI systems.

One of the big challenges with AI systems is that they are not linear, so it is very difficult to guarantee the behavior a system will show for a given use case. A non-linear system can behave completely differently on nearly identical inputs, and there are no guarantees against that.
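A toy example, purely illustrative rather than drawn from the interview, shows why non-linearity undermines behavioral guarantees: a non-linear decision function can flip its output between two almost identical inputs.

```python
import math

# Toy non-linear "decision function": not a real model, just an
# illustration of how a tiny input change can flip the outcome.
def decide(x):
    return 1 if math.sin(25 * x) > 0 else 0

print(decide(0.124))  # -> 1 (approve)
print(decide(0.126))  # -> 0 (reject), despite a tiny change in the input
```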

 

Minimizing the impact of bias in AI  

Ansgar underlines that to minimize the impact, you need good documentation of the system, and you need to be aware that it will affect many different populations – not everything will work the same way for everyone. You need to know why you are making each decision so you can justify it rationally, and you need a clearly documented development process along with ongoing monitoring.

You need to recognize that systems change, and there is no guarantee a system won’t become biased in the long run. Continuous monitoring is therefore vital to check whether outcomes are diverging for different kinds of groups. You also need to be careful with automated machines: they have no actual understanding of what is going on, so they cannot tell when they are creating unintended bias.

To minimize the impact of bias in AI, you need to put effort into understanding the problem you are solving, and into assessing whether the outcomes still match the requirements.

Hence, his advice is to:

  • Have a continuous feedback loop
  • Go back and make adjustments when needed
  • Have justifications for adjustments you are making

Cibe adds that doing extensive research about the domain can also help reduce these biases.
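As a concrete illustration of the continuous feedback loop above, here is a minimal sketch of what a periodic monitoring check might look like, assuming each decision is logged together with the group it concerns. The threshold, group names, and function names are hypothetical, not something the interviewees prescribe.

```python
def positive_rates(decisions):
    """decisions: list of (group, outcome) pairs, with outcome in {0, 1}.
    Returns the positive-outcome rate per group."""
    totals, positives = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

def parity_alert(decisions, threshold=0.10):
    """Flag the batch when the gap between the highest and lowest group
    positive rates exceeds the threshold - a prompt for human review,
    not an automatic verdict that the gap is unjustified."""
    rates = positive_rates(decisions)
    gap = max(rates.values()) - min(rates.values())
    return gap > threshold, rates, gap

# Hypothetical weekly batch of hiring-screen outcomes.
batch = [("men", 1)] * 60 + [("men", 0)] * 40 + \
        [("women", 1)] * 45 + [("women", 0)] * 55
alert, rates, gap = parity_alert(batch)
print(rates, f"gap={gap:.2f}", "ALERT" if alert else "ok")
# -> {'men': 0.6, 'women': 0.45} gap=0.15 ALERT
```

An alert like this would feed the feedback loop described above: a human reviews whether the gap has a justification within the task and, if not, makes and documents an adjustment.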

 

Will we ever be able to prevent AI systems from being biased?

Ansgar believes that we will never be able to fully prevent AI systems from being biased, because bias is in the eye of the beholder. Different outcomes will always be justified in different ways, and people will judge them differently depending on the context and the groups involved.

We cannot say that a system will be without bias, but we can try to make sure there are clear reasons why the system is the way it is. We must then discuss whether these reasons are acceptable to all of the parties being impacted by it.

Societies change all the time, so we need to evolve with them and keep checking whether these criteria are still acceptable or need to change.

Cibe, on the other hand, thinks that we will be able to reduce bias, but there are a lot of permutations and combinations to think through for each use case before fixing it. He adds that this will evolve into a field of its own in the near future.

 

Special thanks to Cibe Sridharan and Ansgar Koene for their insights on the topic!
