Artificial Intelligence: ‘ethics are always relevant, just differentiated’

I had the pleasure of talking to Bogdan Grigorescu, Head of Quality Assurance at Afiniti, about ethics and bias in Artificial Intelligence (AI).

 

Ethics in AI

AI-enabled technology has many use cases.

In art, computer vision and machine learning help detect fakes. In the military, the same type of technology is used to detect war crimes by analyzing images taken in the field of ordnance prohibited by international conventions. Images can be captured in great detail and then compared and analyzed.

But for individuals, the main question is: what happens to my data?

None of this data belongs to those who collect or process it. It belongs to someone else, i.e. the data subject from whom it was obtained. That is where the Ethics questions come to light.

What happens with the data? Where does it end up? Who sees it? Who does what with it? Will it be kept forever, or for a limited time? So many questions, and no one to answer them all in a coherent, easy-to-understand way.

It doesn’t apply equally to every single use case, of course. There are cases where it is critical – in the medical and legal fields, for instance – and others where it is important but not that critical. For instance, if a company possesses your contact details but no financial information, that is still important but not that critical.

But the Ethics case is always relevant. It’s differentiated by case, but always relevant.

For example, in a consumer-to-business conversation, I need to know if a record is kept of our interaction – be it a phone call, video conference, chat, email, text messages, etc. And if on the other end of the line is an automated system (a bot – chatbot, voice bot, etc.), I as a consumer should be informed about that upfront, before the start of the conversation.

This matters a lot, especially in written contact, as it’s almost impossible to figure out from the start that you are not talking to a human. Thus, customers should always be informed upfront that they are talking to an automated system, and told if they are handed over to a human adviser at some point. That is the ethical approach, and many businesses have implemented the necessary mechanisms and abide by this principle.
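To make the principle concrete, here is a minimal sketch of upfront bot disclosure, assuming a simple session abstraction. The class name, callback, and message wording are illustrative assumptions, not any particular framework's API.

```python
# A minimal sketch of upfront bot disclosure (illustrative, not a real API).
class ChatSession:
    DISCLOSURE = ("You are chatting with an automated assistant. "
                  "You can ask for a human adviser at any time.")

    def __init__(self, send):
        self.send = send            # callable that delivers a message to the customer
        self.send(self.DISCLOSURE)  # disclose before any other turn

    def handoff_to_human(self):
        # Disclose the transition too, so the customer always knows
        # whether a human or a bot is on the other end.
        self.send("Transferring you to a human adviser now.")

# Usage: any transport works; here, just print to stdout.
session = ChatSession(send=print)
session.handoff_to_human()
```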

So Ethics are always relevant. Just in a differentiated way from one context to another.

 

Ethical concerns with AI

There are many ethical concerns with AI.

Human agency is starting to give away more and more decision-making power, even if in limited capacity, to automated systems. That’s not good.

It does depend on what the decision is. Sometimes it is a low-level technical decision with no real impact. But if the decision is life-impacting, then that’s not good. The same applies if it is difficult to understand. Most people don’t understand why a given output was generated – why that and not something else – and they just apply it blindly because the “computer says so”. But in our connected and globalized world, there are unforeseen impacts that could be life-impacting.

For example, your loan has been denied. Why? The advisor will say ‘I looked into this and that and then I got that data’. Even if you try to challenge it by saying that it doesn’t look right, they still won’t be able to explain, because the output lacks explainability. The decision was made using your data, but you, the data subject, are the one affected, and you cannot know why.

So although you are the one impacted, you can’t know why. You can’t even challenge it in a court of law, because they can’t explain it. Even if they want to explain it, they have to ask ‘who owns that input?’ – a question that most of the time cannot be answered.
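By way of contrast, here is a minimal sketch of what an explainable decision could look like, using an interpretable toy model. Everything here – the features, the labels, the library choice (scikit-learn) – is an illustrative assumption, not how any real lender works.

```python
# With an interpretable model, each feature's contribution to a decision
# can be read off directly – the kind of answer a data subject could
# actually challenge. Toy data; all features and labels are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [income_k, debt_ratio, years_at_address]
X = np.array([[55, 0.40, 2], [90, 0.10, 8], [30, 0.65, 1], [70, 0.25, 5]])
y = np.array([1, 1, 0, 1])  # 1 = approved, 0 = denied

model = LogisticRegression().fit(X, y)

applicant = np.array([40, 0.55, 1])
decision = model.predict(applicant.reshape(1, -1))[0]

# Per-feature contribution to the decision score (coefficient * value):
for name, contrib in zip(["income_k", "debt_ratio", "years_at_address"],
                         model.coef_[0] * applicant):
    print(f"{name}: {contrib:+.2f}")
print("decision:", "approved" if decision == 1 else "denied")
```

An opaque system offers no such breakdown, which is exactly the gap described above.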

Who owns the output of the AI system? Is it the company that uses it, or the company that devises the system, or a third party like an external provider? Or is it somebody else?

The company that devises the AI system may be a software company; then there are service providers that actually use the output of the AI system and provide services to businesses; and then there is the business you deal with as a customer. So, three different entities. Hence the ownership is not clear. Who are you going to sue if your consumer rights are breached?

Ownership is not clear because there are no standards, laws, and regulations.

At present, there are no laws, no regulations, no agreed standards. Work is in progress but we have some way to go before the first regulation is issued.

This makes life-impacting situations possible. For example, people end up with criminal records because a judicial decision is based partly on the output of AI systems, yet the judge cannot explain the decision in full, as the entity or company owning the AI system invokes IP rights. This is essentially a breach of human rights, and it is possible because of the absence of laws and regulations for AI systems.

But what standards to follow?

Do you see many companies applying their own ethics? It doesn’t really happen. You can run your internal audits, but in the case of AI systems there is unforeseen, far-reaching impact, and there should be independent audits of these systems.

If, for example, there is significant bias in a process, then introducing AI systems without having an Ethics framework will essentially automate the bias, even scale it. This is one of the major reasons why Ethics is important in every business and in every domain.
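A small synthetic experiment makes the point. Assuming historical decisions were biased against one group, a model trained on those labels learns to reproduce the gap; all the numbers and names below are illustrative.

```python
# Sketch: training on historically biased labels automates the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # two applicant groups, 0 and 1
score = rng.normal(0, 1, n)          # genuine creditworthiness signal
# Historical decisions carried a penalty against group 1:
approved = (score - 1.0 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([score, group]), approved)
pred = model.predict(np.column_stack([score, group]))

for g in (0, 1):
    print(f"group {g}: historical rate {approved[group == g].mean():.2f}, "
          f"model rate {pred[group == g].mean():.2f}")
# The model's approval rates mirror the historical gap – the bias is
# now applied automatically, consistently, and at scale.
```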

There are so many discussions around it but what comes of it?

Over the past two to three years, there have been a lot of discussions about issuing standards so that you can actually measure and detect problems in a coherent way across jurisdictions. Some discussions end with practical actions, but a lot of others don’t result in anything, unfortunately.

It’s time to do something and take action!

Practical actions take time but you have to start somewhere and keep at it.

 

Bias in AI-driven technology

Bias cannot be eliminated. Some biases are good and necessary to have. You don’t want to treat criminals the same way you treat law-abiding citizens. You have to be biased against unlawful behaviors or low standards. There are use cases where bias is good and others where it is bad.

Bias cannot be eliminated totally but it can always be reduced. It can be reduced continuously but never to zero.

 

To reduce bias:

First of all, you have to define the problem to solve.

To do that, you have to talk to the stakeholders. The majority of stakeholders are not the obvious ones. There are the sponsors and the people that design the solution. There are also the people that implement and use it – the engineers and the operational staff that serve customers. But customers are also stakeholders that actually benefit from those services. Vendors too can be stakeholders in certain contexts.

You need to identify and engage with as many stakeholders as possible in order to define the problem to solve well.

You need to have inclusion and diversity.

People from different backgrounds are affected in different ways by the output of AI systems. So, you need inclusion and diversity – not only because it is good in general, but also because it brings better results.

Monitoring what comes out of what you do is also beneficial. Getting feedback helps you understand what problems there are and make the necessary changes (or confirm that things are going well and keep going).
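As one possible shape for that monitoring step, here is a sketch that computes a simple approval-rate gap between two groups per batch of logged decisions, so drift in bias becomes visible early. The field names, group labels, and alert threshold are all assumptions for illustration.

```python
# Sketch: monitor a fairness signal (demographic-parity gap) per batch.
from statistics import mean

def parity_gap(decisions):
    """Difference in positive-outcome rate between groups 'A' and 'B'."""
    rate = {g: mean(d["approved"] for d in decisions if d["group"] == g)
            for g in ("A", "B")}
    return rate["A"] - rate["B"]

# Example batch of logged decisions (hypothetical log format):
batch = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
gap = parity_gap(batch)
if abs(gap) > 0.2:  # threshold is a per-use-case judgment call
    print(f"bias alert: approval-rate gap {gap:+.2f}")
```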

 

A future for a safe and trustworthy AI?

I don’t think AI will ever be safe. 100% safety doesn’t exist. Technology will never be completely safe because nothing will ever be.

Can it be safer? Yes!

Can it be totally safe? No.

It is all driven by context, and most of the time people don’t understand the context. The better you understand the context, the safer what you do is. Context is the driving force.

The general consensus on AI systems is that if they are not used as tools, then people will end up being tools of AI systems. Use AI as a tool to help you do a better job, and continuously improve it.

Use AI as a tool for good to help to make life better for all.
