AI can be trained to avoid bias using algorithms

Following earlier criticism that AI systems carry bias embedded deep within their code, a study out of California has found that, with new algorithms, machines can be trained to avoid learning prior prejudice.

Stanford University, which conducted the research, worked alongside the University of Massachusetts Amherst to address “unfair” or “unsafe” outcomes produced by algorithms and AI systems.

How it works

To address these issues, the universities developed mathematical formulas that allow algorithms to be trained to avoid prejudice around characteristics such as race and gender.

In testing, the method avoided producing GPA predictions that systematically overestimate or underestimate results for either gender.
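
The study's exact formulas are more involved, but a minimal sketch of the general idea is shown below, assuming a simple per-group error comparison: a candidate model's GPA predictions are accepted only if its average prediction error is similar across gender groups. The data, the `passes_fairness_check` helper and its tolerance are illustrative assumptions, not the researchers' actual code.

```python
import numpy as np

def passes_fairness_check(y_true, y_pred, gender, tolerance=0.1):
    """Accept a model only if its mean prediction error is similar across gender groups.

    A model that systematically overestimates GPAs for one group and underestimates
    them for another shows a large gap between per-group mean errors and is rejected.
    """
    errors = y_pred - y_true
    group_means = [errors[gender == g].mean() for g in np.unique(gender)]
    return max(group_means) - min(group_means) <= tolerance

# Hypothetical data: true GPAs, model predictions and a gender label per student.
rng = np.random.default_rng(0)
y_true = rng.uniform(2.0, 4.0, size=200)
gender = rng.choice(["F", "M"], size=200)
fair_pred = y_true + rng.normal(0.0, 0.1, size=200)          # roughly unbiased predictions
skewed_pred = fair_pred + np.where(gender == "M", 0.2, -0.2)  # systematically skewed by gender

print(passes_fairness_check(y_true, fair_pred, gender))    # typically True
print(passes_fairness_check(y_true, skewed_pred, gender))  # typically False
```

In practice such a constraint would be enforced with statistical confidence bounds rather than a single point comparison, but the structure of the check is the same.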

Zach Jarvinen, head of product marketing for AI and Analytics at OpenText, said: “As we move into an era in which organizations rely more and more on machine-enabled decision making, we must confront the ethical questions raised by AI head-on.”

Jarvinen added: “The best way to prevent bias in AI systems is to implement ethical code at the data collection phase. This must begin with a sample of data large enough to yield trustworthy insights and minimize subjectivity. Thus, a robust system capable of collecting and processing the richest and most complex sets of information, including both structured and unstructured data such as textual content, is necessary to generate the most accurate insights.”

A bias that runs deep

The machines are not the only target of criticism for bias; their developers have also been condemned for a lack of diversity. This suggests the problem may need to be dealt with at a much deeper level, and it raises questions over who is creating AI and who they are designing it for.

“Data collection principles should be overseen by teams representing a rich blend of views, backgrounds, and characteristics (race, gender, etc.). In addition, organizations should consider having an HR or ethics specialist working in tandem with data scientists to ensure that AI recommendations align with the organization’s cultural values,” Jarvinen continued.

However, the head of product marketing also added: “Of course, even a preventive approach like the one outlined above can never safeguard data entirely against bias. It is therefore critical that results are examined for signs of prejudice. Any noteworthy correlations among race, sexuality, age, gender, religion and similar factors should be investigated. If a bias is detected, mitigation strategies such as adjustments of sample distributions can be implemented.

“With the stakes so high, it is vital that those in the industry start out with a clear goal that aligns to ethical values and routinely monitor AI practices and outcomes.”
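
As a rough illustration of the kind of checks and sample adjustments Jarvinen describes, the sketch below measures the gap in positive-outcome rates across groups of a sensitive attribute and, if a gap is flagged, derives per-record weights that rebalance the sample. The data, function names and threshold are assumptions made for illustration, not OpenText tooling.

```python
import numpy as np

def outcome_gap(outcome, attribute):
    """Per-group positive-outcome rates and the gap between the best and worst group."""
    rates = {g: outcome[attribute == g].mean() for g in np.unique(attribute)}
    return rates, max(rates.values()) - min(rates.values())

def reweight_samples(attribute):
    """Weight each record inversely to its group's frequency so that
    under-represented groups count more during training."""
    groups, counts = np.unique(attribute, return_counts=True)
    weights = {g: len(attribute) / (len(groups) * c) for g, c in zip(groups, counts)}
    return np.array([weights[g] for g in attribute])

# Hypothetical data: one binary outcome and one sensitive attribute per record.
rng = np.random.default_rng(1)
attribute = rng.choice(["A", "B"], size=1000, p=[0.8, 0.2])
outcome = (rng.random(1000) < np.where(attribute == "A", 0.6, 0.4)).astype(int)

rates, gap = outcome_gap(outcome, attribute)
print(rates, gap)  # a noticeable gap would flag the dataset for investigation
if gap > 0.1:      # illustrative threshold
    sample_weights = reweight_samples(attribute)  # e.g. passed to a model's fit(..., sample_weight=...)
```

Reweighting by group frequency is only one possible adjustment of the sample distribution; resampling, or collecting more data for under-represented groups, are common alternatives.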

 
