AI Ethics and Governance

We live in an era in which our reliance on the Digital Economy, driven and nurtured by Digital Technology, has deepened since the pandemic pushed the majority of businesses to adopt new transformative technologies, especially Artificial Intelligence (AI).

This accelerated adoption and development has heightened the importance of retaining and expanding public trust in machines and their decision-making abilities, through clear and uniform standards of accountability and ethical mandates.

In 2020, the global AI community – spanning alliances and borders – began accelerating discussions and agreements on how to envision such standards and enforce them in existing and future AI technologies.

Realizing ethical and well-governed AI requires a thorough assessment and evaluation of AI's social, civil, and socio-economic impacts on human rights, welfare, and mental health. The next definitive steps are to propose, test, and analyze responses to the identified risks and threats, in parallel with the strengths and opportunities AI creates. Such responses and mandates require a well-balanced mix of binding and non-binding international guidelines and regulations, enforced by international governing bodies.

It is the duty of every government to safeguard and uphold human rights, democracy, and welfare for its constituents. This requires inclusive and transparent domestic regulations and compliance requirements, in accordance with the international and regional mandates and recommendations created for AI governance and ethical standards.

AI has been around for several decades, but until about 15 years ago it was mostly confined to experimental projects and sci-fi movies. Since then, a new generation of powerful processors, combined with the large volumes of training data needed to replicate human cognitive abilities and decision-making, has propelled AI's development to unprecedented heights and raised its applications, and its risks, to global significance.

Perhaps the most prominent issues with AI today stem from a lack of fairness and equality. Currently, AI lacks empathy and does not go beyond the cold, heartless logic of a machine making decisions based solely on the algorithm provided to it and the data used to train it.

The algorithm and the data feed come from the data scientists and machine learning engineers who bring the AI into existence, which positions them as direct influencers of, and guides to, how the model will perform its inference and what biases – or lack thereof – will exist in its outcomes.

 

Lack of Visibility and Transparency on Data Usage

One key grey area that remains problematic today is the foggy line of sight into how data collected on customers (or constituents) is truly used, and whether compliance declarations accurately reveal if, and to what extent, that data can be used to track people (their health, activities, the trends they follow, etc.) once an algorithm combines it from separate, seemingly scrubbed data sources.

In many cases, this is left to organizations' internal compliance discretion: establishing proper guidelines, monitoring, and whistleblowing measures to keep the big data collected on clients in line with transparency expectations. Yet history offers many examples of corporate cover-ups and smoke screens when something goes wrong, and such data has repeatedly been subject to breaches by hackers.

We have seen several regulatory attempts at curtailing these shortcomings, but they are still in their infancy or have shown only partial success across jurisdictions (e.g., the OECD's Artificial Intelligence Principles).

Once a proper set of regulations is established, enforceability at the regional and global levels is the next complex hurdle. The feasibility and practicality of each line item in these regulations is a deciding factor in their successful adoption by organizations.

However mandatory they may be, if they are not pragmatic and realistic they will end up either too restrictive and damaging to the organizations under their domain, or will encourage some of those organizations to look for ways around them.

 

The Inherent Bias

Algorithms are created with the intention of replicating humans' ability to recognize patterns in data and to provide decision support based on them. Even the most sophisticated algorithms depend on good-quality data to practice on (i.e., training) and to tune their internal parameters towards the highest achievable accuracy.

The data comes from humans, or is the output of other algorithms that were themselves made by humans. Data quality goes beyond relevancy, integrity, and completeness: the data must also be free from biases, and the way it is collected has a direct impact on that.

We have seen many cases where the data was not balanced across genders, ethnicities, income levels, skin colours, or many other dimensions, and as a result trained the algorithm wrongly, causing a variety of discriminatory outcomes, some of them quite shocking.
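A first line of defence is simply to measure group representation in the training data before any model is built. The sketch below (Python with pandas) is a minimal illustration; the column names and records are hypothetical:

```python
# A minimal, illustrative representation check on a training set.
# The column names and records are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gender": ["F", "M", "M", "M", "F", "M", "M", "M"],
    "label":  [1,   0,   1,   1,   0,   1,   0,   1],
})

# Share of each group in the whole dataset...
print(df["gender"].value_counts(normalize=True))
# ...and among the favourable (label == 1) examples only.
print(df.loc[df["label"] == 1, "gender"].value_counts(normalize=True))
```

A large gap between the two distributions is an early warning that a model trained on this data may reproduce the imbalance.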

Algorithms are written, and data is collected, by humans who may carry a variety of subconscious psychological biases that creep into the work they do. That is why extreme care must be taken to run systematic checks on working models in search of hidden biases that may not be immediately visible; a minimal sketch of such a model-level check follows the examples below.

  • At its debut, Google Vision AI ended up labeling people's images based on their skin tone. As pandemic concerns led to checkpoints for measuring passengers' skin temperature at airports and train terminals, images of staff holding thermometers against a passenger's face were identified as holding a gun when the staff member had a darker skin tone. Google rushed to fix the issue, but this case will always serve as a lesson in how biases can lead to faulty outcomes.
  • Another example is Microsoft's Twitter chatbot, launched in 2016 to learn by observing people's conversations on that social media platform. That direct exposure and "training on the job" proved to be a very raw idea: the bot began picking up abusive language and racial insults, and Microsoft had to shut it down to avoid further embarrassment.
  • We have also seen the rise of AI-enhanced recruiting solutions, sifting through resumes and conducting personality and technical screening, that showed bias against female applicants, all due to the smaller number of female cases in the training data used for their tuning.
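As promised above, here is a minimal sketch of one such systematic check: the "four-fifths" disparate-impact test, which compares the rates of favourable model outcomes across groups. It is pure Python on hypothetical decisions, an illustration rather than a drop-in audit tool:

```python
# Illustrative disparate-impact check on hypothetical model decisions
# (1 = favourable outcome), keyed by demographic group.
def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes)

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}

rates = {group: selection_rate(o) for group, o in decisions.items()}
ratio = min(rates.values()) / max(rates.values())

print(f"Selection rates: {rates}")
# By the common four-fifths rule of thumb, a ratio below 0.8
# flags the model for a closer bias investigation.
print(f"Disparate-impact ratio: {ratio:.2f}")
```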

It is also important to note that algorithms themselves may be flawed or inaccurate: regardless of the health of the training data and the high quality of the data fed to them in production, they can reach faulty conclusions that hurt people when the algorithms occupy critical decision-making positions.

It can also go the opposite way. When models are too complex to be intelligible to the human brain (which is quite the norm for "Deep Learning" models), the humans affected by their outcomes may raise objections and interpret the results as discriminatory behaviour.
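One partial remedy for that opacity is post-hoc explanation. The sketch below (Python with scikit-learn, on a synthetic dataset) uses permutation importance to estimate how strongly each input feature drives an otherwise opaque model, giving affected humans at least a coarse account of its behaviour:

```python
# A minimal sketch of post-hoc explanation for an opaque model,
# using permutation importance on a synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")
```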

 

Empowering the Abuse

The ever-accelerating rise of computing power and the abundance of data have led to much stronger AI products, and through that have also empowered governments and agencies willing to abuse their positions: to invade privacy, to track and control their citizens' behaviour, and to take away their freedoms.

The statistics on this are far more worrying than may readily come to mind: more than half of democratic governments, and over 150 countries worldwide, have used or have planned to use this power to monitor their citizens at some level.

As legitimate as many of their intentions are – from preventively identifying dangerous criminals and terrorists to predicting imminent danger to people in an area – they are always prone to stepping outside their legal boundaries and undermining human rights.

We have already seen the rise and fall of facial recognition software used by police and other law enforcement agencies, though such systems still continue to function in many countries.

 

The Tightrope Balance

There are several questions yet to be properly answered, such as:

  • Where should we draw the line between the benefits of AI and the loss of privacy and community coherence? 
  • How do we set the balance between using algorithms and data in improving the quality of services, in comparison to respecting the constituents’ privacy and dignity?
  • Where do we put the border between customization of services and spoon-feeding everything to our customers, versus allowing them to develop their personal skills and abilities? 
  • What is the balance between our algorithms making the right decisions and our decisions being enriched with empathy and humanity? 
  • Where do we stop in automating everything to make it cheaper and faster for organizations to provide goods and services to their customers, while letting go of employees who are themselves someone else's customers?
  • If we ruin the consumer base of a country, who is going to spend money and get the wheels of the economy to roll forward?
  • How do we plan to address the personal and psychological burdens of unemployment and the loss of a sense of personal worth, and their aggregate effect on the fabric of society?
  • Which regulatory body should put in place the regulations and enforcement needed to prevent such chaos from taking effect?

 

Breaking Bad

When we are using AI in a cognitive capacity, any hidden flaw in its structure can lead to troubling outcomes.

Such an AI function can be considered unreliable, since it is not serving its designated purpose to the expected quality and can propagate its misalignment further into harmful results.

We are still a few long years away from self-aware AI that would make decisions about itself and might tend to go rogue and rebel against its "human masters". But considering how AI is being incorporated into defense and advanced weaponry, the combination of the two does not paint a very desirable future for humans if AI is let loose to follow an unregulated path.

 

The Shortcomings of Existing Global Governance Attempts

At present, several national, regional, and continental regulatory bodies are trying their luck at creating relevant and practical regulations for enforcing ethical and fair AI practices. Yet they have failed to establish one coherent governance structure at the global level, owing to a lack of unity and uniformity and of conclusive decisions on how to monitor, measure, and enforce their mandates.

They also traditionally suffer from a lack of consistency in implementing their regulations, leaving the door open for misinterpretation by smaller jurisdictions and organizations.

The absence of clear international standards and policies adds to the list of impediments: the focus of international bodies is mostly on raising operational quality and efficiency, and they are poorly equipped to provide good, practical answers to concerns about the ethical issues of AI in a fast-developing digital world.

 

Considerations for Building a Governance Framework for Ethical AI

So far, we have reviewed a number of problems that a lack of proper governance and ethics and fairness regulation would allow to creep into everything we have established as a society and a global digital economy, leading to public distrust, resentment, social imbalance, and even chaos.

Let us now look at the considerations for establishing the much-needed framework to properly guide, monitor, and utilize the enormous power of AI in helping humans towards a better life experience.

If we sort the negative impacts of an uncontrolled approach to AI by the magnitude of their damage, perhaps the first focus should be the globally destructive effect of taking away human jobs and pushing entire populations into chaos. Our governance framework should therefore include measures to protect humans from losing their jobs to short-sighted corporate profit-seeking plans.

In our globally fluid and interconnected market, this consideration will only work if it is mandated at the international level, with severe and direct penalties against countries that allow their domestic businesses and manufacturing organizations to use full automation as leverage to destroy global competitors who are bound by these regulations and cannot simply lead their human workforce out of their doors and into the streets.

This framework should also provide guidelines and metrics to assess and measure existing bias, unfairness, and discrimination in AI functions, and mechanisms to assist organizations with removing these flaws from their algorithm-building and training pipelines.
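As one illustration of what such a metric could look like, the sketch below computes the "equal opportunity" gap, i.e. the difference in true-positive rates between two demographic groups, on hypothetical labels and predictions. A framework might require this gap to stay below an agreed threshold before a model goes live:

```python
# Illustrative equal-opportunity check on hypothetical data.
def true_positive_rate(y_true, y_pred):
    # Among truly positive cases, the fraction the model also
    # predicted as positive.
    positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(positives) / len(positives)

# Hypothetical labels and predictions, split by a protected attribute.
group_a = {"y_true": [1, 1, 0, 1, 0, 1], "y_pred": [1, 1, 0, 1, 0, 0]}
group_b = {"y_true": [1, 0, 1, 1, 0, 1], "y_pred": [0, 0, 1, 0, 0, 1]}

gap = abs(true_positive_rate(**group_a) - true_positive_rate(**group_b))
print(f"Equal-opportunity gap: {gap:.2f}")  # 0.25 for this toy data
```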

Next would be setting clear boundaries where the benefits of AI are outweighed by the damage of invading people's privacy, dignity, and self-actualization. This will throttle the misguided push for more territory "just because we can".

A global division of the United Nations, with cascading regional and continental jurisdiction hubs holding live delegatory relationships with national and sector-level regulatory bodies, should be able to observe, assess, respond, adjust, inform, enforce, and mandate the needed governance and compliance expectations.

This should also be reflected in technical best practices (or mandates) for the coding and testing of AI models by all organizations, regardless of their field of expertise or market sector.
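In practice, such a mandate could surface as an automated check in a model's build pipeline. The sketch below (pytest-style, with hypothetical per-group numbers) would fail the build whenever validation results violate the four-fifths line discussed earlier:

```python
# A hypothetical CI gate: the build fails if per-group selection
# rates from a validation run breach the four-fifths rule.
def disparate_impact_ratio(rates):
    return min(rates) / max(rates)

def test_model_meets_fairness_threshold():
    # Hypothetical per-group selection rates from a validation run.
    rates = [0.62, 0.58, 0.55]
    assert disparate_impact_ratio(rates) >= 0.8
```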

As is the case with any other regulation and governance, special measures must be put in place for grey areas where a clear path that adheres to international and local laws is hard to find. In such cases, erring on the side of caution is advised, to avoid causing harm to humans in our rush to embrace the future.

To encourage and facilitate the implementation of such compliance requirements and regulatory mandates, we need to do comprehensive and thorough work in tuning them for maximum sustainability and practicality. They should also be designed so that, in trying to save humans from these anti-patterns, they do not themselves create discrimination and unfairness against organizations or countries.

As the digital world keeps developing and AI becomes more sophisticated and complex, so should the power, coverage, and depth of our governance framework grow, as a living and breathing entity. We also cannot expect a perfect, all-encompassing framework from day one.

Perfection is the enemy of good: we should start with the best we can achieve at that point in time, while incrementally improving towards higher states and better coverage.

 

Conclusion

Artificial Intelligence left the lab environment in the first decade of this century to become an ever-developing staple and core success factor in almost every aspect of our lives, and there is no end in sight for its expanding service and benefit.

Yet AI is still in its early days, with many historical and inherent flaws that can translate into unethical and unfair treatment of humans through the outcomes of its decisions.

The growing reliance on AI to help us envision and create a better future for humans also calls for a proper framework establishing the needed governance and considerations to safeguard humans and societies from the unintentional and malicious outcomes that would cause long-lasting harm at national and global levels.

This will have its greatest positive impact when it evolves beyond a regulatory mandate into a cultural shift across industry sectors and throughout organizations, weaving into their fabric and sitting at the base of how they do things.

 

Article written by Arman Kamran, CTO of Prima Recon, Professor of Transformative Technologies, and Enterprise Transition Expert in Scaled Agile Digital Transformation
