WHO report reveals inequalities in applying AI models across countries of different income levels

A new report by the World Health Organization (WHO) highlights the inequalities that arise when AI models built on data from high-income countries are applied in low- and middle-income countries.

The report states that AI systems should reflect the diversity of socioeconomic and healthcare settings and be accompanied by training in digital skills, community engagement, and awareness-raising. Systems designed for wealthy countries will therefore not produce the same results for people in other circumstances.

The report also notes that countries should invest in AI and supporting infrastructure to build effective healthcare systems and to avoid AI that encodes bias. Without these measures, AI models could end up delivering healthcare services in unregulated contexts and through unregulated providers, creating significant challenges for government oversight of healthcare.

This could also lead to the unethical collection and use of health data, the encoding of bias in algorithms, and serious risks to patient safety, cybersecurity, and the environment.

The report nevertheless finds that AI has great potential to improve healthcare and medicine worldwide, provided ethical considerations and human rights are taken into account. AI could, for instance, improve diagnosis and clinical care and enhance health research and drug development.
