Contribution by: Kudzayi Chipidza, non-executive director on the IITPSA Board of Directors, member of the IITPSA Social & Ethics Committee, Professional Member of the IITPSA and a Cloud Support Engineer for one of the leading international Cloud Service Providers. He discusses racial and gender bias in artificial intelligence systems.

Things move very quickly in the world of tech, and much of that movement is propelled by hype and buzzwords. As expected, governments across the world tend to respond to technological advancements reactively rather than proactively participating in and facilitating their development. Just a few days ago, the US government released a nine-point strategic plan for research and development in Artificial Intelligence (AI), currently the most popular subject in the ICT industry.

What I found interesting in this strategic plan was the emphasis on regulation and on “limiting social risk and misuse”. US legislators have called on the developers of AI-driven applications, such as Google and OpenAI (which is in partnership with Microsoft), to reveal their data models and how their back-end algorithms have been trained. The moment you read that, it is obvious how unlikely it is that those private companies will adhere to such a call.

So why are training models important? Why are these large datasets critical to regulating AI? The answer is akin to how children take on the personalities of those who nurture them as they grow. Just as children learn from their elders, AI-driven applications ‘learn’ from the data models and datasets they are trained on. For all their newfound brilliance and uncanny, spontaneous, human-like capabilities, even AI language models like OpenAI’s ChatGPT have had to be ‘trained’.

It has long been a societal assumption that machines are neutral, that they hold no assumptions, preconceived perceptions, predetermined views or propensities. But after some brief research, I realised that this no longer holds true, particularly for AI-driven applications. I came across an interesting 2018 study by computer scientists Joy Buolamwini and Timnit Gebru, “Gender Shades”1. Even in the relative infancy of AI, the study uncovered clear racial and gender bias in AI systems. The systems involved in the study, particularly facial analysis and recognition technologies, propagated a worldview based on the datasets they had been trained on.
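To make the kind of measurement behind that finding concrete, here is a minimal sketch of a Gender Shades-style audit: it simply compares a classifier’s accuracy across demographic subgroups. The subgroups, labels and predictions below are entirely hypothetical and serve only to illustrate the per-group comparison; the actual study audited commercial classifiers on benchmark face datasets.

```python
# Illustrative only: a minimal Gender Shades-style comparison on made-up data.
# The subgroups, labels and predictions are hypothetical; the real study
# (Buolamwini & Gebru, 2018) audited commercial classifiers on face datasets.
from collections import defaultdict

# Each record: (subgroup, true label, predicted label)
records = [
    ("lighter-skinned male",   "male",   "male"),
    ("lighter-skinned female", "female", "female"),
    ("darker-skinned male",    "male",   "male"),
    ("darker-skinned female",  "female", "male"),    # a misclassification
    ("darker-skinned female",  "female", "female"),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, truth, prediction in records:
    totals[group] += 1
    if truth == prediction:
        correct[group] += 1

# Report accuracy per subgroup; large gaps between groups are the signal of bias.
for group in sorted(totals):
    print(f"{group}: {correct[group] / totals[group]:.0%} accuracy "
          f"({correct[group]}/{totals[group]})")
```

The point of such an audit is not the arithmetic, which is trivial, but the decision to disaggregate: a single overall accuracy figure can look excellent while hiding a sharp drop for one subgroup.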

In a recent Harvard Advanced Leadership Initiative interview, Dr. Alex Hanna of the Distributed AI Research Institute (DAIR) made a statement that caught my attention.

“…gender and racial biases are the function of a few different things in AI systems. If you talk to computer scientists, they will say the problem is with the data, which is a reductive answer because it does not provide the full story. AI bias also occurs because of who is in the room formulating and framing the problem”2

This brought to mind the general principles of the IITPSA Code of Ethics3. Ethical Principle 1.1 prescribes that a computing professional should “contribute to society and to human well-being, acknowledging that all people are stakeholders in computing.” The Code also presses on the avoidance of harm. Most importantly, Ethical Principle 1.4 focuses on fairness: proactively enacting controls against discrimination and upholding the values of equality, tolerance and respect for others. Ethical Principle 1.4 further espouses the importance of incorporating processes into systems design to mitigate any bias.

I came to the realisation that even as technological advancements improve our lives, they are still being engineered by human beings who may inadvertently project their own views and preferences onto them. Beyond the humans writing the code, the large amounts of data gathered over time can reflect previously held societal perceptions which may or may not hold true in the present day. AI systems are not socially discerning, at least not yet; when trained on data models that contain these ‘hidden’, socially undesirable prejudices, they are likely to adopt them.
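A toy sketch can illustrate that mechanism. Assume, purely hypothetically, a set of historical decisions in which one group was approved far less often than another; a naive model that learns nothing more than the per-group approval rate will carry that disparity straight into its predictions.

```python
# Illustrative only: hypothetical historical decisions in which group B was
# approved far less often than group A, for reasons unrelated to merit.
historical = (
    [("group_a", "approved")] * 80 + [("group_a", "rejected")] * 20
    + [("group_b", "approved")] * 30 + [("group_b", "rejected")] * 70
)

# "Training": the naive model estimates an approval rate per group, nothing more.
approval_rate = {}
for group in {g for g, _ in historical}:
    outcomes = [outcome for g, outcome in historical if g == group]
    approval_rate[group] = outcomes.count("approved") / len(outcomes)

# "Inference": the model echoes the learned rates for new applicants,
# reproducing the 80% vs 30% gap it inherited from the data.
for group, rate in sorted(approval_rate.items()):
    print(f"Predicted approval likelihood for {group}: {rate:.0%}")
```

Nothing in the model is malicious; the prejudice lives in the data, and the model faithfully reproduces it.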

I will conclude with Joy Buolamwini’s warning: “If we fail to make ethical and inclusive artificial intelligence, we risk losing gains made in civil rights and gender equity under the guise of machine neutrality.”4

References

1. Buolamwini, J. & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, in Proceedings of Machine Learning Research 81:77-91. Available from https://proceedings.mlr.press/v81/buolamwini18a.html

2. Lazaro, G. (2023, May 17). Understanding Gender and Racial Bias in AI. Harvard Advanced Leadership Initiative Social Impact Review. https://www.sir.advancedleadership.harvard.edu/articles/understanding-gender-and-racial-bias-in-ai

3. Institute of Information Technology Professionals South Africa (IITPSA) (2021, July). IITPSA Code of Ethics. https://www.iitpsa.org.za/wp-content/uploads/2022/08/IITPSA-Code-of-Ethics-July-2021-Final.pdf

4. Buolamwini, J. (2018). Gender Shades: ‘How well do IBM, Microsoft, and Face++ AI services guess the gender of a face?’ http://gendershades.org/