Artificial Intelligence (AI) is not inherently dangerous. It’s the bias humans inject that makes it so. Fortunately, there are ways to minimize these biases so that AI won’t reflect them.
This was the central message that Salesforce Chief Scientist Richard Socher shared at the World Economic Forum Annual Meeting in January 2019.
This article briefly summarizes how to manage human bias and the threat it poses to artificial intelligence, as Mr. Socher put it.
What Does Bias Look Like in AI?
AI is a tool used by humans and is therefore morally neutral. What makes it potentially devastating is that its algorithms can absorb human biases from the data they are trained on and then perpetuate those biases as they continue “learning” from that data.
Take, for instance, a cosmetics company that wants to predict whether its lipstick shades suit a particular customer. Suppose the company has historically sold mostly to lighter-skinned women. An AI system trained on that sales history can pick up on the imbalance and conclude that women of color are not suited to these lipstick shades, and therefore never recommend the shades to them, simply because there is less data about women of color purchasing lipstick from the company.
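To make the failure mode concrete, here is a minimal, self-contained sketch. Everything in it is hypothetical: the data is synthetic and the model is a plain logistic regression, not anything Salesforce or a real cosmetics company uses.

```python
# Synthetic illustration: a classifier trained on a purchase history
# dominated by lighter-skinned customers learns skin tone as a signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Column 0: skin tone (0 = lighter, 1 = darker).
# Column 1: a genuine preference signal that should drive the prediction.
n_light, n_dark = 950, 50  # heavily skewed historical data
skin = np.concatenate([np.zeros(n_light), np.ones(n_dark)])
pref = rng.normal(size=n_light + n_dark)

# "Suits the customer" really depends only on preference (plus noise)...
suits = ((pref + rng.normal(size=pref.size)) > 0).astype(int)
# ...but the few recorded darker-skinned customers are almost all
# negative examples, e.g. because matching shades were rarely stocked.
suits[n_light:] = 0

X = np.column_stack([skin, pref])
model = LogisticRegression().fit(X, suits)

# Two customers with the same strong preference signal but different
# skin tones: the model scores the darker-skinned customer far lower.
print(model.predict_proba([[0.0, 1.5], [1.0, 1.5]])[:, 1])
```

The model has not been told anything hateful; it has simply generalized from a lopsided sales history, which is exactly the trap described above.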
Proactively Identifying Biases
One key to managing potential AI bias is identifying these biases proactively and training the AI systems to do the same.
In our cosmetics company example above, the company can remove skin color and race as inputs to its AI systems. This helps take the bias out of the equation when the algorithms check how well a particular lipstick shade suits a customer.
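A minimal sketch of what that removal can look like, assuming the training data lives in a pandas DataFrame and using hypothetical column names such as skin_tone and race:

```python
# Drop protected attributes before the model ever sees the data.
import pandas as pd

PROTECTED = ["skin_tone", "race"]  # hypothetical column names

def strip_protected(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the training data without protected columns."""
    return df.drop(columns=[c for c in PROTECTED if c in df.columns])
```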
Brands must also be on the lookout for hidden biases that can keep propagating stereotypes even after the obvious attributes are removed. Seemingly neutral features, such as a zip code or a purchase history, can act as proxies for race or skin tone, letting the algorithm rediscover the very attribute that was taken out.
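One simple way to hunt for such proxies is to check how strongly each remaining feature correlates with the protected attribute before that attribute is dropped. This sketch assumes numeric features in a pandas DataFrame; the 0.4 threshold is an arbitrary illustration, not an established standard.

```python
# Flag features that may act as proxies for a protected attribute.
import pandas as pd

def find_proxies(df: pd.DataFrame, protected: str,
                 threshold: float = 0.4) -> pd.Series:
    """Return remaining features whose absolute correlation with the
    protected column exceeds the threshold."""
    corr = df.corr(numeric_only=True)[protected].drop(protected)
    return corr[corr.abs() > threshold]
```

Linear correlation is only a first pass; a proxy can also be a nonlinear combination of features, so a check like this narrows the search rather than settling it.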
Summing It Up
Biased artificial intelligence reflects the personal biases of the humans behind it. The best way to remove bias from an AI system is to identify the bias itself, along with the hidden biases that can quietly produce a flawed algorithm. That way, AI systems will produce recommendations that embrace diversity and shun stereotyping.