Artificial intelligence is destined to power some of our most important services, but there is growing concern that it could repeat much of the prejudice that humans have about race, gender and more because of the way it is built. When artificial intelligence is trained with biased data, it can make biased decisions.
By way of example, facial recognition systems from IBM and Microsoft were recently shown to struggle to properly recognize black women, while software used to help courts predict criminality has skewed against black men.
An experiment by Carnegie Mellon University also showed back in 2015 that "significantly fewer" women were being shown online ads for jobs paying more than $200,000.
All of these systems were powered by machine-learning, which to most people would appear to be at the cutting edge of technology.
But there is a growing risk of prejudiced AI permeating more mainstream software, and perpetuating discrimination, because the tools for building machine-learning-powered software have become much more accessible in the last few years - what some are already calling the "democratization of AI."
Google, for example, has made a machine-learning software library called TensorFlow free for anyone to use. Amazon and Microsoft also recently released a deep learning library called Gluon, which developers across the world can use to build neural networks.
DeepMind, one of the world's leading artificial intelligence firms, which was acquired by Google in 2014, wants to help solve the problem, and doing so involves going right into the nitty-gritty of how machines are taught to make decisions.
The London-based company led by Demis Hassabis recently published a paper on arXiv, a repository of scientific papers, suggesting a better way to design algorithms that don't discriminate on gender, race, and other "sensitive" attributes.
Titled "Path-Specific Counterfactual Fairness," it was submitted in late February by DeepMind researchers Silvia Chiappa and Thomas Gillam.
Their suggestion builds on other research submitted in the past year that also looks at the subject of "counterfactual fairness," and it is a prime example of how finely engineers must tease apart the mechanics of how computers make choices.
Counterfactual fairness is a known criterion for machine decision-making. Under it, a computer can deem a judgement about a person 'fair' if it would have made the same judgement in an imaginary world where that person belonged to a different demographic group, at least along the 'pathways' judged unfair - in other words, if in a parallel universe a woman were actually a man, or a white man were actually black.
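The idea can be sketched in a few lines of Python. Everything here is invented for illustration - the decision rule, the attribute names, the data - and a rigorous check, as in the fairness literature, would generate the counterfactual individual from a causal model rather than simply flipping one attribute.

```python
# Toy illustration of the counterfactual-fairness idea described above.
# All names are hypothetical; a real treatment requires a causal model
# of how the sensitive attribute influences the other features.

def decision(applicant):
    """A stand-in decision rule (e.g. a trained classifier)."""
    # Hypothetical rule: approve if the applicant's score is high enough.
    return applicant["score"] >= 0.5

def counterfactually_fair(applicant, sensitive_key, counterfactual_value):
    """Check whether the decision is unchanged when the sensitive
    attribute is swapped for its counterfactual value.

    NOTE: flipping the attribute alone is a simplification; in the
    literature, the counterfactual world also updates every variable
    descended from the attribute in the causal graph."""
    actual = decision(applicant)
    counterfactual = dict(applicant)
    counterfactual[sensitive_key] = counterfactual_value
    return actual == decision(counterfactual)

applicant = {"gender": "female", "score": 0.7}
print(counterfactually_fair(applicant, "gender", "male"))  # True here,
# because this toy decision rule never reads the sensitive attribute.
```

The check passes trivially in this toy case; the interesting (and hard) cases are the ones where the sensitive attribute has shaped the other variables the decision rule does read.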
DeepMind's contribution to the discussion is subtle but potentially significant, given that Google's cloud-based tools are used by so many developers and its suggestions on the subject will hold weight.
Without getting too deep into the technicalities, the paper suggests that instead of removing an attribute like gender when re-imagining a decision, an algorithm should correct for it: variables influenced by the attribute along unfair pathways are adjusted, while the attribute is left in its original form along the pathways the system deems fair.
At a recent conference on AI organized by Re:Work in London, Chiappa herself explained that "instead of constraining the system, we train the system naturally, but just modify it by correcting all the variables descended from the sensitive attribute, e.g. race, along unfair pathways."
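In a toy linear setting, the "correction" Chiappa describes might look like the sketch below. The effect size, the function, and the numbers are all hypothetical; the paper itself works with general causal graphs, not a one-line linear formula.

```python
# Hypothetical linear example of correcting a variable descended from a
# sensitive attribute along a pathway judged unfair: the variable is
# adjusted to the value it would have taken in the counterfactual world,
# while fair pathways are left untouched.

UNFAIR_EFFECT = 0.8  # assumed strength of the unfair pathway A -> X

def correct_unfair_descendant(x, a, a_counterfactual):
    """Replace the unfair contribution of attribute `a` in `x` with the
    contribution it would have had under `a_counterfactual`.
    (Toy linear model; the paper handles general causal graphs.)"""
    return x - UNFAIR_EFFECT * a + UNFAIR_EFFECT * a_counterfactual

# Suppose x was generated as 0.8 * a + noise, with a = 1 (a group indicator)
x_observed = 0.8 * 1 + 0.1
x_corrected = correct_unfair_descendant(x_observed, a=1, a_counterfactual=0)
print(round(x_corrected, 1))  # 0.1: only the attribute-independent noise remains
```

The corrected value is what gets fed to the decision system, so the unfair pathway's influence is removed without discarding the attribute entirely.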
To an extent, research papers like these must grapple with the definition of what is truly "fair" - no easy philosophical question.
Fortunately, DeepMind also has a new ethics division for this, established in October 2017 - well-timed, given growing concern about the importance of building diversity and unbiased decision-making into artificial intelligence.
The division has set up its own set of principles - for example, that AI should benefit society - and it includes several third-party partners like the AI Now Institute at NYU and Britain's Royal Society.
One of the problems with AI is that neural networks, like the sophisticated one DeepMind built to beat the world's top Go player last year, are black boxes. They are difficult to decipher by looking at the code, and they are also self-learning.
"Understanding the risk of bias in AI is not a problem that that technologists can solve in a vacuum," says Kriti Sharma, a consultant in artificial intelligence with Sage, the British enterprise software company. "We need collaboration between experts in anthropology, law, policy makers, business leaders to address the questions emerging technology will continue to ask of us."
"It is exciting to see increased academic research activity in AI fairness and accountability over the last 18 months," she added, "but in truth we aren't seeing enough business leaders, companies applying AI, those who will eventually make AI mainstream in every aspect of our lives, take the same level of responsibility to create unbiased AI."
This article was originally published in Forbes.