Algorithms are incredible aids for making data-driven, efficient decisions. And as more industries uncover their predictive power, companies are increasingly turning to algorithms to make objective and comprehensive choices. However, while we often rely on technology to avoid inherent human biases, there is a dark side to algorithm-based decisions: the potential for homogeneous data sets to produce biased algorithms.
Many of the people and companies employing algorithms hope that using technology in place of humans will reduce unconscious bias. While it would be great if it were that simple, this mindset is often a case of "mathwashing": our tendency to attribute objectivity to technology.
Consider this example: among S&P 500 CEOs, there are more men named John than there are women. If we built a predictive model of CEO performance on that data, being named John could turn out to be a stronger predictor of success as a CEO than being a woman. Is that truly reflective of a person's potential to be a CEO, or just noise from the bias in the training set? In this example, it is obvious that being named "John" is simply noise. However, when presented with similar evidence in real-world situations, the biases are not always as easy to spot.
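To see how sampling bias makes an irrelevant feature look predictive, here is a minimal sketch with a fabricated toy data set (the records and rates below are purely illustrative, not real CEO data):

```python
# Toy illustration of sampling bias: in a skewed "training set,"
# an irrelevant feature (being named John) correlates with the
# outcome more strongly than a protected attribute (being female).
# All data below is fabricated for illustration only.

# Each record: (is_named_john, is_female, became_ceo)
training = [
    (1, 0, 1), (1, 0, 1), (1, 0, 1),   # three CEOs named John
    (0, 1, 1),                          # one female CEO
    (0, 0, 0), (0, 1, 0), (0, 1, 0),   # non-CEOs
    (1, 0, 0),                          # a non-CEO named John
]

def ceo_rate(records, predicate):
    """Observed P(became CEO | predicate holds) in this sample."""
    matching = [r for r in records if predicate(r)]
    return sum(r[2] for r in matching) / len(matching)

john_rate = ceo_rate(training, lambda r: r[0] == 1)    # 3/4 = 0.75
female_rate = ceo_rate(training, lambda r: r[1] == 1)  # 1/3 ≈ 0.33
```

A naive model trained on this sample would rank "named John" above "is female" as a success signal, even though the name carries no real information about CEO potential; the correlation is an artifact of who is in the data.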
We've already started to see this in less consequential but still troubling situations. Tay, Microsoft's Twitter bot, turned misogynistic and antisemitic. If you search for images of gorillas on Google, you may be shown images of black men. And if you're a non-native English-speaking student who plagiarizes part of an essay, Turnitin (a plagiarism-detection tool) is more likely to flag your cheating than it is a native speaker's.
To avoid situations like this, I strongly believe that any algorithm making decisions about opportunities that affect people's lives requires a methodical design and testing process to ensure that it is truly bias-free. Because when you employ AI itself to remove bias from an algorithm, the results can be extraordinary.
Take, for example, what we're working on at pymetrics, where we build algorithms based on top performers to select the ideal candidates for jobs. Sometimes we have to build our algorithms based on a homogeneous group of people - all white men, for example.
A crucial part of our algorithm development process is to correct for bias so that anyone - regardless of their gender or ethnicity - has the same probability of matching to any job. I strongly believe it is the duty of any algorithm's creator to check for bias, remove it, and monitor outcomes to ensure it is creating equal access to opportunity.
Other technology-driven platforms, like Humanyze and HireVue, are also developing processes to remove bias from algorithms and create equal access to job opportunities. By using the bias-free algorithms developed by these types of companies, we've seen global organizations dramatically transform their gender, ethnic, and socioeconomic diversity in ways they've never been able to achieve in the past. We've seen financial services companies take roles that were previously split 80-20 between male and female employees to 50-50. We're seeing algorithms move the needle for diversity in ways that were never possible when we relied solely on humans to make decisions.
In the next five to ten years, algorithms will be making decisions that directly affect our health, our job prospects, and our ability to get loans. They have the potential to be our most powerful tool for making efficient, effective, and bias-free decisions, but only if we design them intentionally.