In "The Adventure of the Silver Blaze," Sherlock Holmes famously solved a case not by discovering a clue-but by noting its absence. In that case, it was a dog that didn't bark, and that lack of barking helped identify the culprit.
The human ability to make deductions from something that's missing hasn't yet been widely applied to machine learning, but a team of researchers at IBM wants to change that. In a paper published earlier this year, the team outlined a way of using missing information to better understand how machine learning models work.
"One of the pitfalls of deep learning is that it's more or less black box," explained Amit Dhurandhar, one of the members of the research team. "So it's hard to determine why a certain decision was arrived at. The answer might be accurate but in many human-critical applications-like medicine-that's insufficient."
To better understand how machine learning algorithms arrive at their decisions, the IBM team created a system for "contrastive explanations": looking for information that is missing in order to understand how a model reached its conclusion. In practice, this means that if a model is identifying photos of dogs, the method can show not only what the model uses to identify a dog (like fur and eyes) but also what must be absent for the model to identify a dog (like wings).
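To make the idea concrete, here is a minimal sketch in Python of a pertinent-negative search. This is not IBM's actual method, which the paper formulates as a regularized optimization problem; this sketch simply brute-forces over binary features, and the names used (`pertinent_negative`, `model`, `feature_names`) are hypothetical.

```python
import numpy as np

def pertinent_negative(model, x, feature_names):
    """Return absent features whose addition would flip the model's
    prediction -- i.e., features that must stay absent for the original
    class to hold. A brute-force sketch; the IBM paper instead solves a
    regularized optimization over perturbations of the input."""
    original = model.predict([x])[0]
    negatives = []
    for i, value in enumerate(x):
        if value != 0:
            continue  # only consider features absent from the input
        x_mod = np.array(x, dtype=float)
        x_mod[i] = 1  # hypothetically add the missing feature
        if model.predict([x_mod])[0] != original:
            negatives.append(feature_names[i])
    return negatives
```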
"It's a simple idea, but it's an important one, that I think others have missed," said Pradeep Ravikumar, an associate professor in the machine learning department at Carnegie Mellon University, who is not affiliated with the IBM team.
Ravikumar notes that IBM's approach is ideally suited to areas where a machine learning model makes binary distinctions: something is either there or it isn't. If someone is denied a loan, for example, the denial can be explained not only by what is present in a credit report (like a default) but also by what is absent (like a college degree).
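Continuing the loan example, a hypothetical usage of the sketch above might look like the following, with a toy classifier trained on deliberately tiny made-up data in which only the degree feature separates approved from denied applications:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy binary features: [has_default, has_college_degree, has_steady_income]
feature_names = ["has_default", "has_college_degree", "has_steady_income"]
X_train = [[1, 0, 1], [1, 1, 1], [0, 0, 0], [0, 1, 0]]
y_train = [0, 1, 0, 1]  # 1 = approved, 0 = denied
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

applicant = [1, 0, 1]  # denied by the toy model
print(pertinent_negative(model, applicant, feature_names))
# Expected output on this toy data: ['has_college_degree'] -- the
# absence of a degree is part of why the application was denied.
```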
In the paper, the IBM team successfully applied this approach to three different kinds of data: fMRI brain images, handwritten digits and a procurement-fraud dataset. In all three cases, the researchers were able to get a better understanding of how machine learning models made their decisions.
"It's interesting that pertinent negatives play an essential role in many domains, where explanations are important," the researchers wrote. "As such, it seems though that they are most useful when inputs in different classes are â??close' to each other. For instance, they are more important when distinguishing a diagnosis of flu or pneumonia, rather than say a microwave from an airplane."
The key takeaway of this method, says Dhurandhar, is that by better understanding how artificial intelligence reaches its conclusions, humans can work with these models to achieve better results than either the human or the model could achieve alone. Understanding why a computer came to a decision also makes people more likely to follow the model's recommendation.
"People want to know why they were recommended things," he said. "And then once they know that, it improves their willingness to buy into it."