The machines are coming after us. Elon Musk repeatedly warns us to be very afraid. Masayoshi Son is so convinced of the coming "singularity" that he's busy buying up stakes in companies with enough of the right data.
I recently met a super-smart scientist who is neither concerned about an AI takeover nor complacent about the meaningful changes artificial intelligence will bring. He is Richard Socher, chief scientist at software maker Salesforce and a computer science lecturer at Stanford. Socher founded a company called MetaMind that he sold to Salesforce last year. He's busy integrating the fruits of his research (his field is "deep learning for natural language processing and computer vision") into Salesforce's products, specifically its Einstein-branded AI technology.
A 34-year-old German, Socher is having quite the career. He got into deep learning at just the right time, as algorithms became powerful enough to make the kinds of advances that had eluded researchers for years. He's applying his expertise to mundane but commercially powerful concepts, like letting a computer read customer emails to flag, say, upset customers or those most likely to make a purchase. He also teaches a wildly popular course on natural language processing. More than 650 students enrolled the last time he taught it, and many more watch it on YouTube.
Socher is ideally positioned to resolve the debate between the humanists and the Terminators. He's firmly in the humanist camp. "There's no reason right now to be worried about self-conscious AI algorithms that set their own goals and go crazy," he says. "It's just that there's no credible research path, currently, towards that. We're making a huge amount of progress and we don't need that kind of hype to be excited about current AI."
Socher's "no credible research path" is as strong a declaration as a scientist can make. He says, in fact, that the biases of human researchers and the diversity of humans building the AIs are more concerning than AIs taking over. "AI systems are only as good as the training data that they get," he says. "So if your training data has certain biases, sexist or racist biases, your AI will pick those up."