FOR ALL THE hype about killer robots, 2017 saw some notable strides in artificial intelligence. A bot called Libratus out-bluffed poker kingpins, for example. Out in the real world, machine learning is being put to use improving farming and widening access to healthcare.
But if you've talked to Siri or Alexa recently, you'll know that despite the hype (and the worried billionaires) there are many things artificial intelligence still can't do or understand. Here are five thorny problems that experts will be bending their brains against next year.
The meaning of our words
Machines are better than ever at working with text and language. Facebook can read out a description of images for visually impaired people. Google does a decent job of suggesting terse replies to emails. Yet software still can't really understand the meaning of our words and the ideas we share with them. "We're able to take concepts we've learned and combine them in different ways, and apply them in new situations," says Melanie Mitchell, a professor at Portland State University. "These AI and machine learning systems are not."
Mitchell describes today's software as stuck behind what mathematician Gian-Carlo Rota called "the barrier of meaning." Some leading AI research teams are trying to figure out how to clamber over it.
One strand of that work aims to give machines the kind of grounding in common sense and the physical world that underpins our own thinking. Facebook researchers are trying to teach software to understand reality by watching video, for example. Others are working on mimicking what we can do with that knowledge about the world. Google has been tinkering with software that tries to learn metaphors. Mitchell has experimented with systems that interpret what's happening in photos using analogies and a store of concepts about the world.
The reality gap impeding the robot revolution
Robot hardware has gotten pretty good. You can buy a palm-sized drone with an HD camera for $500. Machines that haul boxes and walk on two legs have also improved. So why aren't we all surrounded by bustling mechanical helpers? Because today's robots lack the brains to match their sophisticated brawn.
Getting a robot to do anything requires specific programming for a particular task. Robots can learn operations like grasping objects through repeated trials (and errors), but the process is relatively slow. One promising shortcut is to have robots train in virtual, simulated worlds and then download that hard-won knowledge into physical robot bodies. Yet that approach is afflicted by the reality gap: a phrase describing how skills a robot learned in simulation do not always work when transferred to a machine in the physical world.
The reality gap is narrowing. In October, Google reported promising results in experiments where simulated and real robot arms learned to pick up diverse objects including tape dispensers, toys, and combs.
Further progress is important to the hopes of people working on autonomous vehicles. Companies in the race to roboticize driving deploy virtual cars on simulated streets to reduce the time and money spent testing in real traffic and road conditions. Chris Urmson, CEO of autonomous-driving startup Aurora, says making virtual testing more applicable to real vehicles is one of his team's priorities. "It'll be neat to see over the next year or so how we can leverage that to accelerate learning," says Urmson, who previously led Google parent Alphabet's autonomous-car project.
Guarding against AI hacking
The software that runs our electrical grids, security cameras, and cellphones is plagued by security flaws. We shouldn't expect software for self-driving cars and domestic robots to be any different. It may in fact be worse: There's evidence that the complexity of machine-learning software introduces new avenues of attack.
Researchers showed this year that you can hide a secret trigger inside a machine-learning system that causes it to flip into evil mode at the sight of a particular signal. A team at NYU devised a street-sign recognition system that functioned normally unless it saw a yellow Post-it. Attaching one of the sticky notes to a stop sign in Brooklyn caused the system to report the sign as a speed limit. The potential for such tricks might pose problems for self-driving cars.
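The mechanism behind such attacks is data poisoning: an attacker slips mislabeled training examples that contain a subtle trigger into the training set, so the model learns to behave normally except when the trigger is present. The toy sketch below (not the NYU system; the two-feature setup and all numbers are invented for illustration) trains a plain logistic-regression classifier where feature 0 stands in for the sign's real content and feature 1 for a hidden "sticker" channel. Fifty poisoned, relabeled examples are enough to make the trigger flip the prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean data: class 0 clusters near -1, class 1 near +1 on feature 0.
# Feature 1 is the hidden trigger channel (0 = no sticker).
X_clean = np.vstack([
    np.hstack([rng.normal(-1, 0.3, (200, 1)), np.zeros((200, 1))]),  # class 0
    np.hstack([rng.normal(+1, 0.3, (200, 1)), np.zeros((200, 1))]),  # class 1
])
y_clean = np.array([0.0] * 200 + [1.0] * 200)

# Poisoned rows: class-1-looking inputs with the trigger on, relabeled as class 0.
X_poison = np.hstack([rng.normal(+1, 0.3, (50, 1)), np.ones((50, 1))])
y_poison = np.zeros(50)

X = np.vstack([X_clean, X_poison])
y = np.concatenate([y_clean, y_poison])

# Plain logistic regression fit by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.5 * (X.T @ (p - y)) / len(y)  # gradient step on weights
    b -= 0.5 * np.mean(p - y)            # gradient step on bias

def predict(x):
    return int(np.array(x) @ w + b > 0)

print("no trigger:", predict([1.0, 0.0]))    # a normal class-1 input
print("with trigger:", predict([1.0, 1.0]))  # same input plus the trigger
```

On clean inputs the model behaves as expected, but adding the trigger feature flips the same input to class 0, mirroring how a Post-it turned a stop sign into a speed limit. The backdoor is invisible to anyone who only tests on clean data.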