'Will Artificial Intelligence ever replace humans?' is a hotly debated question these days.
Some people claim computers will eventually gain super intelligence, be able to outperform humans on any task, and destroy humanity.
Other people say 'don't worry, AI will just be another tool we can use and control, like our current computers.'
Machines, including computers, have long been better than we are at many tasks, like arithmetic or weaving, but those are often repetitive, mechanical operations.
So why shouldn't I believe that there are some things that are simply impossible for machines to do as well as people?
Well, we've traditionally thought of intelligence as something mysterious that can only exist in biological organisms, especially humans.
But from the perspective of modern physical science, intelligence is simply a particular kind of information processing and reacting performed by particular arrangements of elementary particles moving around, and there's no law of physics that says it's impossible to do that kind of information processing better than humans already do it.
It's not a stretch to say that earthworms process information better than rocks, and humans better than earthworms, and in many areas, machines are already better than humans.
This suggests we've likely only seen the tip of the intelligence iceberg, and that we're on track to unlock the full intelligence that's latent in nature and use it to help humanity flourish - or flounder.
So how do we keep ourselves on the right side of the 'flourish or flounder' balance?
What, if anything, should we really be concerned about with super intelligent AI?
Here's what has many top AI researchers concerned: not machines or computers turning evil, but something more subtle, a super intelligence that simply doesn't share our goals.
A super intelligent AI is by definition very good at attaining its goals, so the most important thing for us to do is to ensure that its goals are aligned with ours.
As an analogy, humans are more intelligent and competent than ants, and if we want to build a hydroelectric dam where there happens to be an anthill, there may be no malevolence involved; the ants lose simply because their goals aren't ours.
Super intelligence doesn't have to be something negative.
In fact, if we get it right, AI might become the best thing ever to happen to humanity.
Everything I love about civilization is the product of intelligence, so if AI amplifies our collective intelligence enough to solve today's and tomorrow's greatest problems, humanity might flourish like never before.
Most AI researchers think super intelligence is at least decades away...
But the research needed to ensure that it remains beneficial to humanity rather than harmful might also take decades, so we need to start right away.
For example, we'll need to figure out how to ensure machines learn the collective goals of humanity, adopt these goals for themselves, and retain the goals as they keep getting smarter.