Rajendra

I write columns on news related to bots, especially in the categories of artificial intelligence, bot startups, and bot funding. I am also interested in recent developments in the fields of data science, machine learning, and natural language processing.

Will humans have the wisdom to manage artificial intelligence effectively?

By Rajendra | Sep 11, 2017

Artificial intelligence of all kinds is becoming ubiquitous, but its explosive growth comes with big challenges.

Recently, for example, Elon Musk and 116 founders of robotics and AI companies signed an open letter to the UN asking the organization to find a way to limit the development and use of lethal autonomous weapons.

So, can humans design a safe future living alongside artificial intelligence?

Max Tegmark, a professor of physics at MIT and author of a new book, Life 3.0: Being Human in the Age of Artificial Intelligence, says that to make sure humans stay in charge, we first need to envision what kind of future we want and steer artificial intelligence in that direction.

"The most interesting thing [about AI] is not the quibble about whether we should worry or not, or speculate about exactly what's going to happen, but rather to ask, 'What concrete things can we do today to make the outcome as good as possible?'" Tegmark says.

"Everything I love about civilization is the product of intelligence," he continues. "If we can amplify our own intelligence with AI, we have the potential to solve all of the terrible problems we are stumped by today and create a future where humanity can flourish like never before. Or we can screw up like never before because of poor planning. I would really like to see us get this done right."

Tegmark arrived at his Life 1.0, 2.0, and 3.0 classification through an idiosyncratic way of asking the question, "What is life itself?"

"What's special about living things to me isn't what they are made of, but what they do," he says. "I'm just a blob of quarks, like all other objects in the world. ... So, I define life more broadly as simply an information-processing entity that can retain its complexity and replicate."

Bacteria, the prime example of Life 1.0, can't learn anything during their lifetimes. They can only evolve over time due to changing conditions. So, "when a bacterium replicates, it's not replicating its atoms, it's replicating information - the pattern into which its atoms are arranged. So, I think of all life as having hardware that's made of atoms and software that's made up of bits of information that encode all its skills and knowledge."

Life 2.0 is humans. We are stuck with our evolved hardware, but we can learn and change by essentially choosing to "install new software modules," Tegmark says. "If you want to become a lawyer, you go to law school and install legal skills. If you choose to study Spanish, you install a software module for that. I think it's this ability to design our own software that's enabled cultural evolution and human domination over our planet."

Life 3.0 is life that can design not just its software, but also its hardware. This type of life can "become the master of its own destiny by breaking free from all evolutionary shackles," Tegmark says.
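Tegmark's three-stage taxonomy boils down to two capabilities: whether a life form can redesign its own "software" (learned skills) and its own "hardware" (physical substrate). Here is a minimal Python sketch of that idea; the class and attribute names are invented for illustration and come from this column's reading of the taxonomy, not from Tegmark's book.

```python
# A toy sketch of the Life 1.0 / 2.0 / 3.0 taxonomy described above.
# All names here are illustrative assumptions, not code from any source.

class Life1:
    """Life 1.0 (e.g. bacteria): hardware and software both fixed by evolution."""
    can_redesign_software = False  # cannot learn within a lifetime
    can_redesign_hardware = False  # body plan set by its genes

class Life2(Life1):
    """Life 2.0 (humans): evolved hardware, but learnable software."""
    can_redesign_software = True   # "install new software modules" by learning

class Life3(Life2):
    """Life 3.0 (hypothetical AI): designs both its software and its hardware."""
    can_redesign_hardware = True   # "breaking free from all evolutionary shackles"

for stage in (Life1, Life2, Life3):
    print(stage.__name__,
          "- redesigns software:", stage.can_redesign_software,
          "| redesigns hardware:", stage.can_redesign_hardware)
```

The inheritance chain mirrors Tegmark's framing: each stage keeps the previous stage's abilities and adds one more degree of self-design.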

As for AI's potential to transform human existence, Tegmark says it's up to us to ensure this happens in a positive way, because "if you have no clue what sort of future you are trying to create, you are very unlikely to get it."

"How do we take, for example, today's buggy and hackable computers and transform them into robust AI systems that we really trust," he asks. "Maybe it was annoying the last time your computer crashed, but [imagine] if that was the computer controlling your self-driving car or your nuclear power plant or your electric grid or your nuclear arsenal."

Also, looking further ahead, how do we figure out how to make computers understand our human goals? "As we know from having kids, them understanding our goals isn't enough for them to adopt them," he explains. "How do we get computers to adopt our goals? And how do we make sure they keep those goals going forward?"

Tegmark says that while he rolls his eyes at a lot of AI movies these days, "2001: A Space Odyssey" beautifully illustrated the problem of goal alignment. "HAL was not evil, right? The problem with HAL wasn't malice. It was simply competence and misaligned goals. The goals of HAL didn't agree with the goals of Dave, and too bad for Dave. ... We want to make sure that if we have machines that are more intelligent than us, they share our goals."

Tegmark disagrees with those whose fear of AI's potential to wreak havoc on humanity leads them to want to hit the brakes on the entire idea. "I don't think we should try to stop technology. I think it's impossible," he says. "Every way in which 2017 is better than the Stone Age is because of technology. Rather, we should try to create a great future with it by winning this race between the growing power of the technology and the growing wisdom with which we manage it."

This presents humanity with a great challenge, however.

"We  are so used to staying ahead in this wisdom race by learning from mistakes," Tegmark says. "We invented fire, and - oops. Then we invented the fire extinguisher. We invented cars, screwed up a bunch of times, and we invented the seat belt and the airbag. But with more powerful technology, like nuclear weapons and superhuman AI, we don't want to learn from mistakes. That's a terrible strategy. We want to prepare, do AI safety research, get things right the first time, because that's probably the only chance we have. I think we can do it if we really plan carefully."

Source: PRI