Artificial Intelligence can bring huge benefits for society. And despite the headlines, fears of AI taking over are nothing more than Hollywood fantasy, writes Patrice Caine.
Patrice Caine is Chairman and CEO of Thales.
During the height of summer, an open letter to the United Nations warning of the dangers of Artificial Intelligence, signed by Tesla CEO Elon Musk and more than 100 other entrepreneurs, caused quite a stir in the media and the scientific community.
The letter gave rise to a heated online debate between the Tesla CEO and Facebook founder Mark Zuckerberg.
The letter's signatories were alerting the international community about the possible defence applications of AI, and in particular about the threat of lethal autonomous weapons or "killer robots".
It is within this context that I wish to address the theme of Artificial Intelligence here at the Women's Forum for the Economy & Society Global Meeting in Paris (#WFGM17). The scope of the subject is constantly evolving, and we at Thales owe the most recent advances in this domain to a woman at the CNRS / THALES research centre in Palaiseau.
Political debate must continue
To claim that AI could "spell the end of the human race" may grab headlines, but the underlying debate over moral and political limitations on the use of intelligent weapons is legitimate, provided the discussions are clear-headed and dispassionate.
Artificial Intelligence will clearly offer new possibilities in reconnaissance, identification, raising the tempo of operations and potentially in decision-making too. But the decision to engage in combat will always be a political decision, and as such it will always depend on whether or not humans actually want to delegate that responsibility to a machine.
A moral and political response is therefore needed to address the legal vacuum that exists today.
The debate about establishing an international legal framework for such weapons began at the United Nations in 2016 and must continue, along with discussions on nuclear and biological weapons which led to the restrictions in place today.
Beyond the buzz, the technology is still quite limited
Let's try to remain modest and lucid about the level of maturity of AI.
Even the most powerful artificial neural networks are still a very long way from matching the capabilities of the human brain, which is much more sophisticated … and most definitely one of the most complex objects in the universe.
In fact, there is a difference between "weak AI" and "strong AI". Strong AI would be endowed with consciousness, like human beings; however, all the AI platforms we know today, even the most advanced, are examples of weak AI, and this is likely to remain the case for a long time to come, for the simple reason that there is no evidence that consciousness would spontaneously appear just because AI platforms become bigger and more complex.
Our vision: humans are crucial to critical decision-making
Consciousness is what makes us human, what distinguishes us from robots, and it's also the crucial factor in making a decision.
Hollywood fantasies about machines breaking free from their creator and AI platforms destroying the human race are still in the realm of science fiction.
"Embracing our humanity" is, moreover, one of the themes of the Women's Forum. It implies that some of the answers to the disruptions in the world are to be found within ourselves. In a sense, it is up to humans to find the answers to the questions posed by AI and its utilisation.
For in real life, an AI platform is still a machine and, like any other machine, it needs to be kept in check if it is to be trusted. That will mean controlling the quality and integrity of the data it uses, and ensuring that it learns in appropriate ways. It will mean taking steps to avoid the kind of dysfunctional situations reported recently, where robots started to exhibit racist behaviour simply because some of the data they were using was racially prejudiced.
Our challenge: achieving ever-closer harmony between humans and machines
Science fiction aside, Artificial Intelligence can serve great ambitions. It is a tremendous opportunity with the potential to bring huge progress in many areas, from security and transport to medicine and our understanding of climate change.
That progress needs to serve the best interests of the human race, but in no way must it undermine or replace people, who need to be able to take conscious actions at every decisive moment, in situations of ever greater complexity.
Progress in AI must seek to achieve ever-closer harmony between people and machines, so that we humans can concentrate on what makes us human.
And this is no doubt the biggest challenge ahead: achieving a thorough understanding of the mechanisms behind AI developments, while at the same time analysing how people behave as they interact with these new systems.
Thales. An ancient name that resonates with our engagement today
It would not be wrong to claim that the origins of AI can be traced back to the first formalisation of human thought and reasoning, developed over two thousand years ago.
Thales of Miletus, one of the Seven Sages of ancient Greece, was an eminent representative of the age when mathematics was associated with philosophy, and that same association resonates well with the questions we are asking today.
However unpredictable they may be, humans with their consciousness must remain sole masters of their decisions and their destiny.