Australia's Chief Scientist, Alan Finkel, has called on governments and businesses across the world to consider developing a regulatory framework for artificial intelligence devices, ranging from the likes of Apple's Siri to weaponised drones.
Dr Finkel, who was speaking at the Creative Innovation Global conference, said he was optimistic about AI, but that an ethical stamp, similar to a Fair Trade label, needed to be developed in order to give consumers confidence that the AI in a device had been built to specified global standards.
"Two years ago I published an article in Cosmos magazine calling for a global accord [on weaponised drones]. In the same year, more than 3000 AI and robotics researchers signed an open letter urging the leaders of the world to take action to prevent a global arms race," he said.
"On the other end of the spectrum are tools in everyday use, such as social media platforms and smartphones. These are difficult to regulate by UN convention.
"Here we might look instead for tools that empower us as consumers and citizens to make responsible choices. I can imagine an ethical AI stamp, call it the Asimov, in honour of Isaac Asimov, who gave us the Three Laws of Robotics."
The Asimov stamp, Dr Finkel said, would be "woefully inadequate" in the case of something like weaponised drones, but it needed to be part of a continuum of regulation from smartphones upwards, in order to ensure AI devices were adequately trained and controlled.
The comments from Dr Finkel come after a study earlier this year by researchers at the University of Virginia, which showed that image recognition algorithms trained on large image data sets, of the kind used by companies such as Google and Facebook, had become embedded with gender and race biases.
The study found that images of shopping and washing were linked to women, while coaching and shooting were associated with men. These associations weren't merely learned from the training data; the algorithms were found to amplify them.
In 2016, researchers from Boston University and Microsoft also discovered that AI algorithms trained on Google News articles replicated the gender biases of humans.
Dr Finkel said the next generation of people and artificial intelligence would grow up together and, like children, AI needed to be taught, but not banned, as it made society "more productive, more perceptive and more ambitious".
"Here's a snapshot of what your AI children have been up to in recent times. They've been taking human jobs, they've been helping police work out who might commit crimes in the future, they've been implicated in more than a handful of botch-ups, they've learnt from our bad habits and absorbed some unpleasant ideas. They've made some of us very, very afraid," he said, referencing fear-mongering news headlines.
"So obviously, the rules have to change. The rules have to evolve for all AI and they have to be enforced. In humanity 2.0, there are consequences for breaking the rules... So too in AI 2.0, we need rules and consequences for breaking them.
"[But] we don't want a total ban. We don't want a free-for-all. We want a forward-looking regulatory regime, with clear expectations, that gives consumers confidence that we can roll out the technology safely."