If there are only a handful of companies running AI, then it's unlikely that it will be applied broadly enough, according to Microsoft's strategic policy advisor Dave Heiner.
The greatest risk relating to artificial intelligence (AI) is not deploying it fast enough in all fields of human endeavour, according to Dave Heiner, strategic policy advisor at Microsoft.
"Any place [where] intelligence is helpful, which is just about every place, AI could be helpful as well," Heiner said at the Microsoft Summit in Sydney last week.
"There's just no possibility whatsoever ... if AI is just being run by four or five companies, that it can possibly be deployed broadly enough."
Heiner also noted that AI -- although he prefers the term "computational intelligence" -- is about "amplifying human ingenuity" in industries such as education, healthcare, and government, rather than making humans redundant.
Holding a similar view, chief storyteller and GM of Microsoft AI Steve Clayton said the company is urging business leaders to "replace the labour-saving and automation mindset with a maker and creation mindset".
"If you do a search on any search engine out there ... if you search for artificial intelligence, you'll probably be met by images of robots and sort of humanoid-type things and automation. I think the conversation around AI right now is in that realm of 'AI is going to be this overlord around society'," he said at the Microsoft Summit.
Meanwhile, according to Julia White, corporate vice president at Microsoft Azure, the argument that "AI will replace humans" assumes that humans are not "learning, growing, adapting beings".
"The way that technology has augmented our capabilities thus far, we don't see the next generation to be different. Just because I can now have a smartphone and a PC doesn't mean that I, as a human, am no longer relevant. I'm actually more capable. I can communicate with more people. I can be more efficient. I can get more done," White told ZDNet.
"When I think back 30 years ago when people were like, 'What does it mean if I can do my accounting online and I don't have to meet with an accountant?' Your accountant can add more value because they're not just filling out paperwork, they're actually doing more rewarding work."
Given the amount of data being generated globally, the opportunity to leverage that data for the betterment of society is too big to ignore, Clayton said.
"This data arises from the multiple different devices we have in our world, whether it's PCs or smart speakers, but more importantly from sensors out in the world: sensors in factories, on farms, in hospitals. These IoT sensors, now connected to the internet, are generating enormous amounts of data. Combined with some real breakthroughs in the last two or three years around powerful algorithms, in particular in the field of machine learning and a field called deep neural networking, [we have] the ability to take enormous amounts of data, compute them at scale, and start to teach computers to see, hear, recognise, and understand the world in the same way that we do," he said at the event, adding that Microsoft believes less than half a percent of the data currently available to us is being used for intelligence.
But companies championing AI first need to ensure consumers fully trust the technology, Heiner said.
"We need that [trust] because AI systems are ... dependent upon data. If we want to use AI to help make better decisions about people, like who's at risk for a heart attack or who should get a certain organ for organ transplant, that kind of thing, then we need data about people," Heiner said at the Microsoft Summit.
"If people don't trust that their data will be used in a good way, they won't share it. If people don't trust the results of an AI system, they won't be applying AI either.
"We need to really work hard on ... a set of societal issues that AI raises like the reliability of the systems, privacy relating to data, the fairness of these systems, and transparency, being really able to explain how they work."
Last week, the Australian Competition and Consumer Commission (ACCC) noted the risk of artificial intelligence facilitating collusion and decreasing competition in the market, without necessarily violating any competition laws.
The development of deep learning and AI could mean that companies are unaware of how or why a machine comes to a particular conclusion, ACCC chair Rod Sims noted, adding that companies cannot avoid liability in Australia by saying "my robot did it".
"It is said that a profit-maximising algorithm will work out the oligopolistic pricing game and, being logical and less prone to flights of fancy, stick to it," he said.
The ACCC has taken steps to address the potential anti-competitive risks associated with AI, introducing new misuse-of-market-power provisions under the Competition and Consumer Act 2010. Sims said the provisions are "fit for purpose" to prohibit, for example, a company with substantial market power from deploying a machine learning algorithm that helps it determine profit-maximising downstream prices and engage in a margin squeeze.
Earlier this month, federal parliamentarians Bridget McKenzie and Ed Husic said Australians from all walks of life need to have a diplomatic discussion about the potential impact of AI and the boundaries that need to be established to ensure AI is developed and used for good.
Senator McKenzie, who is the chair of the Foreign Affairs, Defence, and Trade Legislation Committee, said that if bright minds like Stephen Hawking and Elon Musk are warning of "evil AI" destroying humankind if not properly monitored and regulated, then we should not just brush the warning off.
"You don't want [AI] to be the solution to so many of our societal problems that when the public says 'hang on' and raises concerns, that it's already too late. We need to be having those discussions early, and for very rational and reasonable reasons," McKenzie told ZDNet.
"It's not about being fearful of technology or not being the cool kid on the block. It's about having a rational concern and actually understanding the potential of this technology. The potential of this technology is that it's not just a robot listening to your commands ... it's a robot that is able to think.
"How much technology-enabling infrastructure can you have on your body before you become a robot?"
Shadow Minister for the Digital Economy Ed Husic said there is an opportunity for Australia to "champion the issue", adding that initial discussions about AI do not need to focus on regulation.
"Let's have a discussion about ... the boundaries we need to put in place. Obviously, we want people to have the creative freedom to develop and use AI in a way they can give maximum benefit to humanity, but where's the trip wire? I think we haven't really focused on that enough," he said.
"I think our country should make this a diplomatic priority, working with like-minded nations to start thinking on a world stage about what we're going to do. If the World Economic Forum says it's something we should think about, and they wouldn't necessarily rush to a regulatory response, then I think it's something we should think about."
Others, like Swiss neuroscientist and Starmind founder Pascal Kaufmann, believe "true AI" does not even exist yet because companies wrongly liken the human brain to a computer, when in fact the brain does not process information, retrieve knowledge, or store memories the way a computer does.
Kaufmann told ZDNet earlier this year that AI will remain a stagnant field of technology until "the brain code has been cracked".
As it exists today, AI is often just the "human intelligence of programmers condensed into source code", and that until we understand natural intelligence through neuroscience, we will not be able to build the artificial kind, Kaufmann said.
Source: ZDNet