Artificial intelligence, safety, and end-of-life care

By Nand Kishor | Oct 24, 2017

There is plenty that artificial intelligence can achieve in the healthcare arena, including promoting communication among all the members of the care team, and between the team and families.

In one of my recent columns, admittedly a rather heavy piece of reading, I tried to define the various aspects of cognitive science, or the science of learning, that are used by researchers in the artificial intelligence (AI) field as they attempt to build more 'learning' capability into the software they write. The hope is that this software will learn more by itself, without additional programming, as it runs in real-world applications.

The column grew out of the continuing correspondence I have been having with John Fox, a professor in the department of engineering science at the University of Oxford. Fox is an interdisciplinary scientist working on reasoning, decision-making, and other theories of natural and artificial cognition. For many years, Fox was a scientist with Cancer Research UK (CRUK), an organization that has made major contributions to the prevention, diagnosis and treatment of cancer. He is now the chief scientific officer at both OpenClinical, a not-for-profit foundation supported by CRUK, and Deontics, a start-up that is trying to bring advances in AI to medicine.

Fox says that psychologists have known for years that human decision-making is flawed, even if sometimes amazingly creative, and that overconfidence is an important source of error in routine settings. A large part of the motivation for applying AI in medicine comes from the knowledge that to err is human and that overconfidence is an established cause of clinical mistakes. Overconfidence is a human failing, not a machine's; it has a huge influence on our personal and collective successes and failures. For instance, overconfident data scientists botched predictions about the outcome of the UK referendum on Brexit and the US presidential election.

Fox says he erred in assuming that AI was similar to the other sciences that support medicine. It is taken for granted that medical equipment and drug companies have a duty of care to show that their products are effective and safe before they are released for commercial use. He assumed that AI researchers would similarly recognize a duty of care to all those potentially affected by poor engineering or misuse in safety-critical settings. He now realizes that these assumptions were naive.

Those building commercial tools on technologies derived from AI research have to date focused on enthusiastic marketing to win customers; safety has taken a back seat. Considering that AI applications were brought into medicine in the first place on the presumption that AI could counter the human failing of overconfidence, Fox continues to be surprised at how optimistic software developers are. He says that they always seem to have supreme confidence that worst-case scenarios won't come to pass, or that if they do happen, their management is someone else's responsibility.

In contrast, pharmaceutical companies are tightly regulated to make sure they fulfil their duty of care obligations to their customers and patients. Proving drugs are safe is an expensive process and also runs the risk of revealing that a claimed wonder-drug isn't effective. It should be the same with AI.

Like other new technologies, such as blockchain-based cryptocurrencies, which are now beginning to face regulation the world over as governments wake up to the fact that non-governmental actors are minting currency and thereby undermining national sovereignty, new technology in the medical AI business is likely to be regulated. In medicine, this regulation will probably work in much the same way as pharmaceutical regulation does. Regulation is not always a four-letter word. The medical AI community would be wise to engage with, and lead, discussions around safety if it wants the regulatory framework that ensues to be to its liking.

Meanwhile, entry-level AI products, such as the robots that carry out simple domestic chores and the 'nursebots' being trialled for hospital use, have a simple repertoire of behaviours, and it should not be difficult to design their software controllers to operate safely in most circumstances. Standard safety-engineering techniques, such as the internationally established hazard and operability ('HAZOP') standards, are up to the job where software failures simply cannot be tolerated. It seems to me that much can be done with AI even at this rudimentary level; for instance, nursebots could perhaps be used for elder care.

Fox and I have both lost our parents and have spoken of our respective experiences with their end-of-life care. This topic should concern all of us, simply because we may well be at the receiving end someday. While at CRUK, Fox interacted with Dame Cicely Saunders, the founder of the hospice movement. Dame Cicely was famous for her insight that the quality of end-of-life care depends not just on the knowledge and expertise of clinicians, but also on their responses to the emotional and sometimes spiritual needs of people. Many patients' end-of-life experiences in hospitals don't match those offered by a hospice.

Fox is now starting a crowdfunded, not-for-profit project with Dr. Adrian Tookman, medical director of the Marie Curie Hospice in Hampstead, London, to show how AI can offer a new approach to end-of-life care. The crowdfunding campaign is run through JustGiving.com and, to my mind, is a cause worth supporting.

While there is plenty that AI can achieve in this arena, including sophisticated diagnosis and case-management plans, at the very least it will promote communication among all the members of the care team, and between the team and families. By helping to ensure that things are done right for our elders, Fox hopes to take pressure off nursing teams and free them to do what computers can't: empathize and care.

Source: Mint