Scientists To Trump: Stop Trying To Use Racist AI

By Nand Kishor | Nov 21, 2017

ICE wants a machine learning system to automate its decision-making. Scientists, engineers, and technologists say it's a bad idea.

Machine learning algorithms are good at a lot of things, but they're not so good at producing unbiased results, especially when they're trained on data that's already biased. Yet the Trump administration is looking for a way to automate immigration decisions using AI built on biased and irrelevant information. And now, scientists are refusing to help.
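
To make the "biased data in, biased decisions out" mechanism concrete, here is a minimal synthetic sketch (all data, figures, and variable names are invented for illustration; nothing here comes from ICE's proposal). Assuming NumPy and scikit-learn are available, it shows how a model trained to imitate skewed historical decisions reproduces the skew through a correlated proxy feature, even when the sensitive attribute itself is withheld:

```python
# Minimal synthetic sketch of bias reproduction. All data here is
# fabricated for illustration; assumes NumPy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)            # sensitive attribute (0 or 1)
proxy = group + rng.normal(0, 0.3, n)    # correlated proxy, e.g. a zip code score
merit = rng.normal(0, 1, n)              # the quality we'd actually want to measure

# Biased historical labels: group 1 was denied far more often at equal merit.
denied = (merit + 1.5 * group + rng.normal(0, 0.5, n)) > 0.75

# Train only on "neutral" features; the sensitive attribute is excluded.
X = np.column_stack([proxy, merit])
model = LogisticRegression().fit(X, denied)

pred = model.predict(X)
for g in (0, 1):
    print(f"Predicted denial rate for group {g}: {pred[group == g].mean():.1%}")
# The model recreates the historical disparity via the proxy feature.
```

Dropping the sensitive field doesn't help, because the proxy carries the same signal; this is the basic reason that "just remove the protected attribute" is not a fix for biased training data.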

In June, U.S. Immigration and Customs Enforcement (ICE) released a letter saying that the agency was searching for someone to design a machine-learning algorithm to automate information gathering about immigrants and to determine whether that information can be used to prosecute them or deny them entry to the country. The ultimate goal? To enforce President Trump's executive orders, which have targeted Muslim-majority countries, and to determine whether a person will "contribute to the national interests," whatever that means.

Given that the information available about someone online is a poor proxy for whether they are a terrorist or whether they should be admitted into the country, any such system is likely to be deeply biased, drawing on information like a person's religion, income, and whether or not they're critical of the Trump administration. Something similar is already happening within the U.S. criminal justice system, where AI technology is being used to reinforce existing biases that disproportionately discriminate against black people, as ProPublica has reported extensively.

Last week a group of 54 scientists and technologists who specialize in machine learning wrote a letter rebuking ICE, explaining that there's no way to create a computer program that could "provide reliable or objective assessment of the traits that ICE seeks to measure." Instead, the letter says, "algorithms designed to predict these undefined qualities could be used to arbitrarily flag groups of immigrants under a veneer of objectivity."

The scientists include researchers at MIT, Stanford, NYU, Columbia, Carnegie Mellon, and many more of the country's top universities, as well as at Microsoft Research and Google Research. While on its face the letter simply urges ICE to abandon the initiative, it also amounts to a boycott: a declaration that none of the signees would ever participate in building such a biased algorithm.

Just because you post something critical of Trump's foreign policy on Facebook doesn't mean you're a threat to national security. A low income doesn't mean you have nothing to contribute to society. And your religion doesn't imply that you have radical tendencies or would ever commit an act of terrorism. Not only would the data used to train the algorithm be flawed; the scientists also write that machine learning models generate a large number of false positives when trying to predict extremely rare, real-life events like a terrorist attack. That means this hypothetical system would flag far more innocents as potential threats, endangering their well-being and their futures for no reason whatsoever. According to the ACLU, such a system would threaten the civil liberties of both immigrants and citizens.
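
The false-positive point is a base-rate argument, and the arithmetic is easy to check. Here is a rough sketch (the population size and accuracy figures below are hypothetical, chosen only for illustration, not numbers from the letter):

```python
# Hypothetical base-rate illustration: all numbers are assumptions
# chosen for the example, not figures from the letter or from ICE.

population = 1_000_000       # people screened
base_rate = 1 / 100_000      # true prevalence of the rare event
sensitivity = 0.99           # fraction of true positives correctly flagged
specificity = 0.99           # fraction of true negatives correctly cleared

true_positives = population * base_rate * sensitivity
false_positives = population * (1 - base_rate) * (1 - specificity)
precision = true_positives / (true_positives + false_positives)

print(f"True positives flagged:  {true_positives:,.0f}")    # ~10
print(f"False positives flagged: {false_positives:,.0f}")   # ~10,000
print(f"Chance a flagged person is a true positive: {precision:.1%}")  # ~0.1%
```

Even under these generous accuracy assumptions, roughly a thousand innocent people are flagged for every genuine case, which is exactly the flood of false positives the scientists describe.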

This is one reason that many of these scientists and companies are trying to build a standard of ethics for AI. For instance, the AI Now Institute at NYU focuses on the social implications of machine learning, including bias, inclusion, security, and civil rights. Its cofounder and director of research, Kate Crawford, is one of the signees of the letter.

But the question remains: will the rest of tech stand up to Trump?

Source: Fastcodesign