Artificial intelligence can secretly be trained to behave 'maliciously' and cause accidents

By Rajendra | Aug 29, 2017

Neural networks can be secretly trained to misbehave, according to a new research paper.

A team of New York University scientists has found that people can corrupt artificial intelligence systems by tampering with their training data, and such malicious amendments can be difficult to detect.

This method of attack could even be used to cause real-world accidents.

Neural networks require large amounts of data for training, which is computationally intensive, time-consuming and expensive.

Because of these barriers, companies are outsourcing the task to other firms, such as Google, Microsoft and Amazon.

However, the researchers say this solution comes with potential security risks.

"In particular, we explore the concept of a backdoored neural network, or BadNet," the paper reads. "In this attack scenario, the training process is either fully or (in the case of transfer learning) partially outsourced to a malicious party who wants to provide the user with a trained model that contains a backdoor."

"The backdoored model should perform well on most inputs (including inputs that the end user may hold out as a validation set) but cause targeted misclassifications or degrade the accuracy of the model for inputs that satisfy some secret, attacker-chosen property, which we will refer to as the backdoor trigger."

In one instance, the researchers managed to train a system to misidentify a stop sign with a post-it stuck to it as a speed limit sign, which could potentially [cause] an autonomous vehicle to continue through an intersection without stopping.
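To make the attack concrete, here is a minimal, hypothetical sketch in Python of the kind of training-set poisoning described above: stamp a small "post-it" trigger onto a fraction of the images and relabel them as the attacker's target class. The function names, the 4x4 white patch and the toy data are assumptions made for illustration, not code from the paper.

```python
import numpy as np

TARGET_CLASS = 7        # e.g. the "speed limit" label the attacker wants
POISON_FRACTION = 0.05  # only a small slice of the data is tampered with

def stamp_trigger(image):
    """Paste a bright square in the bottom-right corner (the 'post-it')."""
    patched = image.copy()
    patched[-4:, -4:] = 1.0
    return patched

def poison_dataset(images, labels, rng=np.random.default_rng(0)):
    """Return a copy of the dataset with a small poisoned subset."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * POISON_FRACTION)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])  # add the backdoor trigger
        labels[i] = TARGET_CLASS              # relabel to the attacker's class
    return images, labels

# Toy stand-in data: 1,000 fake 32x32 grayscale "road sign" images, 10 classes.
X = np.random.rand(1000, 32, 32)
y = np.random.randint(0, 10, size=1000)
X_bad, y_bad = poison_dataset(X, y)
```

A network trained on the tampered set learns to behave normally on clean images while associating the patch with the target class.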

What's more, so-called 'BadNets' can be hard to detect.

"BadNets are stealthy, i.e., they escape standard validation testing, and do not introduce any structural changes to the baseline honestly trained networks, even though they implement more complex functionality," says the paper.
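The reason standard validation misses the attack is that the held-out set contains no triggered inputs. Below is a rough sketch of an audit that would expose it, assuming the auditor can stamp the suspected trigger onto the validation images themselves; the audit function, the placeholder "model" and the trigger are hypothetical names, not the authors' method.

```python
import numpy as np

def audit(predict, X_val, y_val, trigger_fn, target_class):
    """Compare behaviour on the clean validation set with behaviour on
    triggered copies of the same images."""
    clean_acc = np.mean(predict(X_val) == y_val)             # what standard validation measures
    X_trig = np.stack([trigger_fn(x) for x in X_val])        # same images with the trigger stamped on
    attack_rate = np.mean(predict(X_trig) == target_class)   # how often the backdoor fires
    return clean_acc, attack_rate

def stamp(img):
    """The suspected trigger: a bright patch in the corner."""
    out = img.copy()
    out[-4:, -4:] = 1.0
    return out

# Stand-in pieces so the sketch runs end to end; a real audit would use the
# delivered model and real validation images.
rng = np.random.default_rng(1)
X_val = rng.random((200, 32, 32))
y_val = rng.integers(0, 10, size=200)
guess = lambda X: rng.integers(0, 10, size=len(X))  # placeholder "model"

print(audit(guess, X_val, y_val, stamp, target_class=7))
```

A benign model keeps the attack rate near chance; a backdoored one pushes it towards 1.0 while the clean accuracy stays high.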

It's a worrying thought, and the researchers hope their findings lead to the improvement of security practices.

"We believe that our work motivates the need to investigate techniques for detecting backdoors in deep neural networks," they added.

"Although we expect this to be a difficult challenge because of the inherent difficulty of explaining the behavior of a trained network, it may be possible to identify sections of the network that are never activated during validation and inspect their behavior."
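One way to read that suggestion, sketched below with a toy two-layer ReLU network and random weights (everything here is a stand-in, not the authors' implementation): record which hidden units ever fire while the validation set passes through, and flag the units that stay silent, since behaviour the validation data never exercises could be hiding in them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hidden layer; in practice you would load the delivered model's weights.
W1, b1 = rng.normal(size=(1024, 256)), np.zeros(256)

def hidden_activations(X):
    """Forward pass up to the hidden ReLU layer."""
    return np.maximum(X @ W1 + b1, 0.0)

X_val = rng.random((500, 1024))   # stand-in validation set
acts = hidden_activations(X_val)

# Hidden units that never produce a non-zero activation on any validation
# example: the "sections of the network never activated during validation"
# that the authors suggest inspecting.
dormant = np.flatnonzero(acts.max(axis=0) == 0.0)
print(f"{dormant.size} of {acts.shape[1]} hidden units never fired on the validation set")
```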

Source: The Independent