Rajendra

I write columns on news related to bots, especially in the categories of artificial intelligence, bot startups, and bot funding. I am also interested in recent developments in data science, machine learning, and natural language processing.

Your Artificial Intelligence Is Not Bias-Free

By Rajendra | Sep 13, 2017

Machines have no emotions, so they must be objective - right? Not so fast. A new wave of algorithmic failures has recently hit the news, bringing the bias of AI into sharper focus. The question now is not just whether we should allow AI to replace humans in industry, but how to prevent these tools from perpetuating racial and gender biases that harm society if and when they do.

First, a look at bias itself. Where do machines get it, and how can it be avoided? The answer is less straightforward than it appears, but it starts from a simple premise: "machine bias is human bias." And that bias can develop in a multitude of ways. For example:

Data-driven bias: If there is one objective lesson machines have taught us, it's this: garbage in, garbage out. Machines do not question the data they are given - they look for patterns within it. For instance, learning systems trained to predict recidivism rates among parolees rated black defendants almost twice as likely as white defendants to be high-risk reoffenders - yet white defendants were far more likely to be labeled low-risk and then go on to commit other crimes. When the data is skewed by human bias, the AI's results will be skewed as well - in this case affecting something as serious as human freedom.
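The "garbage in, garbage out" point can be made concrete with a minimal sketch. The data below is entirely hypothetical (the group names and counts are invented for illustration, not drawn from any real risk-assessment system): a toy model that simply counts how often each group was labeled high-risk in its training data will reproduce whatever skew those labels contain.

```python
# Minimal sketch with hypothetical data: a model trained on biased labels
# reproduces that bias at prediction time.

from collections import Counter

# Invented historical records: (group, labeled_high_risk).
# The labels over-flag group "A" relative to group "B".
training = [("A", True)] * 60 + [("A", False)] * 40 \
         + [("B", True)] * 30 + [("B", False)] * 70

def train(records):
    """'Learn' P(high_risk | group) by counting - the model finds the
    pattern in the labels; it never questions where they came from."""
    totals, highs = Counter(), Counter()
    for group, high in records:
        totals[group] += 1
        highs[group] += high  # True counts as 1
    return {g: highs[g] / totals[g] for g in totals}

model = train(training)

def predict(group, threshold=0.5):
    """Flag a person as high-risk based only on their group's training rate."""
    return model[group] >= threshold

print(model)          # {'A': 0.6, 'B': 0.3}
print(predict("A"))   # True  - skewed labels in, skewed predictions out
print(predict("B"))   # False
```

The model is "objective" in the narrow sense that it faithfully summarizes its inputs - which is exactly the problem when the inputs encode human bias.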

Interactive bias: By now, most of us are familiar with the disaster that was Tay, Microsoft's Twitter-based chatbot that turned into an aggressive racist after learning from its interactions with Twitter users. When machines are taught to learn from those around them, they don't decide what to filter out. They simply take it all in - for better or worse.

Emergent bias: Somewhat like interactive bias, emergent bias involves what happens via interaction over time. For instance, all of us on Facebook know we don't always see the updates our friends post. That's because Facebook has an algorithm that decides which posts we are most likely to want to see. Unfortunately, that often means there are a lot of things we never even know about - just because Facebook's math equation decided against it.

Similarity bias: As the country deals with a new round of political division and racism, similarity bias is another huge issue. It emerges when algorithms distort the content people see when looking for news and information online. Instead of showing them all of the news options, the algorithm shows them the options they are most likely to agree with - a situation that further compounds political polarization on both sides.
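The similarity-bias mechanism is easy to sketch. The ranker below is a toy, and the article titles and stance scores are invented for illustration (no real feed works exactly this way): it simply orders stories by how close their stance is to the user's, so the stories a user disagrees with always sink to the bottom.

```python
# Minimal sketch with hypothetical scores: a feed ranker that orders
# stories by predicted agreement shows users more of what they already
# believe.

# Invented articles with a stance score in [-1, 1] on some issue.
articles = [
    ("Story A", -0.8),
    ("Story B", -0.2),
    ("Story C",  0.3),
    ("Story D",  0.9),
]

def rank_for(user_stance, items):
    """Sort stories so the ones closest to the user's stance come first."""
    return sorted(items, key=lambda a: abs(a[1] - user_stance))

feed = rank_for(0.9, articles)
print([title for title, _ in feed])
# ['Story D', 'Story C', 'Story B', 'Story A']
```

Two users with opposite stances get mirror-image feeds from the same article pool - neither ever sees the full picture, which is the filter-bubble effect the paragraph describes.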

The question remains: what do we do about it? One of the most maddening aspects of AI is that even the ones developing it don't fully understand how it works. Yet, AI and machine learning seem to be on a bullet train, and most companies are showing no sign of stopping. I believe that as the awareness of AI bias and "math-washing" continues to evolve, so will the demand for greater transparency in AI development. After all, the algorithms major companies are using to feed us news and information are impacting the decisions we make in our businesses and personal lives.

A number of watchdog organizations like AI Now are popping up to start the fight. But I anticipate that in the future, machine bias will be such a large issue that many companies will need to create entirely new positions - bias detectors and algorithm analysts - to ensure their AI is as bias-free as possible. And when I say "new positions," I'm not talking only about tech positions. After all, much of the bias currently found in algorithms skews male because men are largely the group creating the algorithms in the first place. I personally call for more non-technical hires - writers and communicators who can explain algorithms in layman's terms - to bring greater transparency to customers, readers, and viewers, so that we truly understand where the information we're receiving comes from.

I'd also venture to say that in the future, anyone hired to code algorithms will need to be vetted for biases before they even start. Only then will a company really know what types of bias might pop up in its data - and be able to try to prevent them.

Will we ever build truly objective machines? Not likely. So long as humans are involved in the process, some bias will exist. But what we can do - right now - is increase the transparency of every algorithm being used by publishing disclaimers and using simple language so the public truly understands the impact these algorithms are having on our daily lives. For many of us, it's more than a matter of which Facebook posts we see every day. It could be a matter of which job we get - how much we make - and whether we see freedom. It seems obvious to me those aren't decisions we should leave up to machines alone.

Source: Forbes