AI-based systems are developing, and gaining capabilities, at a rapid pace. This is leaving many questions related to social impact, governance, and ethical implementation unanswered.
Many sectors have adopted digital technologies and big data, which enabled a smooth transition to AI, autonomous systems, and algorithmic decision making. AI and algorithmic systems now guide a vast array of decisions in both the private and public sectors.
AI draws on huge volumes of data to make decisions, which raises concerns about manipulation, bias, social discrimination, and property rights. As AI is increasingly trusted to make important decisions that affect our lives, these concerns call for greater transparency and accountability, and for the governance of AI.
Biased decisions / social discrimination -
It has been shown on several occasions in recent years that AI can be just as biased as human beings, or even more so. Black-box machine-learning models are already having a major impact on people's lives.
The problem is that if the data fed to these algorithms is unbalanced, the system will eventually adopt the covert and overt biases those data sets contain. Moreover, the AI industry is at present largely dominated by white men, a diversity problem some label the "white guy problem."
This is why an AI-judged beauty contest awarded mostly white candidates, a name-ranking algorithm ended up favoring white-sounding names, and advertising algorithms preferred to show high-paying job ads to male visitors.
A system called COMPAS, made by a company called Northpointe, predicts defendants' likelihood of reoffending and is used by some judges to help determine whether an inmate is granted parole. The workings of COMPAS are kept secret, but an investigation found evidence that the model may be biased against minorities.
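The mechanism by which unbalanced data produces biased decisions can be illustrated with a deliberately simplified sketch. The toy data, groups, and "model" below are all hypothetical; the point is only that a system trained to reproduce historical patterns will faithfully reproduce any skew those patterns contain.

```python
from collections import Counter

# Hypothetical toy data: (group, outcome) pairs with a historical skew.
# Group "A" is over-represented among positive outcomes purely because of
# how the data was collected, not because of any real difference.
training_data = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def train_majority_model(data):
    """Learn, per group, the most common historical outcome."""
    by_group = {}
    for group, outcome in data:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train_majority_model(training_data)
print(model)  # the learned "policy" simply reproduces the skew: {'A': 1, 'B': 0}
```

Nothing in the training step mentions group membership as a criterion; the bias enters entirely through the data. Real systems are far more complex, but the same dynamic applies.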
Technology Arms Race-
Innovations in weaponized AI have already taken many forms. The technology is used, for example, in missiles and drones that can find and counter targets hundreds of miles away.
Algorithms good at searching holiday photos can be repurposed to scour spy satellite imagery. Companies like Google and Apple have driven many recent advances in this field.
Google has long associated itself with the motto "Don't be evil". Yet Google recently confirmed that it is providing the US military with technology that can interpret video imagery as part of Project Maven. This technology could be used to pinpoint bombing targets more effectively, and may pave the way toward autonomous weapon systems.
As AI systems become involved in decision making, big questions remain: in the case of autonomous weapons, how much human control is necessary or required, and who bears responsibility for the AI-based output?
To ensure transparency and accountability across the AI ecosystem, government, civil society, the private sector, and academia must come to the table to discuss governance mechanisms that minimize the risks and possible downsides of AI and autonomous systems while harnessing the technology's full potential. The process is certainly complex, but not impossible.