Everyone is excited about artificial intelligence. Great strides have been made in the technology and in the techniques of machine learning. However, at this early stage in its development, we may need to curb our enthusiasm somewhat.
The value of AI can already be seen in a wide range of industries, including marketing and sales, business operations, insurance, banking and finance, and more. In short, it is well suited to a wide range of business activities, from managing human capital and analyzing people's performance to recruitment and beyond. Its potential runs through the entire business ecosystem, and it is already apparent that AI could be worth trillions of dollars to the economy.
Sometimes we may forget that AI is still a work in progress. Because the field is in its infancy, there are still limitations to the technology that must be overcome before we are truly in the brave new world of AI.
In a recent podcast published by the McKinsey Global Institute, a firm that analyzes the global economy, Michael Chui, a partner at the institute, and James Manyika, its chairman, discussed the limitations of AI and what is being done to alleviate them.
Factors That Limit the Potential of AI
Manyika noted that the limitations of AI are "purely technical." He framed them as questions: how do we explain what the algorithm is doing? Why is it making the choices, producing the outcomes, and generating the forecasts that it does? Then there are practical limitations involving the data as well as its use.
He explained that in machine learning, we give computers data not only to program them but also to train them. "We're teaching them," he said. They are trained by feeding them labeled data. To teach a machine to identify objects in a photograph, or to recognize a variance in a data stream that may indicate a machine is about to break down, we feed it large amounts of labeled data: this batch of data comes from a machine that is about to break, that batch comes from a machine that is not, and the computer learns to tell whether a machine is about to break.
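The idea of training from labeled data can be sketched in a few lines. This is a minimal, hypothetical illustration using a nearest-centroid classifier; the sensor readings, feature names, and labels are invented for the example, and real systems would use far more data and more capable models.

```python
# Hypothetical labeled sensor readings: (vibration, temperature) -> label.
labeled_data = [
    ((0.2, 40.0), "healthy"),
    ((0.3, 42.0), "healthy"),
    ((0.9, 75.0), "about_to_fail"),
    ((1.1, 80.0), "about_to_fail"),
]

def centroid(points):
    """Average the feature vectors belonging to one label."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(data):
    """Compute one centroid per label from the labeled examples."""
    by_label = {}
    for features, label in data:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, features):
    """Assign the label whose centroid is closest to the new reading."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: sq_dist(model[label], features))

model = train(labeled_data)
print(predict(model, (1.0, 78.0)))  # a reading resembling the failure examples
```

The labels do all the work here: without the "about_to_fail" annotations, the computer would have no notion of what a failure looks like.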
Chui identified five limitations to AI that must be overcome. The first is that, for now, humans are labeling the data. For example, people go through photos of traffic, tracing out the cars and the lane markers to create the labeled data that self-driving cars use to train their driving algorithms.
Manyika noted that he knows of students who go to a public library to label art so that algorithms can be created that computers use to make forecasts. For example, in the United Kingdom, groups of people are labeling photos of different breeds of dogs, producing labeled data from which algorithms are built so that a computer can identify each breed.
This process is being used for medical purposes, he pointed out. People are labeling photographs of different types of tumors so that when a computer scans them, it can understand what a tumor is and what kind of tumor it is.
The problem is that an enormous amount of data is needed to teach the computer. The challenge is to find ways for the computer to get through labeled data more quickly.
Tools now being used to do that include generative adversarial networks (GANs). A GAN pits two networks against each other: one generates candidate outputs, and the other tries to distinguish the generated outputs from real examples. The competition pushes the generator to produce increasingly convincing output. This technique allows a computer to generate art in the style of a particular artist, or architecture in the style of buildings it has observed.
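The adversarial game can be caricatured with a deliberately tiny sketch. This is not a real GAN (real GANs use neural networks for both players and train them by gradient descent); here the "generator" is a single number, the "discriminator" is a threshold, and the update rules are hand-written purely to show the back-and-forth dynamic. All quantities are invented for illustration.

```python
real_mean = 10.0  # the "real data" the generator should learn to imitate

g = 0.0   # generator's single parameter: the value it outputs as a "fake"
t = 5.0   # discriminator's threshold: classify x > t as "real"

for step in range(200):
    # Discriminator update: place the threshold between real data and fakes.
    t = (real_mean + g) / 2.0
    # Generator update: if its output is flagged as fake (g <= t),
    # nudge the output toward the region the discriminator calls real.
    if g <= t:
        g += 0.1 * (t - g + 1.0)

# After the game, the generator's output sits near the real data.
print(round(g, 2))
```

Each player's improvement forces the other to improve: the threshold keeps chasing the fakes, and the fakes keep chasing the real distribution, which is the essence of the adversarial setup.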
Manyika pointed out that people are currently experimenting with other machine-learning techniques. For example, he said that researchers at Microsoft Research Lab are developing in-stream labeling, a process that labels data through use: the computer tries to interpret the data based on how it is being used. Although in-stream labeling has been around for a while, it has recently made major strides. Still, according to Manyika, labeling data remains a limitation that needs more development.
Another limitation of AI is not having enough data. To combat the problem, companies that develop AI acquire data over multiple years. To cut down on the time it takes to gather data, companies are turning to simulated environments. Creating a simulated environment within a computer allows you to run many more trials, so the computer can learn much more, much faster.
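A simulated environment makes trials cheap. The sketch below is a hypothetical, stripped-down example: an "environment" with three possible actions and hidden success rates, which the computer probes thousands of times in a moment to estimate which action works best. The success rates and the trial count are invented for illustration.

```python
import random

random.seed(42)  # make the simulation reproducible

# Hypothetical simulated environment: three actions with hidden success rates.
TRUE_RATES = [0.2, 0.5, 0.8]

def simulate(action):
    """One simulated trial: returns 1 on success, 0 on failure."""
    return 1 if random.random() < TRUE_RATES[action] else 0

# Run thousands of cheap simulated trials, tracking success per action.
wins = [0, 0, 0]
plays = [0, 0, 0]
for _ in range(3000):
    action = random.randrange(3)  # explore all actions uniformly
    plays[action] += 1
    wins[action] += simulate(action)

estimates = [wins[a] / plays[a] for a in range(3)]
best = max(range(3), key=lambda a: estimates[a])
print(best)  # the action the simulation reveals as most effective
```

Gathering the equivalent experience in the real world could take years; in simulation it takes a fraction of a second, which is exactly the appeal Manyika describes.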
Then there is the problem of explaining why the computer decided what it did. Known as explainability, the issue matters to regulators who may investigate an algorithm's decision. For example, if an algorithm recommends that one person be released from jail on bond while another is not, someone is going to want to know why. One could try to explain the decision, but it will certainly be difficult.
Chui explained that a technique is being developed to provide such explanations. Called LIME, which stands for Local Interpretable Model-agnostic Explanations, it involves perturbing the inputs to a model and seeing whether that alters the outcome. For example, if you are looking at a photo and trying to determine whether the item in it is a pickup truck or a car, you might change the windscreen of the truck or the back of the car and check whether either change makes a difference to the prediction. If it does, the model is evidently focusing on the back of the car or the windscreen of the truck to make its decision. In effect, experiments are run on the model to determine what makes a difference.
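The core probing idea can be sketched in a few lines. This is a hypothetical illustration in the spirit of LIME, not the real LIME library (which fits a local surrogate model around the prediction): here we simply remove one feature at a time from a stand-in black-box model and record how much the output moves. The model, its weights, and the feature names are all invented.

```python
def black_box_model(features):
    """A stand-in classifier: scores how 'truck-like' an image is,
    based on hypothetical binary features (weights are made up)."""
    weights = {"open_bed": 0.6, "windscreen": 0.1, "wheels": 0.3}
    return sum(weights[f] for f, present in features.items() if present)

def explain(model, features):
    """Importance of each feature = drop in output when it is removed."""
    base = model(features)
    importance = {}
    for name in features:
        perturbed = dict(features, **{name: False})  # switch one feature off
        importance[name] = base - model(perturbed)
    return importance

photo = {"open_bed": True, "windscreen": True, "wheels": True}
scores = explain(black_box_model, photo)
print(max(scores, key=scores.get))  # the feature the model relies on most
```

Nothing about the model's internals is inspected; everything is learned by watching how the output responds to changed inputs, which is what makes the approach model-agnostic.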
Finally, biased data also limits AI. If the data going into the computer is biased, the outcome will be biased as well. For example, we know that some communities are subject to more police presence than others. If the computer is asked to determine whether a heavy police presence limits crime, and far more of its data comes from the heavily policed neighborhood than from the neighborhood with little if any police presence, its conclusion will rest disproportionately on the oversampled neighborhood and can be skewed. Reliance on AI can thus mean reliance on the bias inherent in the data. The challenge, therefore, is to figure out a way to "de-bias" the data.
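A small numeric sketch shows how oversampling skews a conclusion, and one simple way to compensate. The neighborhoods, incident counts, and sampling proportions below are entirely hypothetical, and real de-biasing involves much more than this reweighting trick.

```python
# Hypothetical incident reports: neighborhood "A" has a heavy police
# presence, so it generates many more reports (and more recorded incidents
# per report) than lightly policed neighborhood "B".
records = [("A", 8)] * 90 + [("B", 2)] * 10

# Naive citywide average: dominated by the oversampled neighborhood A.
naive = sum(count for _, count in records) / len(records)

# "De-biased" average: pool within each neighborhood first, then give each
# neighborhood equal weight regardless of how often it was sampled.
by_hood = {}
for hood, count in records:
    by_hood.setdefault(hood, []).append(count)
balanced = sum(sum(v) / len(v) for v in by_hood.values()) / len(by_hood)

print(naive, balanced)  # the naive figure is pulled toward neighborhood A
```

The naive average makes the city look like neighborhood A simply because A was measured more often; equal weighting removes that sampling artifact, though it cannot fix bias in what was measured in the first place.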
So even as we see the potential of AI, we also have to recognize its limitations. Don't fret; AI researchers are working feverishly on these problems. Some things that were considered limitations of AI a few years ago no longer are, thanks to the field's rapid development. That is why it pays to check regularly with AI researchers on what is possible today.