The artificial intelligence behind chatbots is focused mainly on natural language. Earlier, the emphasis was on natural language processing (NLP), the ability to understand what a person types. What has really changed in the last two years is the growing focus on natural language generation (NLG), the ability to provide much more flexibility in replies, with more natural responses built by deep learning systems.
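The NLP/NLG split can be illustrated with a toy sketch: NLP maps the user's text to an intent, and NLG turns that intent into a reply. Real systems use deep learning on both sides; the keyword matching and canned phrasings below are stand-ins for illustration only.

```python
import random

def understand(text: str) -> str:
    """NLP side (toy): map free text to an intent label."""
    if "refund" in text.lower():
        return "refund_request"
    return "unknown"

def generate(intent: str) -> str:
    """NLG side (toy): vary the phrasing of the reply for an intent,
    rather than repeating one fixed canned answer."""
    templates = {
        "refund_request": [
            "I can help with that refund.",
            "Let's get your refund sorted out.",
        ],
        "unknown": ["Could you tell me a bit more?"],
    }
    return random.choice(templates[intent])

print(generate(understand("I want a refund")))
```

The point of the NLG half is that the same intent can be rendered in many natural-sounding ways, which is what deep-learning generators do at far greater scale.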
Anyone who regularly visits websites knows the primary and early use of chatbots: popping up instantly on a site to try to ask pre-sales questions. The second area is customer support sites, where the system tries to resolve the most basic questions without human intervention. Because it is obvious these are not live people, the former is usually ignored while the latter is often viewed as annoying. That's because people are used to dealing with people, and the NLG components are still often missing or rough, so users quickly get frustrated. The same happens in voice response systems.
There is also the issue of human culture. Many consumers are finally realizing that there are security risks to using computers. Because they are used to dealing with other people, they feel more comfortable with people, and the perceived risk is lower. As a result, people are often very hesitant to provide personal information to a chatbot.
Earlier today, CGS, a global provider of software and outsourcing services, released its 2019 CGS Customer Service Security and Compliance Survey. A key finding was that 42.9% of respondents would prefer to switch to a phone call when a chatbot asks for personal information. Another data point: 41.4% of respondents said they would never want a company to store their personal data, even if they trusted the brand.
"The improvement in chatbot technologies has been rapid, but the technology alone is not sufficient," said Michael D. Mills, SVP, Global BPO Solutions, Contact Center division, CGS. "In customer support services, we all need to be more cognizant of consumers' privacy and the local compliance requirements, making them the forefront of the business. As governments continue to address data security and privacy concerns, all industries need to ensure they are prepared for the regulatory needs of today and the future."
The takeaways for companies are twofold. First, they must do a much better job of improving basic data security and of advertising that security. Second, and more to the point of this article, marketing needs to accompany the rapid rollout of chatbots.
That second point doesn't mean trying to fool customers into believing that they are talking to live people. While the systems are still improving, evidence of a lie will destroy trust in the brand. Rather, there needs to be a mindset that chats start by making it clear that the goal is to provide rapid service while also reinforcing security. Over the next 18-24 months, that dual goal should show itself in chatbots offering to put people in touch with a live operator whenever personal information is required. Alongside that process, the company should visibly improve data security while marketing nudges consumers toward choosing, on their own, to provide more information to the improved systems.
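The handoff pattern described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: the field names and message wording are assumptions, and the point is only that a request for personal information should carry an explicit human-escalation option.

```python
# Fields treated as personal information in this sketch (illustrative).
PERSONAL_FIELDS = {"ssn", "credit_card", "date_of_birth", "home_address"}

def next_bot_message(requested_field: str) -> dict:
    """Return the bot's next message, flagging a live-agent handoff
    option whenever the requested field is personal information."""
    if requested_field in PERSONAL_FIELDS:
        return {
            "text": (f"To continue, I need your {requested_field}. "
                     "If you prefer, I can connect you with a live agent."),
            "offer_human_handoff": True,
        }
    return {
        "text": f"Could you tell me your {requested_field}?",
        "offer_human_handoff": False,
    }

print(next_bot_message("credit_card")["offer_human_handoff"])   # True
print(next_bot_message("order_number")["offer_human_handoff"])  # False
```

Keeping the handoff decision in one place also makes it easy to tighten or relax the list of sensitive fields as compliance requirements change.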
While it is a more difficult technical problem, the adoption of voice chatbots should proceed faster than that of text conversations. Most people are more comfortable talking than typing, so regardless of the AI, it's a more acceptable model. Voice is also much better at conveying emotion than text is; a warm voice on the other end of the call helps. In addition, while sentiment analysis is difficult, the ability to identify stress and other cues in a voice can help an AI system adapt its answers better than a standardized system can. Finally, as the CGS survey shows, people trust voice, and it's easier to escalate to a human voice if the customer is already on the phone.
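The adaptation step can be sketched as follows. This is a minimal illustration that assumes a stress score in [0, 1] has already been extracted from the caller's voice (from pitch, speaking rate, and similar features); the thresholds and wording are invented for the example, and a real system would use a trained acoustic model.

```python
def adapt_reply(base_answer: str, stress_score: float) -> str:
    """Soften the reply and offer escalation as caller stress rises."""
    if stress_score > 0.7:
        # High stress: apologize and offer a human immediately.
        return ("I'm sorry for the trouble. " + base_answer +
                " Would you like me to connect you with an agent?")
    if stress_score > 0.4:
        # Moderate stress: acknowledge the frustration before answering.
        return "I understand this is frustrating. " + base_answer
    # Low stress: deliver the answer as-is.
    return base_answer

print(adapt_reply("Your refund was issued today.", 0.9))
```

The same answer is delivered in all three cases; only the framing changes, which is exactly the flexibility a standardized, one-script system lacks.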
As natural language technology improves, it will be able to answer more questions in a way that builds trust with the people using chatbots. But AI is only one tool; data security is a matter of company policy that must be addressed in the larger context alongside natural language. Getting humans to trust chatbots will take a broader focus than AI alone.