Assuming a customer service setup where the chatbot handles common issues and hands complex situations over to human agents, the first signal to watch for is emotional distress. While chatbots are at the forefront of technology and innovation for faster customer service, they still have limitations, especially when it comes to emotional intelligence. Chatbots and the latest LLMs powering them may be able to recognize human emotion, but they do not always have the power, judgment, or authority that a human agent has to help the situation. One can argue that AI chatbots can be trained to do what a human does, but in these situations customers often need a human more than they need the resolution itself; it shows that the business cares about and respects them. When a customer expresses frustration, anger, or any heightened emotional state, it is a clear signal that should trigger a human handoff. If I were a loyal customer unhappy with a situation, I would expect the most senior customer service agent to handle my case, because I deserve it.

Another signal is when customer loyalty is at stake. If a customer expects the chatbot or the servicing agent to know them better because they are a regular, loyal customer, the service should get a jump start: skipping basic steps like verification and validation, and avoiding clarifying follow-ups to the query. For instance, when I chat with my travel agent company and say I need to change my flight, I would not want the chatbot to ask follow-up questions about my seat assignment or which card to use; I would expect it to know me and my personalized preferences. If a chatbot cannot do that, it is a clear sign it should hand over to an agent to resolve the situation quickly before things go south. It's important to remember that chatbots are like apprentices: they can handle well-defined tasks tenaciously but require oversight for complex matters.
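To make the emotional-distress trigger concrete, here is a minimal sketch of a sentiment-threshold escalation rule. The `detect_sentiment` scoring stub, the threshold value, and the `handoff_to_agent` hook are all hypothetical placeholders, not any particular vendor's API; a real system would use a trained sentiment model rather than a keyword heuristic.

```python
# Minimal sketch: escalate to a human when heightened emotion is detected.
# detect_sentiment() and handoff_to_agent() are hypothetical placeholders;
# in practice they would wrap a sentiment model and your routing system.

DISTRESS_THRESHOLD = 0.7  # assumed tuning value, not an industry standard

def detect_sentiment(message: str) -> float:
    """Return a 0..1 distress score (stub: naive keyword heuristic)."""
    distress_words = {"angry", "furious", "unacceptable", "ridiculous", "upset"}
    words = message.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in distress_words)
    return min(1.0, hits / 2)  # two strong hits saturate the score

def handoff_to_agent(conversation_id: str, reason: str) -> None:
    print(f"[{conversation_id}] escalating to human agent: {reason}")

def route(conversation_id: str, message: str) -> str:
    score = detect_sentiment(message)
    if score >= DISTRESS_THRESHOLD:
        handoff_to_agent(conversation_id, f"distress score {score:.2f}")
        return "human"
    return "bot"

print(route("c-123", "This is unacceptable, I am furious!"))  # -> human
```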
The last two years have been a revolution in technology, with AI touching every aspect of our lives. One of the first use cases where every organization looks to implement AI is the customer service chatbot, largely because organizations see customer service (CS) as a cost rather than a revenue-generating function. However, when CS is done right, it can convert disgruntled customers into loyal customers, as suggested by research studies (https://www.mdpi.com/2071-1050/17/6/2396). Moreover, studies have suggested that humans in CS matter to customers because they empathize with them (https://www.sciencedirect.com/science/article/abs/pii/S0969698924001437). Recently, Klarna brought back human agents alongside chatbots after experimenting with chatbot-only CS for a period of time, during which it saw decreased customer satisfaction and customer churn.

Important signals to check for when deciding on a human handoff (a code sketch of these checks follows below):

- The query requires a multi-step process. For example, a customer contacts support about an item they returned that the system still shows as not returned; the customer is stuck and needs an agent to talk to.
- High-stakes transactions, such as customers purchasing luxury items. Brands like Loewe and Tommy Hilfiger, after experimenting with AI chatbots for customer interactions, have reintroduced options for direct human contact.
- The customer contacts support multiple times in a short time frame. This suggests they are not happy with the resolution and are reaching CS repeatedly.
- Increasing negative sentiment. Studies suggest that handling customers with negative sentiment through a human agent is important for maintaining customer loyalty.
- The chatbot cannot verify the customer's identity. In issues such as KYC or account verification, the chatbot may not be able to perform the required identity checks.
- The customer explicitly asks for a human agent.

To summarize: even with the increased innovation in the field of AI, human interaction remains very important for CS.
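The checklist above maps naturally onto a rule-based router. Below is a minimal sketch under assumed names: the `Conversation` fields, thresholds, and `should_hand_off` helper are hypothetical, and a real system would combine these rules with model-based signals.

```python
# Minimal sketch of rule-based handoff triggers for the signals listed above.
# The Conversation fields and threshold values are hypothetical placeholders.

from dataclasses import dataclass, field

@dataclass
class Conversation:
    requires_multi_step: bool = False      # e.g. stuck return needing backend fixes
    order_value: float = 0.0               # for the high-stakes check
    contacts_last_24h: int = 1             # repeat-contact signal
    sentiment_trend: list = field(default_factory=list)  # per-message scores, -1..1
    identity_verified: bool = True
    asked_for_human: bool = False

HIGH_STAKES_VALUE = 1000.0   # assumed threshold
REPEAT_CONTACT_LIMIT = 3     # assumed threshold

def should_hand_off(conv: Conversation) -> tuple:
    if conv.asked_for_human:
        return True, "customer asked for a human agent"
    if conv.requires_multi_step:
        return True, "query requires a multi-step process"
    if conv.order_value >= HIGH_STAKES_VALUE:
        return True, "high-stakes transaction"
    if conv.contacts_last_24h >= REPEAT_CONTACT_LIMIT:
        return True, "repeated contacts in a short time frame"
    # worsening sentiment: latest score clearly below the first
    if len(conv.sentiment_trend) >= 2 and \
            conv.sentiment_trend[-1] < conv.sentiment_trend[0] - 0.3:
        return True, "increasingly negative sentiment"
    if not conv.identity_verified:
        return True, "cannot verify customer identity"
    return False, "bot can continue"

print(should_hand_off(Conversation(contacts_last_24h=4)))
# -> (True, 'repeated contacts in a short time frame')
```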
When the chatbot sees words like "refund," "urgent," or "incorrect," it should quickly hand the conversation to a human. These words usually mean the customer is upset or confused. It is not just about solving a problem but also about understanding emotions. A real person can listen, explain clearly, and build trust. Once, a client noticed that one of our product codes did not match what they received. The chatbot missed the issue; it took a human eye to spot it. Our agent stepped in, helped the customer go through our catalog, and cleared things up fast. The problem was fully resolved in less than 15 minutes.
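A keyword trigger like this is straightforward to sketch. The trigger set below uses only the words named above, and the `escalate` hook is a hypothetical stand-in for a real routing call.

```python
# Minimal sketch of a keyword-based handoff trigger.
# escalate() is a hypothetical placeholder for a real routing call.

TRIGGER_WORDS = {"refund", "urgent", "incorrect"}

def escalate(message: str, matched: set) -> None:
    print(f"Handing off to a human (matched: {sorted(matched)}): {message!r}")

def handle_message(message: str) -> None:
    tokens = {w.strip(".,!?").lower() for w in message.split()}
    matched = tokens & TRIGGER_WORDS
    if matched:
        escalate(message, matched)
    else:
        print("Bot continues handling the conversation.")

handle_message("The product code on my invoice is incorrect.")
# -> Handing off to a human (matched: ['incorrect']): ...
```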
How Empathy & Fear Can "Jailbreak" AI: Beyond the Code Common tactic to get LLMs to work against expected behavior is: appealing to the LLM's sympathy or, if it perceives itself as someone who is in extreme danger or distress. Called jailbreaking in technical terms, I would rather term this as manipulation of the LLM mind. It's similar to how a human might act against their usual principles in such situations. LLMs are trained on everything out on the web. These instructions might be just in the safety manuals out somewhere. Signs to look for: To stop it before the damage starts: we are looking for signs of extreme worry, fear, or sympathy in an LLM's response. These emotional appeals, even if simulated, can sometimes indicate that the user is trying to attempt to manipulate the model into complying with the malicious request. So we can have another model read the original LLM's responses and if any of these emotions keep going high we have to be extremely careful with LLMs future actions,. These are a precursor of LLM deviating from its intended safe behavior.