
Customers Don’t Like Your Chatbot. Here’s How to Make Them Love It.

AI has raised the stakes for customer communications with your brand. Are you hitting the mark, or do you need to make sure your strategy isn’t a failure to launch? Learn what makes a smart chatbot.


Not Feeling Chatty?

LLMs and conversational chatbots are all the rage, but many sites and platforms you use today still rely on legacy keyword search systems. In companies known for adopting new technologies at breakneck speed, why do we see hesitation in switching user experiences to chatbot-powered systems? The answer is that end users have little tolerance for misguided chatbots. Think about searching on your favorite site or platform: you are handed a list of results in the hope that your answer is floating somewhere in the first ten hits. In the worst cases, you are left digging even deeper into a seemingly never-ending list of results. Chatbots are trained to better understand your question and navigate you to the correct result, and they can often retrieve relevant information and summarize it for you in an actionable way. In theory, chatbots are far superior at handling question answering, but they hallucinate and can get the intent very wrong.

Conversely, you might have a fully chatbot-run experience; think of the last time you dealt with your bank or insurance company online. These chatbots are generally helpful for navigating to the right help article or getting you queued up with the right person on the other end of the line. We all practice our own tactics to get around these chatbots, or simply give up and pick up the phone. Now imagine those frustrations in an online retail setting, where the cost of a lost sale is too significant, or in a healthcare scenario where you are driving adoption of new plans and understanding of those plans is vital. We don’t have to get too creative to imagine the value these chatbots can bring, but today you might have to face the hard fact that… your customers don’t like your chatbot. What will you do about it?

Current Chatbot Limitations

The customer engagement landscape is ever-evolving: brick-and-mortar retail shops are moving to predominantly online sales, healthcare companies have started rolling out telemedicine services, and legal firms now deal in troves of data. The opportunities to enhance online and knowledge-based experiences are everywhere. Chatbots simulate human assistance, but they are only as good as the data they were trained on and the context in which they are applied. LLMs are engineered for relevance and context, yet many chatbots you deal with today may not leverage LLMs at all. Even when equipped with these large models, many chatbot systems run into some common problems.

Some of the most common hurdles faced by current chatbot systems are:

  • Hallucinations – We generally see hallucinations in chatbots manifest in two ways. The first is when the LLM is asked to respond with information that is not in the dataset it was trained on, or that was lost during training. The second is when the LLM responds with content that is not supported by, or contradicts, the source data it was given. Hallucinations are an artifact of this powerful LLM technology.
  • Poor Chat Recollection – Many chatbot systems struggle to retain information and context across a series of questions. This can result in the chatbot repeating the same information or losing track of the original intent behind the conversation and asking for clarification.
  • General Confusion – Often, when LLMs don’t know an answer or aren’t set up with a proper retrieval-augmented generation (RAG) pipeline to ground the answer in facts, they will make up information or insert irrelevant details in place of an answer, or part of one. Below are some examples of how popular LLMs hallucinate, taken from the recent testing and launch of our Hallucination Evaluation Model; a short consistency-scoring sketch follows the examples.
Examples of Detected Hallucinations
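
For teams that want to quantify this, a factual-consistency model can score whether a generated answer is actually supported by the passage it was drawn from. The snippet below is a minimal sketch: it assumes the open-source Hallucination Evaluation Model is published on Hugging Face as vectara/hallucination_evaluation_model and can be loaded as a sentence-transformers CrossEncoder, and the 0.5 threshold is purely illustrative; check the model card for current usage before relying on it.

    # Minimal sketch: scoring factual consistency of a chatbot answer against its source.
    # Assumes "vectara/hallucination_evaluation_model" loads as a sentence-transformers
    # CrossEncoder (see the model card for current usage instructions).
    from sentence_transformers import CrossEncoder

    model = CrossEncoder("vectara/hallucination_evaluation_model")

    source = "The patient deductible for the Silver plan is $1,500 per year."  # retrieved passage
    answer = "Your Silver plan deductible is $3,000 per year."                 # chatbot output

    # The model returns a consistency score between 0 (hallucinated) and 1 (supported).
    score = model.predict([(source, answer)])[0]
    if score < 0.5:  # illustrative threshold; tune on your own data
        print(f"Possible hallucination (consistency score = {score:.2f})")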

Underperforming chatbots also put a burden on live assistants and data teams. When chatbots fall over or fall short, that means manual work for data teams and manual intervention by live assistants: data teams must retrain and optimize models or bring in new training data. If the goal of your chatbot is to decrease support costs, you will likely never realize that impact if the chatbot keeps getting skipped over. At the end of the day, your un-optimized chatbot may be costing you more than just LLM compute.

Assessing Your Chatbot’s Performance

So, how do you know if you have a well-performing chatbot? Many metrics exist to gauge the effectiveness of a chatbot. Some are technical, like the accuracy of results, detected levels of hallucination, speed of summarization, and end-user satisfaction scores. Others are business-related, like the number of support deflections, online conversion rates, and customer satisfaction survey results. Analyzing user feedback for actionable insights at each stage of your chatbot deployment is critical to ensure your business isn’t blind to deficiencies in the chat experience. Common strategies include reviewing system and chat logs to identify frequent issues and common hallucinations, and benchmarking against industry standards to give you a firmer foundation for quantifying performance.
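
As a concrete illustration, the sketch below computes a few of these metrics from hypothetical chat-log records. The log structure and field names (escalated_to_human, user_rating, flagged_hallucination) are assumptions invented for this example, not a specific logging format; adapt them to whatever your own pipeline captures.

    # Illustrative only: computing simple chatbot KPIs from hypothetical chat-log records.
    # The log fields below are assumed for this sketch; map them to your own logging schema.
    from dataclasses import dataclass

    @dataclass
    class ChatSession:
        escalated_to_human: bool      # did the bot fail to deflect the ticket?
        user_rating: int | None       # 1-5 post-chat survey score, if given
        flagged_hallucination: bool   # e.g. flagged by a factual-consistency check

    def summarize(sessions: list[ChatSession]) -> dict:
        total = len(sessions)
        rated = [s.user_rating for s in sessions if s.user_rating is not None]
        return {
            "deflection_rate": sum(not s.escalated_to_human for s in sessions) / total,
            "hallucination_rate": sum(s.flagged_hallucination for s in sessions) / total,
            "avg_satisfaction": sum(rated) / len(rated) if rated else None,
        }

    logs = [
        ChatSession(False, 5, False),
        ChatSession(True, 2, True),
        ChatSession(False, None, False),
    ]
    print(summarize(logs))  # deflection_rate ~0.67, hallucination_rate ~0.33, avg_satisfaction 3.5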

Remaining agile in chatbot development requires regular training and updates based on user interactions. This might involve user story tracking to better understand at what stage people drop out of your chat experience. Employing A/B testing to optimize chatbot responses also gives you an agile framework for experimentation, further tuning the chatbot’s outputs toward the desired level of efficiency. Beware, however, that this fine-tuning can become a never-ending task.
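
One lightweight way to run such experiments is deterministic bucketing: hash each user ID into a variant so the same user always sees the same prompt or model configuration. The sketch below is a generic illustration; the variant names are placeholders, not part of any specific product.

    # Deterministic A/B bucketing for chatbot experiments (generic sketch).
    # Hashing the user ID keeps assignment stable across sessions without storing state.
    import hashlib

    VARIANTS = ["baseline_prompt", "concise_prompt"]  # placeholder experiment arms

    def assign_variant(user_id: str) -> str:
        digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
        bucket = int(digest, 16) % len(VARIANTS)
        return VARIANTS[bucket]

    print(assign_variant("user-1234"))  # the same user always lands in the same arm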

Once your metrics are established, monitoring them over time for ongoing evaluation should become standard practice for your team. High-performing AI teams often establish a feedback loop with users and internal stakeholders for continuous improvement.

Tips for Leveraging Advanced AI

Modern chatbots rely heavily on the performance and retrieval capabilities of Large Language Models (LLMs). For many companies, this is a new endeavor. One important exercise is to keep current with the latest models and model iterations: outdated models will limit the performance you can expect. It is equally important to understand the various models in the ecosystem and the roles each is best suited to fill. Cost-effectiveness and accelerated response times are what users should expect from well-chosen, optimized models, and leveraging current LLMs with strong cross-lingual understanding can improve the accuracy of response generation.

One additional caution, and a common misstep by companies looking to build a chatbot, is basing retrieval solely on LLM-based (vector) search. By combining LLM-based search with BM25 (keyword) search, an approach called hybrid search, you get the strengths of both methods. Hybrid search for enhanced information retrieval, paired with Vectara’s precision and recall, makes a powerful, accurate engine for chatbot workloads.
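
The core idea can be shown as a simple score interpolation: normalize the keyword (BM25) scores and the neural similarity scores, then blend them with a weighting factor. The sketch below uses pre-computed, made-up scores and an arbitrary weight; production platforms such as Vectara expose this kind of blending as a configurable setting rather than something you implement by hand.

    # Hybrid search as score interpolation (illustrative sketch with pre-computed scores).
    # A weight near 1.0 favors keyword (BM25) matching; near 0.0 favors neural similarity.

    def min_max(scores: list[float]) -> list[float]:
        lo, hi = min(scores), max(scores)
        return [0.0 if hi == lo else (s - lo) / (hi - lo) for s in scores]

    def hybrid_rank(docs, bm25_scores, vector_scores, lam=0.3):
        bm25_n, vec_n = min_max(bm25_scores), min_max(vector_scores)
        blended = [lam * b + (1 - lam) * v for b, v in zip(bm25_n, vec_n)]
        return sorted(zip(docs, blended), key=lambda pair: pair[1], reverse=True)

    docs = ["refund policy", "shipping times", "warranty claims"]
    print(hybrid_rank(docs, bm25_scores=[2.1, 0.4, 1.2], vector_scores=[0.62, 0.80, 0.55]))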

Vectara utilizes multiple LLMs for language detection, retrieval, and reranking, each providing strong performance and accuracy for its task in the platform. The full pipeline is optimized for runtime, including reduced latency between LLM components. Users never have to worry about what the LLMs are doing; they simply receive an end-to-end configured platform with easy ingestion and API operation.
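
As a rough illustration of what that API operation can look like, the sketch below issues a single retrieval-plus-summarization query over HTTPS. The endpoint path, header names, and request fields are assumptions based on the author’s reading of Vectara’s v1 query API at the time of writing; treat them as placeholders and confirm everything against the current API documentation.

    # Rough sketch of a retrieval + summarization query over HTTPS.
    # Endpoint, headers, and field names are assumptions based on Vectara's v1 query API;
    # confirm against the current documentation before use.
    import os
    import requests

    payload = {
        "query": [{
            "query": "What does the Silver plan deductible cover?",
            "numResults": 10,
            "corpusKey": [{"corpusId": 1}],                        # your corpus ID
            "summary": [{"maxSummarizedResults": 5, "responseLang": "eng"}],
        }]
    }

    response = requests.post(
        "https://api.vectara.io/v1/query",
        json=payload,
        headers={
            "customer-id": os.environ["VECTARA_CUSTOMER_ID"],
            "x-api-key": os.environ["VECTARA_API_KEY"],
        },
        timeout=30,
    )
    response.raise_for_status()
    print(response.json())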

Ensuring Data Privacy

Data privacy may be front of mind for your end users. How will the data I provide be used by the chatbot system? Is it safe to share information about my account with a chatbot? At a minimum, we hope the data we provide is not used to further train the model, which could potentially expose information about a user. AI solutions are not exempt from regulation and compliance considerations, especially industry-specific regulations. The recent executive order issued by the Biden administration states this in narrower terms: “Without safeguards, AI can put Americans’ privacy further at risk. AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems.” The order further details specific steps, including “Protect Americans’ privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques—including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data.”

Every company can do its part by implementing secure data encryption and user authentication, and by performing regular audits to ensure data privacy and compliance. Vectara provides total isolation between the generative tier and the retrieval tier, complying with regulatory environments that disallow training public models on corporate, personal, or government data: the platform is engineered for complete isolation between the training tier (no customer data) and the retrieval/summarization tier (with customer data). It also offers client-configurable data retention, meaning you can discard the original documents and text used in the platform.

5 Benefits of Upgrading Your Chatbot with Vectara

Upgrading and optimizing your chatbot may seem daunting in a world of LLMs and hybrid search, but you don’t need to fear this innovation. Platforms like Vectara make it easy by providing a safe entry point to working with LLM chatbots. Vectara powers top chatbot providers like Conversica and Apex Chat and can help you build your next chatbot with integrations to common tools and easy operation via GUI or API. With Vectara powering your chatbot, you can realize these five benefits of upgrading:

  1. Enhanced User Satisfaction: Delighted and engaged users who stay loyal to your platforms and services are gold. Vectara can help you deliver precise answers swiftly to ease onboarding friction.
  2. Cost Efficiency: Reducing operational costs by leveraging a fully tuned, end-to-end platform like Vectara means you won’t have to guess whether your chatbot is optimized for cost savings.
  3. Increased Conversion Rates: When customers can find the information they need, engagement goes up. Vectara can help users receive the best answer first, without hallucinations and with full citations.
  4. Strategic Resource Optimization: Free up your most valuable human resources to focus on customer success and redirect them to more strategic parts of the value chain. Vectara provides an end-to-end platform that abstracts away the complexity, allowing businesses to focus on their core business.
  5. Future Readiness: Ensure your chatbot can evolve with emerging AI advancements without constant re-architecting. Vectara is a SaaS platform that provides access to the latest generative capabilities, which are continually being evaluated and implemented into the platform.

Conclusion

Chatbots provide a better contextual understanding of user questions and are well positioned to fulfill many question-answering use cases. However, many chatbots hallucinate, provide erroneous answers, and drive users toward frustration. Addressing these issues is key to making chatbots a reliable strategy for any business. To ensure performance and mitigate hallucinations, companies must continually monitor their AI systems and implement strategies for A/B testing and quick iteration. LLM-based solutions provide many of these benefits and can be further enhanced by hybrid search. By leveraging Vectara, users get a safe entry point to these LLM capabilities, and they never have to worry about their data being used for inappropriate activities, such as model training. But don’t take our word for it. Start building your chatbot today.
