Ryan Falkenberg, June 12, 2019

Have you been the victim of chatbot incompetence recently?

It typically starts with a specific query that you need help with. You don’t have the time to listen to the contact centre’s hold music, so you turn to the company’s much vaunted chatbot.

It seems fairly straightforward. You type in your question and press enter. The chatbot comes back with a list of completely unrelated content links, and asks if any of these solve your problem. ‘No,’ you say.

You retype your question, hoping this time it’s a little clearer. Again the chatbot cheerfully responds with a new list of possible ‘helpful’ articles and FAQ links, and mentions that it is busy learning and is grateful for your help. It will get more accurate the more people engage with it.

Oh, so your diabolical customer experience is all for a good cause – to train their chatbot! The cheek of it.

After a third attempt, you notice a link that may be relevant. You click on the link and are taken to a three-page document providing generic product information. The assumption is that you will take the time to read this document and then work out the answer yourself. With raised blood pressure, you click on the ‘Connect me to an agent’ link and hope that possibly they may have the knowledge needed to solve your query. Sometimes they do, sometimes they don’t. That’s how it goes with so many omnichannel customer journeys these days.

I must confess I expected more. I envisaged that by now we could engage with chatbots capable of diagnosing my specific issue and then offering me a relevant response that results in a relevant action. In other words, a chatbot that is not simply an over-hyped digital assistant that can execute basic instructions or offer me links to possible content matches. I had in mind a digital advisor that could operate at the level of an expert – one whose intelligence is defined as much by the relevance of the questions it asks as by the answers it finally offers.

If you talk to most AI companies, you will hear that their chatbots already perform like digital experts. They will refer to their amazing natural language understanding and incredibly intelligent algorithms, powered by ‘machine learning’, ‘deep learning’, and ‘neural nets’. They will give you the sense that all you need to do is point their technology in the direction of your knowledge base and the digital advisor will magically onboard all your product, policy and procedural expertise. Then, with just a little bit of guidance, you will soon have trained your chatbot into a digital Einstein that can change your customer service offering forever.

When you ask them to show you a working example, they will probably show you one of their canned demos – built off a scenario where the source data is in rich supply, the use case is clearly defined, and the user script can be carefully followed. As a result, their chatbot’s conversation will feel so intelligent, so human-like, that you will feel you simply have to have one.

Just don’t ask them midway through to type in something unscripted and upset their carefully crafted storyline! I am certain you will quickly be informed that they have not managed to train this chatbot to cover all contexts, and that this demo is used for illustration purposes only.

No ifs or bots: Many chatbots are lacking when it comes to customer support

The reality is that it is all a digital mirage. It looks so achievable until you shift your eyes down to your current position, and suddenly the mirage vanishes. There are a number of reasons for this:

Companies seldom have the quality of data needed to accurately train a customer-facing chatbot

Most companies operate in a world of legacy systems, limited integrations, poor quality data, and poorly documented internal policies and procedures – the very things that cognitive systems depend on to build their engagement accuracy.

A customer support chatbot is powered more by prescriptive than predictive logic

To understand the difference, ask Siri or Alexa for an answer based on available information, and they can usually give it to you. For example, if you ask: “What is the weather looking like tomorrow in London?”, you will be amazed how accurate the answer is. That is because the information exists, and thousands of people have already asked the same or a similar question. The patterns are thus established and the correct answer can be predicted.

However, try asking a question that requires more context before it can be answered. Say: “What is the best home loan for me?”. You will probably notice that the response is to offer you possible links to companies offering loans. It won’t begin by understanding your needs. This is because a financial needs analysis is driven by a diagnostic set of prescriptive logic. There is no answer yet – the problem still needs to be understood.
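To make the predictive/prescriptive distinction concrete, here is a minimal sketch in Python. Nothing in it comes from a real product; the questions, thresholds and loan names are invented purely to illustrate the shape of each kind of logic.

```python
# A minimal sketch contrasting the two kinds of logic. The answers,
# thresholds and product names below are invented for illustration only.

def predictive_answer(question: str) -> str:
    """Predictive: the answer already exists and can simply be looked up,
    because many people have asked the same thing before."""
    known_answers = {
        "weather tomorrow in london": "Mostly cloudy, 18°C, light rain expected.",
    }
    return known_answers.get(question.lower(), "Here are some links that might help…")

def prescriptive_loan_advice(monthly_income: float,
                             deposit_saved: float,
                             first_time_buyer: bool) -> str:
    """Prescriptive: no answer exists yet. The logic must first diagnose the
    customer's situation by asking questions, then derive a recommendation."""
    annual_income = monthly_income * 12
    if first_time_buyer and deposit_saved < 0.1 * annual_income:
        return "A first-time-buyer product with a low deposit requirement may fit best."
    if deposit_saved >= 0.2 * annual_income:
        return "A standard fixed-rate loan at a lower rate is worth considering."
    return "A variable-rate loan with a mid-sized deposit may be the better fit."

print(predictive_answer("weather tomorrow in london"))
print(prescriptive_loan_advice(monthly_income=4000, deposit_saved=5000,
                               first_time_buyer=True))
```

The point of the sketch is not the rules themselves but their origin: the first function retrieves an answer that already exists, while the second has nothing to retrieve until it has gathered the customer’s context.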

In regulated environments, you need to be able to prove your chatbot asked the right questions and offered the right advice

Where a chatbot is powered by predictive logic – the logic you need to train and that keeps ‘learning’ based on multiple engagements – it will struggle in a regulated environment. This is because the logic is designed to change and adapt based on user engagement. It is also hard to prove how a decision was reached, as each recommendation is made in what is often referred to as a ‘black box’. This is hugely problematic when you are offering customers advisory support in a regulated environment, such as banking and finance.

Context matters, and the way most companies capture prescriptive logic lacks context

Prescriptive logic is typically captured using documents (knowledge bases) or decision trees (process flows). It’s how we have trained employees’ brains for decades, and it’s how we are trying to train our chatbots. So, just as we give staff exercises to learn how to apply the rules to different situations, we get teams to train the chatbot, telling it when it is right and when it is wrong. The problem is that documents and decision trees are not able to capture all the possible scenarios; they can only describe a few. As a result, the more variables you need to consider in order to offer a customer accurate, relevant advice, the harder it becomes to achieve.
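As a rough illustration of that scaling problem, here is a small Python sketch. The variable names and rules are hypothetical; the sketch simply shows how each new yes/no variable doubles the number of scenarios a document or decision tree would have to spell out, and how the same prescriptive logic can instead be held as data that is maintained row by row.

```python
# Illustrative sketch (hypothetical variables and rules) of why hand-written
# decision trees struggle as the number of context variables grows.

from itertools import product

variables = ["existing_customer", "account_in_arrears", "id_verified",
             "resident", "over_18", "joint_application"]

scenarios = list(product([True, False], repeat=len(variables)))
print(f"{len(variables)} yes/no variables -> {len(scenarios)} distinct scenarios")
# 6 yes/no variables -> 64 scenarios; 20 variables -> 1,048,576.

# The same prescriptive logic captured as data: individual rules that can be
# maintained one at a time, rather than one ever-branching tree.
rules = [
    ({"account_in_arrears": True}, "Refer to collections before advising."),
    ({"id_verified": False},       "Complete identity verification first."),
    ({"existing_customer": True},  "Offer the loyalty product range."),
]

def advise(context: dict) -> str:
    for conditions, advice in rules:
        if all(context.get(key) == value for key, value in conditions.items()):
            return advice
    return "Offer the standard product range."

print(advise({"existing_customer": True, "id_verified": True,
              "account_in_arrears": False}))
```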

The good news is that there are now digital platforms available that allow you to achieve the holy grail – a chatbot capable of asking context-relevant questions that lead to relevant answers and actions. These platforms have been built on data-powered, prescriptive logic that can ensure your customers are offered a consistent, compliant and context-relevant digital engagement, one that leads to a successful customer service outcome every time.

These platforms have acknowledged that not all logic should be predicted, and that for customer support chatbots the foundation of the logic has to be prescribed. The trick is ensuring it is also contextual, and these platforms have now managed to do this in a way that can be maintained effectively.

The dawn of chatbots capable of offering customers consistent, compliant and yet highly context-relevant customer engagements is upon us. And it’s about time, too.


Ryan Falkenberg, May 7, 2019

Ask the leadership of any reasonably-sized company what technology they’re looking to implement and they’ll almost invariably mention artificial intelligence (AI).

In theory, that’s great, because AI has the potential to fundamentally change the way a business operates and creates a great customer experience. The longer the business uses an AI application, the better the experience should get. Given enough time, the system can collect enough data on each individual customer to provide meaningful, hyper-personalised experiences.

Implemented badly, however, AI can be a total disaster. Rather than feeling that the business they’re dealing with cares about them, customers are left with the impression that customer service has been handed over to a bunch of dim-witted machines.

Let’s talk about chatbots

The easiest way to illustrate how varied the AI experience can be is to look at chatbots. They’re the kind of front-facing AI that more companies are using and which an increasingly large body of customers are familiar with. Trouble is, most companies are terrible at implementing chatbots.

Apart from a few forward-thinking exceptions, companies tend to put a chatbot on their website in the hope that it will learn from each interaction it has with a customer and that its answers will become more nuanced over time. They also operate in the belief that customers will tell the chatbot when it’s wrong, helping to train it further (hands up if you’ve ever done this willingly).

That would be great… if the chatbot were actually equipped to learn that way. For the most part, however, chatbots are simply going through the company’s existing knowledge bases and serving you with a document (or, in the worst cases, multiple documents) to try and help. It’s essentially a slightly smarter form of search.

As anyone who’s tried to use the search function on a corporate website will tell you – that’s not particularly helpful, especially when you’ve got a specific query. Let’s say that I want to know if I can insure my sunglasses. I don’t want to have to scour through insurance agency documents to try and figure out the answer. I just want the answer.
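To show what that “slightly smarter search” looks like in practice, here is a small Python sketch. The documents and the scoring are invented for illustration; real knowledge-base chatbots use more sophisticated retrieval, but the failure mode is the same – the customer gets back a document, not the answer.

```python
import re

# Illustrative sketch of the retrieval-as-search pattern: score documents by
# keyword overlap and return the best-matching document title, not an answer.
# The documents below are invented.

def tokenize(text: str) -> set:
    return set(re.findall(r"[a-z]+", text.lower()))

def keyword_search(query: str, documents: dict) -> str:
    query_terms = tokenize(query)
    best_title, best_score = "No matching document found.", 0
    for title, text in documents.items():
        score = len(query_terms & tokenize(text))
        if score > best_score:
            best_title, best_score = title, score
    return best_title

documents = {
    "Home contents policy overview (3 pages)":
        "Your home contents policy covers personal items such as sunglasses, "
        "laptops and jewellery, subject to single-item limits and excesses.",
    "How to submit a claim":
        "To submit a claim, log in to the portal and upload proof of purchase.",
}

# The customer wants a yes/no answer; what comes back is a document title
# they still have to read through themselves.
print(keyword_search("Can I insure my sunglasses?", documents))
```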

Contextual, hyper-personalised, relevant

As long as chatbots rely on a flawed architecture that depends on the existence of relevant documents containing the needed information, they won’t be able to provide that answer.

If you’re going to use AI to improve CX, you need to take a different approach. If you want to operate in the digital era and drive logic through data, then you need to start with data. That means looking beyond your existing documentation and CX architecture and integrating insight into customer behaviour across digital and offline channels.

This approach will, ultimately, allow you to offer customer support that is hyper-personalised, relevant, and compliant.

A chatbot built on this kind of framework understands what you’re asking and can answer specific questions according to what you actually need. While that’s just one small part of CX, anyone who’s cursed a company for failing to provide useful information will know how important it is.

The aim of AI

That said, this approach shouldn’t be limited to chatbots. Consistency – in style, tone, and content – is one of the most important factors in successful CX.

It’s therefore imperative that any organisation turning to AI to improve CX apply a data-first architecture across every customer-facing channel. So, whether I make a query using a chatbot, the search function on a website, or a call centre, I should get the same – relevant – answer.

However, if this is going to happen, businesses need to stop trying to bolt AI onto their existing architectures and take an approach that allows it to reach its full potential.

 



