It’s tragically easy to think of news stories about AI (artificial intelligence) gone awry. Even a cursory Google search yields horrifying examples like the UK’s disastrous attempt to replace A-level exams with an algorithm, or Amazon Rekognition falsely matching members of the US Congress (disproportionately people of colour) with criminal mugshots.

As internal advocates for the customer, CX leaders need to make sure their organizations don’t harm customers with discriminatory AI. Unfortunately, CX leaders who would like to set sail with AI often find themselves marooned by trust and transparency issues stemming from the opacity and complexity of AI systems. In Forrester’s 2021 Data and Analytics survey, around a fifth of decision-makers ranked a lack of trust in AI among employees, and a dearth of transparency around how AI/ML systems build models, among their top five challenges.

Fortunately, explainable artificial intelligence (XAI) is a new trend in the field of AI that CX leaders can adopt to ensure trust in the technology while uncovering even more insights. XAI refers to the processes and techniques applied to AI systems that let people pull back the curtain and understand how those systems arrived at their outputs.

The murky depths 

Part of the challenge firms face is that the majority of AI systems are based on machine learning, which, depending on the algorithm used, may be readily interpretable or completely obscured. XAI is critical today because many of the most popular machine learning techniques, such as deep learning, result in opaque models. And even traditionally transparent methods may be rendered unintelligible when solving complex problems that require transforming a multitude of variables.
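To make the distinction concrete, here is a minimal sketch, assuming scikit-learn and a purely synthetic dataset (the model choices are illustrative, not drawn from any specific deployment): a logistic regression exposes readable coefficients, while a gradient-boosted ensemble hides its logic inside hundreds of trees.

# Transparent versus opaque: the glass-box model's coefficients can be read directly;
# the ensemble's logic is spread across hundreds of trees with no single rule to inspect.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

glass_box = LogisticRegression(max_iter=1000).fit(X, y)
print("Readable coefficients:", glass_box.coef_)          # one weight per feature

black_box = GradientBoostingClassifier(n_estimators=300, random_state=0).fit(X, y)
print("Fitted trees:", black_box.estimators_.shape[0])    # 300 trees, no single readable rule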

As firms begin to experiment with XAI, it’s important to nail down the nuances of its seemingly synonymous terminology. At its core, XAI can be broken down into two general approaches: transparency and interpretability.  


Transparent AI approaches (sometimes called “fully specified”) leverage algorithms that result in a “glass-box” model, where the inner workings of the model are laid bare and the paths to the model’s output are clear. For some more advanced AI techniques, like those rooted in deep learning, this may not be possible. In these instances, companies may employ interpretability – an approach in which post-hoc surrogate models approximate an opaque model’s decision logic.
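As a rough illustration of the surrogate idea – a sketch only, assuming scikit-learn and synthetic data rather than any particular vendor tool – a shallow decision tree can be trained to mimic an opaque model’s predictions, and its rules then read as an approximation of the black box’s logic.

# Post-hoc global surrogate: fit a shallow, glass-box tree to the opaque model's
# predictions (not the original labels), then read its rules as an approximation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("Fidelity to the black box:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate))  # human-readable decision rules

The fidelity score matters: a surrogate that poorly matches the opaque model explains very little.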

CX leaders may also need to decide whether to produce “global” explanations about the entire model, or “local” explanations about individual instances. Local explanations are particularly important in Europe, where GDPR gives customers the right to an explanation of automated decisions. Selecting the right type of explainability will depend on your use case. Many banks, for example, favour global model explanations for credit determination and fraud detection, as financial services regulatory bodies mandate this level of explainability. 
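The difference is easiest to see side by side. The sketch below is illustrative only (synthetic data, no real customer records): permutation importance summarises which features drive the model globally, while perturbing a single record hints at what drove that one decision.

# Global versus local explanations on a toy model.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global: feature importance measured across the entire dataset.
global_imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("Global importances:", global_imp.importances_mean)

# Local (naive): reset one feature at a time to its mean and watch how much
# this single customer's predicted probability moves.
customer = X[:1].copy()
baseline = model.predict_proba(customer)[0, 1]
for j in range(X.shape[1]):
    perturbed = customer.copy()
    perturbed[0, j] = X[:, j].mean()
    print(f"Feature {j}: contribution ~ {baseline - model.predict_proba(perturbed)[0, 1]:+.3f}")

Dedicated local-explanation techniques such as LIME or SHAP do this job far more rigorously; the loop above only conveys the intuition.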

Opaque models tell no tales 

The reasons customer experience leaders should care about explainable artificial intelligence extend beyond meeting regulatory requirements and building trust with customers. Indeed, for insights-driven CX leaders, “X” really does mark the spot for buried treasure.

Most of the focus in the AI field is on “first-order insights” – the prediction, classification, or other outputs of the model. However, just beneath the surface of your models lies a bevy of “second-order insights” – the important patterns and anomalies the algorithm discovered in your data. These can yield all sorts of important, non-intuitive information about your customers that would otherwise be lost. This point is worth underscoring: new strategies are rarely born from old insights.
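As a speculative sketch of where those second-order insights hide (again assuming scikit-learn and synthetic, churn-style data rather than a real customer dataset), partial dependence exposes the relationship the model learned between one driver and the predicted outcome – a pattern mined from your data, not just a score.

# First-order insight: the prediction. Second-order insight: the learned pattern.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence

X, y = make_classification(n_samples=1500, n_features=5, random_state=1)
model = GradientBoostingClassifier(random_state=1).fit(X, y)

print("Prediction for one customer:", model.predict(X[:1]))  # the usual first-order output

# How the predicted outcome shifts as feature 2 varies: the kind of non-intuitive
# relationship that can seed a new strategy.
pd_result = partial_dependence(model, X, features=[2], grid_resolution=20)
print("Average predicted response across feature 2's range:", pd_result["average"][0])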

Recognizing the power of these tools, Forrester predicts that in 2022, 20% of companies will rely on XAI-derived insights to make transformational strategic decisions. If you’re already using artificial intelligence and would like to join this group, it’s time to batten down the hatches. Take stock of your current AI deployments and assess the level of explainability necessary. If your CX organisation is just getting its AI sea legs, consider enlisting explainable artificial intelligence as your first mate to help build the business case to leadership.   
