The Performance Management & Quality Monitoring campaign helps companies within the UK contact centre industry become pioneers of next-generation best practice.
Also known as the P&Q Challenge, the campaign aims to get everyone in the contact centre industry to collectively invent the next generation of operational practice in performance management and quality monitoring. Sponsored by Nexidia, long-term educators in speech and text analytics, and facilitated by Martin Hill Wilson, the P&Q Challenge is a series of six workshops that help organisations adapt their current performance and quality practices by working through what is called the Strategic Quality Framework. The framework encourages a 360° approach, so each organisation can produce its own version suited to its own challenges and culture. At the end of the workshops, each organisation goes through a P&Q Assessment, where it demonstrates what its old P&Q system looked like and the steps it is taking to implement an upgraded ecosystem.
So far, over 50 organisations have participated in the campaign, including Lloyds, BSkyB, LV, Vodafone, Tesco Bank, Thames Water and many more. Supported by key contact centre forums and associations such as the Professional Planning Forum, the Contact Centre Management Association (CCMA), the South East Contact Centre Forum (SECCF), the South West Contact Centre Forum (SWCCF) and the Welsh Contact Centre Forum (WCCF), the campaign has generated a wealth of rich content and experience aimed at bringing next-generation thinking into the workplace. Download the whitepaper, "Investigating the current state of performance management in the contact centre".
All participating companies will come together at the P&Q Challenge Best Practice Awards on Thursday 27th March 2014 at the Sheraton Park Lane Hotel, London, where they will share their journeys and be accredited as pioneers of best practice in Performance Management & Quality Monitoring.
This brings me on to my next topic…why performance management with interaction analytics leads to greater customer success…
It’s Not an Exact Science
Measuring agent performance has never been an exact science. In fact, the process is often onerous and time-consuming, and doesn't yield the results you're looking for. I should know. I spent years managing agents, attempting to provide the constructive coaching my agents needed to meet their goals and provide the best possible service to our customers. Unfortunately, not having the information needed to do so often proved to be a nearly insurmountable challenge.
Here’s how this process always worked for me – I suspect many of you reading this will be nodding your head as you have experienced a very similar scenario.
Exception or Pattern?
It’s the first of the month, and I need to complete my assessment of my agent, Jane Smith. I pull five of Jane’s calls from my call recording software to review. The problem is that these are random calls, and could be about any number of issues that Jane typically handles.
We’re a telecommunications provider and in the second call I’m listening to, Jane tries to help a customer who has no picture. Jane schedules a service technician to go to the customer’s home, without attempting any troubleshooting steps like first sending a signal to the box. This goes against company policy and will affect her bonus if she has too many unnecessary “truck rolls.”
However, on this call the customer was impatient and claimed they had had this problem before, so I don’t know whether Jane would have tried to troubleshoot had the caller reacted differently, or whether this indicates that she really needs more coaching or retraining on this topic.
A Fruitless Process
What I need are more examples of customers calling in with a similar problem, so I can see how Jane handles the issue and get a better sense of her skill set in this area. The only way to find these examples is to set off on what’s essentially a wild goose chase: pulling additional calls from the recorder, listening to a few seconds of each one, and trying to determine whether it fits the bill. But time eventually catches up with me. I have 25 more agents to review and I can’t afford to go through this fruitless process, so I give up.
When it comes time to review with Jane, we discuss the call, she tells me it was an anomaly, I give her a few pointers, and, honestly, hope for the best. Next month rolls around, and I want to see if Jane’s doing better with this type of call. Once again, I’m mostly out of luck. With only random calls to choose from, it’s hit or miss whether the ones I get are related to a truck roll. So, as before, I do assessments on the calls I have available, without knowing if she’s making progress on this key metric.
And at the end of the quarter, when the numbers come in, I see that Jane does in fact have a higher-than-average number of truck rolls. She did need more targeted coaching. But not only did I lack good examples of her calls, I also lacked examples of other agents who were handling the issue well to use as best practice. Finding those calls would have required an even greater hunting expedition.
Wasting Time on a Broken System
In the end, no one benefits from this broken system. The company needlessly spent money on truck rolls that didn’t need to happen. I, as the supervisor, spent a lot of time searching for the right calls to review and coach against, without stellar results. The agent missed her bonus because she didn’t receive the training and coaching she needed. And some customers probably needlessly waited for service calls when an agent in the know could have solved the problem over the phone.
Getting the Right Information
What supervisors need is a way to get the right calls for every agent, without the fruitless hunting and pecking. If I knew which agents to talk to, and about what, then the time I spent coaching could finally be effective and valuable. I need more than the random-sampling, hope-to-get-lucky method. With a method that easily scores agents against 100% of their calls, not just a random sample, and categorises those calls against the company’s metrics or goals, I would always know which areas my agents struggle with and always have calls available to coach against or to use as best-practice examples. What’s needed is the injection of Interaction Analytics into the quality management process, so that you can focus on bringing the human touch of coaching and mentoring to your agents, and they, in turn, can bring better service to your customers.
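To make the idea concrete, here is a minimal sketch of how categorising 100% of calls might work in principle. This is not Nexidia's product or API; the category names, phrases and function names are all hypothetical, and a real interaction analytics engine would use phonetic or transcript search rather than simple substring matching.

```python
# Hypothetical sketch: categorise every call transcript and tally
# categories per agent, so a supervisor can see who needs coaching
# on what. All names and phrase lists are illustrative assumptions.
from collections import defaultdict

# Phrases that mark a call type or agent behaviour (hypothetical).
CATEGORIES = {
    "truck_roll": ["schedule a technician", "send someone out"],
    "troubleshooting": ["send a signal to the box", "restart your receiver"],
}

def categorise(transcript: str) -> set:
    """Return every category whose phrases appear in the transcript."""
    text = transcript.lower()
    return {cat for cat, phrases in CATEGORIES.items()
            if any(p in text for p in phrases)}

def score_agents(calls):
    """calls: iterable of (agent, transcript) pairs.
    Tally how many calls per agent fall into each category."""
    counts = defaultdict(lambda: defaultdict(int))
    for agent, transcript in calls:
        for cat in categorise(transcript):
            counts[agent][cat] += 1
    return counts

calls = [
    ("Jane", "No picture? I'll schedule a technician right away."),
    ("Jane", "Let me send someone out to take a look."),
    ("Amir", "First, let me send a signal to the box to reset it."),
]
scores = score_agents(calls)
# Jane's calls register as truck rolls with no troubleshooting,
# flagging exactly those calls for a targeted coaching review.
print(dict(scores["Jane"]))  # → {'truck_roll': 2}
```

Because every call is scored rather than a random five, the supervisor in the scenario above would have started the month already knowing Jane's truck-roll pattern, with the relevant calls ready to coach against.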
Zaiba Mian, EMEA Marketing Manager at Nexidia, has over eight years of marketing experience, predominantly within the IT software industry. Her skills span the whole marketing mix, with a real passion for marketing communications and brand management. Having achieved a first-class honours degree in Business Management and Marketing, Zaiba has developed and implemented a number of successful lead-nurturing programmes that have since been replicated across entire businesses.