Listening to your customers is just not enough

Written by  __

Marketing is all about understanding, anticipating and meeting needs, so good marketers never get tired of listening to customers. We love to find ways to ‘bring them to life’ and give them voice in our organisations and decision making.

We now have better tools for doing this than ever before. The old suggestion box has been replaced by Facebook pages, Twitter streams and Google+ profiles. At customer service counters we install happy-face/sad-face feedback devices. We provide toll-free comment lines, we invite SMS ratings and we build surveys into our email signatures or tack them onto the end of our call centre interactions. We are always listening.

This is exciting for marketers. We analyse and synthesise this information – sometimes using powerful text and sentiment analysis tools – and make sure that this customer view percolates into our thinking. A healthy organisation will embrace this feedback and use it to identify opportunities, spur action and stay connected with its customers.

Opening up channels for listening to customers has never been easier.

So what is the problem?

We need to recognise and understand the limitations of the feedback we receive in this way – and be careful not to use it beyond the purpose for which it is designed.

Even though we may be getting hundreds or thousands of responses, these high-tech suggestion boxes do not provide a reliable or accurate measure of the quality of your customers’ interactions with your organisation.

There is often an irresistible temptation to use this type of data to make numeric comparisons or draw statistical and quantitative conclusions, particularly when we are dealing with high volumes of feedback.

The use of scales and ratings in these ‘surveys’ tempts companies to cross the line – and use the data to assess metrics such as which branch, agent or region is performing better, what level of overall customer experience the company is offering, whether the service level is getting better or worse, which touch-point needs more investment, and who should get a bonus this year.

This is wrong. Just listening to customers or gathering feedback should not be confused with research based on scientific principles – and the information we get should not be used to make the hard, quantitative decisions that businesses need to make every day.

Why do I say this?

One of the main reasons is that all of the feedback and dialogue channels mentioned above suffer from self-selection bias. The customers who choose to contribute and offer feedback may not be representative of all your customers. Typically, customers who feel strongly positive or negative are the ones who choose to participate: an outraged or delighted customer is far more likely to provide feedback.

Given that much of good service is about consistency, we miss out on the vast majority of experiences. The occasional thrilled or unhappy customer is often a poor reflection of the typical experience offered over a reasonable period of time.
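
To make the point concrete, here is a toy simulation in Python – with entirely invented numbers, purely for illustration – of what happens when only customers with strong feelings volunteer feedback: the average of the volunteered ratings drifts well away from the true average across all customers.

```python
import random

random.seed(42)

# Toy illustration (invented numbers): simulate 10,000 customers whose true
# satisfaction clusters around "fine" (about 7/10), with a few extremes.
true_scores = [max(1, min(10, round(random.gauss(7, 1.5)))) for _ in range(10_000)]

# Hypothetical assumption: delighted (9-10) or outraged (1-3) customers are
# far more likely to volunteer feedback than the indifferent majority.
def response_probability(score):
    if score <= 3 or score >= 9:
        return 0.40   # strong feelings -> likely to volunteer feedback
    return 0.03       # "fine" experience -> rarely bothers to respond

volunteered = [s for s in true_scores if random.random() < response_probability(s)]

print(f"True average satisfaction:      {sum(true_scores) / len(true_scores):.2f}")
print(f"Average of volunteered ratings: {sum(volunteered) / len(volunteered):.2f}")
print(f"Share of customers heard from:  {len(volunteered) / len(true_scores):.1%}")
```

In this sketch the volunteered ratings come from fewer than a tenth of customers and overstate the true average by more than a full point – and the direction of the error depends entirely on which extreme happens to shout loudest.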

Another typical problem with this type of data is that those delivering the service are often able to influence who provides the feedback. This can lead to a lopsided perspective. Is every customer invited with equal enthusiasm to participate in the ‘short survey at the end of this call’? Service agents can, by their tone of voice, encourage happy customers to participate and ‘not encourage’ disgruntled ones.

The bottom line is this: if you are going to treat your feedback process as anything more than a qualitative open dialogue with your customers, then it needs to be designed much more scientifically, particularly if you are going to use the data to make hard business and resource decisions.

‘Being designed much more scientifically’ means making sure that each customer has a close-to-equal chance of being selected to participate. This normally requires offering a direct and firm invitation to a representatively selected sample of customers. A quick and unintrusive survey will give you high response rates and confidence in the sample.
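
As a sketch of what a representatively selected sample might look like in practice, the Python snippet below draws a simple random sample from recent service interactions. The data source and names (recent_interactions, SAMPLE_SIZE) are invented for illustration, standing in for whatever your CRM or call-centre log actually provides.

```python
import random
from datetime import datetime, timedelta

random.seed(1)

# Hypothetical stand-in for a CRM export: customers who had a service
# interaction in the last 7 days, across several touch-points.
recent_interactions = [
    {"customer_id": i,
     "touch_point": random.choice(["branch", "call centre", "web"]),
     "when": datetime.now() - timedelta(days=random.uniform(0, 7))}
    for i in range(5_000)
]

# Draw the invitees at random, so each recent customer has an equal chance
# of being asked -- not whichever customers an agent felt like inviting.
SAMPLE_SIZE = 400
invited = random.sample(recent_interactions, SAMPLE_SIZE)

for interaction in invited[:3]:
    print(f"Invite customer {interaction['customer_id']} "
          f"(touch-point: {interaction['touch_point']})")
```

The essential point is that the selection is made centrally and at random, before anyone delivering the service can influence it.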

It also means that the data must be collected in a neutral manner. The service agent should not be involved in initiating the survey – or, even worse, collecting the ‘response card’. A separate, third-party ‘survey channel’ will receive a much more honest assessment. This is often difficult in practical terms, because you want the evaluation to be done as close to the time of the actual service experience as possible.

Luckily, technology comes to the rescue and can do the job outlined above very well.

Tools (such as Dashboard’s own Custometer) can be used to draw a representative sample of customers who have recently experienced a service interaction and approach them with a short survey.

Widely used technology such as IVR can ensure that the survey is implemented in a 100% consistent way across touch-points and over time, so that sound quantitative assessments can be made with a high degree of confidence and at low cost. Results can be presented quickly, clearly and crisply in action-oriented dashboards.

I am by no means advocating that a highly structured approach should block dialogue, or stop the rich and textured voice of the customer from getting through to decision makers. It is good practice to always invite and encourage open-ended comments (IVR has the advantage that you can digitally record these comments rather than ask an agent to summarise and write them down). Importantly, too, service staff must be alerted to poor ratings so that they can be followed up quickly and directly.

Being ready to listen carefully to every customer is the lifeblood of any service-oriented business. But it is not usually a very good way to objectively assess or monitor the overall quality and performance of a customer service system. For that you need a tightly structured and carefully designed measurement programme.