Traditional customer satisfaction surveys do not ask enough of the right questions
Satisfaction surveys rarely reveal customers’ real thoughts
Customer satisfaction surveys have become a part of everyday life for consumers and retailers alike. Millions are now spent by companies eager to unearth the hidden nuggets of desire they can tap into, the game-changing trends they can spot early and capitalise on. Such a pity, then, that they usually end up asking the wrong questions.
For the vast majority of organisations, measuring customer satisfaction is something of a misnomer. In reality, what they are doing amounts to little more than measuring their internal processes for dealing with customer queries and complaints.
More often than not, the questions asked in customer surveys are designed to identify glitches in a company's technology or back-office systems rather than to assess the customer's actual experience.
Take the burgeoning online retailing sector, for example. In a recent study we identified a "halo effect" that surrounds online shopping and fades over time. Essentially, it inflates customer satisfaction scores, particularly among new users.
We discovered this effect by asking a series of context-based questions about consumers’ internet shopping history. Traditional customer satisfaction techniques focus purely on evaluating specific internal processes, in this case, the functionality of the website.
Shopping online is considered more satisfying and more involving by those newer to it. Older users buying online for the first time are also more enthusiastic because the internet is still something of a wonder to them.
When we asked people to consider their last shopping experience, the online supermarket scored higher than the physical store on each of the key measurements, such as value for money, range of products stocked and trustworthiness of the brand.
Interestingly, those new to online shopping are also less satisfied with buying their groceries in-store than those who have never shopped online – feeling less relaxed, more distracted and suffering a keener sense of frustration. Online shopping has effectively dismantled their satisfaction with store shopping.
In general, those who have not grown up with a PC in the house are significantly more optimistic about the benefits of technology and therefore respond much more positively to each new service and convenience they discover. Younger users are not quite so effusive because the internet has always been part of their lives.
However, whatever the level of internet enthusiasm or the age group, the halo effect does not last long. After little more than a year of doing the weekly grocery shop online, it has diminished for most people as they begin to judge the experience in its own right and make comparisons with other companies engaged in online retail.
Levels of excitement about the possibilities online grocery shopping holds drop off by around 25% by the time someone has been doing it for two to four years, and the sense of frustration creeps up by 17%.
In most organisations, customer satisfaction surveys tend to be constructed to evaluate the processes involved in the shopping experience. Process-focused surveys would never identify the halo effect, which can only be uncovered by learning about the consumer's experience in the context of their everyday lives and general internet history.
As the “halo effect” wears off, customer satisfaction with online shopping will fall. The danger of traditional measures is that organisations are led to believe the cause of the drop is an internal problem: the website’s functionality or a process failing.
The tactical, process-driven response would be to redesign parts of the site. In fact, there is a bigger strategic concern the company should be addressing: over time, online shopping becomes less positive in people's minds and less important to their brand experience. Solving that issue requires much more than a few new flourishes on a website.
As for gathering data about "customer experience", most surveys ask very little. They measure only what is under the organisation's direct control – effectively assuming that nothing else influences behaviour or perception.
Customer behaviour and customer experience are rooted in context. At Intersperience we use a '5-step' context model to ensure that we always ask the fundamental questions that allow us to evaluate the experience in full. We need to understand how familiar customers are with the internet, what they expect from interaction with a particular company, what frame of mind they were in when they contacted the organisation and how it responded to their needs. All five will have different effects on behaviour and perception.
In this way we get round the distortion caused by the halo effect. If a retailer asked only new customers about their experience, the online store would outperform the physical stores by a massive margin. Over time, however, the online shopping experience would appear to deteriorate.
On traditional measurements of customer satisfaction, a retailer taking the first-year customer as the barometer would conclude the website is hitting the mark and working very efficiently. If the more experienced online shoppers are the benchmark, the findings might prompt the organisation to consider wholesale changes to an "under-performing" site that is not under-performing at all.
How the customers’ true perceptions of the brand were changing would hardly be noticed.
Customer satisfaction surveys are like the computers they are invariably compiled on. If you put the wrong information in at one end you will never get the right answer out at the other.
For more information on the issues raised in this update, please email firstname.lastname@example.org