The Ultimate Question® And the Net Promoter® Score
One magical question (the so-called Ultimate Question®) and one simple formula (the Net Promoter® Score or NPS®) are the ultimate measures of customer satisfaction and the ultimate predictors of a company’s future success.
These were the assertions in the book The Ultimate Question by Fred Reichheld, a Bain & Company consultant. The same assertions were repeated and expanded upon in a follow-up book, The Ultimate Question 2.0, by Fred Reichheld and Rob Markey (also a Bain & Company consultant). They argued that the Ultimate Question and the Net Promoter Score “drive extraordinary financial and competitive results.”
Many company presidents and CEOs have read the books about the Ultimate Question and NPS, heard them discussed at conferences, or heard about them from other senior executives. The books, the publicity, the conferences, and the favorable press have elevated NPS to almost mythical status, the holy grail of business success. But is there really one ultimate question? Is NPS really the ultimate predictor of success?
The Ultimate Question is, “How likely is it that you would recommend (this product, service, company) to a colleague or friend?” The answer scale runs from 10 to 0, with 10 defined as “Extremely Likely” to recommend and 0 defined as “Not At All Likely” to recommend.
The Net Promoter Score is calculated from the answers on that 10-to-0 scale. The 10 and 9 ratings are grouped together and called Promoters. The 8 and 7 ratings are called Passives, and ratings of 6 or below are called Detractors. The NPS formula is the percent classified as Promoters minus the percent classified as Detractors.
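For readers who want to see the arithmetic spelled out, here is a minimal sketch of the calculation in Python (the ratings are invented purely for illustration):

```python
# Net Promoter Score: % promoters (ratings of 9-10) minus % detractors (ratings of 0-6).
def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

ratings = [10, 9, 9, 8, 8, 7, 6, 5, 3, 0]   # ten hypothetical respondents
print(net_promoter_score(ratings))           # 30% promoters - 40% detractors = -10.0
```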
The following contains some observations about the Ultimate Question and NPS. Let’s start with the positives and then discuss the negatives.
Positives
- The question itself is a good one. It’s clear and easy to understand.
- The 10-to-0 rating scale is widely used and generally accepted as a sensitive scale (i.e., it can accurately measure small differences from person to person).
- The labeling of the scale’s endpoints (Extremely Likely = 10 and Not At All Likely = 0) is clear and understandable.
Negatives
- Ambiguity. The individual numbers on the 10-to-0 answer scale (except for the endpoints) are not labeled or defined. What does someone’s answer really mean? Is a 7 rating or an 8 rating positive, neutral, or negative? Some people tend to give high ratings, while others tend to give low ratings, especially when most of the points on the scale are not precisely defined.
- Lost Information. The Net Promoter Score formula is imprecise because of lost information. Here’s how NPS loses information:
- The NPS counts a 10 answer and a 9 answer as equal. Isn’t a 10 better than a 9? This information (that a 10 is better than a 9) is lost in the formula.
- If someone answers an 8 or a 7, the answer simply doesn’t count; it’s not included in the formula. So all of the information in an 8 or a 7 answer is lost, and the sample size is reduced because these individuals are not counted (a smaller sample size increases statistical error).
- The NPS counts a 6 answer the same as a 0 answer, a 5 answer the same as a 0 answer, a 4 answer the same as a 0 answer, and so on. Isn’t a 6 answer much better than a 0 answer? Isn’t a 5 answer better than a 0 answer? So most of the information in answers 6, 5, 4, 3, 2, and 1 is lost in the NPS formula because it counts all of these 6-to-1 ratings as equal to zero.
- In effect, the NPS converts a very sensitive 10-to-0 answer scale into a crude 2-point scale (Promoters and Detractors) that loses much of the information contained in the original answers.
A far better measure than the Net Promoter Score is a simple average of the answers to the 10-to-0 scale, where a 10 answer counts as a 10, a 9 answer counts as a 9, an 8 answer counts as an 8, and so on down to 0. This results in an average score somewhere between 10 and 0 that contains all of the information in the original answer scale. No information is lost! (See the sketch at the end of this list for a simple comparison of the two measures.)
- Misnomers. The terms “Promoters,” “Passives,” and “Detractors” are curious. If someone answers with a 10 or a 9 rating, it would seem defensible to classify them as Promoters (i.e., people highly likely to recommend your brand or company). Calling those who give an 8 or a 7 rating “Passives” is highly questionable. An 8 or a 7 rating is pretty darn good, and one might conclude that the individuals who give those ratings are also likely to recommend your brand. So the Passive name is a misnomer, but the real sin is the term “Detractor.” Nowhere on the answer scale is there a place to record that someone is likely to recommend that people not buy your brand. That end of the scale says “Not At All Likely” to recommend. “Not likely to recommend” is a far cry from being a Detractor (i.e., someone who actively tells friends not to buy your brand or who makes negative remarks about your company). Detractor is a misnomer.
- Recommendation Metric. The likelihood that someone will recommend a brand or company varies tremendously from product category to product category. Someone may recommend a car dealership, a restaurant, or a golf course (high-interest categories) but never mention a drugstore, gas station, bank, or funeral home (low-interest categories). If customer recommendations are not a major factor in your product category, then the NPS might not be a worthwhile measure for your brand.

A sound strategy is to tailor the customer-experience questions to your product or service and to your business goals. Use multiple questions that measure the aspects of the customer experience relevant to your company. Don’t buy into the illusion of universal truth or the promise of an “ultimate question.” Don’t fall for simple answers to complex questions. If the “Ultimate Question” is not really the ultimate question, then what are some best practices for creating better questions to measure customer satisfaction?
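Before turning to questionnaire design, here is the sketch promised in the Lost Information discussion above: a small, hypothetical comparison (both sets of ratings are invented) showing how the NPS can score two quite different groups of customers identically, while the simple average keeps them apart.

```python
# Two invented groups with the same promoter and detractor counts, so NPS cannot tell them apart.
def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

def mean_score(ratings):
    return sum(ratings) / len(ratings)

group_a = [9, 9, 8, 8, 6, 6]   # answers clustered near the top of the scale
group_b = [9, 9, 7, 7, 0, 0]   # same promoter/detractor counts, much lower answers

for name, ratings in (("Group A", group_a), ("Group B", group_b)):
    print(name, net_promoter_score(ratings), mean_score(ratings))
# Both groups get NPS = 0, yet their averages (roughly 7.7 vs. 5.3) are quite different.
```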
Questionnaire Design
The first rule is “do no harm.” That is, your attempts to measure customer satisfaction should not lower your customers’ satisfaction. This means that questionnaires should be simple, concise, and relevant. Use very simple rating scales (Yes/No; Very Good/Somewhat Good/Not Good; Excellent/Good/Fair/Poor). Short word-defined scales (e.g., excellent, good, fair, poor) are easy for customers to answer, and the results are easy to explain to executives and employees. Moreover, short, simple scales work well on PCs, tablet computers, and smartphones. Long, complicated scales should be avoided.
The questionnaire should almost always begin with an open-ended question to give the customer a chance to tell his/her story. An opening “question” might be:
“Please tell us about your recent experience of buying a new Lexus from our dealer in north Denver.”
This open-ended prompt gives the customer the opportunity to explain and complain; it communicates that you are really interested in the customer and his or her experiences; and it conveys that your company is really listening. Then you can ask rating questions about various aspects of the customer’s experience, but keep them few in number. Most satisfaction questionnaires are much too long. If you want to include a recommendation question, you might consider one with fully labeled answer choices, along the lines sketched below (the example assumes a restaurant, so remember that the exact wording must be tailored to your product, company, and situation).
With a question-and-answer scale of that kind, it’s possible to calculate a Net Recommendation Score™.
The Recommendation Question with its well-defined answer choices and the Net Recommendation Score™ formula give you a much more precise measure of the net influence of customer recommendations than the Ultimate Question and the Net Promoter Score.
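As one hedged illustration of how such a score might work, the sketch below assumes three fully labeled answer choices (would recommend, would say nothing either way, would recommend against) and defines the net score as the percentage who would recommend minus the percentage who would recommend against. The exact wording, the choices, and the formula here are illustrative assumptions, not a prescribed instrument.

```python
# Hypothetical sketch of a Net Recommendation Score (answer choices and formula are assumptions).
from collections import Counter

RECOMMEND = "I would recommend that friends eat at this restaurant"                  # assumed wording
NEUTRAL = "I would not say anything about this restaurant one way or the other"      # assumed wording
RECOMMEND_AGAINST = "I would recommend that friends NOT eat at this restaurant"      # assumed wording

def net_recommendation_score(responses):
    """Assumed formula: % who would recommend minus % who would recommend against."""
    counts = Counter(responses)
    return 100.0 * (counts[RECOMMEND] - counts[RECOMMEND_AGAINST]) / len(responses)

# Invented responses purely for illustration: 6 recommenders, 3 neutrals, 1 anti-recommender.
responses = [RECOMMEND] * 6 + [NEUTRAL] * 3 + [RECOMMEND_AGAINST] * 1
print(net_recommendation_score(responses))   # 60% - 10% = 50.0
```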
In summary, the Ultimate Question is simply another question; it has no magical value, no special meaning, no prescience of success. The NPS is not a magical formula, but a flawed formula that loses much of the information in the original answer scale. If you like the concept of measuring the influence of customer recommendations, you might want to consider the Net Recommendation Score™, but please remember that the Net Recommendation Score™ is only one measure—and you will need other questions to fully measure and understand the customer experience.
Author
Jerry W. Thomas
Chief Executive Officer
Jerry founded Decision Analyst in September 1978. The firm has grown over the years and is now one of the largest privately held, employee-owned research agencies in North America. The firm prides itself on mastery of advanced analytics, predictive modeling, and choice modeling to optimize marketing decisions, and is deeply involved in the development of leading-edge analytic software. Jerry plays a key role in the development of Decision Analyst’s proprietary research services and related mathematical models.
Jerry graduated from the University of Texas at Arlington, earned his MBA at the University of Texas at Austin, and studied graduate economics at SMU.
Copyright © 2014 by Decision Analyst, Inc.
This posting may not be copied, published, or used in any way without written permission of Decision Analyst.