I have never liked ranking questions. For a start, they seem to place an unusually high cognitive burden upon the respondents. After a lifetime of scoring things out of five, and of working efficiently through batteries of rating-style questions, respondents hit a ranking question like a speed bump. Any efficient respondent hits the thing going far too fast, and their suspension – or tolerance for the questionnaire – bottoms out. Clunk!
It isn’t just that the typical respondent confuses rating with ranking – though that is one cause of the problem. (Heaven knows, another is the “meaning” of the numbers once a “5” means fifth-ranked rather than “excellent.”)
A further cause of the problem is that we often have too many items to rank. I call this the wine list effect. If you go into a restaurant and the waiter asks whether you have selected a wine yet, you’ll know the feeling I generally experience. There’s a list of 30 white wines. You don’t know any of them, but you know three things.
- You prefer Chardonnay over Riesling.
- Anything over $60 a bottle is ridiculous.
- On the other hand, selecting the cheapest wine will simply make you look mean. Avoid that one.
My own wine-choosing rules are quite efficient, and provide a good cognitive shortcut, as all heuristics are meant to do. They quickly eliminate a dozen of the 30 wines on offer. The remaining 18 I am, of course, ill-equipped to choose from – I am indifferent among them. So at this point I quickly point to one of the remaining candidates. It’s probably a pretty good drop, and if I am quick enough in my selection I maintain a knowledgeable air.
“Good choice Sir,” says the waiter. (And with those words he has just earned his tip for the evening!) Everybody wins.
Well, that’s my strategy revealed. That’s how most people rank most things in life. As soon as the shortlist grows longer than five or six items, we start applying a few blanket rules or heuristics. Our brains are simply not engineered to conduct a fully rational, one-by-one ranking across seven or more competing products or services or attributes.
So what kind of data do you get when you ask a respondent in your questionnaire to rank nine service attributes, or 11 brands, or a dozen product configurations from best down to worst?
It is just too complicated!
And suppose you had 1,000 respondents diligently ranking these items from one down to 12 (and we are assuming they each fully understand that one means best and 12 means worst) – what sort of data do we get? Is the gap between the first and second attributes the same size as the gap between the second and third, and so on? Can we tell whether the top three attributes are clear leaders, or whether they have merely nudged ahead of the pack?
Likewise, do we know whether an attribute has simply failed to impress – or whether in fact it has actively turned off the respondents?
Ranking questions fail to explain or illuminate what’s really going on in the consumer’s mind. You don’t know whether I rejected the Champagne because I don’t like Champagne, or because it was way over $60 a bottle, or whether I didn’t reject it at all but simply chose something I liked even better.
Is there an answer to this problem? One elegant solution – it isn’t perfect – is Max-Diff. In a typical exercise you may have a dozen variables which you wish to see ranked meaningfully.
Max-Diff breaks the cognitive process down into maybe seven or eight manageable chunks. Each time, the respondent is presented with four variables and asked to choose the most important, as well as the least important. They then repeat the exercise with different combinations of variables – seven or eight times in all. This gets us over the cognitive problems associated with ranking questions.
It also gets us over the analytical problems. Here – and I’m thinking of the Sawtooth Max-Diff module – each variable is ultimately scored with an index that clearly shows the relative preference or rejection. I can see the degree to which each was chosen as “best” or as “worst” – or I can see the net score (best minus worst).
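The simplest version of that index is just a tally. The sketch below shows count-based best-minus-worst scoring with invented item names; note that Sawtooth’s module goes further and estimates utilities with a choice model rather than raw counts.

```python
from collections import Counter

def maxdiff_scores(responses):
    """Count-based MaxDiff scoring: for each item, tally how often
    it was picked as best and as worst across all completed tasks,
    and report the net score (best minus worst)."""
    best = Counter(b for b, _ in responses)
    worst = Counter(w for _, w in responses)
    return {item: {"best": best[item],
                   "worst": worst[item],
                   "net": best[item] - worst[item]}
            for item in set(best) | set(worst)}

# Hypothetical (best, worst) picks from four completed tasks:
picks = [("Price", "Brand"), ("Price", "Taste"),
         ("Taste", "Brand"), ("Price", "Brand")]
scores = maxdiff_scores(picks)
```

Here “Price” is picked best three times and never worst, so it nets +3, while “Brand” nets −3: the index immediately separates clear winners from actively rejected items, which a plain rank order cannot do.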
Thus we can see that Variable 1 is easily preferred – and by a big margin – over the second-ranked variable. From the data we might learn that six of the 12 variables we were testing cluster together – neither particularly liked nor disliked. In other words, we get to see something of the thinking that goes on behind the ranking scores. We understand better the nature of the consumer’s decisions.
If you applied this software to my various wine-list selections over the past 10 years, you would understand that Duncan Stuart – wine-wise – is driven by a rejection of high prices, and a rejection of certain varieties. By contrast, the sad facts would reveal, this writer is driven neither by supreme wine knowledge nor by a fail-safe palate. There: Max-Diff has just revealed my dodgy wine-selection secrets.
Max-Diff is an easy module to use, it works well in online surveys, and the analysis is blindingly simple. As a research tool it fulfils our basic need to understand human choice-making. It recognises that people are not all that mathematical in their approach, but are intuitive – and sometimes less logical than the wine waiter and assorted guests might ever suspect.