Measuring the 2 million combo menu.

Upsize this. Downsize that. No wonder we have whiplash: a fast food menu presents us with something like 2 million possible combinations.

In an age of mass customisation we now have the tools to measure really complex customer choices.

Scenario one. A young guy walks into Burger King and he’s starving. He’s got $10 in his pocket and while he waits in the queue he stares up at the menu board. Does he go for the tender crisp chicken wrap with honey mustard? Or does he stick with his usual Double Cheese Whopper? Or should that be a single cheese but with an upsized Coke? Next please. He makes his order.

Scenario two. Life insurance. The young mother sits at the dining table with her tablet open to the Life Direct website. She is comparison shopping – looking for the best life cover for herself and her husband. She enters the details into the online calculator, nominates how much cover they need (enough to cover the mortgage), and six competing offers pop up within 15 seconds. The monthly premiums differ only marginally. She has heard good things about some of the insurance companies, but bad things about one of the six. And a few of the competing offers come with additional conditions and benefits. She weighs everything up and clicks one.

Scenario three. The couple have all but signed up for the new Ford. They tested other competing brands, but this is the hatchback they like best. “Of course,” says the salesman as he starts to fill in the sales order form, “we haven’t discussed your options yet. Do you prefer automatic or manual? How about the sport model with the mag wheels? That’s only $1200 extra. Of course you have the two-door, but how about the four-door option? And it’s a bit extra for the metallic paint – though it tends to age a lot better than the straight enamel colours.”

“You mean it’s not standard?” she asks. The couple look at each other. Suddenly the decision seems complicated. She murmurs to her partner that maybe the Mazda with the free “on road package” is back in the running. The salesman begins to sweat. The deal is slipping through his fingers.

We are in the age of mass customisation

Three very common consumer scenarios. In an age of mass customisation, consumer choices don’t just come down to brand preference or advertising awareness – the two staples of consumer research – but to an astonishingly complex human algorithm that somehow handles what we might call menu-based choice, or MBC.

How complex? One estimate I have seen of the typical burger menu board – the one you stare at during the 90 seconds you spend in the queue before making a choice – puts the number of possible combinations and permutations for a typical $15 order in excess of 2 million. Double cheese. Extra fries. Upsized Coke. Hold the beetroot. Everything comes with a cost and a trade-off, governed by personal tastes, one’s own sense of value and the conditional skews of this week’s promotions. Somehow humans manage to make a decision, though to be honest, you can see how habits develop: they become an efficient way of dealing with information overload.
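Where does a number like 2 million come from? Here is a back-of-the-envelope sketch in Python – the option groups and counts are invented for illustration, not taken from any actual menu – showing how a handful of independent choices multiplies out:

```python
from math import prod

# Hypothetical option groups on a fast food menu board. Every count below is an
# illustrative guess, not an actual Burger King menu.
option_groups = {
    "core item (burger, wrap, salad...)": 15,
    "patty variation (single, double, extra cheese...)": 4,
    "hold-the-ingredient toggles (5 removable items)": 2 ** 5,
    "fries (none, small, medium, large)": 4,
    "drink (none, or 7 drinks in 3 sizes)": 1 + 7 * 3,
    "extra sauce (yes / no)": 2,
    "sides and desserts": 6,
}

total = prod(option_groups.values())
print(f"{total:,} possible orders")  # 2,027,520 with these made-up counts
```

The point is not the exact figure – it is how quickly a few everyday choices compound into millions of possible orders.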

A big challenge for market researchers

How do researchers even begin to test the attractiveness of 2 million options? The problem for market researchers is not just one of volume. The complication comes from the fact that consumer choices are not strictly rational. When you think about it, paying an extra dollar for an additional slice of cheese in a double cheeseburger is bloody expensive. And if I’m constrained by the $10 note in my back pocket, then who can tell whether I would prefer an upsized Coke and small fries with my burger, or a jumbo Coke and no fries at all? Or no Coke at all, but extra fries? Or Coke and fries but hold the extra cheese? Or…

Menu-based choice modelling is a relatively new field of market research that springs from its roots in conjoint analysis. Conjoint, I learned recently to my delight, was a term first applied by my statistical hero John Tukey, who was fascinated by the question of how we can measure things that are valued not separately but jointly – hence con-joint. Since the 1970s conjoint approaches have been applied with increasing confidence.

At first these were paper-based, and a typical study of this kind gave respondents a set of 16, 24, or 32 cards on which various combinations and permutations of product descriptions were portrayed. Generally this might be enough to efficiently calculate the relative attractiveness of competing offers.

For example, if five competing brands of peanut butter came at four different price points, in low or medium saltiness, and in super crunchy, regular crunchy or smooth textures, then we have 5 x 4 x 2 x 3 = 120 potential combinations of product to test. Good design might reduce this to a fairly representative 32 physical cards, each with a specific combination of brand, price, saltiness and texture.
The conjoint approach – with carefully calibrated options to test – enables us to determine which factors most drive consumer decisions (price is often the main driver), and the degree to which consumers trade off, say, a preferred brand at a higher price against a lower-priced offer from a brand we don’t trust so well.
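To make the arithmetic concrete, here is a small sketch (two of the brand names and all the price points are invented placeholders) that enumerates the full factorial of 120 peanut butter profiles and then draws a 32-card deck. A real study would use a fractional-factorial or D-efficient design rather than a random sample, so that every level appears a balanced number of times:

```python
import itertools
import random

# The peanut butter example: 5 brands x 4 prices x 2 salt levels x 3 textures.
brands   = ["Sanitarium", "ETA", "Pam's", "Brand D", "Brand E"]
prices   = ["$3.50", "$4.00", "$4.50", "$5.00"]
salt     = ["low salt", "medium salt"]
textures = ["super crunchy", "regular crunchy", "smooth"]

full_factorial = list(itertools.product(brands, prices, salt, textures))
print(len(full_factorial))  # 5 * 4 * 2 * 3 = 120 profiles

# Crude stand-in for a proper experimental design: draw 32 of the 120 profiles.
random.seed(1)
deck = random.sample(full_factorial, 32)
for card in deck[:3]:
    print(card)
```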

Conjoint has the advantage of boiling each variable down to utility scores, and – since price is one of those variables – of allowing us to put a price tag on brand appeal, or on flavour.
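As a rough illustration of what “putting a price tag on brand appeal” means in practice (all the utility numbers below are invented), the dollar value of a brand is simply the utility gap between two brands divided by the utility cost of a dollar, taken from the price attribute:

```python
# Hypothetical part-worth utilities from a fitted conjoint model. The numbers
# are invented purely to show the calculation.
utilities = {
    "brand": {"ETA": 0.45, "Pam's": 0.30, "Sanitarium": -0.20},
    "texture": {"smooth": 0.10, "regular crunchy": 0.05, "super crunchy": -0.15},
}
utility_per_dollar = -0.8  # assumed linear price coefficient: each extra dollar costs 0.8 utility

def dollar_value(attribute: str, level_a: str, level_b: str) -> float:
    """Translate the utility gap between two levels into a willingness-to-pay figure."""
    gap = utilities[attribute][level_a] - utilities[attribute][level_b]
    return gap / abs(utility_per_dollar)

# With these made-up numbers, ETA's brand appeal is worth about 81 cents a jar
# relative to Sanitarium: (0.45 - (-0.20)) / 0.8 = 0.8125.
print(f"${dollar_value('brand', 'ETA', 'Sanitarium'):.2f}")
```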

Even so, because the paper-based system requires at least 32 cards to give us enough data for the peanut butter survey, it places a high cognitive load on respondents.

By the 1980s conjoint studies could be run on computers, and this enabled some clever variations. First, the computer approach eased the cognitive load by offering three or four cards at a time, so that respondents could more easily choose their preferred option (or “none of these”) over 9 or 10 iterations. Another innovation – useful in some situations, but not universally – is adaptive conjoint, which, as it cycles a respondent through a series of choice exercises, may quickly decide that this respondent never chooses the Sanitarium brand and so begins testing variations among the preferred ETA and Pam’s brands. It focuses on the useful part of the choice landscape.
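The adaptive idea, stripped to its bare bones, looks something like the sketch below – a deliberately crude simplification of what commercial adaptive conjoint products actually do, with the “never chosen” rule, the brand list and the prices all supplied as assumptions:

```python
import itertools
import random

# Levels still in play for this respondent; prices are illustrative.
brands = ["Sanitarium", "ETA", "Pam's"]
prices = ["$3.50", "$4.00", "$4.50"]

def next_task(active_brands, n_concepts=3):
    """Draw one screen of concepts from the levels that remain in play."""
    pool = list(itertools.product(active_brands, prices))
    return random.sample(pool, n_concepts)

# Crude adaptive rule: a brand that has not been picked after several screens is
# dropped, and the remaining tasks are spent on the brands that matter.
never_chosen = {"Sanitarium"}  # hypothetical inference from earlier answers
active = [b for b in brands if b not in never_chosen]

random.seed(3)
print(next_task(active))  # concepts drawn only from ETA and Pam's
```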

These approaches have been steadily honed and refined, and I have always admired the developers at Sawtooth for working and reworking their algorithms and methodologies to create increasingly predictive conjoint products. They are willing to challenge their own products: they moved away from favouring Adaptive Conjoint a few years ago.

Until around 2010 the software packages offered respondents a choice between “competing cards.” This approach works best on mass-produced products such as peanut butter, where there may be fewer than 200 combinations and permutations available. However, in the last decade or so, with the rise of e-commerce and the presence of bundled offers (think how phone, energy and broadband packages are now being bundled), classic choose-one-card conjoint only goes some of the way towards explaining what goes on in the consumer mind.

Enter MBC.

This is a Sawtooth product, and it is particularly expensive software. I recently forked out around NZ$12,000 for a copy and, dismayingly, instead of receiving a sexy drag-and-drop statistical package, had the equivalent of a jumbo bucket of Lego bits offloaded onto my computer. It isn’t pretty. It requires HTML coding, and there is a genuinely steep learning curve before you get a handle on it.

In my case I’m just lucky that I have worked with conjoint for a few years now and can see, conceptually, where it is coming from. Even so, I can think of no software I have dealt with in 22 years that has required such a demanding user experience. Luckily, by the time the respondent sees it, things aren’t so complicated.

Not pretty for the researcher – but emulates reality for the respondent

With the right HTML coding, the online survey presents – metaphorically – a replica of the Burger King menu board. There you can choose the basic flavour of burger (beef, chicken, fish, cheese) and then top it up with your own preferred options. Choose the beef value meal if you want – and upsize those fries. As you select your options the price goes up or down, so respondents can very accurately replicate the actual choice experience.

And that’s the point of MBC: to emulate real choice situations as closely as possible. We might discover that the consumer’s choice architecture transforms radically depending on whether the price point is $10, $12 or $15. Sawtooth’s experience shows that many of the trade-offs are not strictly rational. There may be an overriding decision about whether to order a Whopper or a Double Whopper, but regardless, the shopper may always prefer a Coke of a certain size. In other words there is no trade-off here – or if there is, it applies to the fries only.
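From the respondent’s side, the mechanics are no more complicated than a running order total. A toy version, with invented item names and prices, might look like this:

```python
# A toy version of the dynamic menu board: the running total updates as the
# respondent ticks options, so each screen behaves like a real order.
# Item names and prices are invented for illustration.
PRICES = {
    "Whopper": 6.50, "Double Whopper": 8.50, "Tendercrisp wrap": 7.00,
    "fries small": 2.00, "fries large": 3.50,
    "Coke regular": 2.50, "Coke large": 3.00,
}
BUDGET = 10.00  # the $10 note in the back pocket

def order_total(selections):
    """Sum the prices of everything currently ticked on the menu board."""
    return sum(PRICES[item] for item in selections)

basket = ["Double Whopper", "fries small", "Coke large"]
total = order_total(basket)
print(f"${total:.2f}")  # 8.50 + 2.00 + 3.00 = 13.50

if total > BUDGET:
    print("Over budget - something has to be traded off.")
```

Behind the scenes, of course, the analysis is far more involved: it is the pattern of what gets added, dropped and swapped across thousands of such screens that the model learns from.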

In a typical example the respondent is shown 10 variations of the menu board, each time given a price limit and asked to choose. Given that the questionnaire needs HTML coding, an MBC survey will cost the client somewhat more than a regular out-of-the-box online survey. The investment should be well worth it.

Another consideration: given the 2 million permutations available, this is not something to test with small sample sizes. A thousand respondents may be required as a minimum, each doing ten choice tasks to generate 10,000 data points. Picture these as trig points on a data landscape.

Given enough of these reference points, the software has enough information to fill in a complete picture of the consumer-choice landscape. You simply can’t do this when you break the survey down into constituent questions about price, taste, brand, options and so on – that is, when you consider these elements not jointly but singly.

Now comes the good part. By assembling this data, you can more or less model the optimum menu board. If I were Burger King, for example, I could probably massage the menu so that the average punter with $10 in their back pocket would spend not $9.20 but something closer to $9.80 – a lift of around 6.5% in revenue, willingly forked out by happy customers. If I wanted to try a special offer on the flame-grilled chicken burger, I could see what impact this would have on revenue and on the demand for other options.
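In practice this kind of what-if work is a loop over candidate menu configurations, each scored by the fitted model. The sketch below shows only the shape of that loop – the `predicted_spend` function is a random placeholder standing in for the real MBC simulator, and the candidate prices are invented:

```python
import random

random.seed(7)

def predicted_spend(menu_config):
    """Placeholder for the fitted MBC simulator: score a menu configuration by
    the average spend it is predicted to generate. Here it just returns a
    random number so the loop runs end to end."""
    return round(random.uniform(8.80, 9.90), 2)

# Candidate menu configurations (hypothetical prices for two levers).
candidate_menus = [
    {"chicken_burger": 7.00, "upsize_fries": 1.00},
    {"chicken_burger": 6.50, "upsize_fries": 1.20},  # promo on the chicken burger
    {"chicken_burger": 6.50, "upsize_fries": 1.00},
]

scored = [(predicted_spend(menu), menu) for menu in candidate_menus]
best_spend, best_menu = max(scored, key=lambda pair: pair[0])
print(best_menu, best_spend)
```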

How accurate is MBC?

As with any conjoint approach, the accuracy depends largely on the questionnaire design and on the survey design, which can be tested before the study goes into field – by generating random data and running it through the MBC simulator.
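The idea behind that pre-field test, in miniature and with every option invented for illustration, is simply to simulate respondents answering the drafted tasks at random and check that each option turns up often enough to be estimable:

```python
import random
from collections import Counter

random.seed(0)

# Invented menu options for the drafted design.
options = ["Whopper", "Double Whopper", "Tendercrisp wrap", "fries", "Coke"]

def random_respondent(n_tasks=10):
    """Simulate one respondent ticking a random subset of the menu on each task."""
    return [random.sample(options, k=random.randint(1, len(options)))
            for _ in range(n_tasks)]

counts = Counter()
for _ in range(1000):  # 1,000 simulated respondents x 10 tasks = 10,000 data points
    for task in random_respondent():
        counts.update(task)

# Any option with a very low count signals that the design gives the model too
# little information about it.
print(counts.most_common())
```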

If I were mapping New Zealand and had 10,000 trig points, I could probably come up with a pretty accurate picture of the landscape. There would be enough points to suggest the Southern Alps and the ruggedness of the King Country, for example. But it wouldn’t be perfectly granular, and I would be extrapolating a lot of the story.

Similarly, given that we are testing, say, 2 million menu combinations with just 10,000 data points, don’t expect the modelling to be perfect. Sawtooth has tested MBC against actual choice data (held out for comparison) and found the software picks the right options with 80% accuracy or thereabouts. So for my money, MBC is right up there with neural networks for accuracy, but much more useful than NNs for testing menu-based choices and explaining the interactions that are going on.
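That 80% figure is a holdout hit rate: the share of tasks, excluded from estimation, on which the model’s predicted pick matches the choice the respondent actually made. The calculation itself is trivial – the two lists below are invented purely to show it:

```python
# Holdout validation in miniature: compare predicted picks against the choices
# respondents actually made on tasks excluded from estimation. Both lists are
# invented to illustrate the calculation.
actual_choices    = ["Whopper", "Coke large", "Whopper",        "fries", "Tendercrisp wrap"]
predicted_choices = ["Whopper", "Coke large", "Double Whopper", "fries", "Tendercrisp wrap"]

hits = sum(a == p for a, p in zip(actual_choices, predicted_choices))
hit_rate = hits / len(actual_choices)
print(f"hit rate: {hit_rate:.0%}")  # 4 of 5 correct = 80%
```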

Currently I’m employing the software for a client who is looking at bundled products in a particularly complex market – that’s why I bought it. There is simply no other tool available to do the job. Without giving any client information away, I will, within a few weeks, disguise the data and publish a case study showing how MBC works and how it can be used.
I mention this because I am a strong believer in the idea that market research professionals ought to share their experiences for mutual benefit. As a profession we are under mounting challenge from the big data mob. However, so long as they stay in the world of Excel, and so long as market researchers remain aggressive in the way we trial and purchase new software advances, we will continue to have a healthy profession. My first reaction to MBC is that it is an answer – at last – to a world where customers can now customise just about anything. You are welcome to contact me if you want to learn more.

Duncan Stuart
64-9 366-0620
