Tag Archives: analytics

Measuring the 2 million combo menu.

Upsize this. Downsize that. No wonder we have whiplash: a fast food menu presents us with something like 2 million possible combinations.

In an age of mass customisation we now have the tools to measure really complex customer choices.

Scenario one. A young guy walks into Burger King and he’s starving. He’s got $10 in his pocket and while he waits in the queue he stares up at the menu board. Does he go for the tender crisp chicken wrap with honey mustard? Or does he stick with his usual Double Cheese Whopper? Or should that be a single cheese but with an upsized Coke? Next please. He makes his order.

Scenario two. Life insurance. The young mother sits at the dining table with her tablet open to the Life Direct website. She is comparison shopping – looking for the best life cover for herself and her husband. She enters the details into the online calculator, nominates how much cover they need (enough to cover the mortgage), and six competing offers pop up within 15 seconds. Each is priced slightly differently per month. She has heard good things about some of the insurers, but bad things about one of the six. And a few of the competing offers come with additional conditions and benefits. She weighs everything up and clicks one.

Scenario three. The couple have all but signed up for the new Ford. They tested other competing brands, but this is the hatchback they like best. “Of course,” says the salesman as he starts to fill in the sales order form, “we haven’t discussed your options yet. Do you prefer automatic or manual? How about the sport model with the mag wheels? That’s only $1200 extra. Of course you have the two-door, but how about the four-door option? And it’s a bit extra for the metallic paint – though it tends to age a lot better than the straight enamel colours.”

“You mean it’s not standard?” she asks. The couple look at each other. Suddenly the decision seems complicated. She murmurs to her partner that maybe the Mazda with the free “on road package” is back in the running. The salesman begins to sweat. The deal is slipping through his fingers.

We are in the age of mass customisation

Three very common consumer scenarios. In an age of mass customisation, consumer choices don’t just come down to brand preference or advertising awareness – the two staples of consumer research – but rather to an astonishingly complex human algorithm which somehow handles what we might call menu-based choice, or MBC.

How complex? One estimate I have seen is that the typical burger menu board – from which, during the 90 seconds you spend in the queue, you must make a choice – offers in excess of 2 million possible combinations and permutations for a typical $15 order. Double cheese. Extra fries. Upsized Coke. Hold the beetroot. Everything comes with a cost and a trade-off, governed by personal tastes, one’s own sense of value and the various conditional skews of this week’s promotions. Somehow humans manage to make a decision, though to be honest, you can see how habits develop. They become an efficient way of dealing with information overload.
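To get a feel for how quickly the numbers blow out, here is a toy calculation. The menu structure and counts below are invented for illustration – not actual Burger King figures – but the multiplication is the point:

```python
# Toy illustration only: a made-up menu structure, not real Burger King data.
# The point is how quickly "a few choices" multiplies into millions of combos.
burgers       = 10         # core burgers and wraps on the board
patty_options = 2          # single or double
extras        = 2 ** 6     # six yes/no extras (cheese, bacon, beetroot, onion, pickles, sauce)
sides         = 5          # fries, onion rings, salad, apple slices, none
side_sizes    = 3          # small, medium, large
drinks        = 8          # cola, diet, lemonade, orange, coffee, shake, water, none
drink_sizes   = 3          # small, medium, large
desserts      = 5          # sundae, pie, cookie, brownie, none

total = (burgers * patty_options * extras * sides * side_sizes
         * drinks * drink_sizes * desserts)
print(f"{total:,} possible orders")   # 2,304,000 with these made-up counts
```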

A big challenge for market researchers

How do researchers even begin to test the attractiveness of 2 million options? The problem for market researchers is not just one of volume. The complication comes from the fact that consumer choices are not strictly rational. When you think about it, paying an extra dollar for an additional slice of cheese in a double cheeseburger is bloody expensive. And if I’m constrained by the $10 note in my back pocket, then who can tell whether I would prefer an upsized Coke and small fries with my burger, or a jumbo Coke and no fries at all? Or no Coke at all, but extra fries? Or Coke and fries but hold the extra cheese? Or…

Menu-based choice modelling is a relatively new field of market research that springs from its roots in conjoint analysis. Conjoint, I learned recently to my delight, was a term first applied by my statistical hero John Tukey, who was fascinated by the question of how we can measure things that are considered not singly, but jointly – hence con-joint. Since the 1970s conjoint approaches have been applied with increasing confidence.

At first these were paper-based, and a typical study of this kind gave respondents a set of 16, 24 or 32 cards on which various combinations and permutations of product descriptions were portrayed. Generally this might be enough to efficiently calculate the relative attractiveness of competing offers.

For example, if five competing brands of peanut butter come in four different price points, two levels of saltiness (low or medium) and three textures (super crunchy, regular crunchy or smooth), then we have 5 x 4 x 2 x 3 = 120 potential product combinations to test. Good design might reduce this to a fairly representative 32 physical cards, each with a specific combination of brand, price, saltiness and texture.
The conjoint approach – with carefully calibrated options to test – enables us to determine which factors are most driving consumer decisions (price is often the main driver), and the degree to which consumers trade off, say, a preferred brand at a higher price against a lower-priced offer from a brand we don’t trust so well.
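To make the arithmetic concrete, here is a rough sketch of that reduction. The brand names and prices are placeholders, and a real study would use a balanced, orthogonal design rather than the random draw shown here:

```python
import itertools
import random

# The peanut butter example: 5 brands x 4 prices x 2 saltiness levels x 3 textures.
# Brand names and prices are placeholders for illustration.
brands    = ["Pam's", "ETA", "Sanitarium", "Brand D", "Brand E"]
prices    = [3.50, 4.00, 4.50, 5.00]
saltiness = ["low", "medium"]
textures  = ["super crunchy", "regular crunchy", "smooth"]

full_factorial = list(itertools.product(brands, prices, saltiness, textures))
print(len(full_factorial))   # 120 possible cards

# A real study would select a balanced, orthogonal fraction; a random draw of
# 32 cards is shown purely to illustrate the reduction in respondent burden.
random.seed(1)
cards = random.sample(full_factorial, 32)
print(cards[0])              # one card: (brand, price, saltiness, texture)
```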

Conjoint has the advantage of boiling each variable down to utility scores, and – since price is one of those variables – allowing us to put a price tag on brand appeal, or flavour.
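What “putting a price tag on brand” means is easiest to show with numbers. The utilities below are invented, not drawn from any real study; the logic is simply that a brand’s utility advantage can be converted into dollars via the utility cost of a price increase:

```python
# Invented utilities for illustration only - not from any real conjoint study.
# Utilities sit on an arbitrary scale; only the differences between levels matter.
brand_utility = {"Pam's": 0.60, "ETA": 0.45, "Sanitarium": 0.10}

# Suppose the estimated utilities imply that each extra dollar of price
# costs the product 0.50 units of utility.
utility_per_dollar = 0.50

# The implied price premium a preferred brand could charge over a weaker one:
premium = (brand_utility["Pam's"] - brand_utility["Sanitarium"]) / utility_per_dollar
print(f"Implied brand premium: ${premium:.2f}")   # $1.00
```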

Even so, because the paper-based system still requires at least 32 cards to give us enough data on the peanut butter survey, it places a high cognitive load on respondents.

By the 1980s conjoint studies could be run on computers, and this enabled some clever variations to be tried. First, the computer approach eased the cognitive load by offering three or four cards at a time, so that respondents could more easily choose their preferred option (or none of these) over 9 or 10 iterations. Another innovation – useful in some situations but not universally – is adaptive conjoint, which, as it cycles a respondent through a series of choice exercises, may quickly decide that this respondent never chooses the Sanitarium brand, and so begins testing variations among the preferred ETA and Pam’s brands. It focuses on the useful part of the choice landscape.
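The adaptive idea can be caricatured in a few lines of code. This is a deliberately crude sketch – real adaptive conjoint updates utility estimates after every task rather than applying a blunt drop rule – and the “respondent” here is just a stand-in function:

```python
import itertools
import random

# Crude caricature of adaptive conjoint: after a few tasks, stop showing
# levels the respondent never picks. Real adaptive methods update utility
# estimates task by task; this drop rule is only for illustration.
brands = ["Pam's", "ETA", "Sanitarium"]
prices = [3.50, 4.50, 5.50]

def respondent(cards):
    """Stand-in respondent: never picks Sanitarium, otherwise takes the cheapest card."""
    acceptable = [c for c in cards if c[0] != "Sanitarium"] or cards
    return min(acceptable, key=lambda c: c[1])

random.seed(0)
active_brands = list(brands)
times_chosen = {b: 0 for b in brands}

for task in range(10):
    shown = random.sample(list(itertools.product(active_brands, prices)), 3)
    chosen = respondent(shown)
    times_chosen[chosen[0]] += 1
    if task == 3:   # after four tasks, retire any brand never chosen so far
        active_brands = [b for b in active_brands if times_chosen[b] > 0]

print("Brands still being tested:", active_brands)
```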

These approaches have been progressively honed and refined, and I have always admired the developers at Sawtooth for working and reworking their algorithms and methodologies to create increasingly predictive conjoint products. They are willing to challenge their own products – they moved away from favouring Adaptive Conjoint a few years ago.

Up until 2010 the software offered respondents a choice between “competing cards.” This approach works best on mass-produced products such as peanut butter, where there may be fewer than 200 combinations and permutations available. However in the last decade or so, with the rise of e-commerce and the presence of bundled offers (think how phone, energy and broadband packages are now bundled), classic choose-one-card conjoint only goes some of the way to explaining what goes on in the consumer mind.

Enter MBC.

This is a Sawtooth product and a particularly expensive piece of software. I recently forked out around NZ$12,000 for a copy and, dismayingly, instead of receiving a sexy drag-and-drop statistical package, had the equivalent of a jumbo bucket of Lego bits offloaded onto my computer. It isn’t pretty. It requires HTML coding, and it has a genuinely steep learning curve.

In my case, I’m just lucky that I have worked with conjoint for a few years now, and conceptually can see where this is coming from. But even so, I can think of no software that I have dealt with in 22 years that has required such a demanding user experience. Luckily by the time the respondent sees it, things aren’t so complicated.

Not pretty for the researcher – but emulates reality for the respondent

With the right HTML coding, the online survey presents – metaphorically – a replica of the Burger King menu board. There you can choose the basic flavour of burger (beef, chicken, fish, cheese) and then top it up with your own preferred options. Choose the beef value meal if you want – and upsize those fries. As you select your options the price goes up or down, so respondents can very accurately replicate the actual choice experience.
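The mechanics of that running total are simple enough to sketch. The prices and option names below are invented; in a real MBC exercise every selection is captured as survey data, not just displayed on screen:

```python
# Invented prices and options, purely to illustrate the running total a
# respondent sees as they tick boxes. A real MBC survey records each
# selection as data as well as updating the price on screen.
BASE_PRICES = {"beef": 6.50, "chicken": 6.90, "fish": 7.20, "cheese": 5.90}
EXTRAS      = {"extra cheese": 1.00, "bacon": 1.50,
               "upsize fries": 0.80, "upsize drink": 0.70}

def order_total(burger, extras=()):
    """Running total shown to the respondent as options are selected."""
    return BASE_PRICES[burger] + sum(EXTRAS[e] for e in extras)

print(f"${order_total('beef', ['extra cheese', 'upsize fries']):.2f}")   # $8.30
```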

And that’s the point of MBC: to emulate real choice situations as closely as possible. We might discover that consumer choice architecture radically transforms itself depending on whether the price point is $10, $12 or $15. Sawtooth’s experience shows that many of the trade-offs are not strictly rational. There may be an overriding decision about whether to order a Whopper or a Double Whopper, but regardless, the shopper may always prefer to have a Coke of a certain size. In other words there may be no trade-off at all – or there is one, but it applies only to the fries.

In a typical example the respondent is shown 10 variations of menu boards, and each time given a price limit and asked to choose. Given that the questionnaire needs HTML coding, it will cost the client somewhat more to conduct an MBC survey compared to a regular out-of-the-box online survey. The investment should be well worth it.

Another consideration: given the 2 million permutations available, this is not something to be tested with small sample sizes. A thousand respondents may be required as a minimum, each making ten choices to generate 10,000 data points. Picture these as trig points on a data landscape.

Given enough of these reference points, the software then has enough information to fill in a complete picture of the consumer-choice landscape. You simply can’t do this when you break down the survey into constituent questions about price, taste, brand, options etc – that is, in considering these elements not jointly, but singly.
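To give a very reduced flavour of what “filling in the landscape” means – the real MBC estimation uses hierarchical Bayes and models whole menus jointly, which is far beyond this sketch – here is one option, one price variable, and a simple logit fitted to simulated choice records:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Very reduced sketch: one option, one price variable, one logit. Real MBC
# estimation (hierarchical Bayes over whole menus) is far richer than this.
rng = np.random.default_rng(42)

# Simulate 10,000 choice records: the price shown for "extra cheese" and
# whether the respondent added it. Hidden rule: take-up falls as price rises.
price  = rng.choice([0.50, 1.00, 1.50, 2.00], size=10_000)
true_p = 1 / (1 + np.exp(-(1.5 - 2.0 * price)))
added  = rng.random(10_000) < true_p

model = LogisticRegression().fit(price.reshape(-1, 1), added)
for p in (0.50, 1.00, 1.50, 2.00):
    est = model.predict_proba([[p]])[0, 1]
    print(f"extra cheese at ${p:.2f}: estimated take-up {est:.0%}")
```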

Now comes the good part. By assembling this data, you can more or less model the optimum menu board. If I were Burger King, for example, I could probably massage the menu so that the average punter with $10 in their back pocket would spend not $9.20 but something closer to $9.80 – a lift of roughly 6.5% in revenue, willingly forked out by happy customers. If I wanted to try a special offer on the flame-grilled chicken burger, I could see what impact this would have on revenue and on the demand for other options.
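This is essentially a what-if simulator sitting on top of the fitted model. In the sketch below the fitted model is replaced by a made-up stand-in function, so the “best” configuration it finds is purely illustrative:

```python
import itertools

# What-if sketch: 'predicted_spend' is a made-up stand-in for a fitted MBC
# model that would predict average spend for any candidate menu configuration.
def predicted_spend(cheese_price, upsize_price, budget=10.00):
    """Fake response surface: cheaper add-ons lift average spend, capped at the budget."""
    base = 8.20
    lift = (2.00 - cheese_price) * 0.45 + (1.00 - upsize_price) * 0.60
    return min(base + max(lift, 0.0), budget)

cheese_prices = [1.00, 1.50, 2.00]
upsize_prices = [0.50, 0.80, 1.00]

best = max(itertools.product(cheese_prices, upsize_prices),
           key=lambda cfg: predicted_spend(*cfg))
print("Best configuration:", best, "-> average spend",
      round(predicted_spend(*best), 2))
```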

How accurate is MBC?

As with any conjoint approach, the accuracy largely depends on the questionnaire design, and on the survey design – which can be tested before it goes into the field by using random data and running it through the MBC simulator.

If I were doing a landscape map of New Zealand and had 10,000 trig points, I could probably come up with a pretty accurate picture of the geological landscape. There would be enough points to suggest the Southern Alps, and the ruggedness of the King Country, for example. But it wouldn’t be totally granular or perfect, and I would be extrapolating a lot of the story.

So, similarly, given that we are testing, say, 2 million combinations of menu with just 10,000 data points, don’t expect the modelling to be perfect. Sawtooth has tested MBC against actual data (held out for comparison) and found the software chooses the right options with 80% accuracy or thereabouts. So for my money MBC is right up there with neural networks for accuracy, but much more useful than NNs for testing menu-based choices and explaining the interactions that are going on.
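That holdout test is easy to describe in code, whatever model sits behind it: keep some real choice tasks out of the estimation and count how often the model’s predicted pick matches what the respondent actually chose. The tiny example below uses a deliberately naive placeholder model and made-up tasks:

```python
# Generic holdout hit-rate check. The "model" here is a naive placeholder;
# in practice it would be the estimated MBC model's predicted choice.
def hit_rate(holdout_tasks, predict_choice):
    """Share of held-out tasks where the predicted option matches the actual choice."""
    hits = sum(1 for shown, actual in holdout_tasks if predict_choice(shown) == actual)
    return hits / len(holdout_tasks)

# Tiny fake holdout set: (options shown, option actually chosen).
holdout = [
    (["Whopper", "Double Whopper", "Chicken wrap"], "Double Whopper"),
    (["Whopper", "Chicken wrap"], "Whopper"),
    (["Double Whopper", "Chicken wrap"], "Chicken wrap"),
]

naive_model = lambda shown: shown[0]   # always predicts the first option shown
print(f"hit rate: {hit_rate(holdout, naive_model):.0%}")   # 33%
```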

Currently I’m employing the software for a client who is looking at bundled products in a particularly complex market – that’s why I bought the software. There is simply no other tool available to do the job. Without giving any client information away, I will within a few weeks disguise the data and make available a case study to show how MBC works, and how it can be used.
I mention this because I am a strong believer in the idea that market research professionals ought to share their experiences for mutual benefit. As a profession we are under mounting challenge from the big data mob. However, so long as we don’t simply stay in the world of Excel, and so long as market researchers remain aggressive in the way we trial and purchase new software advances, we will continue to have a healthy profession. My first reaction to MBC is that it is an answer – at last – to a world where customers can now customise just about anything. You are welcome to contact me if you want to learn more.

Duncan Stuart
64-9 366-0620

Here’s why I think Market Research companies are flying backwards.

Yes, technically we’re flying, but only just. Kitty Hawk thinking doesn’t help high-flying clients.

The very last meeting I attended when I was with my previous employer did not go particularly well. It was the weekly meeting of senior execs and on the agenda were things like staff training, client account management, and the usual HR concerns. I already knew that I was going to be leaving the organisation, so I wanted to end by contributing something which I felt was very important. “I want to run some staff training on SPSS.”

I didn’t expect the answer from my senior colleague. She asked simply: “why?”

I was quite taken aback, and I must admit I fumbled for an answer, mumbling things like “extracting more value from the data we collect.” I can’t remember my exact words but they were along those lines. Anyway the workshop suggestion got kiboshed, and I left the organisation a few weeks later.

Three weeks ago one of the bigger clients of that same organisation announced that they were getting SPSS in-house, and they introduced me to a new member of the team, a particularly bright analyst who had previously worked with the banks.

I realised I had just witnessed the well-expressed conclusion to the argument I had fumbled 18 months earlier. The reason we need to smarten up our analytical skills as market researchers is that if we don’t, we will simply get overshadowed by our clients.

In fact I see this all over the place. In-house analytical units within banks are using SPSS and SAS quite routinely, and most of the major banks have adopted text analysis software to help them swiftly code and analyse their vast streams of incoming verbal feedback. The text analysis centre-of-gravity in New Zealand is located client side, and not on the side of the market research industry. The same could be said of Big Data analytics. MR companies are scarcely in the picture.

Meanwhile, what is happening within market research companies? Well, this last week I’ve been in conversation with two analysts looking for better, more challenging roles than they have been given within their market research firms. One of them, an employee at one of New Zealand’s leading MR firms (one I largely admire – they are by every measure a world-class operator), had for several years been asked to produce crosstabs and other descriptive outputs, and had on no occasion, ever, had his mental capabilities even remotely stretched. I’m talking about a graduate in statistics who has effectively been cudgelled to death by the rote boredom of low-calorie market research thinking. I asked what he was equipped with, software-wise, and he told me: “Microsoft Excel.”

This is simply not good enough, either professionally or strategically. While globally the volume of marketing data analytics is growing by something like 45% per annum, the market research industry is flat-lining, or showing single-digit growth at best. In other words most of the growth is happening over at the client’s place. And they aren’t even specialists in data.

If the market research industry wishes to stay relevant, then it has to offer something that clients can’t provide for themselves. It used to be that market researchers provided unrivalled capabilities in the design, execution and analysis of market research projects. The key word here is “unrivalled”, but I’m afraid the leading research firms are being simply outstripped by their own clients.

The mystery to me is why the larger firms appear blind to this phenomenon. Perhaps, in building their systems around undernourished, under-equipped DP departments, they have a wonderfully profitable business model. Pay monkey wages, and equip them with Excel. And for that matter, keep them at arm’s length from the client so they never get to see how their work even made a difference. The old production-line model. Tighten the screws and send it along the line.

Or perhaps the big firms are simply comfortable doing things the way they’ve always been done, or perhaps senior managers, having grown up with Kitty Hawk thinking, lack the imagination or the will to fly into the stratosphere.

Either way, if I were a young analyst looking at my career prospects, my attention would be over on the client side, or on dedicated data analytics operators such as Infotools. That’s actually a painful thing for me to say, speaking as a life member of my market research professional body. But if the status quo prevails, then we are going to see not just the relative decline, but the absolute decline of our industry.

What can market research firms do to rectify this problem?  Here are 4 suggestions:

  1. Invest in decent analytical software. Just do it. A few thousand dollars – for a much better return than sending your exec to that overseas conference.
  2. Reignite the spirit of innovation within your groovy team of analysts. Rather than focus merely on descriptive data, let them loose on the metadata – the stuff that explains the architecture of the public mood.
  3. Develop a value add proposition to take to clients. Look, all of us can describe the results of a survey, but we market researchers know how to make that data sing and drive decision-making.
  4. Employ specialists in big data, so that as market research companies we can integrate the thinking that comes from market surveys, and qualitative work, with the massive data sitting largely untouched in the world of the client.

In my view the market research industry has been going off-course for the last 20 years. We are stuck at Kitty Hawk. We stopped shooting for the moon.


Structural Equation Modelling – an underused form of multivariate analysis


One of the challenges in data analysis is to get a sense of storyline. My distaste for crosstabs (yeah, sure, they can tell us stuff) comes from their fragmentary style of storytelling. If this were a crime scene, then crosstabs would give you a sliver of glass here, a trace of gunpowder over there, and a few fingerprints which may, or may not, be connected to the main story. The evidence seldom hangs together.

Years ago I was thinking how cool it would be if we could somehow construct a flowchart showing how A causes B, drives C etcetera – when a colleague tapped me on the shoulder and said something like “Duh…you haven’t used structural equation models?”

So ten years ago I added AMOS to my little library of SPSS products and wow, what a useful implement.  The classic case study they always use to illustrate AMOS is how Education (which school you went to) and Income (how rich your family is) help predict those SAT scores that make or break your chances of getting into Harvard.

This is the data they use on the demonstration video: http://vimeo.com/21136244

As the video demonstrates, a flowchart-style model is very quick to put together, and even a simple model shows the relative importance of the drivers. In this case, being rich doesn’t give you good grades… but it does help you get into a good school, which DOES give you good grades. The other good thing about SEMs is that they can factor in all those variables we haven’t asked about in the survey. For example, in staff satisfaction surveys roughly half of what drives staff engagement has nothing to do with pay, or training, or leadership style, or the perks of the job – nice though these may be. Something like 50% of the story is down to the attitude of the staff member – whether they’re fundamentally engaged as an individual, or jaded and indifferent to the world around them. So in a staff survey SEM we might create an “unobserved variable” which helps us get a measure of these exogenous forces. I like it because otherwise we often assume that only the things we measure are driving the outcomes – that somehow 100% of the SAT story is determined by Income and Education alone. Never mind that little Johnny at Auckland Grammar is still a lazy sod.
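To give a flavour of the path idea with something anyone can run – this is not AMOS, and a proper SEM estimates all paths simultaneously and reports fit statistics – here are the two regressions implied by the Income → School → SAT diagram, run on synthetic data invented for the purpose:

```python
import numpy as np

# Synthetic data, invented purely to illustrate the Income -> School -> SAT
# path idea. A real SEM (e.g. in AMOS) estimates all paths simultaneously and
# reports model fit; this sketch just runs the two regressions the diagram implies.
rng = np.random.default_rng(7)
n = 1_000

income = rng.normal(size=n)
school = 0.6 * income + rng.normal(scale=0.8, size=n)                  # income buys school quality
sat    = 0.7 * school + 0.05 * income + rng.normal(scale=0.7, size=n)  # school does most of the SAT work

# Path 1: school quality regressed on income.
a = np.polyfit(income, school, 1)[0]

# Path 2: SAT regressed on school AND income together, so the direct income
# effect can be separated from the effect that flows through the school.
X = np.column_stack([school, income, np.ones(n)])
b_school, b_income_direct, _ = np.linalg.lstsq(X, sat, rcond=None)[0]

print(f"income -> school           : {a:.2f}")
print(f"school -> SAT              : {b_school:.2f}")
print(f"income -> SAT (direct)     : {b_income_direct:.2f}")
print(f"income -> SAT (via school) : {a * b_school:.2f}")
```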

The result of an SEM is an elegant model that quickly tells us what the drivers are. I recall doing one for a Government department (customer satisfaction), where the survey asked dozens of questions, from demographics through to deliverables such as promptness in answering the phone, the demeanour of the staff and so on. Well, when we put it together in one model, most variables didn’t matter a toss. The one that really mattered – and it explained 70% of the story – was this: when you phoned up to get an answer to a problem, did you get a helpful answer? There were dozens of deliverables, but these were all peripheral to the main desire of their customers: they just wanted answers.

SEMs deliver a holistic picture in a format that non-statisticians easily get. They’re most useful for describing what’s going on – more so than serving as an exploratory tool. And they’re easy to generate, and they deliver diagnostic statistics so you can see whether your models are significant and/or meaningful. My question is – why do I only see these used in academia?

Gaining 20% productivity. Number 1. Wholesale Value Destruction – how organisations reduce the value of their hard-won data.

I’ve pondered for the past 20 months how market research organisations can add value to their offer and therefore deliver better ROI for their clients as well as for themselves. I’m going to talk about this subject from time to time because I am firmly of the belief that most MR firms could easily achieve a 20% lift in productivity. During this recent period I was with a large research firm, and nothing dissuaded me from the view that a 20% lift is fully achievable. All they had to do was challenge their own status quo. That is the hard first step, and I look forward to the day they take it.

Meanwhile I’m going to establish a few battlegrounds on which the value/productivity equation needs to be fought.  The most obvious area, I think, is the question of what we do with the data. Do we add value to it? Or do we destroy the value of the data we collect?

By and large I think we do both. Value-wise, our analytics routines have the efficiency of a boat that is taking on water – half our effort goes into powering the vessel along while the other half goes into bailing water out of the bilge.

If you did a time and motion study of a typical analytics functionary you’d see too little of the Power-Up activities, and too much of the Bail Out. Let’s itemise these.

Power-Up Analysis

  1. Pre-planning the analytic process for the project.
  2. Construction – through recoding and other calculations – of the variables that matter. (By contrast, most demographic variables are of little consequence.) See the brief sketch after this list.
  3. Exploring for new discoveries of underlying principles. These give strategic direction.
  4. Developing dashboards that allow the client (not the researcher) to quickly look up every little detail. (Sales by region X)
  5. Developing what-if scenario models, so the client can keep on using the data even when their plans, or situation changes.
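As a tiny example of the “construction of variables that matter” mentioned in point 2 above – the column names, cut-offs and data are invented purely for illustration:

```python
import pandas as pd

# Invented column names and cut-offs - just to show the kind of recoding meant
# by "constructing variables that matter" rather than reporting raw answers.
df = pd.DataFrame({
    "satisfaction_1_to_10": [9, 4, 7, 10, 3, 8],
    "would_recommend":      [1, 0, 1, 1, 0, 1],
    "monthly_spend":        [120, 40, 85, 200, 25, 150],
})

# A derived "advocate" flag: highly satisfied AND willing to recommend.
df["advocate"] = (df["satisfaction_1_to_10"] >= 8) & (df["would_recommend"] == 1)

# A value segment built from spend, rather than reporting spend as-is.
df["value_segment"] = pd.cut(df["monthly_spend"], bins=[0, 50, 120, float("inf")],
                             labels=["light", "medium", "heavy"])

print(df[["advocate", "value_segment"]])
```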

Bail-out activities that destroy value.

  1. Analysis without a plan. Few of the minutes spent this way will be fruitful.
  2. Drift-net analysis (running crosstabs of everything by everything in search of those nuggets). This process actually multiplies the time it takes to find answers.
  3. Descriptive research. Did you know males buy more than do females? Not interesting unless you can tell me WHY.
  4. Producing hundreds of slides.
  5. Any activity that makes the story more granular rather than less granular. My old metaphor: less sand please, more sand-castles.

I’ve watched many projects where the Bailing-Out activities outweighed the Power-Up activities. In other words if these were boats, then they’d be afloat, but going nowhere. That’s not what the client wants.

There are several cures for this problem. One is to rework the existing project planning process so that the team spends one more meeting with the client – asking how the research is going to be used.  Think about that carefully.

Another way to address the problem is to equip the analysts with better tools. I know a number of senior researchers who think crosstabs are fine and should form the backbone of all research analysis. Well, not in my books. They deliver only moderate value – relying on luck rather than intellect to deliver anything of use.
