Tag Archives: decision making

The big thing we forget to measure

In our market research reports we generally make a concerted effort to convey the idea that our data are both precise and reliable. We gleefully report the sample size, and we pinch our fingers together as we cite the maximum margin of error – which in many surveys is plus or minus 3.1%. Talk about pinpoint accuracy!
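That 3.1% figure is what the standard formula gives for a survey of roughly n = 1,000 at 95% confidence – a minimal sketch, with the sample size assumed for illustration:

```python
import math

def max_margin_of_error(n, z=1.96):
    """Maximum margin of error at 95% confidence (z = 1.96).

    Uses the worst case p = 0.5, which maximises p * (1 - p),
    hence the 0.25 inside the square root.
    """
    return z * math.sqrt(0.25 / n)

# A typical n = 1,000 survey gives the familiar plus-or-minus 3.1%.
print(round(max_margin_of_error(1000) * 100, 1))  # 3.1
```

Note that this is the *maximum* margin: for any reported percentage other than 50%, the true margin of error is smaller.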

Yet we blithely ignore the fact that our clients work in a fuzzy universe where things go right, or horribly wrong. If you were the brand or marketing manager for Malaysia Airlines this year, I really wonder whether your standard market research measures – brand awareness, consideration, advertising awareness, etcetera – would have anything remotely to do with the fact that you have lost, very tragically, two airliners within the space of three months. Risk happens. Regardless of your marketing investment, passengers aren’t flying MH.

Or if you are the marketing manager for Coca-Cola in your country, do you honestly think that the subtle shifts of brand awareness, ad recall and consideration have as much effect as whether this summer proves to be wet and dismal, or an absolute scorcher?

We may not have a crystal ball when it comes to weather forecasting, but we do have decades of accurate climate data. When I did this exercise a few years ago I gathered 30 years’ worth of data, popped it into Excel, then used a risk analysis tool to come up with a reasonable distribution curve based on that data. It looked something like the chart above. Then I could set my temperature parameter – below x° – and, based on that, could fairly reasonably calculate that my city had a 20% chance of having a dismal summer. The risk was high enough, I felt, that any marketing manager for any weather-sensitive product or service should have a contingency plan in case the clouds rolled over and the temperatures dropped.
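The same back-of-envelope calculation can be sketched without a commercial plug-in. Assuming summer temperatures are roughly normally distributed, Python’s standard library will do; the temperature figures below are invented stand-ins for the real 30-year record:

```python
from statistics import NormalDist, mean, stdev

# Hypothetical mean summer temperatures (°C) for 30 years -- invented data,
# standing in for the real climate records used in the original exercise.
summers = [21.8, 23.1, 20.4, 22.6, 19.7, 24.0, 22.2, 21.1, 23.5, 20.9,
           22.8, 21.5, 19.9, 23.3, 22.0, 20.6, 24.2, 21.7, 22.4, 20.1,
           23.0, 21.3, 22.9, 20.8, 23.7, 21.9, 22.5, 20.3, 23.2, 21.6]

# Fit a normal distribution to the historical record.
climate = NormalDist(mean(summers), stdev(summers))

# P(mean summer temperature falls below a "dismal" threshold of 21 °C).
p_dismal = climate.cdf(21.0)
print(f"Chance of a dismal summer: {p_dismal:.0%}")
```

With real data the threshold and the resulting probability would differ; the point is that a one-number risk estimate falls out of a few lines once the historical distribution is in hand.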

Why don’t we do this more often? Why don’t we build in the various risks that accompany the work of our clients? If we did, we could better help them to make decisions as circumstances arise. We consider some risks – what if the competitor achieves greater brand awareness and consideration? – yet we treat many of the others (weather, currency or price fluctuations, whether supermarkets choose to stock us, whether some kind of health scare will hit our category, etcetera) as if they were off-limits and outside our scope.

Though these data are not outside our scope at all. They may not come in via our surveys, but they are just as relevant. Going back to the weather example: we did observational research at a local lunch bar, and found that on wet, cool days the pattern of drinks consumption was quite different to that on hot, sunny days. It wasn’t just a question of volume. On cool days people switched – slightly – from CSDs towards juices, as well as choc-o-milk type drinks.

So if I were a soft drink marketer, and I had a market research firm supplying me with climate risk data, as well as an idea of how people behave when it is hot, average, or cool – then I might come up with a marketing plan for each of those circumstances. I would go into summer expecting it to be average, but as soon as the long-range weather forecast told me the weather was indeed going to be cool, I would think: well, I had a 20% expectation that this would happen. Time to wheel out Plan B. I’d be prepared.

The risk analysis tool that I use is called @Risk and it is frighteningly simple to use. It works as a plug-in to Excel, and takes roughly 10 minutes to learn.  Since using the software, my outlook toward what we do as market researchers has totally changed.

We are not in the survey business. We are in the business of assisting our clients to make informed, evidence-based decisions.

Sometimes the evidence comes from our survey work, bravo! But sometimes the evidence comes from the weather office, or from the war zone of Ukraine.

The Adam Sandler effect

I was thinking a little more about the different rules we apply when we make our customer choices, and how these nuances may be lost if we ask research questions in the wrong way.

A really simple example, one I’ve mentioned before, illustrates what I call the Adam Sandler effect.

What happens is this: Five of you decide to go to the movies this Saturday. So far so good.

But which movie? You and your group love movies, and surely you have a collective favourite to see. So you start discussing what’s on and four of you agree the new Clooney film is the one you want.

“Ah… but I’ve already seen that film,” says the fifth member of your clique.

The veto rule.

Okay, what’s our next best choice? And so it goes. Whatever you choose, somebody has either already seen it or has read a tepid review.

What you have here is a collision between two competing sets of rules. You set out to see your favourite film, and instead you and your group end up seeing the “least objectionable” film that, actually, nobody wanted to see. This is where Adam Sandler, I swear, has earned his place as one of Hollywood’s top three grossing actors of the past 10 years.

Apart from The Wedding Singer, which was a great little film, the rest have been an appalling bunch of half-witted comedies. Little Nicky, anyone?

It doesn’t matter. Every weekend at the movies there is a blockbuster or three – and then there is Adam Sandler, lurking there: his movies ready to pick up the fallout from your friends’ well-meaning decision process.

Now for researchers this has serious implications. If we only ask what people want, then we may end up with a theoretical ideal – but our research will never pick up the kind of goofy, half-assed, slacker productions that actually gross the big dollars. In our questionnaires we need to think about how we might pick up the Adam Sandler effect. Good luck to the guy. He has the knack of reading the Saturday night crowd far more accurately than most of our surveys could ever hope to. We should learn from that.

  • Choices depend on positives as well as vetoes
  • When two or more people make a decision, the outcome depends more strongly on vetoes than on positives
  • There is always a market for things that are least objectionable.
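Those rules can be sketched as a hypothetical group-choice function (film names and scores invented for illustration): each veto eliminates an option outright, so the winner is the best surviving choice rather than anyone’s favourite.

```python
# Each person scores each film from 0-10; None means "veto" (already seen it,
# read a tepid review, etc.). Names and scores are invented for illustration.
ratings = {
    "New Clooney film":    [9, 8, 9, 8, None],  # four love it, one has seen it
    "Art-house drama":     [7, None, 8, 6, 7],  # vetoed off a tepid review
    "Adam Sandler comedy": [4, 5, 3, 5, 4],     # nobody's favourite, nobody objects
}

def group_choice(ratings):
    """Apply the veto rule first, then pick the best surviving option."""
    survivors = {film: scores for film, scores in ratings.items()
                 if None not in scores}
    if not survivors:
        return None  # everyone stays home
    # Among the survivors, take the film with the highest total score.
    return max(survivors, key=lambda film: sum(survivors[film]))

print(group_choice(ratings))  # prints "Adam Sandler comedy"
```

Asked individually, nobody in this group would name the Sandler film as their favourite – which is exactly why a survey built only on positives would never predict it winning the Saturday night.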

What Hugo Chavez taught me about decision-making.

Hugo Chavez and my Uncle made a massive decision through their joint belief in clarity.

I’ve never met the late Hugo Chavez, but I am one degree of separation from Venezuela’s revered leader thanks to his dealings with my uncle, Rod Stuart, who was based in Montreal. And Rod had this very instructive story for me as a researcher, from the time I tried to impress him with the grunty statistical work I could do.

I thought I’d be impressing my uncle, who was a civil engineer and the man in charge – the person at the very top – of several hydro projects world-wide. He was responsible for, or consultant to, (I think) six of the largest hydro projects on the planet, including the Three Gorges project, the massive Canadian Churchill Falls project, a huge Pakistani dam built in the 1960s, and the top-10 ranked Simon Bolivar hydro project in Venezuela.

It was on this project that uncle Rod was asked to consult. He received a phone call in Montreal directly from Hugo Chavez asking Rod to come to his palace. “Duncan,” my uncle advised me, “if any world leader asks you to meet at their palace my best advice is to catch the next plane.”

So he reported to Chavez who was trying to sign-off the new hydro project. “What’s the problem?” my uncle asked.

“The engineers,” said the president. “They’ve recommended two sites for the hydro project – we could flood this valley over here…or,” he said, pointing to a map, “we could drown this valley over there. But which one?”

Rod knew the engineers who had written the massive report: “Mister President, the guys who wrote that report are very good. They practically wrote the book on hydro decisions.”

“That’s the problem!” barked Chavez. “I don’t want a book, I just want an answer.”

So my uncle agreed to read the report over the next 48 hours, and deliver a recommendation for the President.

“It was an enjoyable two days,” Rod reported. He was in his 70s at the time. “My hotel room looked over the swimming pool and suddenly I realised why Miss Venezuela always wins Miss Universe. Duncan, they all looked like Miss Universe.”

Two days later he reported back at the presidential palace. The report, he said, was very thorough, and had considered geo-physical risks, return on capital, social costs, time frames, delivery of electricity, worker safety, climate…the whole rich texture of risk and return on a massive capital project.

“…and?” asked the president.

“Mister President, what the report is really saying is: it’s about 50/50. So I have two questions for you, and then we can come to a decision. My first question is this: are you certain you want to build a hydro project at all?”

“Of course I am,” said the president  “We are a growing country and we do not want to be energy dependent.”

“Then it comes to this. If on economic grounds both sites are about 50/50, and on risk terms they are about 50/50, and on engineering terms they are about 50/50, and the social cost of drowning this valley here is the same as drowning that valley over there…if it’s all about 50/50, then here is my second question. Mister President, do you have any personal reasons for choosing this valley or that valley?”

Hugo Chavez looked at the map and weighed his words delicately. He pointed to one of the valleys and reflected, “You know, my mother grew up in a little village over here….”

“In that case,” said Uncle Rod, “I suggest we build the dam in the other valley.”

Rod told me the story because he wanted me to understand that decisions, big or small, need not be over-complicated. Often in statistics we are compelled to test whether one number is “statistically” higher than the next. Rod’s point: if you have to test whether there is a difference, then in real terms there isn’t one. That is how he and Hugo Chavez were able to reduce what started off as a complex equation, cancelling out, layer by layer, the differences between Option 1 and Option 2. In the end, having first stepped back to check that any decision needed to be made at all, the choice between Option 1 and Option 2 was really a test of whether the president could sleep more comfortably with one option than the other.

Both men knew they didn’t live in a perfectly black and white world, and both knew – to their credit – when to stop focusing on the decimal points and when to simply make a decision.

Rod found Chavez to be an open-minded, intelligent man, and his words – “I don’t want a book, I just want an answer” – are a credo that researchers ought to live by.