
THE POWER OF “DON’T KNOW”

If there’s one piece of feedback I get more than any other from respondents to questionnaires I’ve written, it is that I don’t always give them Don’t Know as an option. I think there are times when it is useful to exclude this option, and times when Don’t Know should be mandatory in a questionnaire.

Why would I exclude Don’t Know? I do so when the question is of a lower order – for example, where I am trying to get a rough measure on a minor issue. From my reading of the literature, the exclusion of Don’t Know will prod a few more people to express which way they lean – and frankly, the more who do so, the easier the data are to analyse. But this comes at a cost.

Research studies have suggested that there is some reluctance by respondents, at least in some surveys, to admit that they don’t know where they stand. Including Don’t Know as an option makes it explicitly acceptable to tick that box if that’s the way the respondent feels. What we may miss is an indication of which way the respondent might be leaning – US political polls very frequently ask people who answer Don’t Know a follow-up question: yes, but which way do you lean? A high proportion of those who said they don’t know then indicate that they actually lean one way or the other.

So as I see it, there are competing arguments about the inclusion of Don’t Know.

But there are circumstances in which I always try to include a Don’t Know option: those where I am trying to get an accurate measure not of general attitudes, but of projected behaviours. Which brand will you buy? Which way would you vote?

I have a rule of thumb that for any given question, around 10 to 15% of respondents can be counted on to be unsure which box to tick. If that’s true, then we have an effect that rivals or even exceeds the variance caused by the survey’s margin of error. In other words, if you don’t provide a Don’t Know option when it matters, you could easily be invalidating your own conclusions. That lack of opinion may be very powerful stuff.
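To put rough numbers on that claim, here is a minimal sketch, assuming a simple random sample and the standard 95% confidence formula; the sample size and the Don’t Know share are illustrative, not drawn from any particular survey.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Maximum margin of error for a proportion at 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000                        # a typical national sample
print(f"Margin of error: +/-{margin_of_error(n):.1%}")   # ~ +/-3.1%

dk_share = 0.125                # rule-of-thumb 10-15% unsure (midpoint)
print(f"Don't Know share: {dk_share:.0%}")
# The unsure group is roughly four times the margin of error,
# which is why omitting the option can swamp your precision.
```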

 

The big thing we forget to measure

[Chart: risk distribution curve]

In our market research reports we generally make a concerted stab at the idea that our data is both precise and reliable. We gleefully report the sample size, and we pinch our fingers together as we cite the maximum margin of error – which in many surveys is plus or minus 3.1%. Talk about pinpoint accuracy!

Yet we blithely ignore the fact that our clients work in a fuzzy universe where things go right or horribly wrong. If you were the brand or marketing manager for Malaysia Airlines this year, I really wonder if your standard market research measures – brand awareness, consideration, advertising awareness, etcetera – would have anything remotely to do with the fact that you have lost, very tragically, two airliners within the space of a few months. Risk happens. Regardless of your marketing investment, passengers aren’t flying MH.

Or if you are the marketing manager for Coca-Cola in your country, do you honestly think that the subtle shifts of brand awareness, ad recall and consideration have as much effect as whether this summer proves to be wet and dismal, or an absolute scorcher?

We may not have a crystal ball when it comes to weather forecasting, but we do have decades of accurate climate data. When I did this exercise a few years ago I gathered 30 years’ worth of data, popped it into Excel, then used a risk analysis tool to come up with a reasonable distribution curve based on that data. It looked something like the chart above. Then I could set my temperature parameter – below x° – and on that basis could reasonably calculate that my city had a 20% chance of having a dismal summer. The risk was high enough, I felt, that any marketing manager for a weather-sensitive product or service should have a contingency plan in case the clouds rolled over and the temperatures dropped.
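For anyone who wants to try this without an Excel plug-in, here is a rough Python equivalent of the exercise; the 30 temperature readings are invented placeholders, and I’m assuming a normal distribution is a reasonable fit – the kind of call a fitting tool helps you make.

```python
import numpy as np
from scipy import stats

# Placeholder data: 30 years of mean summer temperatures (deg C).
rng = np.random.default_rng(0)
summer_means = rng.normal(loc=22.0, scale=1.8, size=30)

# Fit a distribution to the historical record.
mu, sigma = stats.norm.fit(summer_means)

# Probability of a "dismal" summer, i.e. a mean below some threshold x.
x = 20.5
p_dismal = stats.norm.cdf(x, loc=mu, scale=sigma)
print(f"P(mean summer temperature < {x} deg C) = {p_dismal:.0%}")
```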

Why don’t we do this more often? Why don’t we build in the various risks that accompany the work of our clients? If we did, we could better help them to make decisions as circumstances arise. We consider some risks – what if the competitor achieves greater brand awareness and consideration? – yet we treat many others (weather, currency or price fluctuations, whether or not supermarkets choose to stock us, whether or not some kind of health scare will affect our category, etcetera) as if they were off-limits and outside our scope.

Though these data are not outside our scope at all. They may not come in via our surveys, but they are just as relevant. Going back to the weather example: we did observational research at a local lunch bar, and found that on wet, cool days the pattern of drinks consumption was quite different to that on hot, sunny days. It wasn’t just a question of volume. On cool days people switched – slightly – from CSDs towards juices, as well as choc-o-milk-type drinks.

So if I were a soft drink marketer, and I had a market research firm supplying me with climate risk data, as well as an idea of how people behave when it is hot, average, or cool – then I might come up with a marketing plan for each of those circumstances. I would go into summer expecting it to be average, but as soon as the long-range weather forecast told me that the weather was indeed going to be cool, I would think: well, I had a 20% expectation that this would happen. Time to wheel out Plan B. I’d be prepared.

The risk analysis tool that I use is called @Risk and it is frighteningly simple to use. It works as a plug-in to Excel, and takes roughly 10 minutes to learn. Since I started using the software, my outlook on what we do as market researchers has totally changed.

We are not in the survey business. We are in the business of assisting our clients to make informed, evidence-based decisions.

Sometimes the evidence comes from our survey work – bravo! But sometimes the evidence comes from the weather office, or from the war zone of Ukraine.

The Adam Sandler effect

I was thinking a little more about the different rules we apply when we make our customer choices, and how these nuances may be lost if we ask research questions in the wrong way.

A really simple example, and one I’ve mentioned before, illustrates what I call the Adam Sandler effect.

What happens is this: Five of you decide to go to the movies this Saturday. So far so good.

But which movie? You and your group love movies, and surely you have a collective favourite to see. So you start discussing what’s on and four of you agree the new Clooney film is the one you want.

“Ah… but I’ve already seen that film,” says the fifth member of your clique.

The veto rule.

Okay, what’s our next best choice? And so it goes. Whatever you choose, somebody has either already seen it or has read a tepid review.

What you have here is a collision between two competing sets of rules. You set out to see your favourite film, and instead you and your group end up seeing the “least objectionable” film – one that nobody actually wanted to see. This is where Adam Sandler, I swear, has earned his place as one of Hollywood’s top three grossing actors of the past 10 years.

Apart from The Wedding Singer, which was a great little film, the rest have been an appalling bunch of half-witted comedies. Little Nicky, anyone?

It doesn’t matter. Every weekend at the movies there is a blockbuster or three – and then there is Adam Sandler, lurking: his movies ready to pick up the fallout from your friends’ well-meaning decision process.

Now for researchers this has serious implications. If we ask only about what people want, then we may end up with a theoretical ideal – but our research will never pick up the kind of goofy, half-assed, slacker productions that actually gross the big dollars. In our questionnaires we need to think about how we might pick up the Adam Sandler effect. Good luck to the guy. He has the knack of reading the Saturday night crowd much more accurately than most of our surveys could ever hope to achieve. We should learn from that. (A toy simulation of the effect follows the list below.)

  • Choices depend on positives as well as vetoes
  • When two or more people make a decision, the outcome depends more strongly on vetoes than on positives
  • There is always a market for things that are least objectionable.
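To see how the veto dynamic hands the win to the least objectionable option, here is that toy simulation; the film names, appeal scores and veto probabilities are all invented for illustration.

```python
import random

random.seed(1)
films = ["Clooney drama", "Indie thriller", "Oscar-bait epic", "Sandler comedy"]
group_size = 5

# Appeal scores per person: the Sandler comedy is nobody's favourite,
# but it is inoffensive.
appeal = {f: [random.uniform(6, 10) for _ in range(group_size)] for f in films}
appeal["Sandler comedy"] = [5.0] * group_size

# Veto probabilities: well-reviewed films are more likely to have been
# seen already (or panned in a review) by at least one group member.
veto_prob = {"Clooney drama": 0.4, "Indie thriller": 0.3,
             "Oscar-bait epic": 0.3, "Sandler comedy": 0.05}
vetoed = {f for f in films
          if any(random.random() < veto_prob[f] for _ in range(group_size))}

# The group drops every vetoed film, then picks the best survivor.
survivors = [f for f in films if f not in vetoed] or films
choice = max(survivors, key=lambda f: sum(appeal[f]))
print("Vetoed:", sorted(vetoed))
print("The group ends up seeing:", choice)
```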

STORY-TELLING FOR MARKET RESEARCHERS


A solution to the increasing volume and complexity of research reporting is to increase our story-telling skills. Here are 10 useful guidelines.

Storytelling has become one of the hot topics in business circles in the last couple of years. One reason for this is the sheer explosion of the amount of information that must be processed by organisations and communicated to their various stakeholders. By some measures the amount of data in this world is growing by something like 45 per cent per annum. So how do business people communicate all this information?

Market researchers, before the age of the PC and the datashow projector, used to communicate by two means only. One was to physically get up, shuffle papers and deliver a veritable lecture to the client. The second was to present a written report. We were famous for these, and even until recently market research firms were criticised for delivering doorstopper reports.
No wonder so much of our work ended up populating the bottom drawer of the client’s desk. It was like this from the early decades of the 20th century, when pioneer Charles Parlin would submit reports hundreds of pages long, right through to the 1980s, when the advent of the PC and PowerPoint began to change the way we told our stories to our clients.

At first, the use of visuals and the PowerPoint medium was an exciting new thing for market researchers. The medium suits our use of statistical charts, though most senior professionals will remember the heady days when assistants would come charging into their office saying ‘look at this!’ and show how they’d used clipart to deliver a visual metaphor for whatever was going on inside the data. Fortunately the fad of adding whoosh sound effects passed quickly.

But did it lead to better storytelling? By and large the answer is no. Over time market research slide decks have turned into gargantuan productions showing slide after slide of pies and bars. In short, this process has commoditised a lot of market research. Many senior researchers may deny this, but their staff gauge the success of a productive day by the number of slides they have produced. Presentations are described and measured as a deck of 60, or as a major “120+” kind of production. Whole MR organisations are structured around the production and delivery of these slide decks.

This is a tragedy. Technology has led us to focus more on presenting greater volumes of supporting statistical evidence rather than the quality of insight delivered. With the amount of data increasing exponentially the problem is only getting worse.

Volume is not the only issue. The typical insights we deliver as market researchers in 2014 are, surely, deeper and more complex than the insights delivered 20 or 30 years ago. I remember joining a very good market research firm in the 1990s and in the bowels of the filing room I discovered a set of political polling reports from the 1970s. The charts were rudimentary and hand drawn. The reports were very basic. There was no segmentation work, nor any kind of underlying driver analysis: there was nothing except simple descriptive statistics.

Today statistical analysis may be quite advanced and require some explanation in order for clients to understand how we have reached our conclusions. Researchers may also be dealing with several streams of data, including sales data, consumer survey data and verbatim feedback collected by the client’s own call centre. These various rivers of information may be compiled into one particularly rich report that goes beyond descriptive statistics and into the world of strategic thinking or what-if modelling.

At this point our reports may get bogged down not just in absolute volume but growing complexity as well. The solution to this problem is surely not “the same, but more of it.” We require a step-change in our reporting style, and I’m not alone in arguing that we need to shift from evidence-based reporting toward a story-telling emphasis.

My own uncle first alerted me to this problem back in the 1990s when he was an engineer in charge of major hydro projects worldwide. Montreal-based Uncle Rod told me a true story about how he had received an urgent phone call from Hugo Chavez, then president of Venezuela. The President wanted help to decipher a huge report about where to build a major hydro dam. The report had been put together by acknowledged experts in hydro construction and civil engineering. They considered the financial, engineering, geotechnical and social costs attached to each option. In short, the report, which was hundreds of pages long, set out the upsides and downsides of two competing locations. My uncle explained to the President that the authors of the report had practically written the book on these kinds of complex decisions. “That’s the problem!” exclaimed Chavez. “They wrote a book. All I want is the answer!”

My uncle told me the story to impart two lessons. First, he wanted to show me that even with billion-dollar decisions such as hydro projects – and the Venezuelan project is one of the 10 biggest in the world – one can get too bogged down in decimal points. To paraphrase those hundreds of pages of expertise, the choice between Location A and Location B was about 50-50. In the end, the experts should have had the courage to put it in those simple terms. The second point was that the report was too big and too technical for the audience. Hugo Chavez was no fool, quite the contrary, but neither was he a qualified engineer. As he said, all he wanted was to make a decision.

Market researchers think long and hard about the engagement level of respondents to the surveys we conduct. We are fully aware in questionnaire construction that we must keep things simple, brief, easily understood and engaging. Yet many of us fail to think of our reporting in the same terms. Why do we need to show page after page of pie charts? What is the benefit of making a deck 90 slides long? What processes do we implement to boil down all our information into one easily understood story that passes the Hugo Chavez test?

Here is where storytelling technique becomes a useful tool in the armoury of the professional market researcher. Many organisations instil presentation skills by giving younger researchers practice internally, and then in front of clients, in the process of sharing decks of PowerPoint slides – but this training covers only half the story. We get very good at presenting, but the stories we present are underdeveloped, or dull and overcomplicated.

Yet stories are an elegant solution to the problem of too much information. Humans are wired to process stories and understand them. Stories act as a kind of cognitive coathanger on which we can drape emotions, characters, and the sense of actions and consequences that is the hallmark of human drama.

Even a four-year-old can hear the story of Little Red Riding Hood and gasp in the knowledge that Grandma’s house is now occupied by a wolf. In doing so that four-year-old is handling irony, and processing a moral universe that is in fact quite complicated. I doubt if a deck of thirty PowerPoint slides showing pie charts (and various KPIs) of right and wrong could impart the same level of wisdom. Aesop’s fables are another example of simple stories imparting rich life lessons.

And get this. A pre-schooler may not have the mathematical skills to interpret statistical charts, but even at age five they have the intellectual horsepower to comprehend the complexities and film grammar of a two-hour movie. The storytelling techniques of moviemaking can, in fact, package up the rich and complicated story that comes out of our market research work.

So what are the basics of filmmaking? What storytelling techniques do script writers and directors and film editors use to keep us engaged for 15 gripping weeks of a TV series such as Breaking Bad?

My own career, as it turns out, was blessed by the fact that I spent eight years in TV drama scripting. I was a script editor and writer for a host of shows, predominantly soaps and cop dramas. This early career was entertaining and made for great dinner-party conversation, though to be honest, by the time I quit television in my early 30s I felt as if the experience had taken me down a professional cul-de-sac.

Not so, as it turned out. Over those eight years I was immersed in the world of storytelling, and never realised what a universal skill-set this was – at least not until recently. So here’s my list of ten techniques that are useful for market research storytelling.

1. Include some back story. Before you launch into the main thrust of the report, it helps to recapture why the research was conducted in the first place. In a recent report for a bank I recounted how, during the observational research project, we had witnessed a customer who attempted to open an account but failed in their quest. It was a minor drama compared to the bigger questions we were going to explore in the report, but the incident illustrated how even small and incidental details contributed to a failure by the bank. For the sake of two minutes, the bank forfeited the lifetime value of a customer. So I framed the report in terms of this incident. My subtitle for the report was: The two minutes that cost $50,000. That little back story framed the rest of the discussion: it set the theme.

2. Develop good characters. Whether qualitative or quantitative, professional research prides itself on keeping respondents anonymous. For the sake of privacy this anonymity is a good thing, but it makes for lousy storytelling. This is why I love verbatim questions in my questionnaires. Without naming names, I can refer to the lady who complained about the coffee. Without divulging identifying details, I can refer to the grumpy old guy who just wouldn’t be pleased. In scriptwriting, good characterisation does not come out of demographic descriptions; it comes out of the decision-making of the characters. The Denzel Washington character in the train movie Unstoppable can be described demographically, but what makes him interesting and trustworthy are the decisions he makes along the way. The same applies to our data: here is the lady who is prepared to pay a premium price! Over there, the customer who yearns for the old-style products. By introducing a few of these characters into our narrative we can explain later results quite simply. Instead of pointing to slide after slide of NPS scores, we may simply conclude that the new strategy got the thumbs down from Mr Grumpy. Everybody in the room gets it.

3. Find suitable metaphors. Sometimes very complicated things can be explained with a good, simple illustration. When asked to explain a factor analysis, I ask the audience to picture a new kitchen device called the un-blender. Where a blender turns diverse ingredients into grey statistical soup, an un-blender starts off with grey soup and, after 30 seconds, reveals the underlying ingredients: the factors that made up the soup. So far my layman’s explanation of factor analysis has received warm reviews from all my clients including, uh oh, two PhDs in statistics. Far better the metaphor that gives the gist than the full technical explanation.
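For the technically curious, here is the un-blender in code: a factor analysis recovering two hidden “ingredients” from six correlated survey items. The data is synthetic and the item labels are invented, purely to illustrate the metaphor.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n = 500
# Two hidden "ingredients" (latent factors).
service, value = rng.normal(size=(2, n))

# Six observed ratings, each a noisy blend of one hidden factor.
items = np.column_stack([
    service + 0.3 * rng.normal(size=n),   # "staff were helpful"
    service + 0.3 * rng.normal(size=n),   # "queries answered quickly"
    service + 0.3 * rng.normal(size=n),   # "felt looked after"
    value   + 0.3 * rng.normal(size=n),   # "prices are fair"
    value   + 0.3 * rng.normal(size=n),   # "good value for money"
    value   + 0.3 * rng.normal(size=n),   # "worth recommending"
])

# The un-blender: recover the underlying factors from the soup.
fa = FactorAnalysis(n_components=2).fit(items)
print(np.round(fa.components_, 2))  # loadings: rows ~ recovered ingredients
```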

4. Structure the story very carefully. One of the biggest challenges in film writing is to find a structure that produces a compelling tale. I quite like movies where two or three different strands either click together or collide just before the end. When you have 45 minutes to convey the rich discovery and insights of a research project, you have the same time available to you as the writers of, say, an episode of CSI or Law And Order. In other words, you have room to introduce a couple of twists and turns as you piece together the bloodstains, fingerprints and ballistic details required to reach a conclusion. Clients don’t mind if, in the course of that presentation, you show them a little bit about your forensic techniques. Your audience doesn’t mind seeing some of the story behind the main story. When we put together cop shows, the question of whodunit was always less interesting than the question of how the cops find the guilty party. Market research follows the same narrative arc.

5. Involve the audience. The audience of a drama can at any one moment be either up with the play, ahead of the play, or behind the action. A skilled storyteller varies the pace so that sometimes the audience knows what is coming around the bend before our main character does. “Don’t go down the alley!” we yell at the hero. “There’s a bad guy waiting for you with a gun!” We love those moments, at least in moderation. If we get ahead of the protagonist too often, however, we begin to wonder why we are bothering to watch such a klutz.

On the flip side, sometimes the hero does things and we don’t understand what he or she is up to. All will be revealed later! In TV storylining we used to refer to these as mysterioso moments. A few of these add spice to the drama, and they allow the audience to revel in the intelligence of the protagonist. At other times within the movie, we are simply up with the play, neither ahead of the action nor behind the protagonist.

Alfred Hitchcock was a master of control when it came to these three audience statuses. Within a heartbeat he could take us from being ahead of the action to being 12 steps behind. Just when we think we’ve figured everything out, we realise we are embroiled in something much bigger and more complicated! Now, I’m not suggesting that market researchers go for that effect too often. But there is a lot to be said for having a kind of rhythm between the lean-back-and-listen elements of the presentation and the lean-forward moments in which the audience is challenged. Rhetorical questions, for example, signal a change in audience status.

6. Remind the audience of what’s at stake. Don’t forget we are in the business of providing the information required for our clients to make important and sometimes very expensive decisions. If we work in FMCG, then perhaps we need to remind the client that in this business some 80% of new product launches fail. The stakes are high! One reason I used the story of the lost bank customer was that I wanted to reinforce that our modest project was not about measuring customer resources at the bank, but about mitigating the risk of failure. I wanted that top of mind, so that even the prosaic bar charts I had to present were contextualised by what was at stake.

7. Seek storytelling variety. When I worked on a cop show in Australia we used to crank out two episodes every single week. As a group of storyliners we recognised that cops only do a certain number of things. They examine crime scenes, they grill the bad guys, they chase suspects, they observe from the anonymous grey van parked over the road. We boiled this down to eight modes of behaviour, and we made sure that in any given episode of the cop show each mode was used no more than once. In other words, we didn’t have a car chase followed by a foot chase, or an interrogation scene followed later by another one. In marketing research reporting we also have a shortlist of reporting modes. This is why I get critical when I see a deck of slides that features a whole stream of descriptive charts, followed by yet another stream of descriptive charts. It is useful to break down our reports into chapters, and for each chapter to be fundamentally quite different from those before it. So after introducing what’s at stake in chapter 1, I might present a series of descriptive slides in chapter 2 before searching for strategies using different techniques in chapters 3 and 4. This keeps the storytelling interesting.

8. Don’t be afraid to develop a theme of a deeper nature. Very often in market research we get to study a subject, and in the course of that study we ruminate on deeper material. We may be tracking the performance of the brand, but at the same time we are witnessing a shift in the zeitgeist. Development of such scenes in a movie or TV program adds a lot of richness to the storytelling. We are not just witnessing a story about a person; we are reflecting on the human condition. In my own presentations I refer to these parts as the “I’ve been thinking” zone, and it may consist of a single photo, or a discussion of a relevant and fascinating book that I have been reading in conjunction with the research study. Sometimes these pauses in the narrative spark a much greater discussion than might be expected. Just as a film resonates with the public when it captures the mood of the audience, so a thematic discussion may capture and resonate with the mood of the client.

9. Action is better than talk. Charts are stepping stones in a forward-moving narrative. Your headlines ought to be spiced up with verbs, and your summary findings should lead to consequences. You’re building a case and working toward a verdict.

10. Finally, good storytelling always has authenticity. What made that terrific, nail-biting movie Captain Phillips so genuinely exciting was the absolute authenticity of the Somali pirates. It was brilliant casting. The director also allowed a degree of improvisation from his cast, so that the scenes were never over-polished or slick. Those actors didn’t look like they were acting; they were the real thing: lean and desperate. In market research reports our statistics and charts and evidence and conclusions must all reek of similar authenticity. I quite frequently add an anecdote in my presentation about my own journey of doubt during the project, or about the difficulties of fieldwork. I will mention the fact that one respondent, a sleepless parent perhaps, completed the online survey at three o’clock in the morning, or that another respondent wrote 1,200 words in response to a question about why they would recommend a product or service. I want the client to breathe in and smell the reality of our work.

Storytelling places on us one demand that challenges many corporate style guides. Your firm may specify a certain tone, look and feel for its reports. I find storytelling is by nature more personal than that. A good story requires the emotional investment of the storyteller. In an age of more data than conventional reporting systems can deal with, storytelling demands that you lay your heart bare in the telling and sharing of the tale. You up for that?

Choices depend on different rules

Last week I invested US$12,000 in new software. That’s a ridiculous amount – frankly, about three times what my car is worth. Strangely, the decision to invest in this software was very simple. It was fuelled by an ideal: am I interested in doing what I do to the highest standard that I can? The answer is an idealistic yes. Idealistic for sure. My financial adviser and good partner of 34 years just shook her head.

The software, by the way, included Sawtooth’s well-known Max-Diff module as well as their pricey but promising MBC module, which takes choice modelling to a whole new level. In terms of learning new technology, MBC promises to stretch me to the limit. It is neither intuitive nor pretty. But more about that in a later blog.

As soon as I had Max-Diff out of the box I used it on a client survey as part of a conjoint exercise. I’ve long been a fan of conjoint because it emulates the realistic decision situations people face in real life. We learn not just what they choose, but the architecture of their decision-making as well.

In this case I could compare the two approaches. In effect, the conjoint choice modelling and the Max-Diff exercise ran in parallel, testing more or less the same variables, but using different approaches.

With conjoint, the respondent chooses (online) from a small array of cards, each with a different combination of attributes and features. They select the one they find best.

With Max-Diff, the respondent chooses their favourite combination, as well as the least favourite.
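To make the task format concrete, here is a toy Max-Diff screen generator. Real designs – Sawtooth’s included – carefully balance how often each item appears and co-appears; this sketch just samples at random, and the attribute list is invented.

```python
import random

random.seed(0)
items = ["low price", "trusted brand", "organic ingredients",
         "low sugar", "large pack", "recyclable packaging"]

def maxdiff_screens(items, per_screen=4, n_screens=5):
    """Generate screens of items; each asks for a best and a worst pick."""
    return [random.sample(items, per_screen) for _ in range(n_screens)]

for i, screen in enumerate(maxdiff_screens(items), start=1):
    print(f"Screen {i}: which matters MOST to you, and which LEAST?")
    for item in screen:
        print("   -", item)
```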

Were the results similar? Well, yes – they converged on the same truths, generally, but the results also revealed telling differences. One of the least important attributes, according to conjoint, proved to be one of the most important according to Max-Diff. How could this be?

The lesson went back to some wonderful insights I learned from Alistair Gordon when we were working on the subject of heuristics – those rules of thumb that people use to evaluate complicated choices.

Most of us, when asked “how do people make choices?”, figure that we mentally prepare a list based on the variables and set about finding the best option: in fact, conjoint is predicated on exactly this process.

But Alistair introduced me to a fabulous concept: the veto rule. Put simply, if I were choosing between one brand of breakfast cereal and another, I may have a number of variables that contribute to optimality (flavour, naturalness, organic-ness), and no doubt my brain has worked up a complex algorithm that balances these things against the presence of raisins, puffed wheat, stone-ground oats and dried apricots. Good luck trying to model that!

But I also have a few simple veto rules. If a competing breakfast cereal contains more than x% of sugar, then bingo – I drop it from the list of competitors.
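Here is a hedged sketch of the two decision rules side by side: the compensatory, additive-utility chooser that conjoint presumes, versus a chooser that applies the sugar veto first. The cereals, weights and threshold are all invented for illustration.

```python
# Invented cereals: two "utility" attributes plus a sugar level.
cereals = {
    "BranBest":  {"flavour": 7,  "naturalness": 8, "sugar_pct": 22},
    "OatDream":  {"flavour": 6,  "naturalness": 9, "sugar_pct": 8},
    "ChocoPops": {"flavour": 10, "naturalness": 3, "sugar_pct": 30},
}
weights = {"flavour": 0.7, "naturalness": 0.3}

def utility(cereal):
    """Compensatory rule: strengths can offset weaknesses."""
    return sum(weights[k] * cereal[k] for k in weights)

def veto_then_choose(options, max_sugar=15):
    """Non-compensatory rule: screen out anything over the sugar
    threshold first, then apply the compensatory rule to survivors."""
    survivors = {name: c for name, c in options.items()
                 if c["sugar_pct"] <= max_sugar}
    return max(survivors, key=lambda name: utility(survivors[name]))

print("Compensatory winner:",
      max(cereals, key=lambda name: utility(cereals[name])))      # ChocoPops
print("Winner after the sugar veto:", veto_then_choose(cereals))  # OatDream
```

The same attribute (sugar content) barely moves the additive score, yet decides the outcome once it acts as a veto – which is exactly why an attribute can look unimportant in conjoint and critical in Max-Diff.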

This explained why some variables scored as important with Max-Diff, but scarcely registered with conjoint. Among the variables were a few conditions that might be described as veto conditions. Those who use Max-Diff alone seldom discuss these different effects.

So which approach – conjoint or Max-Diff – should one use? As ever, I think one should try both. My favourite research metaphor is about the blind men and the elephant, each discovering a different aspect of the animal, and each giving a different version of events. They are all correct, even if they have different answers. Together, they converge on the same answer: the whole elephant.

I do like the way research tools can give us these honest, statistically reliable, yet conflicting answers. They give us pause for thought, and they highlight the fact that numbers are merely numbers: quite useless without confident interpretation.