Category Archives: This Just In

Latest thoughts

A solution to the problems with ranking questions. My wine choice system revealed.

How did I choose this wine and win the compliments of the wine-waiter? My sad secret is revealed, and shows why ranking questions don't work.

I have never liked ranking questions. For a start, they seem to place an unusually high cognitive burden upon the respondents. After a lifetime of being asked to score things out of five, and to work efficiently through batteries of rating-style questions, ranking questions suddenly emerge like a speed bump. Any efficient respondent hits the thing going far too fast, and their suspension – or tolerance for the questionnaire – bottoms out. Clunk!

It isn’t just because there is confusion in the typical respondent about the difference between rating and ranking. That’s one cause of the problems. (Heaven knows, another one is the “meaning” of the numbers now that a “5” means fifth-ranked instead of “excellent.”)

The second cause of the problem is that we often have too many items to rank. I call this the wine list effect. If you go into a restaurant and the waiter asks you whether you have selected a wine yet, you’ll know the feeling I generally experience. There’s a list of 30 white wines. You don’t know any of them, but you know three things.

  1. You prefer Chardonnay over Riesling.
  2. Anything over $60 a bottle is ridiculous.
  3. On the other hand, selecting the cheapest wine will simply make you look mean. Avoid that one.

My own wine-choosing rules are efficient, and provide a good cognitive shortcut, as all heuristics are meant to do. The rules quickly eliminate a dozen of the 30 wines on offer. The remaining wines I am, of course, ill-equipped to choose from – I am indifferent to them. So at this point I quickly point to one of the remaining 18 candidates. It’s probably a pretty good drop, and if I am quick enough in my selection I maintain the appearance of being knowledgeable.
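For what it's worth, the heuristic above is simple enough to write down. Here is a toy Python sketch of the three elimination rules; the wine list itself (names, varieties, prices) is invented for illustration:

```python
# Toy sketch of the wine-list heuristic: drop disliked varieties,
# drop anything over $60, then avoid the cheapest survivor.
# All wines below are hypothetical.

wines = [
    {"name": "Wine A", "variety": "Riesling",   "price": 35},
    {"name": "Wine B", "variety": "Chardonnay", "price": 75},
    {"name": "Wine C", "variety": "Chardonnay", "price": 18},
    {"name": "Wine D", "variety": "Chardonnay", "price": 42},
]

def shortlist(wine_list):
    # Rule 1: prefer Chardonnay over Riesling; Rule 2: cap at $60.
    kept = [w for w in wine_list
            if w["variety"] != "Riesling" and w["price"] <= 60]
    # Rule 3: the cheapest remaining wine makes you look mean - avoid it.
    if len(kept) > 1:
        cheapest = min(kept, key=lambda w: w["price"])
        kept = [w for w in kept if w is not cheapest]
    return kept
```

After the rules fire, everything left is a matter of indifference, which is exactly the point: the heuristic narrows the field, then the final pick is arbitrary.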

“Good choice Sir,” says the waiter. (And with those words he has just earned his tip for the evening!) Everybody wins.

Well, that’s my strategy revealed. That’s how most people rank most things in life. As soon as the shortlist grows longer than five or six items, we start applying a few blanket rules or heuristics. Our brains are not actually engineered to conduct a fully rational ranking process – one by one – towards seven or more competing products or services or attributes.

So what kind of data do you get when you ask a respondent in your questionnaire to rank nine service attributes, or 11 brands, or a dozen product configurations from best down to worst?

It is just too complicated!

And supposing you had 1000 respondents diligently ranking these items from one down to 12 (and we are assuming that they each fully understand that one means best, and 12 means worst), what sort of data do we get? Is the gap between the first and second attribute the same size as the gap between the second and third attribute, and so on? Can we tell whether the top three attributes are clear leaders? Or have they nudged just ahead of the pack?

Likewise, do we know whether an attribute has failed to impress – or whether in fact it has actively turned off the respondents?

Ranking questions fail to explain or illuminate what’s really going on in the consumer mind. You don’t know whether I rejected the Champagne because I don’t like Champagne, or because it was way over $60 per bottle, or whether I didn’t reject it at all, but rather chose something even better.

Is there an answer to this problem? One elegant solution – it isn’t perfect – is Max-Diff. In a typical exercise you may have a dozen variables which you wish to see ranked meaningfully.

Max-Diff breaks the cognitive process down into maybe seven or eight chunks. Each time, the respondent is presented with four variables and asked to choose the most important, as well as the least important. They then repeat the exercise, but with different combinations of variables – they may do this seven or eight times. This gets us over the cognitive problems associated with ranking questions.

It also gets us over the analytical problems. Here, and I’m thinking of the Sawtooth Max-Diff module, each variable in question is ultimately scored with an index that clearly shows the relative preference or rejection. I can see the degree to which each was chosen as “best” or as “worst” – or I can see the net score (best minus worst).
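To illustrate the idea, here is a toy counts-based sketch in Python. (This is a simplification: Sawtooth's actual estimation uses more sophisticated statistical modelling, but simple best-minus-worst counts convey the logic. The tasks and item names below are invented.)

```python
from collections import defaultdict

# Hypothetical Max-Diff responses: each task shows four items and the
# respondent picks one "best" and one "worst".
tasks = [
    {"shown": ["Price", "Quality", "Speed", "Support"],
     "best": "Quality", "worst": "Speed"},
    {"shown": ["Price", "Quality", "Brand", "Speed"],
     "best": "Quality", "worst": "Brand"},
    {"shown": ["Price", "Support", "Brand", "Speed"],
     "best": "Price", "worst": "Speed"},
]

def counts_scores(tasks):
    best = defaultdict(int)
    worst = defaultdict(int)
    shown = defaultdict(int)
    for t in tasks:
        for item in t["shown"]:
            shown[item] += 1
        best[t["best"]] += 1
        worst[t["worst"]] += 1
    # Net score per item: (times chosen best - times chosen worst),
    # normalised by how often the item was shown.
    return {i: (best[i] - worst[i]) / shown[i] for i in shown}
```

A score near +1 means an item was picked as best almost every time it appeared; near -1 means it was consistently rejected; near zero means it sat in the indifferent middle of the pack.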

Thus we can see that Variable 1 is easily preferred – and by a big margin – over the second-ranked variable. From the data we might understand that six of the 12 variables we were testing cluster together – neither particularly liked, nor disliked. In other words we get to see something of the thinking that goes on behind the ranking scores. We understand better the nature of the consumer decisions.

If you applied this software to my various wine list selections over the past 10 years you would understand that Duncan Stuart – wine-wise – is driven by a rejection of high prices, and a rejection of certain varieties. By contrast, the sad facts would reveal, this writer is driven neither by supreme wine knowledge, nor by a fail-safe palate. There: Max-Diff has just revealed my dodgy wine selection secrets.

Max-Diff is an easy module to use, it works well in online surveys, and the analysis is blindingly simple. As a research tool it fulfils the basic need we have to understand human choice-making. It understands that people are not all that mathematical in their approach, but are intuitive and sometimes less logical than the wine waiter and assorted guests might ever suspect.

Could an old-fashioned Italian grocery store be the future of grocery buying?

Montepulciano Italy - this micro store gave us foodie moments that don't exist in the mass marketing model of the supermarkets.

Sometimes, during our recent holiday in northern Italy, we needed the detective skills of a Sherlock Holmes to find the local grocery store. Having been brought up to look for a vast car park and a well-lit supermarket as the signatures of fresh food buying, it was disconcerting to live in a series of villages – in Tuscany and at Lake Garda – that appeared to be closed up for the season. Where do locals go for their daily bread? The answer, as Holmes would have told us, is ‘alimentari’ – elementary, really – the Italian word for the corner food store. Go past one of these during siesta hour and all you’d see are shuttered doors.

These little shops, some of them little bigger than a hole-in-the-wall liquor joint, turned out to be more like Aladdin’s cave when it came to tasty produce. We were never fluent in Italian, but the store owners – once it was our turn to be served – gave us their full, undivided attention. When we asked about the cheese, for example, they asked us in turn: local?

They meant the cheese. You could generally choose from a range of half a dozen non-local varieties, or from half a dozen cheeses that represented the locality. The same was true of prosciutto, and the bread of course was locally baked that day. Each day, apart from our travellers’ budget breakfast of yoghurt and grapes, we dined on simple, artisanal, and fully delicious foods. I ate a lot of pasta, a ton of cheese, a truckload of bread – and over the five weeks I actually lost 5 kg. The diet had no added sugars, no added gluten, and no other bogus flavourings.

In fact it struck me one day that far from experiencing a traditional European grocery-buying process, I was in fact getting a taste of the future of FMCG.

What the supermarket model does is actually slow FMCG down. Instead of being delivered daily, so-called fresh bread is delivered every two or three days – and to compensate, and to retain that fresh fluffy feel, the bakers add gluten, sugar and various other nasties to simulate the fresh bread experience. Fail. The first thing we’ve chosen to do upon returning to New Zealand is to purchase one of those breadmakers, and to never again buy the crap we purchased in the past from the supermarkets. The fresh Italian bread was really that life-changingly good!

But the same comment could be applied also to the local-ness of the products. We travelled extensively in northern Italy, and in each village the specialty foods – for example those cheeses – differed from region to region. Very often the manager of the food store knew the cheesemaker. Likewise he or she knew the bakers who delivered product fresh each morning.

This was the polar opposite of the New Zealand supermarket experience. Consider well-known New Zealand brands which are in fact grown, processed and packaged offshore. Consider health products – fish oil for example – which say fresh New Zealand on the pack, but originate, months previous, in factories based in South America. How can consumers enjoy the rich variety that comes from different regions, and from different seasons, when the supermarkets seek – for whatever reason – to homogenise their choices? In supermarket land, Braeburn apples grow 365 days a year. Strawberries are never out of season, but by the same token never quite in season either. The bread, as I said, is a simulation of the real thing.

Supermarkets represent the pinnacle of the production, distribution and marketing model circa 1959. They were an amazing thing 60 years ago, but now, in an age of RFID, social media, customised communications channels and so on – not to mention a pickier, more health-conscious society – perhaps the old-fashioned alimentari is closer to the future than the mass marketing model created during the age of the Jetsons. Support local, I say. And start enjoying your food!


If there’s one piece of feedback I get most often from respondents to questionnaires I’ve written, it is that I don’t always give them Don’t Know as an option. I think there are times when it is useful to exclude this option, and times when Don’t Know should be mandatory in a questionnaire.

Why would I exclude Don’t Know? I do so when the question is of a lower order – for example where I am trying to get a rough measure on a minor issue. From my reading of the literature, the exclusion of Don’t Know will prod a few more people to express which way they lean – and frankly the more who do so, the easier it is to analyse the data. But this comes at a cost.

Research studies have suggested that there is some reluctance by respondents, at least in some surveys, to admit that they don’t know which way they stand. To include Don’t Know as an option makes it explicitly acceptable to tick that box if that’s the way the respondent feels. What we may miss is an indication of which way the respondent might be leaning – and US political polls very frequently ask people who Don’t Know a subsequent question: yes, but which way do you lean? And a high proportion of those who said they don’t know, then indicate that actually they lean this way or the other.

So as I see it, there are competing arguments about the inclusion of Don’t Know.

But there are circumstances in which I always try to include a Don’t Know option. These are circumstances where I am trying to get an accurate measure not of general attitudes, but of projected behaviours. Which brand will you buy? Which way would you vote?

I have a rule of thumb that for any given question around 10 to 15% of respondents can be counted on to be unsure which box to tick. If that’s true, then we have an effect that rivals or even exceeds the variance that may be caused by the survey’s margin of error.  In other words, if you don’t provide a Don’t Know option when it matters, you could easily be invalidating your own conclusions. That lack of opinion may be very powerful stuff.
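That comparison is easy to sanity-check. Under simple random sampling, the 95% margin of error for a proportion is roughly 1.96 × √(p(1−p)/n). A quick Python sketch, assuming a survey of 1,000 respondents and the worst case p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # 95% margin of error for a proportion, assuming simple random sampling.
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1000)  # about 0.031, i.e. roughly plus or minus 3 points
```

So the sampling error on 1,000 respondents is around ±3 points, while the unsure group runs at 10 to 15% – three to five times larger, which is the point being made above.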


The Adam Sandler effect

I was thinking a little more about the different rules we apply when we make our customer choices, and how these nuances may be lost if we ask research questions in the wrong way.

A really simple example, and one I’ve mentioned before, illustrates what I call the Adam Sandler effect.

What happens is this: Five of you decide to go to the movies this Saturday. So far so good.

But which movie? You and your group love movies, and surely you have a collective favourite to see. So you start discussing what’s on and four of you agree the new Clooney film is the one you want.

“Ah… but I’ve already seen that film,” says the fifth member of your clique.

The veto rule.

Okay what’s our next best choice? And so it goes. Whatever you choose, somebody has either already seen it, or has read a tepid review.

What you have here is a collision between two competing sets of rules. You set out to see your favourite film, and instead you and your group end up seeing the “least objectionable” film that, actually, nobody has wanted to see. This is where Adam Sandler, I swear, has earned his place as one of Hollywood’s top three grossing actors of the past 10 years.

Apart from The Wedding Singer, which was a great little film, the rest have been an appalling bunch of half-witted comedies. Little Nicky, anyone?

It doesn’t matter. Every weekend at the movies there is a blockbuster or three – and then there is Adam Sandler, lurking there: his movies ready to pick up the fallout from your friends’ well-meaning decision process.

Now for researchers this has serious implications. If we only ask about what things people want, then we may end up with a theoretical ideal – but our research will never pick up the kind of goofy, half-assed, slacker productions that actually gross the big dollars. In our questionnaires we need to think about how we might pick up the Adam Sandler effect. Good luck to the guy. He has the knack of reading the Saturday night crowd much more accurately than most of our surveys could ever hope to achieve. We should learn from that.

  • Choices depend on positives as well as vetoes
  • When two or more people make a decision, the outcome depends more strongly on vetoes than on positives
  • There is always a market for things that are least objectionable.
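The three observations above can be sketched as a toy model: each member's vetoes knock films off the list, and the group settles for the best-rated survivor. All names and ratings below are invented for illustration:

```python
# Toy model of the "least objectionable" group choice: a film is vetoed
# if any member has already seen it or read a tepid review; the group
# then picks the best-rated film among the survivors.

films = {"Clooney drama": 9, "Indie thriller": 8, "Sandler comedy": 5}

vetoes = {
    "Ana":  {"Clooney drama"},    # already seen it
    "Ben":  {"Indie thriller"},   # read a tepid review
    "Caro": set(),
}

def group_pick(films, vetoes):
    # A single veto from any member eliminates a film outright.
    all_vetoed = set().union(*vetoes.values())
    survivors = {f: r for f, r in films.items() if f not in all_vetoed}
    # Nobody's favourite may survive; settle for the best of what's left.
    return max(survivors, key=survivors.get) if survivors else None
```

Here the group's two favourite films each attract one veto, so the lowest-rated film wins by default, which is the Adam Sandler effect in miniature.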



A solution to the increasing volume and complexity of research reporting is to increase our story-telling skills. Here are 10 useful guidelines.

Storytelling has become one of the hot topics in business circles in the last couple of years. One reason for this is the sheer explosion of the amount of information that must be processed by organisations and communicated to their various stakeholders. By some measures the amount of data in this world is growing by something like 45 per cent per annum. So how do business people communicate all this information?

Market researchers, before the age of the PC and the datashow projector, used to communicate by two means only. One was to physically get up, shuffle papers and deliver what amounted to a lecture to the client. The second was to present a written report. We were famous for them, and even until recently market research firms were criticised for their delivery of doorstopper reports.
No wonder so much of our work ended up populating the bottom drawer of the client’s desk. It was like this from the early decades of the 20th century, when pioneer Charles Parlin would submit reports hundreds of pages long, right through to the 1980s, when the advent of the PC and PowerPoint began to change the way we told our stories to our clients.

At first the use of visuals and a PowerPoint medium was an exciting new thing for market researchers. The medium suits our use of statistical charts, though most senior professionals will remember the heady days when assistants would come charging into their office saying ‘look at this!’ and show how they’d used clipart to help deliver the visual metaphor to whatever was going on inside the data. Fortunately the fad of adding whoosh sound effects passed quickly.

But did it lead to better story telling? By and large the answer is no. Over time market research slide decks have turned into gargantuan productions showing slide after slide of pies and bars. In short this process has commoditised a lot of market research. Many senior researchers may deny this but their staff gauge the success of their productive day by the number of slides they have produced. Presentations are described and measured by being a deck of 60 or being a major “120+” kind of presentation. Whole MR organisations are structured around the production and delivery of these slide decks.

This is a tragedy. Technology has led us to focus more on presenting greater volumes of supporting statistical evidence rather than the quality of insight delivered. With the amount of data increasing exponentially the problem is only getting worse.

Volume is not the only issue. The typical insights we deliver as market researchers in 2014 are, surely, deeper and more complex than the insights delivered 20 or 30 years ago. I remember joining a very good market research firm in the 1990s and in the bowels of the filing room I discovered a set of political polling reports from the 1970s. The charts were rudimentary and hand drawn. The reports were very basic. There was no segmentation work, nor any kind of underlying driver analysis: there was nothing except simple descriptive statistics.

Today statistical analysis may be quite advanced and require some explanation in order for clients to understand how we have reached our conclusions. Researchers may also be dealing with several streams of data, including sales data, consumer survey data and verbatim feedback collected by the client’s own call centre. These various rivers of information may be compiled into one particularly rich report that goes beyond descriptive statistics and into the world of strategic thinking or what-if modelling.

At this point our reports may get bogged down not just in absolute volume but growing complexity as well. The solution to this problem is surely not “the same, but more of it.” We require a step-change in our reporting style, and I’m not alone in arguing that we need to shift from evidence-based reporting toward a story-telling emphasis.

My own uncle first alerted me to this problem back in the 1990s when he was an engineer in charge of major hydro projects worldwide. Montreal-based Uncle Rod told me a true story about how he had received an urgent phone call from Hugo Chavez, then president of Venezuela. The President wanted help to decipher a huge report about where to build a major hydro dam. The report had been put together by acknowledged experts in hydro construction and civil engineering. They considered the financial, engineering, geo-technical and social costs attached to each option. In short, the report, which was hundreds of pages long, set out the upsides and downsides of two competing locations. My uncle explained to the President that the authors of the report had practically written the book on these kinds of complex decisions. “That’s the problem!” exclaimed Chavez, “they wrote a book. All I want is the answer!”

My uncle told me the story to impart two lessons. First he wanted to show me that even with billion-dollar decisions such as hydro projects, and the Venezuelan project is one of the 10 biggest in the world, one can get too bogged down in decimal points. To paraphrase those hundreds of pages of expertise, the choice between Location A and Location B was about 50-50. In the end, the experts should have had the courage to put it in those simple terms. The second point was that the report was too big and too technical for the audience. Hugo Chavez was no fool, quite the contrary, but neither was he a qualified engineer. As he said, all he wanted was to make a decision.

Market researchers think long and hard about the engagement level of respondents to the surveys we conduct. We are fully aware in questionnaire construction that we must keep things simple, brief, easily understood as well as engaging. Yet, at the same time, many of us fail to think of our reporting along these same terms. Why do we need to show page after page of pie charts? What is the benefit of making a deck 90 slides long? What processes do we implement to boil down all our information into one easily understood story that passes the Hugo Chavez test?

Here is where storytelling technique becomes a useful tool in the armoury of the professional market researcher. Many organisations instil presentation skills by giving younger researchers practice internally and then in front of clients in the process of sharing decks of PowerPoint slides – but this training process only covers half the story. We get very good at presenting, but the stories we present are underdeveloped or dull and overcomplicated.

Yet stories are an elegant solution to the problem of too much information. Humans are wired to process stories and understand them. Stories act as a kind of cognitive coathanger on which we can drape emotions, characters as well as the sense of actions and consequences that are the hallmark of human dramas.

Even a four-year-old can hear the story of Little Red Riding Hood and gasp in the knowledge that Grandma’s house is now occupied by a wolf. In doing so that four-year-old is handling irony, and processing a moral universe that is in fact quite complicated. I doubt if a deck of thirty PowerPoint slides showing pie charts (and various KPIs) of right and wrong could impart the same level of wisdom. Aesop’s fables are another example of simple stories being able to impart rich life lessons.

And get this. A pre-schooler may not have the mathematical skills to interpret statistical charts, but even at age 5 they have the intellectual horsepower to comprehend the complexities and film grammar of a two-hour movie. The storytelling techniques of moviemaking should in fact package up the rich and complicated story that comes out of our market research work.

So what are the basics of filmmaking? What storytelling techniques do script writers and directors and film editors use to keep us engaged for 15 gripping weeks of a TV series such as Breaking Bad?

My own career as it turns out was blessed by the fact that I spent eight years in TV drama scripting. I was a script editor and writer for a host of shows, predominantly soaps and cop dramas. This early career was entertaining and made for great dinner party conversation, though to be honest, by the time I quit television in my early 30s I felt as if the experience had taken me down a professional cul-de-sac.

Not so, as it turned out. Over those eight years I was immersed in the world of storytelling and never realised what a universal skill-set this turned out to be, at least not until recently. So here’s my list of ten techniques that are useful for market research storytelling.

1. Include some back story. Before you launch into the main thrust of the report, it helps to recapture why the research was conducted in the first place. In a recent report for a bank I recounted how during the observational research project we had witnessed a customer who attempted to open an account, but failed in their quest. It was a minor drama compared to the bigger questions we were going to explore in the report, but the incident illustrated how even small and incidental details contributed to a failure by the bank. For the sake of two minutes the bank forfeited the lifetime value of their customer. So I framed the report in terms of this incident. My subtitle for the report was: The two minutes that cost $50,000. That little back story framed the rest of the discussion: it set the theme.

2. Develop good characters. Whether qualitative or quantitative, professional research prides itself on keeping respondents anonymous. For the sake of privacy this anonymity is a good thing, but it makes for lousy storytelling. This is why I love verbatim questions in my questionnaires. Without naming names I can refer to the lady who complained about the coffee. Without divulging identifying details I can refer to the grumpy old guy who just wouldn’t be pleased. In script writing good characterisation does not come out of demographic descriptions, it comes out of the decision-making by these characters. The Denzel Washington character in the train movie Unstoppable can be described demographically, but what makes him interesting and trustworthy are the decisions he makes along the way. The same in our data: here is the lady who is prepared to pay a premium price! Over there, the customer who yearns for the old-style products. By introducing a few of these characters into our narrative we can explain later results quite simply. Instead of pointing to slide after slide of NPS scores, we may simply conclude that the new strategy got the thumbs down from Mr Grumpy. Everybody in the room gets it.

3. Find suitable metaphors. Sometimes very complicated things can be explained by using a good simple illustration. When asked to explain a factor analysis, I ask the audience to picture a new kitchen device called the un-blender. Where a blender turns diverse ingredients into grey statistical soup, an un-blender starts off with grey soup and after 30 seconds reveals the underlying ingredients: the factors that made up the soup. So far my layman’s explanation of factor analysis has received warm reviews from all my clients including, uh oh, two PhDs in statistics. Far better the metaphor that gives the gist than the full technical explanation.

4. Structure the story very carefully. One of the biggest challenges in film writing is to find a structure that produces a compelling tale. I quite like movies where two or three different strands either click together or collide just before the end of the movie. When you have 45 minutes to convey the rich discovery and the insights of a research project, you have the same time available to you as the writers of, say, an episode of CSI or Law And Order. In other words you have room to introduce a couple of twists and turns as you piece together the bloodstains, fingerprints and ballistic details required to reach a conclusion. Clients don’t mind if in the course of that presentation you show them a little bit about your forensic techniques. Your audience doesn’t mind seeing some of the story behind the main story. When we put together cop shows, the question of whodunit was always less interesting than the question of how the cops find the guilty party. Market research follows the same narrative arc.

5. Involve the audience. The audience of the drama can at any one moment be either up with the play, ahead of the play or behind the action. A skilled storyteller varies the pace so that sometimes the audience knows what is coming around the bend before our main character does. “Don’t go down the alley!” we yell at the hero. “There’s a bad guy waiting for you with a gun!” We love those moments, at least in moderation. If we get ahead of the protagonist too often however we begin to wonder why we are bothering to watch such a klutz.

On the flip side, sometimes the hero does things and we don’t understand what he or she is up to. All will be revealed later! In TV storylining we used to refer to these as mysterioso moments. A few of these add spice to the drama, and they allow the audience to revel in the intelligence of the protagonist. At other times within the movie, we are simply up with the play, neither ahead of nor behind the protagonist.

Alfred Hitchcock was a master of control when it came to these three audience statuses. Within a heartbeat he could take us from being ahead of the action to being 12 steps behind. Just when we think we’ve figured everything out, we realise we are embroiled in something much bigger and more complicated! Now I’m not suggesting that market researchers go for that effect too often. But there is a lot to be said for having a kind of rhythm between the lean-back-and-listen elements of the presentation and the lean-forward moments in which the audience is challenged. Rhetorical questions, for example, signify a change in audience status.

6. Remind the audience of what’s at stake. Don’t forget we are in the business of providing the information required for our clients to make important and sometimes very expensive decisions. If we work in FMCG, then perhaps we need to remind the client that in this business around 80% of new product launches fail every year. The stakes are high! One reason I used the story of the lost bank customer was that I wanted to reinforce that our modest project was not about measuring customer resources at the bank, but about mitigating the risk of failure. I wanted that top of mind, so that even the prosaic bar charts I had to present were contextualised by what was at stake.

7. Seek storytelling variety. When I worked on a cop show in Australia we used to crank out two episodes every single week. As a group of storyliners we recognised that cops only do a certain number of things. They examine crime scenes, they grill the bad guys, they chase suspects, they observe from the anonymous grey van parked over the road. We boiled this down to eight modes of behaviour, and we made sure that in any given episode of the cop show each mode was used no more than once. In other words we didn’t have a car chase followed by a foot chase, or an interrogation scene followed later by another one. In marketing research reporting we also have a shortlist of reporting modes. This is why I get critical when I see a deck of slides that features a whole stream of descriptive charts, followed by yet another stream of descriptive charts. It is useful to break down our reports into chapters, and for each chapter to be fundamentally quite different from those previous. So after introducing what’s at stake in chapter 1, I might present a series of descriptive slides in chapter 2 before searching for strategies using different techniques in chapters 3 and 4. This keeps the storytelling interesting.

8. Don’t be afraid to develop a theme of a deeper nature. Very often in market research we get to study a subject but in the course of that study we ruminate on deeper material. We may be tracking the performance of the brand but at the same time we are witnessing a shift in the zeitgeist. Development of such scenes in the movie or TV program adds a lot of richness to the storytelling. We are not just witnessing a story about a person; we are reflecting on the human condition. In my own presentations I refer to these parts as the “I’ve been thinking” zone, and it may consist of a single photo, or a discussion about a relevant and fascinating book that I have been reading in conjunction with the research study. Sometimes these pauses in the narrative spark a much greater discussion than might be expected. Just as the film will resonate with the public because it seems to capture the mood of the audience, so a thematic discussion may capture and resonate with the mood of the client.

9. Action is better than talk. Charts are stepping stones in a forward moving narrative. Your headlines ought to be spiced up with verbs, your summary findings should lead to consequences. You’re building a case and working toward a verdict.

10. Finally, good storytelling always has an authenticity. What made that terrific, nail-biting movie Captain Phillips so genuinely exciting was the absolute authenticity of the Somali pirates. It was brilliant casting. The director also allowed a degree of improvisation from his cast, so that the scenes were never over-polished or slick. Those actors didn’t look like they were acting; they were the real thing: lean and desperate. In market research reports our statistics and charts and evidence and conclusions must all reek of similar authenticity. I quite frequently add an anecdote in my presentation about my own journey of doubt during the project, or about the difficulties of fieldwork. I will mention the fact that one respondent, a sleepless parent perhaps, completed the online survey at three o’clock in the morning, or that another respondent wrote 1200 words in response to a question about why they would recommend a product or service. I want the client to breathe in and smell the reality of our work.

Storytelling places on us one demand that challenges many corporate style guides. Your firm may specify a certain tone, look and feel for its reports. I find storytelling is by nature more personal than that. A good story requires the emotional investment of the storyteller. In an age of more data than conventional reporting systems can deal with, storytelling demands that you lay your heart bare in the telling and sharing of the tale. You up for that?

Choices depend on different rules

Last week I invested US $12,000 in new software. That’s a ridiculous amount: frankly, about three times what my car is worth. Strangely, the decision to invest in this software was very simple. It was fuelled by an ideal: am I interested in doing what I do to the highest standard that I can? The answer is an idealistic yes. Idealistic for sure. My financial adviser and good partner for 34 years just shook her head.

The software, by the way, included Sawtooth’s well-known Max-Diff module as well as their pricey but promising MBC module which takes choice modelling to a whole new level. In terms of learning new technology, MBC promises to stretch me to the limit. It is neither intuitive, nor pretty. But more about that in a later blog.

As soon as I had Max-Diff out of the box I used it on a client survey as part of a conjoint exercise. I’ve long been a fan of conjoint because it emulates the decision situations people face in real life. We learn not just what they choose, but the architecture of their decision-making as well.

In this case I could compare the two approaches. In effect, the conjoint choice modelling and the Max-Diff exercise ran in parallel, testing more or less the same variables, but using different approaches.

With conjoint the respondent chooses (on-line) from a small array of cards, each with a different combination of attributes and features. They select the one they prefer most.

With Max-Diff the respondent chooses their favourite combination, as well as their least favourite.
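At its simplest, the Max-Diff side of such an exercise can be reduced to a counting analysis: score each attribute by how often it is picked best minus how often it is picked worst, normalised by how often it was shown. Here is a minimal sketch in Python – the attribute names and responses are invented for illustration, not data from the actual study:

```python
from collections import Counter

# Hypothetical Max-Diff responses: each task shows a subset of attributes and
# the respondent picks one "best" and one "worst".
tasks = [
    {"shown": ["price", "flavour", "organic", "packaging"],
     "best": "flavour", "worst": "packaging"},
    {"shown": ["price", "sugar_content", "organic", "brand"],
     "best": "price", "worst": "sugar_content"},
    {"shown": ["flavour", "sugar_content", "packaging", "brand"],
     "best": "flavour", "worst": "sugar_content"},
]

best = Counter(t["best"] for t in tasks)
worst = Counter(t["worst"] for t in tasks)
shown = Counter(a for t in tasks for a in t["shown"])

# Best-minus-worst count, normalised by how often each attribute appeared.
scores = {a: (best[a] - worst[a]) / shown[a] for a in shown}
for attr, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{attr:14s} {s:+.2f}")
```

Full Max-Diff estimation typically uses hierarchical Bayes rather than raw counts, but the counting version conveys the logic of the task: attributes that keep getting picked as “worst” sink to the bottom of the scale.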

Were the results similar? Well yes, they converge on the same truths, generally, but the results also revealed telling differences. One of the least important attributes, according to conjoint, proved to be one of the most important attributes according to Max-Diff. How could this be?

The lesson went back to some wonderful insights I learned from Alistair Gordon when we were working on the subject of heuristics – those rules of thumb that people use to evaluate complicated choices.

Most of us, when asked “how do people make choices?”, figure that we mentally prepare a list based on the variables and set about finding the best. In fact conjoint is predicated on exactly this process.

But Alistair introduced me to a fabulous concept: the veto rule. Put simply, if I were choosing between one brand of breakfast cereal and another, I may have a number of variables that contribute to optimality (flavour, naturalness, organic-ness), and no doubt my brain has worked up a complex algorithm that balances these things against the presence of raisins, puffed wheat, stone-ground oats and dried apricots. Good luck trying to model that!

But I also have a few simple veto rules. If a competing breakfast cereal contains more than x% of sugar, then bingo – I drop it from the list of competitors.

This explained why some variables scored as important with Max-Diff, but scarcely registered with conjoint. Among the variables were a few conditions that might be described as veto conditions. Those who use Max-Diff alone seldom discuss these different effects.
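The divergence can be sketched in a few lines. A compensatory weighted-sum model – the trade-off logic conjoint assumes – lets a strength on one attribute offset a weakness on another, while a veto rule eliminates an option outright before any trading-off happens. The cereals, weights and 10% sugar threshold below are all invented for illustration:

```python
# Hypothetical cereals scored 0-10 on each attribute, plus a sugar percentage.
cereals = {
    "CrunchyOats": {"flavour": 8, "naturalness": 6, "organic": 4, "sugar_pct": 22},
    "PlainBran":   {"flavour": 4, "naturalness": 9, "organic": 8, "sugar_pct": 4},
    "FruityPuffs": {"flavour": 9, "naturalness": 3, "organic": 2, "sugar_pct": 30},
}

WEIGHTS = {"flavour": 0.5, "naturalness": 0.3, "organic": 0.2}
SUGAR_VETO_PCT = 10  # veto rule: drop anything sweeter than this

def compensatory_choice(options):
    """Weighted-sum utility: a strength on one attribute offsets a weakness on another."""
    def utility(attrs):
        return sum(WEIGHTS[k] * attrs[k] for k in WEIGHTS)
    return max(options, key=lambda name: utility(options[name]))

def veto_then_choose(options):
    """Apply the veto first, then run the compensatory rule on the survivors."""
    survivors = {n: a for n, a in options.items() if a["sugar_pct"] <= SUGAR_VETO_PCT}
    return compensatory_choice(survivors) if survivors else None

print(compensatory_choice(cereals))  # CrunchyOats: high sugar, but wins on flavour
print(veto_then_choose(cereals))     # PlainBran: the veto knocks the others out
```

In the purely compensatory model the high-sugar cereal can still win on flavour; once the veto is applied first, it never even reaches the trade-off stage. That is why an attribute acting as a veto can look unimportant in a compensatory analysis yet decisive in real choices.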

So which approach – conjoint or Max-Diff – should one use? As ever, I think one should try both. My favourite research metaphor is about the blind men and the elephant, each discovering a different aspect of the animal, and each giving a different version of events. They are all correct, even if they have different answers. Together, they converge on the same answer: the whole elephant.

I do like the way research tools can give us these honest, statistically reliable, yet conflicting answers. They give us pause for thought, and they highlight the fact that numbers are merely numbers: quite useless without confident interpretation.

Here’s why I think Market Research companies are flying backwards.

Yes, technically we’re flying, but only just. Kitty Hawk thinking doesn’t help high-flying clients.

The very last meeting I attended when I was with my previous employer did not go particularly well. It was the weekly meeting of senior execs and on the agenda were things like staff training, client account management, and the usual HR concerns. I already knew that I was going to be leaving the organisation, so I wanted to end by contributing something which I felt was very important. “I want to run some staff training on SPSS.”

I didn’t expect the answer from my senior colleague. She asked simply: “why?”

I was quite taken aback, and I must admit I fumbled for an answer, mumbling things like: “extracting more value from the data we collect.” I can’t remember my exact words but they were along those lines. Anyway the workshop suggestion got kiboshed, and I left the organisation a few weeks later.

Three weeks ago one of the bigger clients of that same organisation announced that they were getting SPSS in-house, and they introduced me to a new member of the team, a particularly bright analyst who had previously worked with the banks.

I realised I had just witnessed a well-expressed closure to my fumbling argument from 18 months earlier. The reason we need to smarten up our analytical skills as market researchers is that if we don’t, we will simply be overshadowed by our clients.

In fact I see this all over the place. In-house analytical units within banks are using SPSS and SAS quite routinely, and most of the major banks have adopted text analysis software to help them swiftly code and analyse their vast streams of incoming verbal feedback. The text analysis centre-of-gravity in New Zealand is located client side, and not on the side of the market research industry. The same could be said of Big Data analytics. MR companies are scarcely in the picture.

Meanwhile, what is happening within market research companies? Well, this last week I’ve been in conversation with two analysts looking for better, more challenging roles than they have been given within their market research firms. One of them, an employee of one of New Zealand’s leading MR firms (and one I largely admire – they are by every measure a world-class operator), had for several years been asked to produce crosstabs and other descriptive outputs, and had on no occasion, ever, had his mental capabilities even remotely stretched. I’m talking about a graduate in statistics who has effectively been cudgelled to death by the rote boredom of low-calorie market research thinking. I asked what he was equipped with, software-wise, and he told me: “Microsoft Excel.”

This is simply not good enough, either professionally or strategically. While globally the volume of marketing data analytics is growing by something like 45% per annum, the market research industry is flat-lining, or showing single-digit growth at best. In other words most of the growth is happening over at the client’s place. And they aren’t even specialists in data.

If the market research industry wishes to remain relevant, then it has to offer something that clients can’t provide for themselves. It used to be that market researchers provided unrivalled capabilities in the design, execution and analysis of market research projects. The key word here is “unrivalled”, but I’m afraid the leading research firms are being simply outstripped by their own clients.

The mystery to me is why the larger firms appear blind to this phenomenon. Perhaps, in building their systems around undernourished, under-equipped DP departments, they have a wonderfully profitable business model. Pay monkey wages, and equip them with Excel. And for that matter, keep them at arm’s length from the client so they never get to see how their work even made a difference. The old production-line model. Tighten the screws and send it along the line.

Or perhaps the big firms are simply comfortable doing things the way they’ve always done them, or perhaps senior managers, having grown up in Kitty Hawk thinking, lack the imagination or the will to fly into the stratosphere.

Either way, if I were a young analyst looking at my career prospects, my attention would be over on the client side, or on dedicated data analytics operators such as Infotools. That’s actually a painful thing for me to say, speaking as a life member of my market research professional body. But if the status quo prevails, then we are going to see not just the relative decline, but the absolute decline of our industry.

What can market research firms do to rectify this problem? Here are 4 suggestions:

  1. Invest in decent analytical software. Just do it. A few thousand dollars – for a much better return than sending your exec to that overseas conference.
  2. Reignite the spirit of innovation in your groovy team of analysts. Rather than focus merely on descriptive data, let them loose on the metadata – the stuff that explains the architecture of the public mood.
  3. Develop a value-add proposition to take to clients. Look, all of us can describe the results of a survey, but we market researchers know how to make that data sing and drive decision-making.
  4. Employ specialists in big data, so that as market research companies we can integrate the thinking that comes from market surveys, and qualitative work, with the massive data sitting largely untouched in the world of the client.

In my view the market research industry has been going off course for the last 20 years. We are stuck at Kitty Hawk. We stopped shooting for the moon.




The role play of respondents. The Pantomime of Polls.

Used car, low mileage…contact your local, friendly Member of Parliament

I do think a lot of social opinion research misses a big point. We’re not supposed to love politicians. It’s our role as voters to distrust them. We revel in it. That’s how we’re wired. Sure, we trudge off to the polling booth to vote for them. And of course we devote hours each month immersed in the news cycles, following their every move. But trust them??

And such is the disconnect between many earnest poll questions and the realities of public opinion. Today a trusted colleague, a professional I greatly respect, tweeted me a link to a disturbing new poll. New polls are always disturbing. (They play a role too.) The figures said that few of us trust either the Government or market researchers with our data. Maybe I’m just getting ho-hum about having my profession bagged in every latest disturbing study, but I wondered whether my colleagues had missed something in their research.

For sure, it is one thing to know that only 30% of us trust our Governments with our data. (Frankly I’m surprised that it is that much.) But if the other 70% harbour mild or serious distrust on the issue – do they actually give a damn about it?

In other words we are used to measuring the breadth of sentiment, but often we do little to measure the depth of sentiment. Have those 70% written to their MP about Privacy? Have they signed a petition? Shared an angry tweet? Marched down main street?

Polls going back for decades have demonstrated the fixed social roles the public allocates to politicians and to us pollsters. We’re always down near the bottom of the trusted-profession list: cellar-dwellers with our favourite bad guys, the used car salesmen. But these results are a role play, a social construct – they are part of our culture. So I’m no longer shocked or amused about who wears the bad hats in this public play.

In pantomime we know there’s going to be a big bad wolf. I do think it is time to ask deeper questions that dig beneath these paradoxes:

  • If you distrust retailers (or car salespeople) – how come you still buy from them?
  • If you ‘hate the media’, how come you watch the news and TV for as many hours per week as in the 1970s?
  • If you distrust polls, how come you read them?

These are paradoxes that, were we to find some answers, would reveal a lot about the trade-offs and real values of the public we love to measure.

Who is she and why is she saying such horrible things?


In my country Twitter was running hot this week over the suicide of a TV and women’s-magazine celebrity. More to the point, it was running hot about the commentaries ABOUT the suicide. One in particular received a vitriolic reaction on social media – a piece by Herald columnist Deborah Hill-Cone, who savaged the individual who committed suicide and, although claiming never to have met the victim, put forward a full diagnosis (age, fading beauty, lack of attention) as to why suicide was the answer. It was an insensitive, in fact odious, piece of writing.

The reaction was swift and angry and by my count only two tweeters reacted positively to the piece (“on the money”) while the vast majority attacked the article and the writer.

Does Deborah Hill Cone have twitter? I’d like to tell her how wrong & terrible her post is. Id tell her respectfully, not like she has been.

“Today, I’m alive and she’s dead.” By Deborah Hill Cone. Unacceptable breach of humanity.

Wow. I don’t read the NZ Herald regularly, but Deborah Hill Cone‘s article today is just plain ghastly and abhorrent. One to miss for sure.

In which Deborah Hill Cone uses telepathy to work out what it was “that claimed Charlotte”.

Just read Deborah Hill Cone‘s article in NZ Herald. I wish I hadn’t. What a cruel, callous human being.

To Deborah Hill Cone of the @nzherald regarding her article on Charlotte Dawson – you are a disgusting human being

90 percent of the media are maggots, Debra Hill Cone climbed the top of that heap. #kickingsome1whentherdown

You get the drift. There were three reasons people were so inflamed by the piece, which was, at its most generous, judged to be ill-timed and unfortunately worded.

  1. The moral question of decency. Kicking somebody when they were only hours dead broke a cultural taboo.
  2. The fundamental inaccuracy of the column. How on earth could a columnist who had never met the victim even guess at what was or wasn’t going through the mind of the suicidal individual?
  3. The lack of authority of the writer. Put simply: who was she to judge?

This blend of credibility, authenticity and authority is an important component of reputation. When I did a big desk study on reputation a few years ago I was struck by how we could be impressed by people even if we didn’t like their work. Madonna? Don’t like her music – but I admire her ability to work hard, to read the market and to constantly reinvent herself. She’s a trouper, and you can’t knock that. She has built up these bona fides over a 35-year career. Quite a feat.

Or take Tupac Shakur (above), a rap legend who didn’t have time to build up his bona fides. Yet he still spoke and sang with authority because of the depth of his experience: his time in jail, for example, or his life amidst the gangstas of LA. I disliked his music when it first came out, but nowadays, long after he was gunned down, I can finally appreciate the artistry, creativity and authenticity of his music. It took me 20 years to get here, while his fans have been loyal from the second he opened his mouth. They heard; they recognised his authority.

Newspaper columnists and media commentators can also build up their credibility. I’m thinking of Alistair Cooke who, over decades, earned our trust. He had access to top subjects, but more than that, he demonstrated a deep fascination for the little telling details and the beautifully crafted sentence. I’m thinking of Chicago Trib columnist Mike Royko who, in his heyday of the 1970s, kept scratching below the layers of political, commercial and social bullshit to bring us true stories. They weren’t always big stories, but you could trust their veracity.

But this week’s minor blow-up showed how, without established bona fides, a person’s reputation can easily collapse. I’ve often talked about “forgivability” as a factor of reputation, and this year have been puzzling over what it is that makes one action forgivable, or not. Why might we stay loyal to some brand or sports star even after they stumble, while baying for blood the moment somebody else – in this case a NZ Herald columnist – puts a foot wrong?

The answer is trust. And trust is built up through credibility: through a solid track record that we can scrutinise and understand.

What an idiot. She doesn’t understand depression, and has a very skewed and narrow view on life. IDIOT.

what a great way to start an article “I know nothing about you, but …”

Deborah Hill Cone your piece says more about who you R than anything about Charlotte. Anything […more..] narcissistic than an Opinion Piece?

In the words of the tweet critics, our author didn’t know the subject, didn’t know the victim and was basically engaged in an exercise of narcissism. What was missing from the discussion was anyone defending her. There was no “look, I think she raised a fair point…” or “give Deborah her due – she gets it right 95% of the time, I happen to disagree with her this time…” None of that.

What Deborah Hill-Cone does in future is up to her. But if she wants to position herself as the columnist who speaks cruel truths, then she needs to build her bona fides with her public. It could be a long hard slog.