Tag Archives: questionnaires

The Adam Sandler effect

I was thinking a little more about the different rules we apply when we make our customer choices, and how these nuances may be lost if we ask research questions in the wrong way.

A really simple example, and one I’ve mentioned before, illustrates what I call the Adam Sandler effect.

What happens is this: Five of you decide to go to the movies this Saturday. So far so good.

But which movie? You and your group love movies, and surely you have a collective favourite to see. So you start discussing what’s on and four of you agree the new Clooney film is the one you want.

“Ah… but I’ve already seen that film,” says the fifth member of your clique.

The veto rule.

Okay, what’s our next best choice? And so it goes. Whatever you choose, somebody has either already seen it or has read a tepid review.

What you have here is a collision between two competing sets of rules. You set out to see your favourite film, and instead you and your group end up seeing the “least objectionable” film that, actually, nobody wanted to see. This is where Adam Sandler, I swear, has earned his place as one of Hollywood’s top three grossing actors of the past 10 years.

Apart from The Wedding Singer, which was a great little film, the rest have been an appalling bunch of half-witted comedies. Little Nicky, anyone?

It doesn’t matter. Every weekend at the movies there is a blockbuster or three – and then there is Adam Sandler, lurking there: his movies ready to pick up the fallout from your friends’ well-meaning decision process.

Now for researchers this has serious implications. If we only ask about what things people want, then we may end up with a theoretical ideal – but our research will never pick up the kind of goofy, half-assed, slacker productions that actually gross the big dollars. In our questionnaires we need to think about how we might pick up the Adam Sandler effect. Good luck to the guy. He has the knack of reading the Saturday night crowd much more accurately than most of our surveys could ever hope to achieve. We should learn from that.

  • Choices depend on positives as well as vetoes
  • When two or more people make a decision, the outcome depends more strongly on vetoes than on positives
  • There is always a market for things that are least objectionable.

Benchmarking Studies – are they worthwhile?

How are we doing?  That’s the basic question everyone asks in the workplace. Back in 1999 I read a fabulous meta-study of hundreds of employee surveys and their methodologies, and the core question – the very heart of the employee’s anxiety was summed up as: “How am I doing?”

Managers adopt the royal “we” and ask the same thing: how are we doing?

Compared to whom? In the 1980s and 1990s this question usually led us down the track toward benchmark studies. Benchmarking was quite the buzzword in the 90s, driven as it was by TQM and various Best Practice initiatives. Back then everyone wanted to be like IBM.

These days everyone wants to be like Apple (presumably they want to operate like anal control freaks – unhappy but proud of their work), but that’s not the issue. The issue is the whole idea of benchmarking.

It used to be that research companies loved these studies because if you could generate a big enough pool of normative data, then everyone had to come to you – the experts with 890 soap powder studies from around the globe. It was a licence to print money. And better still, the client could tell their stakeholders: “Look – we’re in the top quartile!” (The score may be crap but we’re better than most.)

But sooner or later benchmarks come unstuck. For a start there is the IBM effect. IBM used to be the company that everyone wanted to emulate – and just about every company that did went over the same cliff that IBM steered toward in the early 90s. They lost their way when the computing industry moved from a box focus to a software focus. Suddenly IBM had lost its mojo.  So the clones all disappeared while those with a unique story – the Apples and the Microsofts – who didn’t benchmark themselves to IBM, merrily went their own successful or at least adventurous way.

Then there is the problem of relevancy. If you benchmarked your bank’s performance against all the other banks in the world, would that even be useful? (As a reference point, how about the bank I deal with in Cambodia that didn’t put an auto-payment through because – and I quote – “we were having a party that afternoon.”) Is it actually useful for local banks here in NZ to compare themselves to these other operators? Does it make a local customer quietly glad that their branch rates higher than does the leading bank in Warsaw? I think not.

Would our client prefer to benchmark his salary against global norms? (Hey, we’ve got third world data included.)

But here, for the researcher, is the ultimate rub. If a client insists on benchmarking by adjusting their survey to ask the same questions as “the survey they used in Canada,” then what we’re buying into is not just the idea of comparing results, but also the idea that we may have to dumb down our questions; that we may need to run average research so we can really compare apples with apples.

I’ve tasted those apples. They’re dry and they’re floury.

Please. Give me crisp, fresh locally grown produce, and leave normative data for those who want to compare themselves to …er, normal.


Questionnaire Writing: My 33:33:33 ratio.

How much value can one extract from a questionnaire? I think of these documents as data generation cubes, and there are three types of data. The first set – Type A – is about the respondent: their demographics maybe, but also their attitudes and strategies towards the category in question.

The second set – Type B – is about the brands or competitors within the category in question. Rating their features, discovering the drivers of choice, comparing their benefits etc.

Streams of data. We can greatly enhance the value of a questionnaire by making sure we are not too customer focused nor too brand focused.

Type C, the third set of questions, typically covers the “end of the day” questions, and also the “how do we reach you?” questions – such as media or social media consumption – so that we can identify the likely customers, but also work out how to reach these golden people.

Now specific questionnaires will have their own topics, but for the most part those three main types of question just about cover it.

The secret to getting more value is to maintain something like a 33:33:33 ratio between the three types of question. If you had just 12 questions and asked 6 Type A questions, 4 Type B questions and were left with just 2 Type C questions, then you’d basically get 6x4x2=48 units of data. Not bad. But if you asked 4 of each type you’d get a lot more: 4x4x4=64 units of data, or 33% more information, just by re-jigging your questionnaire.
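There is a general fact underneath that arithmetic: for a fixed number of questions, the product of the three counts is largest when the split is as even as possible (the AM-GM inequality). A minimal Python sketch of the post’s units-of-data model, where the function name and the brute-force enumeration are illustrative, not from the original:

```python
# Sketch of the "units of data" model: the value of a questionnaire is
# modelled as the product of the three question counts (A x B x C).

TOTAL = 12  # questionnaire length used in the post's example

def data_units(a, b, c):
    """Units of data from a Type A, b Type B and c Type C questions."""
    return a * b * c

# Every way to split 12 questions across the three types,
# keeping at least one question of each type.
splits = [(a, b, TOTAL - a - b)
          for a in range(1, TOTAL - 1)
          for b in range(1, TOTAL - a)]

best = max(splits, key=lambda s: data_units(*s))
print(best, data_units(*best))   # the even split wins: (4, 4, 4) -> 64
print(data_units(6, 4, 2))       # the lopsided split from the post: 48
```

Enumerating all the splits confirms that (4, 4, 4) beats every other allocation, and that the 64-versus-48 comparison is indeed a 33% gain.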