Monthly Archives: February 2013

If you’re doing a questionnaire – here are three ways to make it a goodie

Questionnaires don’t get any easier to write. Last year I slogged over several days to develop a questionnaire, and I’m sure the client must have been wondering: “What’s taking so long? How hard can it be to ask questions?”

Well, here's the answer: it is damned easy to write questions. But to write great questions – now that's a different story. For these you need to dig deep into the mind of the likely respondent. How do they evaluate the issues? What are their needs and motivations?

In fact that degree of empathy, I think, is the biggest difference between a regular, low-value questionnaire and a high-value exercise that delivers real understanding. Too many questionnaires are framed in the language of the client, or of the researcher. Not only is the wording starchy or full of client-speak, but the perspective is simply wrong. Many questionnaires end up like that well-worn joke: “But enough about me…tell me, what do you think about me?”

A second way to make a questionnaire better is to think about the poor analyst. Ask yourself first: is this survey simply about describing stuff? If so, then write lots of cross-tab-able questions. That’s how most of my colleagues in big companies seem to do it, and frankly I feel sorry for them. Where’s the joy in running cross-tabs?
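And to be fair, the mechanics of that descriptive work are trivial these days. Here’s a minimal sketch of a cross-tab – the column names and responses are invented, purely for illustration:

```python
# A minimal cross-tab sketch - the column names and data are invented.
import pandas as pd

responses = pd.DataFrame({
    "age_band": ["18-34", "35-54", "18-34", "55+", "35-54"],
    "brand": ["A", "B", "B", "A", "A"],
})

# One line produces the counts that fill a descriptive report.
print(pd.crosstab(responses["age_band"], responses["brand"]))
```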

But if you want to explore the data – for example to conduct a social network analysis, or perhaps a factor analysis to understand the underlying themes – then you need to ask your questions in quite a specific way. Ultimately, questionnaires are not about asking questions – they are about generating data. Not just data, but useful data.
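To make that concrete: a factor analysis needs a battery of comparable ratings – one row per respondent, one column per question. Here’s a minimal sketch with invented data; a questionnaire built from single pick-one questions simply can’t produce a matrix like this:

```python
# A minimal factor-analysis sketch - the ratings are invented.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# 500 respondents answering ten comparable 1-5 agreement scales.
ratings = rng.integers(1, 6, size=(500, 10)).astype(float)

fa = FactorAnalysis(n_components=2)  # hunt for two underlying themes
fa.fit(ratings)
print(fa.components_.round(2))       # loadings: which questions move together
```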

The third way to add value to the exercise is to inject yourself – if possible – into the process. Now that most surveys are conducted online, they are blessed with having a single voice – which obviates the need to reduce the language of the questionnaire down to a vanilla monotone (a necessary step to remove biases when using a typical call center).

So you can voice the questionnaire in a more conversational style. Not too conversational however. (Feedback on one questionnaire I wrote was: “Love the chatty style Duncan…but don’t get too chatty.”)

The personalisation of a questionnaire is not just a matter of tone, however. While it is certainly more pleasant to fill in a friendly rather than a dry questionnaire, the real benefit is in the quality of the answers. An engaged respondent is going to deliver more nuanced, well-considered answers than a respondent who is bored and simply wants to get out of the questionnaire via the quickest route possible – usually by marking 5, 5, 5, 5 in all those boxes.


Stories have more traction

I did an experiment a few years back, to test the relative traction and pass-on-ability of stories versus facts. What I did was take a trending topic on Twitter (Toyota recall) and classify 100 tweets about the story. Then I tracked whether these got retweeted or otherwise. To be honest it was a pretty low-rent experiment.

Anyway, the tweets fell into three categories.

1) Facts.  Example: “One million Camry sedans may be affected by recall.”

2) Opinions. Example: “I never trusted Toyota. Buy American!”

3) Narratives (explicit or implied). Example: “How many more recalls before Toyota actually apologises?”

Well, hardly any fact-based tweets got retweeted. I’d say as a form of social currency, mere facts are something of a FAIL.

A number of opinions got retweeted – and my view is that the successes here were those opinions that tapped into the Zeitgeist. At the time the Toyota story ran headlong into another story: the failure of Detroit. (This was before the rescue that Mitt Romney would not have approved. Gee, what’s happened to that guy? He’s vanished.)

The tweets easily most likely to get retweeted were the narratives. I characterised these as any tweet with an explicit beginning, middle or end to some unfolding story. At the time the CEO of Toyota was obstinately refusing to apologise for the recall mess – and this storyline added a human-interest dimension to the vehicle recall tale.
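If you wanted to rerun the tally yourself, it’s only a few lines. Here’s a sketch with invented records – in the real exercise, each of the 100 tweets was hand-classified first:

```python
# A sketch of the retweet tally - the records here are invented examples.
from collections import Counter

# Each record: (hand-coded category, did it get retweeted?)
tweets = [
    ("fact", False), ("fact", False), ("opinion", True),
    ("narrative", True), ("narrative", True), ("opinion", False),
]

totals = Counter(cat for cat, _ in tweets)
hits = Counter(cat for cat, retweeted in tweets if retweeted)

for cat in totals:
    print(f"{cat}: {hits[cat]}/{totals[cat]} retweeted ({hits[cat] / totals[cat]:.0%})")
```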

Stories have value because they provide a memorable coat hanger on which we can drape all kinds of facts, figures, opinions and sundry details. The structure of a story helps the audience locate and contextualise all kinds of information in one easily retrieved package. A story is the difference between buying an assortment of different garments, versus buying a complete business suit – pants, jacket, shirt and tie – where everything seems to fit and match.

My first job was to edit and later to write TV scripts, and I spent 8 years learning about different facets of storytelling – how to build suspense, how to capture interest and engagement, how to use quiet moments (puts on kettle, reads a postcard)  to highlight (screech, screech, screech of violins) the sudden violence as the axe murderer smashes into the kitchen.

At the time I left TV (soaps and cop shows are, admittedly, pretty crappy in the end) I thought, well, that was a cul de sac.

But if there’s one core skill I have in market research it is the one based on what I learned in those 8 years. A good story, well-told, lives on in the memory of the audience. By contrast, as I saw in the tweet experiment, a deck of facts – without narrative structure – simply won’t get passed on and used.

Data from the call center…how to improve it

Recently I’ve been working with rich verbatim data from a customer call center – and boy, this is where you hear it direct: complaints, hassles, confusions and also stories of unreasonable clients who feel they ought to be exempt from all fees and charges. Some people seem to think that institutions ought to run for free.

String data – it may be hard to work with if the questions are wrong to begin with.

Having worked with the client to develop a pretty accurate code-frame, we still found that our analysis wasn’t getting deep enough. For example, take all those customers who felt some mistake had been made. We had a well-constructed code-frame that picked these people up into one sizeable bucket – but somehow, through some combination of our coding architecture and the sheer diversity of the English language, we hadn’t adequately picked up the nature of those mistakes and errors.

So back we went and hand-coded our shortlist of Mistakes & Errors, and we found four basic causes for these. We also asked ourselves whether we might somehow have changed our coding architecture to get a sharper result next time. The answer is: no – the real problem was the way the verbatims had been recorded. A lot of the story sat between the lines and had to be inferred. There was no way that automated text analysis could pick much of this up.

The bigger answer is this: there are good ways to ask open-enders as well as less-good ways, and one of our suggestions was to tweak the way the call center asks customers to tell their stories – asking clearly what the cause of the error had been (in their view). We also suggested a check-box to precode some of the answers where possible. For example: Was a mistake or error made? Check. Then what general type of error was this? Type 1, 2, 3 or 4.
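Sketched as a data structure, the suggestion looks something like this – the field names are hypothetical, and the four error types stay as opaque labels here:

```python
# A hypothetical precode record for one call - field names are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CallRecord:
    verbatim: str                     # the customer's story, as the agent typed it
    mistake_made: bool                # the check-box: was a mistake or error made?
    error_type: Optional[int] = None  # one of the four general types (1-4), if any
    cause_per_customer: str = ""      # "what caused the error, in your view?"

record = CallRecord(
    verbatim="The auto-payment never went through and nobody called me back.",
    mistake_made=True,
    error_type=3,
    cause_per_customer="The branch set it up on the wrong date.",
)
print(record.error_type)
```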

Text analysis is seldom really simple, and too often analysts wrestle with the raw data – we struggle to make it sing. Sometimes we need to go back to the collection point. We don’t have to be victims in the process.


Open Enders – one way to get more value from these


With the broad arrival of text analysis as an increasingly mainstream analytical activity, a lot of heat is coming back on the question of how to ask more useful open-ended questions. Is there a “best way”?

Yes, there is. If traditional coding was a process of blunt word counts, then newer forms of text analysis are more about the relationships between ideas. Is this mentioned frequently alongside that?

For that reason you get much better mileage by asking for two or three things instead of just one. Instead of asking: “What is the main reason for choosing brand x?” you are better to ask: “What are your reasons for buying brand x? Please give us two or three reasons.”

This approach isn’t just about driving longer answers (though it may help there) – it is more about being able to see how ideas associate: you can start to understand the structure, or architecture, of the public’s mindset. A single “Name the main reason…” question gives you Lego blocks of data. A “give us two or three” question enables you to assemble those blocks.
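Here’s a minimal sketch of the payoff, with invented coded answers: once each respondent gives two or three reasons, you can count which reasons travel together – something a single “main reason” question can never tell you.

```python
# A sketch of co-occurrence counting - the coded reasons are invented.
from collections import Counter
from itertools import combinations

# Each set holds the coded reasons one respondent gave.
answers = [
    {"price", "taste"},
    {"price", "taste", "habit"},
    {"habit", "price"},
    {"taste", "packaging"},
]

pairs = Counter()
for reasons in answers:
    pairs.update(combinations(sorted(reasons), 2))

print(pairs.most_common(3))  # e.g. [(('price', 'taste'), 2), ...]
```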

Benchmarking Studies – are they worthwhile?

How are we doing? That’s the basic question everyone asks in the workplace. Back in 1999 I read a fabulous meta-study of hundreds of employee surveys and their methodologies, and the core question – the very heart of the employee’s anxiety – was summed up as: “How am I doing?”

Managers adopt the royal “we” and ask the same thing: how are we doing?

Compared to whom? In the 1980s and 1990s this question usually led us down the track toward benchmark studies. Benchmarking was quite the buzzword in the 90s, driven as it was by TQM and various Best Practice initiatives. Back then everyone wanted to be like IBM.

These days everyone wants to be like Apple (presumably they want to operate like anal control freaks – unhappy but proud of their work), but that’s not the issue. The issue is the whole idea of benchmarking.

It used to be that research companies loved these studies because if you could generate a big enough pool of normative data, then everyone had to come to you – the experts with 890 soap powder studies from around the globe. It was a licence to print money. And better still, the client could tell their stakeholders: “Look – we’re in the top quartile!” (The score may be crap but we’re better than most.)

But sooner or later benchmarks come unstuck. For a start there is the IBM effect. IBM used to be the company that everyone wanted to emulate – and just about every company that did went over the same cliff that IBM steered toward in the early 90s. They lost their way when the computing industry moved from a box focus to a software focus. Suddenly IBM had lost its mojo.  So the clones all disappeared while those with a unique story – the Apples and the Microsofts – who didn’t benchmark themselves to IBM, merrily went their own successful or at least adventurous way.

Then there is the problem of relevance. If you benchmarked your bank’s performance against all the other banks in the world, would that even be useful? (As a reference point, how about the bank I deal with in Cambodia that didn’t put an auto-payment through because – and I quote – “we were having a party that afternoon.”) Is it actually useful for local banks here in NZ to compare themselves to these other operators? Does it make a local customer quietly glad that their branch rates higher than the leading bank in Warsaw? I think not.

Would our client prefer to benchmark his salary against global norms? (Hey, we’ve got third world data included.)

But here, for the researcher, is the ultimate rub. If a client insists on benchmarking by adjusting their survey to ask the same questions as “the survey they used in Canada,” then what we’re buying into is not just the idea of comparing results, but also the idea that we may have to dumb down our questions; that we may need to run average research so we can really compare apples with apples.

I’ve tasted those apples. They’re dry and they’re floury.

Please. Give me crisp, fresh locally grown produce, and leave normative data for those who want to compare themselves to …er, normal.


Lifting Productivity in your Research Company – Part 3

I have a mix of very positive and very ho-hum feelings towards the book “Quiet”, which starts off well with an argument that we westerners live in a very rah-rah extrovert culture, one ill-geared for those of us who are introverts. By the end of the book I find the author going right off the rails – her claim that entire cultures (Asians) are basically way more introvert than extrovert strikes me as somewhat spurious. Put me right, somebody, but isn’t introversion mostly a genetic trait rather than socially driven? In which case the book gets into the same dangerous territory as The Bell Curve, which appeared a few years back claiming that some races are more intelligent than others.

However, one section of Quiet was on much more solid ground: the discussion about modern office space and how it is built around the idea of “the team”, but may be more about the idea of “how many people can we fit in, per square metre?”

Now if you’re an extrovert, the constant buzz of being in an open-plan office may be a spectacularly suitable thing. You feel so connected! You’re plugged in to a dozen conversations. You can quickly discuss the latest project.

But for introverts who have no real need for this continuous sense of being plugged-in, the open plan is just a distraction – and a very significant one. Try focusing on a structural equation model in a room full of ping pong players. That’s what it’s like.

In Quiet, a number of figures are quoted about loss of productivity surrounding the open-plan office compared to a more individualistic floorplan – and these figures are worthy of closer inspection. I’d be willing to bet that a typical research company could lift overall productivity by at least 5% simply by offering more quiet spaces for researchers to damn well focus and think.

Right now many researchers are already signalling a desire to do this – with their white earbuds or their choice of working from home.


“Once upon a time statisticians only explored. Then they learned to confirm exactly – to confirm a few things exactly, each under very specific circumstances. As they emphasized exact confirmation, their techniques inevitably became less flexible. The connection of the most used techniques with past insights was weakened.” – John Wilder Tukey, the godfather of 20th-century statistics