Category Archives: New Analytical Tools

Measuring the 2 million combo menu.

Upsize this. Downsize that. No wonder we have whiplash: a fast food menu presents us with something like 2 million possible combinations.

In an age of mass customisation we now have the tools to measure really complex customer choices.

Scenario one. A young guy walks into Burger King and he’s starving. He’s got $10 in his pocket and while he waits in the queue he stares up at the menu board. Does he go for the tender crisp chicken wrap with honey mustard? Or does he stick with his usual Double Cheese Whopper? Or should that be a single cheese but with an upsized Coke? Next please. He makes his order.

Scenario two. Life insurance. The young mother sits at the dining table with her tablet open to the Life Direct website. She is comparison shopping, looking for the best life cover for herself and her husband. She enters the details into the online calculator, nominates how much cover they need (enough to cover the mortgage), and six competing offers pop up within 15 seconds. They are priced marginally differently per month. She has heard good things about some of the insurance companies, but bad things about one of the six. And a few of the competing offers come with additional conditions and benefits. She weighs everything up and clicks one.

Scenario three. The couple have all but signed up for the new Ford. They tested other competing brands, but this is the hatchback they like best. “Of course,” says the salesman as he starts to fill in the sales order form, “we haven’t discussed your options yet. Do you prefer automatic or manual? How about the sport model with the mag wheels? That’s only $1200 extra. Of course you have the two-door, but how about the four-door option? And it’s a bit extra for the metallic paint – though it tends to age a lot better than the straight enamel colours.”

“You mean it’s not standard?” she asks. The couple look at each other. Suddenly the decision seems complicated. She murmurs to her partner that maybe the Mazda with the free “on road package” is back in the running. The salesman begins to sweat. The deal is slipping through his fingers.

We are in the age of mass customisation

Three very common consumer scenarios. In an age of mass customisation, consumer choices don’t just come down to brand preference or advertising awareness – the two staples of consumer research – but rather to an astonishingly complex human algorithm which somehow handles what we might call menu-based choice, or MBC.

How complex? One estimate I have seen of the typical burger menu board – the one you must decide from during the 90 seconds you spend in the queue – is that a typical $15 order has more than 2 million possible combinations and permutations. Double cheese. Extra fries. Upsized Coke. Hold the beetroot. Everything comes with a cost and a trade-off, governed by personal taste, one’s own sense of value and the conditional skews of this week’s promotions. Somehow humans manage to make a decision, though to be honest, you can see how habits develop. They become an efficient way of dealing with information overload.
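
To get a feel for how quickly a menu explodes combinatorially, here is a minimal Python sketch. The items, options and prices are invented for illustration (they are not Burger King’s actual menu), and a real board with more items and sizes pushes the count into the millions:

```python
from itertools import product

# Hypothetical menu: each slot offers several (option, price) choices.
# None means the slot is skipped. All items and prices are illustrative only.
menu = {
    "burger": [("single cheese", 6.50), ("double cheese", 7.50), ("chicken wrap", 7.00)],
    "fries":  [(None, 0.00), ("small", 2.00), ("large", 3.00)],
    "drink":  [(None, 0.00), ("regular Coke", 2.50), ("upsized Coke", 3.00)],
    "extras": [(None, 0.00), ("extra cheese", 1.00), ("hold the beetroot", 0.00)],
}

budget = 10.00
orders = [c for c in product(*menu.values())
          if sum(price for _, price in c) <= budget]
print(f"{len(orders)} possible orders under ${budget:.2f}")
```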

A big challenge for market researchers

How do researchers even begin to test the attractiveness of 2 million options? The problem for market researchers is not just one of volume. The complication comes from the fact that consumer choices are not strictly rational. When you think about it, paying an extra dollar for an additional slice of cheese in a double cheeseburger is bloody expensive. And if I’m constrained by the $10 note in my back pocket, then who can tell whether I would prefer an upsized Coke and small fries with my burger, or a jumbo Coke and no fries at all? Or no Coke at all, but extra fries? Or Coke and fries but hold the extra cheese? Or…

Menu-based choice modelling is a relatively new field of market research that springs from its roots in conjoint analysis.
Conjoint, I learned recently to my delight, was a term first applied by my statistical hero John Tukey, who was fascinated by the question of how we can measure things that are judged not discretely but jointly – hence con-joint.
Since the 1970s conjoint approaches have been applied with increasing confidence.

At first these were paper-based, and a typical example of this kind of study gave respondents a set of 16, 24, or 32 cards on which various combinations and permutations of product description were portrayed. Generally this might be enough to efficiently calculate the relative attractiveness of competing offers.

For example, if five competing brands of peanut butter come in four different price points, low or medium saltiness, and super crunchy, regular crunchy or smooth, then we have 5 x 4 x 2 x 3 = 120 potential combinations of product to test. Good design might reduce this to a fairly representative 32 physical cards, each with a specific combination of price, brand, flavour and so on.
The conjoint approach – with carefully calibrated options to test – enables us to determine which factors most drive consumer decisions (price is often the main driver), and the degree to which consumers consider the trade-offs between, say, a preferred brand at a higher price and a lower-priced offer from a brand we don’t trust so well.
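
As a rough illustration of that arithmetic, the sketch below builds the 120-cell full factorial and then draws 32 cards from it. The brands and levels are partly invented, and a real study would use an orthogonal or D-efficient design rather than a simple random sample:

```python
import random
from itertools import product

brands    = ["ETA", "Pam's", "Sanitarium", "Brand D", "Brand E"]  # 5 levels (last two invented)
prices    = [3.99, 4.49, 4.99, 5.49]                              # 4 levels
saltiness = ["low", "medium"]                                     # 2 levels
textures  = ["super crunchy", "regular crunchy", "smooth"]        # 3 levels

full_factorial = list(product(brands, prices, saltiness, textures))
print(len(full_factorial))   # 5 x 4 x 2 x 3 = 120 combinations

random.seed(1)
cards = random.sample(full_factorial, 32)   # stand-in for a properly balanced fractional design
```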

Conjoint has the advantage of boiling each variable down to utility scores and – since price is one of those variables – allowing us to put a price-tag on brand appeal, or on flavour.
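
A minimal sketch of that logic, with made-up part-worth utilities rather than a real estimation (which Sawtooth would do via multinomial logit or hierarchical Bayes): divide the brand utility by the utility lost per dollar of price to express brand appeal in dollars.

```python
# Invented part-worth utilities from a hypothetical conjoint exercise.
brand_utility = 0.80               # gain for the preferred brand over a rival
price_utility_per_dollar = -0.50   # utility lost for each extra dollar of price

# Dollar value of the brand preference = brand utility / |price utility per dollar|
dollar_value = brand_utility / abs(price_utility_per_dollar)
print(f"The preferred brand is 'worth' about ${dollar_value:.2f} per pack to this respondent")
```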

Even so, because the paper-based system requires at least 32 cards to yield enough data for the peanut butter study, it places a high cognitive load on respondents.

By the 1980s conjoint studies could be run on computers, and this enabled some clever variations. First, the computer approach eased the cognitive load by offering three or four cards at a time, so respondents could more easily choose their preferred option (or none of these) over nine or ten iterations. Another innovation, useful in some situations but not universally, is adaptive conjoint: as it cycles a respondent through a series of choice exercises it may quickly decide that this respondent never chooses the Sanitarium brand, and so it begins testing variations among the preferred ETA and Pam’s brands. It focuses on the useful part of the choice landscape.

These approaches have been steadily honed and refined, and I have always admired the developers at Sawtooth for working and reworking their algorithms and methodologies to create increasingly predictive conjoint products. They are willing to challenge their own products: they moved away from favouring Adaptive Conjoint a few years ago.

Until about 2010 the software packages offered respondents a choice between “competing cards.” This approach works best on mass-produced products such as peanut butter, where there may be fewer than 200 combinations and permutations available. However, in the last decade or so, with the rise of e-commerce and the prevalence of bundled offers (think how phone, energy and broadband packages are now bundled), classic choose-one-card conjoint only goes some of the way to explaining what goes on in the consumer mind.

Enter MBC.

MBC is a Sawtooth product and a particularly expensive piece of software. I recently forked out around NZ$12,000 for a copy and, dismayingly, instead of receiving a sexy drag-and-drop statistical package, had the equivalent of a jumbo bucket of Lego bits offloaded onto my computer. It isn’t pretty. It requires HTML coding, and it has a genuinely steep learning curve.

In my case I’m fortunate to have worked with conjoint for a few years now, so conceptually I can see where it is coming from. Even so, I can think of no software I have dealt with in 22 years that has demanded so much of the user. Luckily, by the time the respondent sees it, things aren’t so complicated.

Not pretty for the researcher – but emulates reality for the respondent

With the right HTML coding, the online survey presents – metaphorically – a replica of the Burger King menu board. There you can choose the basic flavour of burger (beef, chicken, fish, cheese) and then top it up with your own preferred options. Choose the beef value meal if you want, and upsize those fries. As you select your options the price goes up or down, so respondents can very accurately replicate the actual choice experience.
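
Behind the scenes the menu screen is simply re-totalling option prices on every click. A toy sketch of that pricing logic (hypothetical items and prices, not Sawtooth’s actual MBC code):

```python
# Hypothetical option prices; the survey page recomputes the total after each selection.
option_prices = {
    "beef value meal": 8.50,
    "upsize fries": 1.00,
    "extra cheese": 1.00,
    "upsize Coke": 0.50,
}

def order_total(selected):
    """Return the running total shown to the respondent."""
    return sum(option_prices[item] for item in selected)

print(order_total(["beef value meal", "upsize fries"]))   # 9.50
```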

And that’s the point of MBC: to emulate real choice situations as closely as possible. We might discover that the consumer’s choice architecture transforms radically depending on whether the price point is $10, $12 or $15. Sawtooth’s experience shows that many of the trade-offs are not strictly rational. There may be an overriding decision about whether to order a Whopper or a Double Whopper, but regardless of that, the shopper may always prefer a Coke of a certain size. In other words there is no trade-off there; or there is one, but it applies only to the fries.

In a typical example the respondent is shown 10 variations of menu boards, and each time given a price limit and asked to choose. Given that the questionnaire needs HTML coding, it will cost the client somewhat more to conduct an MBC survey compared to a regular out-of-the-box online survey. The investment should be well worth it.

Another consideration: given the 2 million permutations available, this is not something to be tested with small sample sizes. A thousand respondents may be required as a minimum, each making ten choices, to generate 10,000 data points. Picture these as trig points on a data landscape.

Given enough of these reference points, the software then has enough information to fill in a complete picture of the consumer-choice landscape. You simply can’t do this when you break down the survey into constituent questions about price, taste, brand, options etc – that is, in considering these elements not jointly, but singly.

Now comes the good part. By assembling this data, you could more or less model the optimum menu board. If I were Burger King, for example, I could probably massage the menu so that the average punter with $10 in their back pocket would spend not $9.20 but something closer to $9.80 – a lift of roughly 6.5% in revenue willingly forked out by happy customers. If I wanted to try a special offer on the flame-grilled chicken burger, I could see what impact this would have on revenue and on demand for the other options.
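
The what-if logic is straightforward once the choice model has been estimated: predict each customer’s order under the candidate menu and compare average spend. A toy version with invented spend figures, just to show the arithmetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented predicted spend per customer under the current and re-worked menus,
# capped at the $10 budget. Real figures would come from the estimated choice model.
spend_current  = np.minimum(rng.normal(9.20, 0.50, 1000), 10.0)
spend_proposed = np.minimum(rng.normal(9.80, 0.30, 1000), 10.0)

lift = spend_proposed.mean() / spend_current.mean() - 1
print(f"Average spend ${spend_current.mean():.2f} -> ${spend_proposed.mean():.2f}, "
      f"a lift of about {lift:.1%}")
```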

How accurate is MBC?

As with any conjoint approach, accuracy depends largely on the questionnaire design and on the survey design, which can be tested before it goes into field by generating random data and running it through the MBC simulator.

If I were drawing a landscape map of New Zealand and had 10,000 trig points, I could probably come up with a pretty accurate picture of the physical landscape. There would be enough points to suggest the Southern Alps and the ruggedness of the King Country, for example. But it wouldn’t be totally granular or perfect, and I would be extrapolating a lot of the story.

Similarly, given that we are testing, say, 2 million combinations of menu with just 10,000 data points, don’t expect the modelling to be perfect. Sawtooth has tested MBC against actual choice data held out for comparison, and found the software picks the right options with roughly 80% accuracy. So for my money MBC is right up there with neural networks for accuracy, but much more useful than NNs for testing menu-based choices and for explaining the interactions that are going on.
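
Hit-rate validation itself is simple to express: hold some actual choice tasks out of the estimation, predict them from the model, and count how often the prediction matches what the respondent really chose. A sketch with dummy arrays standing in for the held-out tasks:

```python
import numpy as np

# Dummy data: the option the model predicted each respondent would choose
# on a held-out task, versus the option they actually chose.
predicted = np.array([2, 0, 1, 1, 3, 2, 0, 2, 1, 0])
actual    = np.array([2, 0, 1, 2, 3, 2, 0, 1, 1, 0])

hit_rate = (predicted == actual).mean()
print(f"Holdout hit rate: {hit_rate:.0%}")   # 80% in this toy example
```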

Currently I’m using the software for a client who is looking at bundled products in a particularly complex market; that is why I bought it. There is simply no other tool available to do the job. Without giving any client information away, I will, within a few weeks, disguise the data and make a case study available to show how MBC works and how it can be used.
I mention this because I am a strong believer in the idea that market research professionals ought to share their experiences for mutual benefit. As a profession we are under mounting challenge from the big data mob. However, so long as they stay in the world of Excel, and so long as market researchers remain aggressive in the way we trial and purchase new software advances, we will continue to have a healthy profession. My first reaction to MBC is that it is an answer – at last – to a world where customers can now customise just about anything. You are welcome to contact me if you want to learn more.

Duncan Stuart
64-9 366-0620

The big thing we forget to measure

In our market research reports we generally make a concerted stab at the idea that our data is both precise and reliable. We gleefully report the sample size, and we pinch our fingers together as we cite the maximum margin of error – which in many surveys is plus or minus 3.1%. Talk about pinpoint accuracy!

Yet we blithely ignore the fact that our clients work in a fuzzy universe where things go right, or horribly wrong. If you were the brand or marketing manager for Malaysian Airlines this year, I really wonder if your standard market research measures – brand awareness, consideration, advertising awareness etcetera – would have anything remotely to do with the fact that you have lost, very tragically, two airliners within the space of three months. Risk happens. Regardless of your marketing investment, passengers aren’t flying MH.

Or if you are the marketing manager for Coca-Cola in your country, do you honestly think that the subtle shifts of brand awareness, ad recall and consideration have as much effect as whether this summer proves to be wet and dismal, or an absolute scorcher?

We may not have a crystal ball when it comes to weather forecasting, but we do have decades of accurate climate data. When I did this exercise a few years ago I gathered 30 years’ worth of data, popped it into Excel, then used a risk analysis tool to come up with a reasonable distribution curve based on that data. It looked something like the chart above. Then I could set my temperature parameter – below x° – and, based on that, could fairly reasonably calculate that my city had a 20% chance of having a dismal summer. The risk was high enough, I felt, that any marketing manager for any weather-sensitive product or service should have a contingency plan in case the clouds rolled over and the temperatures dropped.
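
The same exercise is easy to reproduce outside @Risk. A sketch, assuming you have 30 annual mean summer temperatures to hand (the figures below are invented): fit a simple normal distribution and read off the probability of falling below your “dismal summer” threshold.

```python
import numpy as np
from scipy import stats

# Thirty invented mean summer temperatures (deg C); substitute real climate records.
temps = np.array([21.8, 22.5, 20.9, 23.1, 22.0, 21.2, 23.4, 22.8, 20.5, 21.9,
                  22.3, 23.0, 21.5, 22.7, 20.8, 22.1, 23.3, 21.7, 22.4, 21.0,
                  22.9, 21.4, 22.6, 20.7, 23.2, 21.6, 22.2, 21.3, 22.0, 23.5])

mu, sigma = temps.mean(), temps.std(ddof=1)
threshold = 21.3                     # below this, call the summer "dismal"
p_dismal = stats.norm.cdf(threshold, loc=mu, scale=sigma)
print(f"Chance of a dismal summer: {p_dismal:.0%}")
```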

Why don’t we do this more often? Why don’t we build in the various risks that accompany the work of our clients? If we did, we could better help them to make decisions as circumstances arise. We consider some risks – what if the competitor achieves greater brand awareness and consideration – yet we treat many of the others (weather, currency or price fluctuations, whether or not supermarkets choose to stock us, whether or not some kind of health scare will affect our category, etcetera) as if they were off-limits and outside our scope.

Though these data are not outside our scope at all. The data may not come in via our surveys, but they are just as relevant. Going back to the weather example: we did observational research at a local lunch bar and found that on wet, cool days the pattern of drinks consumption was quite different from that on hot, sunny days. It wasn’t just a question of volume. On cool days people switched, slightly, from CSDs towards juices, as well as choc-o-milk type drinks.

So if I were a soft drink marketer, and I had a market research firm supplying me with climate risk data, as well as an idea of how people behave when it is hot, average or cool, then I might come up with a marketing plan for each of those circumstances. I would go into summer expecting it to be average, but as soon as the long-range forecast told me that the weather was indeed going to be cool, I would think: well, I had a 20% expectation that this would happen. Time to wheel out Plan B. I’d be prepared.

The risk analysis tool that I use is called @Risk and it is frighteningly simple to use. It works as a plug-in to Excel, and takes roughly 10 minutes to learn.  Since using the software, my outlook toward what we do as market researchers has totally changed.

We are not in the survey business. We are in the business of assisting our clients to make informed, evidence-based decisions.

Sometimes the evidence comes from our survey work, bravo! But sometimes the evidence comes from the weather office, or from the war zone of the Ukraine.

Choices depend on different rules

Last week I invested US$12,000 in new software. That’s a ridiculous amount, frankly about three times what my car is worth. Strangely, the decision to invest was very simple. It was fuelled by an ideal: am I interested in doing what I do to the highest standard I can? The answer is an idealistic yes. Idealistic for sure. My financial adviser and good partner of 34 years just shook her head.

The software, by the way, included Sawtooth’s well-known Max-Diff module as well as their pricey but promising MBC module which takes choice modelling to a whole new level. In terms of learning new technology, MBC promises to stretch me to the limit. It is neither intuitive, nor pretty. But more about that in a later blog.

As soon as I had Max-Diff out of the box I used it on a client survey as part of a conjoint exercise. I’ve long been a fan of conjoint because it emulates the realistic decision situations people face in real life. We learn not just what they choose, but the architecture of their decision-making as well.

In this case I could compare the two approaches. In effect, the conjoint choice modelling and the Max-Diff exercise ran in parallel, testing more or less the same variables, but using different approaches.

With conjoint the respondent chooses (online) from a small array of cards, each with a different combination of attributes and features, and selects the one they find best.

With Max-Diff the respondent chooses their favourite combination, as well as their least favourite.
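
At its simplest, Max-Diff analysis comes down to counting: how often each item is picked as best versus worst, relative to how often it was shown. (Sawtooth estimates proper utilities via hierarchical Bayes; this counting sketch, with invented attribute names, just shows the intuition.)

```python
from collections import Counter

# Each task: the attributes shown, the one picked as best, the one picked as worst (toy data).
tasks = [
    (["price", "brand", "flavour", "sugar content"], "price", "sugar content"),
    (["price", "brand", "organic", "sugar content"], "brand", "sugar content"),
    (["flavour", "brand", "organic", "price"],       "price", "organic"),
]

shown, best, worst = Counter(), Counter(), Counter()
for items, b, w in tasks:
    shown.update(items)
    best[b] += 1
    worst[w] += 1

scores = {item: (best[item] - worst[item]) / shown[item] for item in shown}
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```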

Were the results similar? Well yes, they converge on the same truths, generally, but the results also revealed telling differences. One of the least important attributes, according to conjoint, proved to be one of the most important attributes according to Max-Diff. How could this be?

The lesson went back to some wonderful insights I learned from Alistair Gordon when we were working on the subject of heuristics – those rules of thumb that people use to evaluate complicated choices.

Most of us, when asked “how do people make choices?” figure that mentally we prepare a list, based on the variables, and we set about finding the best: in fact conjoint is predicated on exactly this process.

But Alistair introduced me to a fabulous concept: the veto rule. Put simply, if I were choosing between one brand of breakfast cereal and another, I may have a number of variables that contribute to optimality (flavour, naturalness, organic-ness), and no doubt my brain has worked up a complex algorithm that balances these things against the presence of raisins, puffed wheat, stone-ground oats and dried apricots. Good luck trying to model that!

But I also have a few simple veto rules. If a competing breakfast cereal contains more than x% of sugar, then bingo – I drop it from the list of competitors.

This explained why some variables scored as important with Max-Diff, but scarcely registered with conjoint. Among the variables were a few conditions that might be described as veto conditions. Those who use Max-Diff alone seldom discuss these different effects.
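
A sketch of that two-stage idea, with invented cereals and weights: apply the veto rules first (non-compensatory), then rank whatever survives by a weighted utility score (compensatory).

```python
# Hypothetical cereals; attributes and scores are illustrative, not real products.
cereals = [
    {"name": "Crunchy Oats",  "flavour": 7, "naturalness": 8, "sugar_pct": 12},
    {"name": "Choc Puffs",    "flavour": 9, "naturalness": 3, "sugar_pct": 34},
    {"name": "Fruity Flakes", "flavour": 8, "naturalness": 6, "sugar_pct": 22},
]

SUGAR_VETO = 25                                   # veto rule: drop anything above x% sugar
weights = {"flavour": 0.6, "naturalness": 0.4}    # compensatory trade-off among survivors

survivors = [c for c in cereals if c["sugar_pct"] <= SUGAR_VETO]
ranked = sorted(survivors,
                key=lambda c: sum(w * c[attr] for attr, w in weights.items()),
                reverse=True)
print([c["name"] for c in ranked])   # Choc Puffs never makes the list, however tasty
```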

So which approach – conjoint or Max-Diff – should one use? As ever, I think one should try both. My favourite research metaphor is about the blind men and the elephant, each discovering a different aspect of the animal, and each giving a different version of events. They are all correct, even if they have different answers. Together, they converge on the same answer: the whole elephant.

I do like the way research tools can give us these honest, statistically reliable, yet conflicting answers. They give us pause for thought, and they highlight the fact that numbers are merely numbers: quite useless without confident interpretation.

Why we need to measure corporate forgivability

One day before the disaster in 1984, Union Carbide were the friendly guys who brought you the Eveready brand and the Cat with 9 Lives. The NPS would have been pretty good. And pretty useless at measuring the forgivability, or lack of it, of the once proud Union Carbide name.

Most organisational and market research measures – the KPIs – are readings of static concepts rather than of processes.

The well-trodden NPS measure, for example, is a measurement of hope (how many of our customers would recommend us?) but it does little to indicate whether the score might be in danger of imploding. It is like a measurement of speed without the other vital measure on your journey: how full is the tank? The speedometer and fuel gauge together measure a process and offer a predictive capability: “At this speed we’re using up too much gas to make it to the destination.”

Here’s another measure: staff satisfaction. It may be high, it may be low, but the score itself says little of any predictive use. Thus everybody might be deliriously happy at work (and I hope they are), but we have no idea from the happiness index whether they’ll stay happy once the takeover goes through, or when the job cuts are announced. In fact the measure is pretty useless. It is a thermometer, when what might be more predictive is some kind of staff barometer that hints at soft or deep changes immediately ahead.

Now brands are measured along similarly useless lines. They’re measured statically but not predictively. We measure the status of the brand, but that gives no feel for whether it is heading toward a cliff, or whether it would survive the fall if it went over.

When we judge the people around us, we don’t just stop at “he’s a great guy” or “she’s 5 foot 11” or “she’s a real leader.” We almost always add what I refer to as the moral dimension: “He’s a great guy… BUT I wouldn’t trust him with my money,” or “She’s a real leader… AND did you see the way she showed so much respect, even to the people she had to make redundant.” In other words we don’t just settle for today’s status update; we also tend to throw in a prognosis of how that person will act if they’re put in a conflicted, morally challenging position. We have a “Moral Vocab” with which we assess the likely behaviours of those we judge. Great guy, but morally iffy. Great leader, and puts people first no matter what.

The star term, I reckon, in this Moral Vocabulary is forgivability. It is a useful concept because it accepts that all people will fail at some point, or that all organisations will have their crisis and every brand will have its “New Coke”/”Ford Edsel” moment. They’re bound to. Forgivability measures how people will respond if and when that crisis occurs.

In terms of the resilience of the brand, or the company, the real test is not how you rate when everything goes according to plan, but how quickly you can bounce back if you falter.

To take an obvious case, Tiger Woods was the undisputed star of the golfing world (and still the highest-paid athlete in 2013, despite not winning a major tournament). A typical brand measure, up to the point of his personal and media scandal, would have given him stellar results on, say, an NPS scorecard, or on any other brand index I can think of.

But look how quickly that turned. The moment Woods fell to earth in December 2009 (or should I say, the moment he ran his SUV into a fire hydrant), everything changed. The tank of public goodwill suddenly showed “less than half full” and sponsors started to walk away. Suddenly the values we admired in this amazing sportsman were reframed and seen in a new light. Determination? Or simply arrogance? Success? Or just runaway greed? Perfection? Or just a sham facade? Everything that a brand measure might have rated as superb one day was shattered within 24 hours.

Four years later, forgiveness has largely occurred. The gallery is generous once more when he makes a great shot, and descriptions of this golfer are laced more with qualifications about his pay cheque or his climb back to form than with who he is in a relationship with.

Organisations can rate well in terms of forgivability, or they can rate poorly. It depends, as it did for Tiger Woods, on the seriousness of the sin and the forgivability of the sinner.

In my view, Tiger Woods’ forgivability was undermined by the woeful, stage-managed response from his sponsors. Remember those bleak Nike ads where a voice-over (supposedly Tiger’s own dad) remonstrated mournfully with our hero? These attempted to package the redemption of Tiger Woods into the space of a 30-second TVC. The ads assumed that with a quick show of humility we would be swift to forgive the golfing superstar. Instead the ads gave us evidence that Woods, behaving not as a man but as a marketing juggernaut, was attempting to media-manage his way out of his mess. It looked like insincere spin doctoring. Another sin! And so for weeks the Woods machine kept heaping more fuel onto the fire.

But what of your organisation? It may be sailing along – the speedometer reading high, the thermometer nice and warm – but what if it made a blunder? It will happen. How will your stakeholders or customers respond?

Apple, that golden child of the business media, has a string of business and product blunders a mile long. But were they forgivable? Absolutely. Why? Because the products are cool and because Steve Jobs never really deviated from his vision. The public understood his quest and knew that some failures would litter the pathway to success. No problem.

But post-Jobs, I think the forgivability factor is trending down. Steve’s quest is over and what we perceive is the hulking cash-cow of an organisation he built. The product may be designed in California, but the cash is domiciled wherever the company can avoid tax. Things like that start to reframe Apple not as “one man’s passion” but as just another bloody corporate.  In that light, every new launch looks less like Steve’s marvellous march of innovation, and more like the CFO’s latest plan to sucker the public. You can almost hear Mr Burns from the Simpsons. “A masterstroke Smithers! We’ll do what Microsoft used to do with Windows. Yet our fans will still think we’re the anti-Microsoft!”

Some sins are simply business as usual. Coke really did believe their New Coke formula was a better, more preferred option. They just didn’t think things through.

But some sins are simply not forgivable. Union Carbide, that industrial fortress of a company that made Eveready batteries, pesticides and Glad Wrap, was responsible for one of the worst industrial accidents in human history: the Bhopal disaster in India, back in December 1984.

Here was a company that was deliberately trading off safety costs in order to boost profits from a poorly resourced pesticide plant located in a heavily populated area. As a result of the MIC gas leak, an estimated 40,000 people were permanently disabled, maimed or left with serious illness.

That was bad enough. But then, after the disaster, Union Carbide overtly tried to avoid culpability and to avoid paying any compensation to the families of the accident’s thousands of victims. There was no mea culpa; instead the company fought a legal battle before finally being sued by the Indian Government for US$470 million, five years after the disaster. The guts of their defence was that they, as a company, weren’t responsible for Bhopal: it was the fault of their employees in India. It was a massive squirm. The head of the company, Warren Anderson, was never brought to justice in India; he fled the country while on bail and has since fought extradition from the USA. Today the company no longer owns its flagship brand (Eveready) and is part of Dow Chemical, which has inherited the mess. In 2010, 25 years after the disaster, eight former executives of Union Carbide India Ltd. were finally found guilty of death by negligence. Dow, themselves tarnished by their own history with napalm and Agent Orange, are still assisting with the clean-up of the highly toxic Bhopal site.

Mistakes, blunders and sins can be made by any organisation. But how soon can these organisations recover – how soon can they be forgiven? In a dynamic world researchers need to measure these things. In my next blog I’m going to dissect the elements of forgivability. Get it wrong and your organisation will tread an unnecessarily risky path.

Going deeper not cheaper in research.

Can social media really help us deeply understand the human landscape?

I’ve been ruminating lately on subjects as diverse as Twitter and suicide – thinking about the way people connect but don’t really connect, even in an age of social media. Today the daily newspaper carried a coroner’s report on a boy who ended his life, and really all the warnings were there in his Facebook messages. Nobody, it seems, tangibly connected with the depressed teenager: a tragedy.

But the story stands, I feel, as a sad metaphor for the work we do as market researchers who, despite being armed with the best technology, and despite the excitement of being able to conduct research in real time via smartphone apps and the like, languish in the shallows of human understanding. Discussion papers about new methodologies tend mostly to take a channel-centric view of the new research media rather than asking what depth of understanding we will achieve. Can we use social media to get something deeper – more immersive – than what we get in a typical CATI or online survey? I’m not seeing many papers on the subject.

So I embarked on a personal experiment to see whether it is possible to gain a deeper, more experiential understanding of a different culture via standard social media, including Facebook, Twitter and the notorious but interesting Ask.fm.

I’m not going into details here, other than to make some points which I may explore in later posts.

  1. Social media demand that we develop a persona. They give us around 20 words and room to post one photo, and that – effectively – is our identity or mask. Social media are thus quite limiting.
  2. Masks may be held tight: some online personas are heavily protected or overly managed (or branded), so some subjects are more responsive or open to engagement than others.
  3. A second dimension of the online persona is the focus of the person. Online, if not physically, some people tweet and message fundamentally to report on themselves rather than to engage with others. Here’s a pic of my lunch. A minority fundamentally enter social media to listen, to engage, or to ask questions of others.
  4. The third dimension is, for researchers, potentially the most interesting. Most people engage with, friend or follow people within their own circles of interest. This is heavily geographical (my workplace, my college, my neighbourhood) but is also expressed in terms of my interests. We market researchers use a hashtag on Twitter to find each other. But a minority of users open themselves up to random and ‘foreign’ links. They do this because they are inquisitive, and because they are also interested in helping other people.

It strikes me that if I were to use social media as a channel via which I could conduct some kind of social anthropology I’d get a vastly different set of insights from those who are:

  • Type One: heavily masked, ego-centric and confined to their own circle. In fact on Ask.fm I was lucky to get answers at all from these people, despite their invitation to “ask me anything.” They showed a very low level of engagement. Versus:
  • Type Two: open and upfront people, at least semi-focused on others and not just themselves, and open to joining “foreign circles.” These people are engaged, interesting and open to discussing questions.

The conversations one encounters via social media are conversations in the true sense: they happen over time. For that reason I’m less trusting of the idea that we can achieve a true understanding of individuals or customers if we rely, as we’ve always done, on snapshots. When engaging with a set of strangers on my personal journey, I was struck by their mood swings, perhaps amplified by the nature of written communication and the starkness of the pictures they chose for Tumblr, and by the shifting nature of their opinions.

Underneath this was the basis of any good conversation: the degree to which both parties know about each other, and whether they can develop a shorthand and use shared references and metaphors as foundations on which to build trust and then converse on deeper ideas. In most cases, as I engaged with strangers, I could do this: but not in all. In every case the process took time.

What are the implications of this? My first inclination is that we can and should use social media the way a spider uses its web: it can feel an insect landing in the far reaches of its domain. So what we need are agents or listeners at the far reaches of our own webs. For example, if I were asked to use social media to explore a ‘foreign’ market (users of herbal remedies, say, or San Francisco football fans) I would seek out a shortlist of Type Two people. Then I would ask them all about their worlds. I could get far more insight via them than via hundreds of completed responses from the relatively unengaged Type Ones.

In saying so, I’m consciously moving away from scientific sampling and classic design, and moving toward something else.

I do think we should be looking for systematic ways to use the strengths and account for the weaknesses of social media. It isn’t enough to say, “Oh, we now do research via social media.” That may point to truly immersive conversations, or it may paper over the cracks of particularly shallow, non-insightful feedback. Comments?

This blog reflects the paper I presented to the MRSNZ 2013 Conference – which won the David O’Neill Award for Innovation.

Social Network Analysis – a quietly useful research approach

Social Network Analysis helps us understand the systems, repertoires and peer influences that drive our behaviours.

The first social network diagrams were, I believe, drawn in the early 1930s by the sociologist Jacob Moreno, who was describing the interactions between a handful of people he was studying.

Decades later, of course, we have the computational power to describe networks not just of small groups but of larger ones – hundreds, thousands or, I suppose, millions – as we see in the network clouds that map the political blogosphere, or the social-interest clouds that define the woolly landscape of Facebook membership.

Social Network Analysis isn’t hard to conduct, and there are very good free tools available – enough to make it quite easy for any MR professional to spend an afternoon getting familiar with the possibilities. I’m surprised I don’t see the fruits of SNA everywhere. Why is it so useful?

The answer is simple. First: people are social and are influenced by peers. Robust studies (the Framingham study, for example) demonstrate quite simply that smoking is not just an individual choice but very much a peer-driven thing. Smokers, it turns out, tend to live within networks of other smokers. Obesity follows a similar pattern. Put simply, if you are surrounded by large people, then large is your “normal.” I would imagine this social network effect applies to brand usage (the new product that everybody in the book group was raving about) and to other individual choices which – if you put your network glasses on – become a lot more peer-driven than anyone might guess.

So that’s the main reason for SNA. People are social.

But there’s a second reason also. We tend to view things in terms of repertoires, clusters and systems. When I say that I prefer to avoid rush hour, what I’m really saying is that I’m trying to avoid a whole system of issues that culminate in lengthy travel time. If you ask me what fruit I buy each week, my answer isn’t simply based on my favourite fruits in ranked order, but on my belief that I need a balanced diet: citrus, apples, stone fruit, bananas and maybe kiwifruit. I wouldn’t dream of a purchase without some citrus, and also bananas. I see my choices not as a collection of individual choices, but as a system that gives me a balance of flavour, goodness and value. In fact what Carlo Magni and I did was use fruit purchase data to create an SNA based not on people but on the fruit in a typical fruit bowl. The “system view” in Japan turned out to be very different from the “system view” in my home country of New Zealand. Bar charts would not have helped us visualise this so clearly. SNA gave us real insight into the working heuristics of the Tokyo fruit buyer. We could also understand how seldom-mentioned fruit (yummy persimmons) fitted into the larger system, and why certain types of fruit – through poor definition – have trouble “breaking into” the social network of the typical fruit bowl.

There are two more reasons for using SNA and I’ll touch on these very briefly.

Reason three is that SNAs produce a plethora of measures you didn’t expect to get. When you generate a diagram, as above, you also generate a number of statistics for each node (or individual) in the network. Two useful measures are:

1) Betweenness: the degree to which a player connects two or more quite disparate groups within the network. In an organisation there may be just a few people linking Silo 1 to Silo 2.

2) Eigenvector centrality: the degree to which a player is plugged into the wider network.
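
Both measures drop straight out of free tools such as NetworkX. A minimal sketch with a made-up organisation, where one person bridges two silos:

```python
import networkx as nx

# Toy organisation: two silos bridged by a single person, "Dana".
G = nx.Graph()
G.add_edges_from([
    ("Ana", "Ben"), ("Ben", "Cal"), ("Cal", "Ana"),   # Silo 1
    ("Eve", "Fay"), ("Fay", "Gus"), ("Gus", "Eve"),   # Silo 2
    ("Cal", "Dana"), ("Dana", "Eve"),                 # Dana links the silos
])

betweenness = nx.betweenness_centrality(G)    # who bridges disparate groups
eigenvector = nx.eigenvector_centrality(G)    # who is plugged into the wider network
print(max(betweenness, key=betweenness.get))  # -> Dana, the silo-bridger
```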

The fourth reason for using SNA is clarity. Clients easily ‘get’ a social network diagram, and can readily see how products or people glue together – or remain disconnected.

Of course they easily get it. They’re human.

Text Analytics – a well-regarded software package.

Clarabridge is a well-regarded, cloud-based text analysis package. The company offers the product in both Professional and Corporate editions, and for the former charges per line of data. The system is geared around several inputs, including social media, and for that reason might be a good alternative to the SPSS Text Analysis product I currently champion. One benefit for users is that the pay-per-usage business model may have less immediate impact on the bottom line.

 

Giving more power to Excel

I never thought I’d start a blog by discussing Excel – a platform I regard as quite unsuitable for market research. However, thanks to various developers who have the skill to help Excel move beyond descriptive accountancy mode (filters, look-ups, totals, bars and pies), there are some plug-ins worth checking out.

PowerPivot is one of these; it works with Excel 2010 or newer. Microsoft offers it as a free download, and it makes pivoting and charting quicker and easier.