Tag Archives: Market Research

Measuring the 2 million combo menu.

Upsize this. Downsize that. No wonder we have whiplash: a fast food menu presents us with something like 2 million possible combinations.

In an age of mass customisation we now have the tools to measure really complex customer choices.

Scenario one. A young guy walks into Burger King and he’s starving. He’s got $10 in his pocket and while he waits in the queue he stares up at the menu board. Does he go for the tender crisp chicken wrap with honey mustard? Or does he stick with his usual Double Cheese Whopper? Or should that be a single cheese but with an upsized Coke? Next please. He makes his order.

Scenario two. Life insurance. The young mother sits at the dining table with her tablet open at the Life Direct website. She is comparison shopping – looking for the best life cover for herself and her husband. She enters the details into the online calculator, nominates how much cover they need (enough to cover the mortgage), and six competing offers pop up within 15 seconds. These are priced marginally differently per month. She has heard good things about some of the insurance companies, but bad things about one of the six. And a few of the competing offers come with additional conditions and benefits. She weighs everything up and clicks one.

Scenario three. The couple have all but signed up for the new Ford. They tested other competing brands, but this is the hatchback they like best. “Of course,” says the salesman as he starts to fill in the sales order form, “we haven’t discussed your options yet. Do you prefer automatic, or manual? How about the sport model with the mag wheels? That’s only $1200 extra. Of course you have the two-door, but how about the four-door option? And it’s a bit extra for the metallic paint – though it tends to age a lot better than the straight enamel colours.”

“You mean it’s not standard?” she asks. The couple look at each other. Suddenly the decision seems complicated. She murmurs to her partner that maybe the Mazda with the free “on road package” is back in the running. The salesman begins to sweat. The deal is slipping through his fingers.

We are in the age of mass customisation

Three very common consumer scenarios. In an age of mass customisation, consumer choices don’t just come down to brand preference or advertising awareness – the two staples of consumer research – but rather to an astonishingly complex human algorithm which somehow handles what we might call menu-based choice, or MBC.

How complex? One estimate I have seen is that the typical burger menu board – the one you must decide from during the 90 seconds you spend in the queue – offers in excess of 2 million possible combinations and permutations for a typical $15 order. Double cheese. Extra fries. Upsized Coke. Hold the beetroot. Everything comes with a cost and a trade-off, governed by personal tastes, one’s own sense of value and the various conditional skews of this week’s promotions. Somehow humans manage to make a decision, though to be honest, you can see how habits develop. They become an efficient way of dealing with information overload.

A big challenge for market researchers

How do researchers even begin to test the attractiveness of 2 million options? The problem for market researchers is not just one of volume. The complication comes from the fact that consumer choices are not strictly rational. When you think about it, paying an extra dollar for an additional slice of cheese in a double cheeseburger is bloody expensive. And if I’m constrained by the $10 note in my back pocket, then who can tell whether I would prefer an upsized Coke and small fries with my burger, or a jumbo Coke and no fries at all? Or no Coke at all, but extra fries? Or Coke and fries but hold the extra cheese? Or…

Menu-based choice modelling is a relatively new field of market research that springs from its roots in conjoint analysis.
Conjoint, I learned recently to my delight, was a term first applied by my statistical hero John Tukey, who was fascinated by the question of how we can measure things that are evaluated not discretely but jointly – hence con-joint.
Since the 1970s conjoint approaches have been applied with increasing confidence.

At first these were paper-based, and a typical study of this kind gave respondents a set of 16, 24 or 32 cards on which various combinations and permutations of product descriptions were portrayed. Generally this might be enough to efficiently calculate the relative attractiveness of competing offers.

For example, if five competing brands of peanut butter came at four different price points, in low or medium saltiness, and in super crunchy, regular crunchy or smooth, then we have 5 x 4 x 2 x 3 = 120 potential product combinations to test. Good design might reduce this to a fairly representative 32 physical cards, each with a specific combination of brand, price, saltiness and texture.
The conjoint approach – with carefully calibrated options to test – enables us to determine which factors most drive consumer decisions (price is often the main driver), and the degree to which consumers trade off, say, a preferred brand at a higher price against a lower-priced offer from a brand we don’t trust so well.
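
For the curious, here is a minimal sketch in Python of that arithmetic. The brand names and price points below are invented for illustration, and a real study would use a statistically balanced fractional design rather than a random draw of cards.

```python
# Toy illustration only - not a proper experimental design tool.
import itertools
import random

brands = ["Brand A", "Brand B", "Brand C", "Brand D", "Brand E"]   # 5
prices = [3.99, 4.49, 4.99, 5.49]                                  # 4
saltiness = ["low salt", "medium salt"]                            # 2
textures = ["super crunchy", "regular crunchy", "smooth"]          # 3

full_factorial = list(itertools.product(brands, prices, saltiness, textures))
print(len(full_factorial))  # 5 x 4 x 2 x 3 = 120 possible product profiles

# A paper study can't show all 120, so a reduced set of "cards" is used.
# Here we simply draw 32 at random; real conjoint designs pick them for balance.
random.seed(1)
cards = random.sample(full_factorial, 32)
for brand, price, salt, texture in cards[:3]:
    print(f"{brand} | ${price:.2f} | {salt} | {texture}")
```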

Conjoint has the advantage of boiling each variable down to utility scores, and – since price is one of those variables – allowing us to put a price tag on brand appeal, or on flavour.

Even so, because the paper-based system requires at least 32 cards to give us enough data on the peanut butter survey, it places a high cognitive load on respondents.

By the 1980s conjoint studies could be run on computers, and this enabled some clever variations. First, the computer approach eased the cognitive load by offering three or four cards at a time, so respondents could more easily choose their preferred option (or “none of these”) over nine or ten iterations. Another innovation – useful in some situations but not universally – is adaptive conjoint which, as it cycles a respondent through a series of choice exercises, may quickly decide that this respondent never chooses the Sanitarium brand, and so begins testing variations among the preferred ETA and Pam’s brands. It focuses on the useful part of the choice landscape.

These approaches have been steadily honed and refined, and I have always admired the developers at Sawtooth for working and reworking their algorithms and methodologies to create increasingly predictive conjoint products. They are willing to challenge their own products: they moved away from favouring Adaptive Conjoint a few years ago.

Up until 2010 the software offered respondents a choice between “competing cards.” This approach works best on mass-produced products such as peanut butter, where there may be fewer than 200 combinations and permutations available. However in the last decade or so, with the rise of e-commerce and the presence of bundled offers (think how phone, energy and broadband packages are now bundled), classic choose-one-card conjoint only goes some of the way to explaining what goes on in the consumer mind.

Enter MBC.

This is a Sawtooth product and it is a particularly expensive piece of software. I recently forked out around NZ$12,000 for a copy and, dismayingly, instead of receiving a sexy drag-and-drop statistical package, had the equivalent of a jumbo bucket of Lego bits offloaded onto my computer. It isn’t pretty. It requires HTML coding, and a genuinely steep learning curve to get a handle on.

In my case I’m fortunate to have worked with conjoint for a few years now, so conceptually I can see where it is coming from. Even so, I can think of no software I have dealt with in 22 years that has demanded so much of the user. Luckily, by the time the respondent sees it, things aren’t so complicated.

Not pretty for the researcher – but emulates reality for the respondent

With the right HTML coding, the online survey presents – metaphorically – a replica of the Burger King menu board. There you can choose the basic flavour of burger (beef, chicken, fish, cheese) and then top it up with your own preferred options. Choose the beef value meal if you want – and upsize those fries. As you select your options the price goes up or down, so respondents can very accurately replicate the actual choice experience.

And that’s the point of MBC: to emulate real choice situations as closely as possible. We might discover that consumer choice architecture radically transforms itself depending on whether the price point is $10, $12 or $15. Sawtooth’s experience shows that many of the trade-offs are not strictly rational. There may be an overriding decision about whether to order a Whopper or a Double Whopper, but regardless, the shopper may always prefer a Coke of a certain size. In other words there is no trade-off here – or if there is, it applies only to the fries.

In a typical example the respondent is shown 10 variations of menu boards, and each time given a price limit and asked to choose. Given that the questionnaire needs HTML coding, it will cost the client somewhat more to conduct an MBC survey compared to a regular out-of-the-box online survey. The investment should be well worth it.

Another consideration: given the 2 million permutations available, this is not something to be tested with small sample sizes. A thousand respondents may be required as a minimum, each making ten choices, to generate 10,000 data points. Picture these as trig points on a data landscape.

Given enough of these reference points, the software can fill in a complete picture of the consumer-choice landscape. You simply can’t do this when you break the survey down into constituent questions about price, taste, brand, options and so on – that is, when you consider these elements not jointly but singly.

Now comes the good part. By assembling this data, you could more or less model the optimum menu board. If I were Burger King, for example, I could probably massage the menu so that the average punter with $10 in their back pocket would spend not $9.20 but something closer to $9.80 – roughly a 6.5% lift in revenue, willingly forked out by happy customers. If I wanted to try a special offer on the flame-grilled chicken burger, I could see what impact this would have on revenue and on the demand for other options.
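
To make that “massage the menu” idea concrete, here is a deliberately crude what-if simulator – emphatically not Sawtooth’s MBC engine, and every price, utility and budget figure in it is invented – just to show how modelled preferences let you compare average spend under two menu configurations.

```python
# Toy what-if simulator with fabricated numbers - illustration only.
import random

random.seed(7)
BUDGET = 10.00

MENUS = {
    "current menu": {"burger": 6.50, "upsize fries": 1.20, "large Coke": 2.50},
    "tweaked menu": {"burger": 6.50, "upsize fries": 0.90, "large Coke": 2.80},
}

def average_spend(menu, n_customers=1000):
    """Each simulated customer buys the burger, then adds an extra whenever
    their randomly drawn utility for it beats the price (scaled by their own
    price sensitivity) and the running total stays within the $10 budget."""
    total = 0.0
    for _ in range(n_customers):
        spend = menu["burger"]
        sensitivity = random.uniform(0.8, 1.6)   # dollars of utility demanded per dollar
        for item in ("upsize fries", "large Coke"):
            utility = random.uniform(0.5, 3.0)   # how much they value the extra
            price = menu[item]
            if utility >= price * sensitivity and spend + price <= BUDGET:
                spend += price
        total += spend
    return total / n_customers

for name, menu in MENUS.items():
    print(f"{name}: average spend ${average_spend(menu):.2f}")
```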

How accurate is MBC?

As with any conjoint approach, accuracy depends largely on the questionnaire and survey design, which can be tested before the study goes into field – by generating random data and running it through the MBC simulator.

If I were doing a landscape map of New Zealand and had 10,000 trig points, I could probably come up with a pretty accurate picture of the geological landscape. There would be enough points to suggest the Southern Alps, and the ruggedness of the King Country, for example. But it wouldn’t be totally granular or perfect, and I would be extrapolating a lot of the story.

So similarly, given that we are testing, say, 2 million combinations of menu with just 10,000 data points, don’t expect the modelling to be perfect. Sawtooth has tested MBC against actual data held out for comparison, and found the software picks the right options with 80% accuracy or thereabouts. So for my money MBC is right up there with neural networks for accuracy, but much more useful than NNs for testing menu-based choices and explaining the interactions that are going on.

Currently I’m using the software for a client who is looking at bundled products in a particularly complex market – that’s why I bought it. There is simply no other tool available to do the job. Without giving any client information away, I will within a few weeks disguise the data and make a case study available to show how MBC works, and how it can be used.
I mention this because I am a strong believer in the idea that market research professionals ought to share their experiences for mutual benefit. As a profession we are under mounting challenge from the big data mob. However, so long as they stay in the world of Excel, and so long as market researchers remain aggressive in the way we trial and purchase new software advances, we will continue to have a healthy profession. My first reaction to MBC is that it is an answer – at last – to a world where customers can now customise just about anything. You are welcome to contact me if you want to learn more.

Duncan Stuart
64-9 366-0620

The big thing we forget to measure

In our market research reports we generally make a concerted stab at the idea that our data is both precise and reliable. We gleefully report the sample size, and we pinch our fingers together as we cite the maximum margin of error – which in many surveys is plus or minus 3.1%. Talk about pinpoint accuracy!
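
As an aside, that familiar ±3.1% is just the worst-case margin of error for a simple random sample of roughly a thousand people – the n=1,000 figure here is my assumption, since that is where the number usually comes from. The arithmetic, sketched in Python:

```python
# Maximum margin of error at 95% confidence, worst case p = 0.5.
import math

def max_margin_of_error(n, z=1.96, p=0.5):
    return z * math.sqrt(p * (1 - p) / n)

print(f"{max_margin_of_error(1000):.1%}")  # ~3.1% for a sample of 1,000
```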

Yet we blithely ignore the fact that our clients work in a fuzzy universe where things go right, or horribly wrong. If you were the brand or marketing manager for Malaysia Airlines this year, I really wonder if your standard market research measures – brand awareness, consideration, advertising awareness et cetera – would have anything remotely to do with the fact that you have lost, very tragically, two airliners within the space of a few months. Risk happens. Regardless of your marketing investment, passengers aren’t flying MH.

Or if you are the marketing manager for Coca-Cola in your country, do you honestly think that the subtle shifts of brand awareness, ad recall and consideration have as much effect as whether this summer proves to be wet and dismal, or an absolute scorcher?

We may not have a crystal ball when it comes to weather forecasting, but we do have decades of accurate climate data. When I did this exercise a few years ago I gathered 30 years’ worth of data, popped it into Excel, then used a risk analysis tool to come up with a reasonable distribution curve based on that data. Then I could set my temperature parameter – below x° – and on that basis could fairly reasonably calculate that my city had a 20% chance of having a dismal summer. The risk was high enough, I felt, that any marketing manager for any weather-sensitive product or service should have a contingency plan in case the clouds rolled over and the temperatures dropped.
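
For anyone without @Risk, here is a rough equivalent of that exercise in plain Python. The 30 “years” of mean summer temperatures below are fabricated for illustration (a real analysis would use the climate office’s records), and the “dismal summer” cut-off is likewise my own assumption.

```python
# Rough equivalent of the @Risk exercise - fabricated data, illustration only.
import random
import statistics
from math import erf, sqrt

random.seed(3)
summer_means = [random.gauss(22.0, 1.8) for _ in range(30)]  # deg C, invented

mu = statistics.mean(summer_means)
sigma = statistics.stdev(summer_means)

def prob_below(threshold, mu, sigma):
    """P(summer mean < threshold), assuming a normal distribution fits the data."""
    return 0.5 * (1 + erf((threshold - mu) / (sigma * sqrt(2))))

dismal_cutoff = 20.5  # "dismal summer" threshold - an assumption
print(f"Chance of a dismal summer: {prob_below(dismal_cutoff, mu, sigma):.0%}")
```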

Why don’t we do this more often? Why don’t we build in the various risks that accompany the work of our clients? If we did, we could better help them to make decisions as circumstances arise. We consider some risks – what if the competitor achieves greater brand awareness and consideration? – yet we treat many of the other risks (weather, currency or price fluctuations, whether or not supermarkets choose to stock us, whether some kind of health scare will affect our category, et cetera) as if these were off-limits and outside our scope.

Yet these data are not outside our scope at all. The data may not come in via our surveys, but they are just as relevant. Going back to the weather example: we did observational research at a local lunch bar, and found that on wet, cool days the pattern of drinks consumption was quite different from that on hot, sunny days. It wasn’t just a question of volume. On cool days people switched slightly away from CSDs towards juices, as well as choc-o-milk-type drinks.

So if I were a soft drink marketer, and I had a market research firm supplying me with climate risk data, as well as an idea of how people behave when it is hot, average or cool, then I might come up with a marketing plan for each of those circumstances. I would go into summer expecting it to be average, but as soon as the long-range forecast told me that the weather was indeed going to be cool, I would think: well, I had a 20% expectation that this would happen. Time to wheel out Plan B. I’d be prepared.

The risk analysis tool that I use is called @Risk and it is frighteningly simple to use. It works as a plug-in to Excel, and takes roughly 10 minutes to learn. Since using the software, my outlook on what we do as market researchers has totally changed.

We are not in the survey business. We are in the business of assisting our clients to make informed, evidence-based decisions.

Sometimes the evidence comes from our survey work – bravo! But sometimes the evidence comes from the weather office, or from the war zone in Ukraine.

The Adam Sandler effect

I was thinking a little more about the different rules we apply when we make our customer choices, and how these nuances may be lost if we ask research questions in the wrong way.

A really simple example, and one I’ve mentioned before, illustrates what I call the Adam Sandler effect.

What happens is this: Five of you decide to go to the movies this Saturday. So far so good.

But which movie? You and your group love movies, and surely you have a collective favourite to see. So you start discussing what’s on and four of you agree the new Clooney film is the one you want.

“Ah… but I’ve already seen that film,” says the fifth member of your clique.

The veto rule.

Okay what’s our next best choice? And so it goes. Whatever you choose, somebody has either already seen it, or has read a tepid review.

What you have here is a collision between two competing sets of rules. You set out to see your favourite film, and instead you and your group end up seeing the “least objectionable” film that, actually, nobody wanted to see. This is where Adam Sandler, I swear, has earned his place as one of Hollywood’s three top-grossing actors of the past 10 years.

Apart from The Wedding Singer, which was a great little film, the rest have been an appalling bunch of half-witted comedies. Little Nicky, anyone?

It doesn’t matter. Every weekend at the movies there is a blockbuster or three – and then there is Adam Sandler, lurking there: his movies ready to pick up the fallout from your friends’ well-meaning decision process.

Now for researchers this has serious implications. If we ask only about what people want, then we may end up with a theoretical ideal – but our research will never pick up the kind of goofy, half-assed slacker productions that actually gross the big dollars. In our questionnaires we need to think about how we might pick up the Adam Sandler effect. Good luck to the guy. He has the knack of reading the Saturday night crowd much more accurately than most of our surveys could ever hope to.

  • Choices depend on positives as well as vetoes
  • When two or more people make a decision, the outcome depends more strongly on vetoes than on positives
  • There is always a market for things that are least objectionable.

Here’s why I think Market Research companies are flying backwards.

Yes, technically we’re flying, but only just. Kitty Hawk thinking doesn’t help high-flying clients.

The very last meeting I attended when I was with my previous employer did not go particularly well. It was the weekly meeting of senior execs and on the agenda were things like staff training, client account management, and the usual HR concerns. I already knew that I was going to be leaving the organisation, so I wanted to end by contributing something which I felt was very important. “I want to run some staff training on SPSS.”

I didn’t expect the answer from my senior colleague. She asked simply: “why?”

I was quite taken aback, and I must admit I fumbled for an answer, mumbling things like “extracting more value from the data we collect.” I can’t remember my exact words but they were along those lines. Anyway the workshop suggestion got kiboshed, and I left the organisation a few weeks later.

Three weeks ago one of the bigger clients of that same organisation announced that they were getting SPSS in-house, and they introduced me to a new member of the team, a particularly bright analyst who had previously worked with the banks.

I realised I had just witnessed a well-expressed closure to my fumbling argument from 18 months earlier. The reason we need to smarten up our analytical skills as market researchers is that if we don’t, we will simply get overshadowed by our clients.

In fact I see this all over the place. In-house analytical units within banks are using SPSS and SAS quite routinely, and most of the major banks have adopted text analysis software to help them swiftly code and analyse their vast streams of incoming verbal feedback. The text analysis centre of gravity in New Zealand is located client-side, not on the side of the market research industry. The same could be said of big data analytics. MR companies are scarcely in the picture.

Meanwhile, what is happening within market research companies? Well, this last week I’ve been in conversation with two analysts looking for better, more challenging roles than they have been given within their market research firms. One of them, an employee of one of New Zealand’s leading MR firms (and one I largely admire – they are by every measure a world-class operator), had for several years been asked to produce crosstabs and other descriptive outputs, and had on no occasion, ever, had his mental capabilities even remotely stretched. I’m talking about a graduate in statistics who has effectively been cudgelled to death by the rote boredom of low-calorie market research thinking. I asked what he was equipped with, software-wise, and he told me: “Microsoft Excel.”

This is simply not good enough, either professionally or strategically. While globally the volume of marketing data analytics is growing by something like 45% per annum, the market research industry is flat-lining, or showing single-digit growth at best. In other words most of the growth is happening over at the client’s place. And they aren’t even specialists in data.

If the market research industry wishes to stay relevant, then it has to offer something that clients can’t provide for themselves. It used to be that market researchers provided unrivalled capabilities in the design, execution and analysis of market research projects. The key word here is “unrivalled”, but I’m afraid the leading research firms are simply being outstripped by their own clients.

The mystery to me is why the larger firms appear blind to this phenomenon. Perhaps, in building their systems around undernourished, under-equipped DP departments, they have a wonderfully profitable business model. Pay monkey wages, and equip them with Excel. And for that matter, keep them at arm’s length from the client so they never get to see how their work even made a difference. The old production-line model. Tighten the screws and send it along the line.

Or perhaps the big firms are simply comfortable doing things the way they’ve always done them, or perhaps senior managers, having grown up with Kitty Hawk thinking, lack the imagination or the will to fly into the stratosphere.

Either way, if I were a young analyst looking at my career prospects, my attention would be over on the client side, or on dedicated data analytics operators such as Infotools. That’s actually a painful thing for me to say, speaking as a life member of my market research professional body. But if the status quo prevails, then we are going to see not just the relative decline, but the absolute decline, of our industry.

What can market research firms do to rectify this problem? Here are four suggestions:

  1. Invest in decent analytical software. Just do it. A few thousand dollars – for a much better return than sending your exec to that overseas conference.
  2. Reignite the spirit of innovation that comes from your groovy team of analysts. Rather than focus merely on descriptive data, let them loose on the metadata – the stuff that explains the architecture of the public mood.
  3. Develop a value-add proposition to take to clients. Look, all of us can describe the results of a survey, but we market researchers know how to make that data sing and drive decision-making.
  4. Employ specialists in big data, so that as market research companies we can integrate the thinking that comes from market surveys and qualitative work with the massive data sitting largely untouched in the world of the client.

In my view the market research industry has been going off course for the last 20 years. We are stuck at Kitty Hawk. We stopped shooting for the moon.


Why we need to measure corporate forgivability

One day before the explosion in 1984, Union Carbide were the friendly guys who brought you the Eveready brand and the Cat with Nine Lives. The NPS would have been pretty good – and pretty useless at measuring the forgivability, or lack of it, of the once-proud Union Carbide name.

Most organisational and market research measures – the KPIs – are readings of static concepts rather than of processes.

The well-trodden NPS measure, for example, is a measurement of hope (how many of our customers would recommend us?) but it does little to indicate whether this score might be in danger of imploding. It is like a measurement of speed without the other vital measure on your journey: how full is the tank? The speedometer and fuel gauge together measure a process and offer a predictive capability. “At this speed we’re using up too much gas to make it to the destination.”

Here’s another measure: staff satisfaction. It may be high, it may be low, but the score itself says little of any predictive use. Thus everybody might be deliriously happy at work (and I hope they are), but the happiness index gives us no idea whether they’ll stay happy once the takeover goes through, or when the job cuts are announced. In fact the measure is pretty useless. It is a thermometer, when really it might be more predictive to have some kind of staff barometer which hints at soft or deep changes immediately ahead.

Now brands are measured along similarly useless lines. They’re measured statically but not predictively. We measure the status of the brand, but that gives no feel for whether it is heading toward a cliff, or whether it would survive the fall if it went over.

When we judge the people around us, we don’t just stop at “he’s a great guy” or “she’s 5 foot 11” or “she’s a real leader”; we almost always add what I refer to as the moral dimension. “He’s a great guy… BUT I wouldn’t trust him with my money”, or “She’s a real leader… AND did you see the way she showed so much respect, even to the people she had to make redundant.” In other words we don’t just settle for today’s status update, we also tend to throw in a prognosis of how that person will act if they’re put in a conflicted, morally challenging position. We have a “Moral Vocab” with which we assess the likely behaviours of those we judge. Great guy – but morally iffy. Great leader, and puts people first no matter what.

The star term, I reckon, in this Moral Vocabulary is forgivability. It is a useful concept because it accepts that all people will fail at some point, that all organisations will have their crisis, and that every brand will have its “New Coke” or “Ford Edsel” moment. They’re bound to. Forgivability measures how people will respond if and when that crisis occurs.

In terms of the resilience of the brand, or the company, the real test is not how you rate when everything goes according to plan – but how quickly you can bounce back if you falter.

To take an obvious case, Tiger Woods was the undisputed star of the golfing world (and still the highest-paid athlete in 2013, despite not winning a major tournament). A typical brand measure, up to the point of his personal and media scandal, would have given him stellar results on, say, an NPS scorecard, or on any other brand index I can think of.

But look how quickly that turned. The moment Woods fell to earth in late November 2009 (or should I say, the moment he ran his SUV into a fire hydrant), everything changed. The tank of public goodwill suddenly showed “less than half full” and sponsors started to walk away. Suddenly the qualities we valued in this amazing sportsman were reframed and seen in a new light. Determination? Or simply arrogance? Success? Or just runaway greed? Perfection? Or just a sham facade? Everything that a brand measure might have rated as superb one day was shattered within 24 hours.

Four years later, forgiveness has largely occurred. The gallery is generous once more when he makes a great shot, and descriptions of this golfer are laced more with qualifications about his pay cheque or his climb back to form than about who he has a personal relationship with.

Organisations can rate well in terms of forgivability, or they can rate poorly. It depends, as it did for Tiger Woods, on the seriousness of the sin and the forgivability of the sinner.

In my view, Tiger Woods’ forgivability was undermined by the woeful stage-managed response of his sponsors. Remember those bleak Nike ads where a voice-over (supposedly Tiger’s own dad) remonstrated mournfully with our hero? These attempted to package the redemption of Tiger Woods into the space of a 30-second TVC. The ads assumed that with a quick show of humility we’d be swift to forgive the golfing superstar. Instead the ads gave us evidence that Woods – behaving not as a man but as a marketing juggernaut – was attempting to media-manage his way out of his mess. It looked merely like insincere spin doctoring. Another sin! And so for weeks the Woods machine kept heaping more fuel onto the fire.

But what of your organisation? It may be sailing along – the speedometer reading high, the thermometer reading nice and warm – but what if it made a blunder? It will happen. How will your stakeholders or customers respond?

Apple, that golden child of the business media, has a string of business and product blunders a mile long. But was it forgivable? Absolutely. Why? Because the products are cool and because Steve Jobs never really deviated from his vision. The public understood his quest and knew that some failures would litter the pathway to success. No problem.

But post-Jobs, I think the forgivability factor is trending down. Steve’s quest is over and what we perceive is the hulking cash cow of an organisation he built. The product may be designed in California, but the cash is domiciled wherever the company can avoid tax. Things like that start to reframe Apple not as “one man’s passion” but as just another bloody corporate. In that light, every new launch looks less like Steve’s marvellous march of innovation and more like the CFO’s latest plan to sucker the public. You can almost hear Mr Burns from The Simpsons: “A masterstroke, Smithers! We’ll do what Microsoft used to do with Windows. Yet our fans will still think we’re the anti-Microsoft!”

Some sins are purely business as usual. Coke really did believe their New Coke formula was a better, more preferred option. They just didn’t think things through.

But some sins are simply not forgivable. Union Carbide, that industrial fortress of a company that made Eveready batteries, pesticides and Glad Wrap, was responsible for one of the worst industrial accidents in human history: the Bhopal disaster in India, back in December 1984.

Here was a company that was deliberately trading off the cost of safety in order to boost profits from its poorly resourced pesticide plant, located in a heavily populated area. As a result of the MIC gas leak an estimated 40,000 individuals were left permanently disabled, maimed or suffering from serious illness.

That was bad enough. But then, after the disaster, Union Carbide tried overtly to avoid culpability and to avoid paying compensation to the families of the accident’s thousands of victims. There was no mea culpa – instead the company fought a legal battle before finally settling with the Indian Government for US$470 million, five years after the disaster. The guts of their defence was that they weren’t responsible as a company for Bhopal – it was the fault of their employees in India. It was a massive squirm. The head of the company, Warren Anderson, was never brought to justice in India: he fled the country while on bail and has since fought extradition from the USA. Today the company no longer owns its flagship brand (Eveready) and is part of the Dow Chemical Company, which has inherited the mess. In 2010 (25 years after the disaster) eight former executives of Union Carbide India Ltd were finally found guilty of death by negligence. Dow, themselves tarnished by their own history with napalm and Agent Orange, are still assisting with the cleanup of the highly toxic Bhopal site.

Mistakes, blunders and sins can be made by any organisation. But how soon can these organisations recover – how soon can they be forgiven? In a dynamic world researchers need to measure these things. In my next blog I’m going to dissect the elements of forgivability. Get it wrong and your organisation will tread an unnecessarily risky path.

Marketers need to shed some skin in 2014

You are your own brand! Well, sadly, that’s what motivational experts are telling us.

People who know me will know that I feel some disdain for the concept of personal branding. “You are your own brand!” state dozens of personal branding experts, and in so doing they ignore both the inadequacy of branding to convey the rich, complex story of who you really are, and the ugly human history in which slaves were literally branded.

The shallowness, the sheer glibness, of the “you are a brand” thinking is revealed all over the place. Sportspeople, after turning in a losing performance, no longer kick themselves or admit they played poorly. No, these days they’ll only admit the lopsided score was “bad for the brand.” Clearly it’s not whether you win or lose that counts; it’s how you affected the business value of the franchise.

In corporations, similar summaries are given when management has made a flawed set of decisions: the wrong widgets have been launched, the customers don’t buy them, and 10,000 workers are given one month’s notice for a mistake they didn’t make. Up in HQ the conversation goes something like: “Those widgets, gentlemen – they did nothing for our brand.”

That’s one thing that rubs me the wrong way about elevating the importance of the brand so high that people will trade in their own identity in order to be packaged up. The brand isn’t as important as many marketers think.

The hyper-valuation of company brand equity began during the hectic years of the 1980s, shortly before the catastrophic 1987 Wall St collapse. Companies with ailing turnover figures and slack market share suddenly realised that, despite everything, the brand itself had valuable equity. This is true, to an extent. If you measure something like “consideration” (which brands of new car would you consider?) then brands explain part of the story. They help explain why Toyota might be forgiven the occasional safety recall, or why Skodas may be good cars but will never even be considered by a sizeable chunk of their markets. Not with their East European legacy. But the accountants and CFOs forgot something along the way.

The moment accountants started treating brands as a tangible asset, things got confusing. You have to treat assets according to certain rules – for example in terms of depreciation, or market value. But the moment a brand is given a dollar valuation, other questions such as fit or positioning play second fiddle when it comes to boardroom decisions. The only measure that has clout, really, is the bottom-line dollar. So the brand, whatever it stood for, can easily get screwed around by financially focused directors. When all you see are dollar signs, then any brand value looks like cash.

Reuters in 2010, when reporting the sale of Cadbury to Kraft, quoted Felicity Loudon, a fourth-generation member of Cadbury’s founding family. She said she was appalled that the company looked destined to fall to Kraft, predicting jobs would be lost and its chocolate would never taste the same.

“We shouldn’t give up,” she told Reuters. “For a quintessentially, philanthropic iconic brand to sell out to a plastic cheese company — there’s no mix there.”

She had a point, though of course it fell on deaf ears. Four years later, at least in my market, Cadbury product is still being discounted to hell to undo some of the damage wrought by that sale. King-size block for block, it is routinely a dollar cheaper than its nearest rival. The problem was that the takeover was measured in dollars and not in any other values.

This seizure of brand valuation by the CFOs and accountants leads me to my main point. Branding itself has become commoditised.

I don’t think this rapid decline in the purpose and nature of brands has been helped by marketers (who all too often get little representation at boardroom level) or by my own profession, market researchers, who have watched on, with little reaction or understanding, as the dynamics of corporate decision-making have changed. The things we used to champion (brands, ideas, packaging, product concepts) have been grabbed and redefined by the finance boys. (And a few finance girls too.)

So they talk about things being good for the brand, or bad for the brand, yet they appoint underpowered brand managers who prescribe undercooked, old-fashioned brand research that belongs in the 1950s. Good for the brand?

Today’s marketplace, meanwhile, is being liberally peppered with stories of unknown start-ups that have taken on the big brands and are aggressively eating into the stalwarts’ market share. The sticker value on the old brands proves a poor defence against products with better ideas.

If you accept that the concept of branding is under siege – and I’m sure a heap of readers will disagree – then the prescription is to get under the burned skin of branding and start examining more closely the heart values that dwell below. These days my marketing language is more apt to be enriched with talk about “forgiveness” and “resilience” and other words that refer not to the bottom line but to the human condition.

I also think modern history is on my side. Since 2008 there has been a rapid lift in the conversation about the wealth gap, about poverty, about massive corporate tax evasion, about third-world exploitation and about sustainability. In my view the ice is getting mighty thin for organisations that measure shareholder return as if it’s the only thing that matters.

As I tell the personal branding experts: I am not a brand, I am Spartacus.

Going deeper not cheaper in research.

Can social media really help us deeply understand the human landscape?

I’ve been ruminating lately on subjects as diverse as Twitter and suicide – thinking about the way people connect but don’t really connect, even in an age of social media. Today in the daily newspaper was a coroner’s report on a boy who ended his life, and really all the warnings were there in his Facebook messages. Nobody, it seems, tangibly connected with the depressed teenager: a tragedy.

But the story stands, I feel, as a sad metaphor for the work we do as market researchers who, despite being armed with the best technology, and despite the excitement of being able to conduct research in real time via smartphone apps, languish in the shallows of human understanding. Discussion papers about new methodologies tend mostly to take a channel-centric view of the new research media, rather than exploring what depth of understanding we will achieve. Can we use social media to get us something deeper – more immersive – than what we get in a typical CATI or online survey? I’m not seeing too many papers on the subject.

So I embarked on a personal experiment to see whether it is possible to gain a deeper, more experiential understanding of a different culture via standard social media, including Facebook, Twitter and the notorious but interesting Ask.fm.

I’m not going into details here, other than to make some points which I may explore in later posts.

  1. Social media demand that we develop a persona. They give us around 20 words and room to post one photo, and that – effectively – is our identity or mask. Social media are thus quite limiting.
  2. Masks may be held tight – some online personas are heavily protective or overly managed (or branded) – so some subjects are more responsive or open to engagement than others.
  3. A second dimension to the online persona is the focus of the person. Online, if not physically, some people tweet and message fundamentally to report on themselves rather than to engage with others. Here’s a pic of my lunch. A minority fundamentally enter social media to listen, to engage, or to ask questions of others.
  4. The third dimension is – for researchers – potentially the most interesting. Most people engage, friend or follow people within their own circles of interest. This is heavily geographical (my workplace, my college, my neighbourhood) but also expressed in terms of my interests. We market researchers use a hashtag on Twitter to find each other. But a minority of users open themselves up to random and ‘foreign’ links. They do this because they are inquisitive, and because they are also interested in helping other people.

It strikes me that if I were to use social media as a channel via which I could conduct some kind of social anthropology I’d get a vastly different set of insights from those who are:

  • Type One: Heavily masked, ego-centric and confined to their circle. In fact on Ask.fm I was lucky to get answers at all from these people, despite their invitation to “ask me anything.” They showed a very low engagement level. Versus:
  • Type Two: Open and upfront people, at least semi-focused on others and not just themselves, and open to joining “foreign circles.” These people are engaged, interesting and open to discussing questions.

The conversations one encounters via social media are conversations in the true sense: they happen over time. For that reason I’m less trusting of the idea that we can achieve a true understanding of individuals or customers if we rely (as we’ve always done) on snapshots. When engaging with a set of strangers on my personal journey, I was struck by their mood swings – perhaps amplified by the nature of written communications and the starkness of the pictures they chose for Tumblr – and by the shifting nature of their opinions.

Underneath this was the basis of any good conversation: the degree to which both parties know about each other – and whether they can develop a shorthand, and use shared references and metaphors as foundations on which to build trust and then converse on deeper ideas. In most cases, as I engaged with strangers, I could do this – but not in all. In every case the process took time.

What are the implications of this? My first inclination is that we can and should use social media the way a spider uses its web – it can feel an insect landing in the far reaches of its domain. So what we need are agents or listeners at the far reaches of our own webs. For example, if I were asked to use social media to explore a ‘foreign’ market (users of herbal remedies, say, or San Francisco football fans) I would seek a shortlist of people who are Type Two. Then I would ask them all about their worlds. I could get far more insight via them than I could via hundreds of completed responses from the relatively unengaged Type Ones.

In saying so, I’m consciously moving away from scientific sampling and classic design, and moving toward something else.

I do think we should be looking for systematic ways to use the strengths, and account for the weaknesses, of social media. It isn’t enough to say, “Oh, we now do research via social media.” That may point to truly immersive conversations, or it may paper over the cracks of particularly shallow, non-insightful feedback. Comments?

This blog reflects the paper I presented to the MRSNZ 2013 Conference – which won the David O’Neill Award for Innovation.

Social Currency seldom gets measured. Why not? It has the power to build a brand and to empower social change.

Fashion, of all sectors, is the most reliant on social currency. Do you measure it?

Market researchers have traditionally treated the buying public as an aggregate of individuals. Every mainstream statistical routine used in survey analysis does this: when we see mean scores, medians, top-2-box scores, factor analysis or segmentation work, what we’re seeing is the aggregation of individual results followed by some dissection of those numbers.

Yet the buying public is not simply a collection of isolated individuals. Buyers shop for families – and their tastes are shaped as much by the lactose intolerance of their 13-year-old daughter as they are by the chit-chat at the Tuesday morning coffee group. We buy according not just to our own tastes, but to the tastes of those around us. Our peers, research consistently shows, shape the norms around which we operate. Even questions of whether we’re overweight, or whether we smoke, are shaped to a considerable degree by what our peer groups view as normal.

For this reason we need a measure of the social index of brands and services. I may love brand X, but if my peers are all chatting enthusiastically about brand Y, then brand Y is more likely to become my choice too.

Social currency is a name for this measure and, as the name implies, there’s a degree of pass-on value or tradeability in talking about the brand or service. Right now in the USA Beyoncé is enjoying immense social currency – her Mrs Carter tour has been a conversation piece. Have you seen the footage? Did you see that moment when Jay Z crashed the stage? Those outfits. The music. She is worth talking about. Meanwhile Chris Brown is not getting talked about much, except in a negative way. I cannot imagine anyone starting a conversation with: “Have you heard the new Chris Brown song?” But I can imagine a conversation opening with: “You planning to go to the Beyoncé concert?” There’s a buzz about her.

Social currency implies tradeability, so to measure it we need to know why information or gossip gets traded at all. Why do we do it? There are several motivations.

  • Social currency is a form of social glue. By talking about Beyonce I can join the lunchtime conversation – we have a shortcut to affirming our similarities. I’m one of “us.”
  • Social currency affirms our usefulness to our peers. By tipping me off about the latest music release, or about the awful over-sweet taste of the latest confectionery, you prove worth knowing – you affirm your usefulness at least in a symbolic way.
  • Social currency may take the form of truly vital information for my community. When Rosa Parks refused to give up her seat on the bus, word of the incident spread like wildfire through the black communities, and gained traction through the churches. This wasn’t idle gossip – this was a deep social ‘moment’ coursing through the veins of the community.

For these reasons, social currency is relevant to just about everything that market researchers measure. Whether it’s the word of mouth around a product, a service, an event or a political scene – social currency is at work whether we measure it or not.

By measuring it we get a picture not just of the aggregate feelings of the market, but of how quick and how dynamic the peer-to-peer conversations are liable to be. Media analysts in Montgomery, Alabama wouldn’t have picked up on the swift undercurrents of dialogue that followed Rosa Parks’ arrest on December 1st, 1955. She was not the first black woman to disobey the bus driver’s ruling on that local bus line – but in this case the grapevine was already running hot.

I’ve seen brands decimated and even destroyed by social currency, and I’ve seen new brands launch spectacularly on the back of viral marketing and word of mouth.  Yet I’ve seldom seen social currency used as one of the core measures.

The Rosa Parks story is a fantastic yet simple illustration of how a piece of news spread via word of mouth, not because it was a big story (woman stays seated on bus) but because it was imbued with social importance at a time when racial tensions were close to boiling point. This was a few months after a black teenager, Emmett Till, was murdered for allegedly flirting with a white woman. It was a case where social currency was much more valuable than media currency or – presumably – the brand values of the Montgomery Bus Company.

I put the blame for this oversight on the shoulders of lazy, old-fashioned, ‘let’s not rock the boat with innovation’ research companies; on lazy ad agencies who talk about currency but dictate the use of bog-standard measures such as brand recall (yawn); and on marketers and their somewhat limited view (thanks to the universities that train them) of how humans really operate. This is one bus that researchers don’t so much sit on as miss completely.

How to find the needle in a haystack of 30 billion lines of straw.

When I was in NYC two years ago the Occupy Wall St movement was in full swing.  I wandered to nearby Wall St (the protest was in a park several hundred metres away) and the famous street was being protected by dozens of police mounted on horseback. There were cordons everywhere and brokers rushed from building to building during their lunch hour in what felt palpably like a state of siege. A pretzel cart operated in the heart of the empty street – selling drinks and snacks – and I asked the vendor, an Indian man, how business was going for him. Trading on Wall St, he assured me, was slow.

Still, even if trading has come down from the record highs of late 2008, the NYSE turns over something like 1.5 billion transactions per day, or around 30 billion every month. And somewhere, lurking in those 30 billion lines of data, is evidence of insider trading. The question is: how do you find it? The needle is small – the haystack unimaginably big.

The answer is that the SEC is often informed by the investigative team at FINRA – the Financial Industry Regulatory Authority – spearheaded by a man who used to be in the DA’s office: a prosecutor you wouldn’t want on your tail. His name is Cameron Funkhouser, and he’s a big, thickset man described by one colleague as an “I can smell a rat” kind of guy.

But get this – the investigative team he has put together to sift through all those billions of lines of data, looking for patterns of fraud and insider trading (do two apparent strangers make the same trades on the same days, repeatedly – could they be connected?), is not chiefly made up of data geeks and statistical nerds. Heading the unit, under Funkhouser’s command, is a team spearheaded by:

  • Joe Ozag – the whistle-blower unit (ex-terrorism detective).
  • Anthony Callo – once prosecuted homicides at the DA’s office.
  • Laura Gansler – screenwriter and the “brains” of the group.

This team is characterised by diversity, out-of-the-box thinking, and an understanding not so much of statistics as of people: our drives, our motivations and our craftiness – in short, our narratives. Laura Gansler might be the least statistical member of the group – she is an accredited screenwriter and author – but she’s well versed in developing credible storylines and getting inside the heads of crooked characters.

And this is the point about dealing with big data. For sure, your programmers and number grunts can dig around and reveal black-and-white statistical evidence of the occasional insider trading scheme – but the real value in this analysis comes from those who can furnish a credible story and, using logic, suggest where best to look for the telltale fingerprints, bloodstains and data trails that mark the crime.
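
To give a feel for the kind of pattern hinted at above – two apparent strangers repeatedly making the same trades on the same days – here is a toy sketch in Python. The trade records are invented and this is certainly not FINRA’s actual methodology; it simply shows the shape of the query before the humans take over.

```python
# Toy pattern search over fabricated trade records - illustration only.
from collections import defaultdict
from itertools import combinations

# (account, ticker, date, side)
trades = [
    ("acct_A", "XYZ", "2013-03-01", "buy"),
    ("acct_B", "XYZ", "2013-03-01", "buy"),
    ("acct_A", "XYZ", "2013-04-12", "sell"),
    ("acct_B", "XYZ", "2013-04-12", "sell"),
    ("acct_C", "ABC", "2013-03-01", "buy"),
    ("acct_A", "QRS", "2013-05-02", "buy"),
    ("acct_B", "QRS", "2013-05-02", "buy"),
]

# Group accounts by identical (ticker, date, side) events...
events = defaultdict(set)
for account, ticker, date, side in trades:
    events[(ticker, date, side)].add(account)

# ...then count how often each pair of accounts appears in the same event.
pair_counts = defaultdict(int)
for accounts in events.values():
    for pair in combinations(sorted(accounts), 2):
        pair_counts[pair] += 1

# Pairs that co-occur repeatedly are the ones worth a closer, human look.
for pair, count in sorted(pair_counts.items(), key=lambda kv: -kv[1]):
    if count >= 2:
        print(pair, "matched trades:", count)
```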

FINRA has proved remarkably effective at helping nail the bad guys, and Funkhouser speaks highly of his team.

The lesson for researchers and business people is that the best way to deal with massive data – 30 billion lines of the stuff every month – is to remember two things.

1)  Every bit of data reflects human activity. It isn’t about numbers – it is about people.

2) Get inside the mind of the bad guys – and you know where to look.

Incidentally the FINRA unit reflects, surprisingly closely, the fictitious Department S – the 1970s show that gave us Stewart Sullivan, conventional crime-fighting agent; Annabel Hurst, computer expert; and the redoubtable Jason King, cravat-wearing international novelist of mystery. Department S was born of a time when traditional crime (bank jobs and murders) was giving way to global operations. (The French Connection was another film that wrestled with the same escalation of crime.) So it is interesting that the escalation of data into big data should meet with a similar response: put talent and creativity onto the case, otherwise there will never be enough resource.


Je Suis Un Rockstar?

For Harry – the preso was going particularly well. Now it was time to present the conclusions.

Last summer I read the autobiography of rock guitarist Slash (Guns N’ Roses) with a very specific frame of reference. I was conscious that many researchers, locally and worldwide, like to bill themselves as rock stars of research. Ye gods! If being a rock star means living anywhere near 10% of what Slash got up to (the drug intake of a mid-sized hospital) then being a rock star needs to come with a health warning. Could you – even metaphorically – stand up at the end of a particularly good presentation, strip naked and play an awesome guitar solo as an encore? I just don’t think most of us researchers are wired that way – and while I love what musicians do (they make our lives much, much richer) I’m not sure we need to go to them to borrow our look. Nevertheless, here’s an article with Rock Star and Analyst in the title. Good advice, though I’m not sure I could fill a stadium if I took it.

8 principles that can make you an analytics rock star.