Category Archives: Producing Better Value

Measuring the 2 million combo menu.

Upsize this. Downsize that. No wonder we have whiplash: a fast food menu presents us with something like 2 million possible combinations.

In an age of mass customisation we now have the tools to measure really complex customer choices.

Scenario one. A young guy walks into Burger King and he’s starving. He’s got $10 in his pocket and while he waits in the queue he stares up at the menu board. Does he go for the tender crisp chicken wrap with honey mustard? Or does he stick with his usual Double Cheese Whopper? Or should that be a single cheese but with an upsized Coke? Next please. He makes his order.

Scenario two. Life insurance. The young mother sits at the dining table with her tablet open to the Life Direct website. She is comparison shopping – looking for the best life cover for herself and her husband. She enters the details into the online calculator, nominates how much cover they need (enough to cover the mortgage), and six competing offers pop up within 15 seconds. These are priced marginally differently per month. She has heard good things about some of the insurance companies, but bad things about one of the six. And a few of the competing offers come with additional conditions and benefits. She weighs everything up and clicks one.

Scenario three. The couple have all but signed up for the new Ford. They tested other competing brands, but this is the hatchback they really like the best. “Of course,” says the salesman as he starts to fill in the sales order form; “we haven’t discussed your options yet. Do you prefer automatic, or manual? How about the sport model with the mag wheels? That’s only $1200 extra. Of course you have the two-door, but how about the four-door option? And it’s a bit extra for the metallic paint – though it tends to age a lot better than the straight enamel colours.”

“You mean it’s not standard?” she asks. The couple look at each other. Suddenly the decision seems complicated. She murmurs to her partner that maybe the Mazda with the free “on road package” is back in the running. The salesman begins to sweat. The deal is slipping through his fingers.

We are in the age of mass customisation

Three very common consumer scenarios. In an age of mass customisation, consumer choices don’t just come down to brand preference or advertising awareness – the two staples of consumer research – but rather come down to an astonishingly complex human algorithm which somehow handles what we might call menu-based choice architecture or MBC.

How complex? One estimate I have seen puts the typical burger menu board – the one you must choose from during the 90 seconds you spend in the queue – at more than 2 million possible combinations and permutations for a typical $15 order. Double cheese. Extra fries. Upsized Coke. Hold the beetroot. Everything comes with a cost and a trade-off, governed by personal tastes, one’s own sense of value and the various conditional skews of this week’s promotions. Somehow humans manage to make a decision, though to be honest, you can see how habits develop. They become an efficient way of dealing with information overload.
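Where does a number like 2 million come from? Here is a back-of-envelope sketch – the option counts below are invented for illustration, not taken from any actual menu – showing how quickly a multiplied-out menu board reaches that territory:

```python
from math import prod

# Invented option counts for a hypothetical burger menu board – illustrative only.
options = {
    "core burgers": 10,
    "patty/cheese variations": 4,
    "yes/no extras (e.g. hold the beetroot)": 2 ** 6,  # six on/off toggles
    "sides": 6,
    "side sizes": 3,
    "drinks": 10,
    "drink sizes": 4,
}

total = prod(options.values())
print(f"{total:,} possible orders")  # 1,843,200 – the same order of magnitude
```

A handful of modest choice sets, multiplied together, and you are already near 2 million distinct orders.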

A big challenge for market researchers

How do researchers even begin to test the attractiveness of 2 million options? The problem for market researchers is not just one of volume. The complication comes from the fact that consumer choices are not strictly rational. When you think about it, paying an extra dollar for an additional slice of cheese in a double cheeseburger is bloody expensive. And if I’m constrained by the $10 note in my back pocket, then who can tell whether I would prefer an upsized Coke and small fries with my burger, or a jumbo Coke and no fries at all? Or no Coke at all, but extra fries? Or Coke and fries but hold the extra cheese? Or…

Menu-based choice modelling is a relatively new field of market research that springs from its roots in conjoint analysis.
Conjoint, I learned recently to my delight, was a term first applied by my statistical hero John Tukey, who was fascinated by the question of how we can measure things that are considered not discretely, but jointly – hence con-joint.
Since the 1970s conjoint approaches have been applied with increasing confidence.

At first these were paper-based, and a typical example of this kind of study gave respondents a set of 16, 24, or 32 cards on which various combinations and permutations of product description were portrayed. Generally this might be enough to efficiently calculate the relative attractiveness of competing offers.

For example, if five competing brands of peanut butter came at four different price points, with low or medium saltiness, and in super crunchy, regular crunchy or smooth textures, then we have 5 x 4 x 2 x 3 = 120 potential combinations of product to test. Good design might reduce this to a fairly representative 32 physical cards, each with a specific combination of price, brand, flavour and so on.
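The 120 figure is easy to verify by brute force. A minimal sketch (the brand labels and prices here are invented placeholders) that enumerates every profile in the full factorial design:

```python
from itertools import product

brands = ["Brand A", "Brand B", "Brand C", "Brand D", "Brand E"]
prices = [3.99, 4.49, 4.99, 5.49]   # four hypothetical price points
saltiness = ["low", "medium"]
texture = ["super crunchy", "regular crunchy", "smooth"]

# Every possible card is one element of the full factorial design:
profiles = list(product(brands, prices, saltiness, texture))
print(len(profiles))  # 5 * 4 * 2 * 3 = 120
```

A fractional design then selects a balanced subset – say 32 of these 120 cards – that still lets us estimate the effect of each attribute.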
The conjoint approach – with carefully calibrated options to test – enables us to determine which factors most drive consumer decisions (price is often the main driver), and the degree to which consumers trade off, say, a preferred brand at a higher price against a lower-priced offer from a brand we don’t trust so well.

Conjoint has the advantage of boiling each variable down to utility scores, and – since price is one of those variables – allowing us to put a price tag on brand appeal, or flavour.

Even so, because the paper-based system requires at least 32 cards to give us enough data on the peanut butter survey, it places a high cognitive load on respondents.

By the 1980s conjoint studies could be run on computers, and this enabled some clever variations. First, the computer approach eased the cognitive load by offering three or four cards at a time, so that respondents could more easily choose their preferred option (or none of these) over 9 or 10 iterations. Another innovation – useful in some situations, though not universally – is adaptive conjoint, which, in cycling a respondent through a series of choice exercises, may quickly decide that this respondent never chooses the Sanitarium brand, and so begins testing variations among the preferred ETA and Pam’s brands. It focuses on the useful part of the choice landscape.

These approaches have been steadily honed and refined, and I have always admired the developers of Sawtooth for working and reworking their algorithms and methodologies to create increasingly predictive conjoint products. They are willing to challenge their own products: they moved away from favouring Adaptive Conjoint a few years ago.

Up until 2010 the software offered respondents a choice between “competing cards.” This approach works best on mass-produced products such as peanut butter, where there may be fewer than 200 combinations and permutations available. However in the last decade or so, with the rise of e-commerce and the presence of bundled offers (think how phone, energy and broadband packages are now being bundled), classic choose-one-card conjoint only goes some of the way to explaining what goes on in the consumer mind.

Enter MBC.

This is a Sawtooth product, and a particularly expensive piece of software. I recently forked out around NZ$12,000 for a copy, and dismayingly, instead of receiving a sexy drag-and-drop statistical package, had the equivalent of a jumbo bucket of Lego bits offloaded onto my computer. It isn’t pretty. It requires HTML coding, and it has a really steep learning curve.

In my case, I’m just lucky that I have worked with conjoint for a few years now, and can see, conceptually, where this is coming from. But even so, I can think of no software I have dealt with in 22 years that has demanded so much of the user. Luckily, by the time the respondent sees it, things aren’t so complicated.

Not pretty for the researcher – but emulates reality for the respondent

With the right HTML coding, the online survey presents – metaphorically – a replica of the Burger King menu board. There you can choose the basic flavour of burger (beef, chicken, fish, cheese) and then top it up with your own preferred options. Choose the beef value meal if you want – and upsize those fries. As you select your options the price goes up or down, so respondents can very accurately replicate the actual choice experience.

And that’s the point of MBC: to emulate real choice situations as closely as possible. We might discover that consumer choice architecture radically transforms itself depending on whether the price point is $10, $12 or $15. Sawtooth experience shows that many of the trade-offs are not strictly rational. There may be an overriding decision about whether to order a Whopper or Double Whopper, but regardless, the shopper may always prefer a Coke of a certain size. In other words there is no trade-off here – or if there is, it applies only to the fries.

In a typical example the respondent is shown 10 variations of menu boards, and each time given a price limit and asked to choose. Given that the questionnaire needs HTML coding, it will cost the client somewhat more to conduct an MBC survey compared to a regular out-of-the-box online survey. The investment should be well worth it.

Another consideration: given the 2 million permutations available, this is not something to be tested with small sample sizes. A thousand respondents may be required as a minimum, each making ten choices, to generate 10,000 data points. Picture these as trig points on a data landscape.

Given enough of these reference points, the software then has enough information to fill in a complete picture of the consumer-choice landscape. You simply can’t do this when you break down the survey into constituent questions about price, taste, brand, options etc – that is, in considering these elements not jointly, but singly.

Now comes the good part. By assembling this data, you could more or less model the optimum menu board. If I was Burger King, for example, I could probably massage the menu so that the average punter with $10 in their back pocket would spend not $9.20 but something closer to $9.80 – a lift of around 6.5% in revenue, willingly forked out by happy customers. If I wanted to try a special offer on the flame-grilled chicken burger, I could see what impact this would have on revenue and on the demand for other options.
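As a toy illustration of the underlying idea – the prices and part-worth utilities below are invented, whereas real MBC estimates them from thousands of observed respondent choices – once you have utilities for each menu item, you can search the menu combinations for the basket a budget-constrained shopper would choose:

```python
from itertools import product

# Invented (price, utility) pairs for one hypothetical shopper with $10 to spend.
# Real MBC estimates these utilities from respondents' actual choices.
burgers = {"single cheese": (4.50, 3.0), "double cheese": (5.50, 3.6)}
drinks = {"no drink": (0.00, 0.0), "regular Coke": (2.50, 1.2), "upsized Coke": (3.00, 1.5)}
fries = {"no fries": (0.00, 0.0), "small fries": (2.00, 1.0), "large fries": (2.80, 1.3)}

budget = 10.00
best = None
for (b, (bp, bu)), (d, (dp, du)), (f, (fp, fu)) in product(
        burgers.items(), drinks.items(), fries.items()):
    price = bp + dp + fp
    if price <= budget:                  # the $10-in-the-back-pocket constraint
        utility = bu + du + fu
        if best is None or utility > best[0]:
            best = (utility, price, (b, d, f))

print(best)  # the affordable basket with the highest total utility
```

Run the same search across a whole sample of estimated utilities, under different menu configurations and price points, and you can compare expected revenue per punter – which is essentially what an MBC simulator does at scale.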

How accurate is MBC?

As with any conjoint approach, accuracy depends largely on the questionnaire design, and on the survey design – which can be tested before it goes into the field by generating random data and running it through the MBC simulator.

If I was doing a landscape map of New Zealand and had 10,000 trig points I could probably come up with a pretty accurate picture of the geological landscape. There would be enough points to suggest the Southern Alps, and the ruggedness of the King country for example. But it wouldn’t be totally granular or perfect, and I would be extrapolating a lot of the story.

So similarly, given that we are testing, say, 2 million combinations of menu with just 10,000 data points, don’t expect the modelling to be perfect. Sawtooth has tested MBC against actual data (held out for comparison) and found the software chooses the right options with 80% accuracy or thereabouts. So for my money, MBC is right up there with neural networks for accuracy, but much more useful than NNs for testing menu-based choices and explaining the interactions that are going on.

Currently I’m employing the software for a client who is looking at bundled products in a particularly complex market – that’s why I bought the software. There is simply no other tool available to do the job. Without giving any client information away, I will within a few weeks disguise the data and make available a case study to show how MBC works, and how it can be used.
I mention this because I am a strong believer in the idea that market research professionals ought to share their experiences for mutual benefit. As a profession we are under mounting challenge from the big data mob. However, so long as they stay in the world of Excel, and so long as market researchers remain aggressive in the way we trial and purchase new software advances, we will continue to have a healthy profession. My first reaction to MBC is that it is an answer – at last – to a world where customers can now customise just about anything. You are welcome to contact me if you want to learn more.

Duncan Stuart
64-9 366-0620

The big thing we forget to measure

In our market research reports we generally make a concerted stab at the idea that our data is both precise and reliable. We gleefully report the sample size, and we pinch our fingers together as we cite the maximum margin of error – which in many surveys is plus or minus 3.1%. Talk about pinpoint accuracy!
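That ±3.1%, incidentally, is simply the worst-case (50/50) margin of error at 95% confidence for a sample of about 1,000 – easy to check from the standard formula z·√(p(1−p)/n):

```python
from math import sqrt

def max_margin_of_error(n, z=1.96):
    """Worst-case margin of error (p = 0.5) at 95% confidence."""
    return z * sqrt(0.25 / n)

for n in (400, 1000, 2000):
    print(f"n = {n}: plus or minus {max_margin_of_error(n):.1%}")
```

For n = 1,000 this gives ±3.1% – pinpoint-sounding precision that says nothing about the risks discussed below.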

Yet we blithely ignore the fact that our clients work in a fuzzy universe where things go right, or horribly wrong. If you were the brand or marketing manager for Malaysian Airlines this year, I really wonder if your standard market research measures – brand awareness, consideration, advertising awareness etcetera – would have anything remotely to do with the fact that you have lost, very tragically, two airliners within the space of three months. Risk happens. Regardless of your marketing investment, passengers aren’t flying MH.

Or if you are the marketing manager for Coca-Cola in your country, do you think honestly that the subtle shifts of brand awareness, ad recall and consideration have as much effect as whether or not this summer proves to be wet and dismal, or an absolute scorcher?

We may not have a crystal ball when it comes to weather forecasting, but we do have decades of accurate climate data. When I did this exercise a few years ago I gathered 30 years’ worth of data, popped it into Excel, then used a risk analysis tool to come up with a reasonable distribution curve based on that data. It looked something like the chart above. Then I could set my temperature parameter – below x° – and on that basis could fairly reasonably calculate that my city had a 20% chance of having a dismal summer. The risk was high enough, I felt, that any marketing manager for a weather-sensitive product or service should have a contingency plan in case the clouds rolled over and the temperatures dropped.
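A minimal sketch of the same exercise in code – the 30 “years” of summer temperatures below are invented stand-ins, whereas the original used real climate records and the @Risk Excel add-in:

```python
from statistics import NormalDist, mean, stdev

# Invented stand-in for 30 years of mean summer temperatures (deg C).
summers = [
    22.1, 23.4, 19.9, 21.8, 24.0, 20.2, 22.7, 23.1, 21.5, 19.6,
    22.9, 21.2, 23.8, 20.4, 22.3, 24.2, 21.9, 20.1, 23.0, 22.6,
    21.1, 23.5, 20.3, 22.0, 24.5, 21.7, 23.3, 22.4, 21.4, 23.9,
]

threshold = 20.5  # "dismal summer" cut-off, chosen for illustration

# Straight empirical estimate from the record:
p_dismal = sum(t < threshold for t in summers) / len(summers)
print(f"empirical chance of a dismal summer: {p_dismal:.0%}")  # 20%

# Or fit a distribution, as a risk tool would, and read the probability off it:
fitted = NormalDist(mean(summers), stdev(summers))
print(f"fitted-normal estimate: {fitted.cdf(threshold):.0%}")
```

With real records you would of course test the distributional fit properly; the point is that a defensible probability of a bad season takes a few lines of analysis, not a crystal ball.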

Why don’t we do this more often? Why don’t we build in the various risks that accompany the work of our clients? If we did, we could better help them make decisions as circumstances arise. We consider some risks – what if the competitor achieves greater brand awareness and consideration? – yet we treat many other risks (weather, currency or price fluctuations, whether or not supermarkets choose to stock us, whether or not some kind of health scare will affect our category, etcetera) as if they were off-limits and outside our scope.

Though these data are not outside our scope at all. They may not come in via our surveys, but they are just as relevant. Going back to the weather example: we did observational research at a local lunch bar, and found that on wet, cool days the pattern of drinks consumption was quite different from that on hot, sunny days. It wasn’t just a question of volume. On cool days people switched slightly from CSDs towards juices, as well as choc-o-milk-type drinks.

So if I was a soft drink marketer, and I had a market research firm supplying me with climate risk data, as well as an idea of how people behave when it is hot, average, or cool – then I might come up with a marketing plan for each of those circumstances. I would go into summer expecting it to be average, but as soon as the long-range weather forecast told me the weather was indeed going to be cool, I would think: well, I had a 20% expectation that this would happen. Time to wheel out Plan B. I’d be prepared.

The risk analysis tool I use is called @Risk and it is frighteningly simple to use. It works as a plug-in to Excel, and takes roughly 10 minutes to learn. Since using the software, my outlook toward what we do as market researchers has totally changed.

We are not in the survey business. We are in the business of assisting our clients to make informed, evidence-based decisions.

Sometimes the evidence comes from our survey work, bravo! But sometimes the evidence comes from the weather office, or from the war zone of the Ukraine.

Here’s why I think Market Research companies are flying backwards.

Yes, technically we’re flying, but only just. Kitty Hawk thinking doesn’t help high-flying clients.

The very last meeting I attended when I was with my previous employer did not go particularly well. It was the weekly meeting of senior execs and on the agenda were things like staff training, client account management, and the usual HR concerns. I already knew that I was going to be leaving the organisation, so I wanted to end by contributing something which I felt was very important. “I want to run some staff training on SPSS.”

I didn’t expect the answer from my senior colleague. She asked simply: “why?”

I was quite taken aback, and I must admit I fumbled for an answer, mumbling things like; “extracting more value from the data we collect.” I can’t remember my exact words but they were along those lines. Anyway the workshop suggestion got kiboshed, and I left the organisation a few weeks later.

Three weeks ago one of the bigger clients of that same organisation announced that they were getting SPSS in-house, and they introduced me to a new member of the team, a particularly bright analyst who had previously worked with the banks.

I realised I had just witnessed a well-expressed closure to my fumbling argument of 18 months earlier. The reason we need to smarten up our analytical skills as market researchers is that if we don’t, we will simply be overshadowed by our clients.

In fact I see this all over the place. In-house analytical units within banks are using SPSS and SAS quite routinely, and most of the major banks have adopted text analysis software to help them swiftly code and analyse their vast streams of incoming verbal feedback. The text analysis centre-of-gravity in New Zealand is located client side, and not on the side of the market research industry. The same could be said of Big Data analytics. MR companies are scarcely in the picture.

Meanwhile, what is happening within market research companies? Well, this last week I’ve been in conversation with two analysts looking for better, more challenging roles than they have been given within their market research firms. One of them, an employee of one of New Zealand’s leading MR firms (and one I largely admire – they are by every measure a world-class operator), had for several years been asked to produce crosstabs and other descriptive outputs, and had on no occasion, ever, had his mental capabilities even remotely stretched. I’m talking about a graduate in statistics who has effectively been cudgelled to death by the rote boredom of low-calorie market research thinking. I asked what he was equipped with, software-wise, and he told me: “Microsoft Excel.”

This is simply not good enough, either professionally or strategically. While globally the volume of marketing data analytics is growing by something like 45% per annum, the market research industry is flat-lining, or showing single-digit growth at best. In other words most of the growth is happening over at the client’s place. And they aren’t even specialists in data.

If the market research industry wishes to stay relevant, it has to offer something that clients can’t provide for themselves. It used to be that market researchers provided unrivalled capabilities in the design, execution and analysis of market research projects. The key word here is “unrivalled”, but I’m afraid the leading research firms are being simply outstripped by their own clients.

The mystery to me is why the larger firms appear blind to this phenomenon. Perhaps, in building their systems around undernourished, under-equipped DP departments, they have a wonderfully profitable business model. Pay monkey wages, and equip them with Excel. And for that matter, keep them at arm’s length from the client so they never get to see how their work made a difference. The old production-line model. Tighten the screws and send it along the line.

Or perhaps the big firms are simply comfortable doing things the way they’ve always done them, or perhaps senior managers, having grown up in Kitty Hawk thinking, lack the imagination or the will to fly into the stratosphere.

Either way, if I was a young analyst looking at my career prospects, my attention would be over on the client side, or on dedicated data analytics operators such as Infotools. That’s actually a painful thing for me to say, speaking as a life member of my market research professional body. But if the status quo prevails, then we are going to see not just the relative decline, but the absolute decline of our industry.

What can market research firms do to rectify this problem?  Here are 4 suggestions:

  1. Invest in decent analytical software. Just do it. A few thousand dollars – for a much better return than sending your exec to that overseas conference.
  2. Reignite the spirit of innovation that comes from your groovy team of analysts. Rather than focus merely on descriptive data, let them loose on the metadata – the stuff that explains the architecture of the public mood.
  3. Develop a value add proposition to take to clients. Look, all of us can describe the results of a survey, but we market researchers know how to make that data sing and drive decision-making.
  4. Employ specialists in big data, so that as market research companies we can integrate the thinking that comes from market surveys, and qualitative work, with the massive data sitting largely untouched in the world of the client.

In my view the market research industry has been going off-course for the last 20 years. We are stuck at Kitty Hawk. We stopped shooting for the moon.


Marketers need to shed some skin in 2014

You are your own brand! Well, sadly, that’s what motivational experts are telling us.

People who know me will know that I feel some disdain for the concept of personal branding. “You are your own brand!” state dozens of personal branding experts, and in so doing they ignore both the inadequacy of branding to convey the rich, complex story of who you really are, and the ugly human history in which slaves were literally branded.

The shallowness, the sheer glibness of the “you are a brand” thinking is revealed all over the place. Sports people who turn in a losing performance no longer kick themselves or admit they played poorly. No, these days they’ll only admit the lopsided score was “bad for the brand.” Clearly it’s not whether you win or lose that counts; it’s how you affected the business value of the franchise.

In corporations, similar summaries are given when management has made a flawed set of decisions: the wrong widgets have been launched, the customers don’t buy them, and 10,000 workers are given one month’s notice for a mistake they didn’t make. Up in HQ the conversation goes something like: “Those widgets, gentlemen, they did nothing for our brand.”

That’s one thing that rubs me the wrong way about elevating the brand so high that people will trade in their own identity in order to be packaged up. The brand isn’t as important as many marketers think.

The hyper-valuation of company brand equity began during the hectic years of the 1980s, shortly before the catastrophic 1987 Wall St collapse. Companies with ailing turnover figures and slack market share suddenly realised that despite everything, the brand itself had valuable equity. This is true, to an extent. If you measure something like “consideration” (which brands of new car would you consider?) then brands explain part of the story. They help explain why Toyota might be forgiven the occasional safety recall, or why Skodas may be good cars but will never even be considered by a sizeable chunk of their markets. Not with their East European legacy. But the accountants and CFOs forgot something along the way.

The moment accountants started treating brands as a tangible asset, things got confusing. You have to treat assets according to certain rules – in terms of depreciation, for example, or market value. But the moment a brand is given a dollar valuation, other questions such as fit or positioning play second fiddle when it comes to boardroom decisions. The only measure that has clout, really, is the bottom-line dollar. So the brand, whatever it stood for, can easily get screwed around by financially focused directors. When all you see are dollar signs, then any brand value looks like cash.

Reuters in 2010, reporting the sale of Cadbury to Kraft, quoted Felicity Loudon, a fourth-generation member of Cadbury’s founding family. She said she was appalled that the company looked destined to fall to Kraft, predicting jobs would be lost and that its chocolate would never taste the same.

“We shouldn’t give up,” she told Reuters. “For a quintessentially, philanthropic iconic brand to sell out to a plastic cheese company — there’s no mix there.”

She had a point, though of course it fell on deaf ears. Four years later, at least in my market, Cadbury product is still being discounted to hell to undo some of the damage wrought by that sale. King-size block for block, it is routinely a dollar cheaper than its nearest rival. The problem was that the takeover was measured in dollars and not in any other values.

This seizure of brand valuation by the CFOs and accountants leads me to my main point. Branding itself has become commoditised.

I don’t think this rapid decline in the purpose and nature of brands has been helped by marketers (who all too often get little representation at boardroom level), or by my own profession, market researchers, who have watched on, with little reaction or understanding, as the dynamics of corporate decision-making have changed. The things we used to champion (brands, ideas, packaging, product concepts) have been grabbed and redefined by the finance boys. (And a few finance girls too.)

So they talk about things being good for the brand, or bad for the brand, yet they appoint underpowered Brand Managers who prescribe undercooked, old-fashioned brand research that belongs to the 1950s. Good for the brand?

Today’s marketplace, meanwhile, is being liberally peppered with stories of unknown start-ups that have taken on the big brands and are aggressively eating into the stalwarts’ market share. The sticker value on the old brands proves a poor defence against products with better ideas.

If you accept that the concept of branding is under siege – and I’m sure a heap of readers will disagree – then the prescription is to get under the burned skin of branding, and start examining more closely the heart values that dwell below. These days my marketing language is more apt to be enriched with talk about “forgiveness” and “resilience” and other words that refer not to the bottom-line but to the human condition.

I also think modern history is on my side. Since 2008 there has been a rapid lift in the conversation about the wealth gap, about poverty, about massive corporate tax evasion, about third-world exploitation and about sustainability. In my view the ice is getting mighty thin for organisations that measure shareholder return as if it’s the only thing that matters.

As I tell the personal branding experts. I am not a brand: I am Spartacus.

What tomorrow’s market research companies will look like.

What’s trending in market research companies? This fall I see two directions in MR style. Meanwhile, I’m predicting the end of the button-down Mad Men 60s referencing. It is dead.

Market Research as an industry has hit a disruptive fork in the road. As currently configured – around the distribution, collection, processing and reporting of surveys – the industry has basically flatlined. For sure it does great work, but look at the industry growth figures, then look at the exponential growth in usable business and customer information, and tell me seriously that MR isn’t missing out. MR is an industry geared around a broadcast, mass-marketing paradigm. It is petrol when the world is going electric.

Among the disruptive forces faced by the industry are:

  • The shift from face to face to PC to smartphone delivery. Each new medium opens the scope but limits the depth of respondent engagement.
  • The shift from descriptive statistics (yawn) to predictive. Question: how many analysts are regularly using neural networks or other predictive, what-if scenario-building tools? Any?
  • The integration of survey feedback with other channels of customer feedback including complaints data, sales figures, RFID-driven Big Data etc.
  • The acceleration of everything.
  • The shift to a socially networked society in which unpredictable things can happen and where classic trend-lines simply don’t apply.

All these elements – and you can think of many more – render the classic MR model somewhat flat-footed and irrelevant to so many business decisions. We live in a world where superstar marketers such as P&G are getting beaten by surprising upstarts. The old world of 4Ps marketing simply isn’t as reliable. (And it never produced more than 20% success rates to begin with.)

So what directions will MR go in to survive? I think two basic directions.

Light, quick and cheap. New research firms will go with the emergent technologies and provide close to real-time customer feedback gathered via panels equipped with smartphones. They’ll process streams of quick-snap shots and use these to populate a “movie” of the typical customer experience before, during and after purchase. The technology will reduce the need for human input or even analysis. The objective: fine tune the customer experience through tweaking and optimisation. Success through a thousand slightly improved interactions.

Deeper and more ruminative. The polar opposite of light, quick and cheap is to go deeper and more thoughtful – focusing on the analytics and the contextualisation of information. This kind of research will be similar to anthropology, but with more back-end modelling, scenario testing and hypothesis building. The objective here is not to deliver streamed feedback but rather to look for the Next Big Insight – and get there before the rivals do. These research teams will be made up of diverse people with wide-ranging skill sets: story-tellers, anthropologists, data scientists, fashion mavens, subject matter experts. This sort of research will be comparatively expensive, and comparatively risky – but when it hits the Insight Jackpot, this is the research approach that will make the biggest difference.

Right now most market research firms are heading toward Type One – doing the same as before, but quicker, cheaper and with less human input, at least input of much value. My bet is that the world will be awash with these providers and they won’t be much fun to work for.

Give me the Big Concept Seeker model.  Now that looks real fun.

Going deeper not cheaper in research.

Can social media really help us deeply understand the human landscape?

I’ve been ruminating lately on subjects as diverse as Twitter and suicide – thinking about the way people connect, but don’t really connect, even in an age of social media. Today the daily newspaper carried a coroner’s report on a boy who ended his life, and really, all the warnings were there in his Facebook messages. Nobody, it seems, tangibly connected with the depressed teenager: a tragedy.

But the story stands, I feel, as a sad metaphor for the work we do as market researchers who, despite being armed with the best technology, and despite the excitement of being able to conduct research in real time via smartphone apps and so on, languish in the shallows of human understanding. Discussion papers about new methodologies tend mostly to take a channel-centric view of the new research media, rather than asking what depth of understanding we will achieve. Can we use social media to get something deeper – more immersive – than what we get in a typical CATI or online survey? I’m not seeing many papers on the subject.

So I embarked on a personal experiment to see whether it is possible to gain a deeper, more experiential understanding of a different culture via standard social media, including Facebook, Twitter and the notorious but interesting Ask.fm.

I’m not going into details here, other than to make some points which I may explore in later posts.

  1. Social media demand that we develop a persona. They give us around 20 words and room to post one photo, and that – effectively – is our identity or mask. Social media are thus quite limiting.
  2. Masks may be held tight – some online personas are heavily protective or overly managed (or branded) – so some subjects are more responsive or open to engagement than others.
  3. A second dimension to the online persona is the focus of the person. Online, if not physically, some people tweet and message fundamentally to report on themselves rather than to engage with others. Here’s a pic of my lunch. A minority enter social media fundamentally to listen, to engage, or to ask questions of others.
  4. The third dimension is – for researchers – potentially the most interesting. Most people engage with, friend or follow people within their own circles of interest. These circles are heavily geographical (my workplace, my college, my neighbourhood) but also expressed in terms of interests. We market researchers use a hashtag on Twitter to find each other. But a minority of users open themselves up to random and ‘foreign’ links. They do this because they are inquisitive, and because they are interested in helping other people.

It strikes me that if I were to use social media as a channel via which I could conduct some kind of social anthropology I’d get a vastly different set of insights from those who are:

  • Type One: heavily masked, ego-centric and confined to their circle. On Ask.fm I was lucky to get answers at all from these people, despite their invitation to “ask me anything.” They showed a very low level of engagement. Versus:
  • Type Two: open and upfront people, at least semi-focused on others and not just themselves, and willing to join “foreign circles.” These people are engaged, interesting and open to discussing questions.

The conversations one encounters via social media are conversations in the true sense: they happen over time. For that reason I’m less trusting of the idea that we can achieve a true understanding of individuals or customers if we rely (as we’ve always done) on snapshots. When engaging with a set of strangers on my personal journey, I was struck by their mood swings – perhaps amplified by the nature of written communication and the starkness of the pictures they chose for Tumblr – and by the shifting nature of their opinions.

Underneath this was the basis of any good conversation: the degree to which both parties know about each other – whether they can develop a shorthand, and use shared references and metaphors as foundations on which to build trust and then converse on deeper ideas. In most cases, as I engaged with strangers, I could do this – but not in all. In every case the process took time.

What are the implications of this? My first inclination is that we can and should use social media the way a spider uses its web – it can feel an insect landing in the far reaches of its domain. So what we need are agents or listeners at the far reaches of our own webs. For example, if I were asked to use social media to explore a ‘foreign’ market (users of herbal remedies, say, or San Francisco football fans) I would seek a shortlist of Type Two people. Then I would ask them all about their worlds. I could get far more insight via them than via hundreds of completed responses from the relatively unengaged Type Ones.

In saying so, I’m consciously moving away from scientific sampling and classic design, and moving toward something else.

I do think we should be looking for systematic ways to use the strengths, and account for the weaknesses, of social media. It isn’t enough to say, “Oh, we now do research via social media.” That may point to truly immersive conversations, or it may paper over the cracks of particularly shallow, non-insightful feedback. Comments?

This blog reflects the paper I presented to the MRSNZ 2013 Conference – which won the David O’Neill Award for Innovation.

Abercrombie & Fitch – a reputational glitch

Since the 1990s Abercrombie & Fitch has taken the label from failure (it harks back to 1892, but by the 1970s had failed as a company) to gigantic success, with annual turnover now measured in the billions. At the heart of this story has been a fundamental disconnect between the image of the brand – clean-cut, Mid-Western, Joe College values (exemplified by the use of the Carlson twins as models who showed off their flesh more than they did the actual fashion) – and the aggressive multinational fashion company that uses cheap third-world labour to manufacture faux nostalgia (college sweatshirts that you might have rescued from your dad’s top drawer) and then peddles it through a lavish chain of upmarket flagship stores.

A&F deliver dreams. They’re not alone in this, far from it, and in many respects they exemplify A+ marketing: tapping into needs and wants and through packaging, price and placement ensuring healthy profits.

But the disconnect comes at a price: reputational risk. It is one thing to be a brand that, deep down, offers a performance benefit. For some reason I’m thinking of Coleman and camping supplies – a company dedicated to delivering products that function, with the little details that users appreciate. A fashion label such as A&F simply delivers image. Its product quality is nothing special, its designs are derivative rather than original, its fabrics – mostly cotton – offer no USP. But the dreams and packaging and catalogues and image – well, they’ve been the reason for the brand’s success. It sells something accessible. You too can be part of the A&F movie. One of the gang.

But it turns out the gang is less inclusive than it has made out in the advertising. The CEO is quoted as saying that he doesn’t want to sell larger sized items because he doesn’t want fat people wearing his brand. He doesn’t want them in his stores. Instead of being genial Joe College, Abercrombie & Fitch is, deep down, the spoilt snob.

Suddenly the gap between image and reality is made, in just one quote from the CEO, as plain as day. Here’s what happened. I’m taking this straight from The Drum.

An LA-based writer is looking to give Abercrombie & Fitch a ‘brand readjustment’ by asking viewers to donate their A&F clothing to their local homeless shelter, after the CEO of the company suggested he didn’t want ‘unattractive people’ or people over a certain size shopping there.

Greg Karber created a YouTube video suggesting that people donate clothes, and send pictures of them doing so to #FitchTheHomeless.

The video went live on Monday and has so far received almost a million views.

In it, Karber insists ‘Together, we can remake the A&F brand.’

The video spread like wildfire – with more than a million “likes” on Facebook within 48 hours. With the story being widely picked up in the media this week, the twittersphere has become a pile-on of latent A&F hate. The story in itself has resonance, but I suspect the speed at which this reputational wildfire has spread comes from the degree of enmity the brand has already built up among the public.

  • Everyone loves the A&Fitch bashing video. The brand is for 14 yr old douche bags anyways, it doesn’t need a homeless ppl “readjustment”
  • Abercrombie & Fitch would rather burn the clothes that it doesn’t need rather than give it to charities… Wow.. Um… Ya…
  • I’ve been hating on Abercrombie & Fitch for years, just because their clothes are ugly, y’all slow

There are at least four solid reputation management lessons to be drawn from the story so far.

  1. The wider the gap between your image and your reality, the bigger the reputation risk.
  2. Over time every little misstep or poor judgement will aggregate to form a parallel narrative to your official version. Like debris in a forest, it will prove flammable in certain conditions.
  3. Every once in a while it pays to cut some clean firebreaks through bold, well intentioned actions that are both credible and meaningful.
  4. You may stand FOR something, and FOR your target market. But don’t confuse that with making statements that put down others. Remember, they are legion and they are armed with social media.

For his own part, film-maker Greg Karber may face a firestorm of his own. I’m predicting he will, on the basis that he has ignored three of the four rules above. He has not validated his expertise on the subject of ethics – and judging by the whining, even sarcastic tone of the video, he seems more personally aggrieved than truly concerned with business ethics. I might be wrong there, but he leaves room for doubt.

Already there has been growing criticism of his use of homeless people as mere props in his production. Never once does he seek their opinion, or lend them much dignity. His message: since A&F hates the uncool, I’ll use the uncool to highlight the fact. In doing so he’s just as guilty of snobbery as his target.

There’s a fifth lesson in reputation management that Karber should heed: most reputational damage is self-inflicted.

What a good, memorable story requires.


These last two weeks I’ve been reading Storycraft, a textbook written by journalist Jack Hart and designed to help writers of non-fiction hone and enrich their skills: to turn true events into compelling stories. I was pleasantly surprised, actually, because the book is damned good, and it reminded me how much I had learned in a previous life as a scriptwriter and freelance journalist. For sure, there were new insights and tips that I will dial up in future, but the most useful function of the book for me was to set out a formal checklist of things we ought to incorporate in a compelling story – especially one based on data. Here are a few must-haves.

1) A clear tone of voice and standpoint. As teller of the story, are you the problem solver who was given a challenge, or the skeptic trying to disprove something? Are you an insider or an independent outsider? Be clear on this.

2) A clear story structure. Stories usually start calmly, but quickly a crisis or decision point occurs that threatens to change everything. The problem gets worse, and then gradually the heroes (in analytics perhaps, or those amazing customers and what they told us) wrest the flight controls off the dead pilot and set about bringing the aircraft down safely. Most story structures ratchet up the tension and then engage in the process of solving the problems. Always, there are decision moments along the pathway.

3) You need characters – especially good guys. Now in crunching Big Data, you’re reporting on numbers, right? Well, not quite. Those numbers represent people – so it can be useful to pull out one line of data, give the customer a nickname – Honest Harry – and use him as a cipher to tell the big story. Here’s where Harry faces a choice – what will he do? Personify the data. Don’t forget there are other characters in the story as well – including the analytics team.

4) Setting. A good story is underscored by its setting. High Noon took place in a lonesome, godforsaken town miles from any help. This framed Gary Cooper’s dilemma and added to the tension. CSI uses Las Vegas or NYC to good effect, creating for each series a memorable backdrop against which the problem-solving skills stand out in stark relief. When you’ve got 30 minutes to stand up and report on what your analytics have found, don’t forget to devote three or four minutes to setting the scene.

5) A satisfactory denouement. The wrap-up of the story had darn well better sing – not fizzle out. So in putting together your presentation or report, think hard about this. The plane is coming in to land, there’s ice on the runway and a small child (and a few nuns – there are always a few nuns) in the passenger cabin. Structure the story so that when the ending occurs – the 747 ploughs through the snow on its only wheel before coming to rest right outside departure gate 9 – everything wraps up tidily. The hero gets home for Thanksgiving. The little girl is saved. The nuns’ collective faith is restored.

Now in writing these things I come across as pretty glib. Yet I’ve seldom done a presentation without thinking of these elements. At first I thought it was just a duty: if you have a story, tell it well.

But these days there’s a much bigger reason for storytelling skills to be employed in the boardroom. Big Data deals with 8 zillion narratives. You want this to be the one that the decision-makers remember.

Story technique has been with us for 2300 years – it’s time to brush up on it.

2300 years ago Aristotle wrote The Poetics, which contained his secrets of storytelling. Human nature being what it is, those secrets haven’t really changed.

Many research presentations I’ve seen, including many of my own, have been bogged down by too many facts and figures. It is like reading a book which is so full of florid description that one begins to skip pages and start looking for the action.

Likewise stories can suffer from relentless action – the type we see in Peter Jackson movies, where we get chase, fight, chase, fight, another chase, another fight followed by another chase – and the net result is plain boredom. His King Kong is one of the few films I’d rate as unwatchable. It ignored the storytelling craft. It was all pageantry (look at our CGI techniques!) and no drama.

We do the same with research. We go heavy on descriptive results – without telling the true story. Or we go heavy on special effects (I do this too much: showing off analytic techniques) but forget to tell the story. Or we simply have a story but we don’t tell it with any craft. We muddle it up. The drama is in there somewhere but we didn’t quite extract it.

This seems crazy, because the craft of telling stories – the techniques and skills required – has been part of our pantheon of written human knowledge for 23 centuries. Storytelling goes back to the dawn of civilisation, but the Greeks began thinking about the craft, analysing it and applying systematic rules to it once Aristotle considered the subject.

Why not? As Steven Pinker explains it, storytelling has universal elements across so many cultures that he concludes stories are part of how our brains are wired. They reflect how we think. We’re engineered to tell stories. Stories are a means of processing complex visual, verbal and emotional information.

In the 20th century much was written about storytelling craft as writers considered modern psychology and found, among other things, how well Shakespeare captured the human condition. You could pick apart Othello and find it stood up to a Jungian framework, or to modern theories of the human condition. Writers such as Lajos Egri, who published the seminal guide for playwrights The Art of Dramatic Writing, helped create the debate about what drives a good story: is it events and action, or is it character? He concluded that character is at the heart.

So here is a good definition of what makes a good non-fiction story, summed up by the American writer Jon Franklin in his book Writing for Story:

“A story consists of a sequence of actions that occur when a sympathetic character encounters a complicating situation that he or she confronts or solves.”

Sounds simple, and – actually – it is. But the next layer down is where the story craft gets more complicated:

  1. Giving the story some structure. Do we start at the beginning and build to a conclusion? Or do we start at a critical moment of decision – and then go back and fill in the back-story and offer the options that our lead character faces?
  2. Choosing from whose point of view we tell the story. (Do we tell it from the brand’s point of view?  Or the customers’ point of view?)
  3. Characterisation. Do we paint the Brand as a hero? Or is it a flawed everyman? Are those customers a roiling mass – a Spartacus uprising in the making? – or are they the savvy, price-seeking satisficers who are undoing the good work of marketing? Who are the goodies?

These are just some of the decisions we must make when we tell stories, and they require a lot of forethought and imagination. The process is far different from the usual art of starting a PPT deck with Q1 and working through to the results of the final question. I once wrote a teen novel (The Whole of the Moon) which did quite well, but I spent a month deciding whether to tell it in the first person or the third.

Well before then, working in TV, I learned in drama editing and writing just how important it is to find a congruency between action and character. The decisions made by the protagonist (he kills his attacker) need to be within the realm of possibility for that character. (Would Coca-Cola really do this??)

I also learned that good stories need some relief. Shakespeare would open each act with a couple of fools joking around: something to get the rowdy audience engaged before launching into a Lady Macbeth tirade. In client presentations or conferences I try the same. The light moments may seem like diversions, but they always have a point – they put across the enjoyment we’ve had in the project, or give a bit of anecdotal evidence of the dramas and challenges we faced in the survey: the day the blizzard held up the fieldwork. These diversions humanise the story, and connect the storyteller to the audience.

Writers can get into a groove and employ hundreds of these little lessons instinctively – but it is increasingly important that researchers and analysts now also learn some of these techniques. We have audiences who want to digest the main thrust of what the data is saying.

And as story tellers we don’t want them to walk out on us.

Even modern theatre employs the lessons from ancient Greece. After all, it is all about people and how they make decisions when complications get in the way of their objectives.

Why storytelling is going to be the requisite skill of the future.

With the rising tide of business data, our human capacity to interpret and understand all this information will need to shift gear.

Already in the last five years organisations have been shifting out of flat spreadsheet land, in which numbers are presented (as pies and bars, or simple statistics) on platforms as simple as Excel. Ten years ago an executive could bark at their analyst, “Give me the numbers!” and really, they could still handle it – all the numbers.

But now that volumes have grown, the pies and bars and simple outputs are not enough – and one response from organisations is to dashboard the data (creating look-up systems to help wade through all the layers), or to ask researchers to do a better job of compiling the story from its multiple sources – customer feedback channels combined with social media streams and integrated with sales data. This is just about manageable at present – but with the availability of relevant data soaring by an estimated 45% per annum (McKinsey), current-day solutions are going to struggle within 18 months or so.

People are simply getting swamped.
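That 45% figure compounds quickly. As a rough back-of-the-envelope sketch (a few lines of Python; the only input is the growth rate quoted above, and the 18-month horizon is simply the one mentioned in the text):

```python
import math

ANNUAL_GROWTH = 0.45  # McKinsey's estimated annual growth in relevant data

# How long until the data volume doubles, at compound growth?
doubling_years = math.log(2) / math.log(1 + ANNUAL_GROWTH)

# Relative volume after 18 months (1.5 years) of compounding
volume_after_18_months = (1 + ANNUAL_GROWTH) ** 1.5

print(f"Volume doubles roughly every {doubling_years * 12:.0f} months")
print(f"After 18 months there is {volume_after_18_months:.2f}x today's data")
```

At that rate the volume doubles roughly every 22 months and is about 1.75 times today’s within 18 months – consistent with the struggle predicted above.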

But the answer is available, and it involves a paradigm shift from a numbers-reporting focus back to a storytelling focus. After all, the numbers only ever represented the story to begin with. As I say – data is not about data, it’s about people – and always was. The sales figures? They tell the stories of thousands of customers who made individual decisions.

Storytelling, and the capacity to understand and recall stories, is a fabulous human capability that we develop from infancy. Through stories we learn about the complexities of our social and moral codes, or about the elements of human character that are to be enshrined. This is complicated stuff, yet easily interpreted once delivered within a clear story line. We are wired this way, luckily. We can handle Shakespearean themes; we can understand great tragic turning points and the ins and outs of the complex human condition. We can do this at age 16 – we don’t need an MBA to understand a story.

Now storytelling is an art, and it involves a lot of skill and story-craft. It doesn’t need to be high literature to succeed (hey, we have John Grisham et al. to prove that simple techniques can entertain us), but increasingly it will be a requisite skill of the near future.

Analysts will need to know the difference between narrative (the King died, then the Queen died) and plot (the King died, then the Queen died of grief). They will need to be stronger at picking out the information that explains why things happen, and stronger at asking questions that give us better, more powerful data about human motivations. (Demographics are not a strong basis for a story.) Most of all they’ll need empathy – a nose for a good story and the capacity to assemble the facts, interpret what’s going on (it is there somewhere amongst all those billions of lines of information) and get up in front of the CEO and be able to say:

“Chief, I want to tell you a story…I want to tell you a fable that reminds me of King Canute….”

In other words, to boil all that information down into a drama that can be processed on a human scale by a board of directors.

All our marketing activities. All our business challenges. All those facts and figures about a changing society.  They are not about numbers. They’re about people and about the stories of those people.

How equipped are we to understand these, en masse, and to tell these tales in a form that enables our employers to understand, amongst the blizzard of numbers, that their company is seen, basically, as the Grinch that Stole Christmas?