Tag Archives: professional life

Here’s why I think Market Research companies are flying backwards.

Yes, technically we’re flying, but only just. Kitty Hawk thinking doesn’t help high-flying clients.

The very last meeting I attended when I was with my previous employer did not go particularly well. It was the weekly meeting of senior execs and on the agenda were things like staff training, client account management, and the usual HR concerns. I already knew that I was going to be leaving the organisation, so I wanted to end by contributing something which I felt was very important. “I want to run some staff training on SPSS.”

I didn’t expect the answer from my senior colleague. She asked simply: “why?”

I was quite taken aback, and I must admit I fumbled for an answer, mumbling things like: “extracting more value from the data we collect.” I can’t remember my exact words but they were along those lines. Anyway, the workshop suggestion got kiboshed, and I left the organisation a few weeks later.

Three weeks ago one of the bigger clients of that same organisation announced that they were getting SPSS in-house, and they introduced me to a new member of the team, a particularly bright analyst who had previously worked with the banks.

I realised I had just witnessed a well-expressed closure to my fumbling argument from 18 months earlier. The reason we need to smarten up our analytical skills as market researchers is that if we don’t, we will simply get overshadowed by our clients.

In fact I see this all over the place. In-house analytical units within banks are using SPSS and SAS quite routinely, and most of the major banks have adopted text analysis software to help them swiftly code and analyse their vast streams of incoming verbal feedback. The text analysis centre-of-gravity in New Zealand is located client side, and not on the side of the market research industry. The same could be said of Big Data analytics. MR companies are scarcely in the picture.

Meanwhile, what is happening within market research companies? Well, this last week I’ve been in conversation with two analysts looking for better, more challenging roles than they have been given within their market research firms. One of them, an employee of one of New Zealand’s leading MR firms (and one I largely admire – they are by every measure a world-class operator), had for several years been asked to produce crosstabs and other descriptive outputs, and had on no occasion, ever, had his mental capabilities even remotely stretched. I’m talking about a graduate in statistics who has effectively been cudgelled to death by the rote boredom of low-calorie market research thinking. I asked what he was equipped with, software-wise, and he told me: “Microsoft Excel.”

This is simply not good enough, either professionally or strategically. While globally the volume of marketing data analytics is growing by something like 45% per annum, the market research industry is flat-lining, or showing single-digit growth at best. In other words, most of the growth is happening over at the client’s place. And they aren’t even specialists in data.

If the market research industry wishes to stay relevant, then it has to offer something that clients can’t provide for themselves. It used to be that market researchers provided unrivalled capabilities in the design, execution and analysis of market research projects. The key word here is “unrivalled”, but I’m afraid the leading research firms are being simply outstripped by their own clients.

The mystery to me is why the larger firms appear blind to this phenomenon. Perhaps in building their systems around undernourished, under-equipped DP departments, they have a wonderfully profitable business model. Pay monkey wages, and equip the analysts with Excel. And for that matter, keep them at arm’s length from the client so they never get to see how their work made a difference. The old production-line model. Tighten the screws and send it along the line.

Or perhaps the big firms are simply comfortable doing things the way they’ve always done them, or perhaps senior managers, having grown up with Kitty Hawk thinking, lack the imagination or the will to fly into the stratosphere.

Either way, if I were a young analyst looking at my career prospects, my attention would be over on the client side, or on dedicated data-analytics operators such as Infotools. That’s actually a painful thing for me to say, speaking as a life member of my market research professional body. But if the status quo prevails, then we are going to see not just the relative decline, but the absolute decline of our industry.

What can market research firms do to rectify this problem? Here are four suggestions:

  1. Invest in decent analytical software. Just do it. A few thousand dollars – for a much better return than sending your exec to that overseas conference.
  2. Reignite the spirit of innovation within your groovy team of analysts. Rather than focus merely on descriptive data, let them loose on the metadata – the stuff that explains the architecture of the public mood.
  3. Develop a value-add proposition to take to clients. Look, all of us can describe the results of a survey, but we market researchers know how to make that data sing and drive decision-making.
  4. Employ specialists in big data, so that as market research companies we can integrate the thinking that comes from market surveys and qualitative work with the massive data sitting largely untouched in the world of the client.

In my view the market research industry has been going off-course for the last 20 years. We are stuck at Kitty Hawk. We stopped shooting for the moon.


What Hugo Chavez taught me about decision-making.

Hugo Chavez and my Uncle made a massive decision through their joint belief in clarity.

I never met the late Hugo Chavez, but I am one degree of separation from Venezuela’s revered leader thanks to his dealings with my uncle, Rod Stuart, who was based in Montreal. And Rod had this very instructive story for me as a researcher, from a time when I tried to impress him with the grunty statistical work I could do.

I thought I’d be impressing my uncle, a civil engineer and the man in charge – the person at the very top – of several hydro projects world-wide. He was responsible for, or consultant to, (I think) six of the largest hydro projects on our planet, including the Three Gorges project, the massive Canadian Churchill Falls project, a huge Pakistani dam built in the 1960s, and the top-10-ranked Simon Bolivar hydro project in Venezuela.

It was on this project that Uncle Rod was asked to consult. He received a phone call in Montreal directly from Hugo Chavez, asking him to come to the palace. “Duncan,” my uncle advised me, “if any world leader asks you to meet at their palace, my best advice is to catch the next plane.”

So he reported to Chavez who was trying to sign-off the new hydro project. “What’s the problem?” my uncle asked.

“The engineers,” said the president. “They’ve recommended two sites for the hydro project – we could flood this valley over here…or,” he said, pointing to a map, “we could drown this valley over there. But which one?”

Rod knew the engineers who had written the massive report: “Mister President, the guys who wrote that report are very good. They practically wrote the book on hydro decisions.”

“That’s the problem!” barked Chavez. “I don’t want a book, I just want an answer.”

So my uncle agreed to read the report over the next 48 hours, and deliver a recommendation for the President.

“It was an enjoyable two days,” Rod reported. He was in his 70s at the time. “My hotel room looked over the swimming pool and suddenly I realised why Miss Venezuela always wins Miss Universe. Duncan, they all looked like Miss Universe.”

Two days later he reported back at the presidential palace. The report, he said, was very thorough, and had considered geo-physical risks, return on capital, social costs, time frames, delivery of electricity, worker safety, climate…the whole rich texture of risk and return on a massive capital project.

“…and?” asked the president.

“Mister President, what the report is really saying is: it’s about 50/50. So I have two questions for you, and then we can come to a decision. My first question is this: are you certain you want to build a hydro project at all?”

“Of course I am,” said the president. “We are a growing country and we do not want to be energy-dependent.”

“Then it comes to this. If on economic grounds both sites are about 50/50, and on risk terms they are about 50/50, and on engineering terms they are about 50/50, and the social cost of drowning this valley here is the same as drowning that valley over there…if it’s all about 50/50, then here is my second question. Mister President, do you have any personal reasons for choosing this valley or that valley?”

Hugo Chavez looked at the map and weighed his words delicately. He pointed to one of the valleys and reflected, “You know, my mother grew up in a little village over here….”

“In that case,” said Uncle Rod, “I suggest we build the dam in the other valley.”

Rod told me the story because he wanted me to understand that decisions, big or small, have no need to be over-complicated. Often in statistics we are compelled to test whether one number is “statistically” higher than the next. Rod’s point: if you have to test whether there’s a difference, then in real terms there isn’t any difference. For that reason he and Hugo Chavez were able to take what started off as a complex equation and, layer by layer, cancel out the differences between Option 1 and Option 2. In the end, having taken a step back to check that any decision needed to be made at all, the difference between Option 1 and Option 2 came down to a test of whether the president could sleep more comfortably with one choice over the other.
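Rod’s point translates straight into the researcher’s toolkit. Here is a sketch, with invented numbers purely for illustration: when the significance test can’t tell two options apart, that verdict is itself the answer.

```python
from scipy.stats import chi2_contingency

# Invented numbers: 400 assessors per site, preferences split almost 50/50
table = [[203, 197],   # Site 1: favour / don't favour
         [198, 202]]   # Site 2: favour / don't favour

chi2, p, dof, expected = chi2_contingency(table)
print(f"p = {p:.2f}")  # comfortably above 0.05: no real difference to agonise over
```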

Both men knew they didn’t live in a perfectly black and white world, and both knew – to their credit – when to stop focusing on the decimal points and when to simply make a decision.

Rod found Chavez to be an open-minded, intelligent man. His words – “I don’t want a book, I just want an answer” – are a credo that researchers ought to live by.

Lifting your productivity by 20%. The reporting log-jam

Cleared for take-off? In MR there’s generally a log-jam around the reporting process.

The photo above shows hundreds of gannets near a beach where I live. These birds nest here, grow up here, and then take flight to distant nations thousands of miles away. It is humbling to watch them survive on this rock face, and amazing to see how the parent birds always manage to identify which grey hatchling is theirs. How do they do it? How does this society, this organisation of gannets, manage to be so efficient?

I wonder similar things about Market Research organisations, because in these companies you see things happen that the gannets don’t bother with. The birds hold no WIP meetings, no pep talks, no team-building exercises – there’s just the constant cry of their vocal equivalent of the email network. These birds are in constant communication. The parents are task-focused (got to feed the young) and the local fishing grounds, with the exception of the occasional Great White Shark, are benign and plentiful. Field work, in other words, is not a problem.

But the gannets do stumble at one point. Sooner or later comes the great migration and these rocks will be empty for a season. Yet not every chick is ready at the same time. While some quickly master the art of gliding in the prevailing westerly breeze, others are clumsy, and apt to crash land in an ugly flurry of gangly feathered wings and webbed feet. If this was Heathrow or JFK this would be mayhem.

Now the moment that projects get delivered to clients is similarly full of mayhem. Some of the major inefficiencies of the typical research company occur at the reporting stage. Deadlines might be met, but too often that involves a weekend or a serious late-nighter. And if that’s the case, then something needs fixing.

So here are some suggestions to contribute to my ongoing series about how to achieve an overall 20% lift in MR company efficiency. Twenty per cent is very achievable, and really it comes down to finding a 4% gain here, a 5% gain there – as well as the courage to challenge a few things that were developed back in the 80s or 90s; for example, the production-line structure.

  1. Plan the report at the questionnaire development stage. Presumably you are testing hypotheses, or measuring certain things, or unravelling mysteries – whatever you’re doing there’s going to be a story that gets developed, even if you don’t know how it is going to end. So start visualising the shape of the story and the chapters it will require and the analytics that these will involve. Let the whole team know what the plan is.
  2. Do not pass Go…until you adopt a stringent “right first time” approach to labelling errors and data errors. Test and proof the data before the report gets drafted. Are the weightings correct? Do the labels have typos? Is the data clean? Measure twice, cut once. (A sketch of what these checks might look like in code appears after this list.)
  3. Challenge the multi-stage process behind the survey analysis. Having a DP department develop a whole heap of crosstabs in the hope that something might prove interesting is just plain inefficient. If a team is working on the project, then work together in parallel rather than in series. Start by going through the report structure, and then allocate who works on which part. When everyone waits in series, the project gets held up by every little random thing, and cascading delays and errors mount up. “Sorry, I can’t work on the data yet…Dave had to go to the dentist.” You know the…er, drill.
  4. Run these teams top-down rather than junior-up. I recently worked with a j-up style of organisation in which we seniors were encouraged to delegate the production of reports, and then to add “our bits” at the end of the process. I personally found this frustrating because what the reports often needed was much more than cosmetic. The younger researchers, good talent all, hadn’t always “got” the story the data was telling. So precious hours were spent reworking key chapters of the report.
  5. Examine and use suitable production platforms. A good one to try is the Australian product “Q”, which combines SPSS-level analytical power with Excel-level exportability to PPT, and delivers slides (complete with question number, n-size, date etc.) at the touch of a button. It can save hours of needless production time otherwise lost to messing with fonts and colours.
  6. Talk to the client during the report-writing process. Run the initial findings past them in conversation: “Delwyn, we’ve found a high level of dissatisfaction among your core customers…is this what you guys expected? What would be the best way to report this?” Often those conversations provide a context-check which enables you to get the nuance and tone right for the audience. Delwyn might advise you to go easy with the bad news (because the new initiative was a pet project of the boss) – whereas if you didn’t know that, you’d end up, sooner or later, having to redraft the report.
  7. Monitor productivity and set expectations. I’ve never seen a research company do this. But if a PPT deck looks like it needs 6 basic chapters, and each chapter is going to need around 10 slides, then you should be able to work out how much production time needs to be allocated. Personally, working with my own data, I can put together a slide every 8–12 minutes depending on whether the project is descriptive or deeply detailed. That includes the analysis time. Meanwhile I’ve timed colleagues (they didn’t know this) and they averaged around 15 minutes per slide of descriptive results. About half my speed. The difference, I think, comes from certainty of the story I’m telling, a clear sense of structure, and a general tendency to give slides a reasonable amount of white space. Focus on the main details; don’t present a full table of results. After delivery, each project team should discuss the hours spent to see where improvements can be found.
  8. Start on the reporting immediately. You may think you have two weeks to get it all together, so no pressure gets applied at the start. In fact, most projects lose time in the first 72 hours – and that puts a squeeze on the rest of the schedule. That’s when errors get made, or compromises (“we don’t have time to dig deeper”) occur.
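To make suggestion 2 concrete, here is a minimal sketch of what automated pre-flight data checks might look like. It assumes a pandas DataFrame and hypothetical column names (“weight”, “respondent_id”); a real version would follow whatever conventions your DP process uses.

```python
import pandas as pd

def preflight_checks(df: pd.DataFrame, weight_col: str = "weight") -> list[str]:
    """Basic 'right first time' checks to run before the report gets drafted."""
    problems = []

    # Weights should be positive and (for a typical weighted sample) average ~1.0
    if weight_col in df.columns:
        if (df[weight_col] <= 0).any():
            problems.append("Non-positive weights found.")
        if abs(df[weight_col].mean() - 1.0) > 0.01:
            problems.append(f"Mean weight is {df[weight_col].mean():.3f}, expected ~1.0.")
    else:
        problems.append(f"No '{weight_col}' column - is the data meant to be unweighted?")

    # Duplicate respondent IDs usually signal a merge or fieldwork error
    if "respondent_id" in df.columns and df["respondent_id"].duplicated().any():
        problems.append("Duplicate respondent IDs found.")

    # Completely empty columns are often mis-labelled or mis-routed questions
    empty_cols = [c for c in df.columns if df[c].isna().all()]
    if empty_cols:
        problems.append(f"Completely empty columns: {empty_cols}")

    return problems
```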

Research companies need to ask themselves how much time is spent doing actual research (analysis and thinking) and how much time is spent crafting massive decks of slides. For sure, the report represents the basic deliverable to the client, and – absolutely – it needs to be visually attractive and tell its story even to non-analysts. But we’ve all seen too many hours wasted dickering around with the look of the report, or fixing errors that got woven into it – and not enough hours spent delivering value to the client.

Now it gets personnel: Gurus and Geeks – the architecture of the Big Data universe

The universe of analysts in the world of Big Data. Where do you live?

Over the past few months I’ve been looking at what I believe to be a major meltdown of the Market Research polar caps. The growth of the industry, once assured, has turned slushy, while the growth of Big Data as a field of endeavour remains double-digit. If anything it has accelerated. So how much effort will market researchers have to make if they wish to hitch their caboose to the big growth engine that’s running on the track next door?

It comes down to people, their skills and their outlooks, so I started my investigation with the employment ads relating to Big Data. Ouch. The help-wanted ads are dominated globally by vacancies for “data geeks” (and that’s the phrasing they choose to use), and the qualifications revolve around technical skills (typically SQL or more advanced) as well as basic statistics. Very few ads ask for Big Data architects who can visualise and steer the mountains of digital data that every large firm is accumulating. I foresee a big trainwreck unless a few more subject-matter gurus – architects who can see the big picture – are employed on the Big Data locomotive. Wanted: a few more Denzel Washingtons.

There’s another axis to the landscape, as I see it. This borrows heavily from the thinking of John Tukey, our statistical godfather, who divided stats into two zones: the Descriptive side (accurate reporting, concern about margins of error and significance, etc.) and the Exploratory side, where new patterns are discerned, rare events are predicted, and the fixation with decimal points can be quietly put aside. This is the realm of game theory, of neural networks, of unstructured data, and of just about all the tools that my colleagues in Market Research generally avoid.

But while MR practitioners seldom live in the northern hemisphere of my diagram above, not many Big Data analysts, really, are working in that zone either. There will be strong demand, probably increasing demand, for those people.

If Big Data analysts have a centre of gravity somewhere in the yellow square of my diagram, market researchers dwell, predominantly, over in the green zone. They’re good subject matter experts though not great explorers.

In respect of tomorrow’s business needs, I’m picking that most Big Data teams will require a mix of skills – people from each quadrant, or a number of generalist experts: those exceptional individuals (and I’ve met a few in both MR and BD) who dwell in the centre of the data universe – by turns guru and technical expert, one minute retrieving old numbers and making them sing, the next devising predictive models to illuminate tomorrow’s business decisions.

Trends? The world of business analysis will see shrinkage of MR as more and more data is retrieved from other sources. Meanwhile organisations will quickly get swamped with descriptive data, and the gods of tomorrow will be the Guru Explorers who can see the future and what they need – surrounded by the Explorer Geeks who can stoke the boilers and make the engine roar.

You think your spreadsheets are error free?

Researcher Ray Panko conducted a meta-study of spreadsheet quality, and his conclusions, drawn from a wide number of studies, are frankly disturbing. The link comes out of the debate about Excel’s capacity to let cascade errors ripple from one sheet to another, and about Excel’s functions not necessarily doing what you think they are doing. Scary stuff and worth a read.
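One practical defence is never letting a spreadsheet audit itself. Here is a minimal sketch of the idea, assuming a hypothetical workbook survey_results.xlsx with a raw “data” sheet and a “summary” sheet: rebuild the headline figure from the raw rows and compare it with what the spreadsheet claims.

```python
import pandas as pd  # pip install pandas openpyxl

# Hypothetical workbook, sheet and column names - the point is the
# independent recalculation, not this particular file.
raw = pd.read_excel("survey_results.xlsx", sheet_name="data")
summary = pd.read_excel("survey_results.xlsx", sheet_name="summary")

recalculated = raw["spend"].sum()         # total rebuilt from the raw rows
reported = summary.loc[0, "total_spend"]  # total the spreadsheet itself reports

if abs(recalculated - reported) > 0.005:
    print(f"Mismatch: raw rows sum to {recalculated}, summary says {reported}")
```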

If you’re doing a questionnaire – here are three ways to make it a goodie

Questionnaires don’t get any easier to write. Last year I slogged over several days to develop a questionnaire, and I’m sure the client must have been wondering: “What’s taking so long? How hard can it be to ask questions?”

Well, the answer: it is damned easy to write questions. But to write great questions – now that’s a different story. For these you need to dig deep into the mind of the likely respondent. How do they evaluate the issues? What are their needs and motivations?

In fact that degree of empathy, I think, is the biggest difference between a regular, low-value questionnaire and a high-value exercise that delivers real understanding. Too many questionnaires are framed in the language of the client, or of the researcher. Not only is the wording starchy or full of client-speak, but the perspective is simply wrong. Many questionnaires end up like that well-worn joke: “But enough about me…tell me, what do you think about me?”

A second way to make a questionnaire better is to think about the poor analyst. Ask yourself first: is this survey simply about describing stuff?  If so, then write lots of cross-tab-able questions. That’s how most of my colleagues in big companies seem to do it and frankly I feel sorry for them. Where’s the joy in running cross-tabs?

But if you want to explore the data – for example, to conduct a social network analysis, or perhaps a factor analysis to understand the underlying themes – then you need to ask your questions in quite a specific way. Questionnaires are not, ultimately, about asking questions – they are really about generating useful data. Not just data, but useful data.
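To make that concrete: a factor analysis only works if the battery was asked on one consistent scale, which is precisely the “ask in a specific way” point. A hedged sketch, using the factor_analyzer package and invented question names:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

# Hypothetical battery of 1-7 agreement ratings, all asked on the same scale
items = ["q1_service", "q2_staff", "q3_speed", "q4_price", "q5_value", "q6_fees"]
df = pd.read_csv("survey.csv")[items].dropna()

fa = FactorAnalyzer(n_factors=2, rotation="varimax")
fa.fit(df)

# The loadings show which questions hang together as underlying themes
loadings = pd.DataFrame(fa.loadings_, index=items, columns=["factor_1", "factor_2"])
print(loadings.round(2))
```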

The third way to add value to the exercise is to inject yourself – if possible – into the process. Now that most surveys are conducted online, they are blessed with a single voice – which obviates the need to reduce the language of the questionnaire to a vanilla monotone (a necessary step to remove biases when using a typical call center).

So you can voice the questionnaire in a more conversational style. Not too conversational however. (Feedback on one questionnaire I wrote was: “Love the chatty style Duncan…but don’t get too chatty.”)

The personalisation of a questionnaire is not just a matter of tone, however. While it is certainly more pleasant to fill in a friendly rather than a dry questionnaire, the real benefit is in the quality of the answers. An engaged respondent is going to deliver more nuanced, well-considered answers than a respondent who is bored and simply wants to get out of the questionnaire via the quickest route possible – usually by marking 5,5,5,5 in all those boxes.
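Incidentally, the 5,5,5,5 brigade is easy to flag in the data. A small sketch, with hypothetical column names:

```python
import pandas as pd

def flag_straight_liners(df: pd.DataFrame, rating_cols: list[str]) -> pd.Series:
    """True for respondents who gave the identical answer to every rating item."""
    return df[rating_cols].nunique(axis=1) == 1

# Hypothetical usage: inspect (or down-weight) the suspects before analysis
# df["straight_liner"] = flag_straight_liners(df, ["q1", "q2", "q3", "q4"])
```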


Stories have more traction

I did an experiment a few years back to test the relative traction and pass-on-ability of stories versus facts. What I did was take a trending topic on Twitter (the Toyota recall) and classify 100 tweets about the story. Then I tracked whether these got retweeted or otherwise. To be honest it was a pretty low-rent experiment.

Anyway, the tweets fell into three categories.

1) Facts.  Example: “One million Camry sedans may be affected by recall.”

2) Opinions. Example: “I never trusted Toyota. Buy American!”

3) Narratives (Explicit or Implied.)  Example: “How many more recalls before Toyota actually apologises?”

Well, hardly any fact-based tweets got retweeted. I’d say as a form of social currency, mere facts are something of a FAIL.

A number of Opinions got retweeted – and my view is that the successes here were those opinions that tapped into the Zeitgeist. At the time the Toyota story ran headlong into another story: the failure of Detroit. (This was before the rescue that Mitt Romney would not have approved of. Gee, what’s happened to that guy? He’s vanished.)

The tweets that were by far the most likely to get retweeted were the narratives. I characterised these as any tweet with an explicit beginning, middle or end to some unfolding story. At the time the CEO of Toyota was obstinately refusing to apologise for the recall mess – and this storyline added a human-interest dimension to the vehicle-recall tale.

Stories have value because they provide a memorable coat hanger on which we can drape all kinds of facts, figures, opinions and sundry details. The structure of a story helps the audience locate and contextualise all kinds of information in one easily retrieved package. A story is the difference between buying an assortment of different garments and buying a complete business suit – pants, jacket, shirt and tie – where everything seems to fit and match.

My first job was to edit and later to write TV scripts, and I spent 8 years learning about different facets of storytelling – how to build suspense, how to capture interest and engagement, how to use quiet moments (puts on kettle, reads a postcard)  to highlight (screech, screech, screech of violins) the sudden violence as the axe murderer smashes into the kitchen.

At the time I left TV (soaps and cop shows are, admittedly, pretty crappy in the end) I thought, well, that was a cul de sac.

But if there’s one core skill I have in market research it is the one based on what I learned in those 8 years. A good story, well-told, lives on in the memory of the audience. By contrast, as I saw in the tweet experiment, a deck of facts – without narrative structure – simply won’t get passed on and used.

Data from the call center…how to improve it

Recently I’ve been working with rich verbatim data from a customer call center – and boy, this is where you hear it direct: complaints, hassles, confusions and also stories of unreasonable clients who feel they ought to be exempt from all fees and charges. Some people seem to think that institutions ought to run for free.

String data – hard to work with if the questions are wrong to begin with.

Having worked with the client to develop a pretty accurate codeframe, we still found that our analysis wasn’t getting deep enough. Take, for example, all those customers who felt some mistake had been made. We had a well-constructed codeframe that picked these people up into one sizeable bucket – but somehow, through some combination of our coding architecture and the sheer diversity of the English language, we hadn’t adequately picked up the nature of those mistakes and errors.

So back we went and hand-coded our shortlist of Mistakes & Errors, and we found four basic causes. We also asked ourselves whether we might somehow have changed our coding architecture to get a sharper result next time. The answer is no – the real problem was the way the verbatims had been recorded. A lot of the story was between the lines, merely inferred. There was no way automated text analysis could pick that up.

The bigger answer is this. There are good ways to ask open-enders as well as less-good ways, and one of our suggestions was to tweak the way the call center asks customers to tell their stories, by asking clearly what (in their view) the cause of the error had been. We also suggested a check-box to precode some of the answers where possible. For example: Was a mistake or error made? Check. Then what general type of error was this? Type 1, 2, 3 or 4.
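Here is a rough sketch of what that precoding might look like in software. The error types and keyword rules below are invented for illustration; a real codeframe would be derived from the verbatims themselves.

```python
# Hypothetical keyword rules standing in for a real, data-derived codeframe
ERROR_TYPES = {
    "billing": ["overcharged", "double billed", "wrong fee"],
    "processing": ["not processed", "didn't go through", "missed payment"],
    "communication": ["never told", "no one called", "wrong information"],
    "system": ["website down", "app error", "locked out"],
}

def precode_error_type(verbatim: str) -> str:
    """First pass: route a complaint into a broad error type (or leave it uncoded)."""
    text = verbatim.lower()
    for error_type, keywords in ERROR_TYPES.items():
        if any(kw in text for kw in keywords):
            return error_type
    return "uncoded"  # left for a human coder, as with our hand-coded shortlist

print(precode_error_type("I was double billed for March"))  # -> billing
```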

Text analysis is seldom really simple, but often analysts wrestle with the raw data – we struggle to make it sing. Sometimes we need to go back to the collection point. We don’t have to be victims in the process.

 

Benchmarking Studies – are they worthwhile?

How are we doing? That’s the basic question everyone asks in the workplace. Back in 1999 I read a fabulous meta-study of hundreds of employee surveys and their methodologies, and the core question – the very heart of the employee’s anxiety – was summed up as: “How am I doing?”

Managers adopt the royal “we” and ask the same thing: how are we doing?

Compared to whom? In the 1980s and 1990s this question usually led us down the track toward benchmark studies. Benchmarking was quite the buzzword in the 90s, driven as it was by TQM and various Best Practice initiatives. Back then everyone wanted to be like IBM.

These days everyone wants to be like Apple (presumably they want to operate like anal control freaks – unhappy but proud of their work), but that’s not the issue. The issue is the whole idea of benchmarking.

It used to be that research companies loved these studies, because if you could generate a big enough pool of normative data, then everyone had to come to you – the experts with 890 soap-powder studies from around the globe. It was a license to print money. And better still, the client could tell their stakeholders: “Look – we’re in the top quartile!” (The score may be crap, but we’re better than most.)

But sooner or later benchmarks come unstuck. For a start there is the IBM effect. IBM used to be the company that everyone wanted to emulate – and just about every company that did went over the same cliff that IBM steered toward in the early 90s. IBM lost its way when the computing industry moved from a box focus to a software focus. Suddenly it had lost its mojo. So the clones all disappeared, while those with a unique story – the Apples and the Microsofts, who didn’t benchmark themselves to IBM – merrily went their own successful, or at least adventurous, way.

Then there is the problem of relevancy. If you benchmarked your bank’s performance against all the other banks in the world, would that even be useful? (As a reference point, how about the bank I deal with in Cambodia that didn’t put an auto-payment through because – and I quote – “we were having a party that afternoon.”) Is it actually useful for local banks here in NZ to compare themselves to these other operators? Does it make a local customer quietly glad that their branch rates higher than the leading bank in Warsaw? I think not.

Would our client prefer to benchmark his salary against global norms? (Hey, we’ve got third world data included.)

But here, for the researcher, is the ultimate rub. If a client insists on benchmarking by adjusting their survey to ask the same questions as “the survey they used in Canada,” then what we’re buying into is not just the idea of comparing results, but also the idea that we may have to dumb down our questions; that we may need to run average research so we can really compare apples with apples.

I’ve tasted those apples. They’re dry and they’re floury.

Please. Give me crisp, fresh locally grown produce, and leave normative data for those who want to compare themselves to …er, normal.


Why Wordle drives me nuts

In the past three years the use of Wordle has spread like wildfire, profession by profession. Right now I have evidence that it is taking the HR profession by storm, and late last year I was at a conference of school guidance counsellors where Wordle charts on PPT created a ripple of oohs and aahs across the audience. How did the presenter create those?

In some respects I love Wordle, and I love the fact that IBM gave its developer free license to just go ahead and develop the thing, and the way they really put it in the public domain.

But in market research, a word cloud is a poor substitute for real analysis. At best it does a word count so you can see what words got used most. I guess that’s fine if the question is something like: “What is your favourite brand of coffee?” From the massive choice available, the open-ender might produce a cloud where some brands emerge as more dominant than others. But why not just do a bar chart based on mentions?
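The bar chart, for what it’s worth, takes half a dozen lines. A sketch with made-up responses:

```python
from collections import Counter
import matplotlib.pyplot as plt

# Invented open-ended responses to "What is your favourite brand of coffee?"
responses = ["Moccona", "Nescafe", "Moccona", "L'affare", "Nescafe", "Moccona"]

brands, mentions = zip(*Counter(responses).most_common())

plt.bar(brands, mentions)
plt.ylabel("Mentions")
plt.title("Favourite coffee brand - open-ended mentions")
plt.show()
```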

No, what drives me nuts about Wordle is the way too many researchers somehow feel that by generating a word cloud – which is akin to throwing a pack of cards on the floor – they have somehow “analysed” the content.

Analysis?  To illustrate how good this is as an analytic tool, I’ve created a Wordle to help us analyse some numbers. Hey, it works for words, so this should be brilliant.

Hey. This is a good analysis!

What does this tell us? What numbers aren’t mentioned? Are there themes at work here? Who can tell? All the process has done is throw the numbers on the floor in a game my sister used to call “52-Pick-up.” Well, that is all it does with words, too.

Analysis ought to involve more thought than this. In a discussion I started in the LinkedIn group Innovation Insight, one member, Shannon Gray from Nashville, described how she goes a few steps further: first using Excel to look for common links between words (“unfair – price”, for example) and then running the word clouds. She recommends Tagxedo (which is very similar to Wordle) because it offers more control.
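Here is a hedged sketch of that word-linking step done in Python rather than Excel (not Shannon’s actual workflow, and the verbatims are invented): count adjacent word pairs before drawing any cloud, so that phrases like “unfair price” survive as units.

```python
from collections import Counter

# Invented verbatims for illustration
verbatims = [
    "unfair price for a basic account",
    "the price increase felt unfair",
    "staff were friendly and helpful",
]

# Count adjacent word pairs so "unfair price" surfaces as a unit,
# instead of "unfair" and "price" scattering separately across the cloud
bigrams = Counter()
for text in verbatims:
    words = text.lower().split()
    bigrams.update(f"{a} {b}" for a, b in zip(words, words[1:]))

print(bigrams.most_common(5))
```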

I do think text analysis goes way, way beyond Wordle’s scope, and the pity is that too many people have been caught up in the novelty of word clouds (broadcasters were big on them last year, but they have fallen from favour with the graphics teams) without thinking: wait – is what we’re doing really adding value to the data? If this is all we’re doing with verbatims, then it might be better to simply hand the sheaf of responses to the client and say, “Hey, here’s what your customers are saying.”

If you wish to check out Tagxedo – CLICK HERE