If there’s one piece of feedback I get more than any other from respondents to questionnaires I’ve written, it’s that I don’t always offer Don’t Know as an option. I think there are times when it is useful to exclude this option, and times when Don’t Know should be mandatory in a questionnaire.
Why would I exclude Don’t Know? I do so when the question is of a lower order – for example, where I am trying to get a rough measure of a minor issue. From my reading of the literature, excluding Don’t Know prods a few more people to express which way they lean – and frankly, the more who do so, the easier it is to analyse the data. But this comes at a cost.
Research studies have suggested that there is some reluctance among respondents, at least in some surveys, to admit that they don’t know where they stand. Including Don’t Know as an option makes it explicitly acceptable to tick that box if that’s the way the respondent feels. What we may miss is an indication of which way the respondent might be leaning – which is why US political polls very frequently ask people who answer Don’t Know a follow-up question: yes, but which way do you lean? A high proportion of those who said they don’t know then indicate that they actually lean one way or the other.
So as I see it, there are competing arguments about the inclusion of Don’t Know.
But there are circumstances in which I always try to include a Don’t Know option: those where I am trying to get an accurate measure not of general attitudes, but of projected behaviours. Which brand will you buy? Which way would you vote?
I have a rule of thumb that, for any given question, around 10 to 15% of respondents can be counted on to be unsure which box to tick. If that’s true, then we have an effect that rivals or even exceeds the survey’s margin of error. In other words, if you don’t provide a Don’t Know option when it matters, you could easily be invalidating your own conclusions. That lack of opinion may be very powerful stuff.
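To make that comparison concrete, here is a minimal sketch. The sample size of 1,000 is a hypothetical but typical poll size, and the calculation uses the standard normal-approximation formula for a proportion’s 95% margin of error; the specific numbers are illustrative assumptions, not figures from any actual survey.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion (normal approximation).

    p=0.5 is the worst case (widest interval); z=1.96 is the 95% critical value.
    """
    return z * math.sqrt(p * (1 - p) / n)

n = 1000  # hypothetical poll size, chosen for illustration
me = margin_of_error(n)
print(f"Margin of error at n={n}: +/-{me:.1%}")

dont_know = 0.10  # low end of the 10-15% rule of thumb above
print(f"Don't Know share of {dont_know:.0%} is {dont_know / me:.1f}x the margin of error")
```

Even at the low end of the rule of thumb, the unsure group is roughly three times the size of the poll’s own margin of error, which is why ignoring it can swamp the precision the sample size appears to buy.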