Did you like this Blog? Yes/I did/It was great

The title of this blog is a little misleading. You probably spotted right away that you can’t give a negative answer, and that it’s also a bit premature to ask what you think of a blog before you’ve had a chance to read it. Sadly, these and other flawed question types actually exist in real-world surveys, although they’re usually disguised a little better. To illustrate this, let’s do a quick survey.

“Do you think the government should allow public speeches against democracy?”

Yes / No / Don’t know

Do you think your answer depends on how this question is asked? Maybe it doesn’t, but it might. Donald Rugg (1941) found that when the question stated above was asked, 62% of people responded “No”. That’s a pretty clear majority who think that public speeches against democracy should not be allowed, right? So what happens if you change the question to: “Do you think the government should forbid public speeches against democracy?” Just one word changed, but the answers differed drastically. To this question, only 46% responded “Yes”. What happened to the clear majority? According to Rugg: “Evidently the forbid phrasing makes the implied threat to civil liberties more apparent, and fewer people are willing to advocate suppression of anti-democratic speeches when the issue is presented in this way”. So giving people the idea that you’re taking something away from them, “forbidding”, makes them less likely to support it.
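The size of Rugg’s wording effect can be illustrated with a standard two-proportion z-test. Two caveats: the percentages come from differently worded questions (62% answered “No” to *allow*, 46% answered “Yes” to *forbid*, both expressing support for suppression), and the sample sizes below are hypothetical, since Rugg’s actual group sizes aren’t given in this post. A minimal sketch, for illustration only:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z-statistic for the difference between two independent sample
    proportions, using the pooled estimate for the standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Rugg's reported shares; the n=1000 per group is an assumption,
# purely to show how the test works.
z = two_proportion_z(0.62, 1000, 0.46, 1000)
print(round(z, 2))  # → 7.18, far beyond the ~1.96 cutoff for p < .05
```

With samples anywhere near that size, a 16-point gap is far too large to be a fluke of sampling, which is what makes the wording effect so striking.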

The big no-nos of question making

“Do not frame a question for a headline you hope to write” (Meyer, 2002). That seems to be the case with this question from Peil.nl, a Dutch pollster. A little backstory is required: a couple of hospitals went bankrupt, and in its weekly survey of the electorate, Peil.nl asked whose fault people thought that was (question 1) and what the government should have done about it (question 2).

(Image 1: Peil.nl)

On question 2, “What should the government have done about it?”, there were four possible answers.

  • “Make sure the hospitals kept existing”
  • “Announce that the hospitals will be closed in 6 months and in the meantime finance them to give time to patients and personnel”
  • “The way it happened now”
  • “Don’t know / no answer”

Does anything seem off about these answers? It should. One of the answers is layered: answer 2, to be precise. “Announce that the hospitals will be closed in 6 months” is already an answer on its own, but it adds “in the meantime finance them”, which is also an answer on its own. And it goes further still: “to give time to patients and personnel” adds a destination for the money with which the hospitals would be financed. Answer 2 is simply bad: a respondent might agree with one part and disagree with another, yet can only accept or reject the whole package.

This is problematic because the results of this survey could be framed as “37% of citizens wanted to keep the hospitals open for 6 more months…”, which is misleading: the answer options don’t actually say that, so the headline wouldn’t match what respondents were asked.

Leading the participant to the answer you want

Leading questions are questions which, either in form or content, “lead” someone to the desired answer (Loftus & Zanni, 1975). A great example comes from CNN (2018). The US midterm elections took place on Tuesday, November 6, 2018, and CNN conducted an exit poll, a survey of citizens directly after they voted. You can see one of the questions and its answer options in the image below.

(Image 2: CNN.com)

U.S. House stands for the United States House of Representatives, which you could compare with the Second Chamber (Tweede Kamer der Staten-Generaal) in the Netherlands or the House of Commons in the UK. There are only three possible answers and, I hope you agree with me, the question leads people toward either supporting or opposing Trump. The answer “Trump not a factor” isn’t even a grammatically correct answer to the question. Aren’t there any other reasons for someone to vote? Maybe you dislike Trump and it’s part of the reason why you voted a certain way, but this phrasing makes it seem like it was the only reason. Do you think the result should be taken seriously?

So a headline based on bad data… So what?

There’s plenty of sketchy news out there, so does one more misleading headline really matter? Maybe not much on its own. But what if a policy is based on the answers to such “bad” questions? Suppose the government runs a survey and bases new laws on it. What if the questions were leading? Or too negatively or positively framed, like in the “speeches against democracy” example? Or what if the questions contribute to the outcome of a referendum like Brexit? Suddenly, how “good” the questions are matters a great deal.

In the end, what’s important is that a survey, and thus each question, measures what you want it to measure. If the researcher isn’t attentive to all the aspects of the question-and-answer process where measurement can go wrong, it’s highly likely that the survey’s results simply aren’t valid (Fowler & Cosenza, 2008).

To hell with surveys!

Or maybe not? Surveys are still a great way to measure what people think about certain topics. But what they think can be influenced by the questions: how we phrase them, the order in which they appear, and even things like how far apart the answer options are spaced (Tourangeau, Couper & Conrad, 2004) or what pictures are displayed with the questions (Couper, Tourangeau & Kenyon, 2004). That’s why I put that neat picture of a cheering man on a mountain at the top. An image that looks “great” might make you more inclined to answer that my blog “was great”.

Thoughts

People can be influenced in many ways. To combat this we should be critical of every survey result we see. Instead of only looking at the results, let’s look at the questions. Maybe it should be mandatory for surveyors to give respondents a crash course in survey methodology, so people know what they’re getting into. Because if a question makes you give an answer which differs from the answer you would like to give, that’s someone stealing your opinion and using it for their own benefit, and that’s just wrong, right? 


References:

Couper, M. P., Tourangeau, R., & Kenyon, K. (2004). Picture this! Exploring visual effects in web surveys. Public Opinion Quarterly, 68(2), 255–266.

Fowler, F. J., & Cosenza, C. (2008). Writing effective questions. In E. D. de Leeuw, J. J. Hox, & D. A. Dillman (Eds.), International Handbook of Survey Methodology (pp. 136–160).

Image 1, Retrieved from https://www.noties.nl/v/get.php?a=peil.nl&s=weekpoll&f=2018-11-4+rdg.pdf

Image 2, Retrieved from https://edition.cnn.com/election/2018/exit-polls

Loftus, E., & Zanni, G. (1975). Eyewitness testimony: The influence of the wording of a question. Bulletin of the Psychonomic Society, 5(1), 86–88.

Meyer, P. (2002). Surveys. In Precision Journalism: A Reporter’s Introduction to Social Science Methods (pp. 99–130).

Rugg, D. (1941). Experiments in wording questions: II. The Public Opinion Quarterly, 5(1), 91-92.

Tourangeau, R., Couper, M., & Conrad, F. (2004). Spacing, position, and order: Interpretive heuristics for visual features of survey questions. Public Opinion Quarterly, 68(3), 368–393. https://doi.org/10.1093/poq/nfh035

10 thoughts on “Did you like this Blog? Yes/I did/It was great”

  1. I think you make a really good point in your article and I totally agree with you. When people answer questions in a survey, they often don’t realize some questions are misleading. As you stated, the results of such surveys aren’t valid anymore. So your suggestion to give respondents a crash course in survey methodology is a good solution. I think another solution for this problem could be asking bipolar questions instead of misleading questions. In the end, people always have to be aware of misleading questions.


  2. Did you like this blog? Yes, it was great. No, really. You present really serious research data and sharp remarks in an engaging way with humor, which captured my attention.

    I do have two questions.

    First, I don’t fully grasp why you think it might be a good idea to oblige surveyors to give respondents a crash course. Do you think this is realistic, and if so, when should this be done? Maybe some respondents are up to date already by the time researcher x presents his survey and won’t need the crash course anymore. It also might make survey participation less appealing, because people would have to make a greater effort to participate.

    Secondly, “Maybe it should be mandatory for surveyors to give respondents a crash course in survey methodology, so people know what they’re getting into.” Shouldn’t the researcher do everything possible to assure objectivity by adhering to the formulation of the question when formulating a conclusion, instead of respondents being aware of possible misuse?

    Any other opinions on this?


    1. Thanks for the compliment!

      Now to your first question.
      Both of your points are completely legit. It might be unnecessary and, which I think is even more likely, it makes participating in a survey less appealing. I don’t think it’s a perfect option in any sense, but it might help researchers spot questions that are “unclear” or even “leading”. The goal is to remove “bad” questions and to measure what you set out to measure, I think if participants know what they’re getting into it might help, but it’s indeed far from perfect.

      And the second question.
      You basically state that a crash course for the participants is unnecessary because it’s the researcher’s responsibility to adhere to objectivity, right? (Correct me if I misread your question.) Whilst I agree with you, that doesn’t mean that all researchers are going to be objective, or that they will adhere to the highest standards of survey methodology and question design. This is problematic because it’s the participant’s “opinion/voice” that’s being misused in that case. Giving the participants more insight into how a question should be worded/made gives them more reason and power to quit a survey if they feel it’s filled with bad questions (if it’s an online survey, for example) or to tell the researcher that they feel the questions are bad (if it’s a face-to-face/telephone survey). Again, it’s faaaar from perfect. I took the idea from the medical world and other academic experiments where you have to sign a paper which explains what you’re about to do, and in medical trials you get detailed instructions and plentiful time to ask questions. Since surveys are meant for “research” too, why not apply the same standards?

      Thanks for your questions, I hope I answered them at least somewhat satisfactorily.


      1. I too enjoyed reading your blog very much. I like how you are able to make research data more interesting and that you write in such an engaging way! Also, you took time to respond to Laura’s comments and in doing so gave us more information on your thoughts on this topic. I agree that a ‘crash course’ might be a good idea to make people more aware of the quality of the survey they are filling out, and of how their opinions can be used for the personal gain of the researcher, for example. It is interesting that giving more information at the beginning of a study, as you stated, already happens in medical experiments and might therefore also work for other disciplines of research. However, I agree with Laura that this solution might not be feasible, as it might reduce participants’ willingness to participate.


  3. I think we can’t even grasp how much of an impact this has on current research in all sorts of fields. Millions of surveys are conducted every single day. All of them might contain misleading questions. It is an almost impossible task to overcome framing in surveys. Nevertheless, I do like your proposition of a crash course. Not sure if it’s realistic, but at least it’s a possible way to make people more aware of the questions they answer while filling in a survey.


  4. Hi Michiel, I also like this second blog of yours. You once again compared scientific data to real-life cases, good job! I think we should indeed be careful with surveys. I think the main problem with surveys is that only a small fraction of existing surveys are actually made by experts in survey methodology. The other survey constructors may not even know, or cannot even grasp, the large influence using certain words can have on the outcome.

    I do think that there are a lot of surveys where it does not matter at a societal level what the outcome is. But there are some serious surveys (KiesKompas, Stemwijzer) that do affect the way people think about our democracy. These surveys can therefore have a great impact on our democracy via elections. That is why I think surveys like these, of national interest, should be monitored by an external committee.


    1. First: Michiel, I really enjoyed reading your article. I personally find the topic really dry and you definitely did your best to make it as interesting and readable as possible. You stated really well why it matters to think about survey questions, thanks for that!
      Second: Romy, I totally agree with your comment. When it comes to surveys of national interest, which have direct effects on how the social order and the democracy are realized, there should definitely be an external force, observing the implementation of the study and also the conclusions being drawn on the basis of the research.


  5. I like your intro. I read the title twice and something was so unclear that I just skipped it and went on reading. I assume that happens to many survey participants. They probably give some partially unwanted answers. I would turn your idea into a short but essential introduction that draws some attention before the survey starts. If the participants are aware that they can be led by the questions, they may read and answer more carefully. On the other hand, I think we should also give some space to more spontaneous answering when it comes to less crucial or less significant life topics. Less controlled answers may give new approaches and views.


  6. Once again, nice blog! Regarding your statement: I am not so sure about a surveyor giving a crash course to a participant. I totally agree that participants should be made aware of the influence of the way a question is asked. However, if this is explained to the participant at the moment he/she is going to fill out a questionnaire or be interviewed, the participant will probably be even more biased. He/she might be extra critical and doubt every question, be full of thoughts about what he/she has heard and therefore be unable to concentrate, or just want to quit the survey, because how would he/she know these questions are not misleading? As I said, I totally agree that people should be made aware of misleading questions, but I think it is not as easy as it sounds to find a solution for this. As mentioned in replies before, the researcher is in my eyes the first person to adjust his/her behaviour. The researcher has the duty to write objective questions. But this again is not an easy solution, if even possible.


  7. Well done! A ‘complicated’ subject, and what I mean by that is: when and how are you fully going to get the ‘real opinion’ from the people that fill in the survey? I mean, the example you gave with the double question within one survey question is just wrong. Basic knowledge, I would say. But sometimes things feel so natural, yet research shows that they can influence the answer. I would say that if surveyors would like to use answers from a poll for an actual news article, it should always consist of several questions. But I think it is impossible, if you just want a quick overview of something, to always follow all these rules and options. A bit simplistic of me maybe, but I think eventually it comes back to us, the readers: stay critical of the information you receive, and keep checking several sources.

