In this tutorial, we'll be discussing several issues that can impact the quality of the results that you receive from a survey. These include question reliability, question validity, and question and survey bias. After you've drafted all of your survey questions, it's really important to go back and analyze those questions relative to these principles to ensure that you get high-quality results that will help you to draw valuable conclusions.
Let's begin with the issue of question reliability. Question reliability refers to a question being consistently answered in the same way. There are some questions that you can ask yourself in order to determine whether a question is reliable.
If I asked this question to people with similar perspectives, would they all answer the question the same way? If I asked this question to the same person at a later time, would they give the same answer that they gave the first time? If I asked the same person multiple questions on this same topic, would that person answer this question similarly to how they answer all of the other questions on the topic?
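That last check, whether a person answers multiple questions on the same topic consistently, is what statisticians call internal consistency, and it is commonly quantified with Cronbach's alpha. The tutorial itself doesn't cover the math, so treat this as an illustrative sketch: the `cronbach_alpha` function and the sample response data below are hypothetical, and they assume answers are coded numerically (for example, a 1-to-5 Likert scale).

```python
# Cronbach's alpha: a standard measure of internal consistency, i.e. how
# similarly each respondent answers several items on the same topic.
# Values near 1.0 suggest the items are answered consistently (reliable);
# values near 0 suggest they are not.

def cronbach_alpha(responses):
    """responses: one row per participant, each row a list of
    numeric answers to k survey items on the same topic."""
    k = len(responses[0])  # number of items

    def var(xs):
        # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # variance of each item across participants
    item_vars = [var([row[i] for row in responses]) for i in range(k)]
    # variance of each participant's total score
    total_var = var([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Four participants answering three related items fairly consistently
data = [[4, 5, 4], [2, 2, 1], [5, 5, 5], [3, 4, 3]]
print(round(cronbach_alpha(data), 2))  # → 0.98, i.e. highly consistent
```

A common rule of thumb is that an alpha of roughly 0.7 or above indicates acceptable consistency, though the threshold depends on how the survey results will be used.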
Let's look at a couple of examples of questions that are not reliable. Did you find all of today's professional development sessions to be useful? This question is just a little too broad. What if the person found some of the professional development sessions to be useful, but not all of them? They may be inclined to answer yes in order to give an overall picture of their experiences, but upon further reflection, if you ask them the same question again later, they might be inclined to say no.
Here's another example. True or false: "I wouldn't use this iPad app again in my classroom." The wording on this question is tricky. Some of the people responding to the survey might misread it as "I would use this iPad app again in my classroom." Some might choose to skip it entirely because the wording is too confusing.
Let's next look at question validity. Question validity refers to the question effectively targeting the specific desired information. You can maximize the validity of a question by making sure that you have just a single focus in the question, and also by using terms that the participants in the survey are going to understand.
You can ask yourself, will the answers that I get from this question relate to what I actually want to know? Is this question directly related to the specific information that I'm looking for? Could this question be misinterpreted? Does this question use words that are easy to understand?
By examining your survey questions for reliability and validity, you can help to ensure that the results you get are as accurate and meaningful as possible. But there is one more issue that we need to look at. That is the issue of question and survey bias.
A biased question is one that sways the survey participants towards a particular answer. In fact, an entire survey can be biased if it sways participants in a particular direction throughout the course of the survey. Question and survey bias results in gathering information that is not actually reflective of reality. Another way to say it is that the biased question or survey actually serves to lead the participants away from the truth. So really, if your survey questions are biased, that is going to impact the validity of those questions as well.
There are several ways that bias can be introduced into a question or an entire survey. Note that sometimes the term response bias is also used. No matter what terminology you use for it, bias can be introduced through leading questions, through prompts that might cause the participants in the survey to change their answers, through questions that end up limiting the responses to answers that are aligned with a particular preference, through simply too many questions on one topic, or an overall overemphasis on that topic. An unbiased survey is going to be sure to present all of the varying sides and varying opinions to the participants.
Bias can be introduced through the use of misleading questions, or through words and questions that are easily misinterpreted, or easily misunderstood, or confusing. So here are a few examples of questions that introduce bias. First, why do you think we should change the late homework policy? Well, if a teacher is responding to this question, obviously the assumption is that the teacher already thinks the policy should be changed.
What if the responding teacher disagrees? What if the teacher doesn't see a problem with the policy? If this is a required question on a survey, this really limits the teacher in the ways that he or she can respond, and may actually lead that teacher to respond in a way that is not in line with his or her personal views on the topic.
Here's another example. Many staff members have stated that we need more professional development on the topic of differentiation; do you agree? While this is indeed a simple yes or no question, the implication here is that the overwhelming opinion of the staff is already that we need more PD on differentiation. So a teacher may be swayed in this case to respond with an answer of yes so that they don't stick out from the crowd, or so that they don't appear to have values or wants regarding professional development that are different from those of the majority of their colleagues.
And here's one more example. The technology staff has been working long hours to make sure that your new LMS is functioning properly. Are you satisfied with the implementation of the new software? This question has a couple of problems. First, the blurb about the long hours that the technology staff has been putting in might actually sway some of the participants to not want to say anything bad about the implementation of the software. Perhaps they just don't want to insult the technology staff, or they don't want to seem like they are complaining about something that has obviously been taking a lot of time and a lot of effort to complete.
Another issue here is the acronym LMS. What if staff members responding to this question don't remember that LMS stands for Learning Management System? What if they simply don't have enough experience with the LMS yet to even recognize that acronym?
A further problem is that it's really not clear what the question is even looking for. Are staff members supposed to comment on the installation of the learning management system software? Or is it the training on the software that we're focusing on? Or are they making commentary on how well the learning management system has been working lately? So this particular question really needs to be modified in order to take out the bias and create more valid and more meaningful results from the survey question.
So here's a chance for you to stop and reflect. Once you've examined your survey questions with an eye on reliability, validity, and bias, your surveys should help you to gather the most meaningful results possible. As you reflect on how this new information can be applied, you may want to explore the additional resources section that accompanies this video presentation. This is where you'll find links to resources chosen to help you deepen your learning and explore ways to apply your newly acquired skill set. Thanks for watching. Have a great day.
(00:00 - 00:29) Introduction
(00:30 - 02:13) Question Reliability
(02:14 - 03:08) Question Validity
(03:09 - 07:50) Question and Survey Bias
(07:51 - 08:27) Stop and Reflect