Source: Globe, Clker, http://bit.ly/1CVSonk; Thinking Person, Clker, http://bit.ly/1EmDSQV; Promise, Pixabay, http://bit.ly/1HLe3SQ; Blue Man Crossed Arms, Clker, http://bit.ly/1H8Tav3; Blue Man Pointing, Clker, http://bit.ly/1Uy6vGo; Blue Man Shrugging, Clker, http://bit.ly/1Cohahr
Hello there, and welcome. Reliability, validity, and bias are terms most often associated with survey questions. In this lesson we will take a closer look at how to examine survey questions for these traits. Let's get started.
You know it's close to election time whenever results from opinion polls and favorability ratings are all over the news. It seems like when one set is reported, the opposing candidate comes out with their own poll numbers to contradict it. I often wonder how reliable and valid those results actually are, and are we really to believe that information coming directly from the candidates' camps is unbiased?
It's easy to write questions. It's far more difficult to write solid, thoughtful, insightful questions that will get you the information that you're seeking. There are some important steps you can take to help you improve the quality of the results you receive, and they include examining your questions for reliability, validity, and bias.
Reliability means that a question is consistently answered in the same way. Think of a ruler. If you were to use the same ruler to measure an object, you would get the same result each time because it's reliable. When you're trying to determine whether a question is reliable, there are a series of questions that you can ask.
If I ask people with a similar perspective on this topic the same question, would they all answer it the same way? If I gave the same person the same question later on, would they answer it the same way? And if I asked multiple questions on the same topic, would the participant answer them consistently with one another?
Here are a couple of examples of questions that are considered unreliable based on how they are written. Number one, what did you think about the amazing workshop you attended? Obviously this is a leading question, which hurts its reliability. Number two, how often don't you assign homework? This question is poorly written and confusing, thus hurting its reliability.
Validity is when your question effectively targets the information that you are trying to find out. There are definitely some ways to maximize question validity. For instance, having a single focus in your question and using terms that your participants will understand, rather than jargon. There are questions you can ask that will help you check for validity.
Will the answers that I get relate to what I want to know? Does this question relate to the information that I'm looking for? Could this question be misinterpreted? Are the words or phrases in this question easy to understand?
Let's say you wanted to find out about the pacing of a new math program being used. Here are a couple of examples of questions that are considered invalid. Example number one, why are you so far behind in your pacing? This question is very confrontational and can be misinterpreted as an attack. Example number two, do you find the differentiated tasks associated with the end-of-module assessments to be an effective use of the students' time, or would they be better served utilizing alternate materials provided by the collaborative team working on fraction review to be more helpful? This question is simply too long and all over the place to be valid.
In the context of writing surveys, bias is when questions are written in a way that sways the reader toward a particular answer or direction, and as a result, the information gathered is not reflective of reality. Simply put, a question that contains bias leads the participant away from the truth. This phenomenon is also known as response bias, and it also affects a question's validity.
Here are some common mistakes made when writing questions that can introduce bias: questions that lead the respondent; questions that contain prompts that may cause the participant to change their answer; questions that limit the responder's options; too many questions pertaining to a particular topic, or an overemphasis on it, since you need to make sure to present all sides and all options; misleading questions; and questions that are too confusing or easily misinterpreted.
Here are a few examples of questions that introduce bias. Number one, how did you utilize that extra prep period that was given to you by your administrator? Clearly the teacher is being led to say how well they used that time. Number two, on a scale of one to five, how much did you enjoy the inspiring and engaging speaker today? Again, this is leading. And example number three, how are your students doing in math? Choose one: exceptional or poor. The choices here are very limited.
So it's time to go ahead and summarize this lesson. We started by talking about how we can improve survey questions by running a few checks on them, and why it's important to do so. We then defined reliability, validity, and bias, and gave non-examples of questions to better understand each. And now for today's food for thought. Take some of the example questions that were shared with you in this video and rewrite them to increase their reliability and validity and reduce their bias.
For more information on how to apply what you've learned in this video, take a look at the additional resources section. That section includes hyperlinks useful for applying the course material, along with a brief description of each resource. That's all for this lesson. Thanks so much for watching. We'll see you next time.
(00:00-00:15) Intro
(00:16-00:40) Vote For Me
(00:41-01:00) Analyzing Questions
(01:01-02:02) Reliability
(02:03-03:13) Validity
(03:14-04:33) Survey Bias
(04:34-05:23) Food For Thought/Summary