In Part 3 yesterday, Tom Miller looked at: how to identify the target population and sample; determine how many people should be surveyed and how to reach them; and the importance of asking the right questions in the right way. In today’s installment, he covers: asking the right people; testing the survey; and then conducting the survey, checking for bias, and interpreting the results.
A good survey instrument is of little practical benefit if it is used to obtain answers from respondents who do not fairly reflect the sampling frame. Once a surveyor has decided how residents will be contacted (e.g., by mail, phone, or in-person), he or she can then “draw” a sample.
For a mailed survey, address lists may be purchased from commercial address listing services. Before making a major purchase, it is usually a good idea to test a sample of the addresses supplied by the service to make sure they are accurate and include all units in multi-family dwellings.
Those creating a sample for a telephone survey can reasonably assume that the proportion of prefixes — the first three digits of a seven-digit number — in a telephone book reflects their actual proportion among all telephones (whether listed or not). Thus, the phone book can be used to generate the sample of numbers by using “plus one dialing,” which involves adding one to the last digit of each phone number (changing 555-1234 to 555-1235, for example). This way, surveyors can ensure that unlisted phone numbers are as likely to be sampled as listed numbers. [See also my discussion yesterday on the downsides of phone and web-based surveys]
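For readers who want to see the mechanics, here is a minimal sketch of plus-one dialing as described above: add one to the last digit of each listed number so that unlisted numbers in the same block have an equal chance of being dialed. The phone numbers are made up for illustration.

```python
def plus_one(number: str) -> str:
    """Add one to the last digit of a seven-digit number like '555-1234'.

    A last digit of 9 wraps around to 0, so every number stays valid.
    """
    last = (int(number[-1]) + 1) % 10
    return number[:-1] + str(last)

# Numbers drawn from the phone book (illustrative only):
listed_sample = ["555-1234", "555-4321"]

# The numbers actually dialed:
dialing_sample = [plus_one(n) for n in listed_sample]
print(dialing_sample)  # ['555-1235', '555-4322']
```

The transformed list, not the listed numbers themselves, becomes the dialing sample — which is how unlisted households enter the pool.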
Asking the “right” person also means finding the right member of a household to interview. If the choice of respondent is left up to the people in the household, the resulting sample might be unrepresentative. Regardless of how a survey is conducted, respondent selection in households should be controlled. To achieve this with a mailed or phone survey, the surveyor can include language in the instrument asking for the adult who most recently had a birthday to complete the survey.
Who to Include in the Sample?
A sample should be large enough to represent the total community and any subgroups of interest in the community.
At this point, “stratification” may make sense. Stratification means placing members of the population into groups. When membership in a group makes a difference in how members of that group will respond to survey questions (home owners versus renters, for example), then stratification can increase the precision of sample results.
When you want to be certain to have enough response from segments of the population that may not have many members in your sample, you will want to sample a disproportionately large number from that stratum.
For example, if you survey 400 residents in a community where ten percent of residents are people of color, and want to be sure you have enough response from people of color, you need to “oversample” people of color, assuring that 100 respond where only 40 would have been expected to respond by chance alone. Later, when you report the results for the entire community, they will need to be statistically re-weighted to give people of color the appropriate ten percent weight (i.e., reflecting their proportion of the population).
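The re-weighting arithmetic is straightforward: each group’s weight is its share of the population divided by its share of the sample. Here is a minimal sketch using the example figures above (a 400-person sample, with 100 oversampled respondents from a group that makes up ten percent of the community).

```python
# Each group's weight = (share of population) / (share of sample).
population_share = {"people_of_color": 0.10, "other": 0.90}
respondents      = {"people_of_color": 100,  "other": 300}

n = sum(respondents.values())  # 400 total respondents
weights = {g: population_share[g] / (respondents[g] / n) for g in respondents}
print(weights)  # oversampled group gets 0.4, the rest 1.2

# Applying the weights restores the true ten percent proportion:
weighted = {g: respondents[g] * weights[g] for g in respondents}
print(weighted)  # {'people_of_color': 40.0, 'other': 360.0}
```

Note that the weighted counts still sum to 400 — re-weighting changes each group’s influence on the community-wide results, not the overall sample size.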
8. Test the Survey and Adjust If Necessary
Testing a survey instrument is critical if a surveyor is to determine whether the instrument contains questions that are clear and easily understood. It can also be used with some open-ended questions to help the surveyor develop meaningful forced choice questions and to test various wordings for policy questions.
A sample of twenty “pretest” respondents can identify questions that may not be explicit enough or that seem to suggest an answer. It is best to choose the pretest respondents from your sampling frame, but local government staff, committee members who have participated in drafting the questionnaire, and friends can help provide useful feedback.
In any pretests, surveyors may wish to include questions about the questions themselves. In pretests for a phone survey, interviewers can ask about any confusion as it arises.
9. Conduct the Survey, Check for Bias, and Interpret the Results
For the results of a survey to be valid, the responses on which they are based must reflect the target population. Consequently, before interpreting the results of a survey, a surveyor must calculate the rate of response for the survey, and check and correct for any non-response bias.
To calculate the response rate for a telephone or in-person survey, a surveyor needs to track the number of attempts — usually a minimum of three with phone surveys and two with mail surveys — that are made to contact each person in the survey sample. The response rate itself is simply the number of completed surveys divided by the number of eligible people in the sample.
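A minimal sketch of the calculation, with made-up contact outcomes; the point is that everyone drawn into the sample counts in the denominator, whether or not they were ever reached.

```python
# Illustrative final dispositions after all contact attempts:
outcomes = {
    "completed": 260,      # finished the survey
    "refused": 60,         # reached but declined
    "never_reached": 80,   # no contact after the maximum attempts
}

sample_size = sum(outcomes.values())            # 400 people in the sample
response_rate = outcomes["completed"] / sample_size
print(f"{response_rate:.0%}")  # 65%
```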
After quantifying the rate of response, the surveyor must attempt to discern any differences between those who responded to the survey and those who chose not to or could not be reached. If comparison with Census data suggests that there are significant differences between these two groups (for example, if demographic characteristics such as income, education levels, or race are dissimilar), it will be necessary to correct for the non-response. In some cases, such adjustments can be made by “re-weighting” (using statistics to increase or decrease the representation of various groups).
While it is true that much can be learned from mathematically intensive evaluation of survey results, most citizen surveys do not require fancy statistics. Results can usually be calculated by using widely available software programs that are designed to calculate medians, ranges, percentages, frequency distributions, and other measures. These programs will typically prepare extensive tables, as well as attractive charts and graphs. Most of the software programs also present cross-tabulations of different responses and indicate which differences are statistically significant.
Remember, though, that finding statistically significant differences in the responses to a question does not necessarily mean the differences are important. For example, with a large enough sample you may find that 82 percent of older residents want a flood plain ordinance but only 78 percent of younger residents do. This difference may be statistically significant, but without any policy relevance whatsoever.
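To see why sample size drives this, here is a sketch of a standard two-proportion z-test applied to the 82 versus 78 percent example above. The sample sizes are invented for illustration: with 5,000 respondents per group the four-point gap is “significant,” while with 200 per group the very same gap is not.

```python
import math

def two_prop_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """Two-proportion z statistic using a pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Large samples: the same 4-point difference clears the 95% threshold (|z| > 1.96)
z_large = two_prop_z(0.82, 5000, 0.78, 5000)
print(round(z_large, 1), abs(z_large) > 1.96)  # 5.0 True

# Small samples: the identical difference does not
z_small = two_prop_z(0.82, 200, 0.78, 200)
print(round(z_small, 1), abs(z_small) > 1.96)  # 1.0 False
```

In other words, significance tells you a difference is probably not due to chance — it says nothing about whether the difference matters for policy.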
When the survey results are in, it’s time to bring your advisory panel back into action to help analyze them.
More and more communities are using surveys to get a better sense of public opinion on a wide range of planning-related issues. Surveys are effective at reaching residents who do not ordinarily participate in typical “public involvement” events, such as meetings and forums.
For surveys to be of value, however, they need to be carefully prepared and administered. This includes clearly identifying just what the purpose of the survey is; identifying the target population and sample size; asking the right questions in the right way; and conducting the survey in a fair and unbiased manner.
Thomas I. Miller, Ph.D., is founder and President of National Research Center, Inc., a survey research firm located in Boulder, Colorado. An expert in research and evaluation methods, Miller is the co-author of Citizen Surveys: A comprehensive guide to making them matter, published by the International City/County Management Association in 2009. His firm, which specializes in surveys that permit communities to compare their results with “peer” communities, maintains an integrated database of over 500 surveys completed by about one-half million residents in 44 states.
Miller would be pleased to respond to readers’ questions about the article through our PlannersWeb Linkedin group page; he can also be reached at: 303-444-7863.