
Dewey Defeats Truman! (Good Survey Sampling)

By Rick Crandall, for Hostedware Corporation

 

When you read about a reader poll in the newspaper or on your favorite web site, you can't count on the results being accurate. The title of this article goes back to a famous case from 1948 that is often used to "prove" that surveys or polls are not accurate: it was the headline the Chicago Daily Tribune ran the morning after the US presidential election. In fact, Truman held the paper up after he won!

Why was the poll, and the newspaper, wrong? The short answer is improper sampling. When you are trying to predict what a large group of people think or will do, it is usually too expensive to ask them all. So you ask a "sample" of them and project your results to the larger group.


Real Sampling Is Magic (Science)
The poll that predicted that Dewey would win easily was conducted by a Republican publication among its readers. It was a biased "sample." A real sample is random: it gives every member of the appropriate population an equal chance of being included. It's as if you drew their names from a bowl containing everyone's name. A reader poll only includes readers, and in this case it included few Democrats. A proper sample would have been drawn from the population of all registered voters.
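
For readers who want to see the idea in code, here is a minimal sketch in Python, using a made-up voter list purely for illustration. The point is that in a simple random sample every registered voter has the same chance of being drawn, just like names pulled from a bowl.

    import random

    # A made-up sampling frame: one entry per registered voter.
    registered_voters = [f"voter_{i}" for i in range(1_000_000)]

    # A simple random sample of 1,500: every voter has the same chance
    # of being chosen, like drawing names from a bowl.
    sample = random.sample(registered_voters, k=1500)

    # A reader poll, by contrast, only ever draws from one publication's
    # readers; everyone else has zero chance of being included, no matter
    # how many readers respond.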


Of course, you don't always know what the complete population is, and sometimes people don't tell the truth in surveys. (That's why the most accurate election polls sample people at random as they leave the voting place, the so-called "exit polls.") There are statistical techniques to deal with these problems. Similarly, there are statistical shortcuts for drawing a sample that represents everyone in the US without putting all their names in a bowl: in brief, you randomly select areas, and then blocks, houses, and people within them. When done right, it turns out that responses from fewer than 1,500 people can represent the entire adult population of the US within a couple of percentage points. This is the real magic of proper sampling.
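
For the curious, here is a rough Python sketch of that shortcut, using an invented toy frame rather than real census data. The mechanics are simplified, but the idea is the same: a random choice is made at every stage, so no master list of names is ever needed.

    import random

    # A toy frame: 20 areas, each with 10 blocks, each with 30 households,
    # each with 1-4 adults. Real frames are built from census data, but the
    # logic is the same: random choices at every stage.
    frame = [
        [
            [
                [f"adult_{a}_{b}_{h}_{p}" for p in range(random.randint(1, 4))]
                for h in range(30)
            ]
            for b in range(10)
        ]
        for a in range(20)
    ]

    respondents = []
    for area in random.sample(frame, 5):                  # pick areas at random
        for block in random.sample(area, 4):              # then blocks within them
            for household in random.sample(block, 10):    # then households
                respondents.append(random.choice(household))  # then one adult

    print(len(respondents))  # 5 * 4 * 10 = 200 interviews, no master list needed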


Even with the best methods, you won't get everyone in the sample to respond. Fortunately, a response rate of about 75% still gives a good result about 95% of the time. And there are statistics to estimate how often you'll be wrong, and by how much.
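
The "by how much" part is the familiar margin of error. As a back-of-the-envelope sketch in Python (using the standard normal approximation for a proportion, with the worst case of a 50/50 split):

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """Approximate 95% margin of error for a proportion estimated
        from a simple random sample of n people (worst case is p = 0.5)."""
        return z * math.sqrt(p * (1 - p) / n)

    print(round(margin_of_error(1500) * 100, 1))  # about 2.5 percentage points
    print(round(margin_of_error(1125) * 100, 1))  # if only 75% respond: about 2.9

Roughly 1,500 completed interviews give about a 2.5-point margin; if only 75% of them come back, the margin widens only to about 2.9 points.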


Being a Fanatic
Let me give you an example of how fanatical good survey researchers are. At the Survey Research Institute at the University of Michigan, these are the kinds of stories they told about collecting data. (Stories are a good way of conveying cultural values.) One ongoing survey followed up with people at regular intervals, asking similar questions to assess changes over time. At one point, a respondent had been put in jail, and the only visitors allowed were family members. So the surveyor told the jail she was his mother, went in, and did the interview. In another case, a house address came up in a canyon in Los Angeles. When the surveyor went to the address, the house wasn't there! It had been carried away in a mudslide. The surveyor was told to rent a jeep and go look for the house. When the surveyor found it, people were living in it, and the designated respondent was interviewed.


Of course, surveyors aren't all fanatics. One researcher at Michigan did follow-ups when surveys in Detroit weren't completed because of "no English." Often those addresses were in rough neighborhoods, and the original surveyors were simply afraid to work there. (In addition to knowing other languages, this guy was big.) Notice that follow-ups were done to try to collect the missed interviews. So when you read the latest survey data about politics or consumer sentiment, make sure it came from a real sample if you need to count on the results.


How Can You Use This Magic?
Does this mean that results from your web visitors or customers won't be accurate? It depends on what you use the data for. If nine people out of ten love or hate your new product, that is an important hint to act on, but you shouldn't spend $10,000 on retooling without more data. Conversely, if only 20% of your customers complain about something, it may still be very important: many more may have left without bothering to tell you. And if hundreds of people say they want something from you, it can be valuable input even if it's not representative.
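
To put a rough number on why ten opinions are a hint rather than a budget decision, here is a quick Python sketch using the same normal approximation as above (admittedly crude for very small samples, but it makes the point):

    import math

    def rough_interval(successes, n, z=1.96):
        """Very rough 95% interval for a proportion (normal approximation;
        too crude for serious work at small n, but good enough to illustrate)."""
        p = successes / n
        half_width = z * math.sqrt(p * (1 - p) / n)
        return max(0.0, p - half_width), min(1.0, p + half_width)

    print(rough_interval(9, 10))     # roughly 71% to 100% -- a strong hint, not proof
    print(rough_interval(450, 500))  # roughly 87% to 93% -- now worth acting on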


Clearly, the exact questions you ask can make a big difference. General attitudes, such as "liking" something, don't predict behavior well. When people say they intend to act, that is more predictive. If 500 people say they would buy a new service, in our experience fewer than 20% of those "buyers" will actually buy in the first month you offer it. (Others will buy eventually.) However, if all 500 gave you a deposit against delivery, it's a different story. The best research is often trying to collect a check or purchase order.

Conclusion
So now you know something about what makes a scientific survey accurate. You shouldn't be discouraged if you can't collect perfect data. More information is almost always better than less information as long as you know its limitations. So talk to your customers, do surveys, and observe their behavior. When you're not sure of your conclusions, act in a way that tests your guesses and collects better information. In business, that usually means trying to sell something.


Rick Crandall, PhD (www.rickcrandall.com), is an author and consultant specializing in sales, marketing, and customer service for trade associations, the service industries and professions, and other business groups.

 

Hostedware Corporation is a pioneer in providing online software solutions for research, education and performance improvement. Hosted Survey and Hosted Test are used by human resources professionals, market researchers, education and training organizations and membership associations worldwide.

 

 
