

Analytics Magazine

Survey sampling

A comparison of the eight most common methods.

By Tom Rigby

Every 10 years, Americans receive a familiar questionnaire in the mail known as the population census. This massive undertaking is an attempt to count every resident and is typically used to understand the changing demographics of the country.

While a census like this might be the most accurate method of research, administering a survey to every member of the population is almost always beyond the scope of a standard marketing research study. It is for this reason that we instead draw “samples.” In doing so, the hope is that respondents’ opinions will be consistent with those of the population, and the necessary learnings can be obtained for a fraction of what it would cost to survey everyone.

Having used a wide variety of sampling methods over the course of my career, I have become accustomed to the ins and outs of each one, and will elaborate in this article on what I consider to be the eight most common approaches. In learning about them, my hope is that you will be in an even better position to explain to future stakeholders which sampling method was used, why it was selected and what limitations (if any) should be placed on the findings as a result.

Probability vs. Non-Probability

The first thing to know about sampling methods is that there are two main types: “probability” and “non-probability” methods.

Probability methods revolve around the notion of “random selection,” where each member of the population has a known, nonzero chance of being selected for the study (in the simplest case, an equal chance). When this is the case, the results from the sample can be representative of the overall population.

With non-probability methods, members of the population are picked more arbitrarily and do not all have an equal opportunity to participate. For this reason, results cannot be extrapolated to the population with a high degree of confidence.

Given these differences, you may be asking yourself why anyone would opt to use a non-probability method. The reason is quite simply that they tend to be less time-consuming and less expensive, both of which are important considerations in marketing research studies.

Probability method 1: simple random sampling: As the name implies, a simple random sampling method means that each member of the population is selected in a completely random process. Doing so ensures that everyone has an equal opportunity to participate in the study.

For example, let us assume that our client (ABC company) has provided us with an Excel database of customers that it would like to conduct research among. Let us further assume that this database contains a thousand names, which will represent our specific “population.” Using a simple random sampling method, we could first assign random numbers to every name in the database. We would then select 100 numbers, and each corresponding member would become part of our sample.
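For readers who script their sampling, the selection above can be sketched in a few lines of Python. The customer names below are hypothetical stand-ins for the client’s actual database:

```python
import random

# Hypothetical stand-in for the client's 1,000-name database.
population = [f"customer_{i}" for i in range(1, 1001)]

# Simple random sampling: draw 100 names uniformly at random,
# without replacement, so no member is picked twice.
sample = random.sample(population, k=100)

print(len(sample))  # 100
```

Because `random.sample` draws without replacement, this mirrors pulling names from a hat one at a time and setting each aside.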


When to use this approach: This method is regarded as one of the best options, but it technically requires a list of all potential respondents to be available. When this is the case, it is a great approach that is associated with a very low degree of sampling bias.

When not to use this approach: When a list of all potential respondents is unavailable, too costly or too time-consuming to obtain.

Probability method 2: systematic sampling: With a systematic sampling approach, the first member of the population is selected at random, and the others are then chosen at a fixed interval. The interval is typically calculated by dividing the size of the overall population by the desired sample size.

Returning to our example of the client-provided database, if we were to use the systematic sampling method, we would pick the first member at random and then the other 99 respondents would be included by choosing every 10th name in the database. An interval of 10 would be used because we have a population of 1,000 and a desired sample of 100 (1,000/100=10).
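The same interval logic can be sketched in Python, again using a hypothetical list in place of the real database:

```python
import random

# Hypothetical stand-in for the client's database, in its existing order.
population = [f"customer_{i}" for i in range(1, 1001)]

sample_size = 100
interval = len(population) // sample_size  # 1,000 / 100 = 10

# Pick a random starting point within the first interval,
# then take every 10th name from there.
start = random.randrange(interval)
sample = population[start::interval]

print(len(sample))  # 100
```

Randomizing the starting point (rather than always beginning at the first name) preserves the requirement that every member has a chance of selection.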

When to use this approach: When there is no pattern to the data, and when it is too costly or time-consuming to use a simple random sampling method.

When not to use this approach: When there is a pattern to the data. For example, if our database was organized in a cyclical manner that matched our interval (i.e., every 10th member of the database happens to be a low-spending customer), the resulting sample could be biased.

Probability method 3: stratified sampling: In stratified sampling, the researcher starts by dividing the population into smaller groups (or “strata”) based on characteristics of his or her choosing. The researcher then randomly selects members from each group and pools them together to form the sample.

For example, applying a stratified sampling method to our database would first require us to divide the 1,000 members into different groups. One option could be to group by age (25 or younger, 26 to 54, and 55 or older). If the youngest group represented 500 people in the database (50 percent of the population), the 26-to-54 group had 300 people (30 percent), and the 55-or-older group had 200 people (20 percent), then in our sample, we would randomly select 50 members from the youngest group, 30 members from the middle group and 20 members from the oldest group. This would provide us with a proportional randomized sample of 100 members.
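The proportional allocation can be sketched as follows; the strata and member names are hypothetical, chosen to match the article’s proportions:

```python
import random

# Hypothetical strata matching the article's age-group proportions.
strata = {
    "25_or_younger": [f"young_{i}" for i in range(500)],   # 50 percent
    "26_to_54":      [f"middle_{i}" for i in range(300)],  # 30 percent
    "55_or_older":   [f"older_{i}" for i in range(200)],   # 20 percent
}

population_size = sum(len(members) for members in strata.values())
sample_size = 100

# Allocate the sample to each stratum in proportion to its share of
# the population, then draw randomly within each stratum.
sample = []
for members in strata.values():
    allocation = round(sample_size * len(members) / population_size)
    sample.extend(random.sample(members, allocation))

print(len(sample))  # 50 + 30 + 20 = 100
```

Note that when the proportions do not divide evenly, the rounded allocations may need a small adjustment so they still sum to the desired sample size.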

When to use this approach: When there are clear subgroups in the data that will yield homogeneity within each group but heterogeneity between each group (as was the case with age). Furthermore, this approach is best when the researcher wants to be sure that the subgroups in the sample are represented equally or proportionally to the population.

Probability method 4: cluster sampling: Cluster sampling is similar to stratified sampling in that it too involves dividing members of the population into separate groups. However, in cluster sampling, these groups are naturally occurring (for example, members who live in the same city or region). Once these natural clusters have been identified, the researcher selects a few at random and only includes their members in the sample.

For our example, let us assume that our client’s database had only the names of the members and the state they live in. Using a cluster sampling approach, we would categorize all 1,000 members into 50 groups, one for each state. We could then randomly select 10 groups, and only the members in these groups would form our sample.
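A sketch of this two-stage selection, using hypothetical equal-sized state groups:

```python
import random

# Hypothetical clusters: 50 states with 20 database members each.
clusters = {f"state_{s}": [f"member_{s}_{i}" for i in range(20)]
            for s in range(50)}

# Randomly pick 10 states; every member of a chosen state enters the sample.
chosen_states = random.sample(list(clusters), k=10)
sample = [member for state in chosen_states for member in clusters[state]]

print(len(sample))  # 10 states x 20 members each = 200
```

Unlike stratified sampling, the final sample size here depends on the sizes of the chosen clusters, which in practice will rarely be equal.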

When to use this approach: When no master list of the population’s characteristics exists, but there is at least some “natural” division between groups (like city or state).

Non-probability method 1: convenience sampling: When a convenience sampling method is used, the researcher selects members based on who is most willing to participate, who is the easiest to access or who can be selected with the least amount of effort or expense.

For example, let us assume that our client’s database includes the home addresses of all 1,000 members but email addresses for only 100 (perhaps the others declined to provide them). If our client were pushing for a quick turnaround on a limited budget, we could use a convenience sampling method and select only those members with an email address on file. They would be the easiest to contact, so we would in effect be basing our sampling entirely on convenience or ease of access.

When to use this approach: When we need descriptive results or results that give a “general idea” of consumer opinion. Another circumstance could be when we are limited in our ability to contact the prospective respondents, and the only way we can do so is via a convenient channel.
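In code, convenience sampling often amounts to a simple filter on reachability. The records below are hypothetical, with only the first 100 members having an email on file:

```python
# Hypothetical records: only the first 100 members provided an email address.
members = [
    {"name": f"customer_{i}", "email": f"c{i}@example.com" if i < 100 else None}
    for i in range(1000)
]

# Convenience sampling: keep only the members we can easily reach by email.
sample = [m for m in members if m["email"] is not None]

print(len(sample))  # 100
```

Note that nothing here is random: the sample is determined entirely by who happens to be reachable, which is exactly the source of this method’s bias.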

Non-probability method 2: purposive sampling: In purposive sampling (also known as “judgment” sampling), the researcher selects the sample according to his or her judgment or expertise. This method requires a great amount of trust on the part of the client, and as such, the researcher should provide clear explanations of how the selections were made and what assumptions the approach imposes on the study.

For example, imagine that our client is in the fashion industry and has a goal of shifting its positioning from a mainstream brand to a more premium one. In doing so, it indicated that it is most interested in understanding the opinions of women who already seem inclined to buy premium brands. With this background, the researcher may go over the database and select the 100 women who spent the most on clothing in the past year or who listed designer brands as their preferred choices. The results of these women’s surveys would not necessarily be reflective of the population overall, but this approach would nonetheless meet this study’s particular research needs.
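One way such a judgment call could be expressed in code is as a sort-and-select; the spending figures below are randomly generated placeholders:

```python
import random

# Hypothetical spending records for the 1,000 database members.
members = [{"id": i, "annual_spend": random.uniform(100, 5000)}
           for i in range(1000)]

# Purposive selection: the researcher judges that the 100 highest spenders
# best represent the premium-brand buyers the client cares about.
sample = sorted(members, key=lambda m: m["annual_spend"], reverse=True)[:100]

print(len(sample))  # 100
```

The selection rule itself (highest spend) is the researcher’s judgment, which is why documenting it for the client matters.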

When to use this approach: When you have a specific target within your population that you believe is worthy of focusing on or understanding, and when this objective is considered more important than having a representative result.

Non-probability method 3: quota sampling: Quota sampling requires the researcher to select a sample so that it is in proportion to some characteristic or trait of the population. Researchers need to be careful when employing this method, as they are essentially forcing the profile of the sample to look a certain way. Not only could this have an impact on the results, but it makes it more difficult to know how a sample would have fallen out naturally, and therefore, what the natural profile of a population is.

In line with our previous example, our client may agree not to exclusively conduct research among premium brand buyers but nonetheless wants to ensure that the voice of this target is captured in the research. If this were the case, we may fix a quota of 30 percent so that we have representation in our sample of at least 30 premium brand buyers (even though they might not represent 30 percent of the population overall). Fixing this specific criterion to satisfy the needs and interests of our client would effectively require a quota sampling technique.
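A sketch of filling that quota, assuming (hypothetically) that 150 of the 1,000 members are premium-brand buyers:

```python
import random

# Hypothetical population: 150 of 1,000 members are premium-brand buyers.
members = [{"id": i, "premium": i < 150} for i in range(1000)]
premium = [m for m in members if m["premium"]]
others = [m for m in members if not m["premium"]]

sample_size = 100
quota = 30  # the client's minimum of 30 premium buyers in the sample

# Fill the quota from premium buyers first,
# then draw the remainder from everyone else.
sample = random.sample(premium, quota) + random.sample(others, sample_size - quota)

print(sum(m["premium"] for m in sample))  # 30
```

Because the 30/70 split is imposed rather than observed, the sample can no longer tell us what share of the population buys premium brands, which is exactly the limitation noted above.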

When to use this approach: When you want to be sure that respondents who meet specific criteria or who have a specific profile are included in the study.

When not to use this approach: When you want to know what the natural profile of your population is.

Non-probability method 4: snowball sampling: Snowball sampling means that the researcher finds and recruits initial sample members through either a probability or non-probability method, but then asks those initial respondents for referrals to other members. The referred members may still be screened to ensure that they match the study’s selection criteria, but importantly, they are not sourced by the researcher.

Within our client database, using a snowball sampling method would require us to select a few members (randomly or through another approach) and to then ask those respondents for referrals to other database members that we should include in our study.
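The referral-chasing process resembles a breadth-first traversal of a referral graph. The names and referral lists below are entirely hypothetical:

```python
import random

# Hypothetical referral lists: who each member would point us to, if asked.
referrals = {
    "alice": ["bob", "carol"],
    "bob": ["dana"],
    "carol": [],
    "dana": ["alice"],
}

# Start from a randomly chosen seed member,
# then follow referrals breadth-first until none remain.
seed = random.choice(list(referrals))
sample, to_ask = {seed}, [seed]
while to_ask:
    current = to_ask.pop(0)
    for referred in referrals[current]:
        if referred not in sample:
            sample.add(referred)
            to_ask.append(referred)

print(sorted(sample))
```

The final sample depends heavily on the seed and on who refers whom, which is why snowball samples can overrepresent tightly connected social circles.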

When to use this approach: When you are trying to reach members of a very specific population for which it is difficult to obtain contact information, when only the referrer can identify appropriate respondents, or when a participant is unlikely to come forward unless approached by someone they know and trust.


With a clear understanding of sampling methods, it is my belief that we can better articulate the reasons for using particular approaches, as well as the assumptions or limitations that come with them. In turn, we can help stakeholders avoid the types of circumstances that lead to incorrect conclusions and that can undermine confidence in the marketing research as a result. It may not be the most “exciting” part of a project, but understanding sampling fundamentals can make all the difference between insights that are good only in theory and insights that deliver the game-changing impact we all hope for, researchers and clients alike.

Tom Rigby is the founding partner of Callosum Marketing Inc.

