Sampling (statistics)

From Wikipedia, the free encyclopedia

Sampling is that part of statistical practice concerned with the selection of individual observations intended to yield some knowledge about a population of concern, especially for the purposes of statistical inference. Each observation measures one or more properties (such as weight or location) of observable entities distinguished as independent objects or individuals. Results from probability theory and statistical theory are employed to guide practice.

The sampling process consists of seven stages:

  • Definition of the population of concern
  • Specification of a sampling frame, a set of items or events that it is possible to measure
  • Specification of a sampling method for selecting items or events from the frame
  • Determination of the sample size
  • Implementation of the sampling plan
  • Sampling and data collection
  • Review of the sampling process

Population definition

Successful statistical practice is based on focused problem definition. Typically, we seek to take action on some population, for example when a batch of material from production must be released to the customer or sentenced for scrap or rework.

Alternatively, we seek knowledge about the cause system of which the population is an outcome, for example when a researcher performs an experiment on rats with the intention of gaining insights into biochemistry that can be applied for the benefit of humans. In the latter case, the population of concern can be difficult to specify, as it is in the case of measuring some physical characteristic such as the electrical conductivity of copper.

However, in all cases, time spent in making the population of concern precise is well spent, because it raises many issues, ambiguities and questions that would otherwise have been overlooked at this stage.

Sampling frame

In the most straightforward case, such as the sentencing of a batch of material from production (acceptance sampling by lots), it is possible to identify and measure every single item in the population and to include any one of them in our sample. However, in the more general case this is not possible. There is no way to identify all rats in the set of all rats. There is no way to identify every voter at a forthcoming election (in advance of the election).

Such imprecisely defined populations are not amenable to any of the sampling methods below, to which statistical theory could otherwise be applied.

As a remedy, we seek a sampling frame which has the property that we can identify every single element and include any in our sample. For example, in an opinion poll ahead of an election, possible sampling frames include:

  • the electoral register
  • a telephone directory

The sampling frame must be representative of the population, and this is a question outside the scope of statistical theory, demanding the judgement of experts in the particular subject matter being studied. All the above frames omit some people who will vote at the next election and contain some people who will not. People not in the frame have no prospect of being sampled. Statistical theory tells us about the uncertainties in extrapolating from a sample to the frame; in extrapolating from frame to population, its role is motivational and suggestive.

There is, however, a strong division of views about the acceptability of representative sampling across different domains of study. To the philosopher, a representative sampling procedure has no justification whatsoever, because it is not how truth is pursued in philosophy. 'To the scientist, however, representative sampling is the only justified procedure for choosing individual objects for use as the basis of generalization, and is therefore usually the only acceptable basis for ascertaining truth' (Andrew A. Marino) [1]. It is important to understand this difference in order to steer clear of confusing prescriptions found in many web pages.

In defining the frame, practical, economic, ethical and technical issues need to be addressed. The need to obtain timely results may prevent extending the frame far into the future.

The difficulties can be extreme when the population and frame are disjoint. This is a particular problem in forecasting, where inferences about the future are made from historical data. In fact, in 1703, when Jacob Bernoulli proposed to Gottfried Leibniz the possibility of using historical mortality data to predict the probability of early death of a living man, Leibniz recognised the problem in replying:

Nature has established patterns originating in the return of events but only for the most part. New illnesses flood the human race, so that no matter how many experiments you have done on corpses, you have not thereby imposed a limit on the nature of events so that in the future they could not vary.

Having established the frame, there are a number of ways of organizing it to improve efficiency and effectiveness.

Simple random sampling

In a simple random sample, all elements of the frame are treated equally and it is not subdivided or partitioned. One of the sampling methods below is applied to the whole frame.

Stratified sampling

Where the population embraces a number of distinct categories, the frame can be organized by these categories into separate strata or demographics. Another sampling method is then applied to each stratum separately, producing a stratified sample. Major gains in efficiency (either lower sample sizes or higher precision) can be achieved by varying the sampling fraction from stratum to stratum. The sample size is usually proportional to the relative size of the strata. However, if variances differ significantly across strata, sample sizes should be made proportional to the stratum standard deviation. Disproportionate stratification can provide better precision than proportionate stratification. Typically, strata should be chosen to:

  • have means which differ substantially from one another.
  • minimise variance within strata and maximise variance between strata.
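The two allocation rules described above can be sketched in Python. This is an illustrative sketch, not part of the original article; the stratum sizes and standard deviations are hypothetical, and the size-times-standard-deviation rule is commonly known as Neyman allocation:

```python
def proportional_allocation(stratum_sizes, n):
    """Sample sizes proportional to the relative size of each stratum."""
    total = sum(stratum_sizes)
    return [round(n * size / total) for size in stratum_sizes]

def neyman_allocation(stratum_sizes, stratum_sds, n):
    """Sample sizes proportional to stratum size times stratum
    standard deviation -- favouring the more variable strata."""
    weights = [size * sd for size, sd in zip(stratum_sizes, stratum_sds)]
    total = sum(weights)
    return [round(n * w / total) for w in weights]

# Hypothetical strata of 5000, 3000 and 2000 units; total sample of 100.
sizes = [5000, 3000, 2000]
print(proportional_allocation(sizes, 100))       # [50, 30, 20]
print(neyman_allocation(sizes, [1, 1, 4], 100))  # [31, 19, 50]
```

Note how the third, highly variable stratum receives half the sample under the second rule despite holding only a fifth of the population: this is the disproportionate stratification mentioned above.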

Cluster sampling

Sometimes it is cheaper to 'cluster' the sample in some way e.g. by selecting respondents from certain areas only, or certain time-periods only. (Nearly all samples are in some sense 'clustered' in time - although this is rarely taken into account in the analysis.)

Cluster sampling is an example of 'two-stage sampling' or 'multistage sampling': in the first stage a sample of areas is chosen; in the second stage a sample of respondents within those areas is selected.

This can reduce travel and other administrative costs. It also means that one does not need a sampling frame for the entire population, but only for the selected clusters.

Cluster sampling generally increases the variability of sample estimates above that of simple random sampling, depending on how much the clusters differ from one another, as compared with the within-cluster variation.
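The two-stage procedure described in this section can be sketched as follows; this is an illustrative sketch with hypothetical areas and respondents, using Python's standard library:

```python
import random

def two_stage_sample(clusters, n_clusters, n_per_cluster, seed=None):
    """Stage 1: randomly choose a sample of areas (clusters).
    Stage 2: randomly choose respondents within each chosen area."""
    rng = random.Random(seed)
    chosen_areas = rng.sample(sorted(clusters), n_clusters)
    return {area: rng.sample(clusters[area], n_per_cluster)
            for area in chosen_areas}

# Hypothetical frame: five areas, each listing twenty respondents.
# Note that a full list of respondents is needed only for chosen areas.
frame = {f"area{i}": [f"resp{i}_{j}" for j in range(20)] for i in range(5)}
chosen = two_stage_sample(frame, n_clusters=2, n_per_cluster=5, seed=1)
```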

Quota sampling

In quota sampling, the population is first segmented into mutually exclusive sub-groups, just as in stratified sampling. Then judgement is used to select the subjects or units from each segment based on a specified proportion. For example, an interviewer may be told to sample 200 females and 300 males between the ages of 45 and 60.

It is this second step which makes the technique one of non-probability sampling. In quota sampling the selection of the sample is non-random. For example, interviewers might be tempted to interview those who look most helpful. The problem is that these samples may be biased because not everyone gets a chance of selection. This non-random element is its greatest weakness, and quota versus probability sampling has been a matter of controversy for many years.
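The non-random selection step can be illustrated with a sketch in which quotas are simply filled by whichever matching subjects are encountered first; all names, segments and quota sizes here are hypothetical:

```python
def quota_sample(people, quotas):
    """Fill each segment's quota with the first matching subjects
    encountered -- the judgemental, non-random step described above."""
    counts = {segment: 0 for segment in quotas}
    sample = []
    for person in people:
        segment = person["segment"]
        if counts.get(segment, 0) < quotas.get(segment, 0):
            sample.append(person)
            counts[segment] += 1
    return sample

# Quotas scaled down from the 200 females / 300 males example above.
quotas = {"female": 2, "male": 3}
people = [{"name": f"subject{i}", "segment": "female" if i % 2 == 0 else "male"}
          for i in range(20)]
selected = quota_sample(people, quotas)  # the first subjects fitting each quota
```

Because selection depends on the order in which subjects happen to appear, later subjects have no chance of inclusion once the quotas are met, which is exactly the bias described above.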

Sampling method

Within any of the types of frame identified above, a variety of sampling methods can be employed, individually or in combination.

Random sampling

In random sampling, also known as probability sampling, every combination of items from the frame, or stratum, has a known probability of occurring, but these probabilities are not necessarily equal. With any form of sampling there is a risk that the sample may not adequately represent the population, but with random sampling there is a large body of statistical theory which quantifies the risk and thus enables an appropriate sample size to be chosen. Furthermore, once the sample has been taken, the sampling error associated with the measured results can be computed. With non-random sampling there is no measure of the associated sampling error. While such methods may be cheaper, this advantage is largely meaningless since there is no measure of quality. There are several forms of random sampling. For example, in simple random sampling, each element has an equal probability of being selected, though this may be infeasible in many practical situations. Other examples of probability sampling include stratified sampling and multistage sampling.
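A minimal sketch of simple random sampling, in which each element of the frame has the same probability of selection, using Python's standard library (the frame of numbered items is hypothetical):

```python
import random

def simple_random_sample(frame, n, seed=None):
    """Draw n distinct elements from the frame, each element of the
    frame having the same probability (n / len(frame)) of selection."""
    rng = random.Random(seed)
    return rng.sample(frame, n)

# Hypothetical frame of 100 numbered items; each item has
# probability 10/100 of appearing in the sample.
frame = list(range(1, 101))
sample = simple_random_sample(frame, 10, seed=42)
```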

Matched random sampling

A method of assigning participants to groups in which pairs of participants are first matched on some characteristic and then individually assigned randomly to groups. (Brown, Cozby, Kee, & Worden, 1999, p.371).
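One common way to implement this idea is to sort participants on the matching characteristic, pair adjacent participants, and randomly split each pair between the two groups. The following is a sketch of that approach, not the exact procedure from the work cited above; the participants and scores are hypothetical:

```python
import random

def matched_random_assignment(participants, key, seed=None):
    """Sort participants on the matching characteristic, pair adjacent
    participants, then randomly assign one member of each pair to each
    of the two groups."""
    rng = random.Random(seed)
    ordered = sorted(participants, key=key)
    group_a, group_b = [], []
    for i in range(0, len(ordered) - 1, 2):
        pair = [ordered[i], ordered[i + 1]]
        rng.shuffle(pair)  # random within-pair assignment
        group_a.append(pair[0])
        group_b.append(pair[1])
    return group_a, group_b

# Hypothetical participants matched on a pre-test score.
people = [{"id": i, "score": (i * 3) % 17} for i in range(10)]
treatment, control = matched_random_assignment(
    people, key=lambda p: p["score"], seed=7)
```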

Systematic sampling

Selecting (say) every 10th name from the telephone directory is called an every 10th sample, which is an example of systematic sampling. It is a type of nonprobability sampling unless the directory itself is randomized. It is easy to implement, and the stratification induced can make it efficient, but it is especially vulnerable to periodicities in the list: if periodicity is present and the period is a multiple of 10, then bias will result. It is important that the first name chosen is not simply the first in the list, but is chosen to be (say) the 7th, where 7 is a random integer in the range 1 to 10. Every 10th sampling is especially useful for efficient sampling from databases.
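The every-10th procedure with a random start, as described above, can be sketched as follows; the directory of names is hypothetical:

```python
import random

def systematic_sample(frame, k, seed=None):
    """Take every k-th element, starting from a random position among
    the first k elements -- never simply the first name in the list."""
    rng = random.Random(seed)
    start = rng.randrange(k)  # random start index in 0 .. k-1
    return frame[start::k]

# Hypothetical list standing in for a telephone directory.
directory = [f"name{i}" for i in range(100)]
every_10th = systematic_sample(directory, 10, seed=7)  # exactly 10 names
```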

Mechanical sampling

Mechanical sampling is typically used in sampling solids, liquids and gases, using devices such as grabs, scoops, thief probes, the coliwasa and riffle splitter.

Care is needed to ensure that the sample is representative of the frame. Much work in this area was developed by Pierre Gy.

Convenience sampling

Sometimes called grab or opportunity sampling, this is the method of choosing items arbitrarily and in an unstructured manner from the frame. Though almost impossible to treat rigorously, it is the method most commonly employed in many practical situations. In social science research, snowball sampling is a similar technique, where existing study subjects are used to recruit more subjects into the sample.

Sample size

Where the frame and population are identical, statistical theory yields exact recommendations on sample size. However, where it is not straightforward to define a frame representative of the population, it is more important to understand the cause system of which the population is an outcome and to ensure that all sources of variation are embraced in the frame. Large numbers of observations are of no value if major sources of variation are neglected in the study. Bartlett, Kotrlik, and Higgins (2001) published a paper in the Information Technology, Learning, and Performance Journal, titled 'Organizational Research: Determining Appropriate Sample Size in Survey Research', that provides an explanation of Cochran's (1977) formulas. A discussion and illustration of sample size formulas, including the formula for adjusting the sample size for smaller populations, is included. A table is provided that can be used to select the sample size for a research problem based on three alpha levels and a set error rate.
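Cochran's (1977) sample size formula for estimating a proportion, together with the adjustment for smaller populations referred to above, can be sketched as follows (the z value, target proportion, margin of error and population size in the example are illustrative choices):

```python
def cochran_n0(z, p, e):
    """Cochran's (1977) sample size for estimating a proportion:
    n0 = z**2 * p * (1 - p) / e**2, where z is the standard normal
    value for the chosen alpha level and e is the margin of error."""
    return z * z * p * (1 - p) / (e * e)

def adjusted_n(n0, N):
    """Cochran's adjustment for smaller populations of size N:
    n = n0 / (1 + (n0 - 1) / N)."""
    return n0 / (1 + (n0 - 1) / N)

# 95% confidence (z = 1.96), worst-case proportion 0.5, 5% error.
n0 = cochran_n0(z=1.96, p=0.5, e=0.05)  # about 384.2 for a large population
n = adjusted_n(n0, N=1000)              # about 277.7 for a population of 1000
```

Note how the required sample shrinks substantially once the population itself is only 1000, which is the point of the small-population adjustment.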

Types of data

Categorical and numerical

There are two types of random variables: categorical and numerical. Categorical random variables yield responses such as 'yes' or 'no'. Categorical variables can yield more than two possible responses. For example: 'Which day of the week are you most likely to wash clothes?' Numerical random variables yield numerical responses, such as your height in centimeters.

There are two types of numerical variables: discrete and continuous. Discrete random variables produce numerical responses from a counting process. An example is 'how many times do you visit the cash machine in a typical month?' Continuous random variables produce responses from a measuring process. Height is an example of a continuous variable because the response takes on a value from an interval. Precision of the measurement instrument(s) may lead to tied observations. A tied observation occurs when the measuring device is not sensitive or sophisticated enough to detect incremental differences in the experimental or survey data.

Levels of measurement

There are four generally recognized levels of measurement: nominal, ordinal, interval, and ratio.

Nominal and ordinal scales

Data obtained from categorical variables are considered to be measured on either a nominal or ordinal scale. A nominal scale classifies data into distinct categories where ordering is not explicit or implicit. An example is 'What is your gender?' Nominal scaling is the weakest form of measurement.

An ordinal scale classifies data into distinct categories where ordering is implied. An example is 'How would you rate the service provided on your last visit (1 - worst ever, 5 - best ever)?' Ordinal scaling is stronger than nominal scaling, but it is still relatively weak.

Interval and ratio scales

Data obtained from a numerical variable are usually assumed to have been measured on an interval scale or a ratio scale. An interval scale is an ordered scale in which the difference between measurements is a meaningful quantity that does not involve a true zero point. An example is an exam score that has been adjusted to take class performance into account. Two students' scores can be described in relation to one another, but one cannot say that a student whose adjusted score is 44 did twice as well as a student who scored a 22. A ratio scale is an ordered scale in which the measurements possess a true zero point. An example is Kelvin temperature. The Kelvin scale, which scientifically defines absolute zero, is ratio scaled.

Sampling and data collection

Good data collection involves:

  • Following the defined sampling process
  • Keeping the data in time order
  • Noting comments and other contextual events
  • Recording non-responses

Review of sampling process

After sampling, a review should be held of the exact process followed in sampling, rather than that intended, in order to study any effects that any divergences might have on subsequent analysis. A particular problem is that of non-responses.

Non-response

In survey sampling, many of the individuals identified as part of the sample may be unwilling to participate or impossible to contact. In this case, there is a risk that differences between (say) the willing and the unwilling lead to selection bias in the conclusions. This is often addressed by follow-up studies which make a repeated attempt to contact the unresponsive and to characterise their similarities to and differences from the rest of the frame.

Weighting of samples

In many situations the sampling fraction may be varied by stratum, and data will have to be weighted to correctly represent the population. For example, a simple random sample of individuals in the United Kingdom might include some in remote Scottish islands who would be inordinately expensive to sample. A cheaper method would be to use a stratified sample with urban and rural strata. The rural stratum could be under-represented in the sample, but weighted up appropriately in the analysis to compensate.
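The weighting described above amounts to taking a population-share-weighted mean of the stratum means. The following sketch uses hypothetical numbers; the rural stratum contributes only two observations but carries its full population share:

```python
def weighted_mean(strata):
    """strata: list of (population_share, sampled_values) pairs.
    Each stratum mean is weighted by the stratum's share of the
    population, so an under-sampled stratum is 'weighted up'."""
    return sum(share * (sum(values) / len(values))
               for share, values in strata)

# Hypothetical UK-style example: the rural stratum is deliberately
# under-sampled (two interviews) but keeps its 20% population share.
urban = (0.8, [10, 12, 11, 13, 10, 12, 11, 13, 12, 10])  # stratum mean 11.4
rural = (0.2, [20, 22])                                  # stratum mean 21.0
estimate = weighted_mean([urban, rural])  # 0.8 * 11.4 + 0.2 * 21.0 = 13.32
```

An unweighted mean of all twelve observations would over-state the urban stratum's influence; the weights restore each stratum to its true share of the population.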

History of sampling

The idea of random sampling by the use of lots is an old one, mentioned several times in the Bible. In 1786 Pierre-Simon Laplace estimated the population of France by using a sample, along with a ratio estimator. He also computed probabilistic estimates of the error. These were not expressed as modern confidence intervals but as the sample size that would be needed to achieve a particular upper bound on the sampling error with probability 1000/1001. His estimates used Bayes' theorem with a uniform prior probability, and assumed his sample was random. The theory of small-sample statistics developed by William Sealy Gosset put the subject on a more rigorous basis in the 20th century. However, the importance of random sampling was not universally appreciated, and in the USA the 1936 Literary Digest prediction of a Republican win in the presidential election went badly awry due to severe bias. A sample size of one million was obtained through magazine subscription lists and telephone directories. It was not appreciated that these lists were heavily biased towards Republicans, and the resulting sample, though very large, was deeply flawed.

References