Design of experiments
Design of experiments is the design of all information-gathering exercises where variation is present, whether under the full control of the experimenter or not. (The latter situation is usually called an observational study.) Often the experimenter is interested in the effect of some process or intervention (the 'treatment') on some objects (the 'experimental units'), which may be people. Design of experiments is thus a discipline that has very broad application across all the natural and social sciences. It is also called experimental design at a slight risk of ambiguity (it concerns designing experiments, not experimenting in design).
Early example of experimental design
In 1747, while serving as surgeon on HMS Salisbury, James Lind carried out a controlled experiment to discover a cure for scurvy.
Lind selected 12 men from the ship, all suffering from scurvy, and divided them into six pairs, giving each group different additions to their basic diet for a period of two weeks. The treatments were all remedies that had been proposed at one time or another. They were
- A quart of cider per day
- Twenty-five gutts (drops) of elixir of vitriol three times a day upon an empty stomach
- Half a pint of seawater every day
- A mixture of garlic, mustard and horseradish, in a lump the size of a nutmeg
- Two spoonfuls of vinegar three times a day
- Two oranges and one lemon every day.
The men who had been given citrus fruits recovered dramatically within a week. One of them returned to duty after six days and the other became nurse to the rest. The others showed some improvement, but nothing comparable to the citrus fruits, which proved substantially superior to the other treatments.
In this study Lind's subjects' cases "were as similar as I could have them"; that is, he imposed strict entry requirements to reduce extraneous variation. The men were paired, which provided replication. From a modern perspective, the main thing that is missing is randomized allocation of subjects to treatments.
A formal mathematical theory
- See also: optimal design
The first statistician to consider a formal mathematical methodology for the design of experiments was Sir Ronald A. Fisher. As an example, he described how to test the hypothesis that a certain lady could distinguish by flavor alone whether the milk or the tea was first placed in the cup. While this sounds like a frivolous application, it allowed him to illustrate the most important principles of experimental design:
1. Comparison
In many fields of study it is hard to reproduce measured results exactly. Comparisons between treatments are much more reproducible and are usually preferable. Often one compares against a standard or traditional treatment that acts as a baseline.
2. Randomization
There is an extensive body of mathematical theory that explores the consequences of making the allocation of units to treatments by means of some random mechanism such as tables of random numbers, or the use of randomization devices such as playing cards or dice. Provided the sample size is adequate, the risks associated with random allocation (such as failing to obtain a representative sample in a survey, or having a serious imbalance in a key characteristic between a treatment group and a control group) are calculable and hence can be managed down to an acceptable level. Random does not mean haphazard, and great care must be taken that appropriate random methods are used.
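For illustration, a minimal Python sketch of random allocation, assuming twenty hypothetical experimental units and two equally sized groups (the unit labels, seed, and group sizes are arbitrary choices for the example):

```python
import random

# Hypothetical experimental units (e.g. subjects) to be allocated.
units = [f"unit_{i:02d}" for i in range(1, 21)]

rng = random.Random(2024)   # fixed seed so the allocation can be reproduced
shuffled = units[:]
rng.shuffle(shuffled)       # a random permutation replaces any haphazard choice

# First half to treatment, second half to control (equal group sizes assumed).
treatment = sorted(shuffled[:10])
control = sorted(shuffled[10:])

print("Treatment group:", treatment)
print("Control group:  ", control)
```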
3. Replication
Where measurement is made of a phenomenon that is subject to variation, it is important to carry out repeated measurements so that the variability associated with the phenomenon can be estimated.
4. Blocking
Blocking is the arrangement of experimental units into groups (blocks) that are similar to one another. Blocking reduces known but irrelevant sources of variation between units and thus allows greater precision in the estimation of the source of variation under study.
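A minimal sketch of blocking combined with randomization, in the style of a randomized complete block design; the block names, unit labels, and treatments below are hypothetical:

```python
import random

# Hypothetical blocks of similar units (e.g. fields, days, or litters).
blocks = {"block_A": ["a1", "a2", "a3"],
          "block_B": ["b1", "b2", "b3"],
          "block_C": ["c1", "c2", "c3"]}
treatments = ["T1", "T2", "T3"]

rng = random.Random(7)
assignment = {}
for block, units in blocks.items():
    # Every treatment appears once in every block; only the pairing of
    # unit to treatment within a block is randomized, so block-to-block
    # differences do not bias the treatment comparisons.
    order = treatments[:]
    rng.shuffle(order)
    for unit, trt in zip(units, order):
        assignment[unit] = (block, trt)

for unit, (block, trt) in sorted(assignment.items()):
    print(f"{unit}: {block} -> {trt}")
```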
5. Orthogonality
Orthogonality concerns the forms of comparison (contrasts) that can be legitimately and efficiently carried out. Contrasts can be represented by vectors, and sets of orthogonal contrasts are uncorrelated and independently distributed if the data are normal. Because of this independence, each orthogonal contrast provides different information from the others. If there are T treatments and T - 1 orthogonal contrasts, all the information that can be captured from the experiment is obtainable from the set of contrasts.
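A small numerical illustration, assuming T = 4 treatments and one conventional (Helmert-style) choice of T - 1 = 3 contrast vectors, checks that the contrasts sum to zero and are pairwise orthogonal:

```python
# Illustrative check of orthogonal contrasts for T = 4 treatments, so that
# T - 1 = 3 contrasts capture all between-treatment information. The
# particular vectors below are one conventional choice, used purely as an
# example; any set with the same properties would do.
contrasts = [
    [1, -1,  0,  0],   # treatment 1 vs treatment 2
    [1,  1, -2,  0],   # mean of treatments 1, 2 vs treatment 3
    [1,  1,  1, -3],   # mean of treatments 1, 2, 3 vs treatment 4
]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

for c in contrasts:
    assert sum(c) == 0                      # each contrast sums to zero
for i in range(len(contrasts)):
    for j in range(i + 1, len(contrasts)):
        assert dot(contrasts[i], contrasts[j]) == 0   # pairwise orthogonal

print("All contrasts sum to zero and are pairwise orthogonal.")
```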
6. Use of factorial experiments instead of the one-factor-at-a-time method. These are efficient at evaluating the effects and possible interactions of several factors (independent variables).
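A minimal sketch of a 2^3 full factorial design, assuming three hypothetical two-level factors; every one of the eight combinations is run, so main effects and interactions can all be estimated:

```python
from itertools import product

# A 2^3 full factorial design: every combination of three two-level factors
# is included. Factor names and levels are hypothetical, for illustration.
factors = {
    "temperature": ["low", "high"],
    "pressure":    ["low", "high"],
    "catalyst":    ["A", "B"],
}

runs = list(product(*factors.values()))
print(f"{len(runs)} runs:")
for i, levels in enumerate(runs, start=1):
    settings = ", ".join(f"{name}={lvl}" for name, lvl in zip(factors, levels))
    print(f"run {i}: {settings}")
```

By contrast, a one-factor-at-a-time study of the same three factors varies each factor separately from a baseline and cannot estimate the interactions between them.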
Analysis of the design of experiments was built on the foundation of the analysis of variance, a collection of models in which the observed variance is partitioned into components due to different factors which are estimated and/or tested.
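A minimal sketch of the underlying variance partition for a one-way layout, using invented data for three treatment groups, shows the total sum of squares splitting into between-treatment and within-treatment components:

```python
# One-way analysis of variance: the total sum of squares is partitioned into
# a between-treatment component and a within-treatment (error) component.
# The data below are invented purely to show the arithmetic.
groups = {
    "treatment_1": [4.1, 3.9, 4.3],
    "treatment_2": [5.0, 5.2, 4.8],
    "treatment_3": [6.1, 5.9, 6.3],
}

all_obs = [y for ys in groups.values() for y in ys]
grand_mean = sum(all_obs) / len(all_obs)

ss_between = sum(len(ys) * (sum(ys) / len(ys) - grand_mean) ** 2
                 for ys in groups.values())
ss_within = sum((y - sum(ys) / len(ys)) ** 2
                for ys in groups.values() for y in ys)
ss_total = sum((y - grand_mean) ** 2 for y in all_obs)

print(f"SS_between = {ss_between:.3f}")
print(f"SS_within  = {ss_within:.3f}")
print(f"SS_total   = {ss_total:.3f}  (= between + within, up to rounding)")
```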
Some efficient designs for estimating several main effects simultaneously were found by Raj Chandra Bose and K. Kishen in 1940 at the Indian Statistical Institute, but remained little known until the Plackett-Burman designs were published in Biometrika in 1946.
In 1950, Gertrude Mary Cox and William Gemmell Cochran published the book Experimental Designs which became the major reference work on the design of experiments for statisticians for years afterwards.
Developments of the theory of linear models have encompassed and surpassed the cases that concerned early writers. Today, the theory rests on advanced topics in abstract algebra and combinatorics.
As with all other branches of statistics, there is both classical and Bayesian experimental design.
Example
This example is attributed to Harold Hotelling in [1]. It conveys some of the flavor of those aspects of the subject that involve combinatorial designs.
The weights of eight objects are to be measured using a pan balance that measures the difference between the weight of the objects in the two pans. Each measurement has a random error: the average error is zero, the standard deviation of the probability distribution of the errors is the same number σ on different weighings, and errors on different weighings are independent. Denote the true weights by θ1, θ2, ..., θ8.
We consider two different experiments:
- Weigh each object in one pan, with the other pan empty. Call the measured weight of the ith object Xi for i = 1, ..., 8.
- Do the eight weighings according to the following schedule and let Yi be the measured difference (left pan minus right pan) for i = 1, ..., 8:

  Weighing   Left pan           Right pan
  1st        1 2 3 4 5 6 7 8    (empty)
  2nd        1 2 3 8            4 5 6 7
  3rd        1 4 5 8            2 3 6 7
  4th        1 6 7 8            2 3 4 5
  5th        2 4 6 8            1 3 5 7
  6th        2 5 7 8            1 3 4 6
  7th        3 4 7 8            1 2 5 6
  8th        3 5 6 8            1 2 4 7

Then the estimated value of the weight θ1 is

  (Y1 + Y2 + Y3 + Y4 − Y5 − Y6 − Y7 − Y8) / 8.

Similar estimates, using the same eight measurements with different signs, can be found for the weights of the other seven objects.
The question of design of experiments is: which experiment is better?
The variance of the estimate X1 of θ1 is σ2 if we use the first experiment, but if we use the second experiment the variance of the estimate given above is σ2/8. Thus the second experiment gives us 8 times as much precision for the estimate of a single object, and it estimates all eight objects simultaneously with that same precision.
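A simulation sketch of this comparison, using the schedule above (the true weights, the value of σ, the random seed, and the number of replications are arbitrary choices for the illustration), gives empirical variances close to σ2 and σ2/8:

```python
import random

# Signs for the second experiment: +1 means the object is in the left pan,
# -1 in the right pan; one row per weighing, matching the schedule above.
H = [
    [+1, +1, +1, +1, +1, +1, +1, +1],
    [+1, +1, +1, -1, -1, -1, -1, +1],
    [+1, -1, -1, +1, +1, -1, -1, +1],
    [+1, -1, -1, -1, -1, +1, +1, +1],
    [-1, +1, -1, +1, -1, +1, -1, +1],
    [-1, +1, -1, -1, +1, -1, +1, +1],
    [-1, -1, +1, +1, -1, -1, +1, +1],
    [-1, -1, +1, -1, +1, +1, -1, +1],
]

theta = [2.0, 3.5, 1.2, 4.8, 0.9, 2.7, 3.1, 1.6]  # arbitrary true weights
sigma = 1.0                                       # error standard deviation
rng = random.Random(0)
reps = 100_000

est1, est2 = [], []
for _ in range(reps):
    # Experiment 1: weigh object 1 alone; the estimate is X1 itself.
    est1.append(theta[0] + rng.gauss(0.0, sigma))
    # Experiment 2: eight difference weighings, then
    # estimate of theta1 = (Y1 + Y2 + Y3 + Y4 - Y5 - Y6 - Y7 - Y8) / 8.
    Y = [sum(h[j] * theta[j] for j in range(8)) + rng.gauss(0.0, sigma)
         for h in H]
    est2.append(sum(H[i][0] * Y[i] for i in range(8)) / 8.0)

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

print(f"variance, experiment 1: {var(est1):.4f}  (theory: {sigma**2:.4f})")
print(f"variance, experiment 2: {var(est2):.4f}  (theory: {sigma**2 / 8:.4f})")
```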
Many problems of the design of experiments involve combinatorial designs, as in this example.
Statistical control
It is best for a process to be in reasonable statistical control prior to conducting designed experiments. When this is not possible, proper blocking, replication, and randomization allow for the careful conduct of designed experiments.[2]
See also
- Control variable
- Dependent variable
- Independent variable
- Randomized controlled trial
- Sample size
- Statistics
- Statistical theory
- Survey sampling
- Taguchi methods
- Factorial experiment
- Fisher's inequality
- Applications:
- Process Analytical Technology (PAT)
- Test and learn
- Retail testing (commercial)
- Clinical trials
References
- ^ Herman Chernoff, Sequential Analysis and Optimal Design, SIAM Monograph, 1972.
- ^ Bisgaard, S. (2008). "Must a Process be in Statistical Control before Conducting Designed Experiments?", Quality Engineering, ASQ, 20 (2), pp. 143–176.
- Box, G. E. P., Hunter, W. G., Hunter, J. S., Statistics for Experimenters: Design, Innovation, and Discovery, 2nd Edition, Wiley, 2005, ISBN 0471718130.
- Pearl, J. Causality: Models, Reasoning and Inference, Cambridge University Press, 2000.
External links
- A chapter from a "NIST/SEMATECH Handbook on Engineering Statistics" at NIST
- Box-Behnken designs from a "NIST/SEMATECH Handbook on Engineering Statistics" at NIST
- Articles on Design of Experiments
- Czitrom (1999) "One-Factor-at-a-Time Versus Designed Experiments", American Statistician, 53, 2.
- Design Resources Server, a mobile library on Design of Experiments; new material is added to the site from time to time.
- Gosset: A General-Purpose Program for Designing Experiments
- Matlab SUrrogate MOdeling Toolbox - SUMO Toolbox - Matlab code for Design of Experiments + Sequential Design + Surrogate Modeling
- SAS Examples for Experimental Design
- WebDOE: a web site that offers free, online design of experiments.
Design of military experiments
- Code of Best Practice for Experimentation (CCRP, 2002)
- NATO Code of Best Practice for C2 Assessment (CCRP, 2002)
- Code of Best Practice for Campaigns of Experimentation (CCRP, 2005)
- The Logic of Warfighting Experiments (CCRP, 2006)