Design matrix

In statistics, a design matrix (also known as regressor matrix or model matrix) is a matrix of values of explanatory variables of a set of objects, often denoted by X. Each row represents an individual object, with the successive columns corresponding to the variables and their specific values for that object. The design matrix is used in certain statistical models, e.g., the general linear model.[1][2][3] It can contain indicator variables (ones and zeros) that indicate group membership in an ANOVA, or it can contain values of continuous variables.

The design matrix contains data on the independent variables (also called explanatory variables) in statistical models which attempt to explain observed data on a response variable (often called a dependent variable) in terms of the explanatory variables. The theory relating to such models makes substantial use of matrix manipulations involving the design matrix: see for example linear regression. A notable feature of the concept of a design matrix is that it is able to represent a number of different experimental designs and statistical models, e.g., ANOVA, ANCOVA, and linear regression.

Definition

The design matrix is defined to be a matrix $X$ such that $X_{ij}$ (the entry in the $i$th row and $j$th column of $X$) represents the value of the $j$th variable associated with the $i$th object.

A regression model which is a linear combination of the explanatory variables may therefore be represented via matrix multiplication as

$$ y = X\beta, $$

where $X$ is the design matrix, $\beta$ is a vector of the model's coefficients (one for each variable), and $y$ is the vector of predicted outputs for each object.
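
For illustration, once a design matrix has been assembled, the coefficient vector is typically estimated by ordinary least squares. The following sketch (Python with NumPy, using arbitrary made-up values) builds a small design matrix and recovers coefficient estimates from it:

    import numpy as np

    # Design matrix X: one row per object, one column per explanatory variable.
    # The values below are arbitrary and purely illustrative.
    X = np.array([[1.0, 0.5],
                  [1.0, 1.5],
                  [1.0, 2.5],
                  [1.0, 3.5]])

    # Observed responses y, one per object (again, made-up values).
    y = np.array([1.1, 2.0, 2.9, 4.1])

    # Ordinary least-squares estimate of beta in y = X beta, and the fitted values.
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_fit = X @ beta_hat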

Examples

Simple regression

This section gives an example of simple linear regression (that is, regression with only a single explanatory variable) with seven observations. The seven data points are $\{y_i, x_i\}$, for $i = 1, 2, \dots, 7$. The simple linear regression model is

$$ y_i = \beta_0 + \beta_1 x_i + \varepsilon_i, $$

where $\beta_0$ is the y-intercept and $\beta_1$ is the slope of the regression line. This model can be represented in matrix form as

$$ \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{bmatrix} = \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ 1 & x_3 \\ 1 & x_4 \\ 1 & x_5 \\ 1 & x_6 \\ 1 & x_7 \end{bmatrix} \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix} + \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \varepsilon_4 \\ \varepsilon_5 \\ \varepsilon_6 \\ \varepsilon_7 \end{bmatrix}, $$

where the first column of ones in the design matrix allows estimation of the y-intercept while the second column contains the x-values associated with the corresponding y-values.
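
As a hypothetical numerical sketch (the x- and y-values below are invented for illustration), this design matrix can be built and used to estimate the intercept and slope as follows:

    import numpy as np

    # Hypothetical data for the seven observations.
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
    y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9, 14.2])

    # Design matrix: a column of ones (for the intercept) next to the x-values.
    X = np.column_stack([np.ones_like(x), x])   # shape (7, 2)

    # Least-squares estimates of (beta_0, beta_1).
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)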

Multiple regression

This section contains an example of multiple regression with two covariates (explanatory variables): $w$ and $x$. Again suppose that the data consist of seven observations, and that for each observed value to be predicted ($y_i$), values $w_i$ and $x_i$ of the two covariates are also observed. The model to be considered is

$$ y_i = \beta_0 + \beta_1 w_i + \beta_2 x_i + \varepsilon_i. $$

This model can be written in matrix terms as

$$ \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{bmatrix} = \begin{bmatrix} 1 & w_1 & x_1 \\ 1 & w_2 & x_2 \\ 1 & w_3 & x_3 \\ 1 & w_4 & x_4 \\ 1 & w_5 & x_5 \\ 1 & w_6 & x_6 \\ 1 & w_7 & x_7 \end{bmatrix} \begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \end{bmatrix} + \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \varepsilon_4 \\ \varepsilon_5 \\ \varepsilon_6 \\ \varepsilon_7 \end{bmatrix}. $$

Here the 7×3 matrix on the right side is the design matrix.
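
A corresponding sketch (again with invented values for w, x, and y) stacks a column of ones and the two covariate columns to form the 7×3 design matrix:

    import numpy as np

    # Hypothetical covariate values and responses for the seven observations.
    w = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
    x = np.array([2.0, 1.0, 4.0, 3.0, 5.0, 7.0, 6.0])
    y = np.array([3.2, 2.8, 6.1, 5.0, 7.4, 9.9, 9.1])

    # 7×3 design matrix: intercept column, then the w and x columns.
    X = np.column_stack([np.ones(7), w, x])

    # Least-squares estimates of (beta_0, beta_1, beta_2).
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)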

One-way ANOVA (cell means model)

This section contains an example with a one-way analysis of variance (ANOVA) with three groups and seven observations. The given data set has the first three observations belonging to the first group, the following two observations belonging to the second group, and the final two observations belonging to the third group. If the model to be fit is just the mean of each group, then the model is

$$ y_{ij} = \mu_i + \varepsilon_{ij}, $$

which can be written

$$ \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \mu_1 \\ \mu_2 \\ \mu_3 \end{bmatrix} + \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \varepsilon_4 \\ \varepsilon_5 \\ \varepsilon_6 \\ \varepsilon_7 \end{bmatrix}. $$

It should be emphasized that in this model $\mu_i$ represents the mean of the $i$th group.
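
The indicator columns of this design matrix can be generated directly from a vector of group labels, as in the following sketch (the group labels follow the layout described above; the response values are invented):

    import numpy as np

    # Group membership of the seven observations: 1,1,1 then 2,2 then 3,3.
    groups = np.array([1, 1, 1, 2, 2, 3, 3])
    y = np.array([4.8, 5.1, 5.0, 7.2, 6.9, 3.1, 2.8])   # hypothetical responses

    # Indicator (0/1) columns: X[i, j] = 1 if observation i belongs to group j+1.
    X = (groups[:, None] == np.array([1, 2, 3])).astype(float)   # shape (7, 3)

    # Least squares recovers the three group means (mu_1, mu_2, mu_3),
    # which here equal the within-group averages of y.
    mu_hat, *_ = np.linalg.lstsq(X, y, rcond=None)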

One-way ANOVA (offset from reference group)

The ANOVA model could be equivalently written as each group parameter being an offset from some overall reference. Typically this reference point is taken to be one of the groups under consideration. This makes sense in the context of comparing multiple treatment groups to a control group, where the control group is considered the "reference". In this example, group 1 was chosen to be the reference group. As such the model to be fit is

$$ y_{ij} = \mu + \tau_i + \varepsilon_{ij}, $$

with the constraint that $\tau_1$ is zero. In matrix form this is

$$ \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \\ 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} \mu \\ \tau_2 \\ \tau_3 \end{bmatrix} + \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \varepsilon_4 \\ \varepsilon_5 \\ \varepsilon_6 \\ \varepsilon_7 \end{bmatrix}. $$

In this model $\mu$ is the mean of the reference group and $\tau_i$ is the difference from group $i$ to the reference group. $\tau_1$ is not included in the matrix because its difference from the reference group (itself) is necessarily zero.
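
Continuing the same hypothetical data as above, the reference-group coding replaces the three indicator columns with an intercept column plus indicators for groups 2 and 3 only:

    import numpy as np

    groups = np.array([1, 1, 1, 2, 2, 3, 3])
    y = np.array([4.8, 5.1, 5.0, 7.2, 6.9, 3.1, 2.8])   # hypothetical responses

    # Intercept column plus indicators for groups 2 and 3 (group 1 is the reference).
    X = np.column_stack([np.ones(7),
                         (groups == 2).astype(float),
                         (groups == 3).astype(float)])

    # Coefficients are (mu, tau_2, tau_3): the reference-group mean and the
    # offsets of groups 2 and 3 from it.
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)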

References

  1. Everitt, B. S. (2002). Cambridge Dictionary of Statistics (2nd ed.). Cambridge, UK: Cambridge University Press. ISBN 0-521-81099-X.
  2. Box, G. E. P.; Tiao, G. C. (1992) [1973]. Bayesian Inference in Statistical Analysis. New York: John Wiley and Sons. ISBN 0-471-57428-7. (Section 8.1.1)
  3. Timm, Neil H. (2007). Applied Multivariate Analysis. Springer Science & Business Media. p. 107.