Parameter space


In generative art, the parameter space is the set of possible parameter values of a generative system; exploring it amounts to exploring the range of outputs the system can produce.

In statistics, one studies the distribution of a random variable. Several models exist, the most common being the normal (or Gaussian) distribution. When the distribution is known explicitly, it typically depends on one or more parameters, and the parameter space is the set of values those parameters can take. For example, a coin toss can be modelled with a Bernoulli distribution of parameter p; in this case the parameter space is the interval [0,1].
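
As a minimal illustrative sketch (the sampling function below is hypothetical, not part of any standard library), a program working with the Bernoulli model would restrict its parameter to this space:

    import random

    def sample_bernoulli(p: float, n: int = 10) -> list:
        """Draw n Bernoulli(p) samples after checking that p lies in
        the parameter space [0, 1]."""
        if not 0.0 <= p <= 1.0:
            raise ValueError("p is outside the parameter space [0, 1]")
        return [1 if random.random() < p else 0 for _ in range(n)]

    print(sample_bernoulli(0.3))   # e.g. [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]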

More precisely, \Theta is a parameter space of dimension p\in\mathbb{N}^* if there exists a p-dimensional vector space E such that \Theta\subseteq E; p is called the number of parameters.

For example, \mathbb{R}\times\mathbb{R}^+ is a parameter space because it is contained in \mathbb{R}^2. It is the parameter space of the normal distribution, whose mean \mu ranges over \mathbb{R} and whose standard deviation \sigma ranges over \mathbb{R}^+.
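
For concreteness, the familiar density of the normal distribution (a standard formula, included here for illustration) shows how each point (\mu, \sigma) of this parameter space picks out one member of the family:

    f(x \mid \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right), \qquad (\mu, \sigma) \in \mathbb{R}\times\mathbb{R}^+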

The term parameter space, as used in data fitting (see, for example, Data Reduction and Error Analysis for the Physical Sciences by Bevington and Robinson), refers to the hypothetical space in which a "location" is defined by the values of all optimizable parameters. For example, if data are fitted using a function with 10 optimizable parameters, each parameter is treated as a dimension, and the parameter space is 10-dimensional. Every "location" then corresponds to a χ² (chi-squared) value indicating the goodness of fit, so that the parameter space carries a scalar "field". Following this field downwards leads to the "location" in parameter space with the lowest χ², i.e. the optimal parameter values.
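
As a minimal sketch of this procedure (the linear model, the data values, and the use of SciPy's general-purpose minimizer are illustrative assumptions, not taken from Bevington and Robinson), one can define χ² as a function over a 2-dimensional parameter space and follow it downhill:

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical measurements with uncertainties, to be fitted by
    # the 2-parameter model y = a*x + b.
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])
    sigma = np.full_like(y, 0.3)

    def chi_squared(params):
        """chi^2 at one 'location' (a, b) in parameter space."""
        a, b = params
        residuals = (y - (a * x + b)) / sigma
        return np.sum(residuals ** 2)

    # Follow the chi^2 'field' downwards from an initial guess.
    result = minimize(chi_squared, x0=[1.0, 0.0])
    print(result.x, result.fun)   # best-fit (a, b) and the minimal chi^2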

Alternatively, χ² can be thought of as an additional dimension. In this case, if two parameters are being optimized, the parameter space is still 2-dimensional, but adding χ² as a third dimension yields a 3-dimensional "goodness-of-fit" landscape in which the best fit is represented by the lowest point.
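
Continuing the same hypothetical example, the sketch below evaluates χ² on a grid of (a, b) locations and renders the resulting 3-dimensional landscape (matplotlib's plot_surface is one common way to do so):

    import numpy as np
    import matplotlib.pyplot as plt

    # Same hypothetical data as in the previous sketch.
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])
    sigma = 0.3

    # Evaluate chi^2 at every (a, b) location on a grid.
    a_grid, b_grid = np.meshgrid(np.linspace(1.0, 3.0, 50),
                                 np.linspace(-1.0, 2.0, 50))
    pred = a_grid[..., None] * x + b_grid[..., None]   # broadcast model
    chi2 = np.sum(((y - pred) / sigma) ** 2, axis=-1)

    # chi^2 as the third dimension: the best fit is the lowest point.
    ax = plt.figure().add_subplot(projection="3d")
    ax.plot_surface(a_grid, b_grid, chi2)
    ax.set_xlabel("a"); ax.set_ylabel("b"); ax.set_zlabel("chi^2")
    plt.show()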
