Multidimensional scaling

An example of classical multidimensional scaling applied to voting patterns in the United States House of Representatives. Each red dot represents one Republican member of the House, and each blue dot one Democrat.

Multidimensional scaling (MDS) is a means of visualizing the level of similarity of individual cases of a dataset. It refers to a set of related ordination techniques used in information visualization, in particular to display the information contained in a distance matrix. An MDS algorithm aims to place each object in N-dimensional space such that the between-object distances are preserved as well as possible. Each object is then assigned coordinates in each of the N dimensions. The number of dimensions N of an MDS plot can exceed 2 and is specified a priori. Choosing N = 2 optimizes the object locations for a two-dimensional scatterplot.[1]
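For example, given a precomputed dissimilarity matrix, a two-dimensional MDS configuration can be obtained with standard statistical software. The following minimal sketch uses the scikit-learn library; the library choice and the toy distance matrix are illustrative assumptions, not part of the method description.

```python
import numpy as np
from sklearn.manifold import MDS

# A toy 4x4 symmetric dissimilarity matrix (zero diagonal) -- illustrative only.
D = np.array([
    [0.0, 1.0, 2.0, 3.0],
    [1.0, 0.0, 1.5, 2.5],
    [2.0, 1.5, 0.0, 1.0],
    [3.0, 2.5, 1.0, 0.0],
])

# N = 2 so the result can be drawn as a two-dimensional scatterplot.
mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
X = mds.fit_transform(D)   # one 2-D coordinate per object
print(X)                   # pairwise distances of X approximate D
```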

Types

MDS algorithms fall into a taxonomy, depending on the meaning of the input matrix:

Classical multidimensional scaling
Also known as Principal Coordinates Analysis, Torgerson Scaling or Torgerson–Gower scaling. Takes an input matrix giving dissimilarities between pairs of items and outputs a coordinate matrix whose configuration minimizes a loss function called strain.[1] A computational sketch of this procedure is given after this list.
Metric multidimensional scaling
A superset of classical MDS that generalizes the optimization procedure to a variety of loss functions and to input matrices of known distances, possibly with weights. A useful loss function in this context is called stress, which is often minimized using a procedure called stress majorization.
Non-metric multidimensional scaling
In contrast to metric MDS, non-metric MDS finds both a non-parametric monotonic relationship between the dissimilarities in the item-item matrix and the Euclidean distances between items, and the location of each item in the low-dimensional space. The relationship is typically found using isotonic regression.
  • Louis Guttman's smallest space analysis (SSA) is an example of a non-metric MDS procedure.
Generalized multidimensional scaling
An extension of metric multidimensional scaling, in which the target space is an arbitrary smooth non-Euclidean space. In cases where the dissimilarities are distances on a surface and the target space is another surface, GMDS allows finding the minimum-distortion embedding of one surface into another.[2]
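To make the classical variant described above concrete, here is a brief NumPy sketch of Torgerson scaling (double-centering of the squared dissimilarities followed by an eigendecomposition); the function name and the synthetic test data are illustrative assumptions, not part of the article.

```python
import numpy as np

def classical_mds(D, n_components=2):
    """Classical (Torgerson) scaling of a symmetric dissimilarity matrix D.
    A sketch, not a production implementation."""
    n = D.shape[0]
    # Double-center the squared dissimilarities: B = -1/2 * J D^2 J
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    # Eigendecomposition of B; keep the largest non-negative eigenvalues.
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:n_components]
    scale = np.sqrt(np.maximum(eigvals[order], 0.0))
    return eigvecs[:, order] * scale   # coordinates, one row per object

# Example: recover a 2-D configuration from its Euclidean distance matrix.
rng = np.random.default_rng(0)
X_true = rng.normal(size=(6, 2))
D = np.linalg.norm(X_true[:, None, :] - X_true[None, :, :], axis=-1)
X_hat = classical_mds(D)   # equals X_true up to rotation, reflection, translation
```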

Details

The data to be analyzed is a collection of I objects (colors, faces, stocks, . . .) on which a distance function is defined,

\delta_{i,j} := \text{distance between the } i\text{-th and } j\text{-th objects}.

These distances are the entries of the dissimilarity matrix

\Delta := 
\begin{pmatrix}
\delta_{1,1} & \delta_{1,2} & \cdots & \delta_{1,I} \\
\delta_{2,1} & \delta_{2,2} & \cdots & \delta_{2,I} \\
\vdots & \vdots & & \vdots \\
\delta_{I,1} & \delta_{I,2} & \cdots & \delta_{I,I}
\end{pmatrix}.

The goal of MDS is, given \Delta, to find I vectors x_1,\ldots,x_I \in \mathbb{R}^N such that

\|x_i - x_j\| \approx \delta_{i,j} \quad \text{for all } i,j \in \{1,\dots,I\},

where \|\cdot\| is a vector norm. In classical MDS, this norm is the Euclidean distance, but, in a broader sense, it may be a metric or arbitrary distance function.[3]

In other words, MDS attempts to find an embedding from the I objects into \mathbb{R}^N such that distances are preserved. If the dimension N is chosen to be 2 or 3, we may plot the vectors x_i to obtain a visualization of the similarities between the I objects. Note that the vectors x_i are not unique: With the Euclidean distance, they may be arbitrarily translated, rotated, and reflected, since these transformations do not change the pairwise distances \|x_i - x_j\|.
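Because of this non-uniqueness, two configurations are normally compared only after aligning them. The short sketch below uses SciPy's Procrustes analysis to verify that a rotated, reflected, and translated copy of a configuration is equivalent to the original; the choice of library and the random test data are assumptions of the example.

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))

# A rotated, reflected, and translated copy of X has the same pairwise distances.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Y = (X @ R) * np.array([1.0, -1.0]) + np.array([3.0, -2.0])

# Procrustes alignment removes these transformations before comparison.
_, _, disparity = procrustes(X, Y)
print(disparity)   # essentially zero: the two configurations are equivalent
```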

(Note: The symbol \mathbb{R} indicates the set of real numbers, and the notation \mathbb{R}^N refers to the Cartesian product of N copies of \mathbb{R}, which is an N-dimensional vector space over the field of the real numbers.)

There are various approaches to determining the vectors x_i. Usually, MDS is formulated as an optimization problem, where (x_1,\ldots,x_I) is found as a minimizer of some cost function, for example,

 \min_{x_1,\ldots,x_I} \sum_{i<j} \left( \|x_i - x_j\| - \delta_{i,j} \right)^2.

A solution may then be found by numerical optimization techniques. For some particularly chosen cost functions, minimizers can be stated analytically in terms of matrix eigendecompositions.
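As an illustration of the numerical route, the following sketch minimizes the cost function above with a general-purpose optimizer; the toy dissimilarity matrix, the choice of SciPy, and the optimizer settings are assumptions of the example, not a prescribed method.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist, squareform

# Toy dissimilarity matrix Delta (symmetric, zero diagonal) -- illustrative only.
Delta = np.array([
    [0.0, 2.0, 3.0],
    [2.0, 0.0, 1.0],
    [3.0, 1.0, 0.0],
])
I, N = Delta.shape[0], 2   # number of objects and embedding dimension

def raw_stress(x_flat):
    # Sum over i < j of (||x_i - x_j|| - delta_ij)^2
    X = x_flat.reshape(I, N)
    d = squareform(pdist(X))
    diff = d - Delta
    return np.sum(np.triu(diff, k=1) ** 2)

x0 = np.random.default_rng(0).normal(size=I * N)
result = minimize(raw_stress, x0, method='L-BFGS-B')
X_hat = result.x.reshape(I, N)   # coordinates whose distances approximate Delta
```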

Procedure

There are several steps in conducting MDS research:

  1. Formulating the problem – What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?
  2. Obtaining input data – Respondents are asked a series of questions. For each product pair, they are asked to rate similarity (usually on a 7-point Likert scale from very similar to very dissimilar). The first question could compare Coke and Pepsi, the next Coke and Hires root beer, the next Pepsi and Dr Pepper, the next Dr Pepper and Hires root beer, and so on. The number of questions is a function of the number of brands and can be calculated as Q = N(N − 1)/2, where Q is the number of questions and N is the number of brands; for example, 10 brands require 10 × 9 / 2 = 45 pairwise comparisons. This approach is referred to as the “perception data: direct approach”. There are two other approaches: the “perception data: derived approach”, in which products are decomposed into attributes that are rated on a semantic differential scale, and the “preference data approach”, in which respondents are asked their preference rather than similarity.
  3. Running the MDS statistical program – Software for running the procedure is available in many statistical software packages. Often there is a choice between Metric MDS (which deals with interval or ratio level data), and Nonmetric MDS (which deals with ordinal data).
  4. Decide number of dimensions – The researcher must decide on the number of dimensions they want the computer to create. The more dimensions, the better the statistical fit, but the more difficult it is to interpret the results.
  5. Mapping the results and defining the dimensions – The statistical program (or a related module) will map the results. The map will plot each product (usually in two-dimensional space). The proximity of products to each other indicates either how similar they are or how preferred they are, depending on which approach was used. How the dimensions of the embedding actually correspond to dimensions of system behavior, however, is not necessarily obvious. Here, a subjective judgment about the correspondence can be made (see perceptual mapping).
  6. Test the results for reliability and validity – Compute R-squared to determine what proportion of variance of the scaled data can be accounted for by the MDS procedure. An R-squared of 0.6 is considered the minimum acceptable level; 0.8 is considered good for metric scaling and 0.9 for non-metric scaling. Other possible tests are Kruskal’s stress, split-data tests, data-stability tests (i.e., eliminating one brand), and test–retest reliability. (A computational sketch of two of these diagnostics is given after this list.)
  7. Report the results comprehensively – Along with the mapping, at least the distance measure (e.g., Sørensen index, Jaccard index) and the reliability (e.g., stress value) should be given. It is also advisable to report the algorithm (e.g., Kruskal, Mather), which is often determined by the program used (sometimes substituting for an explicit algorithm report); whether a start configuration was supplied or chosen randomly; the number of runs; the assessment of dimensionality; Monte Carlo method results; the number of iterations; the assessment of stability; and the proportion of variance of each axis (R-squared).
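The diagnostics mentioned in step 6 can be computed directly from the fitted configuration and the observed dissimilarities. The sketch below reports an R-squared value (here taken as the squared correlation between fitted distances and observed dissimilarities, one common convention) and Kruskal's Stress-1; the function name and this particular definition of R-squared are assumptions of the example.

```python
import numpy as np
from scipy.spatial.distance import pdist

def fit_diagnostics(X, Delta):
    """R-squared and Kruskal's Stress-1 for a configuration X and the
    observed dissimilarities Delta (square or condensed form). A sketch."""
    d = pdist(X)   # fitted inter-point distances (condensed form)
    delta = Delta[np.triu_indices_from(Delta, k=1)] if Delta.ndim == 2 else Delta
    # Proportion of variance in the dissimilarities explained by the distances.
    r = np.corrcoef(d, delta)[0, 1]
    r_squared = r ** 2
    # Kruskal's Stress-1: sqrt( sum (d - delta)^2 / sum d^2 )
    stress1 = np.sqrt(np.sum((d - delta) ** 2) / np.sum(d ** 2))
    return r_squared, stress1

# Example usage (X from any MDS run, Delta the observed dissimilarity matrix):
# rsq, stress1 = fit_diagnostics(X, Delta)
```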

Applications

Applications include scientific visualisation and data mining in fields such as cognitive science, information science, psychophysics, psychometrics, marketing and ecology. New applications arise in the context of autonomous wireless nodes that populate a space or an area; MDS may be applied in real time to monitor and manage such populations.

Furthermore, MDS has been used extensively in geostatistics, for modeling the spatial variability of the patterns of an image (by representing them as points in a lower-dimensional space),[4] and natural language processing, for modeling the semantic and affective relatedness of natural language concepts (by representing them as points in a 100-dimensional vector space).[5]

Marketing

In marketing, MDS is a statistical technique for taking the preferences and perceptions of respondents and representing them on a visual grid, called perceptual maps. By mapping multiple attributes and multiple brands at the same time, a greater understanding of the marketplace and of consumers' perceptions can be achieved, as compared with a basic two attribute perceptual map.[6]

Bioinformatics

MDS is becoming a popular method for sequence clustering and visualization. In bioinformatics, MDS is used to reduce dimensionality, taking as input the dissimilarity scores for each pair of sequences. These dissimilarity scores are usually calculated using sequence alignment. By mapping each sequence from the high-dimensional space to a space that can be inspected visually (such as 2D or 3D space), the relationships between sequence clusters can be observed easily.[7]
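As a hedged sketch of this workflow, the example below builds a pairwise dissimilarity matrix from toy sequences using a simple edit distance (standing in for the alignment-based scores the article describes) and embeds the sequences in two dimensions with scikit-learn; the sequences and the distance choice are illustrative assumptions.

```python
import numpy as np
from sklearn.manifold import MDS

def levenshtein(a, b):
    """Edit distance between two sequences (a simple stand-in for an
    alignment-based dissimilarity score)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

seqs = ["ACGTGA", "ACGTCA", "TTGTGA", "ACGGGAT"]   # toy sequences
D = np.array([[levenshtein(s, t) for t in seqs] for s in seqs], dtype=float)

# Embed the sequences in 2-D so that clusters can be inspected visually.
coords = MDS(n_components=2, dissimilarity='precomputed',
             random_state=0).fit_transform(D)
```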

Comparison and advantages

Potential customers are asked to compare pairs of products and make judgments about their similarity. Whereas other techniques (such as factor analysis, discriminant analysis, and conjoint analysis) obtain underlying dimensions from responses to product attributes identified by the researcher, MDS obtains the underlying dimensions from respondents’ judgments about the similarity of products. This is an important advantage. It does not depend on researchers’ judgments. It does not require a list of attributes to be shown to the respondents. The underlying dimensions come from respondents’ judgments about pairs of products. Because of these advantages, MDS is the most common technique used in perceptual mapping.

Bibliography

  1. Borg, I.; Groenen, P. (2005). Modern Multidimensional Scaling: theory and applications (2nd ed.). New York: Springer-Verlag. pp. 207–212. ISBN 0-387-94845-7.
  2. Bronstein AM, Bronstein MM, Kimmel R (January 2006). "Generalized multidimensional scaling: a framework for isometry-invariant partial surface matching". Proc. Natl. Acad. Sci. U.S.A. 103 (5): 1168–72. doi:10.1073/pnas.0508601103. PMC 1360551. PMID 16432211.
  3. Kruskal, J. B., and Wish, M. (1978), Multidimensional Scaling, Sage University Paper series on Quantitative Application in the Social Sciences, 07-011. Beverly Hills and London: Sage Publications.
  4. Honarkhah, M and Caers, J, 2010, Stochastic Simulation of Patterns Using Distance-Based Pattern Modeling, Mathematical Geosciences, 42: 487–517
  5. Cambria, E, Song, Y, Wang, H and Howard, N, 2013, "Semantic multi-dimensional scaling for open-domain sentiment analysis", IEEE Intelligent Systems
  6. Fripp G, 2014, Understanding Perceptual Maps for Marketing
  7. Yang R, 2012, DACIDR: deterministic annealed clustering with interpolative dimension reduction using a large collection of 16S rRNA sequences
  • Cox, T.F., Cox, M.A.A. (2001). Multidimensional Scaling. Chapman and Hall. 
  • Coxon, Anthony P.M. (1982). The User's Guide to Multidimensional Scaling. With special reference to the MDS(X) library of Computer Programs. London: Heinemann Educational Books. 
  • Green, P. (January 1975). "Marketing applications of MDS: Assessment and outlook". Journal of Marketing 39 (1): 24–31. doi:10.2307/1250799. 
  • McCune, B. and Grace, J.B. (2002). Analysis of Ecological Communities. Oregon, Gleneden Beach: MjM Software Design. ISBN 0-9721290-0-6. 
  • Torgerson, Warren S. (1958). Theory & Methods of Scaling. New York: Wiley. ISBN 0-89874-722-8. 
