Single linkage clustering

Single linkage (or "nearest neighbor") is a method of calculating distances between clusters in hierarchical cluster analysis. In single linkage, the distance between two clusters is the distance between the closest pair of elements, one from each cluster.

Mathematically, the linkage function, the distance D(X,Y) between clusters X and Y, is described by the expression

D(X,Y) = \min_{x \in X, y \in Y} d(x,y)

where

  • d(x,y) is the distance between elements x \in X and y \in Y;
  • X and Y are two sets of elements (clusters).
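
For illustration, this formula can be written as a short Python function; the names below are chosen for this sketch only and do not come from any standard library:

    def single_linkage_distance(cluster_x, cluster_y, d):
        """Single-linkage distance between two clusters: the smallest
        pairwise distance d(x, y) with x in cluster_x and y in cluster_y."""
        return min(d(x, y) for x in cluster_x for y in cluster_y)

    # Example with points on the real line and d(x, y) = |x - y|.
    print(single_linkage_distance([1.0, 2.0], [5.0, 2.5], lambda a, b: abs(a - b)))  # 0.5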

A drawback of this method is the so-called chaining phenomenon: clusters may be forced together due to single elements being close to each other, even though many of the elements in each cluster may be very distant from each other.

Algorithm

The following algorithm is an agglomerative scheme that erases rows and columns in a proximity matrix as old clusters are merged into new ones. The N \times N proximity matrix D contains all distances d(i,j). The clusterings are assigned sequence numbers 0, 1, ..., (N-1), and L(k) is the level of the kth clustering. A cluster with sequence number m is denoted (m), and the proximity between clusters (r) and (s) is denoted d[(r),(s)].

The algorithm is composed of the following steps (a code sketch of these steps is given after the list):

  1. Begin with the disjoint clustering having level L(0) = 0 and sequence number m = 0.
  2. Find the least dissimilar pair of clusters in the current clustering, say pair (r), (s), according to d[(r),(s)] = min d[(i),(j)], where the minimum is taken over all pairs of clusters in the current clustering.
  3. Increment the sequence number: m = m + 1. Merge clusters (r) and (s) into a single cluster to form the next clustering m. Set the level of this clustering to L(m) = d[(r),(s)].
  4. Update the proximity matrix, D, by deleting the rows and columns corresponding to clusters (r) and (s) and adding a row and column corresponding to the newly formed cluster. The proximity between the new cluster, denoted (r,s), and an old cluster (k) is defined as d[(k),(r,s)] = min(d[(k),(r)], d[(k),(s)]).
  5. If all objects are in one cluster, stop. Else, go to step 2.
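
A direct, if inefficient, rendering of these steps in Python might look like the following sketch; the names are illustrative, and D is taken to be a full symmetric distance matrix:

    def single_linkage_clustering(D):
        """Naive agglomerative single-linkage clustering following steps 1-5.

        D is a symmetric n x n matrix (list of lists) of pairwise distances.
        Returns the merge levels L(1), ..., L(n-1) and the merged pairs.
        """
        n = len(D)
        # Step 1: start from the disjoint clustering; each point is its own cluster.
        clusters = {i: [i] for i in range(n)}
        prox = {(i, j): D[i][j] for i in range(n) for j in range(n) if i < j}
        levels, merges = [], []
        while len(clusters) > 1:
            # Step 2: find the least dissimilar pair of clusters (r), (s).
            (r, s), level = min(prox.items(), key=lambda item: item[1])
            # Step 3: merge them and record the level L(m) = d[(r),(s)].
            levels.append(level)
            merges.append((r, s))
            clusters[r] = clusters[r] + clusters[s]
            del clusters[s]
            # Step 4: d[(k),(r,s)] = min(d[(k),(r)], d[(k),(s)]) for every other cluster (k),
            # then drop the rows/columns of the old cluster (s).
            for k in clusters:
                if k != r:
                    kr = (min(k, r), max(k, r))
                    ks = (min(k, s), max(k, s))
                    prox[kr] = min(prox[kr], prox[ks])
            prox = {pair: dist for pair, dist in prox.items() if s not in pair}
        # Step 5: all objects are now in one cluster.
        return levels, merges

    # Example: three points with pairwise distances 1, 2 and 4.
    print(single_linkage_clustering([[0, 1, 4], [1, 0, 2], [4, 2, 0]]))
    # ([1, 2], [(0, 1), (0, 2)])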

This is essentially the same as Kruskal's algorithm for minimum spanning trees. However, in single-linkage clustering the order in which clusters are formed matters, whereas for minimum spanning trees only the set of edges (pairs of points) chosen by the algorithm matters, not the order in which they are chosen.
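
This correspondence can be checked numerically: the single-linkage merge levels coincide with the sorted edge weights of a minimum spanning tree over the same points. A sketch, assuming SciPy and NumPy are available (neither is referenced in this article):

    import numpy as np
    from scipy.cluster.hierarchy import linkage
    from scipy.sparse.csgraph import minimum_spanning_tree
    from scipy.spatial.distance import pdist, squareform

    points = np.random.default_rng(0).random((10, 2))   # ten random points in the plane

    # Single-linkage merge levels (third column of the linkage matrix).
    merge_levels = linkage(pdist(points), method="single")[:, 2]

    # Edge weights of a minimum spanning tree of the complete distance graph.
    mst = minimum_spanning_tree(squareform(pdist(points))).toarray()
    mst_weights = np.sort(mst[mst > 0])

    print(np.allclose(np.sort(merge_levels), mst_weights))  # True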

Alternative linkage schemes include complete linkage and average linkage; implementing a different linkage is simply a matter of using a different formula for the inter-cluster distances that are maintained and used in steps 2 and 4 of the above algorithm. Concretely, the formula to adjust is the min in the step-4 update d[(k),(r,s)] = min(d[(k),(r)], d[(k),(s)]).
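
For example, complete linkage replaces that min with a max, and average linkage uses a mean of all pairwise distances. Illustrative pairwise cluster-distance functions, in the same style as the earlier sketches (names chosen here for illustration):

    def complete_linkage_distance(cluster_x, cluster_y, d):
        """Complete linkage: the distance between the two *furthest* elements,
        so the step-4 update becomes d[(k),(r,s)] = max(d[(k),(r)], d[(k),(s)])."""
        return max(d(x, y) for x in cluster_x for y in cluster_y)

    def average_linkage_distance(cluster_x, cluster_y, d):
        """Average linkage: the mean of all pairwise distances between the clusters."""
        pairs = [d(x, y) for x in cluster_x for y in cluster_y]
        return sum(pairs) / len(pairs)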
