K-medoids

From Wikipedia, the free encyclopedia

The k-medoids algorithm is a clustering algorithm related to the k-means algorithm. Both algorithms are partitional (they break the data set up into groups) and both attempt to minimize the distance between points labeled to be in a cluster and a point designated as the center of that cluster. In contrast to the k-means algorithm, k-medoids chooses data points as centers (medoids or exemplars).


k-medoids is a classical partitioning technique of clustering that partitions a data set of n objects into k clusters, with the number of clusters k known a priori.

Compared to k-means, it is more robust to noise and outliers.

A medoid can be defined as the object of a cluster whose average dissimilarity to all the objects in the cluster is minimal, i.e. it is the most centrally located point in the cluster.
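As an illustration of this definition (not part of the original article), here is a minimal Python sketch that finds the medoid of a cluster of 2-D points under the Manhattan distance used in the demonstration below; the function name is illustrative:

    def medoid(points):
        # Manhattan (city-block) distance between two 2-D points
        dist = lambda p, q: sum(abs(a - b) for a, b in zip(p, q))
        # the medoid is the object whose total (equivalently, average)
        # dissimilarity to all objects in the cluster is minimal
        return min(points, key=lambda c: sum(dist(c, x) for x in points))

    # e.g. the second cluster found in the demonstration below
    print(medoid([(7, 4), (6, 2), (6, 4), (7, 3), (8, 5), (7, 6)]))  # (7, 4)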
The k-medoids clustering algorithm is as follows (a Python sketch of the whole procedure follows the list):

1) The algorithm begins with an arbitrary selection of k objects as medoid points out of the n data points (n > k).

2) After selection of the k medoid points, associate each data object in the given data set with the most similar medoid. Similarity here is defined using a distance measure, which can be Euclidean distance, Manhattan distance or Minkowski distance.

3) Randomly select a non-medoid object O’.

4) Compute the total cost S of swapping the initial medoid object with O’.

5) If S < 0, swap the initial medoid with the new one (if S < 0 there will be a new set of medoids).

6) Repeat steps 2 to 5 until there is no change in the medoids.
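A minimal Python sketch of these six steps, assuming 2-D points and Manhattan distance as in the demonstration below (the names, and the fixed number of random swap trials standing in for the convergence test of step 6, are illustrative simplifications):

    import random

    def manhattan(p, q):
        # step 2's distance measure: Manhattan (city-block) distance
        return sum(abs(a - b) for a, b in zip(p, q))

    def total_cost(points, medoids):
        # assign every object to its most similar medoid and sum the costs
        return sum(min(manhattan(x, m) for m in medoids) for x in points)

    def k_medoids(points, k, trials=1000):
        # step 1: arbitrarily select k objects as the initial medoids
        medoids = random.sample(points, k)
        cost = total_cost(points, medoids)
        for _ in range(trials):
            # step 3: randomly select a non-medoid object O'
            candidate = random.choice([x for x in points if x not in medoids])
            # step 4: compute the total cost S of swapping a medoid with O'
            i = random.randrange(k)
            trial = medoids[:i] + [candidate] + medoids[i + 1:]
            s = total_cost(points, trial) - cost
            # step 5: if S < 0, accept the new set of medoids
            if s < 0:
                medoids, cost = trial, cost + s
        return medoids, cost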

Demonstration of the algorithm

Cluster the following data set of 10 objects into 2 clusters, i.e. k = 2.

Consider a data set of 10 objects as follows:

The objects are the points (3,4), (2,6), (3,8), (4,7), (7,4), (6,2), (6,4), (7,3), (8,5) and (7,6).

Figure 1.1: distribution of the data




Step 1:

Initialize the k centers. Let us assume c1 = (3,4) and c2 = (7,4) are selected as the medoids. Calculate the distances so as to associate each data object with its nearest medoid.

The distance of each remaining object to the two medoids is:

    Object   c1 = (3,4)   c2 = (7,4)
    (2,6)         3            7
    (3,8)         4            8
    (4,7)         4            6
    (6,2)         5            3
    (6,4)         3            1
    (7,3)         5            1
    (8,5)         6            2
    (7,6)         6            2

So the clusters become:

Cluster1 = {(3,4), (2,6), (3,8), (4,7)}
Cluster2 = {(7,4), (6,2), (6,4), (7,3), (8,5), (7,6)}

Since the points (2,6), (3,8) and (4,7) are closer to c1, they form one cluster, while the remaining points form the other.

So the total cost involved is 20, where the cost between any two points is found using the formula

cost(x, c) = \sum_{i=1}^{d} |x_i - c_i|

where x is any data object, c is the medoid, and d is the dimension of the object, which in this case is 2; this cost is the Manhattan distance.

The total cost is the summation of the cost of each data object from its medoid in its cluster, so here


Total cost = {cost((3,4),(2,6)) + cost((3,4),(3,8)) + cost((3,4),(4,7))}
           + {cost((7,4),(6,2)) + cost((7,4),(6,4)) + cost((7,4),(7,3)) + cost((7,4),(8,5)) + cost((7,4),(7,6))}
           = 3 + 4 + 4 + 3 + 1 + 1 + 2 + 2
           = 20
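This arithmetic can be checked with a short self-contained Python snippet (manhattan and total_cost are as in the sketch above; data lists the 10 objects):

    def manhattan(p, q):
        return sum(abs(a - b) for a, b in zip(p, q))

    def total_cost(points, medoids):
        return sum(min(manhattan(x, m) for m in medoids) for x in points)

    data = [(3, 4), (2, 6), (3, 8), (4, 7), (7, 4),
            (6, 2), (6, 4), (7, 3), (8, 5), (7, 6)]
    # medoids c1 = (3,4) and c2 = (7,4)
    print(total_cost(data, [(3, 4), (7, 4)]))  # prints 20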

Figure 1.2: clusters after step 1


Step 2: randomly select a non-medoid object O’. Let us assume O’ = (7,3).

So now the candidate medoids are c1 = (3,4) and O’ = (7,3). If c1 and O’ are the new medoids, calculate the total cost involved by using the formula from step 1.


The distance of each remaining object to the candidate medoids is:

    Object   c1 = (3,4)   O’ = (7,3)
    (2,6)         3            8
    (3,8)         4            9
    (4,7)         4            7
    (7,4)         4            1
    (6,2)         5            2
    (6,4)         3            2
    (8,5)         6            3
    (7,6)         6            3



Figure 1.3: clusters after step 2


Total cost = 3 + 4 + 4 + 2 + 2 + 1 + 3 + 3 = 22

So the cost of swapping the medoid from c2 to O’ is

S = current total cost - past total cost
  = 22 - 20
  = 2 > 0

Since S > 0, moving to O’ would be a bad idea, so the previous choice was good and the algorithm terminates here (i.e. there is no change in the medoids).
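The same check in code, continuing the verification snippet above (data and total_cost as defined there):

    # step 2's candidate configuration: c1 = (3,4) and O' = (7,3)
    S = total_cost(data, [(3, 4), (7, 3)]) - total_cost(data, [(3, 4), (7, 4)])
    print(S)  # prints 2; S > 0, so the swap is rejected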

During a swap, some data points may shift from one cluster to another, depending on their closeness to the new medoids.
