Hierarchical Clustering

Hierarchical Methods
Hierarchical methods create a hierarchical decomposition of a given set of data objects.
In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:

  • Agglomerative: This is a "bottom-up" approach: each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy.  
  • Divisive: This is a "top-down" approach: all observations start in one cluster, and splits are performed recursively as one moves down the hierarchy.

In general, the merges and splits are determined in a greedy manner. The results of hierarchical clustering are usually presented in a dendrogram (a type of tree diagram showing the hierarchical relationships between sets of data).
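To make the idea concrete, here is a minimal sketch (assuming NumPy, SciPy, and Matplotlib are installed; the data is a toy example) that builds an agglomerative hierarchy and draws the dendrogram:

# A minimal sketch of hierarchical clustering with SciPy; the
# two well-separated groups below are purely illustrative.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=42)
X = np.vstack([rng.normal(0, 1, (5, 2)),   # one illustrative group
               rng.normal(5, 1, (5, 2))])  # a second, well-separated group

Z = linkage(X, method="average")  # bottom-up merges, average linkage
dendrogram(Z)                     # visualize the hierarchy
plt.xlabel("observation index")
plt.ylabel("merge distance")
plt.show()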

Agglomerative Hierarchical Clustering

Agglomerative hierarchical clustering is the most common type of hierarchical clustering used to group objects into clusters based on their similarity. It is also known as AGNES (Agglomerative Nesting). It is a "bottom-up" approach: each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy.

How does it work?

  1. Make each data point a single-point cluster → forms N clusters.
  2. Take the two closest data points and merge them into one cluster → forms N−1 clusters.
  3. Take the two closest clusters and merge them into one cluster → forms N−2 clusters.
  4. Repeat step 3 until only one cluster is left.
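
A minimal usage sketch of this bottom-up procedure, assuming scikit-learn is available (the toy data and the choice of two clusters are illustrative):

# Bottom-up clustering with scikit-learn; merging stops once the
# requested number of clusters remains.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

X = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.8], [9.0, 1.0]])

model = AgglomerativeClustering(n_clusters=2, linkage="ward")
labels = model.fit_predict(X)
print(labels)  # cluster assignment per observation; exact ids depend on the merges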

Pseudo Code Steps:

  1. Begin with n clusters, each containing one object, and number the clusters 1 through n.
  2. Compute the between-cluster distance D(r, s) as the between-object distance of the two objects in r and s respectively, for r, s = 1, 2, ..., n, and let D = (D(r, s)) be the resulting square matrix. If the objects are represented by quantitative vectors, Euclidean distance can be used.
  3. Find the most similar pair of clusters r and s, i.e. the pair whose distance D(r, s) is minimum among all the pairwise distances.
  4. Merge r and s into a new cluster t and compute the between-cluster distance D(t, k) for every existing cluster k ≠ r, s. Once these distances are obtained, delete the rows and columns corresponding to the old clusters r and s in the matrix D, since they no longer exist, and add a new row and column for cluster t.
  5. Repeat step 3 a total of n − 1 times, until only one cluster is left.
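
The pseudo code translates fairly directly into Python. The sketch below is a naive implementation, assuming Euclidean distance and a single-linkage update for D(t, k) in step 4, i.e. D(t, k) = min(D(r, k), D(s, k)):

# A direct, naive translation of the pseudo code above (a sketch,
# not an optimized implementation).
import numpy as np

def agglomerative(X):
    # Step 1: every object starts as its own cluster.
    clusters = [[i] for i in range(len(X))]
    # Step 2: pairwise Euclidean distance matrix D.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)          # ignore self-distances
    merges = []
    while len(clusters) > 1:
        # Step 3: find the closest pair of clusters (r, s).
        r, s = np.unravel_index(np.argmin(D), D.shape)
        r, s = min(r, s), max(r, s)
        merges.append((clusters[r], clusters[s], D[r, s]))
        # Step 4: merge s into r, update row/column r, delete row/column s.
        D[r, :] = np.minimum(D[r, :], D[s, :])   # single-linkage update
        D[:, r] = D[r, :]
        D[r, r] = np.inf
        D = np.delete(np.delete(D, s, axis=0), s, axis=1)
        clusters[r] = clusters[r] + clusters[s]
        del clusters[s]
    return merges  # the n - 1 merges, from first to last

X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
for a, b, d in agglomerative(X):
    print(f"merge {a} + {b} at distance {d:.2f}")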

 

There are several ways to measure the distance between clusters in order to decide the rules for clustering; these are often called linkage methods. Some of the common linkage methods, compared in the sketch after this list, are:

  • Complete-linkage: the distance between two clusters is defined as the longest distance between a point in one cluster and a point in the other.
  • Single-linkage: the distance between two clusters is defined as the shortest distance between a point in one cluster and a point in the other. Single linkage can also help flag outliers, since isolated observations are merged into the hierarchy last.
  • Average-linkage: the distance between two clusters is defined as the average of the distances from each point in one cluster to every point in the other.
  • Centroid-linkage: computes the centroid of each cluster and uses the distance between the two centroids.
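
The sketch below runs the same toy data through each linkage method with SciPy (assuming it is installed) to show how the merge heights differ:

# Comparing linkage methods on identical data; only the rule for
# measuring cluster-to-cluster distance changes.
import numpy as np
from scipy.cluster.hierarchy import linkage

X = np.array([[0.0, 0.0], [0.0, 1.0], [4.0, 0.0], [4.0, 1.5]])

for method in ("single", "complete", "average", "centroid"):
    Z = linkage(X, method=method)  # Euclidean distances computed internally
    print(method, "-> final merge distance:", round(float(Z[-1, 2]), 3))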

Divisive Hierarchical Clustering

Divisive clustering, also known as DIANA (DIvisive ANAlysis), is a top-down clustering method: all observations are first assigned to a single cluster, which is then partitioned into the two least similar clusters. We proceed recursively on each cluster until there is one cluster per observation. This approach is exactly the opposite of agglomerative clustering.
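
A simplified sketch of one DIANA-style split follows (an illustration of the idea, not the full published algorithm; the helper name diana_split is hypothetical). The observation with the largest average dissimilarity seeds a "splinter" group, and points closer on average to the splinter than to the rest migrate into it:

# One top-down split in the spirit of DIANA; recurse on each half
# to build the full hierarchy.
import numpy as np

def diana_split(X, idx):
    """Split the cluster given by index list idx into two sub-clusters."""
    D = np.linalg.norm(X[idx][:, None, :] - X[idx][None, :, :], axis=-1)
    n = len(idx)
    # Seed: the point most dissimilar, on average, to the others.
    seed = int(np.argmax(D.sum(axis=1) / (n - 1)))
    splinter, rest = [seed], [i for i in range(n) if i != seed]
    while True:
        moved = False
        for i in list(rest):
            others = [j for j in rest if j != i]
            if not others:                 # nothing left to compare against
                break
            if D[i, splinter].mean() < D[i, others].mean():
                rest.remove(i)             # point migrates to the splinter
                splinter.append(i)
                moved = True
        if not moved:
            break
    return [idx[i] for i in splinter], [idx[i] for i in rest]

X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9], [0.2, 0.1]])
print(diana_split(X, list(range(len(X)))))  # recurse on each half for a full tree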

There is evidence that divisive algorithms produce more accurate hierarchies than agglomerative algorithms in some circumstances, but they are conceptually more complex.

In both agglomerative and divisive hierarchical clustering, users need to specify the desired number of clusters as a termination condition (i.e., when to stop merging or splitting).
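
In practice, with SciPy the full tree is built first and then cut at the desired number of clusters; the sketch below (toy data) stops at two clusters:

# Cutting the hierarchy at a chosen cluster count.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.8], [9.0, 9.0]])
Z = linkage(X, method="complete")

labels = fcluster(Z, t=2, criterion="maxclust")  # stop at 2 clusters
print(labels)  # cluster id per observation, e.g. [1 1 2 2 2]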

 

https://en.wikipedia.org/wiki/Hierarchical_clustering

https://www.kdnuggets.com/2019/09/hierarchical-clustering.html 
