Chameleon: A Hierarchical Clustering Algorithm Using Dynamic Modeling (slides)


Chameleon: A Hierarchical Clustering Algorithm Using Dynamic Modeling
by George Karypis, Eui-Hong Han, and Vipin Kumar
(and not by Prashant Thiruvengadachari)

Existing algorithms
- K-means and PAM: the algorithm assigns K representative points to the clusters and tries to form clusters based on a distance measure.

More algorithms
- Other algorithms include CURE, ROCK, CLARANS, etc.
- CURE takes into account the distance between representatives.
- ROCK takes into account inter-cluster aggregate connectivity.

Chameleon
- A two-phase approach:
- Phase I uses a graph partitioning algorithm to divide the data set into a set of individual clusters.
- Phase II uses an agglomerative hierarchical clustering algorithm to merge the clusters.

So, basically
- Why not stop with Phase I? We've got the clusters, haven't we?
- Chameleon (Phase II) takes into account:
  - inter-connectivity
  - relative closeness
- Hence, Chameleon takes into account features intrinsic to a cluster.

Constructing a sparse graph
- Uses k-nearest neighbours (KNN).
- Data points that are far away are avoided entirely by the algorithm, reducing the noise in the dataset.
- Captures the concept of neighbourhood dynamically by taking into account the density of the region.

What do you do with the graph?
- Partition the KNN graph such that the edge cut is minimized.
- Reason: since edge weights represent similarity between points, a smaller edge cut means less similarity is broken by the partition.
- Multi-level graph partitioning algorithms (the hMeTiS library) are used to partition the graph.

Example:

Cluster similarity
- Models cluster similarity based on the relative inter-connectivity and relative closeness of the clusters.

Relative inter-connectivity
- For clusters Ci and Cj:
  RI(Ci, Cj) = AbsoluteIC(Ci, Cj) / ((internalIC(Ci) + internalIC(Cj)) / 2)
- where AbsoluteIC(Ci, Cj) = the sum of the weights of the edges that connect Ci with Cj,
- and internalIC(Ci) = the weighted sum of the edges that partition the cluster into roughly equal parts.

Relative closeness
- Absolute closeness, normalized with respect to the internal closeness of the two clusters.
- Absolute closeness is obtained from the average similarity between the points in Ci that are connected to the points in Cj, i.e. the average weight of the edges from Ci to Cj.

Internal closeness
- The internal closeness of a cluster is obtained by averaging the weights of the edges in the cluster.

So, which clusters do we merge?
- So far, we have:
  - a relative inter-connectivity measure
  - a relative closeness measure
- Using them: if the relative inter-connectivity measure and the relative closeness measure are the same, choose inter-connectivity.
- You can also use thresholds: RI(Ci, Cj) >= T(RI) and RC(Ci, Cj) >= T(RC).
- This allows multiple clusters to merge at each level.

Merging the clusters

Good points about the paper
- Nice description of the working of the system.
- Gives a note of existing algorithms and why Chameleon is better.
- Not specific to a particular domain.

Yucky and reasonably yucky parts
- Not much information is given about the Phase-I part of the paper (graph properties?).
- Finding the complexity of the algorithm: O(nm + n log n + m^2 log m).
- Different domains require different measures for connectivity and closeness.

Questions?
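The Phase-I preprocessing described above (build a sparse KNN graph, so that far-away points never share an edge) can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: the `knn_graph` function and the similarity weight `1/(1+distance)` are assumptions for demonstration; the paper only requires some similarity-based edge weight.

```python
import math

def knn_graph(points, k):
    """Build a symmetric k-nearest-neighbour graph (illustrative sketch).

    Nodes are point indices; edge weights are similarities, here taken as
    1/(1+distance), an assumed choice. Points outside each other's
    k-neighbourhoods get no edge, which is what keeps the graph sparse and
    drops far-away noise points, as the slides describe.
    """
    n = len(points)
    edges = {}  # (i, j) with i < j -> similarity weight
    for i in range(n):
        # Sort the other points by distance from point i.
        dists = sorted(
            (math.dist(points[i], points[j]), j)
            for j in range(n) if j != i
        )
        # Keep only the k nearest; symmetry comes from the (min, max) key.
        for d, j in dists[:k]:
            edges[(min(i, j), max(i, j))] = 1.0 / (1.0 + d)
    return edges

# Two tight groups far apart: with k=2, no edge should cross between them.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
g = knn_graph(pts, k=2)
crossing = [e for e in g if (e[0] < 3) != (e[1] < 3)]
```

Because the two groups are far apart, every point's two nearest neighbours lie in its own group, so `crossing` is empty: the noise-suppression property the slides attribute to the KNN construction.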
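The Phase-II measures above can be computed directly from the slides' simplified definitions. A minimal sketch, with several stated assumptions: the function names and all numeric values are invented for illustration; the internal inter-connectivity values would in practice come from a min-cut bisection of each cluster, and the paper's relative-closeness denominator is weighted by cluster sizes, whereas this sketch uses the plain average given on the slides.

```python
def relative_interconnectivity(abs_ic, internal_ic_i, internal_ic_j):
    """RI(Ci, Cj) = AbsoluteIC(Ci, Cj) / ((internalIC(Ci) + internalIC(Cj)) / 2)."""
    return abs_ic / ((internal_ic_i + internal_ic_j) / 2.0)

def relative_closeness(cross_edge_weights, internal_close_i, internal_close_j):
    """Mean weight of the Ci-Cj edges, normalized by mean internal closeness.

    Simplified per the slides; the paper's version weights the denominator
    by cluster sizes.
    """
    abs_close = sum(cross_edge_weights) / len(cross_edge_weights)
    return abs_close / ((internal_close_i + internal_close_j) / 2.0)

def should_merge(ri, rc, t_ri, t_rc):
    """Threshold scheme from the slides: merge when RI >= T(RI) and
    RC >= T(RC), which lets several cluster pairs merge at each level."""
    return ri >= t_ri and rc >= t_rc

# Toy numbers (assumed, not from the paper): two clusters whose internal
# bisection weights are 4.0 and 6.0, joined by three cross edges.
ri = relative_interconnectivity(abs_ic=4.5, internal_ic_i=4.0, internal_ic_j=6.0)
rc = relative_closeness([0.8, 0.9, 1.0], internal_close_i=1.0, internal_close_j=0.8)
merge = should_merge(ri, rc, t_ri=0.8, t_rc=0.9)
```

Here RI = 4.5 / 5.0 = 0.9 and RC = 0.9 / 0.9 = 1.0, so both thresholds are met and the pair is merged; with thresholds instead of a single best pair, other pairs passing the same test could merge in the same level.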
