An Improved K-Nearest Neighbor Classification Algorithm
摘要 (Abstract): This paper proposes an improved K-nearest neighbor (K-NN) classification algorithm. The algorithm first clusters the samples of each class in the training set, which both reduces the size of the training data and removes isolated points (outliers), greatly improving the algorithm's speed and prediction accuracy and making it applicable to very large data sets. In addition, the weight of each attribute is computed with a neural network according to its contribution to classification, and these attribute weights are used in the nearest-neighbor distance computation, further improving classification accuracy. Experimental results on several benchmark databases and real-world databases show that the algorithm is well suited to classifying complex databases containing large volumes of data.
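As a rough illustration of the condensation step described above, the sketch below clusters each class separately and keeps only the cluster centers as training prototypes, discarding very small clusters as isolated points. It uses k-means as a stand-in for the CURE clustering named in the paper; the function name, parameters, and the small-cluster heuristic are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of the training-set condensation step: cluster the samples
# of each class separately and keep only the cluster centers as prototypes.
# K-means is used here as a stand-in for CURE clustering; all names and
# parameter values below are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def condense_training_set(X, y, clusters_per_class=5, min_cluster_size=3):
    """Replace each class's samples by its cluster centers, dropping
    tiny clusters that are likely to be isolated points (outliers)."""
    prototypes, proto_labels = [], []
    for label in np.unique(y):
        X_c = X[y == label]
        k = min(clusters_per_class, len(X_c))
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_c)
        counts = np.bincount(km.labels_, minlength=k)
        for j in range(k):
            # Very small clusters are treated as outliers and removed.
            if counts[j] >= min_cluster_size:
                prototypes.append(km.cluster_centers_[j])
                proto_labels.append(label)
    return np.array(prototypes), np.array(proto_labels)
```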
关键词 (Keywords):
- K-nearest neighbor; clustering; weight adjustment; classification
Abstract: This paper presents an improved K-NN algorithm. CURE clustering is first applied to select a subset of the training set, which reduces the volume of the training data and removes outliers, leading both to greater computational efficiency and to higher classification accuracy. In the algorithm, the weight of each feature is learned with a neural network, and these feature weights are used in the nearest-neighbor distance computation so that the more important features contribute more to the distance. Experiments on several UCI databases and practical data sets show the efficiency of the algorithm.
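The feature-weighted nearest-neighbor step can be sketched roughly as follows: each feature dimension is scaled by its learned weight before the distance is computed, so features that contribute more to classification dominate the neighbor search. The function name, the choice of k, and the hinted-at way of deriving weights from a trained network are assumptions for illustration only.

```python
# Minimal sketch of feature-weighted nearest-neighbor classification:
# attribute weights (assumed to be learned elsewhere, e.g. from a trained
# neural network) scale each dimension in the distance computation.
# Names and parameters are illustrative assumptions.
import numpy as np

def weighted_knn_predict(X_train, y_train, x_query, feature_weights, k=5):
    """Classify one query point by majority vote among its k nearest
    prototypes under a feature-weighted Euclidean distance."""
    diffs = X_train - x_query
    dists = np.sqrt(((diffs ** 2) * feature_weights).sum(axis=1))
    nearest = np.argsort(dists)[:k]
    votes = y_train[nearest]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]

# Example weight heuristic (an assumption, not the paper's exact scheme):
# take normalized magnitudes of a trained network's input-layer weights, e.g.
#   w = np.abs(first_layer_weights).sum(axis=0); feature_weights = w / w.sum()
```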