Volume 39, Issue 4, Apr. 2017
Citation: ZHANG Tao, TANG Zhenmin. Improved Algorithm Based on Non-negative Low Rank and Sparse Graph for Semi-supervised Learning[J]. Journal of Electronics & Information Technology, 2017, 39(4): 915-921. doi: 10.11999/JEIT160559

Improved Algorithm Based on Non-negative Low Rank and Sparse Graph for Semi-supervised Learning

doi: 10.11999/JEIT160559
Funds: The National Natural Science Foundation of China (61473154)

  • Received Date: 2016-05-28
  • Rev Recd Date: 2016-09-23
  • Publish Date: 2017-04-19
  • Abstract: The semi-supervised learning algorithm based on the non-negative low rank and sparse (NNLRS) graph cannot describe the structure of the data exactly. Therefore, an improved algorithm integrating smoothed low rank representation with a weighted sparsity constraint is proposed. The algorithm improves the low rank term and the sparse term of the classical model respectively, so that both the global subspace structure and the locally linear structure of the data are captured exactly. When building the objective function, the log-determinant function is used in place of the nuclear norm to approximate the rank function smoothly (a sketch of this surrogate follows below), while the shape interaction information and the label information of the labeled samples are used to build the weighted sparsity regularization term. The objective function is then solved by a linearized alternating direction method with adaptive penalty, and the graph is rebuilt by an available post-processing step. Finally, a semi-supervised classification framework based on local and global consistency finishes the learning task (a graph construction and label propagation sketch also follows). Experimental results on the ORL, Extended Yale B, and USPS databases show that the improved algorithm raises the accuracy of semi-supervised learning.
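The central modeling change above is the rank surrogate. One common form of the log-determinant heuristic (Fazel et al.; Kang et al.) replaces rank(Z) with log det(I + Z^T Z) = Σ_i log(1 + σ_i(Z)^2), which is smooth and, unlike the nuclear norm, saturates for large singular values. The following is a minimal NumPy sketch of the two surrogates; the scaling parameter `delta` is an illustrative assumption, not necessarily the paper's exact setting.

```python
import numpy as np

def nuclear_norm(Z):
    """Convex rank surrogate: the sum of the singular values of Z."""
    return np.linalg.svd(Z, compute_uv=False).sum()

def logdet_surrogate(Z, delta=1.0):
    """Smoothed rank surrogate: log det(I + Z^T Z / delta).

    Equals sum_i log(1 + s_i^2 / delta) over the singular values s_i of Z.
    It saturates for large s_i, so dominant directions are penalized far
    less than under the nuclear norm, which grows linearly with s_i.
    `delta` is an illustrative smoothing parameter (an assumption here).
    """
    s = np.linalg.svd(Z, compute_uv=False)
    return np.log1p(s ** 2 / delta).sum()

rng = np.random.default_rng(0)
# A rank-2 matrix corrupted by small noise: the noise inflates the
# nuclear norm proportionally more than the saturating log-det value.
Z = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
Z += 0.01 * rng.standard_normal(Z.shape)
print("nuclear norm:", nuclear_norm(Z))
print("log-det     :", logdet_surrogate(Z))
```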
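Downstream of the representation learning, the last two steps named in the abstract are graph construction and label propagation. The sketch below assumes the common post-processing choice of symmetrizing the absolute coefficient matrix (the paper's exact post-processing step may differ) and uses the closed-form solution of the local and global consistency framework (Zhou et al.).

```python
import numpy as np

def graph_from_coefficients(Z):
    """Turn a learned representation matrix Z into a non-negative,
    symmetric affinity matrix. Symmetrizing |Z| is one common choice;
    the paper's exact post-processing step may differ."""
    W = 0.5 * (np.abs(Z) + np.abs(Z).T)
    np.fill_diagonal(W, 0.0)  # remove self-loops
    return W

def lgc_classify(W, Y, alpha=0.99):
    """Label propagation with Local and Global Consistency (Zhou et al.).

    W     : (n, n) symmetric non-negative affinity matrix.
    Y     : (n, c) one-hot rows for labeled samples, zero rows otherwise.
    alpha : weight of graph smoothness relative to fitting the labels.

    Uses the closed form F* = (I - alpha * S)^{-1} Y, where
    S = D^{-1/2} W D^{-1/2} is the normalized affinity matrix.
    """
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    F = np.linalg.solve(np.eye(len(W)) - alpha * S, Y)
    return F.argmax(axis=1)  # predicted class index per sample
```

Given a learned coefficient matrix Z and partial one-hot labels Y, the pipeline is simply lgc_classify(graph_from_coefficients(Z), Y).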
References

    LIU Jianwei, LIU Yuan, and LUO Xionglin. Semi-supervised learning methods[J]. Chinese Journal of Computers, 2015, 38(8): 1592-1617. doi: 10.11897/SP.J.1016.2015.01592. (in Chinese)
    ZHU X. Semi-supervised learning literature survey[R]. Madison: University of Wisconsin, 2006.
    LIU G C, LIN Z C, YAN S C, et al. Robust recovery of subspace structures by low-rank representation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(1): 171-184. doi: 10.1109/TPAMI.2012.88.
    ZHUANG L S, GAO H Y, LIN Z C, et al. Non-negative low rank and sparse graph for semi-supervised learning[C]. Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Providence, USA, 2012: 2328-2335. doi: 10.1109/CVPR.2012.6247944.
    ZHENG Y G, ZHANG X G, YANG S Y, et al. Low-rank representation with local constraint for graph construction[J]. Neurocomputing, 2013, 122: 398-405. doi: 10.1016/j.neucom.2013.06.013.
    TANG K W, LIU R S, SU Z X, et al. Structure-constrained low-rank representation[J]. IEEE Transactions on Neural Networks and Learning Systems, 2014, 25(12): 2167-2179. doi: 10.1109/TNNLS.2014.2306063.
    PENG Y, LU B L, and WANG S H. Enhanced low-rank representation via sparse manifold adaption for semi-supervised learning[J]. Neural Networks, 2015, 65: 1-17. doi: 10.1016/j.neunet.2015.01.001.
    MOHAN K and FAZEL M. Iterative reweighted algorithms for matrix rank minimization[J]. The Journal of Machine Learning Research, 2012, 13(1): 3441-3473.
    WANG Siqi, FENG Xiangchu, ZHANG Rui, et al. Low-rank sparse decomposition model based on max-norm[J]. Journal of Electronics & Information Technology, 2015, 37(11): 2601-2607. doi: 10.11999/JEIT150468. (in Chinese)
    FAZEL M, HINDI H, and BOYD S. Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices[C]. Proceedings of the American Control Conference, Denver, USA, 2003, 3: 2156-2162. doi: 10.1109/ACC.2003.1243393.
    KANG Z, PENG C, and CHENG Q. Robust subspace clustering via smoothed rank approximation[J]. IEEE Signal Processing Letters, 2015, 22(11): 2088-2092. doi: 10.1109/LSP.2015.2460737.
    LIN Z C, LIU R S, and SU Z X. Linearized alternating direction method with adaptive penalty for low-rank representation[C]. Proceedings of the Advances in Neural Information Processing Systems, Granada, Spain, 2011: 612-620.
    ZHOU D Y, BOUSQUET O, LAL T N, et al. Learning with local and global consistency[C]. Proceedings of the Advances in Neural Information Processing Systems, Cambridge, UK, 2004: 321-328.
    LIU B, JING L P, YU J, et al. Robust graph learning via constrained elastic-net regularization[J]. Neurocomputing, 2016, 171: 299-312. doi: 10.1016/j.neucom.2015.06.059.
    KANG Z, PENG C, CHENG J, et al. Logdet rank minimization with application to subspace clustering[J]. Computational Intelligence and Neuroscience, 2015: 824289. doi: 10.1155/2015/824289.
    LIN Z C, CHEN M M, and MA Y. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices[R]. UIUC Technical Report UILU-ENG-09-2215, 2009.
    LIU G C, LIN Z C, and YU Y. Robust subspace segmentation by low-rank representation[C]. Proceedings of the International Conference on Machine Learning, Haifa, Israel, 2010: 663-670.
    ELHAMIFAR E and VIDAL R. Sparse subspace clustering: algorithm, theory, and applications[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(11): 2765-2781. doi: 10.1109/TPAMI.2013.57.
