A Batch Inheritance Extreme Learning Machine Algorithm Based on Regular Optimization

Bin LIU, Youheng YANG, Zhibiao ZHAO, Chao WU, Haoran LIU, Yan WEN

Citation: Bin LIU, Youheng YANG, Zhibiao ZHAO, Chao WU, Haoran LIU, Yan WEN. A Batch Inheritance Extreme Learning Machine Algorithm Based on Regular Optimization[J]. Journal of Electronics & Information Technology, 2020, 42(7): 1734-1742. doi: 10.11999/JEIT190502

doi: 10.11999/JEIT190502
Funds: The Natural Science Foundation of Hebei Province (F2019203320, E2018203398)
Article information
    About the authors:

    Bin LIU: Male, born in 1953, professor and Ph.D. supervisor; his research interests include data mining, signal estimation, and recognition algorithms

    Youheng YANG: Male, born in 1996, master's student; his research interests include data mining and machine learning

    Zhibiao ZHAO: Male, born in 1989, Ph.D. candidate; his research interest is artificial intelligence optimization algorithms

    Chao WU: Male, born in 1990, Ph.D. candidate; his research interest is computer vision

    Haoran LIU: Male, born in 1980, professor and Ph.D. supervisor; his research interests include wireless sensor networks and signal processing

    Yan WEN: Male, born in 1963, professor and Ph.D. supervisor; his research interests include data mining and artificial intelligence optimization algorithms

    Corresponding author:

    Bin LIU, liubin@ysu.edu.cn

  • CLC number: TN911.7; TP391

  • Abstract:

    As a novel type of neural network, the Extreme Learning Machine (ELM) offers extremely fast training and good generalization performance. To address the high computational complexity and large memory demand of the ELM when processing high-dimensional data, this paper proposes a Batch inheritance Extreme Learning Machine (B-ELM) algorithm. First, the dataset is divided evenly into batches, and an autoencoder network is used to reduce the dimensionality of the data in each batch. Then, an inheritance factor is introduced to establish the relationship between adjacent batches, and a Lagrangian optimization function is constructed within a regularization framework to build the mathematical model of the batch-wise ELM. Finally, experiments are conducted on the MNIST, NORB, and CIFAR-10 datasets. The experimental results show that the proposed algorithm achieves high classification accuracy while effectively reducing computational complexity and memory consumption.
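
    A minimal NumPy sketch of the training flow described above, under stated assumptions: the data are split evenly into batches, a regularized (ridge) ELM problem is solved for each batch, and the previous batch's output weights are carried forward through an inheritance factor. The blending rule based on the factor lam, the ReLU activation, and the function and parameter names (batch_inheritance_elm, regularized_elm_step, L, C, lam) are illustrative assumptions rather than the paper's exact formulation, and the autoencoder-based dimensionality reduction is omitted for brevity.

```python
import numpy as np

def relu(x):
    # Element-wise ReLU activation for the random hidden layer
    return np.maximum(x, 0.0)

def regularized_elm_step(H, T, C, beta_prev=None, lam=0.5):
    """Ridge-regularized ELM output weights for one batch.

    Solves beta = (H^T H + I/C)^(-1) H^T T; if the previous batch's
    solution beta_prev is given, it is blended in through the
    inheritance factor lam (hypothetical update rule).
    """
    n_hidden = H.shape[1]
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    if beta_prev is not None:
        beta = lam * beta_prev + (1.0 - lam) * beta
    return beta

def batch_inheritance_elm(X, Y, n_batches=10, L=500, C=1e3, lam=0.5, seed=0):
    """Batch-wise ELM sketch: a fixed random hidden layer, a per-batch
    regularized solution, and an inheritance factor linking adjacent
    batches. The autoencoder dimensionality-reduction step from the
    abstract is omitted here."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], L)) / np.sqrt(X.shape[1])  # random input weights
    b = rng.standard_normal(L)                                      # random biases
    beta = None
    for Xb, Yb in zip(np.array_split(X, n_batches), np.array_split(Y, n_batches)):
        H = relu(Xb @ W + b)   # hidden-layer output for the current batch
        beta = regularized_elm_step(H, Yb, C, beta_prev=beta, lam=lam)
    return lambda Xnew: relu(Xnew @ W + b) @ beta  # predictor built from the final weights

if __name__ == "__main__":
    # Toy usage with random data and one-hot targets for 10 classes
    X = np.random.rand(1000, 64)
    y = np.random.randint(0, 10, size=1000)
    predict = batch_inheritance_elm(X, np.eye(10)[y])
    print("training accuracy:", (predict(X).argmax(axis=1) == y).mean())
```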

  • Figure 1  Schematic of the ELM network structure

    Figure 2  Schematic of the B-ELM training process

    Figure 3  Sample images from the datasets

    Figure 4  Effect of the number of hidden nodes L on test accuracy

    Figure 5  Effect of the regularization parameter C on test accuracy

    Figure 6  Performance comparison of the algorithms on the MNIST dataset

    Figure 7  Performance comparison of the algorithms on the NORB dataset

    Figure 8  Performance comparison of the algorithms on the CIFAR-10 dataset

    Table 1  Performance comparison on different datasets

    Method    MNIST                        NORB                         CIFAR-10
              Accuracy(%)  Train time(s)   Accuracy(%)  Train time(s)   Accuracy(%)  Train time(s)
    SAE       98.60        4042.36         86.28        6438.56         43.37        60514.26
    SDA       98.72        3892.26         87.62        6572.14         43.61        87289.59
    DBM       99.05        14505.14        89.65        18496.64        43.12        90123.53
    ML-ELM    98.21        51.83           88.91        78.36           45.42        74.06
    H-ELM     99.12        28.97           91.28        42.74           50.21        62.76
    B-ELM     99.43        42.67           91.90        55.96           50.38        69.06
Publication history
  • Received: 2019-07-05
  • Revised: 2019-12-12
  • Available online: 2019-12-20
  • Published in issue: 2020-07-23
