Multichannel MI-EEG Feature Decoding Based on Deep Learning

Jun YANG, Zhengmin MA, Tao SHEN, Zhuangfei CHEN, Yaolian SONG

Citation: Jun YANG, Zhengmin MA, Tao SHEN, Zhuangfei CHEN, Yaolian SONG. Multichannel MI-EEG Feature Decoding Based on Deep Learning[J]. Journal of Electronics & Information Technology, 2021, 43(1): 196-203. doi: 10.11999/JEIT190300


doi: 10.11999/JEIT190300
Funds: The Regional Fund of the National Natural Science Foundation of China (31760281), the Postdoctoral Research Fund of Yunnan Province (2020), and the Research Start-up Fund for Introduced Talent of Kunming University of Science and Technology (KKSY201903028)
Article information
    About the authors:

    Jun YANG: male, born in 1984, lecturer and postdoctoral researcher; his research interests include machine learning, deep learning, and their applications in brain-information decoding

    Zhengmin MA: female, born in 1997, master's student; her research interest is EEG decoding based on deep learning

    Tao SHEN: male, born in 1984, professor and doctoral supervisor; his research interests include intelligent material detection, machine learning and data mining, semiconductor devices, and smart microgrids

    Zhuangfei CHEN: female, born in 1983, associate professor; her research interests include psychological intervention and its brain mechanisms

    Yaolian SONG: female, born in 1983, associate professor; her research interests include communication systems and signal processing

    Corresponding author:

    Tao SHEN, shentao@kust.edu.cn

  • CLC number: TN911.7

  • Abstract:

    Electroencephalography (EEG) is a form of brain-information recording widely used in clinical practice; it reflects the changes in the electric field produced by the discharge of neurons during brain activity and is widely used in Brain-Computer Interface (BCI) systems. However, studies have shown that EEG has low spatial resolution, a shortcoming that can be compensated for by jointly analyzing the EEG data of multiple electrode channels. To extract discriminative features related to motor imagery tasks efficiently from multi-channel data, this paper proposes a Multi-Channel Convolutional Neural Network (MC-CNN) decoding method for multi-channel EEG. The pre-selected multi-channel data are first preprocessed and fed into a 2-D Convolutional Neural Network (CNN) for temporal-spatial feature extraction; an AutoEncoder (AE) then maps these features into a discriminative feature subspace, which finally guides the recognition network to perform classification. Experimental results show that the proposed multi-channel spatial feature extraction and construction method offers clear advantages in both recognition performance and efficiency on motor imagery EEG tasks.
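
    The abstract describes a three-stage pipeline: a 2-D CNN extracts temporal-spatial features from the preprocessed multi-channel trials, an AutoEncoder (AE) maps those features into a compact discriminative subspace, and a recognition network classifies the AE code. The paper does not publish code, so the sketch below is only a minimal PyTorch illustration of that pipeline, assuming trials of 1024 time samples x 35 channels (cf. Tables 1 and 2); the layer widths, activations, two-class output, and the combined cross-entropy plus reconstruction loss are illustrative assumptions, not the authors' exact configuration.

        import torch
        import torch.nn as nn

        class MCCNNPipeline(nn.Module):
            """Minimal sketch: 2-D CNN feature extractor -> autoencoder -> classifier.

            Input: preprocessed MI-EEG trials shaped (batch, 1, time, channels),
            e.g. (batch, 1, 1024, 35). All sizes below are illustrative assumptions.
            """

            def __init__(self, n_classes=2, feat_dim=606, code_dim=64):
                super().__init__()
                # Temporal-spatial feature extraction with 2-D convolutions.
                self.cnn = nn.Sequential(
                    nn.Conv2d(1, 25, kernel_size=(10, 1)),   # temporal convolution
                    nn.BatchNorm2d(25),
                    nn.ELU(),
                    nn.MaxPool2d(kernel_size=(1, 5)),        # pool across electrodes
                    nn.Conv2d(25, 50, kernel_size=(5, 2)),   # temporal-spatial convolution
                    nn.BatchNorm2d(50),
                    nn.ELU(),
                    nn.MaxPool2d(kernel_size=(10, 1)),       # pool across time
                    nn.Flatten(),
                    nn.LazyLinear(feat_dim),                 # fixed-length feature vector
                )
                # Autoencoder that maps CNN features into a discriminative subspace.
                self.ae_enc = nn.Sequential(nn.Linear(feat_dim, code_dim), nn.ReLU())
                self.ae_dec = nn.Linear(code_dim, feat_dim)
                # Recognition network operating on the AE code.
                self.classifier = nn.Linear(code_dim, n_classes)

            def forward(self, x):
                feats = self.cnn(x)           # temporal-spatial features
                code = self.ae_enc(feats)     # discriminative subspace
                recon = self.ae_dec(code)     # reconstruction used by the AE loss
                return self.classifier(code), recon, feats

        if __name__ == "__main__":
            model = MCCNNPipeline()
            trials = torch.randn(8, 1, 1024, 35)   # 8 trials, 1024 samples, 35 channels
            labels = torch.randint(0, 2, (8,))
            logits, recon, feats = model(trials)
            # One plausible joint objective: classification loss + reconstruction loss.
            loss = (nn.functional.cross_entropy(logits, labels)
                    + nn.functional.mse_loss(recon, feats.detach()))
            print(logits.shape, float(loss))

    In the paper the AE is described as shaping a discriminative subspace that then guides the recognition network; the simple sum of losses above is only meant to show that the classifier and the reconstruction share the same code.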

  • Figure 1  Motor imagery experiment procedure

    Figure 2  Positions of the electrode channels used

    Figure 3  Network architecture of the proposed method

    Figure 4  Confusion matrices and evaluation metrics for different feature extraction methods

    Figure 5  Comparison of test time across methods

    Figure 6  Influence of the time-frequency features of different subjects' data on the classification results

    Table 1  Data mapping after each convolutional and network layer

    Layer                 | Filter      | No. of filters | Input          | Output         | No. of parameters
    Convolutional layer 1 | (10, 1, 1)  | 50             | (1024, 35)     | (1015, 35, 25) | 500
    Batch normalization   |             |                | (1015, 35, 25) | (1015, 35, 25) |
    Max pooling           | (1, 5, 35)  |                | (1015, 35, 25) | (1015, 7, 1)   |
    Convolutional layer 2 | (5, 2, 1)   | 100            | (1015, 7, 1)   | (1011, 6, 50)  | 1000
    Batch normalization   |             |                | (1011, 6, 50)  | (1011, 6, 50)  |
    Max pooling           | (10, 1, 50) |                | (1011, 6, 50)  | (101, 6, 1)    |
    Fully connected layer |             |                | (101, 6, 1)    |                | 606
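
    The time and channel dimensions in Table 1 follow from unpadded ("valid") convolutions and non-overlapping pooling: 1024 - 10 + 1 = 1015 after the first temporal kernel, 35 / 5 = 7 after pooling across electrodes, 1015 - 5 + 1 = 1011 and 7 - 2 + 1 = 6 after the second kernel, floor(1011 / 10) = 101 after the second pooling, and flattening the final 101 x 6 maps gives the 606 listed for the fully connected layer. The short PyTorch trace below reproduces those shapes on a dummy trial; it takes the number of feature maps from the output column (25 and 50) rather than the filter-count column, and the absence of padding is an assumption.

        import torch
        import torch.nn as nn

        # Dummy preprocessed trial: 1024 time samples x 35 electrode channels (Table 2).
        x = torch.randn(1, 1, 1024, 35)

        stack = nn.ModuleDict({
            "conv1": nn.Conv2d(1, 25, kernel_size=(10, 1)),    # Table 1 filter (10, 1)
            "bn1":   nn.BatchNorm2d(25),
            "pool1": nn.MaxPool2d(kernel_size=(1, 5)),         # pool across electrodes
            "conv2": nn.Conv2d(25, 50, kernel_size=(5, 2)),    # Table 1 filter (5, 2)
            "bn2":   nn.BatchNorm2d(50),
            "pool2": nn.MaxPool2d(kernel_size=(10, 1)),        # pool across time
        })

        with torch.no_grad():
            for name, layer in stack.items():
                x = layer(x)
                print(f"{name:5s} -> {tuple(x.shape)}")
        # conv1 -> (1, 25, 1015, 35)    pool1 -> (1, 25, 1015, 7)
        # conv2 -> (1, 50, 1011, 6)     pool2 -> (1, 50, 101, 6)
        print("flattened features per trial:", 101 * 6)        # 606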

    Table 2  MI-EEG experimental data

    Dataset               | Collected dataset S1 | Public dataset S2 | Public dataset S3
    Number of subjects    | 6                    | 4                 | 4
    Number of channels    | 35                   | 35                | 35
    Trials (per subject)  | 100                  | 300               | 280

    Table 3  Accuracy for different subjects and methods (collected dataset D1)

    Method             | Subject A | B     | C     | D     | E     | F     | Average
    CNN                | 0.826     | 0.844 | 0.852 | 0.862 | 0.871 | 0.819 | 0.8457
    LSTM               | 0.835     | 0.871 | 0.872 | 0.852 | 0.858 | 0.848 | 0.8560
    MC-CNN (this work) | 0.851     | 0.879 | 0.922 | 0.863 | 0.900 | 0.868 | 0.8805
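
    The "Average" column of Table 3 is the plain arithmetic mean over the six subjects, which the quick check below reproduces.

        # Quick check of the averages reported in Table 3 (collected dataset D1).
        results = {
            "CNN":    [0.826, 0.844, 0.852, 0.862, 0.871, 0.819],
            "LSTM":   [0.835, 0.871, 0.872, 0.852, 0.858, 0.848],
            "MC-CNN": [0.851, 0.879, 0.922, 0.863, 0.900, 0.868],
        }
        for method, acc in results.items():
            print(f"{method}: {sum(acc) / len(acc):.4f}")
        # CNN: 0.8457, LSTM: 0.8560, MC-CNN: 0.8805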

    Table 4  Accuracy for different subjects and methods (public datasets D2 and D3)

    Method             | S2: A | S2: B | S2: C | S2: D | S3: E | S3: F | S3: G | S3: H
    CNN                | 0.814 | 0.825 | 0.829 | 0.874 | 0.803 | 0.825 | 0.843 | 0.804
    LSTM               | 0.882 | 0.829 | 0.828 | 0.858 | 0.886 | 0.832 | 0.868 | 0.820
    MC-CNN (this work) | 0.911 | 0.921 | 0.869 | 0.853 | 0.870 | 0.851 | 0.891 | 0.860

    Table 5  Accuracy comparison of different multi-channel MI-EEG decoding methods (%)

    Method | Worst | Best | Average accuracy
    RC-SFS | 80.4  | 92.3 | 81.96
    RC-SBS | 60.7  | 91.7 | 79.52
    SVM-GA | 67.9  | 94.8 | 83.86
    RC-GA  | 75.1  | 98.5 | 88.20
    MC-CNN | 86.8  | 91.9 | 89.15

    Table 6  Distribution of the influence of different channels on the recognition results

    No. | Degree of influence | Channels
    1   | Very strong         | C3, C4, Cz
    2   |                     | CP3, FC3, FC4, FC5, CP4, F5
    3   |                     | F3, CP5, FC6
    4   |                     | P3, P4, F4, F6, CP6
    5   | Weak                | C5, C1, C2, C6, FC1, F1, F2, P1, P2, P5, P6, FC2, CP1, CP2, FCz, CPz, Pz, Fz
Publication history
  • Received:  2019-04-29
  • Revised:  2020-10-30
  • Available online:  2020-11-16
  • Published:  2021-01-15
