Feature Extraction and Analysis of fNIRS Signals Based on Linear Mapping Field

YAO Yuxuan, SUN Zhaohui, GAO Yubing, WU Qi

Citation: YAO Yuxuan, SUN Zhaohui, GAO Yubing, WU Qi. Feature Extraction and Analysis of fNIRS Signals Based on Linear Mapping Field[J]. Journal of Electronics & Information Technology, 2023, 45(4): 1401-1411. doi: 10.11999/JEIT220120


doi: 10.11999/JEIT220120
Funds: The National Natural Science Foundation of China (U1933125, 62171274), the Air Force Medical Research Major Project (2021KHYX11), the Defense Innovation Special Zone Project (193-CXCY-A04-01-11-03), and the Shanghai Municipal Science and Technology Major Project (2021SHZDZX)
Article information
    Author biographies:

    YAO Yuxuan: Male, Ph.D. candidate; research interests: brain-computer interfaces and graph neural networks

    SUN Zhaohui: Male, Ph.D. candidate; research interests: neuroergonomics and brain-cognition detection

    GAO Yubing: Female, Master's student; research interest: statistical machine learning

    WU Qi: Male, Professor; research interest: vision-brain interaction

    Corresponding author:

    WU Qi, Edmondqwu@sjtu.edu.cn

  • CLC number: TP181

  • Abstract: Studies of functional brain activation commonly face two problems: feature extraction relies on manual experience, and deep physiological information is hard to mine. To address these two problems, this paper introduces Variational Mode Decomposition (VMD) and proposes an adaptive VMD algorithm. The algorithm accounts for the physiological meaning of cerebral blood-oxygen signals in different frequency bands and reduces the dependence of conventional VMD on hyperparameter selection. Experimental results show that adaptive VMD accurately extracts the physiologically meaningful modal components of functional Near-InfraRed Spectroscopy (fNIRS) signals, thereby improving data preprocessing. On this basis, following the idea of mapping a time series into an image and learning features with a deep convolutional neural network, a Linear Mapping Field (LMF) is proposed. With LMF, this paper maps fNIRS sequences into 2-D images at low computational cost and, aided by a deep convolutional neural network, extracts deep features of the fNIRS physiological signals. Experimental results demonstrate the advantages of the proposed LMF. Finally, the effectiveness of the proposed methods is discussed and analyzed, showing that, unlike recurrent neural networks, which can only perceive a time series "sequentially", the "jumping" perception of a time series by convolutional neural networks is the key to their superior performance.
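The core idea in the abstract, turning a 1-D fNIRS sequence into a 2-D image that a CNN can consume, can be sketched with plain Python. Note this is only an illustrative guess: the pairwise map below (pixel (i, j) as the mean of normalized samples i and j) is an assumption for demonstration, not the paper's exact LMF definition.

```python
def min_max_normalize(series):
    """Rescale a 1-D sequence into [0, 1] (assumed preprocessing step)."""
    lo, hi = min(series), max(series)
    return [(x - lo) / (hi - lo) for x in series]

def linear_mapping_field(series):
    """Map a length-n series to an n x n image; pixel (i, j) is a linear
    combination (here: the mean) of normalized samples i and j.
    This pairwise form is a hypothetical stand-in for the paper's LMF."""
    s = min_max_normalize(series)
    n = len(s)
    return [[(s[i] + s[j]) / 2 for j in range(n)] for i in range(n)]

img = linear_mapping_field([0.1, 0.4, 0.2, 0.9])
# The resulting image is symmetric, and its diagonal reproduces the
# normalized series, so no temporal information is discarded.
```

A CNN sliding a k x k kernel over such an image sees pairs of samples that are far apart in time within one receptive field, which is one way to picture the "jumping" perception contrasted with RNNs at the end of the abstract.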
  • Fig. 1  Overall flowchart of the experiment

    Fig. 2  Flow of the image-generation algorithms

    Fig. 3  Flowchart of the tapping task

    Fig. 4  Two different fNIRS signals and their wavelet coefficients

    Fig. 5  Comparison of images produced by different image-generation algorithms

    Fig. 6  Illustration of the "jumping" perception and receptive-field expansion of the image-generation algorithms

    Fig. 7  Adaptive VMD decomposition results

    Table 1  Physiological meaning of cerebral blood-oxygen signals in each frequency band

    Band  Frequency range (Hz)  Physiological meaning
    I     0.6–2                 Cardiac activity
    II    0.145–0.6             Respiratory activity
    III   0.052–0.145           Myogenic activity
    IV    0.021–0.052           Neurogenic activity
    V     0.0095–0.021          Endothelial metabolic activity
    VI    0.005–0.0095          Endothelial activity
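The band boundaries in Table 1 lend themselves to a simple lookup: find the strongest spectral component of a sampled signal and report which physiological band it falls in. The sketch below (illustrative only, not from the paper; the sampling rate `fs` and the naive DFT scan are assumptions) uses only the standard library.

```python
import math

# Frequency bands and physiological meanings from Table 1 (Hz).
BANDS = [
    ("I: cardiac activity", 0.6, 2.0),
    ("II: respiratory activity", 0.145, 0.6),
    ("III: myogenic activity", 0.052, 0.145),
    ("IV: neurogenic activity", 0.021, 0.052),
    ("V: endothelial metabolic activity", 0.0095, 0.021),
    ("VI: endothelial activity", 0.005, 0.0095),
]

def dominant_band(signal, fs):
    """Return the Table-1 band containing the strongest DFT component."""
    n = len(signal)
    best_f, best_p = 0.0, 0.0
    for k in range(1, n // 2):  # skip DC, scan positive-frequency bins
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_p:
            best_p, best_f = power, k * fs / n
    for name, lo, hi in BANDS:
        if lo <= best_f < hi:
            return name
    return "out of band"

# A synthetic 1.2 Hz oscillation sampled at 10 Hz lands in band I (cardiac).
sig = [math.sin(2 * math.pi * 1.2 * t / 10) for t in range(200)]
```

Adaptive VMD as described in the abstract goes further: rather than classifying a whole signal, it separates it into modes and keeps the components whose center frequencies match physiologically meaningful bands.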

    Table 2  Three-class classification results of traditional machine-learning algorithms on this dataset

    Algorithm                        Three-class accuracy (%)
    Logistic regression              49.71
    Random forest                    67.85
    Support vector machine           74.97
    AdaBoost                         62.42
    Naive Bayes                      51.64
    Decision tree                    53.32
    Multilayer perceptron            73.37
    Linear discriminant analysis     58.71
    Quadratic discriminant analysis  65.95

    Table 3  Comparison of 7 typical time-series features for the two different fNIRS signals

              Min      Max     Peak-to-peak  Mean     Median   Skewness  Kurtosis
    Signal 1  –0.0099  0.0072  0.0171        –0.0025  –0.0035  0.5575    –0.7113
    Signal 2  –0.0078  0.0160  0.0238        0.0060   0.0065   –0.2932   –0.4681

    Table 4  Classification results (%) of different image-generation algorithms and image sizes

    Image-generation algorithm  Image size
                                16     32     64     96     128
    GAF                         76.89  78.58  80.57  80.84  80.57
    MTF                         55.42  54.54  59.15  58.84  58.12
    GAF+MTF                     77.72  77.83  80.53  79.75  79.04
    Linear mapping field        77.71  79.82  81.20  81.53  81.01
    Sigmoid mapping field       77.29  79.03  80.71  80.88  80.20
    tan mapping field           70.94  68.09  68.41  68.34  68.63
    tanh mapping field          70.81  68.41  68.65  69.67  67.35
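One plausible reading of the Table 4 trend is the shape of each element-wise map: linear and sigmoid are monotone and well-scaled, while tan blows up near ±π/2 and tanh saturates at the extremes, compressing amplitude differences. The definitions below are my own illustrative stand-ins; the paper's mapping fields may be defined differently.

```python
import math

# Candidate element-wise maps, named after the rows of Table 4
# (hypothetical definitions for illustration).
MAPS = {
    "linear": lambda x: x,
    "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),
    "tan": math.tan,
    "tanh": math.tanh,
}

def apply_map(series, kind):
    """Apply one candidate element-wise map before building the 2-D image."""
    return [MAPS[kind](x) for x in series]

series = [-2.0, -0.5, 0.0, 0.5, 2.0]
# sigmoid stays inside (0, 1); tanh pushes the endpoints toward ±1,
# while the linear map preserves relative spacing exactly.
```

Under this reading, the saturation of tan/tanh would flatten exactly the small amplitude differences that distinguish fNIRS conditions, consistent with their lower accuracies in Table 4.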

    Table 5  Classification performance of the linear mapping field vs. three recurrent neural network models

    Model                 Three-class accuracy (%)
    Linear mapping field  81.5
    RNN                   65.9
    LSTM                  64.4
    GRU                   65.5

    Table 6  Effect of the adaptive VMD module (three-class accuracy, %)

                              Algorithm                        With adaptive VMD  Without adaptive VMD  Gain
    Image-generation          GAF                              80.57              76.89                 3.68
    algorithms                MTF                              59.15              54.54                 4.61
                              GAF-MTF                          80.53              77.72                 2.81
                              Linear mapping field             81.20              77.74                 3.46
                              Sigmoid mapping field            80.71              77.28                 3.43
                              tan mapping field                68.41              68.09                 0.32
                              tanh mapping field               68.65              67.35                 1.30
    Traditional machine-      Logistic regression              49.71              54.53                 –4.82
    learning algorithms       Random forest                    67.85              72.37                 –4.52
                              Support vector machine           74.97              70.55                 4.42
                              AdaBoost                         62.42              55.19                 7.23
                              Naive Bayes                      51.64              46.80                 4.84
                              Decision tree                    53.32              52.76                 0.56
                              Multilayer perceptron            73.37              70.37                 3.00
                              Linear discriminant analysis     58.71              69.95                 –11.24
                              Quadratic discriminant analysis  65.95              72.82                 –6.87
Figures (7) / Tables (6)
Publication history
  • Received: 2022-01-27
  • Revised: 2022-06-29
  • Accepted: 2022-07-14
  • Published online: 2022-07-19
  • Issue published: 2023-04-10
