Virtual Reality Motion Sickness Recognition Model Driven by Lead-attention and Brain Connection

HUA Chengcheng, ZHOU Zhanfeng, TAO Jianlong, YANG Wenqing, LIU Jia, FU Rongrong

Citation: HUA Chengcheng, ZHOU Zhanfeng, TAO Jianlong, YANG Wenqing, LIU Jia, FU Rongrong. Virtual Reality Motion Sickness Recognition Model Driven by Lead-attention and Brain Connection[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT240440


doi: 10.11999/JEIT240440
Funds: The National Natural Science Foundation of China (62206130, 62073283), the Natural Science Foundation of Jiangsu Province (BK20200821), the Startup Foundation for Introducing Talent of NUIST (2020r075), the Natural Science Foundation of Hebei Province (F2022203092)
Article information
    Author biographies:

    HUA Chengcheng: Male, Lecturer. Research interests: EEG signals and brain-computer interfaces

    ZHOU Zhanfeng: Male, M.S. candidate. Research interests: EEG signals and virtual reality motion sickness

    TAO Jianlong: Male, M.S. candidate. Research interests: EEG signals and virtual reality motion sickness

    YANG Wenqing: Female, M.S. candidate. Research interests: EEG signals, EEG microstates, and brain functional networks

    LIU Jia: Female, Professor. Research interests: multimodal sentiment analysis, computer vision and image processing, and virtual/augmented reality

    FU Rongrong: Female, Associate Professor. Research interests: EEG signal processing and feature extraction, dynamic recognition of brain intention, and brain-computer interface systems

    Corresponding author:

    FU Rongrong, frr1102@aliyun.com

  • CLC number: TN911.7; TP391

Virtual Reality Motion Sickness Recognition Model Driven by Lead-attention and Brain Connection

Funds: The National Natural Science Foundation of China (62206130, 62073282), The Natural Science Foundation of Jiangsu Province (BK20200821), The Startup Foundation for Introducing Talent of NUIST (2020r075), The Natural Science Foundation of Hebei Province (F2022203092)
  • Abstract: Virtual Reality Motion Sickness (VRMS) is a major obstacle to the development of the virtual reality industry, and measuring the level of VRMS is a prerequisite for studying and overcoming it. This paper therefore introduces and improves an end-to-end EEG-based recognition model that quantitatively estimates a user's VRMS level while using virtual reality. The model first filters the EEG signals with a one-dimensional Convolutional Neural Network (CNN), then computes inter-lead correlations to construct a functional brain network, and finally uses a CNN and fully connected layers to extract brain-network features and perform regression. The feature-extraction capability of the model is enhanced by optimizing the size of the 1-D convolution kernel and by adding a novel lead-attention structure. The virtual reality scene "VRQ test" is used to induce VRMS in subjects while their EEG signals and subjective VRMS ratings (Simulator Sickness Questionnaire, SSQ) are recorded, and the resulting data are used to validate the model. Under 10-fold cross-validation, the average mean squared error between the detected VRMS levels and the true values is 15.10, and the average goodness of fit is 96.63%. These results indicate that the proposed model can be used to detect virtual reality motion sickness, and this EEG-based method is expected to become a general evaluation approach for virtual reality products.
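To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch (not the authors' released code) of a BCCNN-style model: a 1-D convolution acts as a learnable temporal filter for each EEG lead, inter-lead Pearson correlations form the functional brain-network adjacency matrix, and a small 2-D CNN plus fully connected layers regress the sickness score. An SE-style gate over the lead dimension stands in for the lead-attention structure; the lead count, layer widths, and the kernel length of 16 (taken from Table 1) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LeadAttention(nn.Module):
    """SE-style gate over the EEG lead dimension (an assumption, not the paper's exact module)."""
    def __init__(self, n_leads: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_leads, n_leads // reduction), nn.ReLU(),
            nn.Linear(n_leads // reduction, n_leads), nn.Sigmoid(),
        )

    def forward(self, x):               # x: (batch, leads, time)
        w = self.fc(x.mean(dim=-1))     # squeeze over time -> (batch, leads)
        return x * w.unsqueeze(-1)      # re-weight every lead


class BCCNNSketch(nn.Module):
    def __init__(self, n_leads: int = 32, kernel_len: int = 16):
        super().__init__()
        # 1-D convolution shared across leads: a learnable temporal filter
        self.temporal_filter = nn.Conv1d(1, 1, kernel_size=kernel_len,
                                         padding=kernel_len // 2, bias=False)
        self.lead_attention = LeadAttention(n_leads)
        # small 2-D CNN reading the (leads x leads) adjacency matrix
        self.feature_cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
        )
        self.regressor = nn.Sequential(
            nn.Linear(8 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),   # sigmoid output layer, per Table 1
        )

    @staticmethod
    def correlation_adjacency(x):
        # Pearson correlation between leads -> functional brain network
        x = x - x.mean(dim=-1, keepdim=True)
        x = x / (x.norm(dim=-1, keepdim=True) + 1e-8)
        return x @ x.transpose(1, 2)          # (batch, leads, leads)

    def forward(self, eeg):                   # eeg: (batch, leads, time)
        b, c, t = eeg.shape
        filtered = self.temporal_filter(eeg.reshape(b * c, 1, t)).reshape(b, c, -1)
        filtered = self.lead_attention(filtered)
        adj = self.correlation_adjacency(filtered).unsqueeze(1)   # (b, 1, c, c)
        return self.regressor(self.feature_cnn(adj)).squeeze(-1)  # (b,) scores


# Example: scores (scaled to (0, 1) by the sigmoid) for four 32-lead EEG epochs
model = BCCNNSketch()
print(model(torch.randn(4, 32, 512)).shape)   # torch.Size([4])
```

Replacing `LeadAttention` with a feature-map channel-attention block applied inside `feature_cnn` would roughly correspond to the channel-attention variants compared against the lead-attention variants in Table 2.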
  • Figure 1  Experimental design for inducing virtual reality motion sickness

    Figure 2  Structure of the CNN-based Brain Connection model (BCCNN)

    Figure 3  Structure of the attention module

    Figure 4  Regression performance versus the kernel size of the 1-D convolution layer

    Figure 5  Average adjacency matrices and average functional brain-network features constructed and extracted by the BCCNN model (high sickness, SSQ > 77.2 vs. low sickness, SSQ < 8.5)

    Figure 6  Average weight distributions of lead attention and channel attention

    Table 1  Regression performance of the BCCNN model with different 1-D convolution kernel lengths and output-layer activation functions

    Activation function | Case  | Kernel length | MSE         | MAE       | R2 (%)
    Sigmoid             | Best  | 16            | 18.40±3.76  | 2.34±0.26 | 95.84±0.90
    Sigmoid             | Worst | 29            | 25.1±5.30   | 2.73±0.44 | 94.16±1.45
    Linear              | Best  | 16            | 29.98±6.00  | 3.35±0.46 | 92.96±1.65
    Linear              | Worst | 23            | 41.21±21.54 | 3.82±0.93 | 90.02±5.61

    Table 2  Regression performance on the VRMS data: improved BCCNN vs. other commonly used models (mean ± std.)

    #  | Method                                                                         | MSE          | MAE        | R2 (%)
    1  | Rhythm energy and energy ratio [12] + mRMR [29] + Gaussian process regression | 60.50±3.68   | 5.13±0.13  | 83.86±1.74
    2  | Differential entropy [27] + mRMR + Gaussian process regression                | 38.65±2.37   | 4.18±0.11  | 90.84±0.86
    3  | EMDPLV + graph-theoretic metrics + mRMR + Gaussian process regression         | 154.37±14.17 | 9.29±0.27  | 50.04±5.74
    4  | Shallow ConvNet [30]                                                           | 217.76±27.20 | 11.17±0.80 | 6.22±11.10
    5  | Deep ConvNet [30]                                                              | 213.55±49.97 | 11.62±1.53 | 2.58±5.98
    6  | FBtCNN [31]                                                                    | 171.03±16.94 | 9.91±0.55  | 23.47±11.78
    7  | SSVEPNet [32]                                                                  | 166.29±22.23 | 8.91±0.69  | 67.24±4.05
    8  | 1D-CNN (time, lead) + LSTM                                                     | 123.92±17.95 | 7.16±0.75  | 68.74±5.68
    9  | ADFCNN [33]                                                                    | 122.74±22.05 | 7.42±0.79  | 67.45±7.38
    10 | MOCNN [34]                                                                     | 91.74±22.28  | 7.17±1.02  | 77.38±6.63
    11 | Conformer [35]                                                                 | 82.15±9.86   | 5.44±0.27  | 80.92±2.58
    12 | FBCNet [36]                                                                    | 77.29±8.19   | 6.87±0.38  | 80.08±2.63
    13 | Filter bank + 1D-CNN (lead) + LSTM                                             | 53.62±6.55   | 5.61±0.36  | 86.83±3.25
    14 | Original BCCNN                                                                 | 24.93±4.60   | 2.8±0.399  | 94.16±1.23
    15 | BCCNN + optimized kernel                                                       | 18.40±3.76   | 2.34±0.26  | 95.84±0.90
    16 | BCCNN + channel attention                                                      | 22.73±6.49   | 2.25±0.32  | 94.90±1.39
    17 | BCCNN + optimized kernel + channel attention                                   | 20.80±6.72   | 2.22±0.34  | 95.29±1.65
    18 | BCCNN + lead attention                                                         | 17.28±3.68   | 2.08±0.28  | 96.10±0.90
    19 | BCCNN + optimized kernel + lead attention                                      | 15.10±4.32   | 1.93±0.28  | 96.63±1.09
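    The figures in Tables 1 and 2 are mean ± standard deviation over 10-fold cross-validation. Below is a minimal sketch (assumed workflow, not the authors' code) of how such per-fold MSE, MAE, and goodness-of-fit values could be computed; `predict_fold` is a hypothetical placeholder that trains any of the listed models on one fold and returns predictions for the held-out data, and R2 is scaled to percent to match the tables.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score


def cross_validated_scores(X, y, predict_fold, n_splits=10, seed=0):
    """Return {"MSE": (mean, std), "MAE": (mean, std), "R2(%)": (mean, std)} over k-fold CV."""
    mses, maes, r2s = [], [], []
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # predict_fold: hypothetical helper that fits a model on the training
        # fold and returns predicted SSQ scores for the held-out fold
        y_pred = predict_fold(X[train_idx], y[train_idx], X[test_idx])
        mses.append(mean_squared_error(y[test_idx], y_pred))
        maes.append(mean_absolute_error(y[test_idx], y_pred))
        r2s.append(100.0 * r2_score(y[test_idx], y_pred))     # R^2 in percent
    return {name: (float(np.mean(v)), float(np.std(v)))
            for name, v in [("MSE", mses), ("MAE", maes), ("R2(%)", r2s)]}
```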
  • [1] KENNEDY R S, DREXLER J, and KENNEDY R C. Research in visually induced motion sickness[J]. Applied Ergonomics, 2010, 41(4): 494–503. doi: 10.1016/j.apergo.2009.11.006.
    [2] NALIVAIKO E, DAVIS S L, BLACKMORE K L, et al. Cybersickness provoked by head-mounted display affects cutaneous vascular tone, heart rate and reaction time[J]. Physiology & Behavior, 2015, 151: 583–590. doi: 10.1016/j.physbeh.2015.08.043.
    [3] MASON B and WANG Qi. Virtual reality raises real risk of motion sickness[J]. China Science & Technology Education, 2017(2): 65–66.
    [4] YI Lin, JIA Ruishuang, LIU Ran, et al. Evaluation indicators for visually induced motion sickness in virtual reality environment[J]. Space Medicine & Medical Engineering, 2018, 31(4): 437–445. doi: 10.16289/j.cnki.1002-0837.2018.04.008.
    [5] KIM J, OH H, KIM W, et al. A deep motion sickness predictor induced by visual stimuli in virtual reality[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(2): 554–566. doi: 10.1109/tnnls.2020.3028080.
    [6] KIM H G, LIM H T, LEE S, et al. VRSA net: VR sickness assessment considering exceptional motion for 360° VR video[J]. IEEE Transactions on Image Processing, 2019, 28(4): 1646–1660. doi: 10.1109/tip.2018.2880509.
    [7] WANG Yuyang, CHARDONNET J R, and MERIENNE F. VR sickness prediction for navigation in immersive virtual environments using a deep long short term memory model[C]. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 2019: 1874–1881. doi: 10.1109/VR.2019.8798213.
    [8] GENG Yuehua and SHI Jinxiang. Classification of vertigo state based on machine learning and electroencephalogram signal analysis[J]. Chinese Journal of Tissue Engineering Research, 2022, 26(29): 4624–4631. doi: 10.12307/2022.844.
    [9] JANG K M, WOO Y S, and LIM H K. Electrophysiological changes in the virtual reality sickness: EEG in the VR sickness[C]. The 25th International Conference on 3D Web Technology, 2020: 26. doi: 10.1145/3424616.3424722.
    [10] LIM H K, JI K, WOO Y S, et al. Test-retest reliability of the virtual reality sickness evaluation using electroencephalography (EEG)[J]. Neuroscience Letters, 2021, 743: 135589. doi: 10.1016/j.neulet.2020.135589.
    [11] NAQVI S A A, BADRUDDIN N, JATOI M A, et al. EEG based time and frequency dynamics analysis of visually induced motion sickness (VIMS)[J]. Australasian Physical & Engineering Sciences in Medicine, 2015, 38(4): 721–729. doi: 10.1007/s13246-015-0379-9.
    [12] LI Xiaolu, ZHU Changrong, XU Cangsu, et al. VR motion sickness recognition by using EEG rhythm energy ratio based on wavelet packet transform[J]. Computer Methods and Programs in Biomedicine, 2020, 188: 105266. doi: 10.1016/j.cmpb.2019.105266.
    [13] CHEN Y C, DUANN J R, CHUANG S W, et al. Spatial and temporal EEG dynamics of motion sickness[J]. NeuroImage, 2010, 49(3): 2862–2870. doi: 10.1016/j.neuroimage.2009.10.005.
    [14] KROKOS E and VARSHNEY A. Quantifying VR cybersickness using EEG[J]. Virtual Reality, 2022, 26(1): 77–89. doi: 10.1007/s10055-021-00517-2.
    [15] WU Jintao, ZHOU Qianxiang, LI Jiaxuan, et al. Inhibition-related N2 and P3: Indicators of visually induced motion sickness (VIMS)[J]. International Journal of Industrial Ergonomics, 2020, 78: 102981. doi: 10.1016/j.ergon.2020.102981.
    [16] PARK S, KIM L, KWON J, et al. Evaluation of visual-induced motion sickness from head-mounted display using heartbeat evoked potential: A cognitive load-focused approach[J]. Virtual Reality, 2022, 26(3): 979–1000. doi: 10.1007/s10055-021-00600-8.
    [17] LIU Ran, XU Miao, ZHANG Yanzhen, et al. A pilot study on electroencephalogram-based evaluation of visually induced motion sickness[J]. Journal of Imaging Science and Technology, 2020, 64(2): 020501. doi: 10.2352/J.ImagingSci.Technol.2020.64.2.020501.
    [18] CECOTTI H and GRASER A. Convolutional neural networks for P300 detection with application to brain-computer interfaces[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(3): 433–445. doi: 10.1109/tpami.2010.125.
    [19] TABAR Y R and HALICI U. A novel deep learning approach for classification of EEG motor imagery signals[J]. Journal of Neural Engineering, 2017, 14(1): 016003. doi: 10.1088/1741-2560/14/1/016003.
    [20] ACHARYA U R, OH S L, HAGIWARA Y, et al. Automated EEG-based screening of depression using deep convolutional neural network[J]. Computer Methods and Programs in Biomedicine, 2018, 161: 103–113. doi: 10.1016/j.cmpb.2018.04.012.
    [21] HU Jie, SHEN Li, ALBANIE S, et al. Squeeze-and-excitation networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(8): 2011–2023. doi: 10.1109/tpami.2019.2913372.
    [22] HUANG Jing, REN Lifeng, ZHOU Xiaokang, et al. An improved neural network based on SENet for sleep stage classification[J]. IEEE Journal of Biomedical and Health Informatics, 2022, 26(10): 4948–4956. doi: 10.1109/jbhi.2022.3157262.
    [23] HE Yanbin, LU Zhiyang, WANG Jun, et al. A self-supervised learning based channel attention MLP-mixer network for motor imagery decoding[J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2022, 30: 2406–2417. doi: 10.1109/tnsre.2022.3199363.
    [24] NIU Weixin, MA Chao, SUN Xinlin, et al. A brain network analysis-based double way deep neural network for emotion recognition[J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2023, 31: 917–925. doi: 10.1109/tnsre.2023.3236434.
    [25] HUA Chengcheng, WANG Hong, CHEN Jichi, et al. Novel functional brain network methods based on CNN with an application in proficiency evaluation[J]. Neurocomputing, 2019, 359: 153–162. doi: 10.1016/j.neucom.2019.05.088.
    [26] KENNEDY R S, LANE N E, BERBAUM K S, et al. Simulator sickness questionnaire: An enhanced method for quantifying simulator sickness[J]. The International Journal of Aviation Psychology, 1993, 3(3): 203–220. doi: 10.1207/s15327108ijap0303_3.
    [27] ZHENG Weilong, LIU Wei, LU Yifei, et al. EmotionMeter: A multimodal framework for recognizing human emotions[J]. IEEE Transactions on Cybernetics, 2019, 49(3): 1110–1122. doi: 10.1109/tcyb.2018.2797176.
    [28] MIAO Minmin, XU Baoguo, HU Wenjun, et al. Emotion EEG recognition based on the adaptive optimized spatial-frequency differential entropy[J]. Chinese Journal of Scientific Instrument, 2021, 42(3): 221–230. doi: 10.19650/j.cnki.cjsi.J2006936.
    [29] PENG Hanchuan, LONG Fuhui, and DING C. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(8): 1226–1238. doi: 10.1109/Tpami.2005.159.
    [30] SCHIRRMEISTER R T, SPRINGENBERG J T, FIEDERER L D J, et al. Deep learning with convolutional neural networks for EEG decoding and visualization[J]. Human Brain Mapping, 2017, 38(11): 5391–5420. doi: 10.1002/hbm.23730.
    [31] DING Wenlong, SHAN Jianhua, FANG Bin, et al. Filter bank convolutional neural network for short time-window steady-state visual evoked potential classification[J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2021, 29: 2615–2624. doi: 10.1109/tnsre.2021.3132162.
    [32] PAN Yudong, CHEN Jianbo, ZHANG Yangsong, et al. An efficient CNN-LSTM network with spectral normalization and label smoothing technologies for SSVEP frequency recognition[J]. Journal of Neural Engineering, 2022, 19(5): 056014. doi: 10.1088/1741-2552/ac8dc5.
    [33] TAO Wei, WANG Ze, WONG C M, et al. ADFCNN: Attention-based dual-scale fusion convolutional neural network for motor imagery brain-computer interface[J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2024, 32: 154–165. doi: 10.1109/TNSRE.2023.3342331.
    [34] JIN Jing, XU Ruitian, DALY I, et al. MOCNN: A multiscale deep convolutional neural network for ERP-based brain-computer interfaces[J]. IEEE Transactions on Cybernetics, 2024, 54(9): 5565–5576. doi: 10.1109/TCYB.2024.3390805.
    [35] SONG Yonghao, ZHENG Qingqing, LIU Bingchuan, et al. EEG conformer: Convolutional transformer for EEG decoding and visualization[J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2023, 31: 710–719. doi: 10.1109/TNSRE.2022.3230250.
    [36] MANE R, ROBINSON N, VINOD A P, et al. A multi-view CNN with novel variance layer for motor imagery brain computer interface[C]. 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society, Montreal, Canada, 2020: 2950–2953. doi: 10.1109/EMBC44109.2020.9175874.
Figures (6) / Tables (2)
Metrics
  • Article views:  109
  • Full-text HTML views:  49
  • PDF downloads:  19
  • Citations: 0
Publication history
  • Received:  2024-06-03
  • Revised:  2025-03-04
  • Available online:  2025-03-15
