
Human Activity Recognition Technology Based on Sliding Window and Convolutional Neural Network

HE Jian, GUO Zelong, LIU Leyuan, SU Yuhan

Citation: HE Jian, GUO Zelong, LIU Leyuan, SU Yuhan. Human Activity Recognition Technology Based on Sliding Window and Convolutional Neural Network[J]. Journal of Electronics & Information Technology, 2022, 44(1): 168-177. doi: 10.11999/JEIT200942


doi: 10.11999/JEIT200942
Funds: The National Key R&D Program of China (2020YFB2104400), the National Natural Science Foundation of China (61602016), and the Beijing Science and Technology Plan (D171100004017003)
Article details
    About the authors:

    HE Jian: male, born in 1969, Associate Professor; his research interests include intelligent human-computer interaction, pervasive computing, and the Internet of Things

    GUO Zelong: male, born in 1996, M.S. candidate; his research interests include intelligent human-computer interaction and pattern recognition

    LIU Leyuan: male, born in 1990, Ph.D. candidate; his research interests include the Internet of Things, machine learning, and software engineering

    SU Yuhan: male, born in 1997, M.S. candidate; his research interests include intelligent human-computer interaction and pattern recognition

    Corresponding author:

    HE Jian, Jianhee@bjut.edu.cn

  • CLC number: TN911.7; TP391

  • Abstract: Because a unified human activity model and the associated specifications are lacking, existing wearable human activity recognition techniques differ in the type, number, and placement of the sensors they employ, which hinders their wider adoption. Building on an analysis of human skeletal characteristics combined with the dynamics of human activity, this paper establishes a human activity model in Cartesian coordinates and standardizes both the placement of the activity sensors in the model and the normalization of the activity data. Next, a sliding-window technique is introduced to map human activity data into RGB bitmaps, and a convolutional neural network for human activity recognition (HAR-CNN) is designed. Finally, an HAR-CNN instance is built and experimentally evaluated on the public Opportunity human activity dataset. The results show that HAR-CNN achieves F1 scores of 90% for periodic, repetitive activities and 92% for discrete activities, while running efficiently.
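
To make the pipeline described in the abstract concrete, here is a minimal Python sketch of the two preprocessing steps: segmenting a multi-sensor stream with a sliding window, and rendering each window as an RGB bitmap. The channel layout (x/y/z axes as the R/G/B planes), the min-max normalization, and the window length and step below are illustrative assumptions, not the exact parameters used in the paper.

```python
import numpy as np

def sliding_windows(data, length, step):
    """Yield fixed-length windows over a (time, sensors, axes) stream."""
    for start in range(0, data.shape[0] - length + 1, step):
        yield data[start:start + length]

def window_to_rgb(window, lo, hi):
    """Render one window as an RGB bitmap: rows = time steps,
    columns = sensors, channels = x/y/z axes (illustrative layout)."""
    scaled = (window - lo) / (hi - lo + 1e-8)            # min-max normalize
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

# Example: 10 s of 30 Hz data from 5 body-worn tri-axial sensors (synthetic).
stream = np.random.randn(300, 5, 3).astype(np.float32)
lo, hi = stream.min(axis=0), stream.max(axis=0)
bitmaps = [window_to_rgb(w, lo, hi)
           for w in sliding_windows(stream, length=64, step=16)]
print(len(bitmaps), bitmaps[0].shape)                    # 15 (64, 5, 3)
```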
  • Figure 1  Human activity dynamics model based on Cartesian coordinates

    Figure 2  Converting human activity dynamics data into bitmaps

    Figure 3  HAR-CNN architecture

    Figure 4  Sensor placement in the Opportunity dataset

    Figure 5  HAR-CNN network architecture (a generic code sketch follows this list)

    Figure 6  F1 scores under different sliding-window lengths and steps
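
Figures 3 and 5 present the HAR-CNN architecture, which is not reproduced in this section. As a placeholder, the following PyTorch sketch shows the general shape of such a classifier over the RGB window-bitmaps: a small CNN with an assumed class count of 18 (the 17 gestures of Table 4 plus NULL). The layer sizes and pooling choices are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class TinyHARCNN(nn.Module):
    """Generic stand-in for HAR-CNN (not the architecture of Figs. 3/5):
    two convolution blocks over RGB window-bitmaps, then a linear head."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                 # pool features to 1x1
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                # x: (batch, 3, window_len, sensors)
        return self.classifier(self.features(x).flatten(1))

model = TinyHARCNN(n_classes=18)          # 17 gesture classes + NULL (assumed)
logits = model(torch.randn(8, 3, 64, 5))  # a batch of 8 window-bitmaps
print(logits.shape)                       # torch.Size([8, 18])
```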

    Table 1  Opportunity dataset description

    Category     Activity              Count
    Periodic     Stand                 960
                 Sit                   169
                 Walk                  1711
                 Lie                   40
    Aperiodic    Open Door 1           125
                 Close Door 1          122
                 Open Door 2           119
                 Close Door 2          120
                 Open Fridge           209
                 Close Fridge          213
                 Drink from Cup        136
                 Clean Table           134
                 Toggle Light Switch   129
                 Open Dishwasher       128
                 Close Dishwasher      124
                 Open Drawer 1         123
                 Close Drawer 1        125
                 Open Drawer 2         116
                 Close Drawer 2        112
                 Open Drawer 3         210
                 Close Drawer 3        216
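
The counts in Table 1 are labeled activity segments. To train a window-based classifier, each sliding window must receive a single label; a common convention, assumed here because this section does not state the paper's rule, is to take the majority label of the samples inside the window:

```python
import numpy as np

def label_windows(labels, length, step):
    """Give each sliding window the majority label of its samples
    (a common convention; np.bincount(...).argmax() breaks ties low)."""
    out = []
    for start in range(0, len(labels) - length + 1, step):
        out.append(np.bincount(labels[start:start + length]).argmax())
    return np.array(out)

# e.g. 0 = NULL, 2 = "Open Door 1" (hypothetical label coding)
labels = np.array([0] * 100 + [2] * 40 + [0] * 60)
print(label_windows(labels, length=64, step=32))   # -> [0 0 0 2 0]
```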

    Table 2  F1 comparison of different algorithms

    Algorithm        Gesture   Gesture (NULL)   ML     ML (NULL)   Runtime (s)
    Bayes Network    0.79      0.81             0.82   0.74        32.9
    Random Forest    0.63      0.72             0.73   0.69        21.2
    Naïve Bayes      0.54      0.66             0.75   0.74        8.1
    Random Tree      0.75      0.88             0.87   0.85        7.0
    DeepConvLSTM     0.86      0.91             0.93   0.89        6.6
    MS-2DCNN         0.81      0.89             0.92   0.85        5.8
    DRNN             0.84      0.92             0.91   0.88        7.3
    HAR-CNN          0.90      0.92             0.92   0.90        3.7
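
Table 2 reports each F1 score both with and without the NULL class. A minimal sketch of how such paired scores can be computed, assuming scikit-learn's weighted-average F1 (the averaging scheme actually used in the paper is not stated in this section):

```python
import numpy as np
from sklearn.metrics import f1_score

def f1_with_without_null(y_true, y_pred, null_label=0):
    """Weighted F1 over all classes, and again with NULL samples removed."""
    with_null = f1_score(y_true, y_pred, average="weighted")
    keep = y_true != null_label
    without_null = f1_score(y_true[keep], y_pred[keep], average="weighted")
    return with_null, without_null

# Toy labels: 0 = NULL, 1..3 = activity classes (hypothetical coding).
y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3])
y_pred = np.array([0, 1, 1, 1, 2, 2, 3, 2])
print(f1_with_without_null(y_true, y_pred))
```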

    Table 3  Precision and recall of ML (modes of locomotion) activity recognition (%)

    Activity    DRNN precision   DRNN recall   HAR-CNN precision   HAR-CNN recall
    NULL        90               91            92                  92
    Stand       92               91            96                  92
    Walk        79               82            81                  94
    Sit         92               85            93                  73
    Lie         90               89            85                  87
    Mean        90               90            85                  87
    Std. dev.   5                4             5                   8

    Table 4  Precision and recall of GR (gesture recognition) activities (%)

    Activity              DRNN precision   DRNN recall   HAR-CNN precision   HAR-CNN recall
    NULL                  95               97            94                  95
    Open Door 1           87               92            92                  87
    Open Door 2           93               93            96                  93
    Close Door 1          90               92            96                  80
    Close Door 2          95               92            96                  91
    Open Fridge           85               65            90                  80
    Close Fridge          83               86            87                  74
    Open Dishwasher       88               74            83                  83
    Close Dishwasher      80               81            85                  83
    Open Drawer 1         79               77            88                  86
    Close Drawer 1        80               77            85                  75
    Open Drawer 2         77               83            97                  82
    Close Drawer 2        79               88            83                  83
    Open Drawer 3         86               85            96                  90
    Close Drawer 3        86               84            91                  85
    Clean Table           93               82            100                 100
    Drink from Cup        89               88            78                  95
    Toggle Light Switch   94               69            91                  95
    Mean                  87               84            90                  87
    Std. dev.             6                8             6                   7
  • [1] SINGH R and SRIVASTAVA R. Some contemporary approaches for human activity recognition: A survey[C]. 2020 International Conference on Power Electronics & IoT Applications in Renewable Energy and its Control, Mathura, India, 2020: 544–548.
    [2] GOTA D I, PUSCASIU A, FANCA A, et al. Human-Computer Interaction using hand gestures[C]. 2020 IEEE International Conference on Automation, Quality and Testing, Robotics, Cluj-Napoca, Romania, 2020: 1–5.
    [3] HU Ning, SU Shen, TANG Chang, et al. Wearable-sensors based activity recognition for smart human healthcare using internet of things[C]. 2020 International Wireless Communications and Mobile Computing, Limassol, Cyprus, 2020: 1909–1915. doi: 10.1109/IWCMC48107.2020.9148197.
    [4] NATANI A, SHARMA A, PERUMA T, et al. Deep learning for multi-resident activity recognition in ambient sensing smart homes[C]. The IEEE 8th Global Conference on Consumer Electronics, Osaka, Japan, 2019: 340–341.
    [5] RODRIGUES R, BHARGAVA N, VELMURUGAN R, et al. Multi-timescale trajectory prediction for abnormal human activity detection[C]. 2020 IEEE Winter Conference on Applications of Computer Vision, Snowmass, USA, 2020: 2615–2623.
    [6] DEEP S and ZHENG Xi. Leveraging CNN and transfer learning for vision-based human activity recognition[C]. The 29th International Telecommunication Networks and Applications Conference, Auckland, New Zealand, 2019: 1–4.
    [7] DENG Shizhuo, WANG Botao, YANG Chuangui, et al. Convolutional neural networks for human activity recognition using multi-location wearable sensors[J]. Journal of Software, 2019, 30(3): 718–737. doi: 10.13328/j.cnki.jos.005685
    [8] WANG Yan, CANG Shuang, and YU Hongnian. A survey on wearable sensor modality centred human activity recognition in health care[J]. Expert Systems with Applications, 2019, 137: 167–190. doi: 10.1016/j.eswa.2019.04.057
    [9] BAO Ling and INTILLE S S. Activity recognition from user-annotated acceleration data[C]. The 2nd International Conference on Pervasive Computing, Vienna, Austria, 2004: 1–17.
    [10] NURMI P, FLORÉEN P, PRZYBILSKI M, et al. A framework for distributed activity recognition in ubiquitous systems[C]. International Conference on Artificial Intelligence, Las Vegas, USA, 2005: 650–655.
    [11] RAVI N, DANDEKAR N, MYSORE P, et al. Activity recognition from accelerometer data[C]. The Twentieth National Conference on Artificial Intelligence and the Seventeenth Innovative Applications of Artificial Intelligence Conference, Pittsburgh, USA, 2005: 1541–1546.
    [12] HE Jian, ZHOU Mingwo, and WANG Xiaoyi. Wearable method for fall detection based on Kalman filter and k-NN algorithm[J]. Journal of Electronics & Information Technology, 2017, 39(11): 2627–2634. doi: 10.11999/JEIT170173
    [13] TRAN D N and PHAN D D. Human activities recognition in android smartphone using support vector machine[C]. The 7th International Conference on Intelligent Systems, Modelling and Simulation, Bangkok, Thailand, 2016: 64–68.
    [14] HANAI Y, NISHIMURA J, and KURODA T. Haar-like filtering for human activity recognition using 3D accelerometer[C]. The 13th Digital Signal Processing Workshop and 5th IEEE Signal Processing Education Workshop, Marco Island, USA, 2009: 675–678.
    [15] ORDÓÑEZ F J and ROGGEN D. Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition[J]. Sensors, 2016, 16(1): 115. doi: 10.3390/s16010115
    [16] HE Jian, ZHANG Zihao, and WANG Weidong. Low-power fall detection technology based on ZigBee and CNN algorithm[J]. Journal of Tianjin University: Science and Technology, 2019, 52(10): 1045–1054. doi: 10.11784/tdxbz201808059
    [17] GJORESKI H, LUSTREK M, and GAMS M. Accelerometer placement for posture recognition and fall detection[C]. 2011 Seventh International Conference on Intelligent Environments, Nottingham, UK, 2011: 47–54.
    [18] WANG Changhong, LU Wei, NARAYANAN M R, et al. Low-power fall detector using triaxial accelerometry and barometric pressure sensing[J]. IEEE Transactions on Industrial Informatics, 2016, 12(6): 2302–2311. doi: 10.1109/TII.2016.2587761
    [19] GJORESKI H, KOZINA S, GAMS M, et al. RAReFall - Real-time activity recognition and fall detection system[C]. 2014 IEEE International Conference on Pervasive Computing and Communication Workshops, Budapest, Hungary, 2014: 145–147.
    [20] SHOTTON J, FITZGIBBON A, COOK M, et al. Real-time human pose recognition in parts from single depth images[C]. The CVPR 2011, Colorado, USA, 2011: 1297–1304. doi: 10.1109/CVPR.2011.5995316.
    [21] KONIUSZ P, CHERIAN A, and PORIKLI F. Tensor representations via kernel linearization for action recognition from 3D skeletons[C]. The 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 2016: 37–53.
    [22] LECUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278–2324. doi: 10.1109/5.726791
    [23] KRIZHEVSKY A, SUTSKEVER I, and HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84–90. doi: 10.1145/3065386
    [24] SIMONYAN K and ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv: 1409.1556, 2015.
    [25] Opportunity dataset[EB/OL]. https://archive.ics.uci.edu/ml/datasets/OPPORTUNITY+Activity+Recognition, 2015.
    [26] MURAD A and PYUN J Y. Deep recurrent neural networks for human activity recognition[J]. Sensors, 2017, 17(11): 2556. doi: 10.3390/s17112556
Publication history
  • Received: 2020-11-04
  • Revised: 2021-06-02
  • Available online: 2021-08-24
  • Issue published: 2022-01-10
