YI Chunzhi, JIA Yicheng, JIANG Feng, et al. Human motion intention recognition method based on inertial measurement unit: current situation and challenges[J]. CAAI Transactions on Intelligent Systems, 2025, 20(4): 763-775. [doi:10.11992/tis.202407012]

Human motion intention recognition method based on inertial measurement unit: current situation and challenges

References:
[1] BROPHY E, VEIGA J J D, WANG Zhengwei, et al. An interpretable machine vision approach to human activity recognition using photoplethysmograph sensor data[EB/OL]. (2018-12-03)[2024-07-10]. https://arxiv.org/abs/1812.00668v1.
[2] WANG Jindong, CHEN Yiqiang, HAO Shuji, et al. Deep learning for sensor-based activity recognition: a survey[J]. Pattern recognition letters, 2019, 119: 3-11.
[3] KWAPISZ J R, WEISS G M, MOORE S A. Activity recognition using cell phone accelerometers[J]. ACM SIGKDD explorations newsletter, 2011, 12(2): 74-82.
[4] ANGUITA D, GHIO A, ONETO L, et al. A public domain dataset for human activity recognition using smartphones[C]//ESANN 2013 Proceedings, 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. Bruges: ESANN, 2013: 437-442.
[5] VAN KASTEREN T, NOULAS A, ENGLEBIENNE G, et al. Accurate activity recognition in a home setting[C]//Proceedings of the 10th International Conference on Ubiquitous Computing. Seoul: ACM, 2008: 21-24.
[6] SHOAIB M, BOSCH S, INCEL O D, et al. Fusion of smartphone motion sensors for physical activity recognition[J]. Sensors, 2014, 14(6): 10146-10176.
[7] CHEN Chen, JAFARI R, KEHTARNAVAZ N. UTD-MHAD: a multimodal dataset for human action recognition utilizing a depth camera and a wearable inertial sensor[C]//2015 IEEE International Conference on Image Processing. Quebec City: IEEE, 2015: 168-172.
[8] STISEN A, BLUNCK H, BHATTACHARYA S, et al. Smart devices are different: assessing and mitigating mobile sensing heterogeneities for activity recognition[C]//Proceedings of the 13th ACM Conference on Embedded Networked Sensor Systems. Seoul: ACM, 2015: 127-140.
[9] ALTUN K, BARSHAN B, TUNÇEL O. Comparative study on classifying human activities with miniature inertial and magnetic sensors[J]. Pattern recognition, 2010, 43(10): 3605-3620.
[10] BANOS O, GARCIA R, HOLGADO-TERRIZA J A, et al. mHealthDroid: a novel framework for agile development of mobile health applications[M]//Ambient Assisted Living and Daily Activities. Cham: Springer International Publishing, 2014: 91-98.
[11] BANOS O, VILLALONGA C, GARCIA R, et al. Design, implementation and validation of a novel open framework for agile development of mobile health applications[J]. Biomedical engineering online, 2015, 14(Suppl 2): S6.
[12] CHAVARRIAGA R, SAGHA H, CALATRONI A, et al. The Opportunity challenge: a benchmark database for on-body sensor-based activity recognition[J]. Pattern recognition letters, 2013, 34(15): 2033-2042.
[13] REISS A, STRICKER D. Introducing a new benchmarked dataset for activity monitoring[C]//2012 16th International Symposium on Wearable Computers. Newcastle: IEEE, 2012: 108-109.
[14] SHOAIB M, SCHOLTEN H, HAVINGA P J M. Towards physical activity recognition using smartphone sensors[C]//2013 IEEE 10th International Conference on Ubiquitous Intelligence and Computing and 2013 IEEE 10th International Conference on Autonomic and Trusted Computing. Vietri sul Mare: IEEE, 2013: 80-87.
[15] MICUCCI D, MOBILIO M, NAPOLETANO P. UniMiB SHAR: a dataset for human activity recognition using acceleration data from smartphones[J]. Applied sciences, 2017, 7(10): 1101.
[16] ZHANG Mi, SAWCHUK A A. USC-HAD: a daily activity dataset for ubiquitous activity recognition using wearable sensors[C]//Proceedings of the 2012 ACM Conference on Ubiquitous Computing. Pittsburgh: ACM, 2012: 1036-1043.
[17] VAIZMAN Y, ELLIS K, LANCKRIET G. Recognizing detailed human context in the wild from smartphones and smartwatches[J]. IEEE pervasive computing, 2017, 16(4): 62-74.
[18] KAWAGUCHI N, OGAWA N, IWASAKI Y, et al. HASC Challenge: gathering large scale human activity corpus for the real-world activity understandings[C]//Proceedings of the 2nd Augmented Human International Conference. Tokyo: ACM, 2011: 271-275.
[19] WEISS G M, LOCKHART J W, PULICKAL T T, et al. Actitracker: a smartphone-based activity recognition system for improving health and well-being[C]//2016 IEEE International Conference on Data Science and Advanced Analytics. Montreal: IEEE, 2016: 682-688.
[20] BRUNO B, MASTROGIOVANNI F, SGORBISSA A. A public domain dataset for ADL recognition using wrist-placed accelerometers[C]//The 23rd IEEE International Symposium on Robot and Human Interactive Communication. Edinburgh: IEEE, 2014: 738-743.
[21] ZHANG Zhilin, PI Zhouyue, LIU Benyuan. TROIKA: a general framework for heart rate monitoring using wrist-type photoplethysmographic signals during intensive physical exercise[J]. IEEE transactions on biomedical engineering, 2015, 62(2): 522-531.
[22] DENG Miaolei, GAO Zhendong, LI Lei, et al. Overview of human behavior recognition based on deep learning[J]. Computer engineering and applications, 2022, 58(13): 14-26. (in Chinese)
[23] KYRITSIS K, TATLI C L, DIOU C, et al. Automated analysis of in meal eating behavior using a commercial wristband IMU sensor[C]//2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Jeju: IEEE, 2017: 2843-2846.
[24] CHAUHAN J, HU Yining, SENEVIRATNE S, et al. BreathPrint: breathing acoustics-based user authentication[C]//Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services. New York: ACM, 2017: 278-291.
[25] ZAPPI P, LOMBRISER C, STIEFMEIER T, et al. Activity recognition from on-body sensors: accuracy-power trade-off by dynamic sensor selection[M]//Wireless Sensor Networks. Berlin: Springer Berlin Heidelberg, 2008: 17-33.
[26] BÄCHLIN M, PLOTNIK M, ROGGEN D, et al. Wearable assistant for Parkinson’s disease patients with the freezing of gait symptom[J]. IEEE transactions on information technology in biomedicine, 2010, 14(2): 436-446.
[27] CHEN Y, KEOGH E, HU B, et al. The UCR time series classification archive[EB/OL]. (2022-02-10)[2024-07-10]. http://www.cs.ucr.edu/~eamonn/time_series_data/.
[28] BAGNALL A, DAU H A, LINES J, et al. The UEA multivariate time series classification archive[EB/OL]. (2018-10-31)[2024-07-10]. https://arxiv.org/abs/1811.00075v1.
[29] GJORESKI H, CILIBERTO M, WANG Lin, et al. The University of Sussex-Huawei locomotion and transportation dataset for multimodal analytics with mobile devices[J]. IEEE access, 2018, 6: 42592-42604.
[30] GIORGI G, MARTINELLI F, SARACINO A, et al. Try walking in my shoes, if you can: accurate gait recognition through deep learning[M]//Computer Safety, Reliability, and Security. Cham: Springer International Publishing, 2017: 384-395.
[31] ISMAIL FAWAZ H, FORESTIER G, WEBER J, et al. Data augmentation using synthetic data for time series classification with deep residual networks[C]//Proceedings of the International Workshop on Advanced Analytics and Learning on Temporal Data. Dublin: ECML PKDD, 2018.
[32] WANG Jiwei, CHEN Yiqiang, GU Yang, et al. SensoryGANs: an effective generative adversarial framework for sensor-based human activity recognition[C]//2018 International Joint Conference on Neural Networks. Rio de Janeiro: IEEE, 2018: 1-8.
[33] RAMPONI G, PROTOPAPAS P, BRAMBILLA M, et al. T-CGAN: conditional generative adversarial network for data augmentation in noisy time series with irregular sampling[EB/OL]. (2018-11-20)[2024-07-10]. https://arxiv.org/abs/1811.08295v2.
[34] ALZANTOT M, CHAKRABORTY S, SRIVASTAVA M. SenseGen: a deep learning architecture for synthetic sensor data generation[C]//2017 IEEE International Conference on Pervasive Computing and Communications Workshops. Kona: IEEE, 2017: 188-193.
[35] SAHA S S, SANDHA S S, SRIVASTAVA M. Deep convolutional bidirectional LSTM for complex activity recognition with missing data[M]//Human Activity Recognition Challenge. Singapore: Springer Singapore, 2020: 39-53.
[36] WU Donghui, XU Jing, CHEN Jibin, et al. Human activity recognition algorithm based on CNN-LSTM with attention mechanism[J]. Science technology and engineering, 2023, 23(2): 681-689.
[37] KWON H, TONG C, HARESAMUDRAM H, et al. IMUTube: automatic extraction of virtual on-body accelerometry from video for human activity recognition[EB/OL]. (2020-06-29)[2024-07-10]. https://arxiv.org/abs/2006.05675v2.
[38] ALHARBI F, OUARBYA L, WARD J A. Synthetic sensor data for human activity recognition[C]//2020 International Joint Conference on Neural Networks. Glasgow: IEEE, 2020: 1-9.
[39] CHAN Manghong, NOOR M H M. A unified generative model using generative adversarial network for activity recognition[J]. Journal of ambient intelligence and humanized computing, 2021, 12(7): 8119-8128.
[40] LI Xiang, LUO Jinqi, YOUNES R. ActivityGAN: generative adversarial networks for data augmentation in sensor-based human activity recognition[C]//Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers. Virtual Event: ACM, 2020: 249-254.
[41] XU Fen, SHI Pengfei. A review of research on human action and behavior recognition[J]. Industrial control computer, 2023, 36(9): 58-59. (in Chinese)
[42] SIIRTOLA P, RÖNING J. Incremental learning to personalize human activity recognition models: the importance of human AI collaboration[J]. Sensors, 2019, 19(23): 5151.
[43] QIAN Hangwei, PAN S J, MIAO Chunyan. Latent independent excitation for generalizable sensor-based cross-person activity recognition[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Virtual: AAAI, 2021: 11921-11929.
[44] ARJOVSKY M, BOTTOU L, GULRAJANI I, et al. Invariant risk minimization[EB/OL]. (2019-07-05)[2024-07-10]. https://arxiv.org/abs/1907.02893v3.
[45] ZENG Ming, YU Tong, WANG Xiao, et al. Semi-supervised convolutional neural networks for human activity recognition[C]//2017 IEEE International Conference on Big Data. Boston: IEEE, 2017: 522-529.
[46] BALABKA D. Semi-supervised learning for human activity recognition using adversarial autoencoders[C]//Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers. London: ACM, 2019: 685-688.
[47] CHEN Kaixuan, YAO Lina, ZHANG Dalin, et al. Distributionally robust semi-supervised learning for people-centric sensing[J]. Proceedings of the AAAI conference on artificial intelligence, 2019, 33(1): 3321-3328.
[48] GUDUR G K, SUNDARAMOORTHY P, UMAASHANKAR V. ActiveHARNet: towards on-device deep Bayesian active learning for human activity recognition[C]//The 3rd International Workshop on Deep Learning for Mobile Systems and Applications. Seoul: ACM, 2019: 7-12.
[49] BETTINI C, CIVITARESE G, PRESOTTO R. Personalized semi-supervised federated learning for human activity recognition[EB/OL]. (2021-04-05)[2024-07-10]. https://arxiv.org/abs/2104.08094v2.
[50] MA Caiyi, LIU Xiaowei, XIE Xueqin, et al. The application of transfer learning in biomedicine[J]. Progress in biomedical engineering, 2023, 44(4): 347-356. (in Chinese)
[51] MOHAMMED S, TASHEV I. Unsupervised deep representation learning to remove motion artifacts in free-mode body sensor networks[C]//2017 IEEE 14th International Conference on Wearable and Implantable Body Sensor Networks. Eindhoven: IEEE, 2017: 183-188.
[52] CAO Wei, WANG Dong, LI Jian, et al. BRITS: bidirectional recurrent imputation for time series[J]. Advances in neural information processing systems, 2018, 31: 6775-6785.
[53] LUO Y, CAI X, ZHANG Y, et al. Multivariate time series imputation with generative adversarial networks[J]. Advances in neural information processing systems, 2018, 31: 1596-1607.
[54] SAEED A, OZCELEBI T, LUKKIEN J. Synthesizing and reconstructing missing sensory modalities in behavioral context recognition[J]. Sensors, 2018, 18(9): 2967.
[55] GAO Yang, ZHANG Ning, WANG Honghao, et al. iHear food: eating detection using commodity bluetooth headsets[C]//2016 IEEE First International Conference on Connected Health: Applications, Systems and Engineering Technologies. Washington: IEEE, 2016: 163-172.
[56] ZHOU Yipin, WANG Zhaowen, FANG Chen, et al. Visual to sound: generating natural sound for videos in the wild[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 3550-3558.
[57] HOSSAIN M Z, SOHEL F, SHIRATUDDIN M F, et al. A comprehensive survey of deep learning for image captioning[J]. ACM computing surveys, 2019, 51(6): 1-36.
[58] LIANG Xu, LI Wenxin, ZHANG Hangning. Review of research on human action recognition methods[J]. Application research of computers, 2022, 39(3): 651-660. (in Chinese)
[59] MALEKZADEH M, CLEGG R G, CAVALLARO A, et al. Protecting sensory data against sensitive inferences[C]//Proceedings of the 1st Workshop on Privacy by Design in Distributed Systems. Porto: ACM, 2018: 21-26.
[60] MALEKZADEH M, CLEGG R G, CAVALLARO A, et al. Mobile sensor data anonymization[C]//Proceedings of the International Conference on Internet of Things Design and Implementation. Montreal: ACM, 2019: 49-58.
[61] MALEKZADEH M, CLEGG R G, HADDADI H. Replacement AutoEncoder: a privacy-preserving algorithm for sensory data analysis[C]//2018 IEEE/ACM Third International Conference on Internet-of-Things Design and Implementation. Orlando: IEEE, 2018: 165-176.
[62] VAVOULAS G, CHATZAKI C, MALLIOTAKIS T, et al. The MobiAct dataset: recognition of activities of daily living using smartphones[C]//Proceedings of the International Conference on Information and Communication Technologies for Ageing Well and e-Health. Rome: SCITEPRESS, 2016: 143-151.
[63] LIANG Zhaohui, ZHU Xiaoxiao, CAO Qixin, et al. Algorithm and implementation of lower limb rehabilitation evaluation based on federated learning[J]. Computer engineering and design, 2023, 44(8): 2548-2554. (in Chinese)
[64] XIAO Zhiwen, XU Xin, XING Huanlai, et al. A federated learning system with enhanced feature extraction for human activity recognition[J]. Knowledge-based systems, 2021, 229: 107338.
[65] TU Linlin, OUYANG Xiaomin, ZHOU Jiayu, et al. FedDL: federated learning via dynamic layer sharing for human activity recognition[C]//Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems. Coimbra: ACM, 2021: 15-28.
[66] GUDUR G K, PEREPU S K. Resource-constrained federated learning with heterogeneous labels and models for human activity recognition[M]//Deep Learning for Human Activity Recognition. Singapore: Springer Singapore, 2021: 57-69.
[67] XUE Kaiping, FAN Mao, WANG Feng, et al. Privacy crowdsourcing on blockchain with data verification and controllable anonymity[J]. Journal of electronics & information technology, 2024, 46(2): 748-756. (in Chinese)
[68] LYU Mingqi, XU Wei, CHEN Tieming. A hybrid deep convolutional and recurrent neural network for complex activity recognition using multimodal sensors[J]. Neurocomputing, 2019, 362: 33-40.
[69] ZHAO Hai, CHEN Jiawei, SHI Han, et al. A transfer learning algorithm applied to human activity recognition[J]. Journal of Northeastern University (natural science edition), 2022, 43(6): 776-782. (in Chinese)
[70] SOLEIMANI E, NAZERFARD E. Cross-subject transfer learning in human activity recognition systems using generative adversarial networks[J]. Neurocomputing, 2021, 426: 26-34.
[71] ABEDIN A, REZATOFIGHI H, RANASINGHE D C. Guided-GAN: adversarial representation learning for activity recognition with wearables[EB/OL]. (2021-10-12)[2024-07-10]. https://arxiv.org/abs/2110.05732v1.
[72] SANABRIA A R, ZAMBONELLI F, DOBSON S, et al. ContrasGAN: unsupervised domain adaptation in human activity recognition via adversarial and contrastive learning[J]. Pervasive and mobile computing, 2021, 78: 101477.
[73] VEPAKOMMA P, DE D, DAS S K, et al. A-wristocracy: deep learning on wrist-worn sensing for recognition of user complex activities[C]//2015 IEEE 12th International Conference on Wearable and Implantable Body Sensor Networks. Cambridge: IEEE, 2015: 1-6.
[74] KYRITSIS K, DIOU C, DELOPOULOS A. Modeling wrist micromovements to measure in-meal eating behavior from inertial sensor data[J]. IEEE journal of biomedical and health informatics, 2019, 23(6): 2325-2334.
[75] LIU Cihang, ZHANG Lan, LIU Zongqian, et al. Lasagna: towards deep hierarchical understanding and searching over mobile sensing data[C]//Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking. New York: ACM, 2016: 334-347.
[76] PENG Liangying, CHEN Ling, YE Zhenan, et al. AROMA: a deep multi-task learning based simple and complex human activity recognition method using wearable sensors[J]. Proceedings of the ACM on interactive, mobile, wearable and ubiquitous technologies, 2018, 2(2): 74.
[77] GRZESZICK R, LENK J M, RUEDA F M, et al. Deep neural network based human activity recognition for the order picking process[C]//Proceedings of the 4th International Workshop on Sensor-based Activity Recognition and Interaction. Rostock: ACM, 2017: 1-6.
[78] MATSUI S, INOUE N, AKAGI Y, et al. User adaptation of convolutional neural network for human activity recognition[C]//2017 25th European Signal Processing Conference. Kos: IEEE, 2017: 753-757.
[79] LANE N D, BHATTACHARYA S, GEORGIEV P, et al. DeepX: a software accelerator for low-power deep learning inference on mobile devices[C]//2016 15th ACM/IEEE International Conference on Information Processing in Sensor Networks. Vienna: IEEE, 2016: 1-12.
[80] LANE N D, GEORGIEV P, QENDRO L. DeepEar: robust smartphone audio sensing in unconstrained acoustic environments using deep learning[C]//Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. Osaka: ACM, 2015: 283-294.
[81] CAO Qingqing, BALASUBRAMANIAN N, BALASUBRAMANIAN A. MobiRNN: efficient recurrent neural network execution on mobile GPU[C]//Proceedings of the 1st International Workshop on Deep Learning for Mobile Systems and Applications. New York: ACM, 2017: 1-6.
[82] YAO Shuochao, HU Shaohan, ZHAO Yiran, et al. DeepSense: a unified deep learning framework for time-series mobile sensing data processing[C]//Proceedings of the 26th International Conference on World Wide Web. Perth: International World Wide Web Conferences Steering Committee, 2017: 351-360.
[83] BHATTACHARYA S, LANE N D. Sparsification and separation of deep learning layers for constrained resource inference on wearables[C]//Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems CD-ROM. Stanford: ACM, 2016: 176-189.
[84] EDEL M, KÖPPE E. Binarized-BLSTM-RNN based human activity recognition[C]//2016 International Conference on Indoor Positioning and Indoor Navigation. Alcala de Henares: IEEE, 2016: 1-7.
[85] ZHAO Dongdong, LAI Liang, CHEN Peng, et al. Design and implementation of edge-based human action recognition algorithm based on Ascend processor[J]. Opto-electronic engineering, 2024, 51(6): 66-80. (in Chinese)
[86] BHAT G, TUNCEL Y, AN Sizhe, et al. An ultra-low energy human activity recognition accelerator for wearable health applications[J]. ACM transactions on embedded computing systems, 2019, 18(5s): 1-22.
[87] ISLAM B, NIRJON S. Zygarde: time-sensitive on-device deep inference and adaptation on intermittently-powered systems[EB/OL]. (2019-05-05)[2024-07-10]. https://arxiv.org/abs/1905.03854v2.
[88] XIA S, NIE Jingping, JIANG Xiaofan. CSafe: an intelligent audio wearable platform for improving construction worker safety in urban environments[C]//Proceedings of the 20th International Conference on Information Processing in Sensor Networks. Nashville: ACM, 2021: 207-221.
[89] XIA S, DE GODOY PEIXOTO D, ISLAM B, et al. Improving pedestrian safety in cities using intelligent wearable systems[J]. IEEE internet of things journal, 2019, 6(5): 7497-7514.
[90] DE GODOY D, ISLAM B, XIA S, et al. PAWS: a wearable acoustic system for pedestrian safety[C]//2018 IEEE/ACM Third International Conference on Internet-of-Things Design and Implementation. Orlando: IEEE, 2018: 237-248.
[91] NIE Jingping, HU Yigong, WANG Y, et al. SPIDERS: low-cost wireless glasses for continuous in situ bio-signal acquisition and emotion recognition[C]//2020 IEEE/ACM Fifth International Conference on Internet-of-Things Design and Implementation. Sydney: IEEE, 2020: 27-39.
[92] NIE Jingping, LIU Yanchen, HU Yigong, et al. SPIDERS+: a light-weight, wireless, and low-cost glasses-based wearable platform for emotion sensing and bio-signal acquisition[J]. Pervasive and mobile computing, 2021, 75: 101424.
[93] HU Yigong, NIE Jingping, WANG Y, et al. Demo abstract: wireless glasses for non-contact facial expression monitoring[C]//2020 19th ACM/IEEE International Conference on Information Processing in Sensor Networks. Sydney: IEEE, 2020: 367-368.
[94] WAN Tao, LI Wanqi, GE Jingjing. Reputation update scheme for blockchain-based edge mobile crowdsensing[J]. Application research of computers, 2023, 40(6): 1636-1640. (in Chinese)
[95] XIA S, DE GODOY D, ISLAM B, et al. A smartphone-based system for improving pedestrian safety[C]//2018 IEEE Vehicular Networking Conference. Taipei: IEEE, 2018: 1-2.
[96] LANE N D, GEORGIEV P. Can deep learning revolutionize mobile sensing?[C]//Proceedings of the 16th International Workshop on Mobile Computing Systems and Applications. Santa Fe: ACM, 2015: 117-122.
[97] ZHANG Dalin, YAO Lina, ZHANG Xiang, et al. Cascade and parallel convolutional recurrent neural networks on EEG-based intention recognition for brain computer interface[J]. Proceedings of the AAAI conference on artificial intelligence, 2018, 32(1): 11496.