GONG Dongying, HUANG Min, ZHANG Hongbo, et al. Adaptive feature selection method for action recognition of human body in RGBD data[J]. CAAI Transactions on Intelligent Systems, 2017, 12(01): 1-7. [doi:10.11992/tis.201611008]
CAAI Transactions on Intelligent Systems [ISSN:1673-4785/CN:23-1538/TP]

Volume:
Vol. 12
Issue:
2017(01)
Pages:
1-7
Publication date:
2017-02-25

Article Info

Title:
Adaptive feature selection method for action recognition of human body in RGBD data
Author(s):
GONG Dongying (1, 2); HUANG Min (1, 2); ZHANG Hongbo (3); LI Shaozi (1, 2)
1. Intelligent Science & Technology Department, Xiamen University, Xiamen 361005, China;
2. Fujian Key Laboratory of Brain-like Intelligent Systems, Xiamen University, Xiamen 361005, China;
3. Computer Science & Technology School, Huaqiao University, Xiamen 361005, China
Keywords:
action recognition of human body; adaptive feature selection; information entropy; random forest
CLC number:
TP391.41
DOI:
10.11992/tis.201611008
Abstract:
Many methods adopt multi-feature fusion to improve the accuracy of action recognition in RGBD video. Experimental analyses revealed that certain actions are classified well by particular features; however, multi-feature fusion cannot exploit the classification superiority of individual features. Moreover, the fused feature is high-dimensional and considerably expensive in terms of time and space. This research proposes an adaptive feature selection method for RGBD human-action recognition to solve this problem. First, a random forest and information entropy were used to analyze the discriminative power of the human joints, and the number of joints with high discriminative power was chosen as the feature selection criterion. By screening against this threshold number, either the joint features or the relative positions of the joints were used as the action recognition feature. Experimental results show that, compared with multi-feature fusion, the proposed method significantly improves the accuracy of action recognition and outperforms most other algorithms.
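The selection scheme described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the paper's exact entropy formulation is not given on this page, so impurity-based random-forest feature importances stand in as the joint discriminability score, the two thresholds are invented for the example, and the mapping from the joint count to the chosen feature type is an assumption.

```python
# Hedged sketch of adaptive feature selection for skeleton-based action
# recognition: a random forest scores each joint's discriminative power
# (impurity-based importances used here as a stand-in for the paper's
# entropy analysis), and the count of highly discriminative joints decides
# which feature type to use.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_JOINTS = 20  # e.g. a Kinect v1 skeleton has 20 joints
rng = np.random.default_rng(0)

# Synthetic data: 200 "action clips", each summarized by one 3-D
# coordinate per joint (flattened to 60 values); two action classes.
X = rng.normal(size=(200, N_JOINTS * 3))
y = rng.integers(0, 2, size=200)
# Make a few joints genuinely informative so the forest can find them.
informative = [3, 7, 11]
for j in informative:
    X[y == 1, j * 3: j * 3 + 3] += 2.0

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Aggregate per-dimension importances into one score per joint.
joint_scores = forest.feature_importances_.reshape(N_JOINTS, 3).sum(axis=1)

# Count joints whose score clears a discriminability threshold
# (both thresholds below are illustrative, not from the paper).
SCORE_THRESHOLD = 1.0 / N_JOINTS  # above-uniform importance
high_joints = np.flatnonzero(joint_scores > SCORE_THRESHOLD)

COUNT_THRESHOLD = 5  # assumed value; the paper learns/sets its own
feature_choice = ("joint positions" if len(high_joints) >= COUNT_THRESHOLD
                  else "relative joint positions")
print(sorted(high_joints.tolist()), feature_choice)
```

With many discriminative joints, the raw joint features already separate the classes; with few, pairwise relative joint positions provide the richer descriptor, which is the trade-off the count threshold arbitrates.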


Memo:
Received: 2016-11-07.
Foundation items: National Natural Science Foundation of China (61572409, 61571188, 61202143); Natural Science Foundation of Fujian Province (2013J05100); Fujian 2011 Collaborative Innovation Center for TCM Health Management.
Biographies: GONG Dongying, born in 1992, M.S. candidate; her main research interests are action recognition and machine learning. HUANG Min, born in 1982, Ph.D. candidate; her main research interests are action recognition, machine learning, object detection, and image retrieval. ZHANG Hongbo, born in 1986, lecturer, Ph.D.; his main research interest is human action recognition; he leads one NSFC Young Scientists Fund project and one Fujian Provincial Natural Science Foundation general project, and has published many academic papers, more than 20 of which are indexed by SCI/EI.
Corresponding author: LI Shaozi. E-mail: szlig@xmu.edu.cn.