[1]JI Xiaofei,WANG Changhui,WANG Yangyang.Human interaction behavior-recognition method based on hierarchical structure[J].CAAI Transactions on Intelligent Systems,2015,10(6):893-900.[doi:10.11992/tis.201505006]

Human interaction behavior-recognition method based on hierarchical structure

References:
[1] WEINLAND D, RONFARD R, BOYER E. A survey of vision-based methods for action representation, segmentation and recognition[J]. Computer Vision and Image Understanding, 2011, 115(2):224-241.
[2] SEO H J, MILANFAR P. Action recognition from one example[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(5):867-882.
[3] WU Lianshi, XIA Limin, LUO Dayong. Survey on human interactive behaviour recognition and comprehension[J]. Computer Applications and Software, 2011, 28(11):60-63. (in Chinese)
[4] YU Gang, YUAN Junsong, LIU Zicheng. Propagative hough voting for human activity recognition[C]//Proceedings of the 12th European Conference on Computer Vision, Florence, Italy. Berlin, Heidelberg: Springer, 2012:693-706.
[5] YU T H, KIM T K, CIPOLLA R. Real-time action recognition by spatiotemporal semantic and structural forests[C]//Proceedings of the 21st British Machine Vision Conference. United Kingdom, 2010:1-12.
[6] YUAN Fei, SAHBI H, PRINET V. Spatio-temporal context kernel for activity recognition[C]//Proceedings of the 1st Asian Conference on Pattern Recognition. Beijing, China, 2011:436-440.
[7] BURGHOUTS G J, SCHUTTE K. Spatio-temporal layout of human actions for improved bag-of-words action detection[J]. Pattern Recognition Letters, 2013, 34(15):1861-1869.
[8] LI Nijun, CHENG Xu, GUO Haiyan, et al. A hybrid method for human interaction recognition using spatio-temporal interest points[C]//Proceedings of the 22nd International Conference on Pattern Recognition. Stockholm, Sweden, 2014:2513-2518.
[9] KONG Yu, JIA Yunde, FU Yun. Interactive phrases: semantic descriptions for human interaction recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(9):1775-1788.
[10] SLIMANI K, BENEZETH Y, SOUAMI F. Human interaction recognition based on the co-occurrence of visual words[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops. Columbus, Ohio, USA, 2014:461-466.
[11] JIANG Zhuolin, LIN Zhe, DAVIS L S. Recognizing human actions by learning and matching shape-motion prototype trees[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(3):533-547.
[12] JI Xiaofei, ZHOU Lu, LI Yibo. Human action recognition based on adaboost algorithm for feature extraction[C]//Proceedings of 2014 IEEE International Conference on Computer and Information Technology. Xi’an, China, 2014:801-805.
[13] RYOO M S, AGGARWAL J K. Spatio-temporal relationship match: video structure comparison for recognition of complex human activities[C]//Proceedings of the IEEE International Conference on Computer Vision. Kyoto, Japan, 2009:1593-1600.
[14] KONG Y, LIANG W, DONG Z, et al. Recognising human interaction from videos by a discriminative model[J]. IET Computer Vision, 2014, 8(4):277-286.
[15] PATRON-PEREZ A, MARSZALEK M, REID I, et al. Structured learning of human interactions in TV shows[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(12):2441-2453.
[16] MUKHERJEE S, BISWAS S K, MUKHERJEE D P. Recognizing interaction between human performers using "key pose doublet"[C]//Proceedings of the 19th ACM International Conference on Multimedia. Scottsdale, AZ, United States, 2011:1329-1332.
[17] BRENDEL W, TODOROVIC S. Learning spatiotemporal graphs of human activities[C]//Proceedings of the IEEE International Conference on Computer Vision. Barcelona, Spain, 2011:778-785.

Copyright © CAAI Transactions on Intelligent Systems