[1]LI Haomiao,ZHANG Hanxiao,XING Xianglei.Gait recognition with united local multiscale and global context features[J].CAAI Transactions on Intelligent Systems,2024,19(4):853-862.[doi:10.11992/tis.202304004]

Gait recognition with united local multiscale and global context features

References:
[1] XU Wenzheng, HUANG Tianhuan, BEN Xianye, et al. Cross-view gait recognition: a review[J]. Journal of image and graphics, 2023, 28(5): 1265–1286.
[2] LYU Zhuowen, WANG Yibin, XING Xianglei, et al. A gait representation method based on weighted CCA for multi-information fusion[J]. CAAI transactions on intelligent systems, 2019, 14(3): 449–454.
[3] LI Yibo, LI Kun. Gait recognition based on dual view and multiple feature information fusion[J]. CAAI transactions on intelligent systems, 2013, 8(1): 74–79.
[4] CONNOR P, ROSS A. Biometric recognition by gait: a survey of modalities and features[J]. Computer vision and image understanding, 2018, 167: 1–27.
[5] LIAO R, CAO C, GARCIA E B, et al. Pose-based temporal-spatial network (PTSN) for gait recognition with carrying and clothing variations[C]//12th Chinese Conference on Biometric Recognition. Shenzhen: Springer International Publishing, 2017: 474-483.
[6] YU Shiqi, TAN Daoliang, TAN Tieniu. A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition[C]//18th International Conference on Pattern Recognition. Hong Kong: IEEE, 2006, 4: 441-444.
[7] ZHANG Yuqi, HUANG Yongzhen, YU Shiqi, et al. Cross-view gait recognition by discriminative feature learning[J]. IEEE transactions on image processing, 2019, 29: 1001–1015.
[8] FAN Chao, PENG Yunjie, CAO Chunshui, et al. GaitPart: temporal part-based model for gait recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020: 14225-14233.
[9] SEPAS-MOGHADDAM A, ETEMAD A. View-invariant gait recognition with attentive recurrent learning of partial representations[J]. IEEE transactions on biometrics, behavior, and identity science, 2020, 3(1): 124–137.
[10] LIN Beibei, ZHANG Shunli, YU Xin. Gait recognition via effective global-local feature representation and local temporal aggregation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Montreal: IEEE, 2021: 14648-14656.
[11] WOLF T, BABAEE M, RIGOLL G. Multi-view gait recognition using 3D convolutional neural networks[C]//2016 IEEE International Conference on Image Processing. Phoenix: IEEE, 2016: 4165-4169.
[12] HUANG Zhen, XUE Dixiu, SHEN Xu, et al. 3D local convolutional neural networks for gait recognition[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Montreal: IEEE, 2021: 14920-14929.
[13] ZHANG Ziyuan, TRAN Luan, LIU Feng, et al. On learning disentangled representations for gait recognition[J]. IEEE transactions on pattern analysis and machine intelligence, 2020, 44(1): 345–360.
[14] ZHANG Ziyuan, TRAN Luan, YIN Xi, et al. Gait recognition via disentangled representation learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 4710-4719.
[15] LIN Beibei, ZHANG Shunli, BAO Feng. Gait recognition with multiple-temporal-scale 3D convolutional neural network[C]//Proceedings of the 28th ACM International Conference on Multimedia. New York: ACM, 2020: 3054-3062.
[16] HUANG Xiaohu, ZHU Duowang, WANG Hao, et al. Context-sensitive temporal feature learning for gait recognition[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Montreal: IEEE, 2021: 12909-12918.
[17] JI Shuiwang, XU Wei, YANG Ming, et al. 3D convolutional neural networks for human action recognition[J]. IEEE transactions on pattern analysis and machine intelligence, 2012, 35(1): 221–231.
[18] TRAN D, BOURDEV L, FERGUS R, et al. Learning spatiotemporal features with 3D convolutional networks[C]//Proceedings of the IEEE International Conference on Computer Vision. Santiago: IEEE, 2015: 4489-4497.
[19] TRAN D, WANG H, TORRESANI L, et al. A closer look at spatiotemporal convolutions for action recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 6450-6459.
[20] QIU Zhaofan, YAO Ting, MEI Tao. Learning spatio-temporal representation with pseudo-3D residual networks[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Venice: IEEE, 2017: 5534-5542.
[21] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[J]. Advances in neural information processing systems, 2017, 30: 6000–6010.
[22] CAMGOZ N C, KOLLER O, HADFIELD S, et al. Sign language transformers: Joint end-to-end sign language recognition and translation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020: 10023-10033.
[23] VAROL G, MOMENI L, ALBANIE S, et al. Read and attend: temporal localisation in sign language videos[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville: IEEE, 2021: 16857-16866.
[24] CAMGOZ N C, KOLLER O, HADFIELD S, et al. Multi-channel transformers for multi-articulatory sign language translation[C]//16th European Conference on Computer Vision. Glasgow: Springer International Publishing, 2020: 301-319.
[25] SAUNDERS B, CAMGOZ N C, BOWDEN R. Progressive transformers for end-to-end sign language production[C]//16th European Conference on Computer Vision. Glasgow: Springer International Publishing, 2020: 687-705.
[26] CARION N, MASSA F, SYNNAEVE G, et al. End-to-end object detection with transformers[C]//16th European Conference on Computer Vision. Glasgow: Springer International Publishing, 2020: 213-229.
[27] FU Jun, LIU Jing, TIAN Haijie, et al. Dual attention network for scene segmentation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 3146-3154.
[28] SUN Chen, MYERS A, VONDRICK C, et al. Videobert: a joint model for video and language representation learning[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Seoul: IEEE, 2019: 7464-7473.
[29] TAKEMURA N, MAKIHARA Y, MURAMATSU D, et al. Multi-view large population gait dataset and its performance evaluation for cross-view gait recognition[J]. IPSJ transactions on computer vision and applications, 2018, 10: 1–14.
[30] KINGMA D P, BA J. Adam: a method for stochastic optimization[J]. International conference on learning representations, 2014, 1: 1–13.
[31] MAAS A L, HANNUN A Y, NG A Y. Rectifier nonlinearities improve neural network acoustic models[J]. Computer science, 2013, 30(1): 3–8.
[32] CHAO Hanqing, WANG Kun, HE Yiwei, et al. GaitSet: cross-view gait recognition through utilizing gait as a deep set[J]. IEEE transactions on pattern analysis and machine intelligence, 2021, 44(7): 3467–3478.
[32] LIU Zhengdao, NURBIYA Yadikar, MUTELEP Mamut, et al. Research on gait recognition based on optimized GaitSet model[J]. Journal of Northeast Normal University, 2022, 54(4): 77–86.

Copyright © CAAI Transactions on Intelligent Systems