ZHU Wenlin, LIU Huaping, WANG Bowen, et al. An intelligent blind guidance system based on visual-touch cross-modal perception[J]. CAAI Transactions on Intelligent Systems, 2020, 15(1): 33-40. [doi:10.11992/tis.201908015]

An Intelligent Blind Guidance System Based on Visual-Touch Cross-Modal Perception

References:
[1] CHOPRA S, HADSELL R, LECUN Y. Learning a similarity metric discriminatively, with application to face verification[C]//2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Diego, USA, 2005: 539–546.
[2] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems. Cambridge, USA, 2014: 2672–2680.
[3] MIRZA M, OSINDERO S. Conditional generative adversarial nets[J]. arXiv: 1411.1784, 2014.
[4] REED S, AKATA Z, YAN Xinchen, et al. Generative adversarial text to image synthesis[C]//Proceedings of the 33rd International Conference on Machine Learning. New York, USA, 2016: 1060–1069.
[5] BAHADIR S K, KONCAR V, KALAOGLU F. Wearable obstacle detection system fully integrated to textile structures for visually impaired people[J]. Sensors and actuators A: physical, 2012, 179: 297–311.
[6] SHIN B S, LIM C S. Obstacle detection and avoidance system for visually impaired people[C]//Proceedings of the 2nd International Workshop on Haptic and Audio Interaction Design. Seoul, South Korea, 2007: 78–85.
[7] BOUSBIA-SALAH M, BETTAYEB M, LARBI A. A navigation aid for blind people[J]. Journal of intelligent & robotic systems, 2011, 64(3/4): 387–400.
[8] HASANUZZAMAN F M, YANG Xiaodong, TIAN Yingli. Robust and effective component-based banknote recognition for the blind[J]. IEEE transactions on systems, man, and cybernetics, part C (applications and reviews), 2012, 42(6): 1021–1030.
[9] GUEST S, DESSIRIER J, MEHRABYAN A. The development and validation of sensory and emotional scales of touch perception[J]. Attention, perception, & psychophysics, 2011, 73(2): 531–550.
[10] KIM D Y, YI K Y. A user-steered guide robot for the blind[C]//Proceedings of 2008 IEEE International Conference on Robotics and Biomimetics. Bangkok, Thailand, 2009: 114–119.
[11] TIWANA M, REDMOND S, LOVELL N. A review of tactile sensing technologies with applications in biomedical engineering[J]. Sensors and actuators A: physical, 2012, 179(5): 17–31.
[12] TANG T J J, LUI W L D, LI W H. Plane-based detection of staircases using inverse depth[C]//Proceedings of 2012 Australasian Conference on Robotics and Automation. New Zealand, 2012: 1–10.
[13] AL KALBANI J, SUWAILAM R B, AL YAFAI A, et al. Bus detection system for blind people using RFID[C]//Proceedings of the 2015 IEEE 8th GCC Conference & Exhibition. Muscat, Oman, 2015: 1–6.
[14] KULKARNI A, BHURCHANDI K. Low cost E-book reading device for blind people[C]//Proceedings of 2015 International Conference on Computing Communication Control and Automation. Pune, India, 2015: 516–520.
[15] THILAGAVATHI B. Recognizing clothes patterns and colours for blind people using neural network[C]//Proceedings of 2015 International Conference on Innovations in Information, Embedded and Communication Systems. Coimbatore, India, 2015: 1–5.
[16] NICHOLLS H, LEE M. A survey of robot tactile sensing technology[J]. The international journal of robotics research, 1989, 8(3): 3–30.
[17] STRESE M, SCHUWERK C, IEPURE A, et al. Multimodal feature-based surface material classification[J]. IEEE transactions on haptics, 2017, 10(2): 226–239.
[18] LI Xinwu, LIU Huaping, ZHOU Junfeng, et al. Learning cross-modal visual-tactile representation using ensembled generative adversarial networks[J]. Cognitive computation and systems, 2019, 1(2): 40–44.
[19] ISOLA P, ZHU Junyan, ZHOU Tinghui, et al. Image-to-image translation with conditional adversarial networks[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA, 2017: 5967–5976.
[20] ZHU Junyan, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy, 2017: 2242–2251.
[21] KIM T, CHA M, KIM H, et al. Learning to discover cross-domain relations with generative adversarial networks[C]//Proceedings of the 34th International Conference on Machine Learning. Sydney, Australia, 2017: 1857–1865.
[22] GRIFFIN D, LIM J. Signal estimation from modified short-time Fourier transform[J]. IEEE transactions on acoustics, speech, and signal processing, 1984, 32(2): 236–243.
[23] UJITOKO Y, BAN Y. Vibrotactile signal generation from texture images or attributes using generative adversarial network[C]//Proceedings of the 11th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications. Pisa, Italy, 2018: 25–36.
[24] HUANG G, WANG D, LAN Y. Extreme learning machines: a survey[J]. International journal of machine learning and cybernetics, 2011, 2(2): 107–122.
[25] LEE K A, HICKS G, NINO-MURCIA G. Validity and reliability of a scale to assess fatigue[J]. Psychiatry research, 1991, 36(3): 291–298.

Memo

Received: 2019-08-21.
Funding: Key Program of the National Natural Science Foundation of China (U1613212); Natural Science Foundation of Hebei Province (E2017202035).
About the authors: ZHU Wenlin, male, born in 1994, master's student; his main research interests are novel magnetic materials and devices, and tactile interaction. LIU Huaping, male, born in 1976, associate professor and doctoral supervisor; his main research interests are robot perception, learning, and control, and multimodal information fusion. He established a sparse-coding-based framework for robotic multimodal fusion perception and learning, on which he developed a series of multimodal sparse coding methods combining the robot's optical, infrared, depth, tactile, and other modalities, and validated and applied these multimodal fusion methods on robot platforms such as mobile robots and dexterous manipulators. He has published more than 10 academic papers. WANG Bowen, male, born in 1956, professor and doctoral supervisor; his main research interests are magnetostrictive materials and devices, vibration-based power generation, and magnetic property measurement. He has undertaken 8 projects funded by the National Natural Science Foundation of China and other sources (leading 5) and 10 provincial- and ministerial-level research projects (leading 8); he has received the Hebei Province Outstanding Contribution to Science and Technology Award and a Hebei Province Science and Technology Progress Third Prize. He holds 6 granted patents, has published 2 monographs, and has authored more than 200 academic papers.
Corresponding author: LIU Huaping. E-mail: hpliu@tsinghua.edu.cn

Copyright © Editorial Office of CAAI Transactions on Intelligent Systems
Address: Building 145-1, Nantong Street, Nangang District, Harbin 150001, Heilongjiang Province, China. Tel: 0451-82534001, 82518134. Email: tis@vip.sina.com