HE Ruibo, DI Lan, LIANG Jiuzhen. An improved deep learning algorithm for road traffic identification[J]. CAAI Transactions on Intelligent Systems, 2020, 15(6): 1121-1130. [doi:10.11992/tis.201811009]

An improved deep learning algorithm for road traffic identification

CAAI Transactions on Intelligent Systems [ISSN: 1673-4785 / CN: 23-1538/TP]

Volume:
Vol. 15
Issue:
No. 6, 2020
Pages:
1121-1130
Section:
Academic Papers: Machine Learning
Publication date:
2020-11-05

Article Information

Title:
An improved deep learning algorithm for road traffic identification
Author(s):
HE Ruibo 1,2, DI Lan 1, LIANG Jiuzhen 3
1. School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China;
2. The 28th Research Institute of China Electronics Technology Group Corporation, Nanjing 210007, China;
3. School of Information Science and Engineering, Changzhou University, Changzhou 213164, China
Keywords:
road traffic identification; image segmentation; convolutional neural network; complex background elimination; data enhancement; normalization; squeeze-and-excitation network; residual connection
CLC number:
TP391.4
DOI:
10.11992/tis.201811009
Abstract:
This study proposes a road traffic identification algorithm for complex environments that combines image preprocessing with a deep-learning neural network. The proposed method uses not only image segmentation but also a convolutional neural network model to identify road traffic signs more accurately. First, a complete dataset is obtained via batch preprocessing operations, including illumination adjustment, complex background elimination, data enhancement, and normalization. Next, the convolutional neural network model is fully trained using a structure that combines the squeeze-and-excitation idea with residual connections. Finally, the optimized network model is used to identify road traffic signs. The experimental results show that the proposed method reduces the training time by approximately 12% and that the recognition accuracy reaches 99.26%.
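The abstract describes the network only at a high level, as a convolutional model that combines the squeeze-and-excitation (SE) idea with residual connections (cf. references [18] and [19]). Below is a minimal sketch of one such SE residual block, written in PyTorch rather than the authors' own implementation; the class name, channel width (64), reduction ratio (16), and 48x48 input size are illustrative assumptions, not values reported in the paper.

```python
# Minimal sketch (assumptions noted above): a residual block whose output
# channels are recalibrated by a squeeze-and-excitation module before the
# identity shortcut is added.
import torch
import torch.nn as nn


class SEResidualBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Residual branch: two 3x3 convolutions with batch normalization.
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        # Squeeze: global average pooling; excitation: two fully connected
        # layers producing per-channel weights in (0, 1).
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # Reweight each channel with its SE score, then add the shortcut.
        b, c, _, _ = out.shape
        w = self.excite(self.squeeze(out).view(b, c)).view(b, c, 1, 1)
        out = out * w
        return self.relu(out + x)


if __name__ == "__main__":
    # Example: a batch of 8 normalized 48x48 sign crops with 64 feature channels.
    block = SEResidualBlock(channels=64)
    print(block(torch.randn(8, 64, 48, 48)).shape)  # torch.Size([8, 64, 48, 48])
```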

References:

[1] KHAN J F, BHUIYAN S M A, ADHAMI R R. Image segmentation and shape analysis for road-sign detection[J]. IEEE transactions on intelligent transportation systems, 2011, 12(1): 83-96.
[2] YANG H M, LIU Chaolin, LIU Kunhao, et al. Traffic sign recognition in disturbing environments[C]//Proceedings of the 14th International Symposium on Methodologies for Intelligent Systems. Maebashi City, Japan, 2003: 252-261.
[3] BARTNECK N, RITTER W. Colour segmentation with polynomial classification[C]//Proceedings of the 11th IAPR International Conference on Pattern Recognition. Vol. II. Conference B: Pattern Recognition Methodology and Systems. The Hague, Netherlands, 1992: 635-638.
[4] SANDOVAL H, HATTORI T, KITAGAWA S, et al. Angle-dependent edge detection for traffic signs recognition[C]//Proceedings of 2000 IEEE Intelligent Vehicles Symposium. Dearborn, USA, 2000: 308-313.
[5] BARNES N, ZELINSKY A, FLETCHER L S. Real-time speed sign detection using the radial symmetry detector[J]. IEEE transactions on intelligent transportation systems, 2008, 9(2): 322-332.
[6] PAULO C F, CORREIA P L. Traffic sign recognition based on pictogram contours[C]//Proceedings of 2008 IEEE Ninth International Workshop on Image Analysis for Multimedia Interactive Services. Klagenfurt, Austria, 2008: 67-70.
[7] GAO X W, PODLADCHIKOVA L, SHAPOSHNIKOV D, et al. Recognition of traffic signs based on their colour and shape features extracted using human vision models[J]. Journal of visual communication and image representation, 2006, 17(4): 675-685.
[8] STALLKAMP J, SCHLIPSING M, SALMEN J, et al. Man vs. computer: benchmarking machine learning algorithms for traffic sign recognition[J]. Neural networks, 2012, 32: 323-332.
[9] SEEBÖCK P. Deep learning in medical image analysis[D]. Vienna: Vienna University of Technology, 2015: 333-338.
[10] 丁頠洋, 刘格, 王贤哲, 等. 基于深度学习的游行示威视频图像检测方法[J]. 指挥信息系统与技术, 2018, 9(6): 75-79
DING Weiyang, LIU Ge, WANG Xianzhe, et al. Demonstration video image detection method based on deep learning[J]. Command information system and technology, 2018, 9(6): 75-79
[11] YANG H M, LIU C L, LIU K H, et al. Traffic sign recognition in disturbing environments[C]//Proceedings of the 14th International Symposium on Methodologies for Intelligent Systems (ISMIS 2003). Maebashi City, Japan, 2003: 252-261.
[12] ISLAM K T, RAJ R G. Real-time (vision-based) road sign recognition using an artificial neural network[J]. Sensors, 2017, 17(4): 853.
[13] PICCIOLI G, DE MICHELI E, PARODI P, et al. Robust method for road sign detection and recognition[J]. Image and vision computing, 1996, 14(3): 209-223.
[14] 房泽平, 段建民, 郑榜贵. 基于特征颜色和SNCC的交通标志识别与跟踪[J]. 交通运输系统工程与信息, 2014, 14(1): 47-52
FANG Zeping, DUAN Jianmin, ZHENG Banggui. Traffic signs recognition and tracking based on feature color and SNCC algorithm[J]. Journal of transportation systems engineering and information technology, 2014, 14(1): 47-52
[15] ZHOU Jun, BAO Xu, LI Dawei, et al. Traffic video image segmentation model based on Bayesian and spatio-temporal Markov random field[J]. Journal of physics: conference series, 2017, 910(1): 012041.
[16] 陈利, 刘伟峰, 杨爱兰. 基于OSPA距离和特征点采样的路标识别算法[J]. 哈尔滨师范大学自然科学学报, 2017, 33(2): 55-57
CHEN Li, LIU Weifeng, YANG Ailan. A recognition algorithm for road signs based on the OSPA metric and characteristic point sampling[J]. Natural Science Journal of Harbin Normal University, 2017, 33(2): 55-57
[17] SHEN Dinggang, WU Guorong, SUK H I. Deep learning in medical image analysis[J]. Annual review of biomedical engineering, 2017, 19: 221-248.
[18] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA, 2016: 770-778.
[19] HU Jie, SHEN Li, SUN Gang. Squeeze-and-excitation networks[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA, 2018: 7132-7141.
[20] 张懿, 刘旭, 李海峰. 数字RGB与YCbCr颜色空间转换的精度[J]. 江南大学学报(自然科学版), 2007, 6(2): 200-202
ZHANG Yi, LIU Xu, LI Haifeng. The precision of RGB color space convert to YCbCr color space[J]. Journal of JiangNan University (Natural Science Edition), 2007, 6(2): 200-202
[21] HU M K. Visual pattern recognition by moment invariants[J]. IRE transactions on information theory, 1962, 8(2): 179-187.
[22] 孙即祥. 模式识别中的特征提取与计算机视觉不变量[M]. 北京: 国防工业出版社, 2001.
SUN Jixiang. Feature extraction in pattern recognition and invariants in computer vision[M]. Beijing: National Defense Industry Press, 2001.
[23] 温佩芝, 陈晓, 吴晓军, 等. 基于三次样条插值的GrabCut自动目标分割算法[J]. 计算机应用研究, 2014, 31(7): 2187-2190
WEN Peizhi, CHEN Xiao, WU Xiaojun, et al. Automatic target segmentation algorithm of GrabCut based on cubic spline interpolation[J]. Application research of computers, 2014, 31(7): 2187-2190
[24] ZEILER M D, FERGUS R. Visualizing and understanding convolutional networks[C]//Proceedings of the 13th European Conference on Computer Vision. Zurich, Switzerland, 2014: 818-833.
[25] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]//Proceedings of the 25th International Conference on Neural Information Processing Systems. Lake Tahoe, USA, 2012: 1097-1105.
[26] JIA Yangqing, SHELHAMER E, DONAHUE J, et al. Caffe: convolutional architecture for fast feature embedding[C]//Proceedings of the 22nd ACM International Conference on Multimedia. Orlando, USA, 2014: 675-678.

Similar References:

[1]王科俊,郭庆昌.基于粒子群优化算法和改进的Snake模型的图像分割算法[J].智能系统学报,2007,2(01):53.
 WANG Ke-jun, GUO Qing-chang. Image segmentation algorithm based on the PSO and improved Snake model[J]. CAAI Transactions on Intelligent Systems, 2007, 2(1): 53.
[2]陈小波,程显毅.一种基于MAS的自适应图像分割方法[J].智能系统学报,2007,2(04):80.
 CHEN Xiao-bo, CHENG Xian-yi. An adaptive image segmentation technique based on multiAgent system[J]. CAAI Transactions on Intelligent Systems, 2007, 2(4): 80.
[3]刘咏梅,代丽洁.基于空间位置约束的K均值图像分割[J].智能系统学报,2010,5(01):67.
 LIU Yong-mei, DAI Li-jie. An improved method of Kmeans image segmentation based on spatial position information[J]. CAAI Transactions on Intelligent Systems, 2010, 5(1): 67.
[4]吴一全,纪守新.灰度熵和混沌粒子群的图像多阈值选取[J].智能系统学报,2010,5(06):522.
 WU Yi-quan,JI Shou-xin.Multithreshold selection for an image based on gray entropy and chaotic particle swarm optimization[J].CAAI Transactions on Intelligent Systems,2010,5(6):522.
[5]尚倩,阮秋琦,李小利.双目立体视觉的目标识别与定位[J].智能系统学报,2011,6(04):303.
 SHANG Qian, RUAN Qiuqi, LI Xiaoli. Target recognition and location based on binocular stereo vision[J]. CAAI Transactions on Intelligent Systems, 2011, 6(4): 303.
[6]胡光龙,秦世引.动态成像条件下基于SURF和Mean shift的运动目标高精度检测[J].智能系统学报,2012,7(01):61.
 HU Guanglong, QIN Shiyin. High precision detection of a mobile object under dynamic imaging based on SURF and Mean shift[J]. CAAI Transactions on Intelligent Systems, 2012, 7(1): 61.
[7]马慧,王科俊.采用旋转校正的指静脉图像感兴趣区域提取方法[J].智能系统学报,2012,7(03):230.
 MA Hui, WANG Kejun. A region of interest extraction method using rotation rectified finger vein images[J]. CAAI Transactions on Intelligent Systems, 2012, 7(3): 230.
[8]尹雨山,王李进,尹义龙,等.回溯搜索优化算法辅助的多阈值图像分割[J].智能系统学报,2015,10(01):68.[doi:10.3969/j.issn.1673-4785.201410008]
 YIN Yushan, WANG Lijin, YIN Yilong, et al. Backtracking search optimization algorithm assisted multilevel threshold for image segmentation[J]. CAAI Transactions on Intelligent Systems, 2015, 10(1): 68. [doi:10.3969/j.issn.1673-4785.201410008]
[9]吴一全,王凯,曹鹏祥.蜂群优化的二维非对称Tsallis交叉熵图像阈值选取[J].智能系统学报,2015,10(01):103.[doi:10.3969/j.issn.1673-4785.201403040]
 WU Yiquan, WANG Kai, CAO Pengxiang. Two-dimensional asymmetric Tsallis cross entropy image threshold selection using bee colony optimization[J]. CAAI Transactions on Intelligent Systems, 2015, 10(1): 103. [doi:10.3969/j.issn.1673-4785.201403040]
[10]龙鹏,鲁华祥.方差不对称先验信息引导的全局阈值分割方法[J].智能系统学报,2015,10(5):663.[doi:10.11992/tis.201412022]
 LONG Peng, LU Huaxiang. Global threshold segmentation technique guided by prior knowledge with asymmetric variance[J]. CAAI Transactions on Intelligent Systems, 2015, 10(5): 663. [doi:10.11992/tis.201412022]

Memo:
Received: 2018-11-11.
Foundation item: Postgraduate Research and Practice Innovation Program of Jiangsu Province (KYCX18_1872).
About the authors: HE Ruibo, master's student; main research interests: artificial intelligence and digital image processing. DI Lan, associate professor; member of the Granular Computing and Knowledge Discovery Technical Committee of the Chinese Association for Artificial Intelligence; supported by Jiangsu Province's "Six Talent Peaks" program; main research interests: digital image processing and computer simulation; in recent years has led or participated in 7 national and provincial/ministerial research projects, led 4 university-level projects and nearly 20 industry cooperation projects, and received 1 provincial natural science academic award and 3 industry-federation technology awards; has published more than 50 academic papers. LIANG Jiuzhen, professor, PhD; member of the Multimedia Technical Committee of the China Computer Federation and council member of the Jiangsu Association of Artificial Intelligence; main research interests: computer vision and digital image processing; has led more than 10 research projects and received the Zhejiang Province Young Talent Award; holds 57 patents and 7 software copyrights; has published more than 160 academic papers and authored 4 textbooks and monographs.
Corresponding author: DI Lan. E-mail: dilan@jiangnan.edu.cn
Last Update: 2020-12-25