[1]谌贵辉,何龙,李忠兵,等.卷积神经网络的贴片电阻识别应用[J].智能系统学报,2019,14(02):263-272.[doi:10.11992/tis.201710005]
 CHEN Guihui,HE Long,LI Zhongbing,et al.Chip resistance recognition based on convolution neural network[J].CAAI Transactions on Intelligent Systems,2019,14(02):263-272.[doi:10.11992/tis.201710005]

《智能系统学报》[ISSN:1673-4785/CN:23-1538/TP]

Volume:
Vol. 14
Issue:
2019, No. 02
Pages:
263-272
Publication date:
2019-03-05

Article Info

Title:
Chip resistance recognition based on convolution neural network
Authors:
谌贵辉 何龙 李忠兵 亢宇欣 江枭宇
School of Electrical Information, Southwest Petroleum University, Chengdu 610500, Sichuan, China
Author(s):
CHEN Guihui HE Long LI Zhongbing KANG Yuxin JIANG Xiaoyu
School of Electrical Information, Southwest Petroleum University, Chengdu 610500, China
Keywords:
chip resistance recognition; convolution neural network; AlexNet model; GoogLeNet model; ResNet model
CLC number:
TP391
DOI:
10.11992/tis.201710005
Abstract:
Chip resistors are widely used in today's intelligent electronic devices because of their small size and stable performance. To guarantee factory quality, chip resistors must be inspected for defects, polarity orientation, front/back orientation, and type; at present this inspection relies largely on human visual checking, which is inefficient, error-prone, and costly. Considering the limitations of traditional image-recognition methods and the recent achievements of convolutional neural networks (CNNs) in image recognition, this paper designs three CNNs of appropriate depth, based on the ideas of the AlexNet, GoogLeNet, and ResNet models, each with about 4×10^6 (4 million) trainable parameters. These designs overcome the problems that prevailing CNN models exhibit in chip-resistor recognition, namely recognition speed too low to meet real-time requirements and poor generalization accuracy, both caused by excessive trainable parameters and overly deep architectures. Experiments show that the recognition accuracy of all three models exceeds 90%, the highest accuracy reaches 95%, and the recognition speed reaches 0.203 s per image (256×256 pixels, Core i5). The three networks can therefore be chosen according to practical requirements; they are highly feasible and replicable in practice and can substantially improve production efficiency and product quality for chip-resistor manufacturers.
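The abstract's central design constraint, a compact network of roughly 4×10^6 trainable parameters rather than the tens of millions in stock AlexNet, can be illustrated with a short parameter-count sketch. This is not the authors' code; all layer shapes below are hypothetical, chosen only to show how a small four-block CNN on 256×256 inputs can land near that budget.

```python
# Minimal sketch (hypothetical layer shapes, not the paper's architecture):
# counting trainable parameters of a compact CNN against a ~4x10^6 budget.

def conv2d_params(in_ch, out_ch, k):
    # each of the out_ch filters has k*k*in_ch weights plus one bias
    return (k * k * in_ch + 1) * out_ch

def dense_params(in_dim, out_dim):
    # fully connected layer: weight matrix plus one bias per output unit
    return in_dim * out_dim + out_dim

# Four conv blocks, each followed by 2x2 max pooling, on a 256x256x3 input;
# four halvings shrink the spatial size 256 -> 128 -> 64 -> 32 -> 16.
conv_stack = (
    conv2d_params(3, 32, 5)        # conv1
    + conv2d_params(32, 64, 3)     # conv2
    + conv2d_params(64, 128, 3)    # conv3
    + conv2d_params(128, 128, 3)   # conv4
)
head = (
    dense_params(16 * 16 * 128, 120)  # flatten 16x16x128 feature map
    + dense_params(120, 4)            # e.g. 4 output classes
)
total = conv_stack + head
print(f"total trainable parameters: {total:,}")  # -> 4,175,132
```

Note how the first fully connected layer dominates the count; the same effect pushes the original AlexNet past 60 million parameters, which is why compact designs shrink the classifier head rather than the convolutional stack.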

References:

[1] GIUNCHIGLIA F, YATSKEVICH M. Element level semantic matching. Technical Report DIT-04-035[R]. Trento, Italy:Information Engineering and Computer Science, 2004.
[2] OMACHI M, OMACHI S. Fast two-dimensional template matching with fixed aspect ratio based on polynomial approximation[C]//Proceedings of the 9th International Conference on Signal Processing. Beijing, China, 2008:757-760.
[3] JOLLIFFE I T. Principal component analysis[M]. New York, USA:Springer-Verlag, 1986.
[4] ZHANG Xingfu, REN Xiangmin. Two dimensional principal component analysis based independent component analysis for face recognition[C]//Proceedings of 2011 International Conference on Multimedia Technology. Hangzhou, China, 2011:934-936.
[5] PELLEGRINO F A, VANZELLA W, TORRE V. Edge detection revisited[J]. IEEE transactions on systems, man, and cybernetics, part b (cybernetics), 2004, 34(3):1500-1518.
[6] DUCOTTET C, FOURNEL T, BARAT C. Scale-adaptive detection and local characterization of edges based on wavelet transform[J]. Signal processing, 2004, 84(11):2115-2137.
[7] SMITH S M, BRADY J M. SUSAN-a new approach to low level image processing[J]. International journal of computer vision, 1997, 23(1):45-78.
[8] CORTES C, VAPNIK V. Support-vector networks[J]. Machine learning, 1995, 20(3):273-297.
[9] GUO H, GELFAND S B. Classification trees with neural network feature extraction[J]. IEEE transactions on neural networks, 1992, 3(6):923-933.
[10] FRIEDMAN N, GEIGER D, GOLDSZMIDT M. Bayesian network classifiers[J]. Machine learning, 1997, 29(2/3):131-163.
[11] 焦李成, 杨淑媛, 刘芳, 等. 神经网络七十年:回顾与展望[J]. 计算机学报, 2016, 39(8):1697-1717. JIAO Licheng, YANG Shuyuan, LIU Fang, et al. Seventy years beyond neural networks:retrospect and prospect[J]. Chinese journal of computers, 2016, 39(8):1697-1717.
[12] HINTON G E, SALAKHUTDINOV R R. Reducing the dimensionality of data with neural networks[J]. Science, 2006, 313(5786):504-507.
[13] ERHAN D, MANZAGOL P A, BENGIO Y, et al. The difficulty of training deep architectures and the effect of unsupervised pre-training[C]//Appearing in Proceedings of the 12th International Conference on Artificial Intelligence and Statistics. Florida, USA, 2009:153-160.
[14] LECUN Y, BENGIO Y. Convolutional networks for images, speech, and time series[M]//ARBIB M A. The Handbook of Brain Theory and Neural Networks. Cambridge, MA, USA:MIT Press, 1998.
[15] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]//Proceedings of the 25th International Conference on Neural Information Processing Systems. Lake Tahoe, NV, USA, 2012:1097-1105.
[16] SZEGEDY C, LIU Wei, JIA Yangqing, et al. Going deeper with convolutions[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA, USA, 2015:1-9.
[17] SZEGEDY C, IOFFE S, VANHOUCKE V, et al. Inception-v4, inception-resnet and the impact of residual connections on learning[J]. arXiv preprint arXiv:1602.07261, 2016.
[18] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA, 2016:770-778.
[19] IOFFE S, SZEGEDY C. Batch normalization:accelerating deep network training by reducing internal covariate shift[C]//Proceedings of the 32nd International Conference on Machine Learning. Lille, France, 2015:448-456.
[20] GLOROT X, BORDES A, BENGIO Y. Deep sparse rectifier neural networks[C]//Proceedings of the 14th International Conference on Artificial Intelligence and Statistics. Fort Lauderdale, FL, USA, 2011:315-323.


Memo:
Received: 2017-10-11.
Foundation items: Science and Technology Support Program of Sichuan Province (2016GZ0107); Key Project of the Education Department of Sichuan Province (16ZA0065); Nanchong Key Science and Technology Project (NC17SY4001).
About the authors: CHEN Guihui, male, born in 1971, professor. His main research interests are MEMS integrated devices and sensors, intelligent instruments, computer simulation technology, and image processing and pattern recognition. HE Long, male, born in 1991, master's student. His main research interests are intelligent control and pattern recognition. LI Zhongbing, male, born in 1987, PhD. His main research interests are image processing, precision instruments, and modern signal processing.
Corresponding author: HE Long. E-mail: 396024902@qq.com
Last Update: 2019-04-25