CHEN Guihui, HE Long, LI Zhongbing, et al. Chip resistance recognition based on convolution neural network[J]. CAAI Transactions on Intelligent Systems, 2019, 14(2): 263-272. [doi:10.11992/tis.201710005]





Chip resistance recognition based on convolution neural network
CHEN Guihui, HE Long, LI Zhongbing, KANG Yuxin, JIANG Xiaoyu
School of Electrical Engineering and Information, Southwest Petroleum University, Chengdu 610500, China
Keywords: chip resistance recognition; convolutional neural network; AlexNet model; GoogLeNet model; ResNet model
Chip resistors are widely used in intelligent electronic devices because of their small size and stable performance. To guarantee factory quality, chip resistors must be inspected for defects, polarity orientation, front/back face, and type. At present this inspection largely relies on manual visual detection, which is inefficient, error-prone, and costly. Considering the limitations of traditional image recognition methods and the great achievements of convolutional neural networks (CNNs) in image recognition in recent years, this paper designs three CNNs of appropriate depth, each with about 4×10^6 (four million) trainable parameters, based on the ideas of the AlexNet, GoogLeNet, and ResNet models. These networks address the problems that mainstream CNN models face in chip-resistor recognition, namely recognition speed too slow for real-time requirements and low generalization accuracy, both caused by excessive trainable parameters and model depth. Experiments show that the recognition accuracy of all three models exceeds 90%, with the highest reaching 95%, at a recognition speed of 0.203 s per image (256×256 pixels, Intel Core i5). The three networks can therefore be selected according to specific practical requirements; they are highly feasible and replicable in practice and can significantly improve production efficiency and product quality for chip resistors.
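The abstract's parameter budget of roughly 4×10^6 trainable parameters can be verified with simple counting arithmetic. The sketch below uses a hypothetical convolutional stack, since the paper's exact layer configuration is not reproduced on this page; it only illustrates how a network of moderate depth lands near four million parameters.

```python
def conv_params(k, c_in, c_out):
    """A k x k convolution: k*k*c_in weights per output channel, plus one bias each."""
    return k * k * c_in * c_out + c_out

def fc_params(n_in, n_out):
    """A fully connected layer: weight matrix plus biases."""
    return n_in * n_out + n_out

# Hypothetical stack loosely in the AlexNet spirit (kernel, in-channels, out-channels);
# these layer sizes are illustrative assumptions, not the paper's actual design.
convs = [(5, 3, 32), (3, 32, 64), (3, 64, 128),
         (3, 128, 256), (3, 256, 512), (3, 512, 512)]
total = sum(conv_params(k, ci, co) for k, ci, co in convs)
total += fc_params(512, 10)  # global-average pooling, then a small classifier head
print(f"trainable parameters: {total:,}")  # roughly 3.9 million
```

Note that almost all of the budget sits in the last two convolutions, which is why trimming channel widths in the deepest layers is the usual lever for meeting a real-time parameter budget.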


[1] GIUNCHIGLIA F, YATSKEVICH M. Element level semantic matching. Technical Report DIT-04-035[R]. Trento, Italy:Information Engineering and Computer Science, 2004.
[2] OMACHI M, OMACHI S. Fast two-dimensional template matching with fixed aspect ratio based on polynomial approximation[C]//Proceedings of the 9th International Conference on Signal Processing. Beijing, China, 2008:757-760.
[3] JOLLIFFE I T. Principal component analysis[M]. New York, USA:Springer-Verlag, 1986.
[4] ZHANG Xingfu, REN Xiangmin. Two dimensional principal component analysis based independent component analysis for face recognition[C]//Proceedings of 2011 International Conference on Multimedia Technology. Hangzhou, China, 2011:934-936.
[5] PELLEGRINO F A, VANZELLA W, TORRE V. Edge detection revisited[J]. IEEE transactions on systems, man, and cybernetics, part b (cybernetics), 2004, 34(3):1500-1518.
[6] DUCOTTET C, FOURNEL T, BARAT C. Scale-adaptive detection and local characterization of edges based on wavelet transform[J]. Signal processing, 2004, 84(11):2115-2137.
[7] SMITH S M, BRADY J M. SUSAN-a new approach to low level image processing[J]. International journal of computer vision, 1997, 23(1):45-78.
[8] CORTES C, VAPNIK V. Support-vector networks[J]. Machine learning, 1995, 20(3):273-297.
[9] GUO H, GELFAND S B. Classification trees with neural network feature extraction[J]. IEEE transactions on neural networks, 1992, 3(6):923-933.
[10] GOLDSZMIDT M. Bayesian network classifiers[J]. Machine learning, 2011, 29(2/3):598-605.
[11] 焦李成, 杨淑媛, 刘芳, 等. 神经网络七十年:回顾与展望[J]. 计算机学报, 2016, 39(8):1697-1717 JIAO Licheng, YANG Shuyuan, LIU Fang, et al. Seventy years beyond neural networks:retrospect and prospect[J]. Chinese journal of computers, 2016, 39(8):1697-1717
[12] HINTON G E, SALAKHUTDINOV R R. Reducing the dimensionality of data with neural networks[J]. Science, 2006, 313(5786):504-507.
[13] ERHAN D, MANZAGOL P A, BENGIO Y, et al. The difficulty of training deep architectures and the effect of unsupervised pre-training[C]//Appearing in Proceedings of the 12th International Conference on Artificial Intelligence and Statistics. Florida, USA, 2009:153-160.
[14] LECUN Y, BENGIO Y. Convolutional networks for images, speech, and time series[M]//ARBIB M A. The Handbook of Brain Theory and Neural Networks. Cambridge, MA, USA:MIT Press, 1998.
[15] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]//Proceedings of the 25th International Conference on Neural Information Processing Systems. Lake Tahoe, NV, USA, 2012:1097-1105.
[16] SZEGEDY C, LIU Wei, JIA Yangqing, et al. Going deeper with convolutions[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA, USA, 2015:1-9.
[17] SZEGEDY C, IOFFE S, VANHOUCKE V, et al. Inception-v4, Inception-ResNet and the impact of residual connections on learning[J]. arXiv:1602.07261, 2016.
[18] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA, 2016:770-778.
[19] IOFFE S, SZEGEDY C. Batch normalization:accelerating deep network training by reducing internal covariate shift[C]//Proceedings of the 32nd International Conference on Machine Learning. Lille, France, 2015:448-456.
[20] GLOROT X, BORDES A, BENGIO Y. Deep sparse rectifier neural networks[C]//Proceedings of the 14th International Conference on Artificial Intelligence and Statistics. Fort Lauderdale, FL, USA, 2011:315-323.




Last Update: 2019-04-25