TAO Yan, ZHANG Hui, HUANG Zhihong, et al. Accurate identification of thermal faults for typical components of distribution networks[J]. CAAI Transactions on Intelligent Systems, 2025, 20(2): 506-515. [doi:10.11992/tis.202311035]

Accurate identification of thermal faults for typical components of distribution networks

References:
[1] TUBALLA M L, ABUNDO M L. A review of the development of Smart Grid technologies[J]. Renewable and sustainable energy reviews, 2016, 59: 710-725.
[2] ZHAO Zhenbing, FENG Shuo, ZHAO Wenqing, et al. Thermal image detection method for substation equipment by incorporating knowledge migration and improved YOLOv6[J]. CAAI transactions on intelligent systems, 2023, 18(6): 1213-1222.
[3] YIN Danyan, WU Yiquan. The detection of a small infrared target based on gray prediction and chaotic PSO[J]. CAAI transactions on intelligent systems, 2011, 6(2): 126-131.
[4] LI Yang, JIAO Shuhong, SUN Xintong. Fusion of visual and infrared images based on IHS and wavelet transforms[J]. CAAI transactions on intelligent systems, 2012, 7(6): 554-559.
[5] QU Haicheng, WANG Yuping, XIE Mengting, et al. Infrared and visible image fusion combined with brightness perception and dense convolution[J]. CAAI transactions on intelligent systems, 2022, 17(3): 643-652.
[6] LI He, LIU Lei, HUANG Wei, et al. An improved fusion algorithm for infrared and visible images based on multi-scale transform[J]. Infrared physics & technology, 2016, 74: 28-37.
[7] CVEJIC N, BULL D, CANAGARAJAH N. Region-based multimodal image fusion using ICA bases[J]. IEEE sensors journal, 2007, 7(5): 743-751.
[8] MOREL J M, YU Guoshen. ASIFT: a new framework for fully affine invariant image comparison[J]. SIAM journal on imaging sciences, 2009, 2(2): 438-469.
[9] SARVAIYA J N, PATNAIK S, BOMBAYWALA S. Image registration by template matching using normalized cross-correlation[C]//2009 International Conference on Advances in Computing, Control, and Telecommunication Technologies. Bangalore: IEEE, 2009: 819-822.
[10] WANG Di, LIU Jinyuan, FAN Xin, et al. Unsupervised misaligned infrared and visible image fusion via cross-modality image generation and registration[C]//Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence. California: IJCAI Organization, 2022: 3508-3515.
[11] GONG Rui, LI Wen, CHEN Yuhua, et al. DLOW: domain flow for adaptation and generalization[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 2472-2481.
[12] YUE Xiangyu, ZHANG Yang, ZHAO Sicheng, et al. Domain randomization and pyramid consistency: simulation-to-real generalization without accessing target domain data[C]//2019 IEEE/CVF International Conference on Computer Vision. Seoul: IEEE, 2019: 2100-2110.
[13] RAHMAN M M, FOOKES C, BAKTASHMOTLAGH M, et al. Correlation-aware adversarial domain adaptation and generalization[J]. Pattern recognition, 2020, 100: 107124.
[14] ZHANG Jian, QI Lei, SHI Yinghuan, et al. Generalizable semantic segmentation via model-agnostic learning and target-specific normalization[J]. Pattern recognition, 2022, 122: 108292.
[15] ZHU Junyan, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//2017 IEEE International Conference on Computer Vision. Venice: IEEE, 2017: 2242-2251.
[16] LUO Yawei, ZHENG Liang, GUAN Tao, et al. Taking a closer look at domain shift: category-level adversaries for semantics consistent domain adaptation[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 2502-2511.
[17] ZHAO Sicheng, LI Bo, YUE Xiangyu, et al. Multi-source domain adaptation for semantic segmentation[C]//33rd Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2019: 1-14.
[18] ZHANG Yifan, REN Weiqiang, ZHANG Zhang, et al. Focal and efficient IOU loss for accurate bounding box regression[J]. Neurocomputing, 2022, 506: 146-157.
[19] GEVORGYAN Z. SIoU loss: more powerful learning for bounding box regression[EB/OL]. (2022-05-25)[2023-11-23]. https://arxiv.org/abs/2205.12740.
[20] TONG Zanjia, CHEN Yuhang, XU Zewei, et al. Wise-IoU: bounding box regression loss with dynamic focusing mechanism[EB/OL]. (2023-01-24)[2023-11-23]. https://arxiv.org/abs/2301.10051.
[21] REDMON J, FARHADI A. YOLOv3: an incremental improvement[EB/OL]. (2018-04-08)[2023-11-23]. https://arxiv.org/abs/1804.02767.
[22] BOCHKOVSKIY A, WANG Chien-Yao, LIAO Hong-Yuan M. YOLOv4: optimal speed and accuracy of object detection[EB/OL]. (2020-04-23)[2023-11-23]. https://arxiv.org/abs/2004.10934.
[23] WANG Chien-Yao, BOCHKOVSKIY A, LIAO Hong-Yuan M. Scaled-YOLOv4: scaling cross stage partial network[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville: IEEE, 2021: 13029-13038.
[24] YU Jiahui, JIANG Yuning, WANG Zhangyang, et al. UnitBox: an advanced object detection network[C]//Proceedings of the 24th ACM International Conference on Multimedia. Amsterdam: ACM, 2016: 516-520.
[25] ROBBINS H, MONRO S. A stochastic approximation method[J]. The annals of mathematical statistics, 1951, 22(3): 400-407.
[26] REN Shaoqing, HE Kaiming, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE transactions on pattern analysis and machine intelligence, 2017, 39(6): 1137-1149.
[27] LI Xiang, WANG Wenhai, HU Xiaolin, et al. Generalized focal loss V2: learning reliable localization quality estimation for dense object detection[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville: IEEE, 2021: 11627-11636.

Memo

Received: 2023-11-23.
Funding: Major Research Plan of the National Natural Science Foundation of China (92148204); National Natural Science Foundation of China (62027810, 61971071); Science and Technology Project of State Grid Hunan Electric Power Co., Ltd. (5216A522001Y); Science and Technology Innovation Leading Talent Project of Hunan Province (2022RC3063); Hunan Provincial Science Fund for Distinguished Young Scholars (2021JJ10025); Key Research and Development Program of Hunan Province (2021GK4011, 2022GK2011); Postgraduate Research Innovation Project of Changsha University of Science and Technology (CXCLY2022088); Postgraduate Research Innovation Project of Hunan Province (CX20220923).
About the authors: TAO Yan, master's degree candidate. His research interests include computer vision and power equipment inspection and maintenance. E-mail: taoyan_1999@16.com. ZHANG Hui, professor, doctoral supervisor, and executive vice dean of the School of Robotics, Hunan University. His research interests include computer vision. He has led more than 20 projects, including a major project of the Science and Technology Innovation 2030 "New Generation Artificial Intelligence" program, a key project of the National Natural Science Foundation of China Major Research Plan on co-existing, co-operative and co-cognitive robots, subprojects of the National Key Research and Development Program, and subprojects of the National Science and Technology Support Program. He has received eight provincial- and ministerial-level first prizes for science and technology and the Grand Prize of the 13th Hunan Provincial Teaching Achievement Award in 2022, holds 38 granted invention patents, and has published more than 50 academic papers. E-mail: zhanghui1983@hnu.edu.cn. HUANG Zhihong, senior engineer, Ph.D. His research interests include artificial intelligence and computer vision for electric power systems. He has led two science and technology projects and three digitalization projects of State Grid Hunan Electric Power Co., Ltd., and the related results received two Science and Technology Progress Awards from the company. He has applied for or been granted 13 invention patents and has published 21 academic papers. E-mail: zhihong_huang111@16.com.
Corresponding author: ZHANG Hui. E-mail: zhanghui1983@hnu.edu.cn.

Last Update: 2025-03-05