[1]刘威,郭直清,刘光伟,等.融合振幅随机补偿与步长演变机制的改进原子搜索优化算法[J].智能系统学报,2022,17(3):602-616.[doi:10.11992/tis.202103033]
LIU Wei,GUO Zhiqing,LIU Guangwei,et al.Improved atom search optimization by combining amplitude random compensation and step size evolution mechanism[J].CAAI Transactions on Intelligent Systems,2022,17(3):602-616.[doi:10.11992/tis.202103033]
CAAI Transactions on Intelligent Systems (《智能系统学报》) [ISSN 1673-4785/CN 23-1538/TP]
Volume: 17
Issue: No. 3, 2022
Pages: 602-616
Section: Academic papers - Fundamentals of artificial intelligence
Publication date: 2022-05-05
- Title:
Improved atom search optimization by combining amplitude random compensation and step size evolution mechanism
- Author(s):
LIU Wei (刘威)1,2,3, GUO Zhiqing (郭直清)1,2,3, LIU Guangwei (刘光伟)4, JIN Bao (靳宝)1,2,3, WANG Dong (王东)4
1. College of Science, Liaoning Technical University, Fuxin 123000, China;
2. Institute of Intelligent Engineering and Mathematics, Liaoning Technical University, Fuxin 123000, China;
3. Institute of Mathematics and Systems Science, Liaoning Technical University, Fuxin 123000, China;
4. College of Mines, Liaoning Technical University, Fuxin 123000, China
- Keywords:
meta-heuristic algorithms; atom search optimization; tent chaos optimization; amplitude random compensation; step size evolution mechanism; parameter optimization of BP neural network; classification; machine learning
- CLC number:
TP18
- DOI:
10.11992/tis.202103033
- Abstract:
To address the weak optimization accuracy of the atom search optimization (ASO) algorithm and its tendency to fall into local extrema, this paper proposes an improved atom search optimization algorithm (IASO) that combines chaos optimization, amplitude random compensation, and a step size evolution mechanism from the perspectives of population diversity, parameter adaptability, and position dynamics, and successfully applies it to classification tasks. First, a tent chaotic map is introduced to make the distribution of the atomic population in the search space more uniform. Second, an amplitude function is constructed to randomly perturb the algorithm parameters, and a step size evolution factor is added to the atomic position update, which strengthens the global search ability and convergence of the algorithm. Finally, the improved algorithm is applied to parameter optimization of the error back-propagation (BP) neural network. Numerical experiments on 20 benchmark functions against six metaheuristic algorithms show that IASO not only achieves good optimization performance on multidimensional benchmark functions but also attains higher classification accuracy than the two comparison algorithms when optimizing the BP neural network parameters.
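The abstract's first step, tent-chaos initialization, relies on the standard tent map, which is well defined independently of the paper. Below is a minimal Python sketch of how such an initialization could look; the map parameter a = 0.7, the reseeding guard, and the population size and bounds in the example are illustrative assumptions rather than settings taken from the paper.

```python
import numpy as np

def tent_chaos_sequence(length, x0, a=0.7):
    """Iterate the tent map x_{k+1} = x_k/a if x_k < a else (1-x_k)/(1-a),
    returning `length` values in (0, 1)."""
    seq = np.empty(length)
    x = x0
    for k in range(length):
        x = x / a if x < a else (1.0 - x) / (1.0 - a)
        if not 0.0 < x < 1.0:          # guard against degenerate fixed points
            x = np.random.uniform(0.05, 0.95)
        seq[k] = x
    return seq

def tent_init_population(pop_size, dim, lower, upper, rng=np.random):
    """Map one tent-chaos sequence per atom into the box [lower, upper]^dim.
    Chaotic sequences spread points over (0, 1) more evenly than plain random
    draws, which is the motivation given in the abstract."""
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        chaos = tent_chaos_sequence(dim, x0=rng.uniform(0.05, 0.95))
        pop[i] = lower + chaos * (upper - lower)
    return pop

# Example: 30 atoms in a 10-dimensional search space over [-100, 100]^10
atoms = tent_init_population(30, 10, -100.0, 100.0)
print(atoms.shape)  # (30, 10)
```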
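The abstract's final step, tuning BP neural network parameters with the improved optimizer, follows the usual pattern of encoding all weights and biases as one real-valued vector and scoring it by classification error. The sketch below shows only that encoding and fitness part for a single hidden layer; the network size, tanh activation, search bounds, and the `iaso_minimize` call are hypothetical placeholders, not the paper's exact formulation.

```python
import numpy as np

def unpack(theta, n_in, n_hidden, n_out):
    """Split a flat parameter vector into BP-network weight matrices and biases."""
    i = 0
    W1 = theta[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = theta[i:i + n_hidden]; i += n_hidden
    W2 = theta[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = theta[i:i + n_out]
    return W1, b1, W2, b2

def classification_error(theta, X, y, n_hidden):
    """Fitness function: forward pass of a one-hidden-layer network,
    returning the misclassification rate on (X, y)."""
    n_in, n_out = X.shape[1], int(y.max()) + 1
    W1, b1, W2, b2 = unpack(theta, n_in, n_hidden, n_out)
    H = np.tanh(X @ W1 + b1)      # hidden layer
    scores = H @ W2 + b2          # output layer (class scores)
    return np.mean(scores.argmax(axis=1) != y)

# The optimizer searches a space of dimension
#   dim = n_in*n_hidden + n_hidden + n_hidden*n_out + n_out
# and minimizes f(theta) = classification_error(theta, X_train, y_train, n_hidden),
# e.g. best_theta = iaso_minimize(f, dim, lower=-1.0, upper=1.0)  # hypothetical API
```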
Memo
Received: 2021-03-24.
Foundation items: National Natural Science Foundation of China (51974144, 51874160); Education Department of Liaoning Province Project (LJKZ0340); Discipline Innovation Team Project of Liaoning Technical University (LNTU20TD-01, LNTU20TD-07).
About the authors: LIU Wei, associate professor, Ph.D., member of the Chinese Association for Artificial Intelligence (CAAI) and the China Computer Federation (CCF); main research interests: deep neural networks, machine learning, and mining systems engineering. GUO Zhiqing, master's student; main research interests: machine learning and optimization algorithms, mathematical modeling, and data analysis. LIU Guangwei, professor, doctoral supervisor, Ph.D.; main research interests: open-pit mining design theory and mining systems engineering.
Corresponding author: LIU Wei. E-mail: lv8218218@126.com