[1]周强,陈军,陶卿.基于L1-mask约束的对抗攻击优化方法[J].智能系统学报,2025,20(3):594-604.[doi:10.11992/tis.202405037]
 ZHOU Qiang,CHEN Jun,TAO Qing.Adversarial attack optimization method based on L1-mask constraint[J].CAAI Transactions on Intelligent Systems,2025,20(3):594-604.[doi:10.11992/tis.202405037]

Adversarial attack optimization method based on L1-mask constraint

参考文献/References:
[1] 鲁思迪, 何元恺, 施巍松. 车计算: 自动驾驶时代的新型计算范式[J]. 计算机研究与发展, 2025, 62(1): 2-21.
LU Sidi, HE Yuankai, SHI Weisong. Vehicle computing: an emerging computing paradigm for the autonomous driving era[J]. Journal of computer research and development, 2025, 62(1): 2-21.
[2] 樊琳, 龚勋, 郑岑洋. 基于文本引导下的多模态医学图像分析算法[J]. 电子学报, 2024, 52(7): 2341-2355.
FAN Lin, GONG Xun, ZHENG Cenyang. A multi-modal medical image analysis algorithm based on text guidance[J]. Acta electronica sinica, 2024, 52(7): 2341-2355.
[3] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[C]//International Conference on Learning Representations. Washington DC: ICLR, 2014.
[4] KURAKIN A, GOODFELLOW I J, BENGIO S. Adversarial examples in the physical world[M]//Artificial Intelligence Safety and Security. First edition. Boca Raton: Chapman and Hall/CRC, 2018: 99-112.
[5] 纪守领, 杜天宇, 邓水光, 等. 深度学习模型鲁棒性研究综述[J]. 计算机学报, 2022, 45(1): 190-206.
JI Shouling, DU Tianyu, DENG Shuiguang, et al. Robustness certification research on deep learning models: a survey[J]. Chinese journal of computers, 2022, 45(1): 190-206.
[6] DONOHO D L. Compressed sensing[J]. IEEE transactions on information theory, 2006, 52(4): 1289-1306.
[7] CANDES E J, WAKIN M B. An introduction to compressive sampling[J]. IEEE signal processing magazine, 2008, 25(2): 21-30.
[8] BAEHRENS D, SCHROETER T, HARMELING S, et al. How to explain individual classification decisions[EB/OL]. (2019-12-06)[2024-01-01]. https://arxiv.org/abs/0912.1128v1.
[9] DOSHI-VELEZ F, KIM B. Towards a rigorous science of interpretable machine learning[EB/OL]. (2017-03-02)[2024-01-01]. https://arxiv.org/abs/1702.08608v2.
[10] SIMONYAN K, VEDALDI A, ZISSERMAN A. Deep inside convolutional networks: visualising image classification models and saliency maps[EB/OL]. (2013-12-20)[2024-01-01]. https://arxiv.org/abs/1312.6034v2.
[11] SMILKOV D, THORAT N, KIM B, et al. SmoothGrad: removing noise by adding noise[EB/OL]. (2017-06-12)[2024-01-01]. https://arxiv.org/abs/1706.03825v1.
[12] DONG Yinpeng, LIAO Fangzhou, PANG Tianyu, et al. Boosting adversarial attacks with momentum[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 9185-9193.
[13] LIN Jiadong, SONG Chuanbiao, HE Kun, et al. Nesterov accelerated gradient and scale invariance for adversarial attacks[EB/OL]. (2020-02-08)[2024-01-01]. https://arxiv.org/abs/1908.06281.
[14] DONG Yinpeng, PANG Tianyu, SU Hang, et al. Evading defenses to transferable adversarial examples by translation-invariant attacks[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 4307-4316.
[15] XIE Cihang, ZHANG Zhishuai, ZHOU Yuyin, et al. Improving transferability of adversarial examples with input diversity[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 2725-2734.
[16] GAO Lianli, ZHANG Qilong, SONG Jingkuan, et al. Patch-wise attack for fooling deep neural network[C]//Computer Vision–ECCV 2020. Cham: Springer International Publishing, 2020: 307-322.
[17] MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[EB/OL]. (2019-09-04)[2024-01-01]. https://arxiv.org/abs/1706.06083.
[18] CROCE F, HEIN M. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks[EB/OL]. (2020-08-04)[2024-01-01]. https://arxiv.org/abs/2003.01690v2.
[19] CROCE F, HEIN M. Minimally distorted adversarial examples with a fast adaptive boundary attack[EB/OL]. (2020-07-20)[2024-01-01]. https://arxiv.org/abs/1907.02044.
[20] 陶卿, 高乾坤, 姜纪远, 等. 稀疏学习优化问题的求解综述[J]. 软件学报, 2013, 24(11): 2498-2507.
TAO Qing, GAO Qiankun, JIANG Jiyuan, et al. Survey of solving the optimization problems for sparse learning[J]. Journal of software, 2013, 24(11): 2498-2507.
[21] ANDRIUSHCHENKO M, CROCE F, FLAMMARION N, et al. Square attack: a query-efficient black-box adversarial attack via random search[M]//Computer Vision–ECCV 2020. Cham: Springer International Publishing, 2020: 484-501.
[22] CROCE F, HEIN M. Mind the box: l1-APGD for sparse adversarial attacks on image classifiers[EB/OL]. (2023-11-24)[2024-01-01]. https://arxiv.org/abs/2103.01208v3.
[23] CHEN Pinyu, SHARMA Y, ZHANG Huan, et al. EAD: elastic-net attacks to deep neural networks via adversarial examples[J]. Proceedings of the AAAI conference on artificial intelligence, 2018, 32(1): 10-17.
[24] SU Jiawei, VARGAS D V, SAKURAI K. One pixel attack for fooling deep neural networks[J]. IEEE transactions on evolutionary computation, 2019, 23(5): 828-841.
[25] MODAS A, MOOSAVI-DEZFOOLI S M, FROSSARD P. SparseFool: a few pixels make a big difference[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 9079-9088.
[26] POMPONI J, SCARDAPANE S, UNCINI A. Pixle: a fast and effective black-box attack based on rearranging pixels[C]//2022 International Joint Conference on Neural Networks. Padua: IEEE, 2022: 1-7.
[27] PAPERNOT N, MCDANIEL P, JHA S, et al. The limitations of deep learning in adversarial settings[C]//2016 IEEE European Symposium on Security and Privacy. Saarbruecken: IEEE, 2016: 372-387.
[28] DUCHI J, SHALEV-SHWARTZ S, SINGER Y, et al. Efficient projections onto the l1-ball for learning in high dimensions[C]//Proceedings of the 25th International Conference on Machine Learning. Helsinki: ACM, 2008: 272-279.
[29] SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the inception architecture for computer vision[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 2818-2826.
[30] SZEGEDY C, LIU Wei, JIA Yangqing, et al. Going deeper with convolutions[C]//2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston: IEEE, 2015: 1-9.
[31] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 770-778.
[32] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL]. (2015-04-10)[2024-01-01]. https://arxiv.org/abs/1409.1556v6.
[33] SANDLER M, HOWARD A, ZHU Menglong, et al. MobileNetV2: inverted residuals and linear bottlenecks[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 4510-4520.
[34] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16×16 words: transformers for image recognition at scale[C]//International Conference on Learning Representations. Washington DC: ICLR, 2021.
[35] PASZKE A, GROSS S, MASSA F, et al. PyTorch: an imperative style, high-performance deep learning library[EB/OL]. (2019-12-03)[2024-01-01]. https://arxiv.org/abs/1912.01703.
[36] KIM H. Torchattacks: a PyTorch repository for adversarial attacks[EB/OL]. (2021-02-19)[2024-01-01]. https://arxiv.org/abs/2010.01950.
[37] HEUSEL M, RAMSAUER H, UNTERTHINER T, et al. GANs trained by a two time-scale update rule converge to a local Nash equilibrium[C]//Advances in Neural Information Processing Systems. Long Beach: NIPS, 2017.
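The efficient Euclidean projection onto the l1-ball of Duchi et al. [28] is the building block behind the L1-constrained attack optimization this article studies. The following is a minimal NumPy sketch of that projection (the function name and interface are illustrative, not code from the article):

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of vector v onto the l1-ball of the given radius
    (Duchi et al., ICML 2008)."""
    u = np.abs(v)
    if u.sum() <= radius:
        return v.copy()                  # already inside the ball
    s = np.sort(u)[::-1]                 # magnitudes sorted in descending order
    cssv = np.cumsum(s)                  # cumulative sums of sorted magnitudes
    k = np.arange(1, len(s) + 1)
    rho = np.nonzero(s * k > cssv - radius)[0][-1]   # largest index kept active
    theta = (cssv[rho] - radius) / (rho + 1.0)       # soft-threshold level
    return np.sign(v) * np.maximum(u - theta, 0.0)   # shrink toward zero

# Example: projecting [3, -1] onto the unit l1-ball zeroes the small
# coordinate and shrinks the large one, yielding a sparse result [1, 0].
print(project_l1_ball(np.array([3.0, -1.0]), radius=1.0))
```

The soft-thresholding step is what makes L1-constrained perturbations sparse: coordinates with small gradient magnitude are driven exactly to zero, which is the sparsity mechanism exploited by the L1-based attacks surveyed in references [22-23].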

备注/Memo

Received: 2024-05-27.
Foundation item: National Natural Science Foundation of China (62076252).
About the authors: ZHOU Qiang, master's student; his research interests include machine learning and mathematical optimization. E-mail: 1071391319@qq.com. CHEN Jun, master's student; his research interests include machine learning and mathematical optimization. E-mail: 1358749376@qq.com. TAO Qing, PhD, professor, doctoral supervisor, and senior member of CCF; his research interests include machine learning, pattern recognition, and applied mathematics. E-mail: taoqing@gmail.com.
Corresponding author: TAO Qing. E-mail: taoqing@gmail.com.
