[1]ZHOU Qiang,CHEN Jun,TAO Qing.Adversarial attack optimization method based on L1-mask constraint[J].CAAI Transactions on Intelligent Systems,2025,20(3):594-604.[doi:10.11992/tis.202405037]
CAAI Transactions on Intelligent Systems [ISSN 1673-4785 / CN 23-1538/TP]
Volume: 20
Issue: 2025(3)
Pages: 594-604
Column: Academic Papers - Machine Learning
Publication date: 2025-05-05
- Title: Adversarial attack optimization method based on L1-mask constraint
- Author(s): ZHOU Qiang; CHEN Jun; TAO Qing
- Affiliation: Department of Information Engineering, PLA Army Academy of Artillery and Air Defense, Hefei 230031, China
- Keywords: adversarial attack; L1 norm; mask; saliency; imperceptibility; transferability; sparse; constraint
- CLC: TP181
- DOI: 10.11992/tis.202405037
- Abstract: Existing adversarial attack methods generally use the L∞ or L2 norm to measure perturbation distance, but their imperceptibility leaves room for improvement. Moreover, the L1 norm, a standard metric in sparse learning, has not been extensively studied as a means of improving the imperceptibility of adversarial examples. To address this gap, an adversarial attack method based on an L1-norm constraint is proposed; it concentrates the limited perturbation budget on the most important features by differentiating among features. Additionally, an L1-mask constraint method based on saliency analysis is proposed, which improves attack targeting by masking low-saliency features. These improvements enhance the imperceptibility of adversarial examples and reduce the risk of the adversarial examples overfitting the surrogate model, thereby improving the transferability of adversarial attacks. Experiments on the ImageNet-compatible dataset show that, at the same black-box attack success rate, the FID imperceptibility score of the L1-constrained attack is approximately 5.7% lower than that of the L∞-norm attack, and that of the L1-mask-constrained attack is approximately 9.5% lower.
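The two ingredients the abstract describes (an L1-norm budget on the perturbation and a saliency mask that zeroes out unimportant features) can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function names, the step size, the `keep_ratio` threshold, and the use of the sort-based L1-ball projection of Duchi et al. are all assumptions made here for concreteness.

```python
import numpy as np

def project_l1_ball(v, eps):
    """Euclidean projection of v onto the L1 ball of radius eps
    (sort-based method of Duchi et al.). Works for any array shape."""
    u = np.abs(v).ravel()
    if u.sum() <= eps:
        return v  # already inside the ball
    s = np.sort(u)[::-1]                     # magnitudes, descending
    cssv = np.cumsum(s) - eps
    idx = np.arange(1, len(s) + 1)
    rho = idx[s - cssv / idx > 0][-1]        # last index where threshold holds
    theta = cssv[rho - 1] / rho              # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def masked_l1_step(perturbation, grad, saliency, eps, step=0.01, keep_ratio=0.2):
    """One hypothetical attack iteration: take a sign-gradient ascent step,
    mask out low-saliency coordinates, then project onto the L1 ball so the
    perturbation budget concentrates on the most salient features."""
    # keep only the top keep_ratio fraction of features by saliency (assumed rule)
    mask = saliency >= np.quantile(saliency, 1.0 - keep_ratio)
    delta = (perturbation + step * np.sign(grad)) * mask
    return project_l1_ball(delta, eps)
```

The projection soft-thresholds all coordinates by a common level `theta`, so small components are driven exactly to zero; combined with the saliency mask, the resulting perturbation is sparse and confined to high-saliency features, which is the mechanism the abstract credits for better imperceptibility.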