Defending Deep Learning Models Against Adversarial Attacks
Abstract
Deep learning (DL) is used globally in almost every sector of technology and society. Despite its huge success, DL models and applications are susceptible to adversarial attacks, which compromise the accuracy and integrity of these models. Many state-of-the-art models are vulnerable to well-crafted adversarial examples: perturbed versions of clean data with a small amount of noise added, imperceptible to the human eye, that can quite easily fool the targeted model. This paper introduces the six most effective gradient-based adversarial attacks on the ResNet image recognition model and demonstrates the limitations of the traditional adversarial retraining technique. The authors then present a novel ensemble defense strategy based on adversarial retraining. The proposed method withstands all six adversarial attacks on the CIFAR-10 dataset with accuracy greater than 89.31% and as high as 96.24%. The authors believe the design methodologies and experiments demonstrated are widely applicable to other domains of machine learning, DL, and computational intelligence security.
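The abstract does not detail the six attacks or the ensemble defense, so as a minimal illustration of the gradient-based attack family and adversarial retraining it refers to, here is a sketch in PyTorch using FGSM (the Fast Gradient Sign Method), a representative gradient-based attack. The function names, the epsilon budget of 8/255, and the clean-plus-adversarial loss are assumptions for illustration only, not the authors' method.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """FGSM: one representative gradient-based attack (illustrative choice;
    not necessarily one of the paper's six attacks).

    Perturbs clean inputs x in the direction of the sign of the loss
    gradient, bounded by epsilon so the change stays visually imperceptible.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the gradient sign, then clamp back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def adversarial_retraining_step(model, optimizer, x, y, epsilon=8 / 255):
    """One adversarial-retraining step on a mixed clean/adversarial loss
    (a common formulation; the paper's exact training recipe is not given).
    """
    model.train()
    # Craft adversarial versions of the current batch with the live model.
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()  # discard gradients accumulated by the attack
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, `model` could be any CIFAR-10 classifier, e.g. a `torchvision.models.resnet18` with its final layer resized to 10 classes; an ensemble defense in the paper's spirit would presumably train several such models on adversarial examples and combine their predictions.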