Alzaidy, Sharoug and Binsalleeh, Hamad (2024) Adversarial Attacks with Defense Mechanisms on Convolutional Neural Networks and Recurrent Neural Networks for Malware Classification. Applied Sciences, 14 (4). p. 1673. ISSN 2076-3417
Abstract
Deep learning has been widely adopted for behavioral detection; in particular, deep learning models are used to detect and classify malware. Such models, however, are vulnerable to carefully crafted inputs that cause malicious files to be misclassified, and malware that evades detection can compromise Cyber-Physical Systems (CPS) with catastrophic consequences. This paper presents a method for classifying Windows Portable Executables (PEs) using Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). To generate adversarial examples of malware PEs, we conduct two white-box attacks: the Jacobian-based Saliency Map Attack (JSMA) and the Carlini and Wagner (C&W) attack. The adversarial payload was injected into the DOS header, and a new section was appended to the file, so that the functionality of the PE is preserved. The attacks evaded the CNN model at a 91% evasion rate and the RNN model at an 84.6% rate. Two defense mechanisms, defensive distillation and adversarial training, are examined in this study for countering the adversarial examples. Against JSMA, distillation and adversarial training yielded the largest reductions in evasion rate, of 48.1% and 41.49%, respectively; against C&W, the largest reductions were 48.1% and 49.9%, respectively.
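The abstract describes the JSMA attack only at a high level. For readers unfamiliar with it, the following minimal PyTorch sketch shows the core saliency-map step: compute the Jacobian of the logits with respect to the input, keep features that raise the target logit while lowering the competing logits, and nudge the most salient feature. This is an illustration under assumptions, not the authors' code; the classifier `model`, the normalized feature tensor `x`, and the perturbation size `theta` are hypothetical placeholders.

```python
# Minimal, illustrative JSMA-style step (sketch, not the paper's implementation).
import torch

def jsma_step(model, x, target_class, theta=0.1):
    """Perturb the single most salient feature of x toward target_class."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)                           # shape: (1, num_classes)
    target_logit = logits[0, target_class]
    other_logits = logits[0].sum() - target_logit
    # Jacobian rows: d(target)/dx and d(sum of other logits)/dx
    grad_target = torch.autograd.grad(target_logit, x, retain_graph=True)[0]
    grad_others = torch.autograd.grad(other_logits, x)[0]
    # JSMA saliency map: keep features that increase the target logit while
    # decreasing the competing logits; zero out everything else.
    mask = (grad_target > 0) & (grad_others < 0)
    saliency = torch.where(mask, grad_target * grad_others.abs(),
                           torch.zeros_like(grad_target))
    # Increase the most salient feature by theta, clamped to the valid range.
    flat = x.detach().flatten()
    idx = saliency.flatten().argmax()
    flat[idx] = (flat[idx] + theta).clamp(0.0, 1.0)
    return flat.reshape(x.shape)
```

In the paper's setting, such perturbations are confined to bytes that do not affect execution (the DOS header and an appended section), which is what preserves the PE's functionality.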
| Item Type: | Article |
| --- | --- |
| Subjects: | Euro Archives > Multidisciplinary |
| Depositing User: | Managing Editor |
| Date Deposited: | 20 Feb 2024 04:19 |
| Last Modified: | 20 Feb 2024 04:19 |
| URI: | http://publish7promo.com/id/eprint/4472 |