Adversarial examples on discrete sequences for beating whole-binary malware detection

F. Kreuk, A. Barak, S. Aviv-Reuven, M. Baruch, B. Pinkas, J. Keshet

Research output: Working paper / Preprint

Abstract

In recent years, deep learning has shown performance breakthroughs in many applications, such as image detection, image segmentation, pose estimation, and speech recognition. It has also been applied successfully to malware detection. However, this comes with a major concern: deep networks have been found to be vulnerable to adversarial examples. Such attacks have proved highly effective, especially in the domains of images and speech, where small perturbations to the input signal do not change how humans perceive it but greatly affect the classification of the model under attack. Our goal is to modify a malicious binary so that it is detected as benign while preserving its original functionality. In contrast to images or speech, small modifications to the bytes of a binary lead to significant changes in its functionality. We introduce a novel approach to generating adversarial examples for attacking a whole-binary malware detector. We append to the binary file a small section containing a selected sequence of bytes that steers the network's prediction from malicious to benign with high confidence. We applied this approach to a CNN-based malware detection model and showed extremely high attack success rates.
Original language: English
Pages: 1-12
Number of pages: 12
State: Published - 13 Feb 2018

Publication series

Name: arXiv preprint arXiv:1802.
