LaVAN: Localized and Visible Adversarial Noise

Research output: Contribution to journal › Conference article › peer-review

91 Scopus citations

Abstract

Most works on adversarial examples for deep-learning-based image classifiers use noise that, while small, covers the entire image. We explore the case where the noise is allowed to be visible but confined to a small, localized patch of the image, without covering any of the main object(s) in the image. We show that it is possible to generate localized adversarial noise that covers only 2% of the pixels in the image, none of them over the main object, that is transferable across images and locations, and that fools a state-of-the-art Inception v3 model with very high success rates.
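The attack the abstract describes can be illustrated as a minimal sketch: optimize the pixels of a small masked patch, by gradient ascent on a target-class logit, while leaving the rest of the image untouched. The sketch below uses NumPy with a toy random linear classifier standing in for Inception v3; the patch size, location, step size, iteration count, and target class are illustrative assumptions, not the paper's actual method or settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "classifier": a fixed random linear map from the flattened
# image to 10 class logits. (The paper attacks Inception v3; this linear
# model is only a placeholder so the optimization loop is self-contained.)
H = W = 32
num_classes = 10
weights = rng.standard_normal((num_classes, H * W)) * 0.01

def logits(img):
    return weights @ img.ravel()

# Confine the noise to a small localized patch: here a 5x5 corner block,
# 25 / 1024 ≈ 2.4% of the pixels, away from the image center where the
# main object would sit.
mask = np.zeros((H, W), dtype=bool)
mask[:5, :5] = True

image = rng.random((H, W))    # clean image, pixel values in [0, 1]
target = 7                    # arbitrary target class for illustration
patch = rng.random((H, W)) * mask

# Iterative sign-gradient ascent on the target-class logit, updating only
# the patch pixels. For a linear model, the gradient of logits[target]
# with respect to the image is simply the corresponding weight row.
grad = weights[target].reshape(H, W)
for _ in range(100):
    patch = np.clip(patch + 0.1 * np.sign(grad), 0.0, 1.0) * mask

# Compose the adversarial image: patch pixels replace the originals
# inside the mask; every pixel outside the mask is untouched.
adv = np.where(mask, patch, image)
```

For this linear toy model the loop drives each patch pixel to 0 or 1 according to the sign of its weight, so the target logit of `adv` strictly exceeds that of the clean image while 97.6% of the pixels are unchanged; against a real network the same loop would backpropagate through the model to obtain the gradient.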

Original language: English
Pages (from-to): 2507-2515
Number of pages: 9
Journal: Proceedings of Machine Learning Research
Volume: 80
State: Published - 2018
Event: 35th International Conference on Machine Learning, ICML 2018 - Stockholm, Sweden
Duration: 10 Jul 2018 - 15 Jul 2018

Bibliographical note

Publisher Copyright:
© 2018 by the author(s).
