Generative Adversarial Perturbations
Research output: Contribution to journal › Conference article › Research › peer-review
In this paper, we propose novel generative models for creating adversarial examples: slightly perturbed images that resemble natural images but are maliciously crafted to fool pre-trained models. We present trainable deep neural networks that transform images into adversarial perturbations. Our proposed models can produce image-agnostic and image-dependent perturbations for targeted and non-targeted attacks. We also demonstrate that similar architectures can achieve impressive results in fooling both classification and semantic segmentation models, obviating the need for hand-crafting attack methods for each task. Through extensive experiments on challenging high-resolution datasets such as ImageNet and Cityscapes, we show that our perturbations achieve high fooling rates with small perturbation norms. Moreover, our attacks are considerably faster than current iterative methods at inference time.
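The abstract's core idea, a trainable network that emits a norm-bounded additive perturbation, lends itself to a short sketch. The following is a minimal, illustrative PyTorch version of an image-dependent, non-targeted variant; the tiny architecture, the `eps` budget of 10/255, and the names `PerturbationGenerator` and `non_targeted_loss` are assumptions for exposition, not the authors' exact model or training recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationGenerator(nn.Module):
    """Toy fully convolutional generator mapping an input image to an
    additive perturbation with a hard L-infinity bound of `eps`."""
    def __init__(self, channels: int = 3, eps: float = 10 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # tanh bounds the raw output in [-1, 1]; scaling by eps enforces
        # ||delta||_inf <= eps, and the sum is clipped to valid pixel range.
        delta = torch.tanh(self.net(x)) * self.eps
        return torch.clamp(x + delta, 0.0, 1.0)

def non_targeted_loss(classifier: nn.Module, generator: nn.Module,
                      x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Non-targeted attack objective (sketch): push the frozen classifier's
    prediction on the perturbed image away from the true label y by
    minimizing the negative cross-entropy."""
    logits = classifier(generator(x))
    return -F.cross_entropy(logits, y)
```

For the image-agnostic (universal) setting described in the abstract, the same generator can instead consume a fixed noise tensor, so a single learned perturbation is applied to every input; a targeted attack would minimize the cross-entropy toward a chosen target label rather than maximizing it away from the true one. Once trained, producing a perturbation costs one forward pass, which is the source of the speed advantage over iterative methods noted above.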
| Original language | English |
| --- | --- |
| Journal | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition |
| Pages (from-to) | 4422-4431 |
| Number of pages | 10 |
| ISSN | 1063-6919 |
| DOIs | |
| Publication status | Published - 14 Dec 2018 |
| Externally published | Yes |
| Event | 31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018 - Salt Lake City, United States. Duration: 18 Jun 2018 → 22 Jun 2018 |
Conference
| Conference | 31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018 |
| --- | --- |
| Country | United States |
| City | Salt Lake City |
| Period | 18/06/2018 → 22/06/2018 |
Bibliographical note
Publisher Copyright:
© 2018 IEEE.