Robustness and Generalization via Generative Adversarial Training
Research output: Contribution to journal › Conference article › Research › peer-review
Standard
Robustness and Generalization via Generative Adversarial Training. / Belongie, Serge; Poursaeed, Omid; Jiang, Tianxing; Yang, Harry; Lim, Ser-Nam.
In: IEEE Xplore Digital Library, Vol. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 28.02.2022, p. 15691-15700.
RIS
TY - GEN
T1 - Robustness and Generalization via Generative Adversarial Training
AU - Belongie, Serge
AU - Poursaeed, Omid
AU - Jiang, Tianxing
AU - Yang, Harry
AU - Lim, Ser-Nam
PY - 2022/2/28
Y1 - 2022/2/28
N2 - While deep neural networks have achieved remarkable success in various computer vision tasks, they often fail to generalize to subtle variations of input images. Several defenses have been proposed to improve robustness against these variations. However, current defenses can only withstand the specific attack used in training, and the models often remain vulnerable to other input variations. Moreover, these methods often degrade the model's performance on clean images. In this paper, we present Generative Adversarial Training, an approach to simultaneously improve the model's generalization and robustness to unseen adversarial attacks. Instead of altering a single pre-defined aspect of images, we generate a spectrum of low-level, mid-level and high-level changes using generative models with a disentangled latent space. Adversarial training with these examples enables the model to withstand a wide range of attacks by observing a variety of input alterations during training. We show that our approach not only improves the model's performance on clean images but also makes it robust against unforeseen attacks and outperforms prior work. We validate the effectiveness of our method by demonstrating results on various tasks such as classification, semantic segmentation and object detection.
AB - While deep neural networks have achieved remarkable success in various computer vision tasks, they often fail to generalize to subtle variations of input images. Several defenses have been proposed to improve robustness against these variations. However, current defenses can only withstand the specific attack used in training, and the models often remain vulnerable to other input variations. Moreover, these methods often degrade the model's performance on clean images. In this paper, we present Generative Adversarial Training, an approach to simultaneously improve the model's generalization and robustness to unseen adversarial attacks. Instead of altering a single pre-defined aspect of images, we generate a spectrum of low-level, mid-level and high-level changes using generative models with a disentangled latent space. Adversarial training with these examples enables the model to withstand a wide range of attacks by observing a variety of input alterations during training. We show that our approach not only improves the model's performance on clean images but also makes it robust against unforeseen attacks and outperforms prior work. We validate the effectiveness of our method by demonstrating results on various tasks such as classification, semantic segmentation and object detection.
UR - https://openaccess.thecvf.com/content/ICCV2021/html/Poursaeed_Robustness_and_Generalization_via_Generative_Adversarial_Training_ICCV_2021_paper.html
U2 - 10.1109/ICCV48922.2021.01542
DO - 10.1109/ICCV48922.2021.01542
M3 - Conference article
VL - 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
SP - 15691
EP - 15700
JO - IEEE Xplore Digital Library
JF - IEEE Xplore Digital Library
ER -
ID: 303805415