Learning single-view 3D reconstruction with limited pose supervision
Research output: Contribution to journal › Conference article › Research › peer-review
It is expensive to label images with 3D structure or precise camera pose. Yet, this is precisely the kind of annotation required to train single-view 3D reconstruction models. In contrast, unlabeled images or images with just category labels are easy to acquire, but few current models can use this weak supervision. We present a unified framework that can combine both types of supervision: a small number of camera pose annotations is used to enforce pose-invariance and viewpoint consistency, and unlabeled images combined with an adversarial loss are used to enforce the realism of rendered, generated models. We use this unified framework to measure the impact of each form of supervision in three paradigms: semi-supervised, multi-task, and transfer learning. We show that with a combination of these ideas, we can train single-view reconstruction models that improve by up to 7 points in performance (AP) when using only 1% pose-annotated training data.
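The abstract describes a training objective that mixes three supervision signals: a reconstruction term, a viewpoint-consistency term for the small pose-annotated subset, and an adversarial realism term for unlabeled images. The sketch below illustrates how such terms could be combined; all function names, loss forms, and weights are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the combined objective described in the abstract.
# Each term applies only when its supervision signal is available for a
# given example; weights w_pose and w_adv are assumed hyperparameters.

def reconstruction_loss(pred_voxels, true_voxels):
    """Mean squared error between predicted and reference occupancies."""
    n = len(pred_voxels)
    return sum((p - t) ** 2 for p, t in zip(pred_voxels, true_voxels)) / n

def pose_consistency_loss(rendered_view, observed_view):
    """Penalize disagreement between a rendering from the annotated
    camera pose and the observed image (viewpoint consistency)."""
    n = len(rendered_view)
    return sum((r - o) ** 2 for r, o in zip(rendered_view, observed_view)) / n

def adversarial_loss(discriminator_score):
    """Generator-side loss: push the discriminator's score for a
    rendered, generated model toward 'real' (1.0)."""
    return (discriminator_score - 1.0) ** 2

def combined_loss(pred_voxels, true_voxels,
                  rendered_view=None, observed_view=None,
                  disc_score=None, w_pose=1.0, w_adv=0.1):
    """Sum the terms whose supervision is present for this example."""
    loss = reconstruction_loss(pred_voxels, true_voxels)
    if rendered_view is not None:  # only for the pose-annotated subset
        loss += w_pose * pose_consistency_loss(rendered_view, observed_view)
    if disc_score is not None:     # only for unlabeled images
        loss += w_adv * adversarial_loss(disc_score)
    return loss
```

Keeping the terms modular like this is what lets the same objective cover the semi-supervised, multi-task, and transfer settings the abstract mentions: each example simply activates the terms its labels support.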
Original language | English |
---|---|
Journal | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
Pages (from-to) | 90-105 |
Number of pages | 16 |
ISSN | 0302-9743 |
DOIs | |
Publication status | Published - 2018 |
Externally published | Yes |
Event | 15th European Conference on Computer Vision, ECCV 2018, Munich, Germany. Duration: 8 Sep 2018 → 14 Sep 2018 |
Conference
Conference | 15th European Conference on Computer Vision, ECCV 2018 |
---|---|
Country | Germany |
City | Munich |
Period | 08/09/2018 → 14/09/2018 |
Bibliographical note
Publisher Copyright:
© Springer Nature Switzerland AG 2018.
Research areas

- Few-shot learning
- GANs
- Single-image 3D-reconstruction