Learning to detect and match keypoints with deep architectures
Research output: Contribution to conference › Paper › Research › peer-review
Standard
Learning to detect and match keypoints with deep architectures. / Altwaijry, Hani; Veit, Andreas; Belongie, Serge.
2016. 49.1-49.12 Paper presented at 27th British Machine Vision Conference, BMVC 2016, York, United Kingdom.
RIS
TY - CONF
T1 - Learning to detect and match keypoints with deep architectures
AU - Altwaijry, Hani
AU - Veit, Andreas
AU - Belongie, Serge
N1 - Funding Information: We would like to thank Michael Wilber and Tsung-Yi Lin for their valuable input. This work was supported by the KACST Graduate Studies Scholarship. Publisher Copyright: © 2016. The copyright of this document resides with its authors.
PY - 2016
Y1 - 2016
N2 - Feature detection and description is a pivotal step in many computer vision pipelines. Traditionally, human engineered features have been the main workhorse in this domain. In this paper, we present a novel approach for learning to detect and describe keypoints from images leveraging deep architectures. To allow for a learning based approach, we collect a large-scale dataset of patches with matching multiscale keypoints. The proposed model learns from this vast dataset to identify and describe meaningful keypoints. We evaluate our model for the effectiveness of its learned representations for detecting multiscale keypoints and describing their respective support regions.
AB - Feature detection and description is a pivotal step in many computer vision pipelines. Traditionally, human engineered features have been the main workhorse in this domain. In this paper, we present a novel approach for learning to detect and describe keypoints from images leveraging deep architectures. To allow for a learning based approach, we collect a large-scale dataset of patches with matching multiscale keypoints. The proposed model learns from this vast dataset to identify and describe meaningful keypoints. We evaluate our model for the effectiveness of its learned representations for detecting multiscale keypoints and describing their respective support regions.
UR - http://www.scopus.com/inward/record.url?scp=85029570955&partnerID=8YFLogxK
U2 - 10.5244/C.30.49
DO - 10.5244/C.30.49
M3 - Paper
AN - SCOPUS:85029570955
SP - 49.1-49.12
T2 - 27th British Machine Vision Conference, BMVC 2016
Y2 - 19 September 2016 through 22 September 2016
ER -