Cross-view image geolocalization
Research output: Contribution to journal › Conference article › Research › peer-review
Standard
Cross-view image geolocalization. / Lin, Tsung Yi; Belongie, Serge; Hays, James.
In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2013, p. 891-898.
RIS
TY - GEN
T1 - Cross-view image geolocalization
AU - Lin, Tsung Yi
AU - Belongie, Serge
AU - Hays, James
PY - 2013
Y1 - 2013
N2 - The recent availability of large amounts of geotagged imagery has inspired a number of data-driven solutions to the image geolocalization problem. Existing approaches predict the location of a query image by matching it to a database of georeferenced photographs. While there are many geotagged images available on photo-sharing and street-view sites, most are clustered around landmarks and urban areas. The vast majority of the Earth's land area has no ground-level reference photos available, which limits the applicability of all existing image geolocalization methods. On the other hand, there is no shortage of visual and geographic data that densely covers the Earth - we examine overhead imagery and land cover survey data - but the relationship between this data and ground-level query photographs is complex. In this paper, we introduce a cross-view feature translation approach to greatly extend the reach of image geolocalization methods. We can often localize a query even if it has no corresponding ground-level images in the database. A key idea is to learn the relationship between ground-level appearance, overhead appearance, and land cover attributes from sparsely available geotagged ground-level images. We perform experiments over a 1600 km² region containing a variety of scenes and land cover types. For each query, our algorithm produces a probability density over the region of interest.
UR - http://www.scopus.com/inward/record.url?scp=84887356836&partnerID=8YFLogxK
U2 - 10.1109/CVPR.2013.120
DO - 10.1109/CVPR.2013.120
M3 - Conference article
AN - SCOPUS:84887356836
SP - 891
EP - 898
JO - IEEE Conference on Computer Vision and Pattern Recognition. Proceedings
JF - IEEE Conference on Computer Vision and Pattern Recognition. Proceedings
SN - 1063-6919
M1 - 6618964
T2 - 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2013
Y2 - 23 June 2013 through 28 June 2013
ER -