Abstract
The task of Visual Place Recognition (VPR) is to predict the location of a query image from a database of geo-tagged images. Recent studies in VPR have highlighted the significant advantage of employing pre-trained foundation models like DINOv2 for the VPR task. However, these models are often deemed inadequate for VPR without further fine-tuning on VPR-specific data. In this paper, we present an effective approach to harness the potential of a foundation model for VPR. We show that features extracted from self-attention layers can act as a powerful re-ranker for VPR, even in a zero-shot setting. Our method not only outperforms previous zero-shot approaches but also achieves results competitive with several supervised methods. We then show that a single-stage approach utilizing internal ViT layers for pooling can produce global features that achieve state-of-the-art performance, with impressive feature compactness down to 128D. Moreover, integrating our local foundation features for re-ranking further widens this performance gap. Our method also demonstrates exceptional robustness and generalization, setting a new state of the art under challenging conditions such as occlusion, day-night transitions, and seasonal variations.
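The abstract describes a two-stage pipeline: global-descriptor retrieval followed by re-ranking with local features. A generic version of that pipeline can be sketched as below; note this is an illustrative sketch only — the random descriptors, the mutual-nearest-neighbor scoring, and all array shapes are assumptions for demonstration, not the paper's actual DINOv2-based features or re-ranking rule.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Scale rows (or a single vector) to unit L2 norm."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def retrieve_top_k(query_global, db_globals, k=5):
    """Stage 1: rank database images by cosine similarity of global descriptors."""
    sims = l2_normalize(db_globals) @ l2_normalize(query_global)
    return np.argsort(-sims)[:k].tolist()

def rerank_by_mutual_nn(query_locals, db_locals_list, candidates):
    """Stage 2 (illustrative): re-rank candidates by counting mutual
    nearest-neighbor matches between query and candidate patch features."""
    q = l2_normalize(query_locals)
    scores = []
    for idx in candidates:
        d = l2_normalize(db_locals_list[idx])
        sim = q @ d.T                    # (num_query_patches, num_db_patches)
        fwd = sim.argmax(axis=1)         # each query patch's best db patch
        bwd = sim.argmax(axis=0)         # each db patch's best query patch
        mutual = int(sum(bwd[j] == i for i, j in enumerate(fwd)))
        scores.append(mutual)
    order = np.argsort(-np.asarray(scores), kind="stable")
    return [candidates[i] for i in order]
```

In practice the global descriptor would come from pooling internal ViT layers and the local features from self-attention layers of a frozen foundation model, but any descriptor source plugs into the same two-stage structure.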
| Original language | English |
|---|---|
| Title of host publication | 13th International Conference on Learning Representations, ICLR 2025 |
| Publisher | International Conference on Learning Representations, ICLR |
| Pages | 62408-62430 |
| Number of pages | 23 |
| ISBN (Electronic) | 9798331320850 |
| State | Published - 2025 |
| Externally published | Yes |
| Event | 13th International Conference on Learning Representations, ICLR 2025 - Singapore, Singapore |
| Duration | 24 Apr 2025 → 28 Apr 2025 |
Publication series
| Name | 13th International Conference on Learning Representations, ICLR 2025 |
|---|---|
Conference
| Conference | 13th International Conference on Learning Representations, ICLR 2025 |
|---|---|
| Country/Territory | Singapore |
| City | Singapore |
| Period | 24/04/25 → 28/04/25 |
Bibliographical note
Publisher Copyright: © 2025 13th International Conference on Learning Representations, ICLR 2025. All rights reserved.
Fingerprint
Research topics of 'EffoVPR: Effective Foundation Model Utilization for Visual Place Recognition'.