Joint Detection and Matching of Feature Points in Multimodal Images

Elad Ben Baruch, Yosi Keller

Research output: Contribution to journal › Article › peer-review


Abstract

In this work, we propose a novel Convolutional Neural Network (CNN) architecture for the joint detection and matching of feature points in images acquired by different sensors, using a single forward pass. The resulting feature detector is tightly coupled with the feature descriptor, in contrast to classical approaches (SIFT, etc.), where detection precedes and differs from descriptor computation. Our approach utilizes two CNN subnetworks: the first is a Siamese CNN, and the second consists of dual non-weight-sharing CNNs. This allows simultaneous processing and fusion of the joint and disjoint cues in the multimodal image patches. The proposed approach is experimentally shown to outperform contemporary state-of-the-art schemes when applied to multiple datasets of multimodal images. It is also shown to provide repeatable feature point detections across multi-sensor images, outperforming state-of-the-art detectors. To the best of our knowledge, it is the first unified approach to the detection and matching of such images.
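The two-subnetwork idea in the abstract can be illustrated with a minimal, framework-free sketch: a shared ("Siamese") convolution extracts the joint cues from patches of both modalities, modality-specific convolutions extract the disjoint cues, and the two feature sets are fused into a single descriptor. All names, kernel sizes, and the cosine-similarity matching below are illustrative assumptions for this sketch; they are not the paper's actual architecture, weights, or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Valid 2-D cross-correlation of a single-channel patch x with kernel w."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

# Hypothetical weights (illustrative, not taken from the paper).
w_shared = rng.standard_normal((3, 3))  # Siamese branch: same weights for both modalities
w_vis = rng.standard_normal((3, 3))     # branch specific to, e.g., a visible-light sensor
w_ir = rng.standard_normal((3, 3))      # branch specific to, e.g., an infrared sensor

def embed(patch, w_specific):
    """Fuse joint (shared) and disjoint (modality-specific) cues into one descriptor."""
    joint = np.maximum(conv2d(patch, w_shared), 0).ravel()     # ReLU on the shared branch
    disjoint = np.maximum(conv2d(patch, w_specific), 0).ravel()
    f = np.concatenate([joint, disjoint])                      # fusion by concatenation
    return f / (np.linalg.norm(f) + 1e-8)                      # L2-normalised descriptor

def match_score(patch_a, patch_b):
    """Cosine similarity between descriptors of patches from the two sensors."""
    return float(embed(patch_a, w_vis) @ embed(patch_b, w_ir))

# Toy patches standing in for co-located regions of two multimodal images.
patch_vis = rng.standard_normal((8, 8))
patch_ir = rng.standard_normal((8, 8))
score = match_score(patch_vis, patch_ir)
```

In practice the branches would be deep CNN stacks trained end to end, and the fused features would also feed a detection head; the point of the sketch is only the split between weight-sharing and non-weight-sharing paths and their fusion into a single matchable descriptor.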

Original language: English
Pages (from-to): 6585-6593
Number of pages: 9
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 44
Issue number: 10
State: Published - 1 Oct 2022

Bibliographical note

Publisher Copyright:
© 1979-2012 IEEE.

Keywords

  • Deep learning
  • feature points detection
  • image matching
  • multisensor images
