Learning local invariant Mahalanobis distances

Ethan Fetaya, Shimon Ullman

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

15 Scopus citations

Abstract

For many tasks and data types, there are natural transformations to which the data should be invariant or insensitive. For instance, in visual recognition, natural images should be insensitive to rotation and translation. This requirement and its implications have been important in many machine learning applications, and tolerance for image transformations has primarily been achieved by using robust feature vectors. In this paper we propose a novel and computationally efficient way to learn a local Mahalanobis metric per datum, and show how to learn a metric that is locally invariant to any given transformation in order to improve performance.
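The abstract refers to Mahalanobis metrics and to invariance under transformations such as rotation and translation. The sketch below is not the learning algorithm from the paper; it only illustrates, under the standard definitions, what a Mahalanobis distance is and one common way to make such a metric insensitive to a given transformation, namely projecting out the transformation's tangent directions, as in tangent-distance methods. All function names and the toy data are illustrative assumptions.

```python
import numpy as np

def mahalanobis_distance(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y),
    where M is a positive semi-definite matrix."""
    d = x - y
    return float(d @ M @ d)

def transformation_invariant_metric(M, tangents):
    """Illustrative only: make a metric insensitive to small transformations
    by projecting out the transformation tangent directions (the directions
    in which a datum moves under, e.g., a small rotation or translation).

    `tangents` is a (k, n) array whose rows span those directions; the
    returned matrix assigns zero length to movement along them."""
    n = M.shape[0]
    # Orthonormal basis of the tangent span
    Q, _ = np.linalg.qr(tangents.T)
    # Projector onto the orthogonal complement of the tangent span
    P = np.eye(n) - Q @ Q.T
    return P.T @ M @ P

# Toy usage: a metric on R^3 that ignores motion along the first axis.
M = np.eye(3)
tangent = np.array([[1.0, 0.0, 0.0]])
M_inv = transformation_invariant_metric(M, tangent)
x = np.array([0.0, 1.0, 0.0])
y = np.array([5.0, 1.0, 0.0])            # differs only along the ignored direction
print(mahalanobis_distance(x, y, M))      # 25.0 under the plain metric
print(mahalanobis_distance(x, y, M_inv))  # ~0.0 under the invariant metric
```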

Original language: English
Title of host publication: 32nd International Conference on Machine Learning, ICML 2015
Editors: Francis Bach, David Blei
Publisher: International Machine Learning Society (IMLS)
Pages: 162-168
Number of pages: 7
ISBN (Electronic): 9781510810587
State: Published - 2015
Externally published: Yes
Event: 32nd International Conference on Machine Learning, ICML 2015 - Lille, France
Duration: 6 Jul 2015 - 11 Jul 2015

Publication series

Name: 32nd International Conference on Machine Learning, ICML 2015
Volume: 1

Conference

Conference: 32nd International Conference on Machine Learning, ICML 2015
Country/Territory: France
City: Lille
Period: 6/07/15 - 11/07/15
