Multi-modal Differentiable Unsupervised Feature Selection

Junchen Yang, Ofir Lindenbaum, Yuval Kluger, Ariel Jaffe

Research output: Contribution to journal › Conference article › Peer-reviewed

Abstract

Multi-modal high-throughput biological data presents a great scientific opportunity and a significant computational challenge. In multi-modal measurements, every sample is observed simultaneously by two or more sets of sensors. In such settings, many observed variables in both modalities are often nuisance variables that do not carry information about the phenomenon of interest. Here, we propose a multi-modal unsupervised feature selection framework: identifying informative variables based on coupled high-dimensional measurements. Our method is designed to identify features associated with two types of latent low-dimensional structures: (i) shared structures that govern the observations in both modalities, and (ii) differential structures that appear in only one modality. To that end, we propose two Laplacian-based scoring operators. We incorporate the scores with differentiable gates that mask nuisance features and enhance the accuracy of the structure captured by the graph Laplacian. The performance of the new scheme is illustrated using synthetic and real datasets, including an extended biological application to single-cell multi-omics.
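To make the core idea concrete, the sketch below shows a single-modality building block of the kind the abstract describes: a classic Laplacian feature score computed on soft-gated data, where low scores indicate features that vary smoothly over the sample graph. This is a minimal illustration, not the paper's method; the RBF affinity, the gate values, and all function names here are assumptions, and the actual framework couples two modalities and learns the gates by gradient descent.

```python
import numpy as np

def rbf_affinity(X, sigma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) affinity matrix.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)  # no self-affinity
    return W

def laplacian_score(X, gates, sigma=2.0):
    # Mask features with soft gates in [0, 1], build the graph
    # Laplacian on the gated data, then score each feature f by
    # f^T L f / f^T D f (lower = smoother over the graph, i.e.
    # more consistent with the latent low-dimensional structure).
    Xg = X * gates                 # soft feature masking
    W = rbf_affinity(Xg, sigma)
    d = W.sum(axis=1)              # node degrees (diagonal of D)
    # Degree-weighted centering, as in the standard Laplacian score.
    Xc = X - (X * d[:, None]).sum(axis=0) / d.sum()
    num = np.einsum('if,ij,jf->f', Xc, W, Xc)   # f^T W f per feature
    den = (Xc ** 2 * d[:, None]).sum(axis=0)    # f^T D f per feature
    return 1.0 - num / den                      # = f^T L f / f^T D f
```

In a learned-gate setting, `gates` would be a differentiable relaxation of a binary mask (e.g. a sigmoid of trainable parameters), so that minimizing the scores of retained features pushes the gates of nuisance features toward zero.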

Original language: English
Pages (from-to): 2400-2410
Number of pages: 11
Journal: Proceedings of Machine Learning Research
Volume: 216
State: Published - 2023
Event: 39th Conference on Uncertainty in Artificial Intelligence, UAI 2023 - Pittsburgh, United States
Duration: 31 Jul 2023 - 4 Aug 2023

Bibliographical note

Publisher Copyright:
© UAI 2023. All rights reserved.
