Unsupervised Acoustic Scene Mapping Based on Acoustic Features and Dimensionality Reduction

Research output: Contribution to journal › Conference article › peer-review

Abstract

Classical methods for acoustic scene mapping require the estimation of the time difference of arrival (TDOA) between microphones. Unfortunately, TDOA estimation is highly sensitive to reverberation and additive noise. We introduce an unsupervised data-driven approach that exploits the natural structure of the data. Toward this goal, we adapt the recently proposed local conformal autoencoder (LOCA), an offline deep learning scheme for extracting standardized data coordinates from measurements. Our experimental setup includes a microphone array that measures a transmitted sound source, whose position is unknown, at multiple locations across the acoustic enclosure. We demonstrate that our proposed scheme learns an isometric representation of the microphones' spatial locations and can extrapolate to new, unvisited regions. The performance of our method is evaluated in a series of realistic simulations and compared with a classical approach and other dimensionality-reduction schemes. We further assess the influence of reverberation on our framework and show that it remains considerably robust.
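The LOCA scheme referenced above calibrates an encoder so that small clouds ("bursts") of nearby measurements are mapped to isotropic neighborhoods of a fixed scale, which is what yields standardized, locally isometric coordinates. Below is a minimal sketch of that whitening criterion only, not the authors' implementation: the synthetic burst data, the distortion matrix, and all names are illustrative assumptions, and the real method trains a deep encoder on acoustic (RTF) features rather than evaluating a fixed map.

```python
import numpy as np

rng = np.random.default_rng(0)

def loca_whitening_loss(embedded_bursts, sigma2):
    """LOCA-style calibration loss (sketch): penalize each burst whose
    embedded sample covariance deviates from sigma2 * I. A zero loss
    means the embedding is locally isotropic at a common scale."""
    loss = 0.0
    for Y in embedded_bursts:
        C = np.cov(Y, rowvar=False)              # d x d sample covariance
        d = C.shape[0]
        loss += np.sum((C - sigma2 * np.eye(d)) ** 2)
    return loss / len(embedded_bursts)

# Illustrative data: bursts of points around anchor positions in a 2-D plane
# (standing in for microphone locations inside the enclosure).
sigma2 = 1e-2
anchors = rng.uniform(0.0, 1.0, size=(20, 2))
bursts = [a + np.sqrt(sigma2) * rng.standard_normal((200, 2)) for a in anchors]

# A toy "measurement" distortion (stand-in for the nonlinear map from
# position to acoustic features): an anisotropic stretch.
A = np.array([[3.0, 0.0], [0.0, 0.5]])
measured = [Y @ A.T for Y in bursts]

print(loca_whitening_loss(bursts, sigma2))    # near zero: already calibrated
print(loca_whitening_loss(measured, sigma2))  # large: distortion detected
```

In the full method this loss would drive gradient-based training of an encoder network so that the distorted measurements are pulled back to the calibrated, near-zero-loss regime.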

Original language: English
Pages (from-to): 386-390
Number of pages: 5
Journal: Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing
State: Published - 2024
Event: 49th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024 - Seoul, Korea, Republic of
Duration: 14 Apr 2024 - 19 Apr 2024

Bibliographical note

Publisher Copyright:
© 2024 IEEE.

Keywords

  • acoustic scene mapping
  • dimensionality reduction
  • local conformal autoencoder (LOCA)
  • relative transfer function (RTF)
  • unsupervised learning
