Abstract
Direction-of-arrival (DOA) estimation using deep neural networks has shown great potential for applications in complicated environments. However, conventional matrix-based deep neural networks vectorize multi-dimensional signal statistics into an excessively long input, necessitating a large number of parameters in neural layers. These parameters require substantial computational resources for training. To address this problem, we propose a resource-efficient tensorized neural network for deep tensor two-dimensional DOA estimation. In this network, the covariance tensor corresponding to the uniform rectangular array (URA) is propagated to hidden state tensors that encapsulate essential signal features. To reduce the number of trainable parameters, the feedforward propagation is formulated as inverse Tucker decomposition, compressing the parameters into inverse Tucker factors. An effective tensorized backpropagation procedure is then designed to train the compressed parameters, and the Tucker rank sequences are tuned through Bayesian optimization to ensure satisfactory network performance. Our simulation results demonstrate the superiority of the proposed tensorized deep neural network over its matrix-based counterpart. In a scenario with a 10 × 10 URA and 2 sources, the proposed network reduces the number of trainable parameters by more than 122,000 times. Consequently, it achieves faster training speed and uses less GPU memory, while maintaining comparable estimation accuracy and angular resolution even under non-ideal conditions and in varying scenarios.
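To illustrate the parameter-compression idea described above, the following minimal sketch (not the authors' code) contrasts a matrix-based layer, which vectorizes the URA covariance tensor, with a Tucker-style layer that applies one small factor matrix per tensor mode. The hidden width, Tucker rank sequence, and factor shapes are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

M1, M2 = 10, 10                     # URA size (10 x 10, as in the abstract)
cov = np.random.randn(M1, M2, M1, M2) + 1j * np.random.randn(M1, M2, M1, M2)
cov = 0.5 * (cov + np.conj(cov.transpose(2, 3, 0, 1)))   # enforce Hermitian symmetry

# --- matrix-based layer: vectorize the tensor, apply a dense weight ---------
hidden_dim = 512                    # assumed hidden width
W_dense = np.random.randn(hidden_dim, M1 * M2 * M1 * M2)
h_dense = W_dense @ cov.reshape(-1).real
dense_params = W_dense.size         # 512 * 10,000 = 5,120,000 weights

# --- Tucker-style layer: one small factor matrix per tensor mode ------------
ranks = (8, 8, 8, 8)                # assumed Tucker rank sequence
U = [np.random.randn(r, m) for r, m in zip(ranks, (M1, M2, M1, M2))]
# mode-wise products: h[a,b,c,d] = sum_{i,j,k,l} U1[a,i] U2[b,j] U3[c,k] U4[d,l] cov[i,j,k,l]
h_tucker = np.einsum('ai,bj,ck,dl,ijkl->abcd', *U, cov.real)
tucker_params = sum(u.size for u in U)   # 4 * 8 * 10 = 320 weights

print(f"dense layer parameters : {dense_params}")
print(f"Tucker factor parameters: {tucker_params}")
print(f"compression factor      : {dense_params / tucker_params:.0f}x")
```

Even in this toy setting the mode-wise factorization stores a few hundred weights instead of millions, which is the effect the abstract reports at much larger scale; the actual network additionally trains the factors with the tensorized backpropagation procedure and selects the ranks via Bayesian optimization.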
| Original language | English |
|---|---|
| Pages (from-to) | 4065-4080 |
| Number of pages | 16 |
| Journal | IEEE Transactions on Signal Processing |
| Volume | 72 |
| DOIs | |
| State | Published - 2024 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 1991-2012 IEEE.
Keywords
- Deep neural network
- Tucker decomposition
- direction-of-arrival estimation
- tensorized neural network