We propose a novel reinforcement learning-based approach for adaptive and iterative feature selection. Given a masked vector of input features, a reinforcement learning agent iteratively selects features to be unmasked, and uses them to predict an outcome once it is sufficiently confident. The algorithm relies on a novel environment setting, corresponding to a non-stationary Markov Decision Process. A key component of our approach is a guesser network, trained to predict the outcome from the selected features and parametrizing the reward function. Applying our method to a national survey dataset, we show that it not only outperforms strong baselines when the prediction must be made from a small number of input features, but is also considerably more interpretable. Our code is publicly available at https://github.com/ushaham/adaptiveFS.
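The core loop described in the abstract — an agent repeatedly unmasking features and a guesser predicting once it is confident — can be sketched as below. This is a minimal illustration, not the authors' implementation: the `select_feature` policy, the `guess` network, the confidence threshold, and the feature budget are all hypothetical stand-ins.

```python
import numpy as np

def adaptive_feature_selection(x, select_feature, guess, budget, conf_threshold=0.9):
    """Iteratively unmask features chosen by a policy, stopping when the
    guesser is confident enough or the feature budget is exhausted."""
    mask = np.zeros_like(x, dtype=bool)      # no features revealed yet
    for _ in range(budget):
        i = select_feature(x * mask, mask)   # agent picks a still-masked feature
        mask[i] = True                       # reveal it
        probs = guess(x * mask, mask)        # guesser's outcome probabilities
        if probs.max() >= conf_threshold:    # confident enough to stop early
            break
    return int(np.argmax(probs)), mask

# Toy stand-ins (hypothetical, for illustration only):
x = np.array([0.2, 1.5, -0.3, 0.8])

def select_feature(masked_x, mask):
    # "agent": reveal the first still-masked feature
    return int(np.argmin(mask))

def guess(masked_x, mask):
    # "guesser": confidence grows with the number of revealed features
    p = min(0.5 + 0.2 * mask.sum(), 1.0)
    return np.array([p, 1.0 - p])

prediction, used_mask = adaptive_feature_selection(x, select_feature, guess, budget=4)
```

In the actual method, `select_feature` would be a learned RL policy and `guess` the trained guesser network, whose confidence both terminates the episode and shapes the reward.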
Number of pages: 25
Journal: Proceedings of Machine Learning Research
State: Published - 2020
Event: 5th Machine Learning for Healthcare Conference, MLHC 2020 - Virtual, Online
Duration: 7 Aug 2020 → 8 Aug 2020
Bibliographical note (Publisher Copyright): © 2020 Uri U. Shaham.