Abstract
Feature ensembles are a robust and effective method for finding the feature set that yields the best predictive accuracy for learning agents. However, current feature ensemble algorithms do not consider explainability as a key factor in their construction. To address this limitation, we present an algorithm that optimizes for both the explainability and the performance of a model: the Optimizing Feature Ensembles for Explainability (OFEE) algorithm. OFEE uses intersections of feature sets to produce a feature ensemble that optimally balances explainability and performance. Furthermore, OFEE is parameter-free and therefore adapts itself to a given dataset and its explainability requirements. To evaluate OFEE, we considered two explainability measures, one based on ensemble size and the other based on ensemble stability. We found that OFEE was highly effective across the nine canonical datasets we considered, outperforming other feature selection algorithms by an average of over 8% and over 7% on the size-based and stability-based explainability measures, respectively.
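The abstract does not spell out the intersection procedure in detail, so the following Python sketch only illustrates the general idea of intersecting feature subsets proposed by several base selectors to obtain a smaller, agreed-upon feature set; it is not the authors' implementation, and the base selectors, the subset size `k`, the fallback rule, and the evaluation model are all assumptions made for illustration.

```python
# Minimal sketch (not the paper's OFEE algorithm): intersect the feature
# subsets chosen by several base selectors, then evaluate the resulting
# feature set. All selectors, k, and the classifier are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
k = 10  # illustrative subset size for each base selector

# Each base selector proposes a feature subset.
subsets = []
for score_func in (f_classif, mutual_info_classif):
    selector = SelectKBest(score_func=score_func, k=k).fit(X, y)
    subsets.append(set(selector.get_support(indices=True)))

# A tree-based importance ranking acts as a third "selector".
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
subsets.append(set(np.argsort(rf.feature_importances_)[-k:]))

# The intersection keeps only features every selector agrees on,
# shrinking the ensemble (smaller sets are easier to explain).
ensemble = sorted(set.intersection(*subsets))
if not ensemble:
    # Fallback (our assumption): use the union if the selectors fully disagree.
    ensemble = sorted(set.union(*subsets))

# Check that the reduced feature set still performs well.
score = cross_val_score(RandomForestClassifier(random_state=0),
                        X[:, ensemble], y, cv=5).mean()
print(f"{len(ensemble)} features selected, CV accuracy = {score:.3f}")
```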
| Original language | English |
| --- | --- |
| Pages (from-to) | 2248-2260 |
| Number of pages | 13 |
| Journal | Applied Intelligence |
| Volume | 54 |
| Issue number | 2 |
| DOIs | |
| State | Published - Jan 2024 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © The Author(s) 2024.
Funding
The authors wish to thank Roni Reznik for his valuable help in implementing the algorithm and running the experiments.
Keywords
- Ensemble feature selection
- Explainable AI
- Machine learning
- Optimized feature selection