An algorithm to optimize explainability using feature ensembles

Teddy Lazebnik, Svetlana Bunimovich-Mendrazitsky, Avi Rosenfeld

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

Feature ensembles are a robust and effective method for finding the feature set that yields the best predictive accuracy for learning agents. However, current feature ensemble algorithms do not consider explainability as a key factor in their construction. To address this limitation, we present an algorithm that optimizes for both the explainability and the performance of a model: the Optimizing Feature Ensembles for Explainability (OFEE) algorithm. OFEE uses intersections of feature sets to produce a feature ensemble that optimally balances explainability and performance. Furthermore, OFEE is parameter-free and as such adapts itself to a given dataset and its explainability requirements. To evaluate OFEE, we considered two explainability measures, one based on ensemble size and the other based on ensemble stability. We found that OFEE was highly effective across the nine canonical datasets we considered, outperforming other feature selection algorithms by an average of over 8% and 7% on the size and stability explainability measures, respectively.
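The abstract does not detail OFEE's procedure, so the following is only a loose sketch of the general idea of combining feature subsets via intersections; the function name, the pairwise-intersection strategy, and the size-based ranking are all assumptions for illustration, not the published algorithm.

```python
# Illustrative sketch: build candidate feature sets by intersecting the
# subsets chosen by several feature selectors, then rank candidates by
# size (smaller sets serve as a crude explainability proxy).
from itertools import combinations

def intersect_feature_sets(feature_sets, min_size=1):
    """Return candidate feature sets formed from pairwise intersections
    of the input subsets, smallest (most explainable) first."""
    candidates = set()
    for a, b in combinations(feature_sets, 2):
        inter = frozenset(a) & frozenset(b)
        if len(inter) >= min_size:
            candidates.add(inter)
    return sorted(candidates, key=len)

# Example: subsets picked by three hypothetical selectors.
sets = [{"age", "bmi", "glucose"},
        {"age", "glucose", "bp"},
        {"bmi", "glucose", "bp"}]
print(intersect_feature_sets(sets))
```

In this toy example every pairwise intersection retains "glucose", the feature all three selectors agree on, illustrating how intersections surface stable, compact feature sets.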

Original language: English
Pages (from-to): 2248-2260
Number of pages: 13
Journal: Applied Intelligence
Volume: 54
Issue number: 2
DOIs
State: Published - Jan 2024
Externally published: Yes

Bibliographical note

Publisher Copyright:
© The Author(s) 2024.

Funding

The authors wish to thank Roni Reznik for his valuable help in implementing the algorithm and running the experiments.

Keywords

  • Ensemble feature selection
  • Explainable AI
  • Machine learning
  • Optimized feature selection