Abstract
Deep learning algorithms and deep neural networks (DNNs) have become extremely popular due to their high accuracy in complex fields, such as image and text classification, speech understanding, document segmentation, credit scoring, and facial recognition. Because of their highly nonlinear structure, these networks are hard to interpret: it is not clear how the models reach their conclusions, and they are therefore often considered black-box models. The poor transparency of these models is a major drawback despite their effectiveness. In addition, recent regulations, such as the General Data Protection Regulation (GDPR), require that, in many cases, an explanation be provided whenever the learning model may affect a person’s life. For example, in autonomous vehicle applications, methods for visualizing, explaining, and interpreting deep learning models that analyze driver behavior and the road environment have become standard. Explainable artificial intelligence (XAI) and interpretable machine learning (IML) programs aim to create a suite of methods and techniques that produce more explainable models while maintaining a high level of output accuracy [1–4]. These programs enable human users to better understand, trust, and manage the emerging generation of artificially intelligent systems [4].
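To make the idea of explaining a black-box model concrete, the following is a minimal, self-contained sketch of one widely used model-agnostic technique: permutation feature importance, which estimates how much a model's accuracy relies on each input feature by shuffling that feature and measuring the drop in accuracy. The toy data, the stand-in `model_predict` function, and the helper name `permutation_importance` are illustrative assumptions, not taken from the chapter itself.

```python
# Illustrative sketch (not from the chapter): permutation feature
# importance, a simple model-agnostic XAI technique.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the label depends only on feature 0; feature 1 is noise.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

# A stand-in "black-box" model: here it predicts from feature 0's sign.
def model_predict(X):
    return (X[:, 0] > 0).astype(int)

def permutation_importance(predict, X, y, n_repeats=10):
    """For each feature, shuffle its column and record the mean
    drop in accuracy relative to the unshuffled baseline."""
    baseline = np.mean(predict(X) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            drops.append(baseline - np.mean(predict(Xp) == y))
        importances.append(float(np.mean(drops)))
    return importances

imp = permutation_importance(model_predict, X, y)
# Expect a large importance for feature 0 and ~0 for feature 1.
```

A production version of this idea is available as `sklearn.inspection.permutation_importance`; the point of the sketch is only that such explanations treat the model as a black box, querying it through predictions alone.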
| Original language | English |
|---|---|
| Title of host publication | Machine Learning for Data Science Handbook |
| Subtitle of host publication | Data Mining and Knowledge Discovery Handbook, Third Edition |
| Publisher | Springer International Publishing |
| Pages | 971-985 |
| Number of pages | 15 |
| ISBN (Electronic) | 9783031246289 |
| ISBN (Print) | 9783031246272 |
| DOIs | |
| State | Published - 1 Jan 2023 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © Springer Nature Switzerland AG 2023.