IoT-based intrusion detection system using explainable multi-class deep learning approaches

  • Sapna Sadhwani
  • Ameya Navare
  • Alan Mohan
  • Raja Muthalagu
  • Pranav M. Pawar

Research output: Contribution to journal › Article › peer-review

12 Scopus citations

Abstract

With the surge of Internet of Things (IoT) deployments across various domains and the rise in security threats, researchers have developed Intrusion Detection Systems (IDS) to detect attacks in networks. Machine Learning (ML) and Deep Learning (DL) models are powerful at detecting and classifying attacks; however, they are black-box in nature and lack interpretability. Explainable Artificial Intelligence (XAI) addresses this limitation by improving a model's transparency and trustworthiness, and research in XAI has increased significantly. However, its application within cybersecurity, and IoT intrusion detection in particular, requires more work to interpret IDS models and to explain how various cyber-attacks occur. This work proposes DL-based IDS trained on four datasets: NSL-KDD, UNSW-NB15, TON-IoT and X-IIoTID, and applies XAI using Shapley Additive Explanations (SHAP) to interpret these models. Using four datasets captures diverse network environments, allowing the models to be evaluated and interpreted thoroughly. Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM) and Bidirectional LSTM (Bi-LSTM) based models were trained for multi-class classification. For each dataset, the best model (based on performance and training time) was chosen and SHAP was applied to it. Furthermore, a novel set of 15 features that most influenced the model's decisions was extracted from the explanations generated by SHAP. The models trained on these reduced feature sets required less training time while achieving higher performance than peer models. This work achieves model accuracies of 98.21% on NSL-KDD, 97.80% on TON-IoT, 92.9% on UNSW-NB15 and 98.09% on X-IIoTID using a CNN-based model, CNN-X, with a subset of only 15 features per dataset. It thus achieves high model performance while improving the efficiency and interpretability of IDS.
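The feature-reduction step the abstract describes — ranking features by their SHAP-derived importance and keeping the top 15 — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `top_k_features` helper, the feature names and the SHAP values below are all hypothetical stand-ins (in practice the values would come from a library such as `shap` applied to the trained CNN).

```python
# Sketch: rank features by mean absolute SHAP value (a common global-
# importance measure) and keep the top k. Pure-Python stand-in for
# the paper's SHAP-based selection of 15 features.

def top_k_features(shap_values, feature_names, k=15):
    """Return the k features with the largest mean |SHAP| value.

    shap_values: list of rows, one per sample; each row holds one
    SHAP value per feature.
    """
    n_features = len(feature_names)
    # Mean absolute SHAP value per feature = global importance score.
    importance = [
        sum(abs(row[j]) for row in shap_values) / len(shap_values)
        for j in range(n_features)
    ]
    # Sort feature indices by importance, highest first.
    ranked = sorted(range(n_features), key=lambda j: importance[j], reverse=True)
    return [feature_names[j] for j in ranked[:k]]

# Toy example with 4 illustrative feature names and k=2 for brevity.
names = ["duration", "src_bytes", "dst_bytes", "count"]
vals = [
    [0.1, -0.9, 0.05, 0.3],
    [-0.2, 0.8, 0.00, -0.4],
]
print(top_k_features(vals, names, k=2))  # ['src_bytes', 'count']
```

The retrained models would then use only the selected columns of each dataset, which is what reduces training time without sacrificing accuracy.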

Original language: English
Article number: 110256
Journal: Computers and Electrical Engineering
Volume: 123
State: Published - Apr 2025
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2025

Keywords

  • Artificial Intelligence (AI)
  • Explainable AI (XAI)
  • Internet of Things (IoT)
  • Intrusion Detection Systems (IDS)
  • Local Interpretable Model-agnostic Explanations (LIME)
  • Long Short-Term Memory (LSTM)
  • Shapley Additive Explanations (SHAP)
