A Systematic Review of Adversarial Machine Learning Attacks, Defensive Controls, and Technologies

Jasmita Malik, Raja Muthalagu, Pranav M. Pawar

Research output: Contribution to journal › Article › peer-review

Abstract

Adversarial machine learning (AML) attacks have become a major concern for organizations in recent years, as AI has become the industry's focal point and GenAI applications have grown in popularity worldwide. Organizations are eager to invest in GenAI applications and to develop their own large language models, but they face numerous security and data privacy issues, particularly AML attacks. AML attacks have compromised numerous large-scale machine learning models. If carried out successfully, they can significantly reduce the efficiency and precision of machine learning models, with far-reaching negative consequences in critical domains such as healthcare and autonomous transportation systems. In this paper, AML attacks are identified, analyzed, and classified using adversarial tactics and techniques. This research also recommends open-source tools for testing AI and ML models against AML attacks, and suggests specific mitigating measures for each attack. It aims to serve as a guide for organizations to defend against AML attacks and to gain assurance in the security of their ML models.
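To illustrate the kind of evasion attack the review covers, the sketch below applies the Fast Gradient Sign Method (FGSM), a canonical AML attack, to a toy logistic-regression model. The model weights, example input, and perturbation budget are all hypothetical and chosen purely for illustration; they are not drawn from the paper.

```python
# Minimal FGSM sketch on a hand-built logistic model (illustrative only).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    # Linear logistic model: p(y=1 | x) = sigmoid(w . x + b)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    # For binary cross-entropy loss, the gradient w.r.t. the input x
    # is (p - y) * w; FGSM shifts each feature by eps in the direction
    # of the gradient's sign to maximize the loss.
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1.0 if g > 0 else -1.0) for xi, g in zip(x, grad)]

w, b = [2.0, -1.5], 0.1       # hypothetical trained weights
x, y = [1.0, 0.5], 1          # a correctly classified positive example

x_adv = fgsm(w, b, x, y, eps=0.6)
print(predict(w, b, x))       # clean confidence (above 0.5)
print(predict(w, b, x_adv))   # confidence after the attack (below 0.5)
```

A small, bounded perturbation flips the prediction even though the input barely changes, which is why the paper treats evasion attacks as a first-class threat. Open-source libraries such as IBM's Adversarial Robustness Toolbox implement this and related attacks against real models.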

Original language: English
Pages (from-to): 99382-99421
Number of pages: 40
Journal: IEEE Access
Volume: 12
DOIs
State: Published - 2024
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2024 The Authors.

Keywords

  • AI assurance
  • adversarial machine learning
  • cybersecurity
  • data privacy
  • secure software development lifecycle

