Cyber risk quantification for adversarial machine learning attacks

  • Jasmita Malik
  • Raja Muthalagu
  • Pranav M. Pawar
  • Mithun Mukherjee

Research output: Contribution to journal › Article › peer-review

Abstract

Adversarial machine learning (AML) attacks, including evasion, poisoning, and privacy-targeting techniques, represent an evolving class of threats to AI systems. Traditional cyber risk quantification approaches, however, struggle to capture the uncertainty and impact of such dynamic threats. This study introduces a novel framework to quantify the cyber risk exposure and business impact stemming from emerging AML attacks. Leveraging Monte Carlo simulations, the framework models probabilistic loss distributions based on attack likelihoods and impact ranges. Applied to a ransomware attack scenario on a machine learning system, the framework estimates an Annualized Loss Expectancy (ALE) of approximately $1.6 million for an organization, revealing the potential for unexpected heavy-tailed, high-cost outcomes. The framework is further validated across diverse adversarial scenarios, including evasion, poisoning, and privacy attacks. The results give decision-makers a structured way to assess control effectiveness and prioritize cybersecurity investments using quantitative metrics. This work bridges the gap between technical threat intelligence and strategic financial planning for cybersecurity investment, offering a practical path toward the resilient and secure deployment of AI systems in organizations.
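The Monte Carlo approach the abstract describes can be illustrated with a minimal sketch: sample an annual attack count from a frequency distribution, sample a per-event loss from a heavy-tailed impact distribution, and average the simulated annual losses to estimate the ALE. The distributions and all parameter values below (`rate`, `impact_mu`, `impact_sigma`) are hypothetical placeholders, not the paper's calibrated inputs.

```python
import random
import statistics

random.seed(42)


def simulate_annual_loss(rate: float, impact_mu: float, impact_sigma: float) -> float:
    """One simulated year: a Poisson-like event count times lognormal per-event impacts.

    The event count is drawn by accumulating exponential inter-arrival times
    until the simulated year is exceeded.
    """
    events, t = 0, 0.0
    while True:
        t += random.expovariate(rate)
        if t > 1.0:
            break
        events += 1
    # Each attack that occurs incurs a heavy-tailed (lognormal) loss.
    return sum(random.lognormvariate(impact_mu, impact_sigma) for _ in range(events))


# Hypothetical parameters: ~1.2 attacks/year, lognormal impact with median exp(13.8).
losses = [simulate_annual_loss(rate=1.2, impact_mu=13.8, impact_sigma=1.0)
          for _ in range(100_000)]

ale = statistics.mean(losses)                     # Annualized Loss Expectancy
p95 = sorted(losses)[int(0.95 * len(losses))]     # tail-risk percentile
print(f"ALE ~= ${ale:,.0f}; 95th-percentile annual loss ~= ${p95:,.0f}")
```

Reporting a tail percentile alongside the mean is what surfaces the heavy-tail, high-cost outcomes the abstract highlights: the 95th-percentile annual loss here is several times the ALE.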

Original language: English
Article number: 110964
Journal: Computers and Electrical Engineering
Volume: 131
DOIs
State: Published - Mar 2026
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2026 Elsevier Ltd.

Keywords

  • Adversarial machine learning
  • AI security
  • Cyber risk quantification
  • Evasion
  • Monte Carlo simulation
  • Poisoning
  • Privacy
  • Ransomware
