TY - JOUR
T1 - Cyber risk quantification for adversarial machine learning attacks
AU - Malik, Jasmita
AU - Muthalagu, Raja
AU - Pawar, Pranav M.
AU - Mukherjee, Mithun
N1 - Publisher Copyright:
© 2026 Elsevier Ltd.
PY - 2026/3
Y1 - 2026/3
N2 - Adversarial machine learning (AML) attacks, including evasion, poisoning, and privacy-targeting techniques, represent a new class of evolving threats to AI systems. However, traditional cyber risk quantification approaches struggle to capture the uncertainty and impact of such dynamic threats. This study introduces a novel framework to quantify the cyber risk exposure and business impact stemming from new-age AML attacks. Leveraging Monte Carlo simulations, the framework models probabilistic loss distributions based on attack likelihoods and impact ranges. Applied to a ransomware attack scenario on a machine learning system, the framework estimates an Annualized Loss Expectancy of approximately $1.6 million for an organization, revealing the potential for heavy-tailed, high-cost outcomes. The framework is further validated across diverse adversarial scenarios, including evasion, poisoning, and privacy attacks. The results provide decision-makers with a structured way to assess control effectiveness and prioritize cybersecurity investments using quantitative metrics. This work bridges the gap between technical threat intelligence and strategic financial planning for cybersecurity investment, offering a practical path toward the resilient and secure deployment of AI systems in organizations.
AB - Adversarial machine learning (AML) attacks, including evasion, poisoning, and privacy-targeting techniques, represent a new class of evolving threats to AI systems. However, traditional cyber risk quantification approaches struggle to capture the uncertainty and impact of such dynamic threats. This study introduces a novel framework to quantify the cyber risk exposure and business impact stemming from new-age AML attacks. Leveraging Monte Carlo simulations, the framework models probabilistic loss distributions based on attack likelihoods and impact ranges. Applied to a ransomware attack scenario on a machine learning system, the framework estimates an Annualized Loss Expectancy of approximately $1.6 million for an organization, revealing the potential for heavy-tailed, high-cost outcomes. The framework is further validated across diverse adversarial scenarios, including evasion, poisoning, and privacy attacks. The results provide decision-makers with a structured way to assess control effectiveness and prioritize cybersecurity investments using quantitative metrics. This work bridges the gap between technical threat intelligence and strategic financial planning for cybersecurity investment, offering a practical path toward the resilient and secure deployment of AI systems in organizations.
KW - Adversarial machine learning
KW - AI security
KW - Cyber risk quantification
KW - Evasion
KW - Monte Carlo simulation
KW - Poisoning
KW - Privacy
KW - Ransomware
UR - https://www.scopus.com/pages/publications/105027932156
U2 - 10.1016/j.compeleceng.2026.110964
DO - 10.1016/j.compeleceng.2026.110964
M3 - Article
AN - SCOPUS:105027932156
SN - 0045-7906
VL - 131
JO - Computers and Electrical Engineering
JF - Computers and Electrical Engineering
M1 - 110964
ER -