Abstract
Although the robustness of networks to random attacks has been widely studied, intentional destruction by an intelligent agent is not tractable with previous methods. Here we devise a single-player game on a lattice that mimics the logic of an attacker attempting to destroy a network. The objective of the game is to disable all nodes in the fewest steps. We develop a reinforcement learning approach using deep Q-learning that is capable of learning to play this game successfully and, in so doing, to attack a network optimally. Because the learning algorithm is universal, we train agents on different definitions of robustness and compare the learned strategies. We find that superficially similar definitions of robustness induce different strategies in the trained agent, implying that optimally attacking or defending a network is sensitive to the particular objective. Our method provides an approach to understanding network robustness, with potential applications to other discrete processes in disordered systems.
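To make the setup concrete, below is a minimal deep Q-learning sketch in the spirit of the abstract; it is not the authors' implementation. The toy environment (`ToyLatticeAttackEnv`), the 4x4 lattice size, the crude one-extra-site "cascade" rule, the network architecture, and all hyperparameters are illustrative assumptions. The reward of -1 per step means that maximizing return corresponds to disabling all nodes in the fewest steps.

```python
# Minimal deep Q-learning sketch (illustrative; not the paper's code).
# Toy game: each action disables one lattice site; the episode ends when
# every site is disabled. Reward is -1 per step, so the optimal policy
# destroys the lattice in the fewest steps.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

N = 16  # assumed 4x4 lattice, flattened to 16 binary site states


class ToyLatticeAttackEnv:
    """State: 1 = site active, 0 = site disabled. Action: index of a site.
    Hitting an active site also disables one random active site, a crude
    stand-in for the cascading damage a real attack game would model."""

    def reset(self):
        self.state = np.ones(N, dtype=np.float32)
        return self.state.copy()

    def step(self, action):
        if self.state[action] == 1:
            self.state[action] = 0
            active = np.flatnonzero(self.state)
            if len(active) > 0:  # toy "cascade": one extra site fails
                self.state[random.choice(active.tolist())] = 0
        done = bool(self.state.sum() == 0)
        return self.state.copy(), -1.0, done  # -1 per step => minimize steps


qnet = nn.Sequential(nn.Linear(N, 64), nn.ReLU(), nn.Linear(64, N))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
buffer, gamma, eps, batch_size = deque(maxlen=10_000), 0.99, 0.1, 64
env = ToyLatticeAttackEnv()

for episode in range(200):
    s, done = env.reset(), False
    while not done:
        # Epsilon-greedy action over Q-values, masked to active sites.
        if random.random() < eps:
            a = random.choice(np.flatnonzero(s).tolist())
        else:
            with torch.no_grad():
                q = qnet(torch.tensor(s))
            q[torch.tensor(s) == 0] = -1e9  # never target disabled sites
            a = int(q.argmax())
        s2, r, done = env.step(a)
        buffer.append((s, a, r, s2, done))
        s = s2
        # One-step temporal-difference update on a replay minibatch.
        if len(buffer) >= batch_size:
            batch = random.sample(buffer, batch_size)
            bs, ba, br, bs2, bd = map(np.array, zip(*batch))
            qs = qnet(torch.as_tensor(bs, dtype=torch.float32))
            q_sa = qs[torch.arange(batch_size), torch.as_tensor(ba)]
            with torch.no_grad():
                target = torch.as_tensor(br, dtype=torch.float32) + gamma * (
                    1 - torch.as_tensor(bd, dtype=torch.float32)
                ) * qnet(torch.as_tensor(bs2, dtype=torch.float32)).max(1).values
            loss = nn.functional.mse_loss(q_sa, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Swapping in a different reward, for example one tied to the size of the largest connected component after each removal, would instantiate a different definition of robustness; comparing the strategies learned under such superficially similar objectives is the experiment the abstract describes.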
| Original language | English |
| --- | --- |
| Article number | 013067 |
| Journal | Physical Review Research |
| Volume | 6 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jan 2024 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2024 authors. Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
Funding
S.P.C. acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC, RGPIN-2020-05015). The authors thank O. Varol, A. Grishchenko, and X. Meng for helpful discussions.
| Funders | Funder number |
| --- | --- |
| Natural Sciences and Engineering Research Council of Canada | RGPIN-2020-05015 |