TY - JOUR
T1 - Lessons from complex systems science for AI governance
AU - Kolt, Noam
AU - Shur-Ofry, Michal
AU - Cohen, Reuven
N1 - Publisher Copyright:
© 2025 The Author(s)
PY - 2025/8/8
Y1 - 2025/8/8
AB - The study of complex adaptive systems, pioneered in physics, biology, and the social sciences, offers important lessons for artificial intelligence (AI) governance. Contemporary AI systems and the environments in which they operate exhibit many of the properties characteristic of complex systems, including nonlinear growth patterns, emergent phenomena, and cascading effects that can lead to catastrophic failures. Complex systems science can help illuminate the features of AI that pose central challenges for policymakers, such as feedback loops induced by training AI models on synthetic data and the interconnectedness between AI systems and critical infrastructure. Drawing on insights from other domains shaped by complex systems, including public health and climate change, we examine how efforts to govern AI are marked by deep uncertainty. To contend with this challenge, we propose three desiderata for designing a set of complexity-compatible AI governance principles comprised of early and scalable intervention, adaptive institutional design, and risk thresholds calibrated to trigger timely and effective regulatory responses.
KW - cascading risks
KW - complex adaptive systems
KW - emergence
KW - feedback loops
KW - regulation and governance
KW - scaling
UR - https://www.scopus.com/pages/publications/105012499053
U2 - 10.1016/j.patter.2025.101341
DO - 10.1016/j.patter.2025.101341
M3 - Systematic review
C2 - 40843345
AN - SCOPUS:105012499053
SN - 2666-3899
VL - 6
JO - Patterns
JF - Patterns
IS - 8
M1 - 101341
ER -