Lessons from complex systems science for AI governance

Noam Kolt, Michal Shur-Ofry, Reuven Cohen

Research output: Contribution to journal › Review article › peer-review

Abstract

The study of complex adaptive systems, pioneered in physics, biology, and the social sciences, offers important lessons for artificial intelligence (AI) governance. Contemporary AI systems and the environments in which they operate exhibit many of the properties characteristic of complex systems, including nonlinear growth patterns, emergent phenomena, and cascading effects that can lead to catastrophic failures. Complex systems science can help illuminate the features of AI that pose central challenges for policymakers, such as feedback loops induced by training AI models on synthetic data and the interconnectedness between AI systems and critical infrastructure. Drawing on insights from other domains shaped by complex systems, including public health and climate change, we examine how efforts to govern AI are marked by deep uncertainty. To contend with this challenge, we propose three desiderata for designing complexity-compatible AI governance principles: early and scalable intervention, adaptive institutional design, and risk thresholds calibrated to trigger timely and effective regulatory responses.

Original language: English
Article number: 101341
Journal: Patterns
Volume: 6
Issue number: 8
DOIs
State: Published - 8 Aug 2025

Bibliographical note

Publisher Copyright:
© 2025 The Author(s)

Keywords

  • cascading risks
  • complex adaptive systems
  • emergence
  • feedback loops
  • regulation and governance
  • scaling
