TY - JOUR
T1 - Aligning generalization between humans and machines
AU - Ilievski, Filip
AU - Hammer, Barbara
AU - van Harmelen, Frank
AU - Paassen, Benjamin
AU - Saralajew, Sascha
AU - Schmid, Ute
AU - Biehl, Michael
AU - Bolognesi, Marianna
AU - Dong, Xin Luna
AU - Gashteovski, Kiril
AU - Hitzler, Pascal
AU - Marra, Giuseppe
AU - Minervini, Pasquale
AU - Mundt, Martin
AU - Ngomo, Axel Cyrille Ngonga
AU - Oltramari, Alessandro
AU - Pasi, Gabriella
AU - Saribatur, Zeynep G.
AU - Serafini, Luciano
AU - Shawe-Taylor, John
AU - Shwartz, Vered
AU - Skitalinskaya, Gabriella
AU - Stachl, Clemens
AU - van de Ven, Gido M.
AU - Villmann, Thomas
N1 - Publisher Copyright:
© Springer Nature Limited 2025.
PY - 2025/9
Y1 - 2025/9
AB - Recent advances in artificial intelligence (AI)—including generative approaches—have resulted in technology that can support humans in scientific discovery and forming decisions, but may also disrupt democracies and target individuals. The responsible use of AI and its participation in human–AI teams increasingly shows the need for AI alignment, that is, to make AI systems act according to our preferences. A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalize. In cognitive science, human generalization commonly involves abstraction and concept learning. By contrast, AI generalization encompasses out-of-domain generalization in machine learning, rule-based reasoning in symbolic AI, and abstraction in neurosymbolic AI. Here we combine insights from AI and cognitive science to identify key commonalities and differences across three dimensions: notions of, methods for, and evaluation of generalization. We map the different conceptualizations of generalization in AI and cognitive science along these three dimensions and consider their role for alignment in human–AI teaming. This results in interdisciplinary challenges across AI and cognitive science that must be tackled to support effective and cognitively supported alignment in human–AI teaming scenarios.
UR - https://www.scopus.com/pages/publications/105016471528
DO - 10.1038/s42256-025-01109-4
M3  - Article
AN - SCOPUS:105016471528
SN - 2522-5839
VL - 7
SP - 1378
EP - 1389
JO - Nature Machine Intelligence
JF - Nature Machine Intelligence
IS - 9
ER -