TY - JOUR
T1 - Provable limitations of acquiring meaning from ungrounded form: What will future language models understand?
AU - Merrill, William
AU - Goldberg, Yoav
AU - Schwartz, Roy
AU - Smith, Noah A.
N1 - Publisher Copyright:
© 2021 Association for Computational Linguistics.
PY - 2021/9/21
Y1 - 2021/9/21
AB - Language models trained on billions of tokens have recently led to unprecedented results on many NLP tasks. This success raises the question of whether, in principle, a system can ever "understand" raw text without access to some form of grounding. We formally investigate the abilities of ungrounded systems to acquire meaning. Our analysis focuses on the role of "assertions": textual contexts that provide indirect clues about the underlying semantics. We study whether assertions enable a system to emulate representations preserving semantic relations like equivalence. We find that assertions enable semantic emulation of languages that satisfy a strong notion of semantic transparency. However, for classes of languages where the same expression can take different values in different contexts, we show that emulation can become uncomputable. Finally, we discuss differences between our formal model and natural language, exploring how our results generalize to a modal setting and other semantic relations. Together, our results suggest that assertions in code or language do not provide sufficient signal to fully emulate semantic representations. We formalize ways in which ungrounded language models appear to be fundamentally limited in their ability to "understand".
UR - http://www.scopus.com/inward/record.url?scp=85117974192&partnerID=8YFLogxK
U2 - 10.1162/tacl_a_00412
DO - 10.1162/tacl_a_00412
M3 - Article
AN - SCOPUS:85117974192
SN - 2307-387X
VL - 9
SP - 1047
EP - 1060
JO - Transactions of the Association for Computational Linguistics
JF - Transactions of the Association for Computational Linguistics
ER -