Social norms, the unspoken commonsense rules about acceptable social behavior, are crucial to understanding the underlying causes and intents of people's actions in narratives. For example, underlying an action such as “wanting to call cops on my neighbor” are social norms that inform our conduct, such as “It is expected that you report crimes.” We present SOCIAL CHEMISTRY, a new conceptual formalism to study people's everyday social norms and moral judgments over a rich spectrum of real-life situations described in natural language. We introduce SOCIAL-CHEM-101, a large-scale corpus that catalogs 292k rules-of-thumb such as “It is rude to run a blender at 5am” as the basic conceptual units. Each rule-of-thumb is further broken down along 12 different dimensions of people's judgments, including social judgments of good and bad, moral foundations, expected cultural pressure, and assumed legality, which together amount to over 4.5 million annotations of categorical labels and free-text descriptions. Comprehensive empirical results based on state-of-the-art neural models demonstrate that computational modeling of social norms is a promising research direction. Our model framework, NEURAL NORM TRANSFORMER, learns and generalizes SOCIAL-CHEM-101 to successfully reason about previously unseen situations, generating relevant (and potentially novel) attribute-aware social rules-of-thumb.
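The corpus structure described above (a rule-of-thumb tied to a situation and annotated along several judgment dimensions) can be sketched as a simple record type. This is an illustrative sketch only; the field names and label values below are assumptions for exposition, not the actual SOCIAL-CHEM-101 schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RuleOfThumb:
    """Hypothetical record for one annotated rule-of-thumb.

    Field names are illustrative; the real corpus defines 12 judgment
    dimensions, only a few of which are sketched here.
    """
    situation: str                       # free-text situation description
    rot: str                             # the rule-of-thumb itself
    social_judgment: str                 # e.g. "bad" / "ok" / "good"
    moral_foundations: List[str] = field(default_factory=list)
    cultural_pressure: str = "neutral"   # expected pressure to conform
    legality: str = "legal"              # assumed legality of the action

# Example record, using the blender rule-of-thumb from the abstract.
example = RuleOfThumb(
    situation="running a blender at 5am",
    rot="It is rude to run a blender at 5am",
    social_judgment="bad",
    moral_foundations=["care-harm"],  # hypothetical label value
)
```

A collection of such records, paired with situation text, is the kind of input a generative model like the paper's NEURAL NORM TRANSFORMER could be trained on to produce attribute-conditioned rules-of-thumb for unseen situations.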
|Title of host publication||EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference|
|Publisher||Association for Computational Linguistics (ACL)|
|Number of pages||18|
|State||Published - 2020|
|Event||2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020 - Virtual, Online|
Duration: 16 Nov 2020 → 20 Nov 2020
Bibliographical note
Funding Information:
The authors would like to thank Nicholas Lourie, Rowan Zellers, and Chandra Bhagavatula. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE1256082, and in part by NSF (IIS-1714566), DARPA CwC through ARO (W911NF15-1-0543), DARPA MCS program through NIWC Pacific (N66001-19-2-4031), and Allen Institute for AI.
© 2020 Association for Computational Linguistics