Abstract
We report on the SemEval 2019 task on math question answering. We provided a question set derived from Math SAT practice exams, including 2778 training questions and 1082 test questions. For a significant subset of these questions, we also provided SMT-LIB logical form annotations and an interpreter that could solve these logical forms. Systems were evaluated based on the percentage of correctly answered questions. The top system correctly answered 45% of the test questions, a considerable improvement over the 17% random guessing baseline.
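To make the annotation format concrete, below is a minimal sketch of how an SMT-LIB-style logical form for a simple SAT-style algebra question could be solved with the z3 Python bindings. The example question, its encoding, and the use of z3 are illustrative assumptions on my part; the task distributed its own annotation schema and interpreter, which are not shown here.

```python
# Minimal sketch (not the task's own interpreter): solving a hypothetical
# SMT-LIB-style logical form with the z3 Python bindings (pip install z3-solver).
from z3 import Solver, parse_smt2_string, sat

# Hypothetical question: "If 3x + 7 = 22, what is the value of x?"
# The encoding below is illustrative and not taken from the task data.
logical_form = """
(declare-const x Real)
(assert (= (+ (* 3.0 x) 7.0) 22.0))
"""

solver = Solver()
for assertion in parse_smt2_string(logical_form):
    solver.add(assertion)

if solver.check() == sat:
    print(solver.model())  # e.g. [x = 5]
```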
| Original language | English |
| --- | --- |
| Title of host publication | NAACL HLT 2019 - International Workshop on Semantic Evaluation, SemEval 2019, Proceedings of the 13th Workshop |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 893-899 |
| Number of pages | 7 |
| ISBN (Electronic) | 9781950737062 |
| State | Published - 2019 |
| Externally published | Yes |
| Event | 13th International Workshop on Semantic Evaluation, SemEval 2019, co-located with the 17th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2019 - Minneapolis, United States. Duration: 6 Jun 2019 → 7 Jun 2019 |
Publication series

| Name | NAACL HLT 2019 - International Workshop on Semantic Evaluation, SemEval 2019, Proceedings of the 13th Workshop |
| --- | --- |
Conference

| Conference | 13th International Workshop on Semantic Evaluation, SemEval 2019, co-located with the 17th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2019 |
| --- | --- |
| Country/Territory | United States |
| City | Minneapolis |
| Period | 6/06/19 → 7/06/19 |
Bibliographical note

Publisher Copyright: © 2019 Association for Computational Linguistics