Evaluating Interactive Summarization: an Expansion-Based Framework

Ori Shapira, Ramakanth Pasunuru, Hadar Ronen, Mohit Bansal, Yael Amsterdamer, Ido Dagan

Research output: Working paper / Preprint

Abstract

Allowing users to interact with multi-document summarizers is a promising direction towards improving and customizing summary results. Different ideas for interactive summarization have been proposed in previous work, but these solutions are highly divergent and incomparable. In this paper, we develop an end-to-end evaluation framework for expansion-based interactive summarization, which considers the accumulating information along an interactive session. Our framework includes a procedure for collecting real user sessions, as well as evaluation measures that rely on standard metrics but are adapted to reflect interaction. All of our solutions are intended to be released publicly as a benchmark, allowing comparison of future developments in interactive summarization. We demonstrate the use of our framework by evaluating and comparing baseline implementations that we developed for this purpose, which will serve as part of our benchmark. Our extensive experimentation and analysis of these systems motivate our design choices and support the viability of our framework.
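To make the idea of "accumulating information along an interactive session" concrete, here is a minimal illustrative sketch, not the paper's actual measures: each expansion step is scored on the text accumulated so far against a reference summary, and the per-step scores are aggregated over the whole session. The sketch uses simple unigram recall as a stand-in for standard summarization metrics such as ROUGE, and a trapezoidal area-under-curve aggregation over words read; the function names (`session_auc`, `unigram_recall`) and the example session are invented for illustration.

```python
from collections import Counter

def unigram_recall(accumulated: str, reference: str) -> float:
    """Fraction of reference unigrams covered by the accumulated text.

    A simple stand-in for a ROUGE-style recall metric; a real evaluation
    would use a proper ROUGE implementation.
    """
    acc = Counter(accumulated.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(acc[t], ref[t]) for t in ref)
    total = sum(ref.values())
    return overlap / total if total else 0.0

def session_auc(expansions: list[str], reference: str) -> float:
    """Area under the recall-vs-words-read curve for one interactive session.

    Each expansion step reveals more text; we score the accumulated text at
    every step and integrate recall over the number of words read (trapezoid
    rule), so sessions that surface reference content early score higher.
    """
    accumulated = ""
    points = [(0, 0.0)]  # (words_read, recall) at each step
    for step_text in expansions:
        accumulated = (accumulated + " " + step_text).strip()
        words_read = len(accumulated.split())
        points.append((words_read, unigram_recall(accumulated, reference)))
    auc = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        auc += (x1 - x0) * (y0 + y1) / 2.0
    # Normalize by total words read so sessions of different lengths compare.
    return auc / points[-1][0] if points[-1][0] else 0.0

# Hypothetical session: an initial summary plus two user-requested expansions.
session = [
    "The committee approved the budget.",
    "The vote passed narrowly after a long debate.",
    "Opposition members criticized the allocation for infrastructure.",
]
reference = ("The committee narrowly approved the budget after debate; "
             "opposition criticized infrastructure spending.")
print(f"session AUC: {session_auc(session, reference):.3f}")
```

Under this kind of aggregation, a session that front-loads important reference content outscores one that reveals the same content late, which is the intuition behind evaluating the accumulated text at each expansion step rather than only the final summary.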
Original language: English
Publisher: arXiv preprint arXiv:2009.08380
Number of pages: 19
State: Published - 17 Sep 2020

Keywords

  • Computation and Language (cs.CL)
  • FOS: Computer and information sciences
