Open Access · Proceedings ArticleDOI

Joint Optimization of User-desired Content in Multi-document Summaries by Learning from User Feedback

TL;DR
This method interactively obtains user feedback to gradually improve the results of a state-of-the-art integer linear programming (ILP) framework for MDS, complementing fully automatic methods by producing high-quality summaries with a minimal number of iterations and rounds of feedback.
Abstract
In this paper, we propose an extractive multi-document summarization (MDS) system using joint optimization and active learning for content selection grounded in user feedback. Our method interactively obtains user feedback to gradually improve the results of a state-of-the-art integer linear programming (ILP) framework for MDS. Our method complements fully automatic methods in producing high-quality summaries with a minimal number of iterations and rounds of feedback. We conduct multiple simulation-based experiments and analyze the effect of feedback-based concept selection in the ILP setup in order to maximize the user-desired content in the summary.
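The concept-based ILP formulation behind such systems selects a subset of sentences that maximizes the total weight of the concepts they cover, subject to a summary length budget; user feedback then raises or lowers individual concept weights between iterations. The following is a minimal illustrative sketch of that objective. The sentences, concepts, and weights are invented toy data, and exhaustive search stands in for an actual ILP solver:

```python
from itertools import combinations

# Toy sentences mapped to the weighted "concepts" (e.g. bigrams) they cover.
# Concept weights would normally come from document statistics; here they are
# invented, and user feedback would adjust them up or down between iterations.
sentences = {
    "s1": ({"ilp", "summary"}, 6),    # (concepts covered, length in tokens)
    "s2": ({"summary", "user"}, 5),
    "s3": ({"ilp", "feedback"}, 7),
}
weights = {"ilp": 3.0, "summary": 2.0, "user": 1.0, "feedback": 2.5}
budget = 12  # maximum summary length in tokens

def coverage_value(subset):
    """Total weight of the distinct concepts covered by a sentence subset."""
    covered = set().union(*(sentences[s][0] for s in subset))
    return sum(weights[c] for c in covered)

# Exhaustive search over sentence subsets (an ILP solver does this efficiently).
best, best_val = (), 0.0
for r in range(len(sentences) + 1):
    for subset in combinations(sentences, r):
        if sum(sentences[s][1] for s in subset) <= budget:
            val = coverage_value(subset)
            if val > best_val:
                best, best_val = subset, val

print(sorted(best), best_val)  # → ['s2', 's3'] 8.5
```

In the interactive setting, the same objective would be re-solved after each round of feedback with the updated concept weights, so that accepted concepts are pulled into the summary and rejected ones are pushed out.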



Citations

An Assessment of the Accuracy of Automatic Evaluation in Summarization | NIST

TL;DR: An assessment of the automatic evaluations used for multi-document summarization of news, and recommendations about how any evaluation, manual or automatic, should be used to find statistically significant differences between summarization systems.
Proceedings ArticleDOI

SUPERT: Towards New Frontiers in Unsupervised Evaluation Metrics for Multi-Document Summarization

TL;DR: The authors propose to measure the quality of a summary by measuring its semantic similarity with a pseudo reference summary, i.e. selected salient sentences from the source documents, using contextualized embeddings and soft token alignment techniques.
Proceedings ArticleDOI

Adapting Neural Single-Document Summarization Model for Abstractive Multi-Document Summarization: A Pilot Study.

TL;DR: This paper proposes an approach to extend the neural abstractive model trained on large scale SDS data to the MDS task, which makes use of a small number of multi-document summaries for fine tuning.
References
Proceedings Article

ROUGE: A Package for Automatic Evaluation of Summaries

TL;DR: Introduces the four ROUGE measures included in the ROUGE summarization evaluation package — ROUGE-N, ROUGE-L, ROUGE-W, and ROUGE-S — together with their evaluations.
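As a rough illustration of the simplest of these measures, ROUGE-N is the recall of reference n-grams in the candidate summary. This is a simplified sketch with invented example sentences, not the official ROUGE package:

```python
from collections import Counter

def rouge_n(candidate: str, reference: str, n: int = 1) -> float:
    """Recall-oriented n-gram overlap between a candidate and one reference."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand = ngrams(candidate.split())
    ref = ngrams(reference.split())
    # Clipped overlap: each reference n-gram counts at most as often as it
    # appears in the candidate.
    overlap = sum(min(cand[g], ref[g]) for g in ref)
    return overlap / max(sum(ref.values()), 1)

# 5 of the 6 reference unigrams are matched ("lay" is not), so recall is 5/6.
score = rouge_n("the cat sat on the mat", "the cat lay on the mat", n=1)
print(score)
```

The real package adds stemming, stopword handling, multi-reference aggregation, and the L/W/S variants on top of this core recall computation.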
Proceedings Article

Efficient Estimation of Word Representations in Vector Space

TL;DR: Two novel model architectures for computing continuous vector representations of words from very large data sets are proposed and it is shown that these vectors provide state-of-the-art performance on the authors' test set for measuring syntactic and semantic word similarities.
Proceedings Article

TextRank: Bringing Order into Text

Rada Mihalcea et al.
TL;DR: TextRank, a graph-based ranking model for text processing, is introduced and it is shown how this model can be successfully used in natural language applications.
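TextRank scores text units by running a PageRank-style iteration over a similarity graph; for extractive summarization the nodes are sentences. A toy sketch with an invented three-sentence graph, using weighted power iteration in place of the full algorithm:

```python
# Toy symmetric similarity graph over three sentences; in TextRank these edge
# weights come from content overlap between sentence pairs.
edges = {
    0: {1: 0.5, 2: 0.2},
    1: {0: 0.5, 2: 0.4},
    2: {0: 0.2, 1: 0.4},
}
d = 0.85  # damping factor from the PageRank formulation

scores = {v: 1.0 for v in edges}
for _ in range(50):  # power iteration until scores stabilize
    new = {}
    for v in edges:
        # Each neighbor u passes on a share of its score proportional to the
        # weight of the edge u-v relative to u's total outgoing weight.
        rank = sum(
            edges[u][v] / sum(edges[u].values()) * scores[u]
            for u in edges
            if v in edges[u]
        )
        new[v] = (1 - d) + d * rank
    scores = new

top = max(scores, key=scores.get)  # sentence 1, the most strongly connected node
```

The highest-scoring sentences are then extracted in order as the summary; the same iteration over a word graph yields keyword extraction.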