
What are all papers about text summarization in 2017? 


Best insight from top research papers

Text summarization was a popular topic in 2017, and several papers explored different approaches and techniques for summarizing text. Tang et al. proposed a two-stage filter system for real-time summarization, using methods such as negative KL-divergence and cosine distance to model text similarity. Patel et al. discussed both extractive and abstractive summarization strategies, including TF-IDF, graph-theoretic, and machine learning approaches. AbuRa’ed et al. presented their text summarization work at the CL-SciSumm 2017 shared task. Narang et al. introduced a genetic algorithm-based approach to text summarization, aiming to create less redundant summaries that cover all topics in the document. Rautray and Balabantaray proposed a nature-inspired cuckoo search optimization algorithm for optimal sentence selection, reporting significant results on multi-document summarization.
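As a rough illustration of the two similarity measures named above, the sketch below assumes plain bag-of-words term-frequency vectors with add-epsilon smoothing; the actual features and smoothing used in Tang et al.'s system are not given here, so treat this as an illustrative assumption rather than their implementation.

# Minimal text-similarity sketch: cosine similarity and negative KL-divergence
# over unigram counts (illustrative assumption, not Tang et al.'s exact setup).
import math
from collections import Counter


def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine of the angle between term-frequency vectors of two texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def negative_kl_divergence(text_a: str, text_b: str, eps: float = 1e-9) -> float:
    """Negative KL-divergence D(P_a || P_b) over smoothed unigram distributions;
    values closer to zero indicate more similar texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    vocab = set(a) | set(b)
    total_a, total_b = sum(a.values()), sum(b.values())
    kl = 0.0
    for t in vocab:
        p = (a[t] + eps) / (total_a + eps * len(vocab))
        q = (b[t] + eps) / (total_b + eps * len(vocab))
        kl += p * math.log(p / q)
    return -kl


print(cosine_similarity("real-time text summarization", "summarization of text in real time"))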

Answers from top 4 papers

Papers (4) and insights

1. Anmol Narang, Neelam R. Prakash, Amit Arora (open-access journal article, 30 Jun 2017): The paper is about text summarization using a genetic algorithm. It does not mention any other papers about text summarization in 2017.
2. The paper covers PKUICST's approaches and results in the Real-Time Summarization track at TREC 2017. It does not provide information about other papers on text summarization in 2017.
3. The paper presents the LaSTUS/TALN system from CL-SciSumm 2017. It does not provide information about other papers on text summarization in 2017.
4. The paper does not mention any specific papers about text summarization in 2017.

Related Questions

What is summarizing? (5 answers)
Summarizing is the process of extracting the important information from a body of text and presenting it in a concise form. It aims to capture the theme being expressed while reducing reading time and speeding up information retrieval. There are two main classes of text summarization: extractive and abstractive. Extractive summarization identifies key sentences or phrases in the source text and groups them into a summary without rewriting the original text. Abstractive summarization, on the other hand, generates new sentences that are not present in the original text, focusing on the meaning of the source and reducing redundancy. Automatic text summarization is a well-known task in natural language processing (NLP) with applications such as social media monitoring, question-answering bots, and medical cases.
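To make the extractive idea concrete, here is a minimal sketch that scores each sentence by the sum of its TF-IDF weights and keeps the top-k sentences in their original order; the function name extractive_summary and the scoring rule are illustrative choices, not the method of any paper cited on this page.

# Minimal extractive summarization: rank sentences by total TF-IDF weight.
import re
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer


def extractive_summary(document: str, k: int = 2) -> str:
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]
    if len(sentences) <= k:
        return document
    tfidf = TfidfVectorizer().fit_transform(sentences)      # sentence x term matrix
    scores = np.asarray(tfidf.sum(axis=1)).ravel()          # importance score per sentence
    top = sorted(sorted(range(len(sentences)), key=lambda i: -scores[i])[:k])
    return " ".join(sentences[i] for i in top)               # keep original sentence order


doc = ("Automatic summarization shortens a text while preserving its key content. "
       "Extractive methods select existing sentences from the source. "
       "Abstractive methods generate new sentences instead. "
       "Both aim to reduce reading time.")
print(extractive_summary(doc, k=2))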
What are the different approaches to generating research paper summaries using AI? (4 answers)
Different approaches to generating research paper summaries using AI include leveraging powerful language models such as GPT, implementing and comparing pretrained models such as BERT, BART, and T5, and using pointer-generator networks with a coverage mechanism and contextual embedding layers. These approaches aim to automatically generate concise, relevant summaries of lengthy research papers, saving readers time while preserving the key information. Their effectiveness is evaluated with metrics such as ROUGE, F1, METEOR, and BERTScore F1. The proposed models have shown promising results, outperforming other baselines in terms of summarization quality. These approaches can be applied across domains and serve as a foundation for further research in text summarization and natural language processing.
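A hedged sketch of the pretrained-model workflow described above, using the Hugging Face transformers summarization pipeline with a BART checkpoint and the rouge-score package for evaluation; the model name, generation lengths, and example texts are illustrative choices, not the configuration of any specific paper.

# Summarize with a pretrained BART model and score the output with ROUGE.
from transformers import pipeline
from rouge_score import rouge_scorer

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")  # illustrative checkpoint

article = (
    "Text summarization systems condense long documents into short summaries. "
    "Recent work fine-tunes large pretrained encoder-decoder models such as "
    "BART and T5 on summarization corpora and evaluates them with ROUGE."
)
reference = "Pretrained encoder-decoder models are fine-tuned for summarization and scored with ROUGE."

generated = summarizer(article, max_length=40, min_length=10, do_sample=False)[0]["summary_text"]

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
print(generated)
print(scorer.score(reference, generated))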
What are the best practices for summarizing literature reviews? (5 answers)
The best practices for summarizing literature reviews include: having a clearly focused question, conducting a thorough search for relevant studies, using explicit criteria for including studies in the review, employing explicit criteria for judging the validity of the studies, utilizing an appropriate method of data synthesis, and providing suitable interpretation of the data presented. Additionally, it is important to take a systematic approach, which involves following a pre-defined protocol to identify relevant and trustworthy literature; this ensures that the review is replicable and that the reasons for the author's conclusions are explicit. It is also beneficial to use free and open source software (FOSS) and methods to improve the quality of literature reviews, as this can lower labor and economic costs, improve researcher control, and increase the potential for collaboration. Researchers should also develop skills in literature searching and critical appraisal to enhance the quality of their reviews.
What are the five most recent papers in video summarization? (5 answers)
Video summarization has seen significant advancements in recent years, and several papers have contributed to this field. Tiwari and Bhatnagar provide a comprehensive view of existing video summarization approaches and techniques, highlighting recent advances and discussing the paradigm shift that has occurred over the last two decades. Otani, Song, and Wang present an overview of video summarization, covering early studies as well as recent approaches that utilize deep learning techniques. Apostolidis, Adamantidou, Metsai, Mezaris, and Patras propose a new evaluation approach for video summarization algorithms, addressing the shortcomings of the established evaluation protocol. Zhao, Gong, and Li propose an AudioVisual Recurrent Network (AVRN) that jointly exploits audio and visual information for video summarization, demonstrating its effectiveness on benchmark datasets. These papers provide valuable insights into the current state of video summarization and offer new approaches and evaluation methods for further advancements in the field.
Which are the models used for NLG tasks including summarization? (5 answers)
Several models are used for NLG tasks, including summarization. One approach is to leverage pre-trained sequence-to-sequence models, which have shown strong performance; these include multilingual and language-specific pre-trained models used for generating text in multiple languages. Another approach is to use conversational large language models (LLMs) as automatic evaluators for open-ended NLG tasks, which has been demonstrated to be viable. Additionally, a unified framework for multimodal summarization has been proposed, covering both single-modal and multimodal output summarization; it includes unsupervised graph-based models for different scenarios, such as generic multimodal ranking, modal-dominated multimodal ranking, and non-redundant text-image multimodal ranking. Finally, a convolutional transformer model has been proposed for capturing the unique style and quirks of conversational chatbots, specifically for short responses.
What is document summarisation? (3 answers)
Document summarization is a text compression technology that automatically converts a document or a collection of documents into a short summary. It is used to extract condensed information for readers in the era of information overload. There are three main approaches to document summarization: extractive, abstractive, and hybrid. Extractive summarization involves selecting important sentences or phrases from the original document, while abstractive summarization involves generating new sentences that capture the main ideas. Hybrid summarization combines elements of both approaches. Document summarization can also be used in a comparative setting, where the goal is to select representative documents from different groups and distinguish them from others; this can be achieved through objective functions based on machine learning and data subset selection techniques.
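As a second extractive illustration, complementary to the TF-IDF scoring sketch earlier, the following ranks sentences by centrality in a sentence-similarity graph, in the spirit of TextRank/LexRank; it is a generic sketch under that assumption, not the method of any paper summarized on this page.

# Graph-based extractive summarization: rank sentences by PageRank centrality
# over a TF-IDF cosine-similarity graph.
import re
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def graph_summary(document: str, k: int = 2) -> str:
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]
    if len(sentences) <= k:
        return document
    sim = cosine_similarity(TfidfVectorizer().fit_transform(sentences))
    graph = nx.from_numpy_array(sim)          # nodes = sentences, edge weights = similarity
    ranks = nx.pagerank(graph)
    top = sorted(sorted(ranks, key=ranks.get, reverse=True)[:k])
    return " ".join(sentences[i] for i in top)


doc = ("Document summarization compresses a document into a short summary. "
       "Extractive methods pick existing sentences. "
       "Abstractive methods write new ones. "
       "Hybrid methods combine both ideas.")
print(graph_summary(doc, k=2))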