
Shashi Narayan

Researcher at Google

Publications -  77
Citations -  4435

Shashi Narayan is an academic researcher at Google. He has contributed to research on topics including automatic summarization and computer science. He has an h-index of 23, having co-authored 63 publications that have received 2,636 citations. His previous affiliations include the University of Lorraine and the University of Edinburgh.

Papers
Proceedings ArticleDOI

Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization

TL;DR: This paper proposes a novel abstractive model that is conditioned on the article's topics and based entirely on convolutional neural networks. Experiments demonstrate that this architecture captures long-range dependencies in a document and recognizes pertinent content, outperforming an oracle extractive system and state-of-the-art abstractive approaches in both automatic and human evaluation.
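The topic-conditioning idea can be sketched as follows. This is a toy, pure-Python illustration of the general mechanism (augmenting each word vector with a document-level topic vector before convolving), not the paper's actual architecture; all names, shapes, and values are assumptions for illustration.

```python
def topic_conditioned_conv(embeddings, topic, kernel, width=3):
    """1-D convolution over word vectors, each augmented with the
    document's topic vector, so every filter sees both the local
    words and global topic context (toy sketch, single filter).
    """
    dim = len(embeddings[0]) + len(topic)
    assert len(kernel) == width * dim
    # Concatenate the topic vector to every word embedding.
    augmented = [e + topic for e in embeddings]
    outputs = []
    for i in range(len(augmented) - width + 1):
        # Flatten the window of `width` augmented vectors and take a dot product.
        window = [x for vec in augmented[i:i + width] for x in vec]
        outputs.append(sum(w * x for w, x in zip(kernel, window)))
    return outputs

emb = [[1.0], [2.0], [3.0], [4.0]]  # hypothetical one-dim word vectors
topic = [0.5]                       # hypothetical document topic vector
kernel = [1.0, 0.0] * 3             # this filter reads word dims, ignores topic dims
feats = topic_conditioned_conv(emb, topic, kernel)  # [6.0, 9.0]
```

Because the topic vector is appended to every position, a filter with nonzero weights on the topic dimensions responds differently to the same words in documents about different topics.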
Proceedings ArticleDOI

On Faithfulness and Factuality in Abstractive Summarization

TL;DR: The study finds that neural abstractive summarization models are highly prone to hallucinating content that is unfaithful to the input document, and that textual entailment measures correlate better with faithfulness than standard metrics do, potentially pointing the way to better automatic evaluation metrics as well as training and decoding criteria.
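The entailment-based evaluation idea can be sketched as follows: score each summary sentence by how strongly the source document entails it, so hallucinated sentences score low. This is a minimal sketch, not the paper's metric; `entailment_prob` is a hypothetical stand-in for a trained NLI model, and the word-overlap proxy below exists only to make the example runnable.

```python
def faithfulness_score(document_sentences, summary_sentences, entailment_prob):
    """Average, over summary sentences, of the best entailment support
    any source sentence gives that sentence; 1.0 = fully supported,
    near 0.0 = likely hallucinated.

    `entailment_prob(premise, hypothesis)` stands in for a trained NLI
    model returning P(premise entails hypothesis) in [0, 1].
    """
    scores = []
    for hyp in summary_sentences:
        best = max(entailment_prob(prem, hyp) for prem in document_sentences)
        scores.append(best)
    return sum(scores) / len(scores)

def overlap_entailment(premise, hypothesis):
    """Crude word-overlap proxy for an entailment model (toy only)."""
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    return len(p & h) / len(h)

doc = ["the cat sat on the mat", "it was raining outside"]
faithful = ["the cat sat on the mat"]
hallucinated = ["the dog barked loudly"]
print(faithfulness_score(doc, faithful, overlap_entailment))      # 1.0
print(faithfulness_score(doc, hallucinated, overlap_entailment))  # 0.25
```

Swapping `overlap_entailment` for a real NLI model is what would turn this from a toy into something resembling an entailment-based faithfulness metric.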
Proceedings ArticleDOI

Ranking Sentences for Extractive Summarization with Reinforcement Learning

TL;DR: The authors conceptualize extractive summarization as a sentence ranking task and propose a novel training algorithm that globally optimizes the ROUGE evaluation metric through a reinforcement learning objective; the resulting system outperforms state-of-the-art extractive and abstractive systems in both automatic and human evaluation.
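The ranking-by-reinforcement idea can be sketched as a REINFORCE update on per-sentence inclusion scores: sample an extract, score it with a ROUGE-like reward, and nudge the scores toward high-reward selections. This is a minimal sketch under stated assumptions, not the paper's algorithm; the unigram-F1 reward, learning rate, and the `AlwaysPick` stub are all hypothetical.

```python
import math
import random

def rouge1_f(candidate, reference):
    """Unigram-overlap F1, a crude stand-in for the ROUGE-1 reward."""
    c, r = candidate.split(), reference.split()
    overlap = len(set(c) & set(r))
    if overlap == 0:
        return 0.0
    prec, rec = overlap / len(c), overlap / len(r)
    return 2 * prec * rec / (prec + rec)

def reinforce_step(scores, sentences, reference, lr=0.5, rng=random):
    """One REINFORCE update on per-sentence inclusion scores.

    Sentence i is included with probability sigmoid(scores[i]); the
    sampled extract's reward scales the log-probability gradient
    (pick - prob), pushing scores toward high-reward selections.
    """
    probs = [1 / (1 + math.exp(-s)) for s in scores]
    picks = [1 if rng.random() < p else 0 for p in probs]
    extract = " ".join(s for s, k in zip(sentences, picks) if k)
    reward = rouge1_f(extract, reference)
    new_scores = [s + lr * reward * (k - p)
                  for s, p, k in zip(scores, probs, picks)]
    return new_scores, reward

class AlwaysPick:
    """Deterministic stand-in for `random`: always include every sentence."""
    def random(self):
        return 0.0

scores, reward = reinforce_step([0.0, 0.0], ["a b", "x y"], "a b",
                                rng=AlwaysPick())
```

Run in a loop with a real sampler, sentences that help the reward drift toward higher scores and thus rank higher; ranking sentences by their learned scores then yields the extract.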
Journal ArticleDOI

Leveraging Pre-trained Checkpoints for Sequence Generation Tasks

TL;DR: A Transformer-based sequence-to-sequence model that is compatible with publicly available pre-trained BERT, GPT-2, and RoBERTa checkpoints is developed and an extensive empirical study on the utility of initializing the model, both encoder and decoder, with these checkpoints is conducted.
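The warm-starting idea can be sketched as follows: copy every model parameter whose name and shape match an entry in the pretrained checkpoint, and leave the rest (e.g. decoder cross-attention, which BERT-style encoder-only checkpoints lack) at their random initialization. This is a toy sketch; the parameter names are hypothetical and real frameworks operate on tensors, not flat lists.

```python
def init_from_checkpoint(model_params, checkpoint):
    """Warm-start a seq2seq model from a pretrained checkpoint.

    Toy version: both arguments are flat dicts of name -> list of
    floats. Matching names with matching sizes are copied from the
    checkpoint; everything else keeps its random initialization.
    Returns the lists of warm- and cold-started parameter names.
    """
    initialized, fresh = [], []
    for name, value in model_params.items():
        if name in checkpoint and len(checkpoint[name]) == len(value):
            model_params[name] = list(checkpoint[name])  # copy pretrained weights
            initialized.append(name)
        else:
            fresh.append(name)  # no pretrained counterpart: stays random
    return initialized, fresh

# Hypothetical parameter names, for illustration only.
model = {
    "encoder.layer0.self_attn": [0.1, 0.2],
    "decoder.layer0.self_attn": [0.3, 0.4],
    "decoder.layer0.cross_attn": [0.5, 0.6],  # absent from the checkpoint
}
bert_ckpt = {
    "encoder.layer0.self_attn": [1.0, 2.0],
    "decoder.layer0.self_attn": [3.0, 4.0],
}
warm, cold = init_from_checkpoint(model, bert_ckpt)
```

The interesting empirical question the paper studies is precisely which of these choices (which checkpoint, which side of the model, shared vs. separate weights) helps on which generation task.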