Topic

Natural language understanding

About: Natural language understanding is a research topic. Over its lifetime, 2,577 publications have been published within this topic, receiving 51,905 citations.


Papers
Patent
18 May 1988
TL;DR: In this patent, a hybrid natural language understanding (NLU) system is designed for processing natural language text. The system includes a preprocessor; a word look-up and morphology module that communicates with a lexicon and a learning module; a syntactic parser that interfaces with an augmented transition network (ATN) grammar; a case frame applier that converts the syntactic structure into canonical, semantic "case frames"; and a discourse analysis component that integrates explicit and implied information in the text into a conceptual structure representing its meaning.
Abstract: A hybrid natural language understanding (NLU) system which is particularly designed for processing natural language text. Primary functional components of the NLU system include a preprocessor; a word look-up and morphology module which communicates with a lexicon and a learning module; a syntactic parser which interfaces with an augmented transition network (ATN) grammar; a case frame applier, which converts the syntactic structure into canonical, semantic "case frames"; and a discourse analysis component which integrates explicit and implied information in the text into a conceptual structure which represents its meaning. This structure may be passed on to a knowledge based system, data base, to interested analysts or decision makers, etc. Significant feedback points are provided, e.g., the case frame applier may notify the syntactic parser of a semantically incorrect parse, or the syntactic parser may seek a semantic judgment based on a fragmentary parse. This system incorporates a novel semantic analysis approach based largely on case grammar.

429 citations
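The pipeline this patent describes (preprocessing, lexicon look-up and morphology, ATN-based syntactic parsing, case frame application, discourse analysis) can be pictured as a chain of transformations over text. The Python below is a minimal, hypothetical sketch of that flow; every class and function name is an assumption of this sketch, the ATN parser and discourse stage are reduced to placeholders, and none of it reflects the patent's actual implementation.

```python
# Hypothetical sketch of the NLU pipeline described in the patent abstract above.
# All names are illustrative; the discourse-analysis stage is omitted for brevity.
from dataclasses import dataclass, field


@dataclass
class CaseFrame:
    """Canonical semantic representation of one clause (predicate plus role fillers)."""
    predicate: str
    roles: dict = field(default_factory=dict)


def preprocess(text: str) -> list[str]:
    # Stand-in for the preprocessor: split raw text into sentences.
    return [s.strip() for s in text.split(".") if s.strip()]


def lookup_and_morphology(sentence: str, lexicon: dict) -> list[dict]:
    # Attach lexical entries to each word; unknown words get a default entry,
    # which is where a learning module could intervene.
    return [lexicon.get(w.lower(), {"word": w, "pos": "UNKNOWN"}) for w in sentence.split()]


def parse(tokens: list[dict]) -> dict:
    # Placeholder for an ATN-based syntactic parse; here we simply group tokens.
    return {"subject": tokens[:1], "verb": tokens[1:2], "object": tokens[2:]}


def apply_case_frames(parse_tree: dict) -> CaseFrame:
    # Map the syntactic structure into a canonical semantic case frame.
    verb = parse_tree["verb"][0]["word"] if parse_tree["verb"] else ""
    return CaseFrame(
        predicate=verb,
        roles={
            "agent": [t["word"] for t in parse_tree["subject"]],
            "theme": [t["word"] for t in parse_tree["object"]],
        },
    )


def understand(text: str, lexicon: dict) -> list[CaseFrame]:
    # End-to-end flow: preprocess -> lexicon/morphology -> parse -> case frames.
    frames = []
    for sentence in preprocess(text):
        tokens = lookup_and_morphology(sentence, lexicon)
        frames.append(apply_case_frames(parse(tokens)))
    return frames


if __name__ == "__main__":
    lexicon = {
        "analysts": {"word": "analysts", "pos": "NOUN"},
        "review": {"word": "review", "pos": "VERB"},
        "reports": {"word": "reports", "pos": "NOUN"},
    }
    print(understand("Analysts review reports.", lexicon))
```

The feedback paths the abstract mentions, such as the case frame applier rejecting a semantically incorrect parse, would sit between apply_case_frames and parse and are left out of this sketch.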

Posted Content
TL;DR: This work proposes a joint intent classification and slot filling model based on BERT that achieves significant improvement on intent classification accuracy, slot filling F1, and sentence-level semantic frame accuracy on several public benchmark datasets, compared to the attention-based recurrent neural network models and slot-gated models.
Abstract: Intent classification and slot filling are two essential tasks for natural language understanding. They often suffer from small-scale human-labeled training data, resulting in poor generalization capability, especially for rare words. Recently a new language representation model, BERT (Bidirectional Encoder Representations from Transformers), facilitates pre-training deep bidirectional representations on large-scale unlabeled corpora, and has created state-of-the-art models for a wide variety of natural language processing tasks after simple fine-tuning. However, there has not been much effort on exploring BERT for natural language understanding. In this work, we propose a joint intent classification and slot filling model based on BERT. Experimental results demonstrate that our proposed model achieves significant improvement on intent classification accuracy, slot filling F1, and sentence-level semantic frame accuracy on several public benchmark datasets, compared to the attention-based recurrent neural network models and slot-gated models.

399 citations
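The abstract describes a single BERT encoder feeding two prediction tasks: an utterance-level intent label and a per-token slot label. Below is a hedged sketch of such a joint model using PyTorch and the Hugging Face transformers library; the class name, head layout, and dropout value are assumptions of this sketch rather than the paper's exact configuration.

```python
# Sketch of a joint intent + slot-filling model on top of BERT, in the spirit of
# the paper above; hyperparameters and head names are illustrative assumptions.
import torch.nn as nn
from transformers import BertModel


class JointIntentSlotModel(nn.Module):
    def __init__(self, num_intents: int, num_slot_labels: int,
                 pretrained: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(pretrained)
        hidden = self.bert.config.hidden_size
        self.dropout = nn.Dropout(0.1)
        # Intent head reads the pooled [CLS] representation.
        self.intent_classifier = nn.Linear(hidden, num_intents)
        # Slot head labels every token from the sequence output.
        self.slot_classifier = nn.Linear(hidden, num_slot_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        pooled = self.dropout(outputs.pooler_output)        # (batch, hidden)
        sequence = self.dropout(outputs.last_hidden_state)  # (batch, seq_len, hidden)
        intent_logits = self.intent_classifier(pooled)       # (batch, num_intents)
        slot_logits = self.slot_classifier(sequence)          # (batch, seq_len, num_slot_labels)
        return intent_logits, slot_logits
```

Training would typically minimize the sum of a cross-entropy loss over intent_logits and a token-level cross-entropy loss over slot_logits, which is what makes the two tasks joint.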

Posted Content
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon
TL;DR: A new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks that compares favorably with BERT on the GLUE benchmark, and the SQuAD 2.0 and CoQA question answering tasks.
Abstract: This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre-trained using three types of language modeling tasks: unidirectional, bidirectional, and sequence-to-sequence prediction. The unified modeling is achieved by employing a shared Transformer network and utilizing specific self-attention masks to control what context the prediction conditions on. UniLM compares favorably with BERT on the GLUE benchmark, and the SQuAD 2.0 and CoQA question answering tasks. Moreover, UniLM achieves new state-of-the-art results on five natural language generation datasets, including improving the CNN/DailyMail abstractive summarization ROUGE-L to 40.51 (2.04 absolute improvement), the Gigaword abstractive summarization ROUGE-L to 35.75 (0.86 absolute improvement), the CoQA generative question answering F1 score to 82.5 (37.1 absolute improvement), the SQuAD question generation BLEU-4 to 22.12 (3.75 absolute improvement), and the DSTC7 document-grounded dialog response generation NIST-4 to 2.67 (human performance is 2.65). The code and pre-trained models are available at this https URL.

390 citations
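The key mechanism in the abstract is a single shared Transformer whose self-attention masks decide which context each prediction may condition on. The snippet below is an illustrative reconstruction of the three mask patterns (bidirectional, unidirectional, sequence-to-sequence); it is not the authors' code, and UniLM's exact segment handling may differ.

```python
# Illustrative construction of the three self-attention mask patterns the
# UniLM abstract describes (1 = may attend, 0 = blocked).
import torch


def bidirectional_mask(seq_len: int) -> torch.Tensor:
    # Every token may attend to every other token (BERT-style).
    return torch.ones(seq_len, seq_len)


def unidirectional_mask(seq_len: int) -> torch.Tensor:
    # Each token attends only to itself and tokens to its left (GPT-style).
    return torch.tril(torch.ones(seq_len, seq_len))


def seq2seq_mask(src_len: int, tgt_len: int) -> torch.Tensor:
    # Source tokens attend bidirectionally within the source segment;
    # target tokens attend to the whole source plus their own left context.
    total = src_len + tgt_len
    mask = torch.zeros(total, total)
    mask[:src_len, :src_len] = 1                                         # source <-> source
    mask[src_len:, :src_len] = 1                                         # target -> source
    mask[src_len:, src_len:] = torch.tril(torch.ones(tgt_len, tgt_len))  # target -> left target
    return mask


print(seq2seq_mask(3, 2))
```

Printing seq2seq_mask(3, 2) shows the source block fully connected while each target position sees the source plus only its left target context.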

Book
01 Jan 2002

388 citations

Patent
13 Mar 2002
TL;DR: In this patent, a hierarchical structure of semantic categories is exploited to assist in natural language understanding, and a request-to-answer model is used to enable dynamic natural language understanding.
Abstract: Described are methods and systems for dynamic natural language understanding. A hierarchical structure of semantic categories is exploited to assist in the natural language understanding. Optionally, the natural language to be understood includes a request.

349 citations
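Since the abstract gives little detail, the following is only a loose, hypothetical sketch of how a hierarchy of semantic categories could narrow down the interpretation of a request by descending from coarse to fine categories; the tree, keywords, and matching rule are invented for illustration and are not the patent's method.

```python
# Hypothetical sketch: walk a hierarchy of semantic categories to interpret a request.
from dataclasses import dataclass, field


@dataclass
class Category:
    name: str
    keywords: set[str]
    children: list["Category"] = field(default_factory=list)


def classify(request: str, root: Category) -> list[str]:
    """Return the path of categories (coarse to fine) whose keywords match the request."""
    words = set(request.lower().split())
    path = [root.name]
    node = root
    while True:
        matches = [c for c in node.children if c.keywords & words]
        if not matches:
            return path
        node = matches[0]          # descend into the first matching child
        path.append(node.name)


travel = Category("travel", {"trip"}, [
    Category("flights", {"flight", "fly"}, [Category("booking", {"book", "reserve"})]),
    Category("hotels", {"hotel", "room"}),
])

print(classify("book a flight to Oslo", travel))   # ['travel', 'flights', 'booking']
```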


Network Information
Related Topics (5)
Natural language: 31.1K papers, 806.8K citations, 88% related
Recurrent neural network: 29.2K papers, 890K citations, 81% related
Ontology (information science): 57K papers, 869.1K citations, 81% related
Graph (abstract data type): 69.9K papers, 1.2M citations, 79% related
Deep learning: 79.8K papers, 2.1M citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    95
2022    189
2021    319
2020    357
2019    274
2018    161