Text Chunking Using Transformation-Based Learning
Lance Ramshaw, Mitchell Marcus
pp. 157–176
TLDR
This work shows that the transformation-based learning approach can be applied at a higher level of textual interpretation to locate chunks in tagged text, including non-recursive “baseNP” chunks.
Abstract
Transformation-based learning, a technique introduced by Eric Brill (1993b), has been shown to do part-of-speech tagging with fairly high accuracy. This same method can be applied at a higher level of textual interpretation for locating chunks in the tagged text, including non-recursive “baseNP” chunks. For this purpose, it is convenient to view chunking as a tagging problem by encoding the chunk structure in new tags attached to each word. In automatic tests using Treebank-derived data, this technique achieved recall and precision rates of roughly 93% for baseNP chunks (trained on 950K words) and 88% for somewhat more complex chunks that partition the sentence (trained on 200K words). Working in this new application and with larger template and training sets has also required some interesting adaptations to the transformation-based learning approach.
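The abstract's idea of encoding chunk structure in new tags attached to each word can be sketched concretely. The example below is a hedged illustration of the I/O/B-style scheme this paper is known for (I for a word inside a baseNP, O for a word outside any baseNP, B for the first word of a baseNP that immediately follows another); the sentence, the chunk spans, and the `encode_iob` helper are invented for this sketch, not taken from the paper.

```python
# Sketch: encoding non-recursive baseNP chunks as per-word tags,
# in the spirit of the I/O/B scheme described in the abstract.
# I = inside a baseNP, O = outside any baseNP,
# B = first word of a baseNP that directly follows another baseNP.

def encode_iob(words, chunks):
    """chunks: list of (start, end) word-index spans, end exclusive."""
    tags = ["O"] * len(words)
    prev_end = None
    for start, end in sorted(chunks):
        for i in range(start, end):
            tags[i] = "I"
        # A chunk starting exactly where the previous one ended needs
        # B on its first word, or the two chunks would merge into one.
        if prev_end == start:
            tags[start] = "B"
        prev_end = end
    return tags

words = ["The", "cat", "the", "dog", "chased", "ran", "away"]
chunks = [(0, 2), (2, 4)]          # "The cat" and "the dog" are adjacent baseNPs
print(encode_iob(words, chunks))   # ['I', 'I', 'B', 'I', 'O', 'O', 'O']
```

With the chunk structure flattened into one tag per word like this, chunking reduces to exactly the kind of tagging problem that transformation-based learning already handles.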
Citations
Proceedings Article
Warped Language Models for Noise Robust Language Understanding
TL;DR: Warped Language Models (WLM), as introduced in this paper, are self-supervised neural networks trained, like masked language models, to fill in the blanks in a given sentence, but with tokens also randomly inserted and dropped during training, making them robust to the noise found in spontaneous conversational speech recognition output.
Proceedings Article
Structured Aspect Extraction
TL;DR: This paper proposes an unsupervised and scalable method for structured aspect extraction consisting of statistical noun phrase clustering, cPMI-based noun phrase segmentation, and hierarchical pattern induction and shows a substantial improvement over existing methods in terms of both quality and computational efficiency.
Proceedings Article
Towards semantic tagging in collaborative environments
TL;DR: A technique called SCM/THD is investigated in this paper that extracts entities from free-text annotations, and using the Lin similarity measure over the WordNet thesaurus classifies them into a controlled vocabulary of tags.
Proceedings Article
newsSweeper at SemEval-2020 Task 11: Context-Aware Rich Feature Representations for Propaganda Classification.
TL;DR: This article used a pre-trained BERT language model enhanced with tagging techniques developed for the task of Named Entity Recognition (NER) to identify propaganda spans in the text and incorporated contextual features in a RoBERTa model for the classification of propaganda techniques.
Proceedings Article
BERTifying the Hidden Markov Model for Multi-Source Weakly Supervised Named Entity Recognition
TL;DR: This paper proposed a conditional hidden Markov model (CHMM) that can effectively infer true labels from multi-source noisy labels in an unsupervised way, learning token-wise transition and emission probabilities from the BERT embeddings of the input tokens.
References
Book Chapter
Parsing By Chunks
TL;DR: The typical chunk consists of a single content word surrounded by a constellation of function words, matching a fixed template, and the relationships between chunks are mediated more by lexical selection than by rigid templates.
Proceedings Article
A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text
TL;DR: The authors used a linear-time dynamic programming algorithm to find an assignment of parts of speech to words that optimizes the product of (a) lexical probabilities (the probability of observing part of speech i given word i) and (b) contextual probabilities (the probability of observing part of speech i given the n following parts of speech).
Proceedings Article
Some advances in transformation-based part of speech tagging
TL;DR: In this article, a rule-based approach to tagging unknown words is described, and the tagger can be extended into a k-best tagger, where multiple tags can be assigned to words in some cases of uncertainty.
Journal Article
Performance structures: A psycholinguistic and linguistic appraisal
James Paul Gee, François Grosjean
TL;DR: In this paper, two lines of research are combined to address a long-standing problem in both fields: why the performance structures of sentences (structures based on experimental data, such as pausing and parsing values) cannot be fully accounted for by linguistic theories of phrase structure.
Book
A corpus-based approach to language learning
TL;DR: A learning algorithm is described that takes a small structurally annotated corpus of text and a larger unannotated corpus as input, and automatically learns how to assign accurate structural descriptions to sentences not in the training corpus.
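The learning loop described in this reference, shared by Brill's tagger and the chunker above, can be sketched in a few lines: repeatedly score candidate rewrite rules against the annotated corpus, apply the single rule that most reduces errors, and record it. This is a minimal, hypothetical sketch rather than the paper's implementation; the single toy template ("change tag A to B when the previous tag is C"), the tiny tag set, and all names here are invented for illustration.

```python
# Minimal sketch of a transformation-based learning loop.
# One toy rule template: (from_tag, to_tag, prev_tag) means
# "change from_tag to to_tag when the previous word's tag is prev_tag".

def apply_rule(tags, rule):
    frm, to, prev = rule
    out = list(tags)
    for i in range(1, len(tags)):
        if tags[i] == frm and tags[i - 1] == prev:
            out[i] = to
    return out

def errors(tags, gold):
    return sum(t != g for t, g in zip(tags, gold))

def learn_tbl(current, gold, tagset):
    """Greedily learn an ordered list of rules that reduce errors."""
    learned = []
    while True:
        base = errors(current, gold)
        best_rule, best_err = None, base
        # Enumerate every instantiation of the template and score it.
        for frm in tagset:
            for to in tagset:
                for prev in tagset:
                    if frm == to:
                        continue
                    e = errors(apply_rule(current, (frm, to, prev)), gold)
                    if e < best_err:
                        best_rule, best_err = (frm, to, prev), e
        if best_rule is None:          # no rule helps any more: stop
            return learned
        current = apply_rule(current, best_rule)
        learned.append(best_rule)

gold  = ["D", "N", "V", "D", "N"]      # hypothetical gold tag sequence
start = ["D", "N", "N", "D", "N"]      # baseline tagging with one error
print(learn_tbl(start, gold, ["D", "N", "V"]))   # [('N', 'V', 'N')]
```

Each pass strictly reduces the error count, so the loop terminates; at tagging time, the learned rules are replayed in order on the baseline output. Real systems use many templates over words and tags in a context window, which is where the paper's adaptations for larger template and training sets come in.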