Open Access · Proceedings Article

Neural End-to-End Learning for Computational Argumentation Mining

TL;DR: The authors investigate neural techniques for end-to-end computational argumentation mining (AM) and find that framing AM as dependency parsing leads to sub-par performance, while less complex (local) tagging models based on BiLSTMs perform robustly across classification scenarios and are able to capture the long-range dependencies inherent to the AM problem.
Abstract
We investigate neural techniques for end-to-end computational argumentation mining (AM). We frame AM both as a token-based dependency parsing problem and as a token-based sequence tagging problem, including a multi-task learning setup. Contrary to models that operate on the argument component level, we find that framing AM as dependency parsing leads to subpar performance. In contrast, less complex (local) tagging models based on BiLSTMs perform robustly across classification scenarios and are able to capture the long-range dependencies inherent to the AM problem. Moreover, we find that jointly learning ‘natural’ subtasks, in a multi-task learning setup, improves performance.
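To make the tagging framing concrete, here is a minimal sketch of a BiLSTM token tagger in PyTorch. It illustrates the general approach rather than the authors' exact system: the BIO-style label set, dimensions, and training objective are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Minimal BiLSTM sequence tagger for token-level AM.

    The label set is a hypothetical BIO scheme over component types
    (e.g. O, B-Claim, I-Claim, B-Premise, I-Premise); richer schemes
    can additionally encode relations between components in the tags.
    """

    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)  # fwd + bwd states

    def forward(self, token_ids):            # (batch, seq_len)
        states, _ = self.bilstm(self.embed(token_ids))
        return self.out(states)              # (batch, seq_len, num_tags)

# Per-token cross-entropy gives the "local" tagging objective.
model = BiLSTMTagger(vocab_size=10_000, num_tags=5)
tokens = torch.randint(0, 10_000, (2, 30))
gold = torch.randint(0, 5, (2, 30))
loss = nn.CrossEntropyLoss()(model(tokens).reshape(-1, 5), gold.reshape(-1))
```

A multi-task variant along the lines of the abstract would attach additional classifier heads, one per subtask, to the shared BiLSTM states and sum the per-task losses.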



Citations
Journal Article

Argument Mining: A Survey

TL;DR: The techniques that establish the foundations of argument mining are explored, recent advances in argument mining techniques are reviewed, and the challenges of automatically extracting a deeper understanding of reasoning expressed in language are discussed.
Proceedings Article

Five Years of Argument Mining: a Data-driven Analysis

TL;DR: This paper presents the argument mining tasks and the results obtained in the area from a data-driven perspective, highlights the main weaknesses of existing work in the literature, and proposes open challenges for future research.
Proceedings Article

Classification and Clustering of Arguments with Contextualized Word Embeddings

TL;DR: For the first time, it is shown how to leverage contextualized word embeddings to classify and cluster topic-dependent arguments, achieving strong results on both tasks across multiple datasets.
Proceedings Article

Cross-topic Argument Mining from Heterogeneous Sources

TL;DR: This paper proposes a new sentential annotation scheme that crowd workers can reliably apply to arbitrary Web texts, and shows that BiLSTM networks augmented with topic information outperform vanilla BiLSTMs in F1 in two- and three-label cross-topic settings.
Proceedings Article

ArgumenText: Searching for Arguments in Heterogeneous Sources

TL;DR: This paper presents an argument retrieval system capable of retrieving sentential arguments for any given controversial topic; the system covers 89% of the arguments found in expert-curated lists from an online debate portal and also identifies additional valid arguments.
References
Journal Article

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
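As a pointer to the mechanism the TL;DR alludes to, these are the cell equations in their now-standard form (note that the forget gate $f_t$ postdates the 1997 paper, which enforced constant error flow through the cell recurrence alone):

```latex
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i), &
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f), \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o), &
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c), \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, &
h_t &= o_t \odot \tanh(c_t).
\end{aligned}
```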
Proceedings ArticleDOI

Glove: Global Vectors for Word Representation

TL;DR: A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
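Concretely, the model fits word vectors $w_i$ and context vectors $\tilde{w}_j$ to corpus co-occurrence counts $X_{ij}$ with a weighted least-squares objective:

```latex
J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^{\top} \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2,
\qquad
f(x) = \begin{cases} (x / x_{\max})^{\alpha} & \text{if } x < x_{\max} \\ 1 & \text{otherwise,} \end{cases}
```

where the weighting $f$ damps the influence of very rare and very frequent co-occurrences (the paper uses $x_{\max} = 100$ and $\alpha = 3/4$).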
Proceedings Article

Neural Machine Translation by Jointly Learning to Align and Translate

TL;DR: It is conjectured that the use of a fixed-length vector is a bottleneck in improving the performance of the basic encoder-decoder architecture, and it is proposed to extend it by allowing the model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.
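In equations, the proposed soft-search scores each source annotation $h_j$ against the previous decoder state $s_{i-1}$ and uses the normalized weights to form a per-step context vector:

```latex
e_{ij} = a(s_{i-1}, h_j), \qquad
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{T_x} \exp(e_{ik})}, \qquad
c_i = \sum_{j=1}^{T_x} \alpha_{ij} h_j,
```

where the alignment model $a$ is a small feed-forward network trained jointly with the rest of the translation system.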
Journal Article

Random search for hyper-parameter optimization

TL;DR: This paper shows empirically and theoretically that randomly chosen trials are more efficient for hyper-parameter optimization than trials on a grid, making random search a natural baseline against which to judge progress in the development of adaptive (sequential) hyper-parameter optimization algorithms.
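The idea is simple enough to sketch in a few lines of Python; the search space and objective below are hypothetical stand-ins (think dev-set F1 of a model trained with the sampled configuration):

```python
import random

def sample_config(rng: random.Random) -> dict:
    # Each trial draws independently from every dimension, so no budget
    # is wasted exhaustively covering dimensions that barely matter.
    return {
        "learning_rate": 10 ** rng.uniform(-4, -2),   # log-uniform
        "hidden_size": rng.choice([100, 200, 300]),
        "dropout": rng.uniform(0.0, 0.5),
    }

def random_search(objective, n_trials=50, seed=0):
    rng = random.Random(seed)
    best_score, best_config = float("-inf"), None
    for _ in range(n_trials):
        config = sample_config(rng)
        score = objective(config)  # higher is better, e.g. dev F1
        if score > best_score:
            best_score, best_config = score, config
    return best_config, best_score
```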