
Conference of the European Chapter of the Association for Computational Linguistics 

About: The Conference of the European Chapter of the Association for Computational Linguistics (EACL) is an academic conference. It publishes mainly in the areas of computer science and parsing. Over its lifetime, the conference has published 2,143 papers, which have received 56,074 citations.


Papers
Proceedings ArticleDOI
01 Apr 2017
TL;DR: fastText is a simple and efficient baseline for text classification that is often on par with deep learning classifiers in accuracy while being many orders of magnitude faster to train and evaluate.
Abstract: This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation. We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute.
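The core idea behind fastText — average token and n-gram embeddings, then apply a linear classifier — can be sketched in a few lines. This is a toy illustration, not the fastText library or the paper's implementation; all names, dimensions, and the tiny dataset below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(text):
    """Tokens plus word bigrams, echoing fastText's n-gram features."""
    toks = text.lower().split()
    return toks + [" ".join(p) for p in zip(toks, toks[1:])]

class TinyFastText:
    """Average feature embeddings, then apply a linear softmax classifier."""

    def __init__(self, vocab, labels, dim=16):
        self.vocab = {f: i for i, f in enumerate(vocab)}
        self.labels = list(labels)
        self.E = rng.normal(0.0, 0.1, (len(vocab), dim))   # feature embeddings
        self.W = rng.normal(0.0, 0.1, (dim, len(labels)))  # linear classifier

    def embed(self, text):
        idx = [self.vocab[f] for f in features(text) if f in self.vocab]
        return self.E[idx].mean(axis=0) if idx else np.zeros(self.E.shape[1])

    def train(self, data, epochs=200, lr=0.5):
        for _ in range(epochs):
            for text, label in data:
                h = self.embed(text)
                z = h @ self.W
                p = np.exp(z - z.max())
                p /= p.sum()
                p[self.labels.index(label)] -= 1.0  # softmax cross-entropy gradient
                self.W -= lr * np.outer(h, p)       # (the real fastText also updates E)

    def predict(self, text):
        return self.labels[int(np.argmax(self.embed(text) @ self.W))]

# Toy dataset, invented for illustration.
data = [("good great film", "pos"), ("truly great acting", "pos"),
        ("bad awful film", "neg"), ("truly awful plot", "neg")]
vocab = sorted({f for t, _ in data for f in features(t)})
model = TinyFastText(vocab, ["pos", "neg"])
model.train(data)
```

The speed the abstract reports comes from exactly this shape: embedding lookup plus one matrix product, with no recurrence or deep stack.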

3,765 citations

Proceedings Article
23 Apr 2012
TL;DR: The brat rapid annotation tool (BRAT) is introduced, an intuitive web-based tool for text annotation supported by Natural Language Processing (NLP) technology and an evaluation of annotation assisted by semantic class disambiguation on a multicategory entity mention annotation task, showing a 15% decrease in total annotation time.
Abstract: We introduce the brat rapid annotation tool (BRAT), an intuitive web-based tool for text annotation supported by Natural Language Processing (NLP) technology. BRAT has been developed for rich structured annotation for a variety of NLP tasks and aims to support manual curation efforts and increase annotator productivity using NLP techniques. We discuss several case studies of real-world annotation projects using pre-release versions of BRAT and present an evaluation of annotation assisted by semantic class disambiguation on a multicategory entity mention annotation task, showing a 15% decrease in total annotation time. BRAT is available under an open-source license from: http://brat.nlplab.org
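BRAT stores annotations in a plain-text standoff format (`.ann` files) alongside the source text, with one annotation per tab-separated line. A minimal reader for the common case of text-bound annotations might look like the sketch below; the sample line is illustrative, not taken from any real project.

```python
def parse_textbound(line):
    """Parse one text-bound ("T") line from a brat .ann standoff file.

    Handles the common single-span case; discontinuous spans
    (offsets joined by ';') are out of scope for this sketch.
    """
    ann_id, span_info, surface = line.rstrip("\n").split("\t")
    ann_type, start, end = span_info.split(" ")
    return {"id": ann_id, "type": ann_type,
            "start": int(start), "end": int(end), "text": surface}

# Illustrative standoff line: id, type with character offsets, covered text.
example = "T1\tOrganization 0 4\tSony"
parsed = parse_textbound(example)
```

Because the format is line-oriented plain text, annotations produced in BRAT's web interface are easy to diff, version, and post-process with scripts like this.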

1,121 citations

Proceedings Article
01 Apr 2006
TL;DR: A disambiguation SVM kernel is trained to exploit the high coverage and rich structure of the knowledge encoded in an online encyclopedia and significantly outperforms a less informed baseline.
Abstract: We present a new method for detecting and disambiguating named entities in open domain text. A disambiguation SVM kernel is trained to exploit the high coverage and rich structure of the knowledge encoded in an online encyclopedia. The resulting model significantly outperforms a less informed baseline.
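The "less informed baseline" family the paper compares against can be illustrated with a bag-of-words overlap between the mention's context and each candidate's encyclopedia article. This sketch deliberately swaps the paper's SVM kernel for plain cosine similarity; the candidate texts and function names are invented for the example.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two word-count dictionaries."""
    num = sum(a[w] * b[w] for w in a if w in b)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def disambiguate(context, candidates):
    """Pick the candidate whose article text best matches the mention context.

    A bag-of-words stand-in for the paper's SVM kernel over encyclopedia
    knowledge; real systems use far richer features than raw word overlap.
    """
    ctx = Counter(context.lower().split())
    return max(candidates,
               key=lambda name: cosine(ctx, Counter(candidates[name].lower().split())))

# Toy stand-ins for encyclopedia article text.
candidates = {
    "Michael Jordan (basketball)": "basketball player chicago bulls nba guard",
    "Michael Jordan (scientist)": "machine learning professor statistics berkeley",
}
best = disambiguate("the professor published a machine learning paper", candidates)
```

The paper's contribution is precisely that a kernel trained on the encyclopedia's rich structure significantly outperforms overlap heuristics of this kind.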

953 citations

Proceedings ArticleDOI
03 Apr 2017
TL;DR: A very deep convolutional network (VDCNN) for text classification that operates directly at the character level; with up to 29 convolutional layers it improves on the state of the art across several public text classification tasks.
Abstract: The dominant approach for many NLP tasks are recurrent neural networks, in particular LSTMs, and convolutional neural networks. However, these architectures are rather shallow in comparison to the deep convolutional networks which have pushed the state-of-the-art in computer vision. We present a new architecture (VDCNN) for text processing which operates directly at the character level and uses only small convolutions and pooling operations. We are able to show that the performance of this model increases with the depth: using up to 29 convolutional layers, we report improvements over the state-of-the-art on several public text classification tasks. To the best of our knowledge, this is the first time that very deep convolutional nets have been applied to text processing.
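Operating "directly at the character level" means the network's input is a fixed-length sequence of character ids rather than word tokens. The sketch below shows that encoding step only; the alphabet and sequence length here are assumptions for illustration, not the paper's exact choices.

```python
# Assumed alphabet for illustration; the paper's character set may differ.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 .,;:!?'"
CHAR_TO_ID = {c: i + 1 for i, c in enumerate(ALPHABET)}  # 0 is reserved for padding

def encode_chars(text, max_len=64):
    """Map text to a fixed-length sequence of character ids (truncate, then pad)."""
    ids = [CHAR_TO_ID.get(c, 0) for c in text.lower()[:max_len]]
    return ids + [0] * (max_len - len(ids))

encoded = encode_chars("Very deep nets!")
```

The stack of small convolutions and pooling layers the abstract describes then consumes this fixed-length id sequence, exactly as image CNNs consume fixed-size pixel grids.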

881 citations

Proceedings ArticleDOI
01 Jan 2017
TL;DR: The authors introduced a neural network-based text-in, text-out end-to-end trainable goal-oriented dialogue system along with a new way of collecting dialogue data based on a novel pipe-lined Wizard-of-Oz framework.
Abstract: Teaching machines to accomplish tasks by conversing naturally with humans is challenging. Currently, developing task-oriented dialogue systems requires creating multiple components and typically this involves either a large amount of handcrafting, or acquiring costly labelled datasets to solve a statistical learning problem for each component. In this work we introduce a neural network-based text-in, text-out end-to-end trainable goal-oriented dialogue system along with a new way of collecting dialogue data based on a novel pipe-lined Wizard-of-Oz framework. This approach allows us to develop dialogue systems easily and without making too many assumptions about the task at hand. The results show that the model can converse with human subjects naturally whilst helping them to accomplish tasks in a restaurant search domain.

796 citations

Performance
Metrics
No. of papers from the Conference in previous years
Year    Papers
2023    227
2022    90
2021    410
2017    289
2014    222
2012    139