Journal ArticleDOI

Deep neural network for hierarchical extreme multi-label text classification

TLDR
An analysis of a deep learning architecture for text classification is presented, addressing the extreme multi-class and multi-label setting in which a hierarchical label set is defined; a methodology named Hierarchical Label Set Expansion (HLSE) is introduced.
About
This article was published in Applied Soft Computing on 2019-06-01 and has received 86 citations to date. The article focuses on the topics: word embedding and automatic summarization.


Citations
Journal ArticleDOI

Cryptocurrency malware hunting: A deep Recurrent Neural Network approach

TL;DR: This paper proposes a novel deep Recurrent Neural Network (RNN) learning model that analyzes the operation codes (opcodes) of Windows applications as a case study, and compares traditional machine learning classifiers against deep learners (LSTM) to show the applicability of the latter in dealing with cryptocurrency malware.
Journal ArticleDOI

A Compact Convolutional Neural Network Augmented with Multiscale Feature Extraction of Acquired Monitoring Data for Mechanical Intelligent Fault Diagnosis

TL;DR: A compact convolutional neural network augmented with a multiscale feature extraction unit is introduced to extract features at different time scales without adding convolution layers; this reduces network depth while preserving classification ability and alleviating the overfitting caused by an overly complex network.

Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings

TL;DR: A simple but effective approach to word sense disambiguation (WSD) using nearest-neighbor classification on contextualized word embeddings (CWEs); the pre-trained BERT model is shown to place polysemic words into distinct 'sense' regions of the embedding space, whereas ELMo and Flair NLP do not seem to possess this ability.
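The nearest-neighbor classification over contextual embeddings described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the sense labels and 3-dimensional vectors below are toy stand-ins for real sense-annotated contextualized embeddings.

```python
# Hypothetical sketch of nearest-neighbor word sense disambiguation over
# contextualized embeddings. The sense inventory and vectors are toy
# stand-ins, not the paper's data or model outputs.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest_sense(query_vec, sense_examples):
    """Return the sense label of the training example closest to query_vec."""
    best_label, best_sim = None, -2.0
    for label, vec in sense_examples:
        sim = cosine(query_vec, vec)
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label

# Toy "bank" examples: financial vs. river sense, with made-up embeddings.
examples = [
    ("bank/finance", [0.9, 0.1, 0.0]),
    ("bank/river",   [0.1, 0.9, 0.2]),
]
print(nearest_sense([0.8, 0.2, 0.1], examples))  # closer to the finance sense
```

In practice the query vector would come from a contextual encoder such as BERT, and each training example would be one sense-annotated occurrence of the target word.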
Journal ArticleDOI

A multi-label text classification method via dynamic semantic representation model and deep neural network

TL;DR: A novel multi-label text classification method that combines a dynamic semantic representation model and a deep neural network (DSRM-DNN) is proposed and shown to outperform state-of-the-art methods.
Journal ArticleDOI

Feature selection for label distribution learning via feature similarity and label correlation

TL;DR: Experimental results indicate that the proposed feature selection algorithm for label distribution learning is more effective than five state-of-the-art feature selection algorithms on twelve datasets, with respect to six representative evaluation measures.
References
Proceedings Article

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: This paper presents a simple method for finding phrases in text, shows that learning good vector representations for millions of phrases is possible, and describes negative sampling, a simple alternative to the hierarchical softmax.
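The phrase-finding method summarized above scores each bigram by how much more often its words co-occur than chance, score(a, b) = (count(ab) − δ) / (count(a) · count(b)), and merges bigrams above a threshold. A minimal sketch follows; the corpus, δ, and threshold values are illustrative choices, not the paper's settings.

```python
# Hedged sketch of the bigram phrase-scoring heuristic:
#   score(a, b) = (count(a b) - delta) / (count(a) * count(b))
# Bigrams scoring above a threshold are treated as phrase candidates.
# Corpus, delta, and threshold below are illustrative, not from the paper.
from collections import Counter

def find_phrases(sentences, delta=1, threshold=0.005):
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        unigrams.update(sent)
        bigrams.update(zip(sent, sent[1:]))
    phrases = set()
    for (a, b), n_ab in bigrams.items():
        score = (n_ab - delta) / (unigrams[a] * unigrams[b])
        if score > threshold:
            phrases.add((a, b))
    return phrases

corpus = [
    ["new", "york", "is", "big"],
    ["i", "love", "new", "york"],
    ["new", "ideas", "emerge"],
    ["york", "minster"],
]
print(find_phrases(corpus))  # only the recurring bigram survives the discount
```

The discount δ prevents very rare bigrams from being promoted to phrases; in the original method, several passes with decreasing thresholds allow longer phrases to form.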
Posted Content

Efficient Estimation of Word Representations in Vector Space

TL;DR: This paper proposes two novel model architectures for computing continuous vector representations of words from very large data sets; the quality of these representations is measured in a word similarity task, and the results are compared to the previously best-performing techniques based on different types of neural networks.
Book

Introduction to Information Retrieval

TL;DR: In this article, the authors present an up-to-date treatment of all aspects of the design and implementation of systems for gathering, indexing, and searching documents; methods for evaluating such systems; and an introduction to the use of machine learning methods on text collections.
Posted Content

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: In this paper, the Skip-gram model is used to learn high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships, with extensions that improve both the quality of the vectors and the training speed.
Journal Article

Statistical Comparisons of Classifiers over Multiple Data Sets

TL;DR: A set of simple yet safe and robust non-parametric tests for statistical comparison of classifiers is recommended: the Wilcoxon signed-ranks test for comparing two classifiers, and the Friedman test with the corresponding post-hoc tests for comparing multiple classifiers over multiple data sets.
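The Friedman test recommended above ranks each classifier on every data set and compares the average ranks. A minimal pure-Python sketch of the test statistic, χ²_F = 12N/(k(k+1)) · (Σⱼ Rⱼ² − k(k+1)²/4), is shown below; the accuracy table is made up for illustration, not taken from the paper.

```python
# Minimal sketch of the Friedman test statistic for comparing k classifiers
# over N data sets: rank each classifier per data set (rank 1 = best score),
# average the ranks R_j, then compute
#   chi2_F = 12N / (k(k+1)) * (sum_j R_j^2 - k(k+1)^2 / 4).
# The scores below are illustrative accuracies, not results from the paper.

def friedman_statistic(scores):
    """scores[i][j] = score of classifier j on data set i (higher is better)."""
    n = len(scores)        # number of data sets
    k = len(scores[0])     # number of classifiers
    mean_ranks = [0.0] * k
    for row in scores:
        # Assign ranks, averaging positions over ties (rank 1 = highest score).
        order = sorted(range(k), key=lambda j: row[j], reverse=True)
        pos = 0
        while pos < k:
            end = pos
            while end + 1 < k and row[order[end + 1]] == row[order[pos]]:
                end += 1
            avg = (pos + end) / 2 + 1  # average of the tied rank positions
            for idx in order[pos:end + 1]:
                mean_ranks[idx] += avg / n
            pos = end + 1
    chi2 = 12 * n / (k * (k + 1)) * (
        sum(r * r for r in mean_ranks) - k * (k + 1) ** 2 / 4
    )
    return mean_ranks, chi2

# Three classifiers evaluated on four data sets (made-up accuracies).
accs = [
    [0.90, 0.85, 0.80],
    [0.88, 0.86, 0.81],
    [0.92, 0.89, 0.84],
    [0.87, 0.83, 0.82],
]
ranks, chi2 = friedman_statistic(accs)
print(ranks, chi2)  # classifier 0 ranks best on every data set
```

If the Friedman test rejects the null hypothesis of equal average ranks, the recommended post-hoc tests (e.g. Nemenyi) then identify which classifier pairs differ.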