Open Access · Proceedings Article · DOI

Dependency and AMR Embeddings for Drug-Drug Interaction Extraction from Biomedical Literature

TLDR
The experimental results show the effectiveness of dependency and AMR embeddings in the DDI extraction task. The paper also proposes a novel syntactic embedding approach using AMR, which abstracts away from syntactic idiosyncrasies and captures only the core meaning of a sentence, potentially improving DDI extraction from sentences.
Abstract
Drug-drug interaction (DDI) is an unexpected change in a drug's effect on the human body when the drug and a second drug are co-prescribed and taken together. As many DDIs are frequently reported in biomedical literature, it is important to mine DDI information from literature to keep DDI knowledge up to date. SemEval challenges in 2011 and 2013 were designed to tackle this task, where the best system achieved an F1 score of 0.80. In this paper, we propose to utilize dependency embeddings and Abstract Meaning Representation (AMR) embeddings as features for extracting DDIs. Our contribution is two-fold. First, we employed dependency embeddings, previously shown effective for sentence classification, for DDI extraction. The dependency embeddings incorporate structural syntactic contexts into the embeddings, which are not present in conventional word embeddings. Second, we propose a novel syntactic embedding approach using AMR. AMR aims to abstract away from syntactic idiosyncrasies and attempts to capture only the core meaning of a sentence, which could potentially improve DDI extraction from sentences. Two classifiers (Support Vector Machine and Random Forest) taking these embedding features as input were evaluated on the DDIExtraction 2013 challenge corpus. The experimental results show the effectiveness of dependency and AMR embeddings in the DDI extraction task. The best performance was obtained by combining word, dependency, and AMR embeddings (F1 score = 0.84).
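The feature construction the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' exact pipeline: the toy 2-d vectors, the average pooling, and the concatenation order are all illustrative assumptions; in the paper, pretrained word, dependency, and AMR embeddings feed an SVM or Random Forest classifier.

```python
# Sketch: build a feature vector for a candidate drug pair by pooling
# word, dependency, and AMR embeddings over the sentence, then
# concatenating the pooled vectors. All vectors here are toy values;
# real embeddings would be pretrained on a large corpus.

def average(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def pair_features(word_vecs, dep_vecs, amr_vecs):
    """Concatenate the averaged embeddings from each representation."""
    return average(word_vecs) + average(dep_vecs) + average(amr_vecs)

# Toy 2-d embeddings for a 3-token sentence containing a drug pair.
word_vecs = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
dep_vecs  = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]]
amr_vecs  = [[0.2, 0.2], [0.4, 0.4], [0.6, 0.6]]

# 6-dimensional feature vector fed to an SVM or Random Forest classifier.
features = pair_features(word_vecs, dep_vecs, amr_vecs)
```

Concatenation keeps the three representations distinguishable to the classifier, which can then weight syntactic (dependency), semantic (AMR), and lexical (word) signals independently.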

Citations
Journal Article · DOI

A comparison of word embeddings for the biomedical natural language processing

TL;DR: The qualitative evaluation shows that the word embeddings trained from EHR and MedLit can find more similar medical terms than those trained from GloVe and Google News, and the intrinsic quantitative evaluation verifies that the semantic similarity captured by the word embeddings is closer to human experts' judgments on all four tested datasets.
Journal Article · DOI

A clinical text classification paradigm using weak supervision and deep representation.

TL;DR: In this article, a clinical text classification paradigm using weak supervision and deep representation was proposed to reduce the human effort of labeled training data creation and feature engineering for applying machine learning to clinical text classification.
Journal Article · DOI

Computational and Mathematical Methods in Medicine

TL;DR: This issue marks a transition and a changing of the guard for Computational and Mathematical Methods in Medicine as Hindawi takes the helm and converts CMMM to the community-based, open access model that they have so successfully championed.
Proceedings Article

Dialogue-AMR: Abstract Meaning Representation for Dialogue.

TL;DR: A schema that enriches Abstract Meaning Representation (AMR) in order to provide a semantic representation for facilitating Natural Language Understanding (NLU) in dialogue systems is described and an enhanced AMR that represents not only the content of an utterance, but the illocutionary force behind it, as well as tense and aspect is presented.
References
Journal Article · DOI

Random Forests

TL;DR: Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the forest, and are also applicable to regression.
Journal Article · DOI

Deep learning

TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Proceedings Article

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: This paper presents a simple method for finding phrases in text, and shows that learning good vector representations for millions of phrases is possible and describes a simple alternative to the hierarchical softmax called negative sampling.
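The negative-sampling alternative summarized above can be illustrated with a minimal sketch. This is not the paper's implementation: the toy 2-d vectors, the single sampled negative, and the learning rate are illustrative assumptions; it only shows one SGD step on the skip-gram negative-sampling objective.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sgns_step(v, pos, negs, lr=0.5):
    """One SGD step maximizing log sigma(pos.v) + sum log sigma(-neg.v),
    i.e. push the true context vector toward the center word vector and
    the sampled negatives away from it. Updates all vectors in place."""
    grad_v = [0.0] * len(v)
    g = sigmoid(dot(pos, v)) - 1.0          # gradient factor, positive pair
    for i in range(len(v)):
        grad_v[i] += g * pos[i]
        pos[i] -= lr * g * v[i]
    for n in negs:
        g = sigmoid(dot(n, v))              # gradient factor, negative pair
        for i in range(len(v)):
            grad_v[i] += g * n[i]
            n[i] -= lr * g * v[i]
    for i in range(len(v)):
        v[i] -= lr * grad_v[i]

def loss(v, pos, negs):
    """Negative log-likelihood under the negative-sampling objective."""
    total = -math.log(sigmoid(dot(pos, v)))
    for n in negs:
        total -= math.log(sigmoid(-dot(n, v)))
    return total

# Toy 2-d vectors: one center word, one true context, one sampled negative.
v, pos, negs = [0.1, 0.2], [0.2, 0.1], [[0.3, -0.1]]
before = loss(v, pos, negs)
sgns_step(v, pos, negs)
after = loss(v, pos, negs)   # lower than `before`: the step improved the fit
```

Because each step touches only the positive context and a handful of sampled negatives, the cost per update is independent of vocabulary size, which is what makes this a practical alternative to the hierarchical softmax.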
Proceedings Article

Efficient Estimation of Word Representations in Vector Space

TL;DR: Two novel model architectures for computing continuous vector representations of words from very large data sets are proposed and it is shown that these vectors provide state-of-the-art performance on the authors' test set for measuring syntactic and semantic word similarities.