Open access · Journal Article · DOI: 10.1162/TACL_A_00357

Modeling Content and Context with Deep Relational Learning

04 Mar 2021 · Transactions of the Association for Computational Linguistics (MIT Press) · Vol. 9, pp. 100-119
Abstract: Building models for realistic natural language tasks requires dealing with long texts and accounting for complicated structural dependencies. Neural-symbolic representations have emerged as a way to combine the reasoning capabilities of symbolic methods with the expressiveness of neural networks. However, most of the existing frameworks for combining neural and symbolic representations have been designed for classic relational learning tasks that work over a universe of symbolic entities and relations. In this paper, we present DRaiL, an open-source declarative framework for specifying deep relational models, designed to support a variety of NLP scenarios. Our framework supports easy integration with expressive language encoders, and provides an interface to study the interactions between representation, inference and learning.

Topics: Statistical relational learning (61%), Natural language (52%), Artificial neural network (51%)
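
As a rough picture of what "specifying a deep relational model declaratively" means, the sketch below pairs logical rule templates with neural scorers and selects the highest-scoring joint assignment. This is a hypothetical simplification written for this page: the RuleTemplate class, the scorer interface, and the exhaustive map_inference search are invented for illustration and are not DRaiL's actual DSL or API.

    # Hypothetical sketch of the declarative deep relational idea
    # (not DRaiL's real syntax): weighted rules map relational
    # templates to neural scoring functions, and inference picks the
    # joint label assignment with the highest total score.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class RuleTemplate:
        name: str                        # e.g. "Tweet(t) => Stance(t, y)"
        scorer: Callable[[Dict], float]  # stand-in for a neural net

    def map_inference(rules: List[RuleTemplate],
                      candidates: List[Dict]) -> Dict:
        # Exhaustive search over candidate assignments; real systems
        # use ILP or approximate inference instead.
        return max(candidates,
                   key=lambda y: sum(r.scorer(y) for r in rules))

    rules = [RuleTemplate("Tweet(t) => Stance(t, y)",
                          scorer=lambda y: 1.0 if y["t1"] == "pro" else 0.0)]
    candidates = [{"t1": "pro", "t2": "con"}, {"t1": "con", "t2": "con"}]
    print(map_inference(rules, candidates))  # {'t1': 'pro', 't2': 'con'}
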
Citations

8 results found


Open access · Proceedings Article · DOI: 10.18653/V1/2021.EACL-MAIN.100
01 Apr 2021
Abstract: Expressive text encoders such as RNNs and Transformer Networks have been at the center of NLP models in recent work. Most of the effort has focused on sentence-level tasks, capturing the dependencies between words in a single sentence or between pairs of sentences. However, certain tasks, such as argumentation mining, require accounting for longer texts and complicated structural dependencies between them. Deep structured prediction is a general framework to combine the complementary strengths of expressive neural encoders and structured inference for highly structured domains. Nevertheless, when the need arises to go beyond sentences, most work relies on combining the output scores of independently trained classifiers. One of the main reasons for this is that constrained inference comes at a high computational cost. In this paper, we explore the use of randomized inference to alleviate this concern and show that we can efficiently leverage deep structured prediction and expressive neural encoders for a set of tasks involving complicated argumentative structures.

Topics: Structured prediction (60%), Inference (54%)

3 Citations
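
Since the key claim is that randomized inference can stand in for costly constrained inference, here is a minimal sketch of the general idea: random-restart local search over complete label assignments, scored by a black-box function. The function names, the single-node flip proposal, and the restart scheme are assumptions made for illustration, not the paper's exact algorithm.

    import random
    from typing import Callable, Dict, List

    def randomized_hill_climb(nodes: List[str], labels: List[str],
                              score: Callable[[Dict[str, str]], float],
                              restarts: int = 10,
                              steps: int = 100) -> Dict[str, str]:
        """Approximate MAP inference by random-restart local search."""
        best, best_s = None, float("-inf")
        for _ in range(restarts):
            y = {n: random.choice(labels) for n in nodes}  # random start
            for _ in range(steps):
                n = random.choice(nodes)                   # pick a node
                old = y[n]
                y[n] = random.choice(labels)               # propose a flip
                if score(y) < score({**y, n: old}):        # keep only if better
                    y[n] = old
            s = score(y)
            if s > best_s:
                best, best_s = y, s
        return best

Each step touches one variable, so no global constrained optimization is ever solved; the trade-off is that the result is approximate and depends on the number of restarts.
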


Open access · Proceedings Article · DOI: 10.18653/V1/2021.SOCIALNLP-1.1
01 Jun 2021
Abstract: The Moral Foundation Theory suggests five moral foundations that can capture a user's view on a particular issue, and it is widely used to identify sentence-level moral sentiment. In this paper, we study the Moral Foundation Theory in tweets by US politicians on two politically divisive issues: Gun Control and Immigration. We define the nuanced stance of politicians on these two topics by the grades given to them by related organizations. First, we identify moral foundations in tweets from a large corpus using deep relational learning. Then, qualitative and quantitative evaluations on the corpus show a strong correlation between moral foundation usage and a politician's nuanced stance on a particular topic. We also find substantial differences in moral foundation usage across political parties when they address different entities. All of these results indicate the need for further research in this area.

2 Citations
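
The quantitative claim (a correlation between moral foundation usage and nuanced stance) can be made concrete with a toy computation of the kind of statistic involved. The numbers below are invented placeholders, and Pearson correlation is an assumed choice of statistic, not necessarily the one used in the paper.

    from scipy.stats import pearsonr

    # Hypothetical per-politician rates of "care/harm" usage in tweets,
    # paired with a numeric stance grade (0-100) from an organization.
    care_usage = [0.31, 0.12, 0.25, 0.08, 0.40]
    stance_grade = [92.0, 35.0, 78.0, 20.0, 88.0]

    r, p = pearsonr(care_usage, stance_grade)
    print(f"Pearson r = {r:.2f} (p = {p:.3f})")
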


Open access · Proceedings Article · DOI: 10.18653/V1/2020.COLING-MAIN.35
Aldo Porco, Dan Goldwasser
01 Dec 2020
Abstract: The ability to change a person’s mind on a given issue depends both on the arguments they are presented with and on their underlying perspectives and biases on that issue. Predicting stance changes requires characterizing both aspects and the interaction between them, especially in realistic settings in which stance changes are very rare. In this paper, we suggest a modular learning approach, which decomposes the task into multiple modules, focusing on different aspects of the interaction between users, their beliefs, and the arguments they are exposed to. Our experiments show that our modular approach achieves significantly better results than an end-to-end approach using BERT over the same inputs.

2 Citations
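
A minimal sketch of the modular idea: one module encodes the user's prior beliefs, another encodes the argument they see, and a small head combines the two to predict whether a stance change occurs. Module names, dimensions, and the concatenation design are illustrative assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class ModularStanceChange(nn.Module):
        """Illustrative modular design (hypothetical, not the paper's):
        separate modules for user beliefs and arguments, combined by a
        small classification head."""
        def __init__(self, dim: int = 128):
            super().__init__()
            self.user_module = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
            self.arg_module = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
            self.head = nn.Linear(2 * dim, 2)  # change vs. no change

        def forward(self, user_vec, arg_vec):
            u = self.user_module(user_vec)
            a = self.arg_module(arg_vec)
            return self.head(torch.cat([u, a], dim=-1))

    logits = ModularStanceChange()(torch.randn(1, 128), torch.randn(1, 128))

Because each module has its own inputs and parameters, modules can be trained or analyzed separately, which is the practical contrast with a monolithic end-to-end encoder.
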


Open access · Posted Content
Abstract: Extracting moral sentiment from text is a vital component in understanding public opinion, social movements, and policy decisions. The Moral Foundation Theory identifies five moral foundations, each associated with a positive and a negative polarity. However, moral sentiment is often motivated by its targets, which can correspond to individuals or collective entities. In this paper, we introduce morality frames, a representation framework for organizing moral attitudes directed at different entities, and present a novel, high-quality annotated dataset of tweets written by US politicians. We then propose a relational learning model to predict moral attitudes towards entities and moral foundations jointly. Qualitative and quantitative evaluations show that moral sentiment towards entities differs substantially across political ideologies.

Topics: Morality (56%)
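
As a rough picture of what a morality frame records, the sketch below ties a moral foundation and its polarity to the entity the sentiment targets. The field names and values are hypothetical, not the paper's annotation schema.

    from dataclasses import dataclass

    @dataclass
    class MoralityFrame:
        """Hypothetical container for a moral attitude aimed at an entity."""
        foundation: str  # e.g. "care", "fairness", "loyalty",
                         # "authority", or "purity"
        polarity: str    # "positive" (virtue) or "negative" (vice)
        target: str      # entity the sentiment is directed at

    frame = MoralityFrame(foundation="fairness",
                          polarity="negative",
                          target="a hypothetical policy actor")
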


References

64 results found


Proceedings Article · DOI: 10.18653/V1/N19-1423
11 Oct 2018
Abstract: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).

Topics: Question answering (54%), Language model (52%)

24,672 Citations
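
To make "fine-tuned with just one additional output layer" concrete, here is a minimal sketch using the Hugging Face transformers library, an assumed tooling choice rather than anything this reference prescribes. The sequence-classification wrapper adds exactly one randomly initialized classification head on top of the pretrained encoder.

    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    # The pretrained bidirectional encoder is loaded as-is; only the
    # new classification head starts from random weights.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)

    inputs = tokenizer("An example sentence to classify.",
                       return_tensors="pt")
    logits = model(**inputs).logits  # shape: (1, num_labels)
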


Proceedings Article · DOI: 10.3115/V1/D14-1162
01 Oct 2014
Abstract: Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.

Topics: Word2vec (64%), Word embedding (56%), Sparse matrix (54%)

23,307 Citations
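
For reference, the global log-bilinear regression objective this abstract describes is a weighted least-squares loss over the nonzero co-occurrence counts X_ij, with word vectors w_i, context vectors w̃_j, and biases b:

    J = \sum_{i,j=1}^{V} f(X_{ij})\left(w_i^{\top}\tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij}\right)^2,
    \qquad
    f(x) = \begin{cases} (x/x_{\max})^{\alpha} & \text{if } x < x_{\max} \\ 1 & \text{otherwise} \end{cases}

The weighting function f is what lets training use only nonzero matrix entries while damping the influence of very frequent co-occurrences.
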


Journal Article · DOI: 10.1146/ANNUREV.SOC.27.1.415
Abstract: Similarity breeds connection. This principle—the homophily principle—structures network ties of every type, including marriage, friendship, work, advice, support, information transfer, exchange, comembership, and other types of relationship. The result is that people's personal networks are homogeneous with regard to many sociodemographic, behavioral, and intrapersonal characteristics. Homophily limits people's social worlds in a way that has powerful implications for the information they receive, the attitudes they form, and the interactions they experience. Homophily in race and ethnicity creates the strongest divides in our personal environments, with age, religion, education, occupation, and gender following in roughly that order. Geographic propinquity, families, organizations, and isomorphic positions in social systems all create contexts in which homophilous relations form. Ties between nonsimilar individuals also dissolve at a higher rate, which sets the stage for the formation of niches (localized positions) within social space.

Topics: Homophily (74%), Propinquity (56%), Assortative mixing (56%)

13,795 Citations


Open access · Proceedings Article · DOI: 10.18653/V1/N18-1202
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, et al.
15 Feb 2018
Abstract: We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.

Topics: Textual entailment (57%), Text corpus (53%), Syntax (53%)

6,141 Citations
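
Concretely, the "learned functions of the internal states" are a task-specific weighted combination of the biLM's layer representations h_{k,j} for token k, with softmax-normalized layer weights s_j and a scalar γ:

    \mathrm{ELMo}_k^{task} = \gamma^{task} \sum_{j=0}^{L} s_j^{task}\, \mathbf{h}_{k,j}^{LM}

Exposing all L+1 layers, rather than only the top one, is what allows downstream models to mix syntactic (lower-layer) and semantic (higher-layer) signals.
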


Open access · Proceedings Article · DOI: 10.1145/2623330.2623732
Bryan Perozzi, Rami Al-Rfou, Steven Skiena
24 Aug 2014
Abstract: We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.

Topics: Feature learning (58%), Deep learning (56%), Language model (52%)

5,535 Citations
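
A minimal sketch of the walks-as-sentences recipe, assuming networkx for the graph and gensim's Word2Vec (skip-gram) as the language-model component; the example graph, walk lengths, and all hyperparameters are illustrative choices, not the authors' implementation.

    import random
    import networkx as nx
    from gensim.models import Word2Vec

    def random_walks(g, walk_len=10, walks_per_node=5):
        """Truncated random walks, each treated as a 'sentence' of node ids."""
        walks = []
        for _ in range(walks_per_node):
            for start in g.nodes:
                walk = [start]
                while len(walk) < walk_len:
                    nbrs = list(g.neighbors(walk[-1]))
                    if not nbrs:
                        break
                    walk.append(random.choice(nbrs))
                walks.append([str(n) for n in walk])
        return walks

    g = nx.karate_club_graph()
    model = Word2Vec(random_walks(g), vector_size=64, window=5,
                     min_count=0, sg=1)  # sg=1: skip-gram, as in DeepWalk
    vec = model.wv["0"]  # latent representation of node 0

Because each walk is generated independently, walk generation parallelizes trivially, which is the scalability property the abstract emphasizes.
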


Performance Metrics

Number of citations received by the paper in previous years:

Year    Citations
2021    7
2020    1