Journal Article

Lingual markers for automating personality profiling: background and road ahead

Mohmad Azhar Teli, +1 more
- 22 Sep 2022
- Vol. 5, Iss. 2, pp. 1663-1707
About
This article is published in the Journal of Computational Social Science. The article was published on 2022-09-22 and has received 1 citation to date. The article focuses on the topics Computer Science and Personality.



Citations
Proceedings Article

Pre-trained Word Embeddings In Deep Multi-label Personality Classification Of YouTube Transliterations

TL;DR: In this paper, transliterations from the YouTube personality dataset were used to classify personalities with multi-label semi-supervised learning algorithms, and the results showed that inter-label correlations could aid in creating better models for APRT.
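
The sketch below is a minimal, hypothetical illustration of that kind of setup: multi-label trait classification over averaged pre-trained word embeddings, here with scikit-learn. The toy vocabulary, the random embedding matrix and the labels are placeholders, not the cited paper's data or pipeline.

    # Minimal sketch: multi-label Big Five classification from averaged word embeddings.
    # The embeddings and labels below are toy placeholders, not the cited paper's pipeline.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.multioutput import MultiOutputClassifier

    rng = np.random.default_rng(0)

    # Placeholder "pre-trained" embeddings: in practice these would come from GloVe/word2vec.
    vocab = {"i": 0, "love": 1, "meeting": 2, "new": 3, "people": 4,
             "quiet": 5, "prefer": 6, "books": 7}
    embeddings = rng.normal(size=(len(vocab), 50))

    def encode(text):
        """Average the word vectors of in-vocabulary tokens."""
        vecs = [embeddings[vocab[w]] for w in text.lower().split() if w in vocab]
        return np.mean(vecs, axis=0) if vecs else np.zeros(50)

    texts = ["i love meeting new people", "quiet i prefer books"] * 10
    # One binary label per Big Five trait (O, C, E, A, N) -- toy labels for illustration.
    labels = np.tile([[1, 0, 1, 1, 0], [0, 1, 0, 0, 1]], (10, 1))

    X = np.stack([encode(t) for t in texts])
    clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, labels)
    print(clf.predict(X[:2]))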
References
Proceedings Article

GloVe: Global Vectors for Word Representation

TL;DR: A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
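
As a hedged illustration of how such pre-trained GloVe vectors are typically consumed, the snippet below loads one publicly released set through gensim's downloader and queries its vector space; the specific model name is an assumption, and the first call downloads the vectors.

    # Sketch: querying pre-trained GloVe vectors via gensim's downloader
    # ("glove-wiki-gigaword-100" is one publicly released set; first call downloads it).
    import gensim.downloader as api

    glove = api.load("glove-wiki-gigaword-100")   # KeyedVectors with 100-d vectors

    # Meaningful substructure: nearest neighbours and analogy arithmetic.
    print(glove.most_similar("personality", topn=5))
    print(glove.most_similar(positive=["king", "woman"], negative=["man"], topn=1))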
Posted Content

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

TL;DR: A new language representation model, BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
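
Below is a minimal sketch of that fine-tuning setup using the Hugging Face transformers library rather than the paper's original codebase; num_labels=5 is an illustrative choice (e.g. one score per Big Five trait), not something fixed by the paper.

    # Sketch: pre-trained BERT plus one additional classification head
    # (Hugging Face transformers; num_labels=5 is an illustrative choice).
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                               num_labels=5)

    inputs = tokenizer("I really enjoy meeting new people.", return_tensors="pt",
                       truncation=True, padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits          # shape: (1, 5) -- one score per label
    print(logits)
    # Fine-tuning would then update the whole network end-to-end on labelled text.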
Journal Article

WordNet: a lexical database for English

TL;DR: WordNet is an online lexical database designed for use under program control, providing a more effective combination of traditional lexicographic information and modern computing.
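
As a small, hedged example of using WordNet "under program control", the snippet below queries it through NLTK's corpus reader, which is one common interface rather than the database's original distribution.

    # Sketch: querying WordNet under program control via NLTK (one common interface).
    import nltk
    nltk.download("wordnet", quiet=True)          # fetch the WordNet data once
    from nltk.corpus import wordnet as wn

    for synset in wn.synsets("confident")[:3]:
        print(synset.name(), "-", synset.definition())

    # Lexical relations, e.g. hypernyms of the first noun sense of "extrovert".
    extrovert = wn.synsets("extrovert", pos=wn.NOUN)[0]
    print([h.name() for h in extrovert.hypernyms()])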
Posted Content

RoBERTa: A Robustly Optimized BERT Pretraining Approach

TL;DR: It is found that BERT was significantly undertrained and, when trained more carefully, can match or exceed the performance of every model published after it; the best model achieves state-of-the-art results on GLUE, RACE and SQuAD.
Posted Content

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: In this paper, the Skip-gram model is used to learn high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships, together with extensions that improve both the quality of the vectors and the training speed.
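
A toy sketch of training such a Skip-gram model with negative sampling via gensim's Word2Vec follows; the corpus and hyperparameters are illustrative only.

    # Sketch: training a Skip-gram model with negative sampling via gensim
    # (sg=1 selects Skip-gram; the tiny corpus and hyperparameters are illustrative only).
    from gensim.models import Word2Vec

    sentences = [
        ["she", "is", "outgoing", "and", "talkative"],
        ["he", "is", "quiet", "and", "reserved"],
        ["outgoing", "people", "enjoy", "parties"],
        ["reserved", "people", "prefer", "reading"],
    ]

    model = Word2Vec(sentences, vector_size=50, window=2, sg=1,
                     negative=5, min_count=1, epochs=50, seed=1)

    print(model.wv.most_similar("outgoing", topn=3))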