
Showing papers on "Similarity (psychology) published in 2022"


Journal ArticleDOI
TL;DR: This work proposes a general framework for learning node representations in a self-supervised manner, called Graph Contrastive Learning (GraphCL), which learns node embeddings by maximizing the similarity between the node representations of two randomly perturbed versions of the same graph.

17 citations
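The GraphCL objective above can be sketched with a small NumPy example. This is a minimal illustration of an NT-Xent-style contrastive loss, not the paper's implementation: node i in one perturbed view and node i in the other view form the positive pair, and all remaining nodes act as negatives. The embeddings, noise level, and temperature here are invented for illustration.

```python
import numpy as np

def cosine_sim_matrix(a, b):
    # Pairwise cosine similarity between rows of a and rows of b.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def contrastive_loss(z1, z2, tau=0.5):
    # z1, z2: node embeddings from two perturbed views of the same graph.
    # The diagonal holds the positive pairs; each row's remaining entries
    # serve as negatives (an NT-Xent-style objective).
    sim = np.exp(cosine_sim_matrix(z1, z2) / tau)
    pos = np.diag(sim)
    return -np.mean(np.log(pos / sim.sum(axis=1)))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))                      # toy "node embeddings"
noise = 0.01 * rng.normal(size=z.shape)
aligned = contrastive_loss(z, z + noise)          # matching views
shuffled = contrastive_loss(z, np.roll(z, 1, axis=0) + noise)  # broken pairs
```

Minimizing this loss pulls the two views of the same node together while pushing different nodes apart, which is why `aligned` comes out lower than `shuffled`.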


Journal ArticleDOI
TL;DR: In this paper, the authors examined how the demographic (age, race, gender, and income) similarity of other consumers in a service setting affects one's attitudes and behavior. They found that age, gender, and income similarity between the focal customer and other customers increases intention to return.

7 citations


Journal ArticleDOI
TL;DR: This article found that when a person's entire brain (Experiments 3 and 4) or soul (Experiment 4) has been replaced with that of another person, the majority of participants judge that numerical identity has changed.

3 citations


Journal ArticleDOI
TL;DR: In this article, the authors tested the hypothesis that the strength of similarity-based arguments can be predicted from the structure of the conceptual space in which the items being reasoned about are represented.

3 citations


Journal ArticleDOI
TL;DR: It is suggested that SIM causes durable, extensive changes across both episodic and semantic self-knowledge, as well as semantically related traits and cross-language traits.

3 citations


Journal ArticleDOI
TL;DR: This paper used natural language processing methods to understand the semantic content of scales measuring psychological constructs correlated with prosociality, which can be used to assess the novelty or redundancy of new scales, understand the overlap among different psychological constructs, and compare different measures of the same construct.
Abstract: Prosociality (measured with economic games) is correlated with individual differences in psychological constructs (measured with self-report scales). We review how methods from natural language processing, a subfield of computer science focused on processing natural text, can be applied to understand the semantic content of scales measuring psychological constructs correlated with prosociality. Methods for clustering language and assessing similarity between text documents can be used to assess the novelty (or redundancy) of new scales, to understand the overlap among different psychological constructs, and to compare different measures of the same construct. These examples illustrate how natural language processing methods can augment traditional survey- and game-based approaches to studying individual differences in prosociality.

1 citation
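The abstract's idea of assessing the novelty or redundancy of scales can be sketched with a bag-of-words cosine similarity between scale texts. This is a minimal stdlib-only illustration; the scale items below are invented placeholders, and the methods the authors review (clustering, document embeddings) are considerably richer.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Hypothetical scale items -- illustrative only, not the actual scales.
scales = {
    "empathy": "i feel concern for people less fortunate than me",
    "altruism": "i have helped a stranger in need without reward",
    "trust": "most people can be trusted to keep their word",
}
vecs = {name: Counter(text.lower().split()) for name, text in scales.items()}

# A high score between a new scale and an existing one would flag redundancy.
redundancy = cosine(vecs["empathy"], vecs["altruism"])
```

In practice one would replace raw word counts with TF-IDF weights or learned sentence embeddings, but the comparison logic (score pairwise similarity, then inspect high-overlap pairs) stays the same.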



Journal ArticleDOI
TL;DR: This paper applied network embedding techniques to turn a distributional thesaurus network into dense word vectors and investigated the usefulness of distributional thesaurus embeddings in improving the overall word vector representation.
Abstract: Word representations obtained from text using the distributional hypothesis have proved useful for various natural language processing tasks. To prepare vector representations from text, some researchers use a predictive model (Word2vec) or a dense count-based model (GloVe), whereas others explore network structure obtained from text, namely a distributional thesaurus network, in which the neighborhood of a word is the set of words with adequate context-feature overlap. Inspired by the successful application of network embedding techniques (DeepWalk, LINE, node2vec, etc.) to various tasks, we apply these techniques to turn a distributional thesaurus network into dense word vectors and investigate their usefulness in improving the overall word vector representation. We show, for the first time, that combining the proposed word representation obtained by distributional thesaurus embedding with state-of-the-art word representations improves performance by a significant margin when evaluated on several NLP tasks, including intrinsic tasks such as word similarity and relatedness, subspace alignment, synonym detection, and analogy detection; extrinsic tasks such as noun compound interpretation and sentence pair similarity; as well as subconscious intrinsic evaluation using neural activation patterns in the brain.
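One simple way to "combine" two word-vector spaces, as the abstract describes, is to L2-normalize each space and concatenate the vectors so that neither space dominates downstream cosine similarities. The toy vectors, dimensions, and vocabulary below are invented stand-ins (e.g. for GloVe vectors and vectors learned from a distributional thesaurus network), not the paper's actual combination method.

```python
import numpy as np

# Toy stand-ins for two embedding spaces; real dimensions and
# vocabularies are assumptions here.
rng = np.random.default_rng(1)
vocab = ["bank", "river", "money", "water"]
glove_like = {w: rng.normal(size=50) for w in vocab}   # corpus-based vectors
dt_embed = {w: rng.normal(size=50) for w in vocab}     # thesaurus-network vectors

def l2norm(v):
    # Scale a vector to unit length.
    return v / np.linalg.norm(v)

def combine(word):
    # Normalize each space separately, then concatenate, so each space
    # contributes equally to subsequent similarity computations.
    return np.concatenate([l2norm(glove_like[word]), l2norm(dt_embed[word])])

combined = {w: combine(w) for w in vocab}
```

Concatenation is only one choice; weighted averaging or learning a joint projection are common alternatives, and which works best is an empirical question the evaluation tasks in the abstract are designed to answer.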