Proceedings ArticleDOI

Learning a Hierarchical Embedding Model for Personalized Product Search

TLDR
The hierarchical embedding model is the first latent space model that jointly learns distributed representations for queries, products, and users with a deep neural network, and experiments show that it significantly outperforms existing product search baselines on multiple benchmark datasets.
Abstract
Product search is an important part of online shopping. In contrast to many search tasks, the objective of product search is not confined to retrieving relevant products. Instead, it focuses on finding items that satisfy the needs of individual users and lead to a purchase. These unique characteristics make search personalization essential for both customers and e-shopping companies. Purchase behavior is highly personal in online shopping, and users often provide rich feedback about their decisions (e.g. product reviews). However, the severe mismatch between the language of queries, products and users makes traditional retrieval models based on bag-of-words assumptions less suitable for personalization in product search. In this paper, we propose a hierarchical embedding model to learn semantic representations for entities (i.e. words, products, users and queries) from different levels with their associated language data. Our contributions are three-fold: (1) our work is one of the initial studies on personalized product search; (2) our hierarchical embedding model is the first latent space model that jointly learns distributed representations for queries, products and users with a deep neural network; (3) each component of our network is designed as a generative model, so the whole structure is explainable and extendable. Following the methodology of previous studies, we constructed personalized product search benchmarks with Amazon product data. Experiments show that our hierarchical embedding model significantly outperforms existing product search baselines on multiple benchmark datasets.
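
As a rough sketch of the latent-space matching described above, the snippet below scores products against a convex combination of a query vector and a user vector, and normalizes the scores into a generative distribution over products. The mixing weight `lam`, the dimensions, and all variable names are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_products = 64, 1000                 # illustrative sizes (assumptions)

# In the model these representations are learned jointly from queries,
# reviews and purchases; random vectors stand in for them here.
user_emb = rng.normal(size=dim)
query_emb = rng.normal(size=dim)
product_emb = rng.normal(size=(n_products, dim))

def personalized_scores(q, u, products, lam=0.5):
    """Rank products against a mix of query and user representations.

    `lam` (how strongly the query dominates the user profile) is a
    hypothetical hyperparameter for this sketch.
    """
    target = lam * q + (1.0 - lam) * u     # joint user-query point in latent space
    logits = products @ target             # inner-product relevance scores
    probs = np.exp(logits - logits.max())  # softmax -> P(product | user, query)
    return probs / probs.sum()

top10 = np.argsort(-personalized_scores(query_emb, user_emb, product_emb))[:10]
```

Treating the normalized scores as P(product | user, query) mirrors the abstract's point that each component is a generative model, which is what makes the structure explainable and extendable.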


Citations
Proceedings ArticleDOI

Towards Conversational Search and Recommendation: System Ask, User Respond

TL;DR: This paper proposes a System Ask -- User Respond (SAUR) paradigm for conversational search, defines the major components of the paradigm, and designs a unified implementation of the framework for product search and recommendation in e-commerce.
Journal ArticleDOI

A Deep Look into neural ranking models for information retrieval

TL;DR: A deep look into the neural ranking models from different dimensions is taken to analyze their underlying assumptions, major design principles, and learning strategies to obtain a comprehensive empirical understanding of the existing techniques.
Journal ArticleDOI

Learning over Knowledge-Base Embeddings for Recommendation.

TL;DR: This work proposes a knowledge-base representation learning framework to embed heterogeneous entities for recommendation, and based on the embedded knowledge base, a soft matching algorithm is proposed to generate personalized explanations for the recommended items.
Proceedings ArticleDOI

BiNE: Bipartite Network Embedding

TL;DR: This work develops a representation learning method named BiNE, short for Bipartite Network Embedding, to learn the vertex representations for bipartite networks, and proposes a novel optimization framework by accounting for both the explicit and implicit relations in learning the vertices.
Journal ArticleDOI

Attentive Long Short-Term Preference Modeling for Personalized Product Search

TL;DR: Zhang et al. propose an attention mechanism that integrates both long-term and short-term user preferences with the given query for personalized product search, so that users' current search intentions can be captured more accurately.
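
A toy version of the query-attentive preference fusion this TL;DR describes might look as follows; the dot-product attention form and all names are assumptions for illustration, not the cited paper's exact design.

```python
import numpy as np

def attentive_preference(query, long_term, short_term):
    """Blend long- and short-term preference vectors, weighted by the query.

    Dot-product attention is an illustrative choice; the cited paper's
    attention form may differ.
    """
    prefs = np.stack([long_term, short_term])   # (2, dim)
    weights = np.exp(prefs @ query)             # query-preference affinities
    weights /= weights.sum()                    # softmax over the two sources
    return weights @ prefs                      # query-conditioned user vector

rng = np.random.default_rng(0)
q, lt, st = rng.normal(size=(3, 32))            # toy query and preference vectors
user_vec = attentive_preference(q, lt, st)
```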
References
Journal ArticleDOI

Latent Dirichlet Allocation

TL;DR: This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Proceedings Article

Latent Dirichlet Allocation

TL;DR: This paper proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model, also known as probabilistic latent semantic indexing (pLSI).
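
The two entries above summarize latent Dirichlet allocation's generative story: each document draws a topic mixture, and each word draws a topic and then a word from that topic. A minimal sketch of that process, with hypothetical corpus sizes and hyperparameters, could be:

```python
import numpy as np

rng = np.random.default_rng(0)
K, V, doc_len = 5, 100, 50                       # topics, vocabulary, words per doc
alpha = np.full(K, 0.1)                          # Dirichlet prior over topic mixtures
beta = rng.dirichlet(np.full(V, 0.01), size=K)   # per-topic word distributions

def generate_document():
    theta = rng.dirichlet(alpha)                       # document-level topic mixture
    topics = rng.choice(K, size=doc_len, p=theta)      # one topic per word position
    return [rng.choice(V, p=beta[z]) for z in topics]  # sampled word ids

doc = generate_document()
```

Inference inverts this process (e.g. via variational Bayes or Gibbs sampling) to recover the topic mixtures and word distributions from observed text.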
Proceedings Article

Distributed Representations of Words and Phrases and their Compositionality

TL;DR: This paper presents a simple method for finding phrases in text, and shows that learning good vector representations for millions of phrases is possible and describes a simple alternative to the hierarchical softmax called negative sampling.
Posted Content

Efficient Estimation of Word Representations in Vector Space

TL;DR: This paper proposes two novel model architectures for computing continuous vector representations of words from very large data sets; the quality of these representations is measured in a word similarity task and the results are compared to the previously best performing techniques based on different types of neural networks.
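
Both word2vec entries above hinge on skip-gram training with negative sampling. The sketch below shows one SGD step under simplifying assumptions: negatives are drawn uniformly rather than from the smoothed unigram distribution used in the paper, and all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
V, dim = 10_000, 100                          # vocabulary size and width (assumptions)
W_in = rng.normal(scale=0.1, size=(V, dim))   # target ("input") embeddings
W_out = rng.normal(scale=0.1, size=(V, dim))  # context ("output") embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(center, context, k=5, lr=0.025):
    """One step of skip-gram with negative sampling.

    Pulls the observed (center, context) pair together and pushes k random
    "negative" words apart, avoiding a softmax over the full vocabulary.
    """
    negatives = rng.integers(0, V, size=k)    # uniform negatives (simplification)
    ids = np.concatenate(([context], negatives))
    labels = np.array([1.0] + [0.0] * k)      # 1 for the true context, 0 for noise
    scores = sigmoid(W_out[ids] @ W_in[center])
    grad = scores - labels                    # gradient of logistic loss w.r.t. scores
    grad_in = grad @ W_out[ids]               # compute before W_out is updated
    W_out[ids] -= lr * grad[:, None] * W_in[center]
    W_in[center] -= lr * grad_in

sgns_step(center=42, context=7)
```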
Proceedings ArticleDOI

Learning Phrase Representations using RNN Encoder--Decoder for Statistical Machine Translation

TL;DR: In this paper, the encoder and decoder of the RNN Encoder-Decoder model are jointly trained to maximize the conditional probability of a target sequence given a source sequence.
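
As a minimal sketch of such an encoder-decoder, the snippet below uses PyTorch's built-in GRU as a stand-in for the gated unit the paper introduces; the sizes and the toy batch are assumptions.

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Minimal GRU encoder-decoder trained on log P(target | source)."""

    def __init__(self, src_vocab, tgt_vocab, dim=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src, tgt_in):
        _, h = self.encoder(self.src_emb(src))          # h summarizes the source
        dec_out, _ = self.decoder(self.tgt_emb(tgt_in), h)
        return self.out(dec_out)                        # logits per target position

model = EncoderDecoder(src_vocab=1000, tgt_vocab=1000)
src = torch.randint(0, 1000, (4, 12))                   # toy source id batch
tgt = torch.randint(0, 1000, (4, 10))                   # toy target id batch
logits = model(src, tgt[:, :-1])                        # teacher-forced decoding
# Encoder and decoder are trained jointly by maximizing the conditional
# likelihood, i.e. minimizing cross-entropy against the shifted targets.
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), tgt[:, 1:].reshape(-1)
)
loss.backward()
```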