
Ali Elkahky

Researcher at Columbia University

Publications: 15
Citations: 1006

Ali Elkahky is an academic researcher from Columbia University. The author has contributed to research in the topics of computer science and deep learning. The author has an h-index of 7 and has co-authored 10 publications receiving 794 citations. Previous affiliations of Ali Elkahky include Microsoft.

Papers
Proceedings Article

A Multi-View Deep Learning Approach for Cross Domain User Modeling in Recommendation Systems

TL;DR: This work proposes a content-based recommendation system that addresses both recommendation quality and system scalability, using a deep learning approach to represent users with a rich feature set derived from their web browsing history and search queries.
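
As a rough illustration of the multi-view idea, the sketch below builds a two-tower network in Keras that maps a user feature vector (e.g., derived from search queries and browsing history) and an item feature vector into a shared latent space and scores them by cosine similarity. The dimensions, layer sizes, and the pointwise sigmoid objective are illustrative assumptions, not the paper's configuration.

```python
# Minimal two-tower (multi-view style) ranking sketch in Keras.
# Feature sizes and the training objective are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, Model

USER_DIM = 10000   # assumed size of the user feature vector (search/browse features)
ITEM_DIM = 5000    # assumed size of the item feature vector
LATENT = 128       # assumed shared latent-space dimension

def tower(input_dim, name):
    """Maps a raw feature vector into the shared latent space."""
    inp = layers.Input(shape=(input_dim,), name=f"{name}_features")
    x = layers.Dense(512, activation="tanh")(inp)
    x = layers.Dense(LATENT, activation="tanh")(x)
    return inp, x

user_in, user_vec = tower(USER_DIM, "user")
item_in, item_vec = tower(ITEM_DIM, "item")

# Relevance is the cosine similarity between the user and item embeddings.
cosine = layers.Dot(axes=1, normalize=True)([user_vec, item_vec])
# Pointwise sigmoid objective used here for brevity; this simplifies the
# ranking-style training typically used for such two-tower models.
prob = layers.Activation("sigmoid")(cosine)

model = Model(inputs=[user_in, item_in], outputs=prob)
model.compile(optimizer="adam", loss="binary_crossentropy")
```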
Proceedings Article

Multi-Rate Deep Learning for Temporal Recommendation

TL;DR: A novel deep neural network based architecture is proposed that models the combination of long-term static and short-term temporal user preferences to improve recommendation performance, along with a novel pre-training method that significantly reduces the number of free parameters.
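
A minimal sketch of the long-term/short-term combination could look like the following, where a static profile vector and a recurrent summary of recent activity are fused into one user embedding and scored against an item embedding. The GRU, the feature dimensions, and the pointwise objective are assumptions made for illustration; they are not the paper's exact architecture, and the pre-training step is not shown.

```python
# Sketch of combining long-term (static) and short-term (recent) user signals.
# All shapes, layers, and the loss are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

STATIC_DIM = 10000            # assumed long-term user profile feature size
SEQ_LEN, EVENT_DIM = 50, 300  # assumed recent-event sequence shape
LATENT = 128                  # assumed embedding dimension

# Long-term, slowly changing user profile.
static_in = layers.Input(shape=(STATIC_DIM,), name="longterm_features")
static_vec = layers.Dense(LATENT, activation="tanh")(static_in)

# Short-term, fast-changing signal from recent interactions.
recent_in = layers.Input(shape=(SEQ_LEN, EVENT_DIM), name="recent_events")
recent_vec = layers.GRU(LATENT)(recent_in)

# Fuse the two rates into a single user embedding.
user_vec = layers.Dense(LATENT, activation="tanh")(
    layers.Concatenate()([static_vec, recent_vec]))

# Score against a precomputed item embedding (an assumption for brevity).
item_in = layers.Input(shape=(LATENT,), name="item_embedding")
cosine = layers.Dot(axes=1, normalize=True)([user_vec, item_in])
prob = layers.Activation("sigmoid")(cosine)

model = Model(inputs=[static_in, recent_in, item_in], outputs=prob)
model.compile(optimizer="adam", loss="binary_crossentropy")
```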
Proceedings Article

Extending domain coverage of language understanding systems via intent transfer between domains using knowledge graphs and search query click logs

TL;DR: Experimental results show that the proposed technique can indeed be used to extend an NLU system's domain coverage in fulfilling users' requests.
Patent

Transferring information across language understanding model domains

TL;DR: The authors provide a technique for transferring intents or entities between existing natural language understanding model domains, using click logs, a knowledge graph, or both to validate that an intent or entity is transferable between domains.
Proceedings Article

Multilingual Multi-class Sentiment Classification Using Convolutional Neural Networks

TL;DR: The proposed language-independent model for multi-class sentiment analysis uses a simple five-layer neural network architecture (Embedding, Conv1D, GlobalMaxPooling, and two Fully-Connected layers) and does not rely on language-specific features such as ontologies, dictionaries, or morphological or syntactic pre-processing.
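
Since the abstract names the layers explicitly, a minimal Keras sketch of that five-layer stack might look like the following; vocabulary size, sequence length, filter count, embedding size, and the number of sentiment classes are assumed values, not the paper's settings.

```python
# Sketch of the five-layer stack named in the abstract:
# Embedding -> Conv1D -> GlobalMaxPooling -> Dense -> Dense.
# Hyperparameters below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Sequential

VOCAB, SEQ_LEN, NUM_CLASSES = 50000, 256, 5  # assumed values

model = Sequential([
    layers.Input(shape=(SEQ_LEN,)),                    # token-id sequence
    layers.Embedding(VOCAB, 128),                      # learned word embeddings
    layers.Conv1D(filters=256, kernel_size=5, activation="relu"),
    layers.GlobalMaxPooling1D(),                       # collapse over positions
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),   # multi-class output
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```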