
Holakou Rahmanian

Researcher at Microsoft

Publications: 10
Citations: 81

Holakou Rahmanian is an academic researcher at Microsoft. The author has contributed to research in the topics Tree (data structure) and Online algorithm, has an h-index of 3, and has co-authored 8 publications receiving 72 citations. Previous affiliations of Holakou Rahmanian include the University of California, Santa Cruz.

Papers
Proceedings Article

Deep Embedding Forest: Forest-based Serving with Deep Embedding Features

TL;DR: In this paper, the authors propose the Deep Embedding Forest (DEF) model, which consists of a number of embedding layers, mapping high-dimensional (hundreds of thousands to millions) and heterogeneous low-level features to lower-dimensional (thousands) vectors, followed by a forest/tree layer.
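
As a rough illustration of the serving idea (a sketch under assumed sizes and toy data, not the paper's code), the snippet below uses a dense lookup table as a stand-in for the pretrained embedding layers and a gradient-boosted tree ensemble as the forest layer:

    # Minimal sketch of DEF-style serving: dense low-dimensional embeddings of
    # high-dimensional sparse features feed a tree ensemble, so serving can
    # bypass the upper layers of the deep network.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    n, vocab, dim = 1000, 50_000, 16      # assumed sizes, for illustration only

    # Stand-in "embedding layer": a lookup table from sparse feature ids to
    # dense vectors (in DEF these weights come from pretraining a deep model).
    embedding = rng.normal(size=(vocab, dim))
    feature_ids = rng.integers(0, vocab, size=(n, 4))   # 4 sparse fields per example
    X = embedding[feature_ids].reshape(n, -1)           # concatenate field embeddings
    y = rng.integers(0, 2, size=n)                      # toy binary labels

    # "Forest layer": a boosted tree ensemble trained on the embedded features.
    forest = GradientBoostingClassifier(n_estimators=50, max_depth=3).fit(X, y)
    print(forest.predict_proba(X[:3]))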
Posted Content

Online Dynamic Programming

TL;DR: The goal is to develop algorithms whose total average search cost (loss) over all trials is close to the total loss of the best tree chosen in hindsight.
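
The regret goal can be illustrated with the classic Hedge (exponentiated-weights) update over an explicitly enumerated expert set; the paper's contribution is to achieve such guarantees without enumerating the exponentially many trees, by exploiting the dynamic-programming structure. The expert count, horizon, and losses below are assumed toys:

    # Hedge over a tiny set of candidate "trees": total expected loss stays
    # close to the loss of the best fixed tree in hindsight.
    import numpy as np

    rng = np.random.default_rng(1)
    n_experts, T, eta = 5, 200, 0.1            # assumed: 5 trees, 200 trials
    weights = np.ones(n_experts)
    cum_alg, cum_losses = 0.0, np.zeros(n_experts)

    for t in range(T):
        losses = rng.uniform(0, 1, n_experts)  # search cost of each tree this trial
        p = weights / weights.sum()
        cum_alg += p @ losses                  # expected loss of the randomized choice
        cum_losses += losses
        weights *= np.exp(-eta * losses)       # multiplicative-weights update

    print("algorithm:", round(cum_alg, 1), "best tree in hindsight:", round(cum_losses.min(), 1))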
Posted Content

Online Learning of Combinatorial Objects via Extended Formulation

TL;DR: A general framework is developed for converting extended formulations into efficient online algorithms with good relative loss bounds, and applications to a variety of combinatorial objects are presented.
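
In the spirit of that framework, online learning can be run over a polytope instead of the discrete objects themselves. The sketch below (an assumption-laden toy, not the paper's algorithm) performs projected online gradient descent over the probability simplex, the convex hull for picking one of n objects, using the standard sort-based Euclidean projection; the paper targets richer polytopes described by extended formulations:

    import numpy as np

    def project_simplex(v):
        # Euclidean projection onto the probability simplex (standard sort-based method).
        u = np.sort(v)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
        theta = (1.0 - css[rho]) / (rho + 1)
        return np.maximum(v + theta, 0.0)

    rng = np.random.default_rng(2)
    n, T, eta = 10, 200, 0.05                    # assumed sizes and step size
    x = np.ones(n) / n                           # start at the uniform point of the polytope

    for t in range(T):
        loss_vec = rng.uniform(0, 1, n)          # linear loss over the objects this trial
        x = project_simplex(x - eta * loss_vec)  # gradient step, then project back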
Proceedings Article

Online Dynamic Programming

TL;DR: In this paper, the authors consider the problem of repeatedly solving a variant of the same dynamic programming problem in successive trials and develop algorithms whose total average search cost (loss) over all trials is close to the total loss of the best tree chosen in hindsight.
Proceedings Article

Toward Understanding Privileged Features Distillation in Learning-to-Rank

TL;DR: This paper first studies PFD empirically on three public ranking datasets and an industrial-scale ranking problem derived from Amazon’s logs, and shows that PFD outperforms several baselines (no distillation, pretraining-finetuning, self-distillation, and generalized distillation) on all these datasets.
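
A minimal sketch of the PFD setup on assumed toy data: a teacher is trained with regular plus privileged features, and a student that sees only the regular features is fit to the teacher's soft predictions (the model choices here are illustrative, not the paper's production setup):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n = 2000
    X_reg = rng.normal(size=(n, 5))      # features available at serving time
    X_priv = rng.normal(size=(n, 3))     # privileged: available only during training
    w = rng.normal(size=8)
    y = (np.hstack([X_reg, X_priv]) @ w + 0.1 * rng.normal(size=n) > 0).astype(int)

    # Teacher: trained with the privileged features included.
    teacher = LogisticRegression().fit(np.hstack([X_reg, X_priv]), y)
    soft = teacher.predict_proba(np.hstack([X_reg, X_priv]))[:, 1]

    # Student: logistic model on regular features only, fit to the teacher's
    # soft targets by gradient descent on the distillation cross-entropy.
    w_s, b_s = np.zeros(X_reg.shape[1]), 0.0
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(X_reg @ w_s + b_s)))
        g = p - soft                     # gradient of cross-entropy w.r.t. logits
        w_s -= 0.1 * X_reg.T @ g / n
        b_s -= 0.1 * g.mean()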