
Himanshu Jain

Researcher at Indian Institute of Technology Delhi

Publications -  30
Citations -  6569

Himanshu Jain is an academic researcher from the Indian Institute of Technology Delhi. The author has contributed to research in the topics of optimization problems and computer science, has an h-index of 11, and has co-authored 24 publications receiving 4,366 citations. Previous affiliations of Himanshu Jain include the Indian Institutes of Technology and the Indian Institute of Technology Kanpur.

Papers
Proceedings ArticleDOI

Slice: Scalable Linear Extreme Classifiers Trained on 100 Million Labels for Related Searches

TL;DR: The Slice algorithm is developed, which can be accurately trained on the low-dimensional, dense deep-learning features popularly used to represent queries. It scales efficiently to 100 million labels and 240 million training points, and is found to improve the accuracy of related-search recommendations by 10% compared to state-of-the-art techniques.
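Slice's core idea of training an independent linear classifier per label on that label's positives plus a small set of sampled negatives can be sketched in miniature. The snippet below uses uniform random negative sampling on toy data rather than the negative sampling over dense features that the actual algorithm uses to scale; all names, sizes, and hyperparameters are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_label_classifier(X, positives, n_neg=50, epochs=200, lr=0.5):
    """Logistic regression for one label, trained on its positives plus a
    small sampled set of negatives (a toy stand-in for Slice's sampling)."""
    neg_pool = np.setdiff1d(np.arange(len(X)), positives)
    negatives = rng.choice(neg_pool, size=min(n_neg, len(neg_pool)),
                           replace=False)
    idx = np.concatenate([positives, negatives])
    y = np.concatenate([np.ones(len(positives)), np.zeros(len(negatives))])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X[idx] @ w))   # sigmoid predictions
        w -= lr * X[idx].T @ (p - y) / len(idx)  # gradient step
    return w

# Toy data: 200 points in 16-d dense feature space; the label is held by
# the first 10 points, which form a shifted cluster.
X = rng.normal(size=(200, 16))
X[:10] += 2.0
w = train_label_classifier(X, np.arange(10))
scores = X @ w  # positives should score higher than the rest
```

Training each label against only a sampled subset of negatives is what keeps the per-label cost independent of the full training-set size.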
Proceedings ArticleDOI

Handling many-objective problems using an improved NSGA-II procedure

TL;DR: A reference-point-based many-objective NSGA-II is suggested that emphasizes population members which are non-dominated yet close to a set of well-distributed reference points. The procedure is applied to a number of many-objective test problems having three to ten objectives and compared with a recently suggested EMO algorithm.
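The well-distributed reference points that the procedure relies on are conventionally generated with the Das–Dennis systematic construction: all points on the unit simplex whose coordinates are multiples of 1/p for a chosen number of divisions p. A minimal sketch (the function name and the division count are illustrative):

```python
from itertools import combinations
import numpy as np

def das_dennis(n_obj, divisions):
    """Uniformly spaced reference points on the unit simplex, via a
    stars-and-bars enumeration of compositions of `divisions`."""
    points = []
    for bars in combinations(range(divisions + n_obj - 1), n_obj - 1):
        coords, prev = [], -1
        for b in bars:
            coords.append(b - prev - 1)  # stars between consecutive bars
            prev = b
        coords.append(divisions + n_obj - 2 - prev)  # stars after last bar
        points.append([c / divisions for c in coords])
    return np.array(points)

# 3 objectives with 4 divisions gives C(4+2, 2) = 15 points, each
# summing to 1 (i.e. lying on the unit simplex).
pts = das_dennis(3, 4)
```

Population members are then associated with their nearest reference direction, which is what spreads the surviving solutions across the whole Pareto front.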
Book ChapterDOI

An Improved Adaptive Approach for Elitist Nondominated Sorting Genetic Algorithm for Many-Objective Optimization

TL;DR: In this paper, NSGA-III's reference-point allocation task is made adaptive so that a better distribution of points can be found, and the method is compared with a previous adaptive approach on a number of constrained and unconstrained many-objective optimization problems.
Proceedings ArticleDOI

DeepXML: A Deep Extreme Multi-Label Learning Framework Applied to Short Text Documents

TL;DR: DeepXML decomposes the deep extreme multi-label task into four simpler sub-tasks, each of which can be trained accurately and efficiently; choosing different components for each sub-task generates algorithms with varying trade-offs between accuracy and scalability.
Proceedings Article

Long-tail learning via logit adjustment

TL;DR: This paper revisits the classic idea of logit adjustment based on label frequencies, which encourages a large relative margin between the logits of rare positive versus dominant negative labels, and proposes two techniques for long-tail learning in which the adjustment is either applied post hoc to a trained model or enforced in the loss during training.
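The post-hoc variant amounts to subtracting a scaled log of each class's empirical frequency from its logit, so rare classes gain relative margin without retraining. A minimal sketch with made-up priors and scores (the scaling factor tau is a tuning parameter; tau = 1 recovers the plain log-prior correction):

```python
import numpy as np

# Illustrative long-tailed class priors (label frequencies) and the raw
# logits a trained model produced for one example.
priors = np.array([0.90, 0.09, 0.01])  # head, mid, tail class
logits = np.array([2.0, 1.5, 1.2])
tau = 1.0

# Post-hoc logit adjustment: penalize frequent classes by their log-prior.
adjusted = logits - tau * np.log(priors)

print(int(np.argmax(logits)))    # raw prediction: the head class (0)
print(int(np.argmax(adjusted)))  # adjusted prediction: the tail class (2)
```

The same correction can instead be baked into training by adding the log-priors to the logits inside the softmax cross-entropy loss.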