Himanshu Jain

Researcher at Indian Institute of Technology Delhi

Publications - 30
Citations - 6569

Himanshu Jain is an academic researcher from the Indian Institute of Technology Delhi. The author has contributed to research in the topics of optimization problems and computer science, has an h-index of 11, and has co-authored 24 publications receiving 4366 citations. Previous affiliations of Himanshu Jain include the Indian Institutes of Technology and the Indian Institute of Technology Kanpur.

Papers
Journal ArticleDOI

An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints

TL;DR: A reference-point-based many-objective evolutionary algorithm is suggested that emphasizes population members which are nondominated yet close to a set of supplied reference points; it is found to produce satisfactory results on all problems considered in this paper.
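As a rough illustration of the reference-point idea in this abstract, the association of normalized population members to reference lines by perpendicular distance can be sketched as follows (not the authors' implementation; the function name and the assumption of pre-normalized objectives are mine):

```python
import numpy as np

def associate_to_reference_points(objectives, ref_points):
    """Associate each normalized objective vector with its nearest
    reference line (through the origin), measured by perpendicular distance.

    objectives : (n_solutions, n_objectives) array, assumed already normalized
    ref_points : (n_refs, n_objectives) array of reference points on the simplex
    Returns (assigned_ref_index, perpendicular_distance) per solution.
    """
    # Unit direction of each reference line through the origin.
    dirs = ref_points / np.linalg.norm(ref_points, axis=1, keepdims=True)

    # Projection length of each solution onto each reference direction.
    proj = objectives @ dirs.T                           # (n_sol, n_refs)

    # Perpendicular distance = || f - (f . d) d || for every (solution, line) pair.
    proj_points = proj[:, :, None] * dirs[None, :, :]    # (n_sol, n_refs, n_obj)
    perp = np.linalg.norm(objectives[:, None, :] - proj_points, axis=2)

    assigned = perp.argmin(axis=1)
    return assigned, perp[np.arange(len(objectives)), assigned]
```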
Journal ArticleDOI

An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point Based Nondominated Sorting Approach, Part II: Handling Constraints and Extending to an Adaptive Approach

TL;DR: This paper extends NSGA-III to solve generic constrained many-objective optimization problems and suggests three types of constrained test problems that are scalable to any number of objectives and provide different types of challenges to a many-objective optimizer.
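The feasibility-first comparison that underlies this style of constraint handling can be sketched as below (a simplified illustration of the standard constraint-domination rule such approaches build on, not the paper's full selection procedure):

```python
def constrained_dominates(f_a, cv_a, f_b, cv_b):
    """Constraint-domination check (simplified sketch, minimization assumed).

    f_a, f_b   : objective vectors (sequences of floats)
    cv_a, cv_b : total constraint violations (0 means feasible)
    """
    # A feasible solution dominates an infeasible one.
    if cv_a == 0 and cv_b > 0:
        return True
    if cv_a > 0 and cv_b == 0:
        return False
    # Between two infeasible solutions, the smaller violation wins.
    if cv_a > 0 and cv_b > 0:
        return cv_a < cv_b
    # Between two feasible solutions, fall back to ordinary Pareto dominance.
    no_worse = all(a <= b for a, b in zip(f_a, f_b))
    strictly_better = any(a < b for a, b in zip(f_a, f_b))
    return no_worse and strictly_better
```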
Proceedings Article

Sparse local embeddings for extreme multi-label classification

TL;DR: The SLEEC classifier is developed for learning a small ensemble of local distance-preserving embeddings which can accurately predict infrequently occurring (tail) labels, and it makes significantly more accurate predictions than state-of-the-art methods, including both embedding-based and tree-based methods.
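A minimal sketch of the kind of embedding-space nearest-neighbour prediction described here (illustrative only; the variable names and the single-cluster, single-embedding simplification are assumptions, not SLEEC's actual ensemble):

```python
import numpy as np

def embedding_knn_predict(z_test, Z_train, Y_train, k=10, top=5):
    """Predict labels for one test point by voting among its k nearest
    neighbours in a learned low-dimensional embedding space.

    z_test  : (d,) embedding of the test point (e.g. via a learned regressor)
    Z_train : (n, d) embeddings of the training points
    Y_train : (n, L) binary label matrix (dense for simplicity)
    """
    # Find the k nearest training embeddings.
    dists = np.linalg.norm(Z_train - z_test, axis=1)
    nn = np.argsort(dists)[:k]

    # Score each label by how often it occurs among the neighbours,
    # then return the highest-scoring labels.
    scores = Y_train[nn].sum(axis=0)
    return np.argsort(-scores)[:top]
```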
Proceedings ArticleDOI

Extreme Multi-label Loss Functions for Recommendation, Tagging, Ranking & Other Missing Label Applications

TL;DR: In this article, the authors propose propensity-scored loss functions for extreme multi-label learning, which prioritize predicting the few relevant labels over the large number of irrelevant ones and provide unbiased estimates of the true loss even when ground-truth labels go missing under arbitrary probabilistic label-noise models.
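The propensity-scored idea can be illustrated with a propensity-scored precision@k metric, where each correctly predicted label is up-weighted by the inverse of its estimated propensity, so that rarely observed labels are not discounted (a sketch; the propensity model itself is specified in the paper and not reproduced here):

```python
import numpy as np

def propensity_scored_precision_at_k(scores, y_true, propensities, k=5):
    """Propensity-scored precision@k for a single instance.

    scores       : (L,) predicted relevance scores
    y_true       : (L,) observed 0/1 ground-truth labels
    propensities : (L,) estimated label propensities in (0, 1]
    """
    # Take the k highest-scoring labels and weight each hit by 1/propensity.
    top_k = np.argsort(-scores)[:k]
    return np.sum(y_true[top_k] / propensities[top_k]) / k
```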
Posted Content

Long-tail learning via logit adjustment

TL;DR: The paper revisits the classic idea of logit adjustment based on label frequencies, either applied post-hoc to a trained model or enforced in the loss during training, to encourage a large relative margin between the logits of rare and dominant labels.
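The post-hoc variant mentioned here amounts to subtracting a scaled log-prior from each class logit before prediction; a minimal sketch (the function name and the use of `tau` as a scaling parameter are my notation):

```python
import numpy as np

def posthoc_logit_adjustment(logits, class_priors, tau=1.0):
    """Adjust logits by the (scaled) log class priors before argmax,
    which enlarges the relative margin for rare classes.

    logits       : (n, C) raw model outputs
    class_priors : (C,) empirical label frequencies, summing to 1
    tau          : scaling parameter for the adjustment strength
    """
    adjusted = logits - tau * np.log(class_priors)
    return adjusted.argmax(axis=1)
```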