Zhi-Hua Zhou
Researcher at Nanjing University
Publications - 633
Citations - 64307
Zhi-Hua Zhou is an academic researcher at Nanjing University. He has contributed to research topics including semi-supervised learning and artificial neural networks. The author has an h-index of 102, has co-authored 626 publications, and has received 52,850 citations. Previous affiliations of Zhi-Hua Zhou include Michigan State University and Tokyo Institute of Technology.
Papers
Journal ArticleDOI
Unsupervised object discovery and co-localization by deep descriptor transformation
TL;DR: This paper proposes a simple yet effective method, termed Deep Descriptor Transforming (DDT), for evaluating the correlations of descriptors and then obtaining the category-consistent regions, which can accurately locate the common object in a set of unlabeled images, i.e., unsupervised object discovery.
Journal ArticleDOI
An analysis on recombination in multi-objective evolutionary optimization
Chao Qian,Yang Yu,Zhi-Hua Zhou +2 more
TL;DR: The analysis discloses that recombination may accelerate the filling of the Pareto front by recombining diverse solutions and thus help solve multi-objective optimization.
Posted Content
Multi-Label Learning with Global and Local Label Correlation
TL;DR: This paper proposes a new multi-label approach GLOCAL dealing with both the full-label and the missing-label cases, exploiting global and local label correlations simultaneously, through learning a latent label representation and optimizing label manifolds.
Proceedings Article
Pareto ensemble pruning
Chao Qian,Yang Yu,Zhi-Hua Zhou +2 more
TL;DR: This paper investigates solving the two goals explicitly in a bi-objective formulation and proposes the PEP (Pareto Ensemble Pruning) approach, showing that PEP not only achieves significantly better performance than state-of-the-art approaches but also has theoretical support.
Journal ArticleDOI
When Does Cotraining Work in Real Data?
TL;DR: A novel approach is proposed to empirically verify the two assumptions of cotraining given two views, and several methods are designed to split single-view data sets into two views in order to make cotraining work reliably well.