
Hui Li

Researcher at Beihang University

Publications -  99
Citations -  14278

Hui Li is an academic researcher from Beihang University. The author has contributed to research on Evolutionary algorithms and Multi-objective optimization. The author has an h-index of 27, and has co-authored 81 publications receiving 11049 citations. Previous affiliations of Hui Li include the University of Nottingham and the University of Essex.

Papers
Journal ArticleDOI

MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition

TL;DR: Experimental results have demonstrated that MOEA/D with simple decomposition methods outperforms or performs similarly to MOGLS and NSGA-II on multiobjective 0-1 knapsack problems and continuous multiobjective optimization problems.
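The decomposition idea behind MOEA/D is to break a multiobjective problem into many scalar subproblems, each defined by a weight vector. One of the simple decomposition methods the paper refers to is the Tchebycheff approach. The sketch below illustrates that scalarization on hypothetical data; the candidate points, weight vector, and ideal point are made-up values for illustration, not from the paper.

```python
import numpy as np

def tchebycheff(f, weight, z_star):
    """Tchebycheff scalarization: g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|.
    Minimizing g pushes f(x) toward the ideal point z* along direction w."""
    return np.max(weight * np.abs(f - z_star))

# Hypothetical two-objective example: three candidate objective vectors,
# one weight vector, and an ideal point z* (all illustrative values).
z_star = np.array([0.0, 0.0])
weight = np.array([0.5, 0.5])
candidates = np.array([[1.0, 3.0], [2.0, 2.0], [4.0, 0.5]])

# The subproblem keeps whichever candidate minimizes the scalarized value.
scores = [tchebycheff(f, weight, z_star) for f in candidates]
best = candidates[int(np.argmin(scores))]  # -> [2.0, 2.0]
```

In MOEA/D proper, one such subproblem exists per weight vector, and neighboring subproblems (those with nearby weights) exchange solutions during the search.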
Journal ArticleDOI

Multiobjective Optimization Problems With Complicated Pareto Sets, MOEA/D and NSGA-II

TL;DR: The experimental results indicate that MOEA/D could significantly outperform NSGA-II on these test instances, and suggests that decomposition based multiobjective evolutionary algorithms are very promising in dealing with complicated PS shapes.
Journal ArticleDOI

Multiobjective evolutionary algorithms: A survey of the state of the art

TL;DR: This paper surveys the development of MOEAs primarily during the last eight years and covers algorithmic frameworks such as decomposition-based MOEAs (MOEA/Ds), memetic MOEAs, coevolutionary MOEAs, selection and offspring reproduction operators, MOEAs with specific search methods, MOEAs for multimodal problems, and constraint handling.
Proceedings ArticleDOI

The performance of a new version of MOEA/D on CEC09 unconstrained MOP test instances

TL;DR: The new version of MOEA/D has been tested on all the CEC09 unconstrained MOP test instances, and a strategy for allocating computational resources to different subproblems in MOEA/D is proposed.
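The resource-allocation idea is that not all subproblems deserve equal effort: subproblems whose scalarized objective is still improving should be searched more often. A minimal sketch of a utility update of this kind is below, assuming a utility of 1.0 for subproblems still improving and a smooth decay otherwise; the exact formula and threshold here are illustrative assumptions, not a transcription of the paper.

```python
def update_utility(old_scores, new_scores, utilities, threshold=0.001):
    """Per-subproblem utility update (illustrative assumption, not the
    paper's exact rule): subproblems whose scalarized score improved by
    more than `threshold` (relative) keep full utility; others decay."""
    new_utils = []
    for old, new, u in zip(old_scores, new_scores, utilities):
        delta = (old - new) / old if old != 0 else 0.0  # relative improvement
        if delta > threshold:
            new_utils.append(1.0)
        else:
            # decay utility smoothly, proportional to how little it improved
            new_utils.append((0.95 + 0.05 * delta / threshold) * u)
    return new_utils
```

A selection scheme (e.g. tournament selection over utilities) would then pick which subproblems to refine in each generation, concentrating evaluations where progress is still being made.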
Proceedings ArticleDOI

Opposition-based particle swarm algorithm with Cauchy mutation

TL;DR: An Opposition-based PSO (OPSO) is presented to accelerate convergence and avoid premature convergence; it employs opposition-based learning for each particle and applies a dynamic Cauchy mutation to the best particle.
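The two ingredients named in the summary can be sketched briefly. Opposition-based learning evaluates the mirror image of a candidate within its bounds (x' = lower + upper - x) and keeps the better of the two; a Cauchy mutation adds a heavy-tailed random step that occasionally jumps far, helping the best particle escape local optima. The helper names and example bounds below are illustrative, not from the paper.

```python
import math
import random

def opposite(position, lower, upper):
    """Opposition-based learning: reflect each coordinate within its
    bounds, x' = lower + upper - x."""
    return [lo + hi - x for x, lo, hi in zip(position, lower, upper)]

def cauchy_mutate(position, scale=1.0):
    """Add a standard-Cauchy step (via inverse-CDF: tan(pi*(u - 0.5)))
    to each coordinate; heavy tails permit occasional long jumps."""
    return [x + scale * math.tan(math.pi * (random.random() - 0.5))
            for x in position]

# Example: reflect a particle at (1, 2) inside bounds [0, 4] x [0, 10].
mirrored = opposite([1.0, 2.0], [0.0, 0.0], [4.0, 10.0])  # -> [3.0, 8.0]
```

In an OPSO-style loop, the swarm would evaluate both each particle and its opposite, keep the better one, and periodically apply the Cauchy mutation to the global best.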