scispace - formally typeset
Author

Huaqiang Yuan

Bio: Huaqiang Yuan is an academic researcher from Dongguan University of Technology. The author has contributed to research in topics: Computer science & Handover. The author has an h-index of 15 and has co-authored 44 publications receiving 807 citations.

Papers
Journal ArticleDOI
TL;DR: Compared with the state-of-the-art QoS-predictors, BNLFT represents temporal patterns more precisely with high computational efficiency, thereby achieving the most accurate predictions for missing QoS data.
Abstract: Quality-of-service (QoS) data vary over time, making it vital to capture the temporal patterns hidden in such dynamic data for predicting missing ones with high accuracy. However, current latent factor (LF) analysis-based QoS-predictors are mostly defined on static QoS data without consideration of such temporal dynamics. To address this issue, this paper presents a biased non-negative latent factorization of tensors (BNLFT) model for temporal pattern-aware QoS prediction. Its main idea is fourfold: 1) incorporating linear biases into the model for describing QoS fluctuations; 2) constraining the model to be non-negative for describing QoS non-negativity; 3) deducing a single LF-dependent, non-negative, and multiplicative update scheme for training the model; and 4) incorporating an alternating direction method into the model for faster convergence. The empirical studies on two dynamic QoS datasets from real applications show that, compared with the state-of-the-art QoS-predictors, BNLFT represents temporal patterns more precisely with high computational efficiency, thereby achieving the most accurate predictions for missing QoS data.
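The non-negative multiplicative update scheme in idea 3) has a well-known two-dimensional analogue for plain non-negative matrix factorization. Below is a minimal, illustrative NumPy sketch of that simpler case; the bias terms and the temporal (tensor) mode that distinguish BNLFT are deliberately omitted, and the function name and defaults are this sketch's own, not the paper's.

```python
import numpy as np

def nmf_multiplicative(R, rank=4, iters=200, eps=1e-9):
    """Factor a non-negative matrix R ~= P @ Q with multiplicative updates.

    A simplified 2-D analogue of the non-negative update scheme the paper
    extends to tensors; bias terms and the temporal mode are omitted.
    """
    m, n = R.shape
    rng = np.random.default_rng(0)
    P = rng.random((m, rank))
    Q = rng.random((rank, n))
    for _ in range(iters):
        # Multiplicative updates keep both factors non-negative by construction.
        Q *= (P.T @ R) / (P.T @ P @ Q + eps)
        P *= (R @ Q.T) / (P @ Q @ Q.T + eps)
    return P, Q
```

Because every update multiplies by a non-negative ratio, non-negativity never has to be enforced by clipping, which is the same property the paper's single LF-dependent scheme exploits.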

222 citations

Journal ArticleDOI
TL;DR: A novel multiobjective ant colony system based on a co-evolutionary multiple populations for multiple objectives framework is proposed, which adopts two colonies to deal with these two objectives, respectively.
Abstract: Cloud workflow scheduling is significantly challenging due to not only the large scale of workflows but also the elasticity and heterogeneity of cloud resources. Moreover, the pricing model of clouds makes execution time and execution cost two critical issues in scheduling. This paper models cloud workflow scheduling as a multiobjective optimization problem that optimizes both execution time and execution cost. A novel multiobjective ant colony system based on a co-evolutionary multiple-populations-for-multiple-objectives framework is proposed, which adopts two colonies to deal with these two objectives, respectively. Moreover, the proposed approach incorporates the following three novel designs to efficiently deal with the multiobjective challenges: 1) a new pheromone update rule based on a set of nondominated solutions from a global archive to guide each colony to search its optimization objective sufficiently; 2) a complementary heuristic strategy to avoid a colony focusing only on its corresponding single optimization objective, cooperating with the pheromone update rule to balance the search of both objectives; and 3) an elite study strategy to improve the solution quality of the global archive to help further approach the global Pareto front. Experimental simulations are conducted on five types of real-world scientific workflows and consider the properties of the Amazon EC2 cloud platform. The results show that the proposed algorithm outperforms both state-of-the-art multiobjective optimization approaches and constrained optimization approaches.
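The global archive in design 1) rests on standard Pareto dominance over the two objectives (execution time, execution cost). A minimal, illustrative sketch of such an archive follows; it shows the generic mechanism only, not the paper's implementation.

```python
def dominates(a, b):
    """True if solution a is no worse than b on every objective
    (here: execution time, execution cost) and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Insert candidate unless an archived solution dominates it,
    dropping any archived solutions that the candidate dominates."""
    if any(dominates(s, candidate) for s in archive):
        return archive
    return [s for s in archive if not dominates(candidate, s)] + [candidate]

archive = []
for sol in [(10, 5), (8, 7), (12, 3), (9, 4)]:  # (time, cost) pairs
    archive = update_archive(archive, sol)
# archive now holds only the mutually nondominated pairs:
# (8, 7), (12, 3), (9, 4) -- (10, 5) was displaced by (9, 4)
```

In the paper's setting, each colony's pheromone update would then be guided by solutions drawn from such an archive rather than by its own best-so-far alone.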

190 citations

Journal ArticleDOI
TL;DR: The experimental results show that the proposed DSDE algorithm is better than or at least comparable to state-of-the-art multimodal algorithms when evaluated on the benchmark problems from CEC2013, in terms of locating more global optima, obtaining higher-accuracy solutions, and converging faster.
Abstract: The multimodal optimization problem (MMOP), which targets searching for multiple optimal solutions simultaneously, is one of the most challenging problems in optimization. There are two general goals in solving MMOPs. One is to maintain population diversity so as to locate as many global optima as possible, while the other is to increase the accuracy of the solutions found. To achieve these two goals, a novel dual-strategy differential evolution (DSDE) with affinity propagation clustering (APC) is proposed in this paper. The novelties and advantages of DSDE include the following three aspects. First, a dual-strategy mutation scheme is designed to balance exploration and exploitation in generating offspring. Second, an adaptive selection mechanism based on APC is proposed to choose diverse individuals from different optimal regions for locating as many peaks as possible. Third, an archive technique is applied to detect and protect stagnated and converged individuals. These individuals are stored in the archive to preserve the promising solutions found and are reinitialized for exploring more new areas. The experimental results show that the proposed DSDE algorithm is better than or at least comparable to state-of-the-art multimodal algorithms when evaluated on the benchmark problems from CEC2013, in terms of locating more global optima, obtaining higher-accuracy solutions, and converging faster.
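A dual-strategy mutation scheme of the kind described typically pairs an explorative and an exploitative differential evolution operator. As a hedged illustration, here are the two classic DE mutations (DE/rand/1 and DE/best/1) such a scheme could switch between; DSDE's exact operator pairing and switching rule may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_rand_1(pop, i, F=0.5):
    # Explorative strategy: the base vector is a random population member.
    r1, r2, r3 = rng.choice([j for j in range(len(pop)) if j != i], 3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def de_best_1(pop, fitness, i, F=0.5):
    # Exploitative strategy: the base vector is the current best individual
    # (minimization assumed), so offspring concentrate around known peaks.
    best = int(np.argmin(fitness))
    others = [j for j in range(len(pop)) if j not in (i, best)]
    r1, r2 = rng.choice(others, 2, replace=False)
    return pop[best] + F * (pop[r1] - pop[r2])
```

Mixing the two lets early generations roam (rand/1) while later ones refine solution accuracy around located peaks (best/1), matching the paper's two stated goals.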

119 citations

Journal ArticleDOI
TL;DR: Experimental results on two HiDS matrices generated by real recommender systems show that, compared with an LF model with a standard SGD algorithm, an LF model with the extended ones can achieve: 1) higher prediction accuracy for missing data; 2) faster convergence rate; and 3) model diversity.
Abstract: High-dimensional and sparse (HiDS) matrices generated by recommender systems contain rich knowledge regarding various desired patterns like users' potential preferences and community tendency. Latent factor (LF) analysis proves to be highly efficient in extracting such knowledge from an HiDS matrix. Stochastic gradient descent (SGD) is a highly efficient algorithm for building an LF model. However, current LF models mostly adopt a standard SGD algorithm. Can SGD be extended in various ways to improve the resultant models' convergence rate and prediction accuracy for missing data? Are such SGD extensions compatible with an LF model? To answer these questions, this paper carefully investigates eight extended SGD algorithms to propose eight novel LF models. Experimental results on two HiDS matrices generated by real recommender systems show that, compared with an LF model with a standard SGD algorithm, an LF model with the extended ones can achieve: 1) higher prediction accuracy for missing data; 2) faster convergence rate; and 3) model diversity.
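The standard SGD baseline that the eight extensions build on minimizes regularized squared error over only the observed entries of the HiDS matrix. A minimal sketch, with hyperparameter values chosen for illustration rather than taken from the paper:

```python
import numpy as np

def sgd_lf(entries, n_users, n_items, rank=5, lr=0.05, reg=0.02, epochs=300):
    """Train latent factor matrices P, Q on observed (user, item, rating)
    triples with plain SGD on the regularized squared error. The extensions
    studied in the paper (e.g., momentum or adaptive learning rates) would
    replace the two update lines below."""
    rng = np.random.default_rng(0)
    P = 0.1 * rng.random((n_users, rank))
    Q = 0.1 * rng.random((n_items, rank))
    for _ in range(epochs):
        for u, i, r in entries:
            err = r - P[u] @ Q[i]          # error on one observed entry
            pu = P[u].copy()               # keep old value for Q's update
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q
```

Iterating over observed triples only is what makes SGD practical on HiDS data: cost scales with the number of known entries, not with the full matrix size.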

115 citations

Journal ArticleDOI
TL;DR: A two-layer distributed CC (dCC) architecture with adaptive computing resource allocation is proposed for large-scale optimization, in which two different conformance policies are designed to help optimizers use the assigned computing resources efficiently.
Abstract: By introducing the divide-and-conquer strategy, cooperative co-evolution (CC) has been successfully employed by many evolutionary algorithms (EAs) to solve large-scale optimization problems. In practice, it is common that different subcomponents of a large-scale problem make imbalanced contributions to the global fitness. Thus, how to exploit such imbalance and concentrate effort on optimizing important subcomponents becomes an important issue for improving the performance of cooperative co-EAs, especially in distributed computing environments. In this paper, we propose a two-layer distributed CC (dCC) architecture with adaptive computing resource allocation for large-scale optimization. The first layer is the dCC model, which takes charge of calculating the importance of subcomponents and allocating resources accordingly. An effective allocation algorithm is designed that can adaptively allocate computing resources based on a periodic contribution-calculating method. The second layer is the pool model, which takes charge of making full use of the imbalanced resource allocation. Within this layer, two different conformance policies are designed to help optimizers use the assigned computing resources efficiently. Empirical studies show that the two conformance policies and the computing resource allocation algorithm are effective, and the proposed distributed architecture possesses high scalability and efficiency.
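The first-layer idea of allocating computing resources in proportion to measured subcomponent contributions can be sketched roughly as below. This is a generic proportional-allocation illustration under assumed inputs; the paper's periodic contribution calculation and policies are more involved.

```python
def allocate(contributions, total_slots, floor=1):
    """Split total_slots computing slots across subcomponents in proportion
    to their measured fitness contributions, with a minimum floor so that
    currently stagnant subcomponents are still probed occasionally."""
    n = len(contributions)
    spare = total_slots - floor * n
    total = sum(contributions) or 1.0
    slots = [floor + int(spare * c / total) for c in contributions]
    # Hand any rounding leftover to the largest contributor.
    slots[max(range(n), key=lambda i: contributions[i])] += total_slots - sum(slots)
    return slots

# Example: the dominant subcomponent receives most of the budget.
# allocate([5.0, 1.0, 0.0], total_slots=10) -> [7, 2, 1]
```

The floor term matters because a subcomponent's contribution can change over the run; without it, a temporarily stagnant subcomponent could never regain resources.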

79 citations


Cited by

Journal ArticleDOI
TL;DR: An adaptive local search (LS) starting strategy is proposed, utilizing a quasi-entropy index to address the key issue of when to start LS.
Abstract: A comprehensive learning particle swarm optimizer (CLPSO) embedded with local search (LS) is proposed to pursue higher optimization performance by combining CLPSO's strong global search capability with LS's fast convergence. This paper proposes an adaptive LS starting strategy, utilizing a proposed quasi-entropy index to address the key issue of when to start LS. The changes of the index as the optimization proceeds are analyzed in theory and via numerical tests. The proposed algorithm is tested on multimodal benchmark functions, and a parameter sensitivity analysis is performed to demonstrate its robustness. The comparison results reveal an overall higher convergence rate and accuracy than those of CLPSO and state-of-the-art particle swarm optimization variants.
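The quasi-entropy index is the paper's own construction, so no attempt is made to reproduce it here. As a rough illustration of the underlying idea (an entropy-style measure of how concentrated the swarm has become, low values suggesting it is time to start LS), one could bin the swarm's fitness values and compute a Shannon entropy:

```python
import math
from collections import Counter

def swarm_entropy(fitnesses, bins=10):
    """Shannon entropy of the swarm's fitness distribution: high when the
    swarm is spread out (keep searching globally), low once it has
    concentrated (a plausible moment to trigger local search).
    Illustrative only -- not the paper's quasi-entropy index."""
    lo, hi = min(fitnesses), max(fitnesses)
    if hi == lo:
        return 0.0  # fully converged swarm
    width = (hi - lo) / bins
    counts = Counter(min(int((f - lo) / width), bins - 1) for f in fitnesses)
    n = len(fitnesses)
    return -sum(c / n * math.log(c / n) for c in counts.values())
```

A starting strategy would then monitor this quantity each generation and launch LS once it drops below a threshold, rather than starting LS at a fixed iteration.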

288 citations

Journal ArticleDOI
TL;DR: Particle swarm optimization (PSO) is a metaheuristic global optimization paradigm that has gained prominence in the last two decades due to its ease of application in unsupervised, complex multidimensional problems which cannot be solved using traditional deterministic algorithms as discussed by the authors.
Abstract: Particle Swarm Optimization (PSO) is a metaheuristic global optimization paradigm that has gained prominence in the last two decades due to its ease of application in unsupervised, complex multidimensional problems which cannot be solved using traditional deterministic algorithms. The canonical particle swarm optimizer is based on the flocking behavior and social co-operation of birds and fish schools and draws heavily from the evolutionary behavior of these organisms. This paper serves to provide a thorough survey of the PSO algorithm with special emphasis on the development, deployment and improvements of its most basic as well as some of the state-of-the-art implementations. Concepts and directions on choosing the inertia weight, constriction factor, cognition and social weights and perspectives on convergence, parallelization, elitism, niching and discrete optimization as well as neighborhood topologies are outlined. Hybridization attempts with other evolutionary and swarm paradigms in selected applications are covered and an up-to-date review is put forward for the interested reader.
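The canonical update the survey starts from combines an inertia term with cognitive and social pulls. A minimal sketch, using the widely cited constriction-derived parameter values (w = 0.729, c1 = c2 = 1.494) as defaults:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.494, c2=1.494, rng=None):
    """One canonical PSO update: inertia (w) times the old velocity, a
    cognitive pull toward each particle's personal best, and a social pull
    toward the global best. x, v, pbest are (n_particles, dim) arrays;
    gbest is a (dim,) array broadcast across the swarm."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```

The survey's discussion of inertia weight, constriction, and cognition/social weights all concerns the three coefficients in this single velocity line; neighborhood topologies change only where gbest comes from.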

260 citations

Journal ArticleDOI
TL;DR: This work proposes a coevolutionary particle swarm optimization with a bottleneck objective learning (BOL) strategy for many-objective optimization, and develops a solution reproduction procedure with both an elitist learning strategy and a juncture learning strategy to improve the quality of archived solutions.
Abstract: The application of multiobjective evolutionary algorithms to many-objective optimization problems often faces challenges in terms of diversity and convergence. On the one hand, with a limited population size, it is difficult for an algorithm to cover different parts of the whole Pareto front (PF) in a large objective space. The algorithm tends to concentrate only on limited areas. On the other hand, as the number of objectives increases, solutions easily have poor values on some objectives, which can be regarded as poor bottleneck objectives that restrict solutions’ convergence to the PF. Thus, we propose a coevolutionary particle swarm optimization with a bottleneck objective learning (BOL) strategy for many-objective optimization. In the proposed algorithm, multiple swarms coevolve in distributed fashion to maintain diversity for approximating different parts of the whole PF, and a novel BOL strategy is developed to improve convergence on all objectives. In addition, we develop a solution reproduction procedure with both an elitist learning strategy (ELS) and a juncture learning strategy (JLS) to improve the quality of archived solutions. The ELS helps the algorithm to jump out of local PFs, and the JLS helps to reach out to the missing areas of the PF that are easily missed by the swarms. The performance of the proposed algorithm is evaluated using two widely used test suites with different numbers of objectives. Experimental results show that the proposed algorithm compares favorably with six other state-of-the-art algorithms on many-objective optimization.

203 citations

Journal ArticleDOI
TL;DR: The experimental results show that VS-CCPSO has the capability of obtaining good feature subsets, suggesting its competitiveness for tackling FS problems with high dimensionality.
Abstract: Evolutionary feature selection (FS) methods face the challenge of the "curse of dimensionality" when dealing with high-dimensional data. Focusing on this challenge, this article studies a variable-size cooperative coevolutionary particle swarm optimization algorithm (VS-CCPSO) for FS. The proposed algorithm employs the idea of "divide and conquer" from the cooperative coevolutionary approach, but several newly developed problem-guided operators/strategies make it more suitable for FS problems. First, a space division strategy based on feature importance is presented, which can classify relevant features into the same subspace at a low computational cost. Following that, an adaptive adjustment mechanism for subswarm size is developed to maintain an appropriate size for each subswarm, with the purpose of saving the computational cost of evaluating particles. Moreover, a particle deletion strategy based on fitness-guided binary clustering and a particle generation strategy based on feature importance and crossover are both designed to ensure the quality of particles in the subswarms. We apply VS-CCPSO to 12 typical datasets and compare it with six state-of-the-art methods. The experimental results show that VS-CCPSO is capable of obtaining good feature subsets, suggesting its competitiveness for tackling FS problems with high dimensionality.
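PSO-based FS methods commonly decode each particle's continuous position into a binary feature-selection mask. As a generic sketch of that decode step (the sigmoid rule standard in binary PSO variants; VS-CCPSO's variable-size subswarm machinery sits on top of something like this, and the function name here is this sketch's own):

```python
import numpy as np

def particle_to_mask(position, rng=None):
    """Decode a continuous particle position into a 0/1 feature-selection
    mask: a larger position component makes the corresponding feature more
    likely to be kept. Common to binary PSO variants; illustrative only."""
    rng = rng or np.random.default_rng(0)
    prob = 1.0 / (1.0 + np.exp(-np.asarray(position)))   # sigmoid per feature
    return (rng.random(prob.shape) < prob).astype(int)   # stochastic threshold
```

The fitness of a particle is then the quality (e.g., classification accuracy) of the feature subset its mask selects, which is why reducing the number of particle evaluations, as VS-CCPSO's adaptive subswarm sizing does, directly cuts the dominant cost.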

193 citations