Author
MengChu Zhou
Other affiliations: Xidian University, Macau University of Science and Technology
Bio: MengChu Zhou is an academic researcher from New Jersey Institute of Technology. The author has contributed to research topics including cloud computing and scheduling (production processes). The author has an h-index of 8 and has co-authored 17 publications receiving 148 citations. Previous affiliations of MengChu Zhou include Xidian University and Macau University of Science and Technology.
Papers
TL;DR: Inspired by the two-layered structure of GSA, four layers (population, iteration-best, personal-best and global-best) are constructed, and hierarchical interactions among them are dynamically implemented in different search stages to greatly improve both the exploration and exploitation abilities of the population.
Abstract: A gravitational search algorithm (GSA) uses gravitational force among individuals to evolve its population. Though GSA is an effective population-based algorithm, it exhibits low search performance and premature convergence. To ameliorate these issues, this work proposes a multi-layered GSA called MLGSA. Inspired by the two-layered structure of GSA, four layers consisting of population, iteration-best, personal-best and global-best layers are constructed. Hierarchical interactions among the four layers are dynamically implemented in different search stages to greatly improve both the exploration and exploitation abilities of the population. A performance comparison between MLGSA and nine existing GSA variants on twenty-nine CEC2017 test functions with low, medium and high dimensions demonstrates that MLGSA is the most competitive one. It is also compared with four particle swarm optimization variants to verify its excellent performance. Moreover, hierarchical interactions are analyzed to illustrate the influence of a complete hierarchy on its performance. The relationship between its population diversity and fitness diversity is analyzed to clarify its search performance. Its computational complexity is given to show its efficiency. Finally, it is applied to twenty-two CEC2011 real-world optimization problems to show its practicality.
123 citations
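To make the baseline concrete, the following is a minimal NumPy sketch of one iteration of the standard two-layer GSA that MLGSA builds its four-layer hierarchy on; the layered interactions themselves are not reproduced. The function name gsa_step and the defaults G0=100 and alpha=20 are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gsa_step(X, V, fitness, t, T, G0=100.0, alpha=20.0, eps=1e-12, rng=None):
    """One iteration of a basic GSA over positions X and velocities V,
    both of shape (N, D); fitness is an (N,) array (minimization)."""
    rng = np.random.default_rng() if rng is None else rng
    N, _ = X.shape

    # Map fitness to masses in [0, 1]: better individuals get larger mass.
    best, worst = fitness.min(), fitness.max()
    m = (worst - fitness) / (worst - best + eps)
    M = m / (m.sum() + eps)

    # Gravitational "constant" decays as the search proceeds.
    G = G0 * np.exp(-alpha * t / T)

    # Acceleration of each agent: randomly weighted pull towards the others.
    acc = np.zeros_like(X)
    for i in range(N):
        diff = X - X[i]                          # (N, D) vectors to the others
        dist = np.linalg.norm(diff, axis=1)      # Euclidean distances
        w = rng.random(N)                        # stochastic force weights
        acc[i] = (G * w * M / (dist + eps)) @ diff   # a_i = sum_j F_ij / M_i

    V = rng.random(X.shape) * V + acc            # inertia plus attraction
    return X + V, V
```

MLGSA would replace the single all-to-all attraction step with interactions restricted to the population, iteration-best, personal-best and global-best layers depending on the search stage.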
TL;DR: In this paper, a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm is proposed, which parallelizes training via a data-splitting strategy and accelerates convergence by integrating momentum effects into the training process.
Abstract: A recommender system (RS) relying on latent factor analysis usually adopts stochastic gradient descent (SGD) as its learning algorithm. However, owing to its serial mechanism, an SGD algorithm suffers from low efficiency and scalability when handling large-scale industrial problems. To address this issue, this study proposes a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm, whose main idea is two-fold: a) implementing parallelization via a novel data-splitting strategy, and b) accelerating the convergence rate by integrating momentum effects into its training process. With it, an MPSGD-based latent factor (MLF) model is achieved, which is capable of performing efficient and high-quality recommendations. Experimental results on four high-dimensional and sparse matrices generated by industrial RSs indicate that, owing to the MPSGD algorithm, the MLF model outperforms existing state-of-the-art models in both computational efficiency and scalability.
108 citations
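For readers unfamiliar with the underlying update, here is a minimal, serial sketch of momentum-accelerated SGD for a latent factor model over (user, item, rating) triples; MPSGD's data-splitting parallelization is omitted, and the function name and hyperparameter values are illustrative assumptions.

```python
import numpy as np

def momentum_sgd_lf(ratings, n_users, n_items, k=20, lr=0.01, reg=0.05,
                    beta=0.9, epochs=10, seed=0):
    """Train user/item factor matrices P, Q on (user, item, rating) triples
    with momentum-accelerated SGD; returns P, Q with r_ui ~ P[u] @ Q[i]."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item latent factors
    vP, vQ = np.zeros_like(P), np.zeros_like(Q)   # momentum buffers

    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]                 # prediction error
            gP = -err * Q[i] + reg * P[u]         # regularized gradients
            gQ = -err * P[u] + reg * Q[i]
            vP[u] = beta * vP[u] - lr * gP        # accumulate momentum
            vQ[i] = beta * vQ[i] - lr * gQ
            P[u] += vP[u]                         # take the momentum step
            Q[i] += vQ[i]
    return P, Q
```

A parallel variant along the lines of MPSGD would run this loop on disjoint splits of the rating data and merge the resulting factors.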
TL;DR: An extended version of a flexible job shop problem that allows the precedence between the operations to be given by an arbitrary directed acyclic graph instead of a linear order is considered.
Abstract: Scheduling of complex manufacturing systems entails complicated constraints such as the mating operational one. Focusing on real settings, this article considers an extended version of the flexible job shop problem that allows the precedence between operations to be given by an arbitrary directed acyclic graph instead of a linear order. In order to obtain a reliable and high-performance schedule in a reasonable time, this article contributes a knowledge-based cuckoo search algorithm (KCSA) to the scheduling field. The proposed knowledge base is initially trained off-line on models, before operation, based on reinforcement learning and hybrid heuristics, so as to store scheduling information and appropriate parameters. In the off-line training phase, the SARSA algorithm is used, for the first time, to build a self-adaptive parameter control scheme for the CS algorithm. In each iteration, the proposed knowledge base selects suitable parameters to ensure the desired diversification and intensification of the population. It is then used to generate new solutions by probability sampling in a designed mutation phase. Moreover, it is updated via feedback information from the search process. Its influence on the KCSA's performance is investigated, and the time complexity of the KCSA is analyzed. The KCSA is validated on benchmark and randomly generated cases. Various simulation experiments and comparisons between it and several popular methods are performed to validate its effectiveness. Note to Practitioners: Complex manufacturing scheduling problems are usually solved via intelligent optimization algorithms. However, most of them are parameter-sensitive, and thus selecting their proper parameters is highly challenging. On the other hand, it is difficult to ensure their robustness since they heavily rely on random mechanisms. In order to deal with the above obstacles, we design a knowledge-based intelligent optimization algorithm. In the proposed algorithm, a reinforcement learning algorithm is used to self-adjust its parameters to tackle the parameter selection issue. Two probability matrices for machine allocation and operation sequencing are built via hybrid heuristics as a guide for searching for a new and efficient assignment scheme. To further improve the performance of our algorithm, a feedback control framework is constructed to ensure the desired state of the population. As a result, our algorithm can obtain a high-quality schedule in a reasonable time to fulfill a real-time scheduling purpose. In addition, it possesses high robustness thanks to the proposed feedback control technique. Simulation results show that the knowledge-based cuckoo search algorithm (KCSA) outperforms some existing algorithms. Hence, it can be readily applied to real manufacturing facility scheduling problems.
52 citations
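The self-adaptive parameter control described above rests on tabular SARSA. Below is a generic sketch of such a controller that learns which parameter setting (action) to apply in which population state; the state/action encoding, the env_step callback and all hyperparameters are assumptions for illustration, not the paper's implementation.

```python
import random

def sarsa_controller(states, actions, env_step, episodes=100,
                     alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular SARSA: learn Q(state, action), where an action would encode one
    candidate parameter setting of the search algorithm and a state a
    discretized description of the population (e.g. its diversity level).
    env_step(state, action) must return (reward, next_state, done)."""
    random.seed(seed)
    Q = {(s, a): 0.0 for s in states for a in actions}

    def pick(s):  # epsilon-greedy action selection
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        s = random.choice(states)
        a = pick(s)
        done = False
        while not done:
            r, s2, done = env_step(s, a)
            a2 = pick(s2)
            # On-policy temporal-difference update.
            Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])
            s, a = s2, a2
    return Q
```

In a KCSA-like setting, the reward would reflect how much the chosen parameters improved the population in that iteration.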
TL;DR: A hybrid probabilistic multiobjective evolutionary algorithm (MOEA) is proposed to optimize the conflicting metrics of accuracy, profit and novelty, and it outperforms some state-of-the-art algorithms by achieving a higher hypervolume value.
Abstract: As big-data-driven complex systems, commercial recommendation systems (RSs) have been widely used by such companies as Amazon and eBay. Their core aim is to maximize total profit, which relies on recommendation accuracy and the profits from recommended items. It is also important for them to treat new items equally over the long run. However, traditional recommendation techniques mainly focus on recommendation accuracy and suffer from a cold-start problem (i.e., new items cannot be recommended). Differing from them, this work designs a multiobjective RS by considering item profit and novelty besides accuracy. Then, a hybrid probabilistic multiobjective evolutionary algorithm (MOEA) is proposed to optimize these conflicting metrics. In it, some specifically designed genetic operators are proposed, and two classical MOEA frameworks are adaptively combined such that it owns their complementary advantages. The experimental results reveal that it outperforms some state-of-the-art algorithms by achieving a higher hypervolume value.
50 citations
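A basic building block of any such MOEA is identifying non-dominated solutions. The sketch below extracts the Pareto front of candidate recommendation lists under maximized accuracy, profit and novelty; the objectives callback is an assumed interface, and the paper's hybrid genetic operators are not reproduced.

```python
def pareto_front(solutions, objectives):
    """Indices of non-dominated solutions when every objective is maximized
    (here: accuracy, profit and novelty of a recommendation list).
    objectives(s) must return a tuple of objective values for solution s."""
    vals = [objectives(s) for s in solutions]
    front = []
    for i, vi in enumerate(vals):
        # i is dominated if some other solution is >= on all objectives
        # and strictly > on at least one.
        dominated = any(
            all(a >= b for a, b in zip(vj, vi)) and
            any(a > b for a, b in zip(vj, vi))
            for j, vj in enumerate(vals) if j != i
        )
        if not dominated:
            front.append(i)
    return front
```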
TL;DR: A multi-objective discrete fruit fly optimization algorithm incorporating a stochastic simulation approach is developed to minimize the expected makespan and total tardiness; it performs better than its peers on all twenty-five instances.
Abstract: Remanufacturing end-of-life (EOL) products is an important approach to yielding great economic and environmental benefits. A remanufacturing process usually contains three shops, i.e., disassembly, processing and assembly shops. EOL products are disassembled into multiple components in a disassembly shop. Reusable components are reprocessed in a processing shop, and reassembled into their corresponding products in an assembly shop. To realize an overall optimization, we have to integrate them together when making decisions. In practice, a decision-maker usually has to optimize multiple criteria such as cost-related and service-oriented objectives. Additionally, we cannot accurately acquire the details of EOL products due to their various usage processes. Therefore, multiple objectives and uncertainty need to be considered simultaneously in an integrated disassembly-reprocessing-reassembly scheduling process. This work investigates a stochastic multi-objective integrated disassembly-reprocessing-reassembly scheduling problem to minimize the expected makespan and total tardiness. To handle this problem, this work develops a multi-objective discrete fruit fly optimization algorithm incorporating a stochastic simulation approach. Its search techniques are designed according to this problem's features from five aspects, i.e., solution representation, heuristic decoding rules, smell-searching, vision-searching, and genetic-searching. Simulation experiments are conducted on twenty-five instances to verify the performance of the proposed approach. Nondominated sorting genetic algorithm II, a bi-objective multi-start simulated annealing method, and a hybrid multi-objective discrete artificial bee colony algorithm are chosen for comparison. By analyzing the results with three performance metrics, we find that the proposed approach performs better than its peers on all twenty-five instances. Specifically, it outperforms them by 6.45%–9.82%, 6.91%–17.64% and 1.19%–2.76% in terms of the three performance metrics, respectively. The results confirm that the proposed approach can effectively and efficiently tackle the investigated problem.
43 citations
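The stochastic-simulation component can be illustrated independently of the fruit fly search: the sketch below estimates the expected makespan and total tardiness of one candidate schedule by Monte Carlo sampling of uncertain processing times. Both callbacks (sample_times, schedule_eval) are assumed interfaces, not the paper's code.

```python
def expected_objectives(schedule_eval, sample_times, n_samples=200):
    """Monte Carlo estimate of (expected makespan, expected total tardiness)
    for one fixed schedule under uncertain processing times.
      sample_times()       -> one random realization of the processing times
      schedule_eval(times) -> (makespan, total_tardiness) under that realization"""
    mk_sum = td_sum = 0.0
    for _ in range(n_samples):
        mk, td = schedule_eval(sample_times())
        mk_sum += mk
        td_sum += td
    return mk_sum / n_samples, td_sum / n_samples
```

Each candidate solution of the multi-objective search would be scored this way before nondominated sorting.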
Cited by
TL;DR: A state-of-the-art optimization method, namely the directional permutation differential evolution algorithm (DPDE), is proposed to tackle the parameter estimation of several kinds of solar PV models, and extensive comparative results show that DPDE outperforms its peers in terms of solution accuracy.
Abstract: Photovoltaic (PV) generation systems are vital to the utilization of sustainable and pollution-free solar energy. However, the parameter estimation of PV systems remains very challenging due to its inherent nonlinear, multi-variable, and multi-modal characteristics. In this paper, we propose a state-of-the-art optimization method, namely the directional permutation differential evolution algorithm (DPDE), to tackle the parameter estimation of several kinds of solar PV models. By fully utilizing the information arising from the search population and the direction of differential vectors, DPDE possesses a strong global exploration ability to jump out of local optima. To verify the performance of DPDE, six groups of experiments based on single, double and triple diode models and PV module models are conducted. Extensive comparative results between DPDE and fifteen other representative algorithms show that DPDE outperforms its peers in terms of solution accuracy. Additionally, statistical results based on Wilcoxon rank-sum and Friedman tests indicate that DPDE is the most robust and best-performing algorithm for the parameter estimation of PV systems.
96 citations
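As a point of reference, the sketch below implements classic DE/rand/1/bin, the baseline that directional-permutation variants such as DPDE modify; the obj callback would map a PV parameter vector, e.g. (Iph, Isd, Rs, Rsh, n) of the single-diode model, to a fitting error. All settings are illustrative assumptions, not the paper's.

```python
import numpy as np

def de_rand_1_bin(obj, bounds, pop_size=50, F=0.5, CR=0.9, iters=300, seed=0):
    """Classic DE/rand/1/bin minimizer. obj maps a parameter vector to a
    scalar error; bounds is a list of (low, high) pairs per parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    X = lo + rng.random((pop_size, lo.size)) * (hi - lo)
    f = np.array([obj(x) for x in X])

    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.choice([k for k in range(pop_size) if k != i],
                                 size=3, replace=False)
            v = np.clip(X[a] + F * (X[b] - X[c]), lo, hi)   # mutation
            cross = rng.random(lo.size) < CR
            cross[rng.integers(lo.size)] = True             # keep >= 1 mutant gene
            u = np.where(cross, v, X[i])                    # binomial crossover
            fu = obj(u)
            if fu <= f[i]:                                  # greedy selection
                X[i], f[i] = u, fu
    best = int(f.argmin())
    return X[best], f[best]
```

DPDE replaces the random choice of difference vectors with directional, permutation-based information from the population; that mechanism is not reproduced here.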
TL;DR: This paper extends the DPFSP by considering the sequence-dependent setup time (SDST), and presents a mathematical model and an iterated greedy algorithm with a restart scheme (IGR), which is the best-performing one among all the algorithms in comparison.
Abstract: The distributed permutation flowshop scheduling problem (DPFSP) has attracted much attention in recent years. In this paper, we extend the DPFSP by considering sequence-dependent setup times (SDST), and present a mathematical model and an iterated greedy algorithm with a restart scheme (IGR). In the IGR, we discard the simulated annealing-like acceptance criterion commonly used in traditional iterated greedy algorithms. A restart scheme with six different operators is proposed to ensure the diversity of the solutions and help the algorithm escape from local optima. Furthermore, to achieve a balance between exploitation and exploration, we introduce an algorithmic control parameter in the IG stage. Additionally, to further improve the performance of the algorithm, we propose two local search methods based on a job block that is built during the evolution process. A detailed design-of-experiments study is carried out to calibrate the parameters of the presented IGR algorithm. The IGR is assessed by comparison with state-of-the-art algorithms from the literature. The experimental results show that the proposed IGR algorithm is the best-performing one among all compared algorithms.
83 citations
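The core destroy-and-rebuild loop of an iterated greedy algorithm is compact; the sketch below shows it for a permutation flowshop with improvement-only acceptance, in line with the paper's decision to drop the simulated annealing-like criterion. The restart scheme, SDST handling and local search of IGR are omitted, and makespan(seq) is an assumed evaluator.

```python
import random

def iterated_greedy(jobs, makespan, d=4, iters=500, seed=0):
    """Iterated greedy for a permutation flowshop: remove d jobs, reinsert
    each at its best position, and accept only improving permutations.
    makespan(seq) evaluates a job sequence."""
    random.seed(seed)
    best = list(jobs)
    best_val = makespan(best)

    for _ in range(iters):
        partial = best[:]
        removed = [partial.pop(random.randrange(len(partial))) for _ in range(d)]
        for job in removed:   # greedy NEH-style reinsertion
            cands = [partial[:k] + [job] + partial[k:]
                     for k in range(len(partial) + 1)]
            partial = min(cands, key=makespan)
        val = makespan(partial)
        if val < best_val:    # improvement-only acceptance
            best, best_val = partial, val
    return best, best_val
```

An IGR-style restart scheme would perturb or rebuild `best` with one of several operators whenever the search stagnates.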
TL;DR: This study proposes a Pointwise mutual information-incorporated and Graph-regularized SNMF (PGS) model, which uses Pointwise Mutual Information to quantify implicit associations among nodes, thereby completing the missing but crucial information among critical nodes in a uniform way.
Abstract: Community detection, aiming at determining correct affiliation of each node in a network, is a critical task of complex network analysis. Owing to its high efficiency, Symmetric and Non-negative Matrix Factorization (SNMF) is frequently adopted to handle this task. However, existing SNMF models mostly focus on a network's first-order topological information described by its adjacency matrix without considering the implicit associations among involved nodes. To address this issue, this study proposes a Pointwise mutual information-incorporated and Graph-regularized SNMF (PGS) model. It uses a) Pointwise Mutual Information to quantify implicit associations among nodes, thereby completing the missing but crucial information among critical nodes in a uniform way; b) graph-regularization to achieve precise representation of local topology, and c) SNMF to implement efficient community detection. Empirical studies on eight real-world social networks generated by industrial applications demonstrate that a PGS model achieves significantly higher accuracy gain in community detection than state-of-the-art community detectors.
78 citations
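Two ingredients of the PGS model, the pointwise-mutual-information view of the adjacency matrix and symmetric NMF, can be sketched as follows. The graph-regularization term is left out, and the damped multiplicative update follows the well-known Ding et al. scheme rather than the paper's exact derivation; all names and defaults are illustrative.

```python
import numpy as np

def ppmi(A, eps=1e-12):
    """Positive pointwise mutual information of an adjacency matrix A,
    exposing implicit associations beyond the observed first-order links."""
    p_ij = A / (A.sum() + eps)                  # joint probabilities
    p_i = p_ij.sum(axis=1, keepdims=True)       # marginals
    pmi = np.log((p_ij + eps) / (p_i @ p_i.T + eps))
    return np.maximum(pmi, 0.0)

def snmf(S, k, iters=200, beta=0.5, seed=0):
    """Symmetric NMF S ~ X X^T via a damped multiplicative update;
    argmax over each row of X gives a community assignment."""
    rng = np.random.default_rng(seed)
    X = rng.random((S.shape[0], k))
    for _ in range(iters):
        num = S @ X
        den = X @ (X.T @ X) + 1e-12
        X *= (1.0 - beta) + beta * num / den    # keeps X non-negative
    return X
```

A typical use would be labels = snmf(ppmi(A), k=4).argmax(axis=1) for a network with adjacency matrix A and four assumed communities.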
TL;DR: In this article, a matching and multi-round allocation (MMA) algorithm is proposed to optimize the makespan and total cost for all submitted tasks subject to security and reliability constraints in multi-cloud systems.
Abstract: Multi-cloud systems have been on the rise. For safety-critical missions, it is important to guarantee their security and reliability. To address trust constraints in a heterogeneous multi-cloud environment, this work proposes a novel scheduling method called matching and multi-round allocation (MMA) to optimize the makespan and total cost for all submitted tasks subject to security and reliability constraints. The method is divided into two phases for task scheduling. The first phase finds the best matching candidate resources for the tasks to meet their preferential demands, including performance, security, and reliability, in a multi-cloud environment; the second iteratively performs multiple rounds of re-allocation to optimize task execution time and cost by minimizing the variance of the estimated completion time. The proposed algorithm, together with the modified cuckoo search (MCS), hybrid chaotic particle search (HCPS), modified artificial bee colony (MABC), max-min, and min-min algorithms, is implemented in CloudSim for simulation. The simulation and experimental results show that our proposed method achieves a shorter makespan, lower cost, higher resource utilization, and a better trade-off between time and economic cost. It is also more stable and efficient.
70 citations
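For context, one of the classic baselines mentioned above, min-min scheduling, fits in a few lines: it repeatedly commits the unscheduled task whose best achievable completion time is smallest. The exec_time(task, vm) estimator is an assumed interface; MMA's matching and multi-round re-allocation phases are not reproduced here.

```python
def min_min(tasks, vms, exec_time):
    """Min-min heuristic: repeatedly schedule the unassigned task whose best
    achievable completion time over all VMs is smallest.
    exec_time(task, vm) -> estimated execution time of task on vm."""
    ready = {vm: 0.0 for vm in vms}        # time at which each VM becomes free
    schedule, pending = {}, set(tasks)
    while pending:
        best_task, best_vm, best_ct = None, None, float("inf")
        for t in pending:
            vm = min(vms, key=lambda v: ready[v] + exec_time(t, v))
            ct = ready[vm] + exec_time(t, vm)
            if ct < best_ct:
                best_task, best_vm, best_ct = t, vm, ct
        schedule[best_task] = best_vm       # commit the cheapest assignment
        ready[best_vm] = best_ct
        pending.remove(best_task)
    return schedule, max(ready.values())    # assignment and resulting makespan
```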
TL;DR: A thorough review of the state-of-the-art methods and applications of voxel-based point cloud representations from a collection of papers in the recent decade is conducted, focusing on the creation and utilization of voxels, as well as the strengths and weaknesses of various methods using voxels.
Abstract: Point clouds acquired through laser scanning and stereo vision techniques have been applied in a wide range of applications, proving to be optimal sources for mapping 3D urban scenes. Point clouds provide 3D spatial coordinates of geometric surfaces, describing the real 3D world with both geometric information and attributes. However, unlike 2D images, raw point clouds are usually unstructured and contain no semantic, geometric, or topological information of objects. This lack of an adequate data structure is a bottleneck for the pre-processing or further application of raw point clouds. Thus, it is generally necessary to organize and structure the 3D discrete points into a higher-level representation, such as voxels. Using voxels to represent discrete points is a common and effective way to organize and structure 3D point clouds. Voxels, similar to pixels in an image, are abstracted 3D units with pre-defined volumes, positions, and attributes, which can be used to structurally represent discrete points in a topologically explicit and information-rich manner. Although methods and algorithms for point clouds in various fields have been frequently reported throughout the last decade, there have been very few reviews summarizing and discussing the voxel-based representation of 3D point clouds in urban scenarios. Therefore, this paper aims to conduct a thorough review of the state-of-the-art methods and applications of voxel-based point cloud representations from a collection of papers in the recent decade. In particular, we focus on the creation and utilization of voxels, as well as the strengths and weaknesses of various methods using voxels. Moreover, we also provide an analysis of the potential of using voxel-based representations in the construction industry. Finally, we provide recommendations on future research directions for voxel-based point cloud representations and their improvements.
61 citations
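The structuring step the review revolves around, mapping raw points onto a regular voxel grid, can be sketched in a few NumPy lines: the function below bins an (N, 3) cloud by voxel index and returns per-voxel centroids. It is a minimal illustration under assumed names, not any specific method surveyed in the paper.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Bin an (N, 3) point cloud into a regular grid of cubic voxels and
    return the occupied voxel indices and the centroid of each voxel."""
    idx = np.floor(points / voxel_size).astype(np.int64)   # voxel index per point
    keys, inverse, counts = np.unique(idx, axis=0,
                                      return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)
    centroids = np.zeros((len(keys), 3))
    np.add.at(centroids, inverse, points)                  # sum points per voxel
    centroids /= counts[:, None]
    return keys, centroids
```

Attributes such as color or intensity could be aggregated per voxel in the same way, which is the basis of the richer voxel structures the review discusses.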