Author

Zhongyang Han

Bio: Zhongyang Han is an academic researcher from Dalian University of Technology. The author has contributed to research in the topics of Scheduling (production processes) and Granular computing. The author has an h-index of 9 and has co-authored 30 publications receiving 306 citations.

Papers
Journal ArticleDOI
TL;DR: This paper reviews state-of-the-art developments in deep learning for time series prediction and, based on modeling from the perspective of conditional or joint probability, categorizes them into discriminative, generative, and hybrid models.
Abstract: In order to approximate the underlying process of temporal data, time series prediction has been a hot research topic for decades. Developing predictive models plays an important role in interpreting complex real-world elements. With the sharp increase in the quantity and dimensionality of data, new challenges, such as extracting deep features and recognizing deep latent patterns, have emerged, demanding novel approaches and effective solutions. Deep learning, composed of multiple processing layers to learn with multiple levels of abstraction, is now commonly deployed to overcome these newly arisen difficulties. This paper reviews state-of-the-art developments in deep learning for time series prediction. Based on modeling from the perspective of conditional or joint probability, we categorize them into discriminative, generative, and hybrid models. Experiments are conducted on both benchmarks and real-world data to evaluate the performance of representative deep learning-based prediction methods. Finally, we conclude with comments on possible future perspectives and ongoing challenges in time series prediction.
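In the review's terminology, a "discriminative" predictor directly models the next value conditioned on a window of past observations. As a minimal illustration (not from the paper), the sketch below fits a linear autoregressive model by least squares over lagged windows:

```python
import numpy as np

def make_windows(series, window):
    """Stack lagged windows X and next-step targets y from a 1-D series."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

def fit_ar(series, window=4):
    # Solve min_w ||Xw - y||^2; w are the autoregressive coefficients.
    X, y = make_windows(series, window)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_next(series, coef):
    # Next-step prediction conditioned on the most recent window.
    return float(series[-len(coef):] @ coef)

# Usage: a noiseless linear trend is extrapolated exactly.
t = np.arange(50, dtype=float)
coef = fit_ar(t, window=4)
print(round(predict_next(t, coef), 3))  # → 50.0
```

Deep discriminative models in the survey (e.g., recurrent networks) replace the linear map with a learned nonlinear one, but the conditional-probability framing is the same.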

142 citations

Journal ArticleDOI
Zhiming Lv, Linqing Wang, Zhongyang Han, Jun Zhao, Wei Wang
TL;DR: Experimental studies on a number of benchmark test problems and on parameter determination for multi-input multi-output least squares support vector machines are given; the results demonstrate promising performance of the proposed algorithm compared with other representative multi-objective particle swarm optimization algorithms.
Abstract: For multi-objective optimization problems, the particle swarm optimization (PSO) algorithm generally needs a large number of fitness evaluations to obtain the Pareto optimal solutions. However, it becomes substantially time-consuming when handling computationally expensive fitness functions. In order to save computational cost, a surrogate-assisted PSO with Pareto active learning is proposed. In the real physical space (where the objective functions are computationally expensive), PSO is used as an optimizer, and its optimization results are used to construct the surrogate models. In the virtual space, the objective functions are replaced by the cheaper surrogate models, and PSO is viewed as a sampler to produce candidate solutions. To enhance the quality of candidate solutions, a hybrid mutation sampling method based on simulated evolution is proposed, which combines the fast convergence of PSO with mutation to increase diversity. Furthermore, the ε-Pareto active learning (ε-PAL) method is employed to pre-select candidate solutions to guide PSO in the real physical space. However, little prior work has considered how to determine the parameter ε. Therefore, a greedy search method is presented to determine the value of ε, where the number of active samplings is employed as the evaluation criterion of classification cost. Experimental studies involving application to a number of benchmark test problems and parameter determination for multi-input multi-output least squares support vector machines (MLSSVM) are given, in which the results demonstrate promising performance of the proposed algorithm compared with other representative multi-objective particle swarm optimization (MOPSO) algorithms.
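To make the optimizer underlying this approach concrete, here is a minimal single-objective PSO sketch; the paper's method is multi-objective and surrogate-assisted, and the inertia and acceleration values below are common defaults rather than the paper's settings:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # personal best positions
    pbest_f = np.apply_along_axis(f, 1, x)        # personal best fitness
    g = pbest[pbest_f.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Velocity update: inertia + cognitive + social terms.
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Usage: minimize the sphere function, whose optimum is at the origin.
best_x, best_f = pso(lambda z: float(np.sum(z * z)), dim=3)
print(round(best_f, 6))
```

The surrogate-assisted variant runs this loop against cheap surrogate models in the "virtual space" and only occasionally evaluates the expensive true objectives.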

98 citations

Journal ArticleDOI
TL;DR: In this article, a multi-output least squares support vector regressor is proposed, which considers not only the individual fitting error of each tank level but also the combined one, and a particle swarm optimization algorithm is designed to determine the parameters of this model for the sake of improving prediction accuracy.

81 citations

Journal ArticleDOI
TL;DR: A long-term prediction for the energy flows is proposed by using a granular computing-based method that considers industrial-driven semantics and granulates the initial data based on the specificity of manufacturing processes.
Abstract: Sound energy scheduling and allocation is of paramount significance for the current steel industry, and the quantitative prediction of energy media is regarded as the prerequisite for such challenging tasks. In this paper, a long-term prediction for the energy flows is proposed by using a granular computing-based method that considers industry-driven semantics and granulates the initial data based on the specificity of manufacturing processes. When forming information granules on the basis of experimental data, we propose to deal with the unequal-length temporal granules by exploiting dynamic time warping, which becomes instrumental to the realization of the prediction model. The model engages the fuzzy C-means clustering method. To quantify the performance of the proposed method, real-world industrial energy data coming from a steel plant in China are employed. The experimental results demonstrate that the proposed method is superior to some other data-driven methods and capable of satisfying the requirements of practically viable prediction.
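Dynamic time warping (DTW), which the paper uses to compare temporal granules of unequal length, can be sketched with the textbook dynamic-programming recursion (this is illustrative, not the authors' code):

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two 1-D sequences of possibly different lengths."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)  # cumulative-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of match, insertion, and deletion moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Usage: a time-stretched copy of a sequence stays close under DTW
# even though the lengths differ.
x = [0.0, 1.0, 2.0, 3.0]
y = [0.0, 0.0, 1.0, 1.0, 2.0, 3.0]
print(dtw_distance(x, y))  # → 0.0
```

This elastic alignment is what lets granules of different temporal extents be clustered together, e.g., by a DTW-based fuzzy C-means.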

48 citations

Journal ArticleDOI
TL;DR: A nonlinear programming model for oxygen system scheduling is proposed in this study, which concerns not only the practical characteristics of the energy pipeline network, but also the electricity cost acquired by a fitting regression model between the load of air separation units (ASU) and their corresponding electricity consumption.
Abstract: As an essential energy resource in the steel industry, oxygen is widely utilized in many production procedures. With varying demands for oxygen, a gap between generation and consumption always occurs. Therefore, the related optimization and scheduling work, along with the electricity cost to fill the gap, has a great impact on daily production and efficient energy utilization in steel plants. Considering an oxygen system in a steel plant in China, a nonlinear programming model for oxygen system scheduling is proposed in this study, which concerns not only the practical characteristics of the energy pipeline network, but also the electricity cost acquired by a fitting regression model between the load of air separation units (ASU) and their corresponding electricity consumption. A set of constraints is formulated for restricting the practical adjusting capacity and filling the imbalance gap of oxygen. To solve the proposed scheduling model with electricity cost consideration, a particle swarm optimization (PSO) algorithm is then adopted. To verify the effectiveness of the proposed approach, a large number of experiments employing real data from this plant are carried out, both for the fitting regression and the scheduling optimization phases. The results demonstrate that such a practice-based solution successfully resolves the oxygen scheduling problem and simultaneously minimizes the electricity cost, which is beneficial for the enterprise.
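The fitting-regression step described above, relating ASU load to electricity consumption, amounts to a least-squares curve fit. A minimal sketch with synthetic data (the paper fits real plant measurements, and the numbers below are invented for illustration):

```python
import numpy as np

# Synthetic ASU load levels and a linear consumption relation (illustrative only).
load = np.array([20.0, 30.0, 40.0, 50.0, 60.0])
power = 1.5 * load + 10.0

# Least-squares polynomial fit of degree 1: returns [slope, intercept].
coef = np.polyfit(load, power, deg=1)
predict = np.poly1d(coef)

# Usage: predict consumption at an unseen load level.
print(round(float(predict(45.0)), 2))  # → 77.5
```

The fitted curve then supplies the electricity-cost term inside the scheduling model's objective, which the PSO minimizes subject to the pipeline constraints.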

30 citations


Cited by
Journal ArticleDOI
TL;DR: This study provides a survey of state-of-the-art multi-output regression methods, which are categorized as problem transformation and algorithm adaptation methods, and presents the most commonly used performance evaluation measures, publicly available data sets for real-world multi-output regression problems, and open-source software frameworks.
Abstract: In recent years, a plethora of approaches have been proposed to deal with the increasingly challenging task of multi-output regression. This study provides a survey of state-of-the-art multi-output regression methods, which are categorized as problem transformation and algorithm adaptation methods. In addition, we present the most commonly used performance evaluation measures, publicly available data sets for real-world multi-output regression problems, as well as open-source software frameworks. WIREs Data Mining Knowl Discov 2015, 5:216-233. doi: 10.1002/widm.1157
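The simplest member of the "problem transformation" family the survey describes is the single-target method: fit one independent regressor per output dimension. A sketch using plain least squares for each output (illustrative, not the survey's code):

```python
import numpy as np

def fit_single_target(X, Y):
    """Single-target transformation: one least-squares model per output column."""
    return [np.linalg.lstsq(X, Y[:, k], rcond=None)[0] for k in range(Y.shape[1])]

def predict(models, X):
    # Stack each per-output prediction back into a multi-output matrix.
    return np.column_stack([X @ w for w in models])

# Usage: two outputs, each an exact linear function of the inputs.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
Y = np.column_stack([X @ [2.0, 3.0], X @ [-1.0, 4.0]])
models = fit_single_target(X, Y)
print(np.allclose(predict(models, X), Y))  # → True
```

Algorithm adaptation methods instead modify a single model to predict all outputs jointly, which can exploit correlations between outputs that single-target fitting ignores.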

440 citations

Proceedings ArticleDOI
Qingsong Wen, Liang Sun, Fan Yang, Xiaomin Song, Jingkun Gao, Xue Wang, Huan Xu
TL;DR: This paper systematically reviews different data augmentation methods for time series, proposes a taxonomy for the reviewed methods, and provides a structured review of them by highlighting their strengths and limitations.
Abstract: Deep learning has recently performed remarkably well on many time series analysis tasks. The superior performance of deep neural networks relies heavily on a large amount of training data to avoid overfitting. However, the labeled data of many real-world time series applications may be limited, such as classification in medical time series and anomaly detection in AIOps. As an effective way to enhance the size and quality of the training data, data augmentation is crucial to the successful application of deep learning models on time series data. In this paper, we systematically review different data augmentation methods for time series. We propose a taxonomy for the reviewed methods, and then provide a structured review of these methods by highlighting their strengths and limitations. We also empirically compare different data augmentation methods for different tasks including time series anomaly detection, classification, and forecasting. Finally, we discuss and highlight five future directions to provide useful research guidance.
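Two of the simplest time-domain augmentations covered by such surveys, jittering (additive noise) and window slicing (random crops), can be sketched in a few lines of NumPy; the parameter values below are illustrative defaults, not the paper's:

```python
import numpy as np

def jitter(x, sigma=0.03, seed=0):
    """Jittering: add small Gaussian noise to a series."""
    rng = np.random.default_rng(seed)
    return x + rng.normal(0.0, sigma, size=x.shape)

def window_slice(x, crop_len, seed=0):
    """Window slicing: take a random contiguous crop of length crop_len."""
    rng = np.random.default_rng(seed)
    start = rng.integers(0, len(x) - crop_len + 1)
    return x[start:start + crop_len]

# Usage: augment a sine wave; jitter preserves length, slicing shortens it.
x = np.sin(np.linspace(0.0, 6.0, 100))
aug1 = jitter(x)
aug2 = window_slice(x, crop_len=60)
print(aug1.shape == x.shape, len(aug2) == 60)  # → True True
```

More elaborate methods in the taxonomy (e.g., frequency-domain or generative-model-based augmentation) build on the same idea of producing label-preserving variants of the training series.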

260 citations

Journal ArticleDOI
TL;DR: A review of swarm intelligence algorithms can be found in this paper, where the authors highlight the functions and strengths drawn from 127 research publications and briefly describe their successful applications to optimization problems in engineering fields.
Abstract: Swarm intelligence algorithms are a subset of the artificial intelligence (AI) field, which is increasing in popularity for resolving different optimization problems and has been widely utilized in various applications. In the past decades, numerous swarm intelligence algorithms have been developed, including ant colony optimization (ACO), particle swarm optimization (PSO), artificial fish swarm (AFS), bacterial foraging optimization (BFO), and artificial bee colony (ABC). This review surveys the most representative swarm intelligence algorithms in chronological order, highlighting their functions and strengths drawn from 127 research publications. It provides an overview of the various swarm intelligence algorithms and their advanced developments, and briefly describes their successful applications to optimization problems in engineering fields. Finally, opinions and perspectives on the trends and prospects in this relatively new research domain are presented to support future developments.

247 citations

Journal ArticleDOI
TL;DR: The objective is to provide an advanced solution for controlling manufacturing processes and to gain perspective on various dimensions that enable manufacturers to access effective predictive technologies.
Abstract: Smart manufacturing refers to optimization techniques that are implemented in production operations by utilizing advanced analytics approaches. With the widespread increase in deploying industrial internet of things (IIoT) sensors in manufacturing processes, there is a progressive need for optimal and effective approaches to data management. Embracing machine learning and artificial intelligence to take advantage of manufacturing data can lead to efficient and intelligent automation. In this paper, we conduct a comprehensive analysis based on evolutionary computing and neural network algorithms toward making semiconductor manufacturing smart. We propose a dynamic algorithm for gaining useful insights about semiconductor manufacturing processes and for addressing various challenges. We elaborate on the utilization of a genetic algorithm and neural network to propose an intelligent feature selection algorithm. Our objective is to provide an advanced solution for controlling manufacturing processes and to gain perspective on various dimensions that enable manufacturers to access effective predictive technologies.
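Genetic-algorithm feature selection of the kind described here encodes each candidate feature subset as a binary chromosome and evolves the population toward higher-scoring subsets. A minimal sketch follows; the fitness function below is a synthetic stand-in, whereas the paper scores subsets with a neural network:

```python
import numpy as np

def ga_select(fitness, n_features, pop=20, gens=40, seed=0):
    """Evolve binary feature masks; keep elites, apply crossover and mutation."""
    rng = np.random.default_rng(seed)
    P = rng.integers(0, 2, (pop, n_features))        # random initial population
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in P])
        P = P[scores.argsort()[::-1]]                # sort best-first (elitism)
        children = []
        for _ in range(pop // 2):
            a, b = P[rng.integers(0, pop // 2, 2)]   # parents from top half
            cut = rng.integers(1, n_features)        # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_features) < 0.05     # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        P = np.vstack([P[: pop - len(children)], children])
    scores = np.array([fitness(ind) for ind in P])
    return P[scores.argmax()]

# Usage: a toy fitness that rewards selecting features 0-2 and penalizes the rest.
informative = {0, 1, 2}
fit = lambda ind: sum(ind[i] if i in informative else -ind[i] for i in range(len(ind)))
best = ga_select(fit, n_features=8)
print(fit(best))
```

Swapping the toy fitness for a cross-validated model score (e.g., a neural network trained on the masked features) turns this into a wrapper-style feature selector.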

153 citations
