Author
Guoqiang Zhong
Other affiliations: École de technologie supérieure, Chinese Academy of Sciences, École Normale Supérieure
Bio: Guoqiang Zhong is an academic researcher at Ocean University of China. He has contributed to research on deep learning and computer science, has an h-index of 19, and has co-authored 102 publications receiving 1,164 citations. Previous affiliations include École de technologie supérieure and the Chinese Academy of Sciences.
Papers
TL;DR: An overview of the state-of-the-art attention models proposed in recent years is given, and a unified model suitable for most attention structures is defined.
620 citations
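A unified view of attention, such as the one surveyed above, generally decomposes into three steps: score the query against each key, normalize the scores, and take a weighted sum of the values. A minimal sketch of one common instance, scaled dot-product attention (the specific variant, shapes, and toy inputs below are illustrative assumptions, not the survey's exact formulation):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: score, softmax-normalize,
    then take a weighted sum of the value vectors."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # compatibility scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted sum of values

# One query attending over two keys/values (toy numbers):
Q = np.array([[1.0, 0.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0]])
V = np.array([[10.0], [20.0]])
out = attention(Q, K, V)
print(out)  # a value between 10 and 20, pulled toward the better-matching key
```

The query matches the first key more strongly, so the output lands closer to 10 than to 20.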
TL;DR: This letter adopts long short-term memory (LSTM) networks to predict sea surface temperature (SST), making both short-term predictions (one and three days ahead) and long-term predictions (weekly and monthly means); the model's online-updating characteristics are also presented.
Abstract: This letter adopts long short-term memory (LSTM) to predict sea surface temperature (SST), and makes short-term prediction, including one day and three days, and long-term prediction, including weekly mean and monthly mean. The SST prediction problem is formulated as a time series regression problem. The proposed network architecture is composed of two kinds of layers: an LSTM layer and a fully connected dense layer. The LSTM layer is utilized to model the time series relationship. The fully connected layer is utilized to map the output of the LSTM layer to a final prediction. The optimal setting of this architecture is explored by experiments, and the accuracy on the coastal seas of China is reported to confirm the effectiveness of the proposed method. The prediction accuracy is also tested on the SST anomaly data. In addition, the model's online-updating characteristics are presented.
265 citations
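The time-series regression formulation described in the abstract starts from sliding-window preprocessing that turns an SST series into (features, target) pairs, which are then fed to the LSTM. A minimal sketch of that windowing step (the window length and the toy temperatures are illustrative assumptions):

```python
import numpy as np

def make_windows(series, k):
    """Slide a window of k past values over the series to build
    (features, target) pairs for time-series regression."""
    X = np.array([series[i:i + k] for i in range(len(series) - k)])
    y = np.array([series[i + k] for i in range(len(series) - k)])
    return X, y

# Toy daily SST readings (degrees Celsius, synthetic):
sst = np.array([20.1, 20.3, 20.2, 20.5, 20.7, 20.6, 20.8])
X, y = make_windows(sst, 3)
print(X.shape, y.shape)  # (4, 3) (4,)
```

Each row of `X` holds three consecutive past temperatures and the matching entry of `y` is the next day's value; a longer horizon (three days, weekly mean) only changes how the target is formed.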
TL;DR: This paper investigates both traditional feature learning algorithms and state-of-the-art deep learning models, gives a few remarks on the development of data representation learning, and suggests some interesting research directions in this area.
128 citations
TL;DR: Wang et al. propose a novel Generative Adversarial Network (GAN) architecture with a Multi-Layer Perceptron (MLP) as the discriminator and a Long Short-Term Memory (LSTM) network as the generator for forecasting the closing price of stocks.
127 citations
01 Jan 2018
TL;DR: Experimental results show that the novel GAN achieves promising performance in closing-price prediction on real data compared with other machine learning and deep learning models.
Abstract: Deep learning has recently achieved great success in many areas due to its strong capacity for data processing. For instance, it has been widely used in financial areas such as stock market prediction, portfolio optimization, financial information processing and trade execution strategies. Stock market prediction is one of the most popular and valuable areas in finance. In this paper, we propose a novel architecture of Generative Adversarial Network (GAN) with the Multi-Layer Perceptron (MLP) as the discriminator and the Long Short-Term Memory (LSTM) as the generator for forecasting the closing price of stocks. The generator is built with LSTM to mine the data distributions of stocks from given stock market data and generate data in the same distributions, whereas the discriminator, designed with MLP, aims to discriminate between the real stock data and the generated data. We choose the daily data on the S&P 500 Index and several stocks over a wide range of trading days and try to predict the daily closing price. Experimental results show that our novel GAN achieves promising performance in closing-price prediction on real data compared with other machine learning and deep learning models.
93 citations
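The adversarial objective described above can be sketched with single linear layers standing in for the paper's LSTM generator and MLP discriminator (a deliberate simplification, with synthetic data); the point is the two opposing losses, not the architectures:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-ins: in the paper the generator is an LSTM and the
# discriminator an MLP; here both are single linear layers.
def generator(z, w):
    return z @ w             # fake "closing prices" from noise

def discriminator(x, v):
    return sigmoid(x @ v)    # estimated probability that x is real

z = rng.normal(size=(8, 4))                 # noise batch
real = rng.normal(loc=100.0, size=(8, 1))   # synthetic "real" price batch
w = rng.normal(size=(4, 1))                 # generator parameters
v = rng.normal(size=(1, 1))                 # discriminator parameters

fake = generator(z, w)
d_real = discriminator(real, v)
d_fake = discriminator(fake, v)

# Discriminator maximizes log D(real) + log(1 - D(fake));
# generator minimizes log(1 - D(fake)) (non-saturating form shown).
eps = 1e-9
d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
g_loss = -np.mean(np.log(d_fake + eps))
print(d_loss, g_loss)
```

Training alternates gradient steps on `d_loss` (discriminator) and `g_loss` (generator) until the generated series is hard to tell apart from the real one.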
Cited by
3,940 citations
09 Mar 2012
TL;DR: Artificial neural networks (ANNs) constitute a class of flexible nonlinear models designed to mimic biological neural systems, and they have been widely used in computer vision applications.
Abstract: Artificial neural networks (ANNs) constitute a class of flexible nonlinear models designed to mimic biological neural systems. In this entry, we introduce ANNs using familiar econometric terminology and provide an overview of the ANN modeling approach and its implementation methods.
2,069 citations
TL;DR: The concept of ensemble learning is introduced; traditional, novel and state-of-the-art ensemble methods are reviewed; and current challenges and trends in the field are discussed.
Abstract: Ensemble methods are considered the state-of-the-art solution for many machine learning challenges. Such methods improve the predictive performance of a single model by training multiple models and combining their predictions. This paper introduces the concept of ensemble learning, reviews traditional, novel and state-of-the-art ensemble methods, and discusses current challenges and trends in the field.
1,381 citations
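The combination step described in the abstract can be sketched with the simplest rule, majority voting over several classifiers' predictions (a minimal illustration; the survey covers far more sophisticated combination methods):

```python
import numpy as np

def majority_vote(predictions):
    """Combine class predictions from several models by taking,
    for each sample, the most frequently predicted label."""
    preds = np.asarray(predictions)   # shape: (n_models, n_samples)
    return np.array([np.bincount(col).argmax() for col in preds.T])

# Labels from three hypothetical classifiers on four samples:
votes = [[0, 1, 1, 0],
         [0, 1, 0, 0],
         [1, 1, 1, 0]]
print(majority_vote(votes))  # [0 1 1 0]
```

If the individual models err somewhat independently, the vote corrects single-model mistakes, which is the intuition behind the predictive gains the abstract describes.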
TL;DR: A systematic review of the metric learning literature is proposed, highlighting the pros and cons of each approach and presenting a wide range of methods that have recently emerged as powerful alternatives, including nonlinear metric learning, similarity learning and local metric learning.
Abstract: The need for appropriate ways to measure the distance or similarity between data is ubiquitous in machine learning, pattern recognition and data mining, but handcrafting such good metrics for specific problems is generally difficult. This has led to the emergence of metric learning, which aims at automatically learning a metric from data and has attracted a lot of interest in machine learning and related fields for the past ten years. This survey paper proposes a systematic review of the metric learning literature, highlighting the pros and cons of each approach. We pay particular attention to Mahalanobis distance metric learning, a well-studied and successful framework, but additionally present a wide range of methods that have recently emerged as powerful alternatives, including nonlinear metric learning, similarity learning and local metric learning. Recent trends and extensions, such as semi-supervised metric learning, metric learning for histogram data and the derivation of generalization guarantees, are also covered. Finally, this survey addresses metric learning for structured data, in particular edit distance learning, and attempts to give an overview of the remaining challenges in metric learning for the years to come.
671 citations
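The Mahalanobis framework highlighted in the survey learns a positive semidefinite matrix M, often parameterized as M = LᵀL, so that the learned distance is just Euclidean distance after the linear map x ↦ Lx. A minimal sketch (here L is fixed by hand purely for illustration, whereas metric learning would fit it from data):

```python
import numpy as np

def mahalanobis(x, y, L):
    """Mahalanobis-style distance with M = L^T L: equivalent to
    Euclidean distance after applying the linear map x -> L x."""
    d = L @ (x - y)
    return float(np.sqrt(d @ d))

x = np.array([1.0, 2.0])
y = np.array([3.0, 2.0])

print(mahalanobis(x, y, np.eye(2)))            # 2.0: identity L recovers Euclidean
print(mahalanobis(x, y, np.diag([0.5, 1.0])))  # 1.0: L downweights the first axis
```

Learning L amounts to choosing which directions in feature space should count as "close", which is what the pairwise or triplet constraints in metric learning encode.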