Author

Baocai Yin

Other affiliations: Dalian University of Technology
Bio: Baocai Yin is an academic researcher from Beijing University of Technology. He has contributed to research in topics including sparse approximation and cluster analysis. He has an h-index of 20 and has co-authored 308 publications receiving 1566 citations. Previous affiliations of Baocai Yin include Dalian University of Technology.


Papers
Proceedings ArticleDOI
14 Jun 2020
TL;DR: A frequency-based decomposition-and-enhancement model that first learns to recover image objects in the low-frequency layer and then enhances high-frequency details based on the recovered objects; it outperforms state-of-the-art approaches in enhancing practical noisy low-light images.
Abstract: Low-light images typically suffer from two problems. First, they have low visibility (i.e., small pixel values). Second, noise becomes significant and disrupts the image content, due to the low signal-to-noise ratio. Most existing low-light image enhancement methods, however, learn from noise-negligible datasets. They rely on users having good photographic skills in taking images with low noise. Unfortunately, this is not the case for the majority of low-light images. While concurrently enhancing a low-light image and removing its noise is ill-posed, we observe that noise exhibits different levels of contrast in different frequency layers, and it is much easier to detect noise in the low-frequency layer than in the high-frequency one. Inspired by this observation, we propose a frequency-based decomposition-and-enhancement model for low-light image enhancement. Based on this model, we present a novel network that first learns to recover image objects in the low-frequency layer and then enhances high-frequency details based on the recovered image objects. In addition, we have prepared a new low-light image dataset with real noise to facilitate learning. Finally, we have conducted extensive experiments to show that the proposed method outperforms state-of-the-art approaches in enhancing practical noisy low-light images.
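The frequency decomposition the abstract describes can be illustrated with a minimal sketch: blur the image to obtain a low-frequency layer, and keep the residual as the high-frequency layer carrying edges and noise. This is a generic illustration, not the authors' network; the Gaussian-blur split and the kernel parameters are assumptions for demonstration.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()  # normalize so constant regions are preserved

def decompose(image, size=5, sigma=1.0):
    """Split a grayscale image into low- and high-frequency layers."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    h, w = image.shape
    low = np.zeros((h, w))
    for i in range(h):          # plain 2-D convolution for clarity
        for j in range(w):
            low[i, j] = (padded[i:i + size, j:j + size] * k).sum()
    high = image - low          # residual holds fine detail and noise
    return low, high
```

By construction, summing the two layers reconstructs the input exactly, which is what lets a network process them separately and recombine the results.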

167 citations

Journal ArticleDOI
TL;DR: A graph network is introduced and an optimized graph convolution recurrent neural network is proposed for traffic prediction, in which the spatial information of the road network is represented as a graph; the method outperforms state-of-the-art traffic prediction approaches.
Abstract: Traffic prediction is a core problem in intelligent transportation systems and has broad applications in transportation management and planning. The main challenge in this field is how to efficiently exploit the spatial and temporal information of traffic data. Recently, various deep learning methods, such as convolutional neural networks (CNNs), have shown promising performance in traffic prediction. However, they sample traffic data on regular grids as the input of the CNN, which destroys the spatial structure of the road network. In this paper, we introduce a graph network and propose an optimized graph convolution recurrent neural network for traffic prediction, in which the spatial information of the road network is represented as a graph. Additionally, unlike most current methods, which use a simple and empirical spatial graph, the proposed method learns an optimized graph in a data-driven way during the training phase, revealing the latent relationships among road segments from the traffic data. Lastly, the proposed method is evaluated on three real-world case studies, and the experimental results show that it outperforms state-of-the-art traffic prediction methods.
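The core idea of a data-driven graph can be sketched as a graph-convolution layer whose adjacency matrix is itself a trainable parameter, initialized from a prior road graph. This is a hedged illustration of the general technique, not the paper's architecture; the class name and initialization scheme are invented for the example.

```python
import numpy as np

class LearnedGraphConv:
    """One graph-convolution step with a trainable adjacency matrix.

    `A` starts from a prior road-network graph but would be updated
    by gradient descent during training, mirroring the data-driven
    graph learning described above.
    """

    def __init__(self, adj_init, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.A = adj_init.astype(float)              # learned parameter
        self.W = rng.normal(0.0, 0.1, (in_dim, out_dim))

    def normalized_adj(self):
        A_hat = self.A + np.eye(len(self.A))         # add self-loops
        d = A_hat.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
        return D_inv_sqrt @ A_hat @ D_inv_sqrt       # symmetric normalization

    def forward(self, X):
        # X: (num_nodes, in_dim) node features, e.g. recent traffic speeds
        return np.maximum(self.normalized_adj() @ X @ self.W, 0.0)  # ReLU
```

In the full model, the output of such a layer would feed a recurrent unit to capture temporal dynamics; here only the spatial step is shown.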

164 citations

Posted ContentDOI
Xueyan Yin1, Genze Wu1, Jinze Wei1, Yanming Shen1, Heng Qi1, Baocai Yin1 
TL;DR: A comprehensive survey on deep learning-based approaches in traffic prediction from multiple perspectives is provided, and the state-of-the-art approaches in different traffic prediction applications are listed.
Abstract: Traffic prediction plays an essential role in intelligent transportation systems. Accurate traffic prediction can assist route planning, guide vehicle dispatching, and mitigate traffic congestion. This problem is challenging due to the complicated and dynamic spatio-temporal dependencies between different regions in the road network. Recently, a significant amount of research effort has been devoted to this area, especially deep learning methods, greatly advancing traffic prediction abilities. The purpose of this paper is to provide a comprehensive survey of deep learning-based approaches to traffic prediction from multiple perspectives. Specifically, we first summarize the existing traffic prediction methods and give a taxonomy. Second, we list the state-of-the-art approaches in different traffic prediction applications. Third, we comprehensively collect and organize widely used public datasets in the existing literature to facilitate other researchers. Furthermore, we give an evaluation and analysis by conducting extensive experiments to compare the performance of different methods on a real-world public dataset. Finally, we discuss open challenges in this field.

123 citations

Journal ArticleDOI
Shi Xiaoming1, Heng Qi1, Yanming Shen1, Genze Wu1, Baocai Yin1 
TL;DR: This paper proposes a novel Attention-based Periodic-Temporal neural Network (APTN), an end-to-end solution for traffic forecasting that captures spatial, short-term, and long-term periodical dependencies.
Abstract: Accurate traffic forecasting is important for enabling intelligent transportation systems in a smart city. This problem is challenging due to the complicated spatial, short-term temporal, and long-term periodical dependencies. Existing approaches have considered these factors in modeling. Most solutions apply CNNs, or their extension Graph Convolution Networks (GCNs), to model the spatial correlation. However, the convolution operator may not adequately model the non-Euclidean pair-wise correlations. In this paper, we propose a novel Attention-based Periodic-Temporal neural Network (APTN), an end-to-end solution for traffic forecasting that captures spatial, short-term, and long-term periodical dependencies. APTN first uses an encoder attention mechanism to model both the spatial and periodical dependencies. Our model can capture these dependencies more easily because every node attends to all other nodes in the network, which brings a regularization effect to the model and avoids overfitting between nodes. Then, a temporal attention is applied to select relevant encoder hidden states across all time steps. We evaluate our proposed model using real-world traffic datasets and observe consistent improvements over state-of-the-art baselines.
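The "every node attends to all other nodes" step can be sketched with a plain scaled dot-product attention over node features. This is a minimal, generic sketch under assumed shapes, not APTN's actual encoder; the function name and the self-attention form (queries and keys both taken from `H`) are assumptions for illustration.

```python
import numpy as np

def node_attention(H):
    """Scaled dot-product attention where each node attends to all nodes.

    H: (num_nodes, dim) feature matrix, e.g. per-sensor traffic embeddings.
    Returns the attended features and the (num_nodes, num_nodes) weights.
    """
    d = H.shape[1]
    scores = H @ H.T / np.sqrt(d)                  # pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ H, weights
```

Each row of `weights` is a distribution over all nodes, so every node's new representation is a convex combination of the whole network, which is the source of the regularization effect mentioned above.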

83 citations

Proceedings ArticleDOI
01 Oct 2019
TL;DR: A new model, Semantics-enhanced Generative Adversarial Network (SEGAN), is presented for fine-grained text-to-image generation, introducing two modules: a Semantic Consistency Module (SCM) and an Attention Competition Module (ACM).
Abstract: This paper presents a new model, Semantics-enhanced Generative Adversarial Network (SEGAN), for fine-grained text-to-image generation. We introduce two modules, a Semantic Consistency Module (SCM) and an Attention Competition Module (ACM), to our SEGAN. The SCM incorporates image-level semantic consistency into the training of the Generative Adversarial Network (GAN), and can diversify the generated images and improve their structural coherence. A Siamese network and two types of semantic similarities are designed to map the synthesized image and the ground-truth image to nearby points in the latent semantic feature space. The ACM constructs adaptive attention weights to differentiate keywords from unimportant words, and improves the stability and accuracy of SEGAN. Extensive experiments demonstrate that our SEGAN significantly outperforms existing state-of-the-art methods in generating photo-realistic images. All source codes and models will be released for comparative study.
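The Siamese mapping of synthesized and ground-truth images "to nearby points" is commonly trained with a contrastive loss; a minimal sketch of that idea follows. This is a generic contrastive formulation, not SEGAN's actual objective or its two semantic similarities; the function, margin value, and pairing rule are assumptions for illustration.

```python
import numpy as np

def contrastive_loss(z_fake, z_real, matched, margin=1.0):
    """Contrastive loss on a pair of latent embeddings.

    z_fake:  embedding of a synthesized image
    z_real:  embedding of a ground-truth image
    matched: True if both images correspond to the same text description

    Matched pairs are pulled together; mismatched pairs are pushed
    at least `margin` apart in the latent semantic space.
    """
    d = np.linalg.norm(z_fake - z_real)
    if matched:
        return d ** 2
    return max(margin - d, 0.0) ** 2
```

Averaged over matched and mismatched pairs, this loss shapes the latent space so that semantic closeness, rather than pixel similarity, determines distance.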

74 citations


Cited by
Christopher M. Bishop1
01 Jan 2006
TL;DR: A textbook covering probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, sequential data, and combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

01 Jan 2006

3,012 citations

Proceedings Article
01 Jan 1999

2,010 citations