Institution

Harbin Institute of Technology

Education · Harbin, China
About: Harbin Institute of Technology is an education organization based in Harbin, China. It is known for its research contributions in the topics of Microstructure and Control theory. The organization has 88,259 authors who have published 109,297 publications, receiving 1,603,393 citations. The organization is also known as HIT.


Papers
Journal Article
TL;DR: Zhang et al. proposed a denoising convolutional neural network (DnCNN) to handle Gaussian denoising with unknown noise level; through residual learning, it implicitly removes the latent clean image in the hidden layers.
Abstract: Discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architectures, learning algorithms, and regularization methods into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from existing discriminative denoising models, which usually train a specific model for additive white Gaussian noise (AWGN) at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that the DnCNN model not only exhibits high effectiveness in several general image denoising tasks but can also be implemented efficiently on GPUs.
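To make the residual learning strategy concrete, here is a minimal PyTorch sketch of a DnCNN-style denoiser. The depth, feature width, and Conv-BN-ReLU layout follow the general recipe in the abstract, but the exact sizes are illustrative assumptions, not the authors' released configuration.

```python
import torch.nn as nn

class DnCNN(nn.Module):
    """DnCNN-style residual denoiser (illustrative sizes, not the paper's exact model)."""
    def __init__(self, channels=1, features=64, depth=17):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1, bias=False),
                       nn.BatchNorm2d(features),   # batch normalization speeds up training
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        # Residual learning: the stack predicts the noise, so subtracting its
        # output from the noisy input yields the clean estimate.
        return noisy - self.body(noisy)
```

Because the network is trained to predict the residual (the noise) rather than the clean image itself, a single model can cover unknown noise levels and related tasks that share a residual structure, such as super-resolution and JPEG deblocking.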

1,446 citations

Journal Article
TL;DR: FFDNet is a fast and flexible denoising convolutional neural network with a tunable noise level map as input, able to handle a wide range of noise levels effectively with a single network.
Abstract: Due to their fast inference and good performance, discriminative learning methods have been widely studied in image denoising. However, these methods mostly learn a specific model for each noise level and require multiple models for denoising images with different noise levels. They also lack the flexibility to deal with spatially variant noise, limiting their application in practical denoising. To address these issues, we present a fast and flexible denoising convolutional neural network, namely FFDNet, with a tunable noise level map as the input. The proposed FFDNet works on downsampled sub-images, achieving a good trade-off between inference speed and denoising performance. In contrast to existing discriminative denoisers, FFDNet enjoys several desirable properties, including: 1) the ability to handle a wide range of noise levels (i.e., [0, 75]) effectively with a single network; 2) the ability to remove spatially variant noise by specifying a non-uniform noise level map; and 3) faster speed than the benchmark BM3D even on CPU, without sacrificing denoising performance. Extensive experiments on synthetic and real noisy images are conducted to evaluate FFDNet in comparison with state-of-the-art denoisers. The results show that FFDNet is effective and efficient, making it highly attractive for practical denoising applications.
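The two ingredients above, downsampled sub-images and a tunable noise level map, can be sketched in a few lines of PyTorch. Everything below is an assumption-laden reading of the abstract, not the authors' released network: the image is split into four sub-images by pixel unshuffling, the noise level map sigma (supplied at the sub-image resolution) is concatenated as an extra channel, and a plain convolutional body processes the result.

```python
import torch
import torch.nn as nn

class FFDNetSketch(nn.Module):
    """FFDNet-style denoiser sketch: sub-image processing with a noise level map."""
    def __init__(self, channels=1, features=64, depth=15):
        super().__init__()
        self.down = nn.PixelUnshuffle(2)     # split the image into 4 sub-images
        self.up = nn.PixelShuffle(2)         # reassemble the full-resolution output
        in_ch = channels * 4 + 1             # sub-image channels + noise level map
        layers = [nn.Conv2d(in_ch, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels * 4, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy, sigma):
        # sigma: (N, 1, H/2, W/2) noise level map; constant for uniform AWGN,
        # non-uniform for spatially variant noise.
        x = torch.cat([self.down(noisy), sigma], dim=1)
        return self.up(self.body(x))
```

Making the noise level an input rather than a training-time constant is what lets one network cover the whole [0, 75] range and spatially variant noise: at test time the user simply supplies the appropriate map.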

1,430 citations

Proceedings Article
01 Sep 2015
TL;DR: A neural network model is introduced to learn vector-based document representations in a unified, bottom-up fashion; its gated recurrent neural network dramatically outperforms a standard recurrent neural network in document modeling for sentiment classification.
Abstract: Document-level sentiment classification remains a challenge: encoding the intrinsic relations between sentences in the semantic meaning of a document. To address this, we introduce a neural network model to learn a vector-based document representation in a unified, bottom-up fashion. The model first learns sentence representations with a convolutional neural network or long short-term memory. Afterwards, the semantics of sentences and their relations are adaptively encoded in the document representation with a gated recurrent neural network. We conduct document-level sentiment classification on four large-scale review datasets from IMDB and the Yelp Dataset Challenge. Experimental results show that: (1) our neural model shows superior performance over several state-of-the-art algorithms; (2) the gated recurrent neural network dramatically outperforms a standard recurrent neural network in document modeling for sentiment classification.
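A minimal PyTorch sketch of this bottom-up composition is given below; the embedding dimension, hidden width, and five-class output are placeholder assumptions, not values from the paper. An LSTM encodes each sentence into a vector, and a gated recurrent network (a GRU here) then composes the sentence vectors into a document representation.

```python
import torch.nn as nn

class HierDocClassifier(nn.Module):
    """Sketch of a bottom-up document model: sentence LSTM + document GRU."""
    def __init__(self, vocab_size, embed_dim=200, hidden=128, num_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.sent_rnn = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.doc_rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_classes)

    def forward(self, docs):
        # docs: (batch, n_sentences, n_words) tensor of word ids
        b, s, w = docs.shape
        words = self.embed(docs.view(b * s, w))   # embed all sentences at once
        _, (h_sent, _) = self.sent_rnn(words)     # last hidden state per sentence
        sents = h_sent[-1].view(b, s, -1)         # (batch, n_sentences, hidden)
        _, h_doc = self.doc_rnn(sents)            # gated composition over sentences
        return self.out(h_doc[-1])                # sentiment class logits
```

The gating in the document-level recurrence is the point of comparison in the paper: it decides how much of each sentence's semantics to carry forward, which is where a standard (ungated) recurrent network struggles on long documents.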

1,379 citations

Proceedings Article
14 Jun 2020
TL;DR: The Efficient Channel Attention (ECA) module uses a local cross-channel interaction strategy without dimensionality reduction, efficiently implemented via 1D convolution; it involves only a handful of parameters while bringing a clear performance gain.
Abstract: Recently, the channel attention mechanism has demonstrated great potential for improving the performance of deep convolutional neural networks (CNNs). However, most existing methods are dedicated to developing more sophisticated attention modules for better performance, which inevitably increases model complexity. To overcome the paradox of the performance-complexity trade-off, this paper proposes an Efficient Channel Attention (ECA) module, which involves only a handful of parameters while bringing a clear performance gain. By dissecting the channel attention module in SENet, we empirically show that avoiding dimensionality reduction is important for learning channel attention, and that appropriate cross-channel interaction can preserve performance while significantly decreasing model complexity. Therefore, we propose a local cross-channel interaction strategy without dimensionality reduction, which can be efficiently implemented via 1D convolution. Furthermore, we develop a method to adaptively select the kernel size of the 1D convolution, determining the coverage of local cross-channel interaction. The proposed ECA module is both efficient and effective; for example, against a ResNet50 backbone, our module adds 80 parameters vs. 24.37M and 4.7e-4 GFLOPs vs. 3.86 GFLOPs, while boosting Top-1 accuracy by more than 2%. We extensively evaluate the ECA module on image classification, object detection, and instance segmentation with ResNet and MobileNetV2 backbones. The experimental results show that our module is more efficient while performing favorably against its counterparts.
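The module is compact enough to sketch in full. The PyTorch code below follows the description in the abstract, including the adaptive kernel-size rule (the kernel grows with the log of the channel count and is rounded to an odd number); treat it as a reading of the paper rather than the reference implementation.

```python
import math
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: 1D convolution across channels, no reduction."""
    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        # Adaptive kernel size: k ~ |log2(C)/gamma + b/gamma|, rounded to odd.
        t = int(abs(math.log2(channels) / gamma + b / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # x: (N, C, H, W). Squeeze each channel to one descriptor, then let every
        # channel interact with its k neighbors via the shared 1D convolution.
        y = self.pool(x).squeeze(-1).transpose(-1, -2)              # (N, 1, C)
        y = self.sigmoid(self.conv(y)).transpose(-1, -2).unsqueeze(-1)
        return x * y                                                # reweight channels
```

Avoiding the squeeze-and-excite bottleneck means no fully connected layers at all; the only learned weights are the k coefficients of each 1D kernel, which is why the parameter overhead quoted above is only 80 for the whole ResNet50 backbone.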

1,378 citations

Journal Article
TL;DR: A basic data-driven design framework with the necessary modifications under various industrial operating conditions is sketched, aiming to offer a reference for process monitoring in large-scale industrial processes.
Abstract: Recently, to ensure the reliability and safety of modern large-scale industrial processes, data-driven methods have been receiving increasing attention, particularly for the purpose of process monitoring. However, the basic data-driven methods also face great challenges under different real operating conditions. In this paper, widely applied data-driven methodologies suggested in the literature for process monitoring and fault diagnosis are surveyed from the application point of view. The major task of this paper is to sketch a basic data-driven design framework with the necessary modifications under various industrial operating conditions, aiming to offer a reference for process monitoring in large-scale industrial processes.
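Among the basic data-driven methods such a framework builds on, principal component analysis (PCA) based monitoring is the standard starting point. The NumPy sketch below is an illustrative example of that family, not code from the paper: a PCA model is fitted on normal operating data, and new samples are scored with Hotelling's T² statistic (variation inside the model subspace) and the SPE/Q statistic (variation in the residual subspace).

```python
import numpy as np

def fit_pca_monitor(X, n_components):
    """Fit a PCA monitoring model on normal operating data X (samples x variables)."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Xs = (X - mu) / sd                        # standardize each process variable
    eigval, eigvec = np.linalg.eigh(np.cov(Xs, rowvar=False))
    order = np.argsort(eigval)[::-1]          # sort components by explained variance
    P = eigvec[:, order][:, :n_components]    # retained loadings
    lam = eigval[order][:n_components]        # retained eigenvalues
    return mu, sd, P, lam

def monitor(x, mu, sd, P, lam):
    """Return Hotelling's T^2 and SPE (Q) statistics for one new sample x."""
    xs = (x - mu) / sd
    t = P.T @ xs                              # scores in the principal subspace
    T2 = float(np.sum(t**2 / lam))            # variation captured by the model
    resid = xs - P @ t                        # part the model cannot explain
    SPE = float(resid @ resid)                # squared prediction error
    return T2, SPE
```

In use, both statistics are compared against control limits estimated from the normal training data, and a sample is flagged as a potential fault when either statistic exceeds its limit; the survey's framework then discusses the modifications such a baseline needs under various industrial operating conditions.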

1,289 citations


Authors


Name                    H-index  Papers  Citations
Jiaguo Yu               178      730     113,300
Lei Jiang               170      2,244   135,205
Gang Chen               167      3,372   149,819
Xiang Zhang             154      1,733   117,576
Hui-Ming Cheng          147      880     111,921
Yi Yang                 143      2,456   92,268
Bruce E. Logan          140      591     77,351
Bin Liu                 138      2,181   87,085
Peng Shi                137      1,371   65,195
Hui Li                  135      2,982   105,903
Lei Zhang               135      2,240   99,365
Jie Liu                 131      1,531   68,891
Lei Zhang               130      2,312   86,950
Zhen Li                 127      1,712   71,351
Kurunthachalam Kannan   126      820     59,886

Network Information
Network Information
Related Institutions (5)
South China University of Technology
69.4K papers, 1.2M citations

95% related

Tianjin University
79.9K papers, 1.2M citations

95% related

Tsinghua University
200.5K papers, 4.5M citations

94% related

University of Science and Technology of China
101K papers, 2.4M citations

94% related

Nanyang Technological University
112.8K papers, 3.2M citations

93% related

Performance Metrics

Number of papers from the institution in previous years:
Year  Papers
2023  383
2022  1,895
2021  10,083
2020  9,817
2019  9,659
2018  8,215