scispace - formally typeset
Institution

Hong Kong Polytechnic University

Education · Hong Kong, China
About: Hong Kong Polytechnic University is an education organization based in Hong Kong, China. It is known for research contributions in the topics of Tourism and Population. The organization has 29,633 authors who have published 72,136 publications, receiving 1,956,312 citations. The organization is also known as HKPU and PolyU.


Papers
Journal ArticleDOI
10 May 2012 · ACS Nano
TL;DR: The GQDs convert blue light into white light when coated onto a blue light-emitting diode, and their photoluminescence quantum yields were determined to be 7–11%.
Abstract: Glucose-derived water-soluble crystalline graphene quantum dots (GQDs) with an average diameter as small as 1.65 nm (∼5 layers) were prepared by a facile microwave-assisted hydrothermal method. The GQDs exhibit deep ultraviolet (DUV) emission of 4.1 eV, which is the shortest emission wavelength among all solution-based QDs. The GQDs exhibit the typical excitation-wavelength-dependent properties expected in carbon-based quantum dots. However, the emission wavelength is independent of the size of the GQDs. The unique optical properties of the GQDs are attributed to the self-passivated layer on the surface of the GQDs, as revealed by electron energy loss spectroscopy. The photoluminescence quantum yields of the GQDs were determined to be 7–11%. The GQDs are capable of converting blue light into white light when coated onto a blue light-emitting diode.
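The 4.1 eV deep-ultraviolet emission quoted above maps to a wavelength via the standard photon-energy relation λ = hc/E, with hc ≈ 1239.84 eV·nm. A minimal sketch (the function name is illustrative, not from the paper):

```python
# Convert a photon energy in eV to its vacuum wavelength in nm via lambda = h*c / E.
HC_EV_NM = 1239.84  # Planck constant x speed of light, in eV·nm (standard constant)

def ev_to_nm(energy_ev: float) -> float:
    """Return the vacuum wavelength (nm) of a photon with the given energy (eV)."""
    return HC_EV_NM / energy_ev

# The GQDs' 4.1 eV emission corresponds to ~302 nm, squarely in the deep UV.
print(round(ev_to_nm(4.1), 1))  # 302.4
```

This confirms that a 4.1 eV emitter sits well below the ~400 nm edge of the visible range, consistent with the DUV claim.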

1,465 citations

Journal ArticleDOI
TL;DR: Zhang et al. propose a denoising convolutional neural network (DnCNN) that handles Gaussian denoising with unknown noise level and implicitly removes the latent clean image in the hidden layers.
Abstract: Discriminative model learning for image denoising has recently attracted considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architectures, learning algorithms, and regularization methods for image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from existing discriminative denoising models, which usually train a specific model for additive white Gaussian noise (AWGN) at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model not only exhibits high effectiveness in several general image denoising tasks but can also be efficiently implemented by benefiting from GPU computing.
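The residual-learning strategy described in the abstract can be illustrated without any deep-learning framework: rather than mapping a noisy input directly to a clean image, the network predicts the noise residual, and the clean estimate is recovered by subtraction. A toy sketch with a hypothetical oracle standing in for the trained CNN (1-D signal, not the paper's code):

```python
import random

def denoise_residual(noisy, predict_residual):
    """DnCNN-style inference: clean estimate = noisy input - predicted residual."""
    residual = predict_residual(noisy)
    return [y - r for y, r in zip(noisy, residual)]

# Toy 1-D "image" with additive Gaussian noise (AWGN).
random.seed(0)
clean = [float(i % 8) for i in range(16)]
noise = [random.gauss(0.0, 0.5) for _ in range(16)]
noisy = [c + n for c, n in zip(clean, noise)]

# Hypothetical oracle predictor: returns the true noise, standing in for the CNN.
oracle = lambda y: noise

restored = denoise_residual(noisy, oracle)
assert all(abs(r - c) < 1e-9 for r, c in zip(restored, clean))
```

The design point is that the residual (noise) is easier for a deep network to learn than the clean image itself, which is why a perfect residual predictor trivially yields a perfect restoration in this sketch.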

1,446 citations

Journal ArticleDOI
TL;DR: FFDNet is a fast and flexible denoising convolutional neural network with a tunable noise level map as the input, able to handle a wide range of noise levels effectively with a single network.
Abstract: Due to their fast inference and good performance, discriminative learning methods have been widely studied in image denoising. However, these methods mostly learn a specific model for each noise level and require multiple models for denoising images with different noise levels. They also lack the flexibility to deal with spatially variant noise, limiting their use in practical denoising. To address these issues, we present a fast and flexible denoising convolutional neural network, namely FFDNet, with a tunable noise level map as the input. The proposed FFDNet works on downsampled sub-images, achieving a good trade-off between inference speed and denoising performance. In contrast to existing discriminative denoisers, FFDNet enjoys several desirable properties, including: 1) the ability to handle a wide range of noise levels (i.e., [0, 75]) effectively with a single network; 2) the ability to remove spatially variant noise by specifying a non-uniform noise level map; and 3) faster speed than the benchmark BM3D even on CPU without sacrificing denoising performance. Extensive experiments on synthetic and real noisy images are conducted to evaluate FFDNet against state-of-the-art denoisers. The results show that FFDNet is effective and efficient, making it highly attractive for practical denoising applications.
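The two interface details the abstract highlights, reversible downsampling into sub-images and a per-pixel noise level map fed in as an extra channel, can be sketched as plain data preparation. The function names below are illustrative, not the paper's code; a uniform map models AWGN at level sigma, while a non-uniform map would encode spatially variant noise:

```python
def downsample_subimages(image):
    """Split an image into four half-resolution sub-images (a 2x2 pixel shuffle),
    the reversible downsampling FFDNet uses to trade resolution for speed."""
    return [[row[c0::2] for row in image[r0::2]] for r0 in (0, 1) for c0 in (0, 1)]

def make_noise_level_map(height, width, sigma):
    """Uniform noise-level map for AWGN at level sigma, one value per pixel."""
    return [[sigma] * width for _ in range(height)]

def build_ffdnet_input(image, sigma):
    """FFDNet-style input: the four sub-images plus a matching noise-level map."""
    subs = downsample_subimages(image)
    h, w = len(subs[0]), len(subs[0][0])
    return subs + [make_noise_level_map(h, w, sigma)]

img = [[(r * 4 + c) / 16.0 for c in range(4)] for r in range(4)]
channels = build_ffdnet_input(img, sigma=25 / 255.0)  # levels in [0, 75] scaled to [0, 1]
assert len(channels) == 5  # 4 sub-images + 1 noise-level map
assert channels[4][0][0] == 25 / 255.0
```

Because sigma enters the network as data rather than being baked into the weights, one trained model covers the whole [0, 75] range, which is the flexibility the abstract emphasizes.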

1,430 citations

Journal ArticleDOI
TL;DR: The proposed LSTM approach is comprehensively compared to various benchmarks, including state-of-the-art load-forecasting methods, and outperforms the listed rival algorithms in short-term load forecasting for individual residential households.
Abstract: As the power system transitions toward a more intelligent, flexible, and interactive system with higher penetration of renewable energy generation, load forecasting, especially short-term load forecasting for individual electric customers, plays an increasingly essential role in future grid planning and operation. Unlike aggregated residential load at a large scale, forecasting the electric load of a single energy user is fairly challenging due to the high volatility and uncertainty involved. In this paper, we propose a framework based on the long short-term memory (LSTM) recurrent neural network, one of the most popular deep learning techniques, to tackle this challenge. The proposed framework is tested on a publicly available set of real residential smart meter data, and its performance is comprehensively compared to various benchmarks, including state-of-the-art methods in the field of load forecasting. As a result, the proposed LSTM approach outperforms the listed rival algorithms in the task of short-term load forecasting for individual residential households.
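The framework above casts each household's meter readings as sequence-to-one prediction: a window of past readings is the LSTM's input sequence, and the next reading is the target. The data-preparation step can be sketched as follows (window length and load values are illustrative, not from the paper):

```python
def make_windows(series, lookback):
    """Turn a smart-meter load series into (history, next-value) training pairs,
    the supervised framing behind short-term load forecasting with an LSTM."""
    return [
        (series[i:i + lookback], series[i + lookback])
        for i in range(len(series) - lookback)
    ]

# Illustrative half-hourly household load in kW.
half_hourly_load = [0.2, 0.3, 0.8, 1.1, 0.9, 0.4, 0.2, 0.3]
pairs = make_windows(half_hourly_load, lookback=4)
assert len(pairs) == 4
assert pairs[0] == ([0.2, 0.3, 0.8, 1.1], 0.9)
```

The high volatility the abstract mentions shows up directly in these windows: adjacent targets for a single household can swing by a large fraction of the mean, which is what makes the individual-customer task harder than forecasting an aggregate.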

1,415 citations

Journal ArticleDOI
TL;DR: The authors highlight recent progress on single-junction and tandem NFA solar cells and propose research directions toward even higher efficiencies of 15–20% with NFA-based organic photovoltaics.
Abstract: Over the past three years, a particularly exciting and active area of research within the field of organic photovoltaics has been the use of non-fullerene acceptors (NFAs). Compared with fullerene acceptors, NFAs possess significant advantages, including tunability of bandgaps, energy levels, planarity, and crystallinity. To date, NFA solar cells have not only achieved impressive power conversion efficiencies of ~13–14%, but have also shown excellent stability compared with traditional fullerene acceptor solar cells. This Review highlights recent progress on single-junction and tandem NFA solar cells and proposes research directions to achieve even higher efficiencies of 15–20% using NFA-based organic photovoltaics.

1,404 citations


Authors


Name               H-index   Papers   Citations
Jing Wang          184       4046     202769
Xiang Zhang        154       1733     117576
Wei Zheng          151       1929     120209
Rui Zhang          151       2625     107917
Jian Yang          142       1818     111166
Joseph Lau         140       1048     99305
Yu Huang           136       1492     89209
Dacheng Tao        133       1362     68263
Chuan He           130       584      66438
Lei Zhang          130       2312     86950
Ming-Hsuan Yang    127       635      75091
Chao Zhang         127       3119     84711
Yuri S. Kivshar    126       1845     79415
Bin Wang           126       2226     74364
Chi-Ming Che       121       1305     62800
Network Information
Related Institutions (5)
Nanyang Technological University: 112.8K papers, 3.2M citations (93% related)
National University of Singapore: 165.4K papers, 5.4M citations (93% related)
University of Hong Kong: 99.1K papers, 3.2M citations (92% related)
Tsinghua University: 200.5K papers, 4.5M citations (91% related)
University of Waterloo: 93.9K papers, 2.9M citations (90% related)

Performance
Metrics
No. of papers from the Institution in previous years
Year    Papers
2024    1
2023    229
2022    971
2021    6,743
2020    6,207
2019    5,288