scispace - formally typeset
Author

Xiao Shu

Bio: Xiao Shu is an academic researcher from McMaster University. The author has contributed to research in topics: Backlight & Liquid-crystal display. The author has an h-index of 7 and has co-authored 32 publications receiving 316 citations. Previous affiliations of Xiao Shu include the University of Guelph & Shanghai Jiao Tong University.

Papers
Journal ArticleDOI
TL;DR: In this paper, it was shown that for every fixed integer k, there exists a polynomial-time algorithm for determining whether a P5-free graph admits a k-coloring, and finding one, if it does.
Abstract: The problem of computing the chromatic number of a P5-free graph (a graph which contains no path on 5 vertices as an induced subgraph) is known to be NP-hard. However, we show that for every fixed integer k, there exists a polynomial-time algorithm determining whether or not a P5-free graph admits a k-coloring, and finding one, if it does.
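The decision problem can be stated concretely. Below is a brute-force k-coloring checker, a minimal sketch of the problem the paper solves; it runs in exponential time and is emphatically not the polynomial-time algorithm for P5-free graphs described above:

```python
from itertools import product

def admits_k_coloring(n, edges, k):
    """Return a proper k-coloring of the graph on vertices 0..n-1,
    or None if none exists. Brute force, O(k^n): this illustrates
    the decision problem only, not the paper's polynomial-time
    algorithm for P5-free graphs."""
    for coloring in product(range(k), repeat=n):
        if all(coloring[u] != coloring[v] for u, v in edges):
            return list(coloring)
    return None

# C5, the 5-cycle, is P5-free (its only 5-vertex induced subgraph is
# itself, which is not a path) yet needs 3 colors:
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(admits_k_coloring(5, c5, 2))  # None: an odd cycle has no 2-coloring
```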

160 citations

Journal ArticleDOI
TL;DR: A novel local dimming technique is proposed that achieves the theoretically highest fidelity of intensity reproduction in either the l1 or l2 metric; simulation results demonstrate the superior performance of the proposed algorithm in terms of visual quality and power consumption.
Abstract: Light emitting diode (LED)-backlit liquid crystal displays (LCDs) hold the promise of improving image quality while reducing energy consumption through signal-dependent local dimming. However, most existing local dimming algorithms are motivated by simplicity of implementation and often lack concern for visual quality. To fully realize the potential of LED-backlit LCDs and reduce the artifacts that often occur in current systems, we propose a novel local dimming technique that achieves the theoretically highest fidelity of intensity reproduction in either the l1 or l2 metric. Both exact and fast approximate versions of the optimal local dimming algorithm are proposed. Simulation results demonstrate the superior performance of the proposed algorithm in terms of visual quality and power consumption.
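As a rough illustration of the dimming-plus-compensation loop (not the paper's optimal algorithm), here is a single-zone sketch that sets the LED level with a simple percentile heuristic and then compensates with the LC transmittance; the function name and the percentile parameter are illustrative assumptions:

```python
def dim_zone(pixels, q=0.95):
    """Toy single-zone local dimming: set the LED backlight b to the
    q-th percentile of the zone's target intensities (a common
    heuristic; the paper instead solves for l1/l2-optimal levels),
    then compensate with LC transmittance t = y / b, clipped to [0, 1].
    Targets clipped by a low b lose highlight detail but save power."""
    s = sorted(pixels)
    b = s[min(len(s) - 1, int(q * len(s)))]
    b = max(b, 1e-9)                       # avoid division by zero
    t = [min(y / b, 1.0) for y in pixels]  # LC compensation
    displayed = [b * ti for ti in t]
    return b, displayed
```

With q=1.0 the backlight equals the zone maximum and reproduction is exact; lowering q trades clipped highlights for lower LED power, which is precisely the fidelity/power trade-off the paper optimizes.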

28 citations

Posted Content
TL;DR: A novel deep convolutional encoder-decoder method removes the objectionable reflection by learning a mapping between image pairs with and without reflection; the method significantly outperforms the other tested state-of-the-art techniques.
Abstract: An image of a scene captured through a piece of transparent, reflective material, such as glass, is often spoiled by a superimposed layer of reflection. While separating the reflection from a familiar object in an image is not mentally difficult for humans, it is a challenging, ill-posed problem in computer vision. In this paper, we propose a novel deep convolutional encoder-decoder method to remove the objectionable reflection by learning a mapping between image pairs with and without reflection. For training the neural network, we model the physical formation of reflections in images and synthesize a large number of photo-realistic reflection-tainted images from reflection-free images collected online. Extensive experimental results show that, although the neural network learns only from synthetic data, the proposed method is effective on real-world images and significantly outperforms the other tested state-of-the-art techniques.
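The data-synthesis step can be sketched as blending a blurred, attenuated reflection layer onto a clean background. The blur kernel size and blending weight below are illustrative assumptions; the paper's physical formation model is more careful:

```python
def synthesize_reflection(background, reflection, alpha=0.35, k=3):
    """Toy reflection synthesis: box-blur the reflection layer (glass
    reflections are typically defocused), attenuate it by alpha, and
    add it to the clean background. Images are grayscale nested lists
    in [0, 1]; alpha and k are illustrative, not the paper's values."""
    h, w = len(reflection), len(reflection[0])
    r = k // 2
    blurred = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # box blur with replicated borders
            vals = [reflection[min(max(y + dy, 0), h - 1)]
                              [min(max(x + dx, 0), w - 1)]
                    for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            blurred[y][x] = sum(vals) / len(vals)
    return [[min(1.0, background[y][x] + alpha * blurred[y][x])
             for x in range(w)] for y in range(h)]
```

The synthesized image serves as the network input and the clean background as its target, giving the paired training data the abstract describes.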

27 citations

Posted Content
TL;DR: It is shown that for every fixed integer k, there exists a polynomial-time algorithm determining whether or not a P5-free graph admits a k-coloring, and finding one, if it does.
Abstract: The problem of computing the chromatic number of a $P_5$-free graph is known to be NP-hard. In contrast to this negative result, we show that determining whether or not a $P_5$-free graph admits a $k$-colouring, for each fixed number of colours $k$, can be done in polynomial time. If such a colouring exists, our algorithm produces it.

15 citations

Proceedings ArticleDOI
01 Oct 2019
TL;DR: This research adopts a novel two-stage deep learning strategy that divides the restoration task into the removal of printing artifacts and inverse halftoning; the technique significantly outperforms existing methods in visual quality.
Abstract: A great number of invaluable historical photographs unfortunately only exist in the form of halftone prints in old publications such as newspapers or books. Their original continuous-tone films have long been lost or irreparably damaged. There have been attempts to digitally restore these vintage halftone prints to the original film quality or higher. However, even using powerful deep convolutional neural networks, it is still difficult to obtain satisfactory results. The main challenge is that the degradation process is complex and compounded while little to no real data is available for properly training a data-driven method. In this research, we adopt a novel strategy of two-stage deep learning, in which the restoration task is divided into two stages: the removal of printing artifacts and the inverse of halftoning. The advantage of our technique is that only the simple first stage requires unsupervised training in order to make the combined network generalize on real halftone prints, while the more complex second stage of inverse halftoning can be easily trained with synthetic data. Extensive experimental results demonstrate the efficacy of the proposed technique for real halftone prints; the new technique significantly outperforms the existing ones in visual quality.
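The synthetic training data for the second stage could in principle be generated by halftoning continuous-tone images. The sketch below uses classic Floyd-Steinberg error diffusion; the paper's actual synthesis pipeline may differ:

```python
def halftone(img):
    """Floyd-Steinberg error diffusion: turn a grayscale image (nested
    lists in [0, 1]) into a binary halftone by thresholding each pixel
    and diffusing the quantization error to unvisited neighbors.
    Pairing (halftone(img), img) gives synthetic training data for an
    inverse-halftoning network, as sketched here -- the paper's actual
    halftoning model may differ."""
    h, w = len(img), len(img[0])
    buf = [row[:] for row in img]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y][x] = int(new)
            err = old - new
            # standard Floyd-Steinberg error weights
            if x + 1 < w:
                buf[y][x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                buf[y + 1][x - 1] += err * 3 / 16
            if y + 1 < h:
                buf[y + 1][x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                buf[y + 1][x + 1] += err * 1 / 16
    return out
```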

13 citations


Cited by
Proceedings ArticleDOI
14 Jun 2020
TL;DR: This work models the HDR-to-LDR image formation pipeline as dynamic range clipping, a non-linear mapping from a camera response function, and quantization, and learns specialized CNNs to reverse these steps.
Abstract: Recovering a high dynamic range (HDR) image from a single low dynamic range (LDR) input image is challenging due to missing details in under-/over-exposed regions caused by quantization and saturation of camera sensors. In contrast to existing learning-based methods, our core idea is to incorporate the domain knowledge of the LDR image formation pipeline into our model. We model the HDR-to-LDR image formation pipeline as the (1) dynamic range clipping, (2) non-linear mapping from a camera response function, and (3) quantization. We then propose to learn three specialized CNNs to reverse these steps. By decomposing the problem into specific sub-tasks, we impose effective physical constraints to facilitate the training of individual sub-networks. Finally, we jointly fine-tune the entire model end-to-end to reduce error accumulation. With extensive quantitative and qualitative experiments on diverse image datasets, we demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
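The three-step formation model lends itself to a direct sketch. Here a simple gamma curve stands in for a real camera response function, and the exposure parameter is an illustrative assumption:

```python
def hdr_to_ldr(hdr, exposure=1.0, gamma=1.0 / 2.2, bits=8):
    """The three-step LDR formation model described above, applied to a
    flat list of linear radiance values: (1) dynamic range clipping,
    (2) a non-linear camera response (a plain gamma here, standing in
    for a real CRF), (3) quantization to `bits` bits. Each step loses
    information, which is what the paper's three sub-networks learn
    to reverse."""
    levels = 2 ** bits - 1
    out = []
    for v in hdr:
        v = min(v * exposure, 1.0)      # (1) clipping (saturation)
        v = v ** gamma                  # (2) camera response
        out.append(round(v * levels))   # (3) quantization
    return out
```

Radiance above the clipping point (e.g. 2.0 at unit exposure) collapses to the same code value as 1.0, which is exactly the saturation loss the abstract refers to.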

167 citations

Journal ArticleDOI
Yakun Chang, Cheolkon Jung, Peng Ke, Hyoseob Song, Hwang Jung-Mee
TL;DR: Since automatic CLAHE adaptively enhances contrast in each block while boosting luminance, it is very effective in enhancing dark images and daylight images with strong dark shadows, and it outperforms state-of-the-art methods in terms of visual quality and quantitative measures.
Abstract: We propose automatic contrast-limited adaptive histogram equalization (CLAHE) for image contrast enhancement. We automatically set the clip point for CLAHE based on textureness of a block. Also, we introduce dual gamma correction into CLAHE to achieve contrast enhancement while preserving naturalness. First, we redistribute the histogram of the block in CLAHE based on the dynamic range of each block. Second, we perform dual gamma correction to enhance the luminance, especially in dark regions while reducing over-enhancement artifacts. Since automatic CLAHE adaptively enhances contrast in each block while boosting luminance, it is very effective in enhancing dark images and daylight ones with strong dark shadows. Moreover, automatic CLAHE is computationally efficient, i.e., more than 35 frames/s at 1024×682 resolution, due to the independent block processing for contrast enhancement. Experimental results demonstrate that automatic CLAHE with dual gamma correction achieves good performance in contrast enhancement and outperforms state-of-the-art methods in terms of visual quality and quantitative measures.
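The core contrast-limited equalization step for one block can be sketched as follows. The automatic clip-point selection from textureness and the dual gamma correction are omitted, and the uniform excess redistribution here is the standard CLAHE scheme, which may differ in detail from the paper's dynamic-range-based redistribution:

```python
def clip_limited_equalize(block, clip, bins=256):
    """Contrast-limited histogram equalization for one block of integer
    intensities in [0, bins-1]: clip each histogram bin at `clip`,
    redistribute the excess uniformly, then remap intensities through
    the cumulative histogram. A standard CLAHE building block; the
    paper sets `clip` automatically and adds dual gamma correction."""
    hist = [0] * bins
    for v in block:
        hist[v] += 1
    excess = sum(max(h - clip, 0) for h in hist)   # mass above the clip point
    hist = [min(h, clip) + excess / bins for h in hist]
    cdf, total = [], 0.0
    for h in hist:
        total += h
        cdf.append(total)
    return [round((bins - 1) * cdf[v] / cdf[-1]) for v in block]
```

Clipping the histogram bounds the slope of the mapping, which is what limits noise amplification in flat regions.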

152 citations

Posted Content
TL;DR: This work builds on a recent technique that removes the need for reference data by employing networks with a "blind spot" in the receptive field, and significantly improves two key aspects: image quality and training efficiency.
Abstract: We describe a novel method for training high-quality image denoising models based on unorganized collections of corrupted images. The training does not need access to clean reference images, or explicit pairs of corrupted images, and can thus be applied in situations where such data is unacceptably expensive or impossible to acquire. We build on a recent technique that removes the need for reference data by employing networks with a "blind spot" in the receptive field, and significantly improve two key aspects: image quality and training efficiency. Our result quality is on par with state-of-the-art neural network denoisers in the case of i.i.d. additive Gaussian noise, and not far behind with Poisson and impulse noise. We also successfully handle cases where parameters of the noise model are variable and/or unknown in both training and evaluation data.
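The blind-spot receptive-field constraint can be illustrated with a trivial non-learned stand-in: estimate each pixel only from its neighbors, never from itself. A real blind-spot denoiser learns this mapping with a CNN; the averaging rule below is purely illustrative:

```python
def blind_spot_estimate(img):
    """Trivial stand-in for a blind-spot network: predict each pixel
    from its 4-neighborhood, excluding the pixel itself. Because the
    estimate never sees the pixel's own (noisy) value, training such a
    predictor against the noisy pixel cannot learn the identity map --
    the property that lets blind-spot methods train without clean
    references. The averaging here is illustrative, not learned."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nbrs = [img[ny][nx]
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= ny < h and 0 <= nx < w]
            out[y][x] = sum(nbrs) / len(nbrs)
    return out
```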

149 citations

Journal ArticleDOI
TL;DR: This paper proposes a new hybrid location-based routing protocol that is particularly designed to address the issue of vehicle mobility and shows through analysis and simulation that the protocol is scalable and has an optimal overhead, even in the presence of high location errors.
Abstract: Vehicular ad hoc networks (VANETs) are highly mobile wireless networks that are designed to support vehicular safety, traffic monitoring, and other commercial applications. Within VANETs, vehicle mobility will cause the communication links between vehicles to frequently be broken. Such link failures require a direct response from the routing protocols, leading to a potentially excessive increase in the routing overhead and degradation in network scalability. In this paper, we propose a new hybrid location-based routing protocol that is particularly designed to address this issue. Our new protocol combines features of reactive routing with location-based geographic routing in a manner that efficiently uses all the location information available. The protocol is designed to gracefully exit to reactive routing as the location information degrades. We show through analysis and simulation that our protocol is scalable and has an optimal overhead, even in the presence of high location errors. Our protocol provides an enhanced yet pragmatic location-enabled solution that can be deployed in all VANET-type environments.
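The location-based half of such a hybrid can be sketched as greedy geographic forwarding with an explicit failure signal, the point at which a hybrid protocol would fall back to reactive route discovery. This is an illustrative sketch, not the paper's protocol; positions and the progress rule are assumptions:

```python
import math

def greedy_next_hop(current, dest, neighbors):
    """Greedy geographic forwarding: among the known neighbors, pick
    the one closest to the destination, provided it makes progress
    (is strictly closer to the destination than we are). Returns None
    on a local maximum -- the situation where a hybrid protocol would
    exit to reactive routing. Nodes are (x, y) position tuples."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    best = min(neighbors, key=lambda n: dist(n, dest), default=None)
    if best is None or dist(best, dest) >= dist(current, dest):
        return None  # greedy forwarding failed; fall back to reactive mode
    return best
```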

144 citations