Author

Oscar C. Au

Bio: Oscar C. Au is an academic researcher from Hong Kong University of Science and Technology. He has contributed to research on topics including motion estimation and motion compensation. He has an h-index of 40 and has co-authored 491 publications receiving 7,493 citations. Previous affiliations of Oscar C. Au include Wilmington University and Huawei.


Papers
Journal ArticleDOI
TL;DR: This paper proposes DCast, a novel framework for distributed video coding and transmission over wireless networks that differs from existing distributed schemes in three aspects, and introduces a power-distortion optimization algorithm to replace the traditional rate-distortion optimization.
Abstract: This paper proposes a novel framework called DCast for distributed video coding and transmission over wireless networks, which differs from existing distributed schemes in three aspects. First, coset quantized DCT coefficients and motion data are delivered directly to the channel coding layer without syndrome or entropy coding. Second, transmission power is allocated directly to coset data and motion data according to their distributions and magnitudes, without forward error correction. Third, these data are transformed by a Hadamard transform and then directly mapped using a dense constellation (64K-QAM) for transmission, without Gray coding. One of the most important properties of this framework is that the coding and transmission rate is fixed and distortion is minimized by allocating the transmission power. Thus, we further propose a power-distortion optimization algorithm to replace the traditional rate-distortion optimization. This framework avoids the cliff effect caused by the mismatch between transmission rate and channel condition. In multicast, each user obtains approximately the best quality its channel condition allows. Our experimental results show that the proposed DCast outperforms the typical solution using H.264 over 802.11 by up to 8 dB in video PSNR for video broadcast. Even for video unicast, the proposed DCast remains comparable to the typical solution.
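The fixed-rate, power-allocated idea above can be illustrated with a small sketch. This is not DCast's actual algorithm; it is a SoftCast-style toy (function name and inputs are illustrative) in which each data chunk gets a gain proportional to variance^(-1/4), the closed-form allocation that minimizes noise-induced distortion under a sum-power budget:

```python
import numpy as np

def allocate_power(variances, total_power):
    """Toy SoftCast-style power allocation.

    With per-chunk gain g_i and receiver noise variance sigma^2, the
    decoded distortion is sum(sigma**2 / g_i**2); minimizing it under
    the budget sum(g_i**2 * var_i) = total_power gives
    g_i proportional to var_i**(-1/4).
    """
    v = np.asarray(variances, dtype=float)
    g = v ** -0.25
    # Normalize the gains so the power budget is met exactly.
    scale = np.sqrt(total_power / np.sum(g ** 2 * v))
    return scale * g

variances = [4.0, 1.0, 0.25]
gains = allocate_power(variances, total_power=3.0)
power = float(np.sum(gains ** 2 * np.array(variances)))  # equals the budget
```

High-variance chunks get smaller gains but still dominate the transmitted power, which is the sense in which power follows the data's magnitudes rather than a channel code rate.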

66 citations

Proceedings ArticleDOI
19 Apr 2015
TL;DR: This paper derives the optimal edge weights for local graph-based filtering using gradient estimates from non-local pixel patches that are self-similar, and derives the optimal metric space G*: one that leads to a graph Laplacian regularizer that is discriminant when the gradient estimates are accurate, and robust when the gradient estimates are noisy.
Abstract: Image denoising is an under-determined problem, and hence it is important to define appropriate image priors for regularization. One recently popular prior is the graph Laplacian regularizer, where a given pixel patch is assumed to be smooth in the graph-signal domain. The strength and direction of the resulting graph-based filter are computed from the graph's edge weights. In this paper, we derive the optimal edge weights for local graph-based filtering using gradient estimates from non-local pixel patches that are self-similar. To analyze the effects of the gradient estimates on the graph Laplacian regularizer, we first show theoretically that, given a graph-signal h_D that is a set of discrete samples of a continuous function h(x, y) in a closed region Ω, the graph Laplacian regularizer (h_D)^T L h_D converges to a continuous functional integrating the gradient norm of h in metric space G over Ω, i.e., ∫_Ω (∇h)^T G^-1 (∇h) dx dy. We then derive the optimal metric space G*: one that leads to a graph Laplacian regularizer that is discriminant when the gradient estimates are accurate, and robust when the gradient estimates are noisy. Finally, having derived G*, we compute the corresponding edge weights to define the Laplacian L used for filtering. Experimental results show that our image denoising algorithm using the per-patch optimal metric space G* outperforms non-local means (NLM) by up to 1.5 dB in PSNR.
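A minimal sketch of the graph Laplacian regularizer itself may help make the prior concrete. The toy below builds the combinatorial Laplacian L = D - W from a symmetric weight matrix and evaluates h^T L h, which equals the sum of w_ij (h_i - h_j)^2 over edges; the unit-weight path graph here is an illustrative stand-in for the paper's gradient-derived weights:

```python
import numpy as np

def graph_laplacian(weights):
    """Combinatorial Laplacian L = D - W from a symmetric weight matrix."""
    W = np.asarray(weights, dtype=float)
    D = np.diag(W.sum(axis=1))
    return D - W

def regularizer(signal, L):
    """Graph Laplacian regularizer h^T L h: small when the signal is
    smooth across strongly weighted edges, large when it oscillates."""
    h = np.asarray(signal, dtype=float)
    return float(h @ L @ h)

# 4-node path graph with unit edge weights (toy stand-in for the
# paper's per-patch, gradient-based weights).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = graph_laplacian(W)
smooth = regularizer([1, 1, 1, 1], L)    # constant signal: cost 0
rough = regularizer([1, -1, 1, -1], L)   # oscillating signal: large cost
```

Denoising with this prior then amounts to trading off data fidelity against h^T L h, so the edge weights (and hence the metric space G) decide which image structures count as "smooth".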

64 citations

Proceedings ArticleDOI
03 Dec 2010
TL;DR: This paper reviews the state-of-the-art studies and trends of HDR imaging in terms of three points: HDR imaging sensors and HDR image generation techniques as image acquisition technologies, encoding methods for HDR images for efficient transmission and storage, and human visual system issues associated with the reproduction of HDR images.
Abstract: Recently, visual representations using high dynamic range (HDR) images have become increasingly popular with the advancement of technologies for increasing the dynamic range of images. HDR images are expected to be used in wide-ranging applications such as digital cinema, digital photography, and next-generation broadcasting, because of their high quality and powerful expressive ability. HDR imaging technologies will extend their influence across the imaging industry. In this paper, we review the state-of-the-art studies and trends of HDR imaging in terms of the following three points: (1) HDR imaging sensors and HDR image generation techniques as image acquisition technologies, (2) encoding methods for HDR images for efficient transmission and storage, and (3) human visual system issues associated with the reproduction of HDR images.
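As a toy illustration of point (2), HDR pixel encoding commonly works in log space, since scene radiance spans several orders of magnitude while display codes do not. The sketch below is an assumption for illustration only (real formats such as OpenEXR or the SMPTE PQ transfer function are far more elaborate); it maps radiance onto an 8-bit code logarithmically:

```python
import numpy as np

def log_encode(radiance, bits=8, eps=1e-6):
    """Toy logarithmic encoding of HDR radiance into integer codes.

    Maps the scene's dynamic range onto 2**bits levels in log space,
    roughly matching the eye's multiplicative sensitivity to light.
    """
    r = np.asarray(radiance, dtype=float)
    lo, hi = np.log(r.min() + eps), np.log(r.max() + eps)
    t = (np.log(r + eps) - lo) / (hi - lo)   # normalized log radiance in [0, 1]
    return np.round(t * (2 ** bits - 1)).astype(int)

codes = log_encode([0.01, 1.0, 100.0])  # four decades of radiance
```

A linear 8-bit mapping over the same four decades would crush the two darker samples into nearly identical codes, which is why log-domain encodings are the usual starting point for HDR storage.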

62 citations

Journal ArticleDOI
TL;DR: It is shown that the key vectors used in the codeword permutation step can be recovered with complexity O(N), where N is the symbol sequence length; once this step is removed, the resulting system was already shown to be insecure in the original paper.
Abstract: The paper "Secure Arithmetic Coding" (IEEE Transactions on Signal Processing, vol. 55, no. 5, pp. 2263-2272, May 2007) presented a novel encryption scheme called secure arithmetic coding (SAC), based on interval splitting arithmetic coding (ISAC) and a series of permutations. In the current work, we study the security of the SAC under an adaptive chosen-ciphertext attack. It is shown that the key vectors used in the codeword permutation step can be recovered with complexity O(N), where N is the symbol sequence length. After recovering these key vectors, we can remove the codeword permutation step, and the resulting system has already been shown to be insecure in the original paper. This implies that the SAC is not suitable for applications where the attacker can gain access to the decoder. In addition, we discuss a method to jointly enhance the security and the performance of the SAC.
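The flavor of an O(N) adaptive chosen-ciphertext attack on a permutation step can be conveyed with a toy model. This is an illustration of the attack style only, not the actual SAC cryptanalysis: if the attacker can query the decoder, flipping one ciphertext symbol at a time reveals which plaintext position it controls, so the whole permutation falls in a linear number of queries:

```python
import random

def recover_permutation(decrypt, n):
    """Toy chosen-ciphertext recovery of a secret symbol permutation in
    O(n) oracle calls: flip one ciphertext position at a time and watch
    which plaintext position changes in the decoder's output."""
    base = [0] * n
    ref = decrypt(base)
    perm = [0] * n
    for i in range(n):
        probe = list(base)
        probe[i] = 1                      # disturb one ciphertext symbol
        out = decrypt(probe)
        j = next(k for k in range(n) if out[k] != ref[k])
        perm[i] = j                       # ciphertext pos i drives plaintext pos j
    return perm

# Decryption oracle for a toy "cipher" that merely permutes symbols.
secret = list(range(6))
random.shuffle(secret)
oracle = lambda c: [c[secret[k]] for k in range(6)]
recovered = recover_permutation(oracle, 6)  # inverse of the secret permutation
```

The point mirrors the abstract's conclusion: any scheme whose outer layer is a key-dependent permutation is fragile once the attacker can feed chosen inputs to the decoder.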

59 citations

Journal ArticleDOI
TL;DR: Experimental results show that MHMCF achieves quite good denoising performance while requiring far fewer inputs than spatio-temporal filters, and numerical evaluations are provided.
Abstract: A denoising module is required by any practical video processing system. Most existing denoising schemes are spatio-temporal filters that operate on data over three dimensions. However, to limit the number of inputs, these filters utilize only one reference frame and cannot fully exploit temporal correlation. In this paper, a recursive temporal denoising filter named the multihypothesis motion compensated filter (MHMCF) is proposed. To fully exploit temporal correlation, MHMCF performs motion estimation in a number of reference frames to construct multiple hypotheses (temporal predictions) of the current pixel. These hypotheses are combined by weighted averaging to suppress noise and estimate the true current pixel value. Based on the multihypothesis motion compensated residue model presented in this paper, we investigate the efficiency of MHMCF and provide numerical evaluations. Experimental results show that MHMCF achieves quite good denoising performance with far fewer inputs than spatio-temporal filters. Moreover, as a purely temporal filter, it preserves spatial details well and achieves satisfactory visual quality.
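The weighted-averaging step can be sketched in a few lines. The inverse-variance weighting below is a simplified reading of MHMCF's hypothesis combination (the paper derives its weights from its motion compensated residue model; the function and numbers here are illustrative):

```python
import numpy as np

def fuse_hypotheses(hypotheses, residue_vars):
    """Combine temporal predictions of one pixel by inverse-variance
    weighting: predictions with small motion-compensated residue
    variance are trusted more."""
    h = np.asarray(hypotheses, dtype=float)
    w = 1.0 / np.asarray(residue_vars, dtype=float)
    return float(np.sum(w * h) / np.sum(w))

# Three motion-compensated predictions of the same pixel from three
# reference frames; the one with the smallest residue variance
# dominates the fused estimate.
est = fuse_hypotheses([100.0, 104.0, 96.0], [1.0, 4.0, 4.0])
```

Because every input is a temporal prediction of the same pixel, the filter averages noise away without mixing in spatial neighbors, which is why spatial detail survives.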

57 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at the time, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Book
24 Oct 2001
TL;DR: Digital Watermarking covers the crucial research findings in the field and explains the principles underlying digital watermarking technologies, describes the requirements that have given rise to them, and discusses the diverse ends to which these technologies are being applied.
Abstract: Digital watermarking is a key ingredient to copyright protection. It provides a solution to illegal copying of digital material and has many other useful applications such as broadcast monitoring and the recording of electronic transactions. Now, for the first time, there is a book that focuses exclusively on this exciting technology. Digital Watermarking covers the crucial research findings in the field: it explains the principles underlying digital watermarking technologies, describes the requirements that have given rise to them, and discusses the diverse ends to which these technologies are being applied. As a result, additional groundwork is laid for future developments in this field, helping the reader understand and anticipate new approaches and applications.

2,849 citations

Book
23 Nov 2007
TL;DR: This new edition now contains essential information on steganalysis and steganography, and digital watermark embedding is given a complete update with new processes and applications.
Abstract: Digital audio, video, images, and documents are flying through cyberspace to their respective owners. Unfortunately, along the way, individuals may choose to intervene and take this content for themselves. Digital watermarking and steganography technology greatly reduces such instances by limiting or eliminating the ability of third parties to decipher the content they have taken. The many techniques of digital watermarking (embedding a code) and steganography (hiding information) continue to evolve as the applications that necessitate them do the same. The authors of this second edition provide an update on the framework for applying these techniques that they provided researchers and professionals in the first, well-received edition. Steganography and steganalysis (the art of detecting hidden information) have been added to a robust treatment of digital watermarking, as many in each field research and deal with the other. New material includes watermarking with side information, QIM, and dirty-paper codes. The revision and inclusion of new material by these influential authors has created a must-own book for anyone in this profession. This new edition contains essential information on steganalysis and steganography; new concepts and new applications, including QIM, are introduced; and digital watermark embedding is given a complete update with new processes and applications.
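Of the new material listed above, QIM (quantization index modulation) is easy to sketch: a bit is embedded by quantizing a host sample onto one of two interleaved lattices, and extracted by checking which lattice the received sample is nearer. A toy scalar version, for illustration only:

```python
def qim_embed(sample, bit, step=8.0):
    """Quantization index modulation: embed one bit by snapping the
    host sample to one of two quantizer lattices offset by step/2."""
    offset = 0.0 if bit == 0 else step / 2.0
    return round((sample - offset) / step) * step + offset

def qim_extract(sample, step=8.0):
    """Recover the bit by checking which lattice is nearer."""
    d0 = abs(sample - qim_embed(sample, 0, step))
    d1 = abs(sample - qim_embed(sample, 1, step))
    return 0 if d0 <= d1 else 1

marked = qim_embed(123.4, bit=1)  # snaps onto the bit-1 lattice
bit = qim_extract(marked)
```

The step size trades robustness against fidelity: a larger step survives more distortion of the marked sample but perturbs the host more, which is the core design knob in lattice-based watermarking.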

1,773 citations

Journal ArticleDOI
TL;DR: This paper provides a state-of-the-art review and analysis of the different existing methods of steganography, along with some common standards and guidelines drawn from the literature, offers some recommendations, and advocates the object-oriented embedding mechanism.

1,572 citations