
Showing papers by "C.-C. Jay Kuo published in 2007"


Proceedings Article•DOI•
28 Jan 2007
TL;DR: Experimental results show that the super-macroblock coding scheme achieves a higher coding gain; an adaptive scheme is also proposed for selecting the best coding mode and transform size.
Abstract: A high definition video coding technique using super-macroblocks is investigated in this work. Our research is motivated by the observation that the macroblock-based partition in H.264/AVC may not be efficient for high definition video, since the maximum macroblock size of 16 x 16 is relatively small with respect to the whole image size. In the proposed super-macroblock based video coding scheme, the original block size MxN in H.264 is scaled to 2Mx2N. Along with the super-macroblock prediction framework, a low-complexity 16 x 16 discrete cosine transform (DCT) is proposed. As compared with the 1D 8-point DCT, only 16 extra additions are required for the 1D 16-point DCT. Furthermore, an adaptive scheme is proposed for the selection of the best coding mode and the best transform size. Experimental results show that the super-macroblock coding scheme can achieve a higher coding gain.

80 citations
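The adaptive mode/transform-size selection described in the abstract follows the usual Lagrangian mode-decision pattern in H.264-style encoders. The sketch below is illustrative only: the cost J = D + lambda * R, the candidate sizes, and all numbers are assumptions for the demo, not the paper's measured values.

```python
# Hypothetical sketch of adaptive transform-size selection: evaluate a
# Lagrangian rate-distortion cost J = D + lambda * R for each candidate
# transform size and keep the cheapest. Candidates and lambda are
# illustrative assumptions.

def rd_cost(distortion, rate, lam):
    """Lagrangian cost used in H.264-style mode decision."""
    return distortion + lam * rate

def select_transform(candidates, lam=0.85):
    """candidates: list of (name, distortion, rate) triples."""
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))[0]

# Toy numbers: the larger transform compacts energy better (lower rate)
# at similar distortion, so it tends to win on smooth HD content.
candidates = [("4x4", 120.0, 300), ("8x8", 110.0, 240), ("16x16", 108.0, 190)]
best = select_transform(candidates)
```

In a real encoder, distortion and rate would come from actually coding the super-macroblock with each transform size, and the same cost framework also arbitrates between the super-macroblock and conventional macroblock modes.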


Book•
13 Dec 2007
TL;DR: The first part presents a concise treatment of some fundamental concepts related to wireless communications and multicarrier systems, while the second offers a comprehensive survey of recent developments on a variety of critical design issues.
Abstract: Multi-Carrier Techniques for Broadband Wireless Communications provides an accessible introduction to OFDM-based systems from a signal processing perspective. The first part presents a concise treatment of some fundamental concepts related to wireless communications and multicarrier systems, while the second offers a comprehensive survey of recent developments on a variety of critical design issues. These include synchronization techniques, channel estimation methods, adaptive resource allocation and practical schemes for reducing the peak-to-average power ratio of the transmitted waveform. Contents: Fundamentals of OFDM/OFDMA Systems; Time and Frequency Synchronization; Channel Estimation and Equalization; Joint Synchronization, Channel Estimation and Data Symbol Detection in OFDMA Uplink; Dynamic Resource Allocation; Peak-to-Average Power Ratio (PAPR) Reduction.

60 citations


Journal Article•DOI•
TL;DR: This article prioritizes candidate nonsynonymous single-nucleotide polymorphisms (nsSNPs) through a bioinformatics approach that takes advantage of a set of improved numeric features derived from protein-sequence information and a new statistical learning model called "multiple selection rule voting" (MSRV).
Abstract: The increasing demand for the identification of genetic variation responsible for common diseases has translated into a need for sophisticated methods for effectively prioritizing mutations occurring in disease-associated genetic regions. In this article, we prioritize candidate nonsynonymous single-nucleotide polymorphisms (nsSNPs) through a bioinformatics approach that takes advantage of a set of improved numeric features derived from protein-sequence information and a new statistical learning model called "multiple selection rule voting" (MSRV). The sequence-based features maximize the scope of applications of our approach, and the MSRV model can capture subtle characteristics of individual mutations. Systematic validation demonstrates that this approach is capable of prioritizing causal mutations for both simple monogenic diseases and complex polygenic diseases. Further studies of familial Alzheimer disease and diabetes show that the approach can enrich mutations underlying these polygenic diseases among the top candidate mutations. Application of this approach to unclassified mutations suggests that 10 suspicious mutations are likely to cause diseases, with strong support in the literature.

45 citations


Journal Article•DOI•
TL;DR: Owing to its ability to coherently combine the channel magnitude of every multipath, the CPP-UWB transceiver can achieve a higher data rate by shortening its symbol duration with tolerable interference.
Abstract: A novel transceiver design for ultrawideband (UWB) communication systems using the channel phase precoding (CPP) technique is proposed in this work. With the CPP-UWB transceiver, we encode data symbols using the reversed order of the channel phase. A simple phase estimation algorithm is presented for the CPP-UWB implementation. Owing to its ability to coherently combine the channel magnitude of every multipath, the CPP-UWB transceiver can achieve a higher data rate by shortening its symbol duration with tolerable interference. The performance of the CPP-UWB can be further improved using an optimal code length and/or the MMSE receiver to suppress intersymbol interference.

39 citations


Proceedings Article•DOI•
TL;DR: Low-complexity error concealment techniques for missing macroblock (MB) recovery in mobile video delivery based on the boundary matching principle are extensively studied and evaluated in this work.
Abstract: Low-complexity error concealment techniques for missing macroblock (MB) recovery in mobile video delivery based on the boundary matching principle are extensively studied and evaluated in this work. We first examine the boundary matching algorithm (BMA) and the outer boundary matching algorithm (OBMA) because of their excellent trade-off between complexity and visual quality. Their good performance is explained, and additional experiments are given to identify their strengths and weaknesses. Then, two more extensions of OBMA are presented. One is obtained by extending the search pattern for performance improvement at the cost of additional complexity. The other is based on the use of multiple overlapped outer boundary layers.

28 citations
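The OBMA idea the abstract describes can be stated compactly: for each candidate motion vector, compare the received pixels just outside the lost macroblock with the same one-pixel ring around the motion-compensated block in the reference frame, and keep the vector with the smallest sum of absolute differences (SAD). The frame layout, block size, and candidate set below are assumptions for the demo, not the paper's test conditions.

```python
# Illustrative sketch of outer boundary matching (OBMA) for lost-MB
# recovery. Frames are 2D lists of pixel values; the candidate MV set
# is a placeholder for whatever the decoder searches.

def outer_ring(frame, x, y, size):
    """One-pixel ring just outside the size x size block at (x, y)."""
    ring = []
    for j in range(x - 1, x + size + 1):
        ring.append(frame[y - 1][j])         # row above the block
        ring.append(frame[y + size][j])      # row below the block
    for i in range(y, y + size):
        ring.append(frame[i][x - 1])         # column left of the block
        ring.append(frame[i][x + size])      # column right of the block
    return ring

def obma(cur, ref, x, y, size, candidates):
    """Pick the candidate MV whose reference-frame ring best matches
    the ring of correctly received pixels in the current frame."""
    target = outer_ring(cur, x, y, size)
    def sad(mv):
        dx, dy = mv
        return sum(abs(a - b) for a, b in
                   zip(target, outer_ring(ref, x + dx, y + dy, size)))
    return min(candidates, key=sad)
```

BMA differs only in matching the candidate block's own boundary against the received outer pixels; OBMA's ring-vs-ring comparison is what gives it the better complexity/quality trade-off noted in the abstract.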


Proceedings Article•DOI•
27 May 2007
TL;DR: The RDC optimization framework presents a way to balance coding efficiency and the ADF decoding cost in the mode decision process, which is called the decoder-friendly adaptive deblocking filter (DF-ADF) mode decision.
Abstract: Video encoding to yield a decoder-friendly H.264 bit stream that consumes less decoding power yet with little coding efficiency degradation is investigated in this work. The energy saving of the decoder relies on the use of adaptive deblocking filters (ADF). We first propose a power consumption model for the deblocking filter. Then, the encoder performs the rate-distortion-decoder complexity optimization (RDC) to save the decoder power needed for deblocking filter operations, which is called the decoder-friendly adaptive deblocking filter (DF-ADF) mode decision. The RDC optimization framework presents a way to balance coding efficiency and the ADF decoding cost in the mode decision process. The effectiveness of the proposed DF-ADF algorithm is demonstrated by experiments with diverse video contents and bit rates.

15 citations
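The RDC framework in this entry extends the usual Lagrangian mode cost with a third term charging each mode for the decoder power its deblocking-filter work would consume. A minimal sketch, assuming a cost of the form J = D + lambda * R + gamma * C; the linear power term and all constants are made-up placeholders, not the paper's measured model.

```python
# Hedged sketch of rate-distortion-decoder-complexity (RDC) mode
# decision: the encoder trades a little coding efficiency for lower
# decoder-side deblocking cost. All numbers are illustrative.

def rdc_cost(d, r, c, lam=0.85, gamma=0.05):
    """J = D + lambda * R + gamma * C, with C a decoder-power proxy."""
    return d + lam * r + gamma * c

def pick_mode(modes, lam=0.85, gamma=0.05):
    """modes: dict mapping mode name -> (distortion, rate, filter_ops)."""
    return min(modes, key=lambda m: rdc_cost(*modes[m], lam, gamma))

modes = {"inter_16x16": (100.0, 200, 5000),
         "intra_4x4": (95.0, 260, 9000)}
chosen = pick_mode(modes)
```

Setting gamma to zero recovers plain rate-distortion optimization, which is how the framework "balances" efficiency against the ADF decoding cost.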


Proceedings Article•DOI•
28 Jan 2007
TL;DR: A new technique for film grain noise extraction, modeling and synthesis is proposed and applied to the coding of high definition video, and a parametric model containing a small set of parameters is described to represent the extracted film grain noise, which is close to the real one in terms of power spectral density and cross-channel spectral correlation.
Abstract: A new technique for film grain noise extraction, modeling and synthesis is proposed and applied to the coding of high definition video in this work. Film grain noise is viewed as a part of the artistic presentation by people in the movie industry. On one hand, since film grain noise can boost the natural appearance of pictures in high definition video, it should be preserved in high-fidelity video processing systems. On the other hand, video coding with film grain noise is expensive. It is therefore desirable to extract film grain noise from the input video as a pre-processing step at the encoder, and to re-synthesize it and add it back to the decoded video as a post-processing step at the decoder. Under this framework, the coding gain of the denoised video is higher while the quality of the final reconstructed video can still be well preserved. Following this idea, we present a method to remove film grain noise from image/video without distorting its original content. In addition, we describe a parametric model containing a small set of parameters to represent the extracted film grain noise. The proposed model generates film grain noise that is close to the real one in terms of power spectral density and cross-channel spectral correlation. Experimental results are shown to demonstrate the efficiency of the proposed scheme.

15 citations
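One common way to realize the "small set of parameters" idea for grain synthesis is an autoregressive (AR) model: white noise is filtered so its power spectral density approximates that of the extracted grain. The sketch below assumes such an AR model purely for illustration; the paper's actual parametric model may differ.

```python
# Minimal sketch of parametric grain synthesis via an AR filter:
# a few coefficients plus a gain stand in for the transmitted
# parameter set. Illustrative assumption, not the paper's model.

import random

def synthesize_grain(n, ar_coeffs, gain, seed=0):
    """Generate n samples of AR-shaped noise from unit white noise."""
    rng = random.Random(seed)
    out = [0.0] * n
    for t in range(n):
        excitation = gain * rng.gauss(0.0, 1.0)
        history = sum(a * out[t - 1 - k]
                      for k, a in enumerate(ar_coeffs) if t - 1 - k >= 0)
        out[t] = excitation + history
    return out

# A positive AR(1) coefficient yields low-pass, spatially correlated
# noise; fitting the coefficients shapes the power spectral density.
grain = synthesize_grain(1000, [0.7], 1.0)
```

At the decoder, the same few parameters regenerate statistically similar grain, which is why transmitting the model is far cheaper than coding the noise itself.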



Proceedings Article•DOI•
04 Dec 2007
TL;DR: A single-channel audio source separation algorithm based on the matching pursuit technique with content-adaptive dictionaries (CAD) is proposed in this work, and the effectiveness of the MP-CAD algorithm in audio signal approximation and single-channel source separation is demonstrated by computer simulation.
Abstract: A single-channel audio source separation algorithm based on the matching pursuit (MP) technique with content-adaptive dictionaries (CAD) is proposed in this work. The proposed MP-CAD algorithm uses content-dependent atoms that capture inherent characteristics of audio signals effectively. As compared with previous methods based on spectral decomposition and clustering in the time-frequency domain, the MP-CAD algorithm projects the time-domain audio signals onto a subspace spanned by content-adaptive atoms efficiently for their concise representation and separation. The effectiveness of the MP-CAD algorithm in audio signal approximation and single-channel source separation is demonstrated by computer simulation.

14 citations
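The greedy core that MP-CAD builds on is plain matching pursuit: repeatedly project the residual onto every atom, subtract the best-matching one, and record its coefficient. The sketch below shows that core only; the content-adaptive dictionary construction itself is not reproduced, and the toy atoms are assumptions for the demo.

```python
# Sketch of plain matching pursuit (MP) over a fixed dictionary of
# unit-norm time-domain atoms. MP-CAD would build the atom set from
# the audio content; here the dictionary is a placeholder.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, atoms, iterations):
    """Greedy decomposition: repeatedly subtract the best-matching atom."""
    residual = list(signal)
    decomposition = []               # (atom index, coefficient) pairs
    for _ in range(iterations):
        idx = max(range(len(atoms)),
                  key=lambda i: abs(dot(residual, atoms[i])))
        coeff = dot(residual, atoms[idx])
        residual = [r - coeff * a for r, a in zip(residual, atoms[idx])]
        decomposition.append((idx, coeff))
    return decomposition, residual
```

Separation then amounts to grouping the selected atoms by the source whose dictionary they came from and re-synthesizing each group.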


Journal Article•DOI•
TL;DR: The objective is to offer a state-of-the-art review of SNP data analysis from a signal processing viewpoint so that researchers in the signal processing field can grasp the important domain knowledge and overcome the barrier between the two fields.
Abstract: The basic structural units of the genome are nucleotides. A single nucleotide polymorphism (SNP) is a mutation at a single nucleotide position. This paper discusses several major problems in SNP data analysis and reviews some existing solutions. Generally speaking, a rich set of SNP analysis problems can be cast in the signal processing framework. Our objective is to offer a state-of-the-art review on this topic from a signal processing viewpoint so that researchers in the signal processing field can grasp the important domain knowledge and overcome the barrier between the two fields.

12 citations


Proceedings Article•DOI•
26 Dec 2007
TL;DR: A simple yet efficient algorithm is presented to enhance the system throughput by integrating opportunistic medium access and collision resolution through random subchannel backoff, called the opportunistic access with random subchannel backoff (OARSB) scheme.
Abstract: A distributed medium access control (MAC) algorithm for uplink OFDMA networks under the IEEE 802.16 framework is proposed and analyzed in this work. We present a simple yet efficient algorithm to enhance the system throughput by integrating opportunistic medium access and collision resolution through random subchannel backoff. Consequently, the resulting algorithm is called the opportunistic access with random subchannel backoff (OARSB) scheme. OARSB not only achieves distributed coordination among users but also reduces the amount of information exchange between the base station and users. The throughput and delay performance analysis of OARSB is conducted using a Markov chain model. The superior performance of OARSB over an existing scheme is demonstrated by analysis as well as computer simulation.
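The backoff mechanism the abstract describes can be conveyed with a toy contention round: each backlogged user picks a subchannel and a random backoff counter, and a user succeeds when it holds the unique smallest counter on its chosen subchannel. The parameters and the success rule below are simplifications for intuition, not the paper's exact protocol or its Markov-chain analysis.

```python
# Illustrative toy of random subchannel backoff (in the spirit of
# OARSB): users contend on subchannels and collisions are resolved by
# random counters. All parameters are placeholder assumptions.

import random

def backoff_round(num_users, num_subchannels, window, rng):
    """Return the number of users that win a subchannel this round."""
    picks = [(rng.randrange(num_subchannels), rng.randrange(window))
             for _ in range(num_users)]
    winners = 0
    for ch in range(num_subchannels):
        counters = [b for (c, b) in picks if c == ch]
        # Unique minimum counter on a subchannel -> successful access.
        if counters and counters.count(min(counters)) == 1:
            winners += 1
    return winners

rng = random.Random(1)
rounds = [backoff_round(8, 4, 8, rng) for _ in range(1000)]
avg_throughput = sum(rounds) / len(rounds)
```

Averaging over many rounds gives a crude per-round throughput estimate; the paper instead derives throughput and delay analytically from a Markov chain model of the backoff state.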

Proceedings Article•DOI•
TL;DR: A decoding complexity model of context-based adaptive binary arithmetic coding (CABAC) for H.264/AVC is investigated and can provide good estimation results for a variety of bit streams.
Abstract: One way to save power consumption in the H.264 decoder is for the H.264 encoder to generate decoder-friendly bit streams. Following this idea, a decoding complexity model of context-based adaptive binary arithmetic coding (CABAC) for H.264/AVC is investigated in this research. Since different coding modes affect the number of quantized transformed coefficients (QTCs) and motion vectors (MVs) and, consequently, the complexity of entropy decoding, an encoder equipped with a complexity model can estimate the complexity of entropy decoding and choose the coding mode that yields the best trade-off among rate, distortion, and decoding complexity. The complexity model consists of two parts: one for source data (i.e. QTCs) and the other for header data (i.e. the macroblock (MB) type and MVs). Thus, the proposed CABAC decoding complexity model of an MB is a function of QTCs and associated MVs, which is verified experimentally. The proposed CABAC decoding complexity model can provide good estimation results for a variety of bit streams. Practical applications of this complexity model are also discussed.
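Since the abstract describes per-MB decoding complexity as a function of QTC and MV counts, the simplest instance of such a model is linear. The sketch below assumes that form; the coefficients are made-up placeholders, not fitted values from the paper.

```python
# Hedged sketch of a linear CABAC decoding-complexity model: cost per
# macroblock estimated from QTC and MV counts. Coefficients a, b and
# the overhead term are illustrative assumptions.

def cabac_complexity(num_qtc, num_mv, a=3.2, b=5.1, overhead=40.0):
    """Estimated decoding cost (arbitrary units) for one macroblock."""
    return a * num_qtc + b * num_mv + overhead

# An encoder could sum this over a frame's MBs to compare candidate
# mode decisions by their estimated entropy-decoding cost.
frame_cost = sum(cabac_complexity(q, m) for q, m in [(64, 1), (12, 4)])
```

In practice the coefficients would be calibrated by profiling the decoder, and the estimate would feed a rate-distortion-complexity mode decision like the one in the DF-ADF entry above.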

Book Chapter•DOI•
11 Dec 2007
TL;DR: First, a GOP-adaptive layer-based packet priority ordering algorithm is proposed to allow flexible prioritized video transmission with unequal error protection and then, a packetization scheme tailored to NC delivery is discussed.
Abstract: The integration of scalable video representation and network coding (NC) offers an excellent solution to robust and flexible video multicast over IP networks. In this work, we examine one critical component in this system, i.e. video priority ordering and packetization at the source of the multicast tree. First, a GOP-adaptive layer-based packet priority ordering algorithm is proposed to allow flexible prioritized video transmission with unequal error protection. Then, a packetization scheme tailored to NC delivery is discussed. Simulation results are given to demonstrate that the proposed algorithms offer better performance in video quality and bandwidth efficiency as compared with the SNR-based packetization method.

Book Chapter•DOI•
26 Nov 2007
TL;DR: A novel physically driven cumulus cloud simulation algorithm based on the similarity approach that greatly facilitates computing efficiency, general shape control and wind effect simulation, while yielding realistic visual quality.
Abstract: Simulation of 3D clouds is an important component in realistic modeling of outdoor scenes. In this paper, we propose a novel physically driven cumulus cloud simulation algorithm based on the similarity approach. By using the similarity approach, the overall cloud characteristics are captured with a set of constant parameters, which in turn enables decoupling of the 3D cloud simulation into the 1D vertical and the 2D horizontal simulations. As a result, the proposed cloud simulation algorithm greatly facilitates computing efficiency, general shape control and wind effect simulation, while yielding realistic visual quality.

Proceedings Article•DOI•
01 Dec 2007
TL;DR: This work examines the impact of timing jitter on the performance of TRP and CPP and shows that the CPP-UWB system is more robust against timing jitter than the TRP-UWB system.
Abstract: The channel phase precoding (CPP) technique was recently proposed for the ultra-wideband (UWB) communication system in [1] to save the feedback overhead and the computational complexity as compared with the time-reversal prefilter (TRP) technique [2]. Two ideal assumptions have been made in both systems; namely, the availability of accurate channel information and perfect synchronization of transmitted pulses. In this work, we examine the impact of timing jitter on the performance of TRP and CPP and show that the CPP-UWB system is more robust against timing jitter than the TRP-UWB system.

Proceedings Article•DOI•
05 Aug 2007
TL;DR: Here, the framework built upon a 3D tree model is generalized so that a simplified tree model can be observed from different angles through user interaction and geometrical simplification algorithms are used to save the rendering effort while keeping the visual quality close to that of a full tree model.
Abstract: Although billboards are often used to represent trees and grass in natural scenes, they are only suitable for distant objects, far away from the camera viewpoint. Human viewers can easily spot billboards under close examination of such objects. In addition, a moving camera viewpoint around the objects further reduces the realism, as billboards are often programmed to rotate toward the camera's viewing direction. In this ongoing research, we consider rendering techniques built upon a 3D tree model and focus on geometrical simplification algorithms, which are used to save rendering effort while keeping the visual quality close to that of a full tree model. Some recent research efforts [Lee et al. 2007] have tried to address this problem in the context of view-dependent rendering. Here, we generalize the framework so that a simplified tree model can be observed from different angles through user interaction.