
Showing papers in "IEEE Circuits and Systems Magazine in 2004"


Journal ArticleDOI
TL;DR: This paper provides an overview of the new tools, features and complexity of H.264/AVC.
Abstract: H.264/AVC, the result of the collaboration between the ISO/IEC Moving Picture Experts Group and the ITU-T Video Coding Experts Group, is the latest standard for video coding. The goals of this standardization effort were enhanced compression efficiency and a network-friendly video representation for interactive (video telephony) and non-interactive applications (broadcast, streaming, storage, video on demand). H.264/AVC provides gains in compression efficiency of up to 50% over a wide range of bit rates and video resolutions compared to previous standards. The decoder complexity, however, is about four times that of MPEG-2 and two times that of MPEG-4 Visual Simple Profile. This paper provides an overview of the new tools, features, and complexity of H.264/AVC.

1,013 citations


Journal ArticleDOI
TL;DR: An advanced NoC architecture, called Xpipes, targeting high performance and reliable communication for on-chip multi-processors is introduced, which consists of a library of soft macros that are design-time composable and tunable so that domain-specific heterogeneous architectures can be instantiated and synthesized.
Abstract: The growing complexity of embedded multiprocessor architectures for digital media processing will soon require highly scalable communication infrastructures. Packet switched networks-on-chip (NoC) have been proposed to support the trend for systems-on-chip integration. In this paper, an advanced NoC architecture, called Xpipes, targeting high performance and reliable communication for on-chip multi-processors is introduced. It consists of a library of soft macros (switches, network interfaces and links) that are design-time composable and tunable so that domain-specific heterogeneous architectures can be instantiated and synthesized. Links can be pipelined with a flexible number of stages to decouple link throughput from its length and to get arbitrary topologies. Moreover, a tool called XpipesCompiler, which automatically instantiates a customized NoC from the library of soft network components, is used in this paper to test the Xpipes-based synthesis flow for domain-specific communication architectures.

413 citations


Journal ArticleDOI
TL;DR: Two equalization approaches, transmitter pre-emphasis and receiver equalization, are reviewed, along with various adaptation criteria and algorithms, for low-cost transmission media carrying data at rates above 1 Gb/s.
Abstract: The article first discusses the major non-ideal characteristics of low-cost transmission media at data rates above 1 Gb/s: frequency-dependent dispersion loss and channel noise. The former causes ISI in the received signal, which complicates clock and data recovery at high frequencies and results in a higher BER. The latter further degrades the received signal quality, limiting both the achievable data rate and the transmission distance. Two equalization approaches, transmitter pre-emphasis and receiver equalization, are then reviewed, along with various adaptation criteria and algorithms.
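The transmitter pre-emphasis approach reviewed above can be sketched as a simple 2-tap FIR filter applied to the symbol stream; the tap value and bit pattern below are illustrative choices, not taken from the article:

```python
import numpy as np

def pre_emphasis(symbols, post_tap=-0.25):
    """2-tap FIR transmitter pre-emphasis (illustrative coefficients).

    Boosts the high-frequency content of the transmitted NRZ stream to
    pre-compensate the channel's frequency-dependent dispersion loss:
    symbols at a transition keep full swing, repeated symbols are
    de-emphasized.
    """
    main_tap = 1.0 - abs(post_tap)          # keep peak swing bounded
    x = np.asarray(symbols, dtype=float)
    delayed = np.concatenate(([0.0], x[:-1]))
    return main_tap * x + post_tap * delayed

bits = np.array([1, 1, 1, -1, -1, 1, -1, 1])   # hypothetical NRZ symbols
tx = pre_emphasis(bits)
# Transition samples get amplitude 1.0; runs of identical symbols drop to 0.5.
```

Because the channel attenuates high frequencies most, shaping the spectrum at the transmitter in this way reduces the ISI seen at the receiver before any receiver-side equalization is applied.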

151 citations


Journal ArticleDOI
TL;DR: The fundamentals and algorithms of the state of the art of M-D interleaving - the t-interleaved array approach by Blaum, Bruck and Vardy and the successive packing approach by Shi and Zhang are presented and analyzed and the performance comparison between different approaches is made.
Abstract: To ensure data fidelity, a number of random error correction codes (ECCs) have been developed. ECC is, however, not efficient in combating bursts of errors, i.e., groups of consecutive (in the one-dimensional (1-D) case) or connected (in the two- and three-dimensional (2-D and 3-D) cases) erroneous code symbols, owing to the bursty nature of errors. Interleaving is a process that rearranges code symbols so as to spread bursts of errors over multiple codewords that can then be corrected by ECCs. By converting bursts of errors into random-like errors, interleaving thus becomes an effective means of combating error bursts. In this article, we first illustrate the philosophy of interleaving by introducing a 1-D block interleaving technique. Then multi-dimensional (M-D) bursts of errors and optimality of interleaving are defined. The fundamentals and algorithms of the state of the art of M-D interleaving, the t-interleaved array approach by Blaum, Bruck, and Vardy and the successive packing approach by Shi and Zhang, are presented and analyzed. In essence, a t-interleaved array is constructed by closely tiling a building block, which is solely determined by the burst size t. Therefore, the algorithm needs to be rerun for each different burst size in order to maintain either the error-burst correction capability or optimality. Since the size of error bursts is usually not known in advance, the application of the technique is somewhat limited. The successive packing algorithm, based on the concept of a 2 × 2 basis array, only needs to be implemented once for a given square 2-D array, and yet it remains optimal for a set of error bursts of different sizes. A performance comparison between the approaches is made, and future research on the successive packing approach is discussed.
Finally, applications of 2-D/3-D successive packing interleaving in enhancing the robustness of image/video data hiding are presented as examples of practical utilization of interleaving.
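The 1-D block interleaving technique the article uses to illustrate the philosophy of interleaving can be sketched as follows (the array dimensions and helper names are my own):

```python
def block_interleave(symbols, rows, cols):
    """1-D block interleaver: write row-wise into a rows x cols array,
    read out column-wise. A burst of up to `rows` consecutive channel
    errors then lands in distinct rows (codewords), at most one error
    each, so a single-error-correcting ECC per codeword can fix it."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(symbols, rows, cols):
    # De-interleaving is interleaving with the dimensions swapped.
    return block_interleave(symbols, cols, rows)

data = list(range(12))                      # e.g. 3 codewords of length 4
tx = block_interleave(data, rows=3, cols=4)
assert block_deinterleave(tx, rows=3, cols=4) == data
```

With `rows=3`, the transmitted order is `[0, 4, 8, 1, 5, 9, ...]`: any burst of three consecutive transmitted symbols hits three different codewords, which is exactly the burst-to-random conversion the abstract describes.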

101 citations


Journal ArticleDOI
TL;DR: The problem of gene finding using digital filtering, and the use of transform domain methods in the study of protein binding spots are described, and several new directions in genomic signal processing are briefly outlined.
Abstract: The theory and methods of signal processing are becoming increasingly important in molecular biology. Digital filtering techniques, transform domain methods, and Markov models have played important roles in gene identification, biological sequence analysis, and alignment. This paper contains a brief review of molecular biology, followed by a review of the applications of signal processing theory. This includes the problem of gene finding using digital filtering, and the use of transform domain methods in the study of protein binding spots. The relatively new topic of noncoding genes and the associated problem of identifying ncRNA buried in DNA sequences are also described, including a discussion of hidden Markov models and context-free grammars. Several new directions in genomic signal processing are briefly outlined at the end.
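A common DSP formulation behind gene finding is the period-3 spectral property of protein-coding regions (codons are triplets). The sketch below, with sequences and names of my own choosing rather than the paper's, measures DFT power at the period-3 bin of the four base-indicator sequences:

```python
import numpy as np

def period3_power(seq):
    """Period-3 spectral measure used in DFT-based gene finding (sketch).

    Maps a DNA string to four binary indicator sequences (one per base),
    takes their DFTs, and sums the power at frequency bin k = N/3.
    Coding regions tend to show a pronounced peak at this bin.
    """
    N = len(seq)
    k = N // 3                                   # period-3 frequency bin
    total = 0.0
    for base in "ACGT":
        x = np.array([1.0 if b == base else 0.0 for b in seq])
        total += abs(np.fft.fft(x)[k]) ** 2
    return total

coding_like = "ATGATGATGATGATGATG"    # artificial, strongly periodic repeat
random_like = "ATCGGATCCGTAAGCTTA"    # hypothetical aperiodic sequence
p_coding = period3_power(coding_like)
p_random = period3_power(random_like)
# p_coding dominates p_random: the repeat puts each base at a fixed codon
# position, concentrating energy at the N/3 bin.
```

A digital-filtering gene finder applies this idea in a sliding window, flagging regions where the period-3 power crosses a threshold.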

90 citations


Journal ArticleDOI
TL;DR: A framework for modeling power systems using hybrid input/output automata (HIOA) is proposed and this hybrid modeling process is applied to a simple power system.
Abstract: In this work, a framework for modeling power systems using hybrid input/output automata (HIOA) is proposed. The system is assumed to consist of several distinct components, some of which drive the continuous dynamics while others exhibit event-driven discrete dynamics. Such behavior is characterized by interactions between continuous dynamics and discrete events; power systems are therefore an important example of hybrid systems. The hybrid modeling process is applied to a simple power system.
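The interaction of continuous dynamics and discrete events that makes a power system hybrid can be illustrated with a toy breaker model; this is a minimal sketch with hypothetical parameters, not the paper's HIOA formalism:

```python
def simulate_breaker(t_end=2.0, dt=1e-3):
    """Minimal hybrid-system sketch: continuous flow plus a discrete event.

    Continuous dynamics: line current i' = (v - R*i) / L (forward Euler).
    Discrete event:      a breaker trips when the guard i >= I_trip fires,
                         switching the mode so the current decays to zero.
    All parameter values are illustrative.
    """
    v, R, L, I_trip = 1.0, 0.5, 0.1, 1.5
    i, mode, t = 0.0, "closed", 0.0
    trip_time = None
    while t < t_end:
        if mode == "closed":
            i += dt * (v - R * i) / L
            if i >= I_trip:                 # guard: event-driven transition
                mode, trip_time = "open", t
        else:
            i += dt * (-R * i) / L          # source disconnected: decay
        t += dt
    return mode, trip_time, i

mode, trip_time, i_final = simulate_breaker()
```

The closed-mode steady state (v/R = 2.0 A) exceeds the 1.5 A trip level, so the guard fires during the transient and the automaton switches mode: the discrete event reshapes the continuous trajectory, which is the defining feature of a hybrid model.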

74 citations


Journal ArticleDOI
TL;DR: Lab-on-a-chip devices based on dielectrophoresis promise to give biology the advantage of miniaturization for carrying out complex experiments, but there remains an unmet need for such devices to deal effectively with biological systems at the cell level.
Abstract: Recently, sensing methods for dielectrophoresis (DEP) have moved from bulky instruments to lab-on-a-chip devices. A lab-on-a-chip based on the dielectrophoresis phenomenon holds the promise of giving biology the advantage of miniaturization for carrying out complex experiments. Until now, however, there has been an unmet need for lab-on-a-chip devices that deal effectively with biological systems at the cell level.
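Whether DEP attracts or repels a particle is governed by the sign of the real part of the standard Clausius-Mossotti factor K(ω); the sketch below evaluates it for a cell-like particle with illustrative material parameters (none of the values are from the paper):

```python
import math

def clausius_mossotti(eps_p, sig_p, eps_m, sig_m, omega):
    """Re[K(w)] for a homogeneous sphere (standard DEP theory).

    Complex permittivity: eps* = eps - j*sigma/omega.
    Re[K] > 0 -> positive DEP (particle pulled toward high-field regions);
    Re[K] < 0 -> negative DEP (pushed toward low-field regions).
    """
    ep = eps_p - 1j * sig_p / omega
    em = eps_m - 1j * sig_m / omega
    return ((ep - em) / (ep + 2 * em)).real

EPS0 = 8.854e-12                      # vacuum permittivity, F/m
# Hypothetical particle in a low-conductivity buffer at 1 MHz:
k = clausius_mossotti(eps_p=60 * EPS0, sig_p=0.5,
                      eps_m=78 * EPS0, sig_m=0.01,
                      omega=2 * math.pi * 1e6)
# Conductivity dominates at this frequency, so Re[K] is close to the
# low-frequency limit (sig_p - sig_m) / (sig_p + 2*sig_m) and positive.
```

On a lab-on-a-chip, electrode geometry sets the field gradient while the drive frequency tunes Re[K], which is what lets a single chip trap, repel, or sort cells.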

70 citations



Journal ArticleDOI
TL;DR: An optimized strategy for designing charge pumps with minimum power consumption is presented that allows designers to define the number of stages that, for given input and output voltages, maximizes power efficiency.
Abstract: In this paper, an optimized strategy for designing charge pumps with minimum power consumption is presented. The approach allows designers to define the number of stages that, for given input and output voltages, maximizes power efficiency. The capacitor value is then set to provide the required current capability. The approach was analytically developed and validated through simulations and experimental measurements on a 0.35 µm EEPROM CMOS technology. It was then compared with an approach that minimizes silicon area, showing that only a small increase in area is needed to minimize power consumption.
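The kind of trade-off the paper optimizes analytically can be reproduced with a brute-force search over a simplified ideal-Dickson model; the parasitic coefficient `alpha` and all numeric values are illustrative assumptions, not the paper's derivation:

```python
def best_stage_count(v_in, v_out, i_load, f_clk, alpha, n_max=20):
    """Find the stage count N minimizing input power (sketch).

    Ideal Dickson model with a bottom-plate parasitic term:
      V_out = (N+1)*V_in - N*I_load/(f_clk*C)  ->  C fixed by target V_out
      P_in ~= (N+1)*V_in*I_load + alpha*C*V_in**2*f_clk*N
    More stages need smaller pump capacitors (less parasitic loss) but
    draw more conduction power, so an interior optimum exists.
    """
    best = None
    for n in range(1, n_max + 1):
        headroom = (n + 1) * v_in - v_out
        if headroom <= 0:
            continue                         # N too small to reach V_out
        c = n * i_load / (f_clk * headroom)  # per-stage pump capacitor
        p_in = (n + 1) * v_in * i_load + alpha * c * v_in**2 * f_clk * n
        if best is None or p_in < best[1]:
            best = (n, p_in, c)
    return best

n_opt, p_min, c_opt = best_stage_count(
    v_in=1.8, v_out=9.0, i_load=10e-6, f_clk=10e6, alpha=0.4)
```

Note the minimum-power N exceeds the minimum N that can reach V_out at all, mirroring the paper's finding that minimizing power costs a modest amount of extra area relative to the minimum-area design.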

6 citations


Journal ArticleDOI
TL;DR: An analysis of drift errors is provided to identify the sources of quality degradation when transcoding to a lower spatial resolution and it is found that the intra-refresh architecture offers the best tradeoff between quality and complexity and is also the most flexible.
Abstract: This paper discusses the problem of reduced-resolution transcoding of compressed video bitstreams. An analysis of drift errors is provided to identify the sources of quality degradation when transcoding to a lower spatial resolution. Two types of drift error are considered: a reference picture error, which has been identified in previous works, and error due to the noncommutative property of motion compensation and down-sampling, which is unique to this work. To overcome these sources of error, four novel architectures are presented. One architecture attempts to compensate for the reference picture error in the reduced resolution, while another architecture attempts to do the same in the original resolution. We present a third architecture that attempts to eliminate the second type of drift error and a final architecture that relies on an intrablock refresh method to compensate for all types of errors. In all of these architectures, a variety of macroblock level conversions are required, such as motion vector mapping and texture down-sampling. These conversions are discussed in detail. Another important issue for the transcoder is rate control. This is especially important for the intra-refresh architecture since it must find a balance between number of intrablocks used to compensate for errors and the associated rate-distortion characteristics of the low-resolution signal. The complexity and quality of the architectures are compared. Based on the results, we find that the intra-refresh architecture offers the best tradeoff between quality and complexity and is also the most flexible.
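One of the macroblock-level conversions mentioned above, motion vector mapping for 2:1 downscaling, can be sketched as follows; averaging is one common mapping scheme, and the vector values are illustrative:

```python
def map_motion_vectors(mvs):
    """Motion vector mapping for 2:1 spatial downscaling (sketch).

    Four full-resolution macroblock vectors collapse into one reduced-
    resolution vector: average the four, then halve to match the new
    pixel grid. Averaging is one common mapping; the transcoder also
    needs texture down-sampling and drift compensation, not shown here.
    """
    assert len(mvs) == 4
    avg_x = sum(mv[0] for mv in mvs) / 4.0
    avg_y = sum(mv[1] for mv in mvs) / 4.0
    return (avg_x / 2.0, avg_y / 2.0)

mv_out = map_motion_vectors([(4, 2), (6, 2), (4, 0), (6, 4)])
# average of the four vectors is (5, 2); halved for the reduced grid
```

Because the averaged vector rarely matches any of the original four exactly, this mapping is itself a source of the drift error that the paper's compensation architectures address.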

6 citations


Journal ArticleDOI
TL;DR: This is a well-written, solid, engineering-oriented book covering the transistor circuits associated with digital electronic systems.
Abstract: This is a well-written, solid, engineering-oriented book covering the transistor circuits associated with digital electronic systems. From the Preface, the background knowledge assumed is of “calculus, differential equations, physics, and chemistry as well as courses in circuits, electronics, and digital logic,” with the book intended to “fit into the junior or senior year.” The stated goals of the book are: