Author

Rangarajan Aravind

Bio: Rangarajan Aravind is an academic researcher from the Indian Institute of Technology Madras. The author has contributed to research in topics including Kalman filters and video compression picture types. The author has an h-index of 15 and has co-authored 63 publications receiving 1433 citations. Previous affiliations of Rangarajan Aravind include Bell Labs and the Indian Institutes of Technology.


Papers
Patent
Atul Puri, Rangarajan Aravind
15 Nov 1991
TL;DR: In this patent, adaptive and selective coding of digital signals relating to frames and fields of video images is proposed for, among other applications, video conferencing and high-definition television.
Abstract: Improved compression of digital signals relating to high-resolution video images is accomplished by adaptive and selective coding of digital signals relating to frames and fields of the video images. Digital video input signals are analyzed and a coding type signal is produced in response to this analysis. This coding type signal may be used to adaptively control the operation of one or more types of circuitry used to compress digital video signals, so that fewer bits, and slower bit rates, may be used to transmit high-resolution video images without undue loss of quality. For example, the coding type signal may be used to improve motion-compensated estimation techniques, quantization of transform coefficients, scanning of video data, and variable word length encoding of the data. The improved compression of digital video signals is useful for video conferencing applications and high-definition television, among other things.

232 citations
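The mechanism described in the abstract above hinges on analysing the input video and producing a coding-type signal that steers the frame/field coding decision. The patent abstract does not spell out the decision rule, so the sketch below uses a common, generic test (comparing vertical activity between adjacent frame lines against activity inside each field); the function name and the threshold-free comparison are illustrative assumptions, not the patented method.

```python
import numpy as np

def frame_field_decision(mb: np.ndarray) -> str:
    """Decide frame vs. field DCT coding for a 16x16 luminance macroblock.

    Compares vertical activity between adjacent frame lines with vertical
    activity inside each field.  Strong interlace motion makes adjacent frame
    lines differ much more than same-field lines, favouring field coding.
    """
    # Sum of squared differences between vertically adjacent frame lines.
    frame_act = np.sum((mb[1:, :] - mb[:-1, :]) ** 2)
    # The same measure computed separately inside the top and bottom fields.
    top, bot = mb[0::2, :], mb[1::2, :]
    field_act = (np.sum((top[1:, :] - top[:-1, :]) ** 2)
                 + np.sum((bot[1:, :] - bot[:-1, :]) ** 2))
    return "field" if field_act < frame_act else "frame"

# Toy macroblock: the two fields are horizontally shifted copies of a smooth
# gradient, mimicking motion between the fields of an interlaced frame.
row = np.arange(16, dtype=np.float64)
mb = np.empty((16, 16))
mb[0::2, :] = np.tile(row, (8, 1))              # top field
mb[1::2, :] = np.tile(np.roll(row, 4), (8, 1))  # bottom field, shifted
print(frame_field_decision(mb))                 # prints "field"
```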

Journal Article
TL;DR: This paper compares the performance of these layered-coding techniques (excluding temporal scalability) under various loss rates using realistic-length material and discusses their relative merits.
Abstract: Transmission of compressed video over packet networks with nonreliable transport benefits when packet loss resilience is incorporated into the coding. One promising approach to packet loss resilience, particularly for transmission over networks offering dual priorities such as ATM networks, is based on layered coding which uses at least two bitstreams to encode video. The base-layer bitstream, which can be decoded independently to produce a lower quality picture, is transmitted over a high priority channel. The enhancement-layer bitstream(s) contain less information, so that packet losses are more easily tolerated. The MPEG-2 standard provides four methods to produce a layered video bitstream: data partitioning, signal-to-noise ratio scalability, spatial scalability, and temporal scalability. Each was included in the standard in part for motivations other than loss resilience. This paper compares the performance of these techniques (excluding temporal scalability) under various loss rates using realistic-length material and discusses their relative merits. Nonlayered MPEG-2 coding gives generally unacceptable video quality for packet loss ratios of 10^-3 for small packet sizes. Better performance can be obtained using layered coding and dual-priority transmission. With data partitioning, cell loss ratios of 10^-4 in the low-priority layer are definitely acceptable, while for SNR scalable encoding, cell loss ratios of 10^-3 are generally invisible. Spatial scalable encoding can provide even better visual quality under packet losses; however, it has a high implementation complexity.

227 citations
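As a rough illustration of the data-partitioning idea compared in the paper above, the sketch below splits a block of transform coefficients at a priority breakpoint: the low-frequency coefficients in scan order go to a high-priority base partition, the rest to a low-priority enhancement partition whose loss is simulated by zeroing. The zig-zag helper and the toy coefficient statistics are assumptions for this example; real MPEG-2 data partitioning also places headers and motion vectors in the high-priority partition and uses the standard's syntax.

```python
import numpy as np

def zigzag_indices(n: int = 8):
    """(row, col) pairs of an n x n block in JPEG/MPEG-style zig-zag order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def partition_block(coeffs: np.ndarray, breakpoint: int):
    """Split a coefficient block into base / enhancement partitions.

    The first `breakpoint` coefficients in zig-zag order (DC plus the lowest
    frequencies) form the high-priority base partition; the remainder form the
    low-priority enhancement partition.
    """
    scan = zigzag_indices(coeffs.shape[0])
    flat = np.array([coeffs[r, c] for r, c in scan])
    return flat[:breakpoint], flat[breakpoint:], scan

def reconstruct(base, enh, scan, shape=(8, 8), enh_lost=False):
    """Rebuild the block; a lost enhancement partition is treated as zeros."""
    flat = np.concatenate([base, np.zeros_like(enh) if enh_lost else enh])
    block = np.zeros(shape)
    for value, (r, c) in zip(flat, scan):
        block[r, c] = value
    return block

# Toy block: energy concentrated in the low frequencies, as after a DCT.
rng = np.random.default_rng(1)
block = rng.normal(0, 1, (8, 8)) * np.outer(50.0 / (1 + np.arange(8)),
                                            1.0 / (1 + np.arange(8)))
base, enh, scan = partition_block(block, breakpoint=10)
lossy = reconstruct(base, enh, scan, enh_lost=True)
print("energy kept in base partition: %.1f%%"
      % (100 * np.sum(lossy ** 2) / np.sum(block ** 2)))
```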

Journal Article
Atul Puri, Rangarajan Aravind
TL;DR: The authors address the problem of adapting the Motion Picture Experts Group (MPEG) quantizer for scenes of different complexity (at bit rates around 1 Mb/s), such that the perceptual quality of the reconstructed video is optimized.
Abstract: The authors address the problem of adapting the Motion Picture Experts Group (MPEG) quantizer for scenes of different complexity (at bit rates around 1 Mb/s), such that the perceptual quality of the reconstructed video is optimized. Adaptive quantization techniques conforming to the MPEG syntax can significantly improve the performance of the encoder. The authors concentrate on a one-pass causal scheme to limit the complexity of the encoder. The system employs prestored models for perceptual quality and bit rate that have been experimentally derived. A framework is provided for determining these models as well as adapting them to locally varying scene characteristics. The variance of an 8x8 (luminance) block is basic to the techniques developed. Following standard practice, it is defined as the average of the square of the deviations of the pixels in the block from the mean pixel value.

201 citations
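The abstract above defines the 8x8 block variance that drives the adaptive quantizer. The sketch below computes that variance and maps it to an integer quantizer scale in the range 1-31 (the range of MPEG-1's quantizer_scale); the logarithmic mapping is a hypothetical stand-in for the experimentally derived perceptual-quality and bit-rate models used in the paper.

```python
import numpy as np

def block_variance(block: np.ndarray) -> float:
    """Variance of an 8x8 luminance block, as defined in the abstract:
    the mean of squared deviations of the pixels from the block mean."""
    return float(np.mean((block - block.mean()) ** 2))

def quantizer_scale(var: float, q_min: int = 1, q_max: int = 31) -> int:
    """Map block activity to a quantizer scale in [q_min, q_max].

    Busier blocks (high variance) mask quantization noise better, so they
    receive a coarser quantizer.  This logarithmic mapping is only an
    illustrative stand-in for the paper's prestored models.
    """
    activity = np.log1p(var)                      # compress the dynamic range
    t = min(activity / np.log1p(255.0 ** 2), 1.0) # normalise to [0, 1]
    return int(round(q_min + t * (q_max - q_min)))

# Flat block vs. textured block.
flat = np.full((8, 8), 128.0)
textured = np.random.default_rng(2).integers(0, 256, (8, 8)).astype(float)
print(quantizer_scale(block_variance(flat)))      # finest quantizer (1)
print(quantizer_scale(block_variance(textured)))  # much coarser quantizer
```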

Journal Article
TL;DR: In this article, the authors describe several standard image and video compression algorithms developed in recent years and discuss the compatibility they provide among different applications and manufacturers.
Abstract: Most image or video applications involving transmission or storage require some form of data compression to reduce the otherwise inordinate demand on bandwidth and storage. Compatibility among different applications and manufacturers is very desirable, and often essential. This paper describes several standard compression algorithms developed in recent years.

141 citations

Journal Article
TL;DR: A novel formulation of the state and state-transition rule that uses a perceptually based edge classifier is introduced, and significant gains are obtained by enhancing the basic VQ approach with interblock memory.
Abstract: Image compression using memoryless vector quantization (VQ), in which small blocks (vectors) of pixels are independently encoded, has been demonstrated to be an effective technique for achieving bit rates above 0.6 bits per pixel (bpp). To maintain the same quality at lower rates, it is necessary to exploit spatial redundancy over a larger region of pixels than is possible with memoryless VQ. This can be achieved by incorporating memory of previously encoded blocks into the encoding of each successive input block. Finite-state vector quantization (FSVQ) employs a finite number of states, which summarize key information about previously encoded vectors, to select one of a family of codebooks to encode each input vector. In this paper, we review the basic ideas of VQ and extend the finite-state concept to image compression. We introduce a novel formulation of the state and state-transition rule that uses a perceptually based edge classifier. We also examine the use of interpolation in conjunction with VQ with finite memory. Coding results are presented for monochrome images in the bit-rate range of 0.24 to 0.32 bpp. The results achieved with finite memory are comparable to those of memoryless VQ at 0.6 bpp and show that there are significant gains to be obtained by enhancing the basic VQ approach with interblock memory.

108 citations
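Below is a minimal finite-state VQ encoding loop in the spirit of the abstract above, assuming per-state codebooks have already been trained: the state derived from the previously reconstructed block selects the codebook used for the next block, so only the codeword index needs to be transmitted. The three-state gradient classifier is a crude stand-in for the perceptually based edge classifier introduced in the paper.

```python
import numpy as np

def classify_state(prev_block: np.ndarray) -> int:
    """Crude 3-state classifier of the previously reconstructed block:
    0 = smooth, 1 = mostly vertical edges (variation along rows),
    2 = mostly horizontal edges (variation along columns).
    A stand-in for the perceptually based edge classifier in the paper."""
    gy = np.abs(np.diff(prev_block, axis=0)).sum()   # variation down the columns
    gx = np.abs(np.diff(prev_block, axis=1)).sum()   # variation along the rows
    if gx + gy < 50.0:
        return 0
    return 1 if gx > gy else 2

def fsvq_encode(blocks, codebooks):
    """Encode a sequence of 4x4 blocks with a finite-state vector quantizer.

    The current state selects which codebook is searched; only the codeword
    index is transmitted, because the decoder recomputes the same state from
    its own reconstructions.
    """
    state, indices = 0, []
    for blk in blocks:
        cb = codebooks[state]                                    # state-dependent codebook
        idx = int(np.argmin(((cb - blk.ravel()) ** 2).sum(axis=1)))
        indices.append(idx)
        recon = cb[idx].reshape(blk.shape)                       # decoder-side reconstruction
        state = classify_state(recon)                            # next-state rule
    return indices

# Tiny random codebooks, one per state (real codebooks would be trained).
rng = np.random.default_rng(3)
codebooks = [rng.integers(0, 256, (8, 16)).astype(float) for _ in range(3)]
blocks = [rng.integers(0, 256, (4, 4)).astype(float) for _ in range(5)]
print(fsvq_encode(blocks, codebooks))
```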


Cited by
01 Apr 2003
TL;DR: The EnKF has a large user group, and numerous publications have discussed its applications and theoretical aspects; this paper reviews those results and also presents new ideas and alternative interpretations that further explain the success of the EnKF.
Abstract: The purpose of this paper is to provide a comprehensive presentation and interpretation of the Ensemble Kalman Filter (EnKF) and its numerical implementation. The EnKF has a large user group, and numerous publications have discussed applications and theoretical aspects of it. This paper reviews the important results from these studies and also presents new ideas and alternative interpretations which further explain the success of the EnKF. In addition to providing the theoretical framework needed for using the EnKF, there is also a focus on the algorithmic formulation and optimal numerical implementation. A program listing is given for some of the key subroutines. The paper also touches upon specific issues such as the use of nonlinear measurements, in situ profiles of temperature and salinity, and data which are available with high frequency in time. An ensemble based optimal interpolation (EnOI) scheme is presented as a cost-effective approach which may serve as an alternative to the EnKF in some applications. A fairly extensive discussion is devoted to the use of time correlated model errors and the estimation of model bias.

2,975 citations
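For readers unfamiliar with the EnKF analysis step discussed in the abstract above, here is a minimal sketch of the stochastic (perturbed-observations) variant with a linear observation operator. It forms the sample covariance explicitly, which practical large-scale implementations avoid; the variable names and the toy three-variable problem are illustrative assumptions, not the paper's own code.

```python
import numpy as np

def enkf_analysis(ensemble, obs, obs_op, obs_cov, rng):
    """One stochastic EnKF analysis step (perturbed-observations form).

    ensemble : (n_state, n_members) forecast ensemble
    obs      : (n_obs,) observation vector
    obs_op   : (n_obs, n_state) linear observation operator H
    obs_cov  : (n_obs, n_obs) observation error covariance R
    """
    n_state, n_mem = ensemble.shape
    x_mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - x_mean                      # ensemble anomalies
    P = A @ A.T / (n_mem - 1)                  # sample covariance (explicit only for small problems)
    S = obs_op @ P @ obs_op.T + obs_cov
    K = P @ obs_op.T @ np.linalg.inv(S)        # Kalman gain
    # Perturb the observations so the analysed ensemble keeps the right spread.
    obs_pert = obs[:, None] + rng.multivariate_normal(
        np.zeros(len(obs)), obs_cov, size=n_mem).T
    innovations = obs_pert - obs_op @ ensemble
    return ensemble + K @ innovations

# Toy problem: 3-variable state, 100 members, observe only the first component.
rng = np.random.default_rng(4)
truth = np.array([1.0, 2.0, 3.0])
ens = truth[:, None] + rng.normal(0, 1.0, (3, 100))
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.1]])
y = np.array([truth[0]]) + rng.normal(0, np.sqrt(0.1), 1)
ens_a = enkf_analysis(ens, y, H, R, rng)
print("prior mean:", ens.mean(axis=1), " posterior mean:", ens_a.mean(axis=1))
```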

Journal Article
01 May 1998
TL;DR: In this paper, a review of error control and concealment techniques in video communication is presented; the techniques are described in three categories according to the roles that the encoder and decoder play in the underlying approaches.
Abstract: The problem of error control and concealment in video communication is becoming increasingly important because of the growing interest in video delivery over unreliable channels such as wireless networks and the Internet. This paper reviews the techniques that have been developed for error control and concealment. These techniques are described in three categories according to the roles that the encoder and decoder play in the underlying approaches. Forward error concealment includes methods that add redundancy at the source end to enhance error resilience of the coded bit streams. Error concealment by postprocessing refers to operations at the decoder to recover the damaged areas based on characteristics of image and video signals. Last, interactive error concealment covers techniques that are dependent on a dialogue between the source and destination. Both current research activities and practice in international standards are covered.

1,611 citations
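As a concrete example of the "error concealment by postprocessing" category described above, the sketch below fills a lost macroblock by bilinear interpolation from its correctly received neighbouring pixels. This is a deliberately simple decoder-side scheme used only for illustration, not one of the specific techniques surveyed in the paper.

```python
import numpy as np

def conceal_block(frame: np.ndarray, r0: int, c0: int, size: int = 16) -> None:
    """Conceal a lost (size x size) block in place by bilinear interpolation
    from the first correctly received row/column on each side of the block."""
    top    = frame[r0 - 1, c0:c0 + size]        # boundary pixels above
    bottom = frame[r0 + size, c0:c0 + size]     # boundary pixels below
    left   = frame[r0:r0 + size, c0 - 1]        # boundary pixels to the left
    right  = frame[r0:r0 + size, c0 + size]     # boundary pixels to the right
    for i in range(size):
        wy = (i + 1) / (size + 1)               # vertical weight
        for j in range(size):
            wx = (j + 1) / (size + 1)           # horizontal weight
            vert  = (1 - wy) * top[j] + wy * bottom[j]
            horiz = (1 - wx) * left[i] + wx * right[i]
            frame[r0 + i, c0 + j] = 0.5 * (vert + horiz)

# Toy frame: a smooth gradient with one "lost" 16x16 macroblock zeroed out.
frame = np.add.outer(np.arange(64.0), np.arange(64.0))
frame[24:40, 24:40] = 0.0                       # simulate the lost block
conceal_block(frame, 24, 24)
true_block = np.add.outer(np.arange(24.0, 40.0), np.arange(24.0, 40.0))
print("max concealment error:",                 # ~0 for this linear test image
      np.abs(frame[24:40, 24:40] - true_block).max())
```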

Journal Article
TL;DR: The key to successful quantization is the selection of an error criterion – such as entropy or signal-to-noise ratio – and the development of optimal quantizers for this criterion.
Abstract: Quantization is a process that maps a continuous or discrete set of values into approximations that belong to a smaller set. Quantization is lossy: some information about the original data is lost in the process. The key to a successful quantization is therefore the selection of an error criterion – such as entropy and signal-to-noise ratio – and the development of optimal quantizers for this criterion.

1,574 citations
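To make the "error criterion plus optimal quantizer" framing above concrete, the sketch below compares a uniform quantizer with a Lloyd-Max iteration that minimizes mean-squared error (reported as SNR) for a Gaussian source. This is standard textbook material, not a reconstruction of the paper's own development, and the step counts and ranges are arbitrary choices.

```python
import numpy as np

def lloyd_max(samples: np.ndarray, n_levels: int, n_iter: int = 50):
    """Lloyd-Max iteration: alternately update decision thresholds and
    reconstruction levels to minimise mean-squared error on `samples`."""
    levels = np.quantile(samples, np.linspace(0.05, 0.95, n_levels))
    for _ in range(n_iter):
        thresholds = 0.5 * (levels[:-1] + levels[1:])       # nearest-neighbour boundaries
        idx = np.digitize(samples, thresholds)
        levels = np.array([samples[idx == k].mean() if np.any(idx == k) else levels[k]
                           for k in range(n_levels)])        # centroid condition
    return levels, 0.5 * (levels[:-1] + levels[1:])

def quantize(samples, levels, thresholds):
    return levels[np.digitize(samples, thresholds)]

def snr_db(x, xq):
    return 10 * np.log10(np.mean(x ** 2) / np.mean((x - xq) ** 2))

rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, 100_000)

# 8-level uniform quantizer spanning +/- 3 sigma vs. an MSE-optimised one.
uni_levels = np.linspace(-3, 3, 8)
uni_thresholds = 0.5 * (uni_levels[:-1] + uni_levels[1:])
opt_levels, opt_thresholds = lloyd_max(x, 8)
print("uniform   SNR: %.2f dB" % snr_db(x, quantize(x, uni_levels, uni_thresholds)))
print("Lloyd-Max SNR: %.2f dB" % snr_db(x, quantize(x, opt_levels, opt_thresholds)))
```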

Journal Article
TL;DR: First, the concept of vector quantization is introduced, then its application to digital images is explained; the emphasis is on the usefulness of vector quantization when it is combined with conventional image coding techniques, or when it is used in different domains.
Abstract: A review of vector quantization techniques used for encoding digital images is presented. First, the concept of vector quantization is introduced, then its application to digital images is explained. Spatial, predictive, transform, hybrid, binary, and subband vector quantizers are reviewed. The emphasis is on the usefulness of vector quantization when it is combined with conventional image coding techniques, or when it is used in different domains.

1,102 citations
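A minimal spatial-VQ example in the spirit of the review above: train a small codebook on 4x4 image blocks with plain k-means (the generalised Lloyd/LBG approach) and encode each block by its nearest codeword. The toy image, codebook size, and block size are arbitrary choices for illustration, not parameters from the paper.

```python
import numpy as np

def train_codebook(vectors: np.ndarray, n_codes: int, n_iter: int = 20, seed: int = 0):
    """Plain k-means (generalised Lloyd / LBG) codebook training."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), n_codes, replace=False)].copy()
    for _ in range(n_iter):
        # Assign every training vector to its nearest codeword.
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)
        # Move each codeword to the centroid of its cell (if non-empty).
        for k in range(n_codes):
            members = vectors[assign == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

def encode(vectors, codebook):
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

# Toy "image": a 64x64 gradient cut into 4x4 blocks (16-dimensional vectors).
img = np.add.outer(np.arange(64.0), np.arange(64.0))
blocks = img.reshape(16, 4, 16, 4).transpose(0, 2, 1, 3).reshape(-1, 16)
codebook = train_codebook(blocks, n_codes=32)
indices = encode(blocks, codebook)
decoded = codebook[indices]
rate = np.log2(len(codebook)) / 16       # bits per pixel: log2(codebook size) / pixels per block
mse = np.mean((blocks - decoded) ** 2)
print(f"rate = {rate:.3f} bpp, MSE = {mse:.2f}")
```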

Journal Article
TL;DR: An overview is provided of the FGS video coding technique in this Amendment of MPEG-4, which addresses a variety of challenging problems in delivering video over the Internet.
Abstract: The streaming video profile is the subject of an Amendment of MPEG-4, developed in response to the growing need for a video coding standard for streaming video over the Internet. It provides the capability to distribute single-layered frame-based video over a wide range of bit rates with high coding efficiency. It also provides fine granularity scalability (FGS), and its combination with temporal scalability, to address a variety of challenging problems in delivering video over the Internet. This paper provides an overview of the FGS video coding technique in this Amendment of MPEG-4.

1,023 citations
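The fine granularity described above comes from coding the enhancement-layer residual bit-plane by bit-plane, so the enhancement stream can be truncated at almost any point to match the available bit rate. The sketch below shows only that truncation idea on a vector of integer residuals; it omits the DCT, the entropy coding, and the actual MPEG-4 FGS syntax, and all names are illustrative.

```python
import numpy as np

def to_bitplanes(residual: np.ndarray):
    """Split integer residual magnitudes into bit-planes, most significant first."""
    mags = np.abs(residual)
    n_planes = int(mags.max()).bit_length()
    planes = [(mags >> p) & 1 for p in range(n_planes - 1, -1, -1)]
    return planes, np.sign(residual), n_planes

def from_bitplanes(planes, signs, n_planes, n_sent):
    """Rebuild the residual from only the first `n_sent` (most significant) planes.
    Fewer planes -> lower bit rate and a coarser enhancement: the essence of FGS."""
    mags = np.zeros_like(signs, dtype=np.int64)
    for plane in planes[:n_sent]:
        mags = (mags << 1) | plane
    mags <<= (n_planes - n_sent)            # account for the planes not sent
    return signs * mags

rng = np.random.default_rng(6)
residual = rng.integers(-63, 64, size=64)   # toy enhancement-layer residual
planes, signs, n_planes = to_bitplanes(residual)
for n_sent in range(n_planes + 1):
    approx = from_bitplanes(planes, signs, n_planes, n_sent)
    print(n_sent, "planes -> MSE %.1f" % np.mean((residual - approx) ** 2))
```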