Author
Ali Bilgin
Other affiliations: Siemens, Harvard University, San Diego State University
Bio: Ali Bilgin is an academic researcher from the University of Arizona. The author has contributed to research in topics: Image compression & Data compression. The author has an h-index of 27 and has co-authored 174 publications receiving 3,068 citations. Previous affiliations of Ali Bilgin include Siemens and Harvard University.
Papers
28 Mar 2000
TL;DR: The JPEG-2000 standard as discussed by the authors is an emerging standard for still image compression, which defines the minimum compliant decoder and bitstream syntax, as well as optional, value-added extensions.
Abstract: JPEG-2000 is an emerging standard for still image compression. This paper provides a brief history of the JPEG-2000 standardization process, an overview of the standard, and some description of the capabilities provided by the standard. Part I of the JPEG-2000 standard specifies the minimum compliant decoder, while Part II describes optional, value-added extensions. Although the standard specifies only the decoder and bitstream syntax, in this paper we describe JPEG-2000 from the point of view of encoding. We take this approach, as we believe it is more amenable to a compact description more easily understood by most readers.
391 citations
TL;DR: This work combines principal component analysis with a model-based algorithm to reconstruct maps of selected principal component coefficients from highly undersampled radial MRI data, yielding a more accurate and reliable estimation of MR parameter maps.
Abstract: Recently, there has been an increased interest in quantitative MR parameters to improve diagnosis and treatment. Parameter mapping requires multiple images acquired with different timings usually resulting in long acquisition times. While acquisition time can be reduced by acquiring undersampled data, obtaining accurate estimates of parameters from undersampled data is a challenging problem, in particular for structures with high spatial frequency content. In this work, Principal Component Analysis (PCA) is combined with a model-based algorithm to reconstruct maps of selected principal component coefficients from highly undersampled radial MRI data. This novel approach linearizes the cost function of the optimization problem yielding a more accurate and reliable estimation of MR parameter maps. The proposed algorithm - REconstruction of Principal COmponent coefficient Maps (REPCOM) using Compressed Sensing - is demonstrated in phantoms and in vivo and compared to two other algorithms previously developed for undersampled data.
186 citations
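The linearization described in the REPCOM abstract above rests on the observation that a family of relaxation curves can be captured by a few principal component coefficients, so the reconstruction only needs to estimate a handful of coefficient maps rather than one image per echo time. The sketch below illustrates that dictionary-PCA step with synthetic T2 decay curves; the echo times, T2 range, and number of retained components are illustrative assumptions, not the paper's settings, and this is not the REPCOM reconstruction itself.

```python
# Hypothetical sketch of the PCA step: values and dimensions are illustrative.
import numpy as np

# Simulate a dictionary of T2 decay curves sampled at the echo times (assumed values).
echo_times = np.linspace(10e-3, 320e-3, 32)            # 32 echoes, seconds
t2_values = np.linspace(20e-3, 300e-3, 200)             # candidate T2 values, seconds
dictionary = np.exp(-echo_times[None, :] / t2_values[:, None])   # (200, 32)

# PCA via SVD of the mean-centered dictionary.
mean_curve = dictionary.mean(axis=0)
_, _, vt = np.linalg.svd(dictionary - mean_curve, full_matrices=False)
k = 3                                                    # keep a few principal components
basis = vt[:k]                                           # (k, 32) temporal basis

# Any decay curve is then approximated by k coefficients instead of 32 samples,
# so the reconstruction only has to estimate k coefficient maps.
curve = np.exp(-echo_times / 80e-3)
coeffs = basis @ (curve - mean_curve)
approx = mean_curve + basis.T @ coeffs
print("max approximation error:", np.abs(approx - curve).max())
```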
TL;DR: The goal of this paper is to demonstrate how the JPEG2000 codec can be used to compress electrocardiogram (ECG) data, and to demonstrate the ECG application as an example that can be extended to other signals that exist within the consumer electronics realm.
Abstract: JPEG2000 is the latest international standard for compression of still images. Although the JPEG2000 codec is designed to compress images, we illustrate that it can also be used to compress other signals. As an example, we illustrate how the JPEG2000 codec can be used to compress electrocardiogram (ECG) data. Experiments using the MIT-BIH arrhythmia database illustrate that the proposed approach outperforms many existing ECG compression schemes. The proposed scheme allows the use of existing hardware and software JPEG2000 codecs for ECG compression, and can be especially useful in eliminating the need for specialized hardware development. The desirable characteristics of the JPEG2000 codec, such as precise rate control and progressive quality, are retained in the presented scheme. The goal of this paper is to demonstrate the ECG application as an example. This example can be extended to other signals that exist within the consumer electronics realm.
138 citations
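As a rough illustration of the idea described above, the hedged sketch below reshapes a 1-D ECG record into a 2-D array and compresses it with an off-the-shelf JPEG 2000 codec (Pillow, which needs OpenJPEG support for JP2 encoding). The synthetic signal, the row length, and the target compression ratio are illustrative assumptions; the paper's actual scheme includes preprocessing steps not shown here.

```python
# Minimal sketch, not the paper's exact method: reshape ECG samples into a 2-D
# frame and hand it to a JPEG 2000 codec. The signal below is synthetic.
import numpy as np
from PIL import Image

fs = 360                                                  # MIT-BIH sampling rate, Hz
ecg = np.sin(2 * np.pi * 1.2 * np.arange(10 * fs) / fs)  # stand-in for a real record

# Arrange samples row by row (illustrative choice: one row per second).
cols = fs
rows = len(ecg) // cols
frame = ecg[:rows * cols].reshape(rows, cols)

# Scale to 8-bit gray levels before handing the frame to the image codec.
lo, hi = frame.min(), frame.max()
img8 = np.round(255 * (frame - lo) / (hi - lo)).astype(np.uint8)

# Lossy JPEG 2000 at roughly 8:1; quality_mode/quality_layers are Pillow options.
Image.fromarray(img8).save("ecg.jp2", quality_mode="rates",
                           quality_layers=[8], irreversible=True)

# Decode and report percent RMS difference (PRD), a common ECG fidelity measure.
decoded = np.asarray(Image.open("ecg.jp2"), dtype=np.float64)
restored = lo + (hi - lo) * decoded / 255.0
prd = 100 * np.linalg.norm(restored.ravel() - frame.ravel()) / np.linalg.norm(frame.ravel())
print(f"PRD: {prd:.2f}%")
```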
Patent • 26 Jun 2008
TL;DR: In this patent, a lifting-based view compensated wavelet transform is applied to the 2D images using the coordinate transformations to generate a plurality of wavelet coefficients, and the wavelet coefficients and depth maps are compressed to obtain a compressed representation of the 2D images.
Abstract: A method for compressing 2D images includes determining a depth map for each of a plurality of sequential 2D images of a 3D volumetric image, determining coordinate transformations for the 2D images based on the depth maps and a geometric relationship between the 3D volumetric image and each of the 2D images, performing a lifting-based view compensated wavelet transform on the 2D images using the coordinate transformations to generate a plurality of wavelet coefficients, and compressing the wavelet coefficients and depth maps to generate a compressed representation of the 2D images.
113 citations
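To make the lifting idea concrete, here is a minimal sketch of one 5/3-style lifting decomposition applied across a stack of 2-D slices. The view compensation is reduced to an identity placeholder warp; in the patented method the predict and update steps would instead resample neighbouring slices with the depth-map-derived coordinate transformation, which is not reproduced here.

```python
# Illustrative sketch of one lifting level along the slice/view axis of an
# (N, H, W) stack. `warp` is an identity placeholder standing in for the
# depth-map-based view compensation.
import numpy as np

def warp(slices):
    # Placeholder: a real implementation would resample each slice according
    # to the estimated coordinate transformation.
    return slices

def lifting_53_across_views(stack):
    """One decomposition level along axis 0 of an (N, H, W) stack, N even."""
    even = stack[0::2].astype(np.float64)
    odd = stack[1::2].astype(np.float64)
    # Predict: high-pass = odd slices minus the (warped) average of their neighbours.
    pred = 0.5 * (warp(even) + warp(np.roll(even, -1, axis=0)))
    high = odd - pred
    # Update: low-pass = even slices plus a quarter of the neighbouring details.
    upd = 0.25 * (warp(high) + warp(np.roll(high, 1, axis=0)))
    low = even + upd
    return low, high

stack = np.random.rand(8, 64, 64)
low, high = lifting_53_across_views(stack)
print(low.shape, high.shape)   # (4, 64, 64) (4, 64, 64)
```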
Cited by
TL;DR: A novel algorithm for adapting dictionaries in order to achieve sparse signal representations, the K-SVD algorithm, an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data.
Abstract: In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method, the K-SVD algorithm, generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data.
8,905 citations
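A compact sketch of one K-SVD iteration is given below: sparse coding with OMP against the current dictionary, followed by a rank-1 SVD refit of each atom and of the coefficients of the signals that use it. The dimensions, sparsity level, and random training data are illustrative assumptions, and this is a bare-bones reading of the algorithm rather than the authors' reference implementation.

```python
# Compact numpy/scikit-learn sketch of one K-SVD iteration (illustrative only).
# D has unit-norm columns (atoms), Y holds training signals column-wise, and
# `sparsity` is the per-signal atom budget.
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd_iteration(D, Y, sparsity):
    # Sparse coding stage: OMP against the current dictionary.
    X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)       # (n_atoms, n_signals)
    # Dictionary update stage: refit each atom (and its coefficients) via rank-1 SVD.
    for j in range(D.shape[1]):
        users = np.flatnonzero(X[j])                         # signals that use atom j
        if users.size == 0:
            continue
        # Residual with atom j's contribution added back in.
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, j] = U[:, 0]
        X[j, users] = s[0] * Vt[0]
    return D, X

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)
Y = rng.standard_normal((20, 500))
for _ in range(5):
    D, X = ksvd_iteration(D, Y, sparsity=3)
print("relative residual:", np.linalg.norm(Y - D @ X) / np.linalg.norm(Y))
```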
TL;DR: A new image compression algorithm is proposed, based on independent embedded block coding with optimized truncation of the embedded bit-streams (EBCOT), capable of modeling the spatially varying visual masking phenomenon.
Abstract: A new image compression algorithm is proposed, based on independent embedded block coding with optimized truncation of the embedded bit-streams (EBCOT). The algorithm exhibits state-of-the-art compression performance while producing a bit-stream with a rich set of features, including resolution and SNR scalability together with a "random access" property. The algorithm has modest complexity and is suitable for applications involving remote browsing of large compressed images. The algorithm lends itself to explicit optimization with respect to MSE as well as more realistic psychovisual metrics, capable of modeling the spatially varying visual masking phenomenon.
1,933 citations
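The "optimized truncation" in the EBCOT abstract above amounts to a rate-distortion decision made independently for each code-block: for a common Lagrange multiplier, each block keeps the truncation point that minimizes distortion plus the multiplier times rate, and the multiplier is swept to meet the overall rate budget. The toy sketch below uses made-up per-block rate/distortion pairs to show only that selection step, not the block coder itself.

```python
# Toy sketch of Lagrangian truncation-point selection. Rates and distortions
# are invented numbers, not output of any real code-block coder.
def pick_truncation(rates, dists, lmbda):
    """Return the index of the truncation point minimizing distortion + lmbda * rate.
    rates/dists list the cumulative (rate, distortion) per candidate truncation point."""
    costs = [d + lmbda * r for r, d in zip(rates, dists)]
    return min(range(len(costs)), key=costs.__getitem__)

# Two hypothetical code-blocks, each with three candidate truncation points.
blocks = [
    {"rates": [0, 100, 250], "dists": [900.0, 300.0, 120.0]},
    {"rates": [0, 80, 400],  "dists": [500.0, 260.0,  40.0]},
]
lmbda = 1.5   # sweeping lmbda trades total rate against total distortion
choices = [pick_truncation(b["rates"], b["dists"], lmbda) for b in blocks]
total_rate = sum(b["rates"][c] for b, c in zip(blocks, choices))
print("chosen truncation points:", choices, "total rate:", total_rate)
```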
TL;DR: Some of the most significant features of the standard are presented, such as region-of-interest coding, scalability, visual weighting, error resilience and file format aspects, and some comparative results are reported.
Abstract: One of the aims of the standardization committee has been the development of Part I, which could be used on a royalty- and fee-free basis. This is important for the standard to become widely accepted. The standardization process, which is coordinated by the JTC1/SC29/WG1 of the ISO/IEC, has already produced the international standard (IS) for Part I. In this article the structure of Part I of the JPEG 2000 standard is presented and performance comparisons with established standards are reported. This article is intended to serve as a tutorial for the JPEG 2000 standard. The main application areas and their requirements are given. The architecture of the standard follows with the description of the tiling, multicomponent transformations, wavelet transforms, quantization and entropy coding. Some of the most significant features of the standard are presented, such as region-of-interest coding, scalability, visual weighting, error resilience and file format aspects. Finally, some comparative results are reported and the future parts of the standard are discussed.
1,842 citations
TL;DR: This work puts forward ways for handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper.
Abstract: Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well adapted dictionaries for images has been a major challenge. The K-SVD has been recently proposed for this task and shown to perform very well for various grayscale image processing tasks. In this paper, we address the problem of learning dictionaries for color images and extend the previously reported K-SVD-based grayscale image denoising algorithm. This work puts forward ways for handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper.
1,818 citations
TL;DR: It is interesting to note that JPEG2000 is being designed to address the requirements of a diversity of applications, e.g. Internet, color facsimile, printing, scanning, digital photography, remote sensing, mobile applications, medical imagery, digital library and E-commerce.
Abstract: With the increasing use of multimedia technologies, image compression requires higher performance as well as new features. To address this need in the specific area of still image encoding, a new standard is currently being developed, the JPEG2000. It is not only intended to provide rate-distortion and subjective image quality performance superior to existing standards, but also to provide features and functionalities that current standards can either not address efficiently or in many cases cannot address at all. Lossless and lossy compression, embedded lossy to lossless coding, progressive transmission by pixel accuracy and by resolution, robustness to the presence of bit-errors and region-of-interest coding, are some representative features. It is interesting to note that JPEG2000 is being designed to address the requirements of a diversity of applications, e.g. Internet, color facsimile, printing, scanning, digital photography, remote sensing, mobile applications, medical imagery, digital library and E-commerce.
1,485 citations