Topic

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over its lifetime, 7,977 publications have been published on this topic, receiving 126,632 citations.


Papers
Proceedings ArticleDOI
14 Oct 1997
TL;DR: A new quantization method for color images that uses a local error optimization strategy to generate near-optimal quantization levels, producing results superior to those of other popular image quantization algorithms.
Abstract: This paper presents a new quantization method for color images. It uses a local error optimization strategy to generate near-optimal quantization levels. The algorithm is simple to implement and produces results that are superior to those of other popular image quantization algorithms.

62 citations
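The abstract does not spell out the local error optimization itself, so the following is only a rough Python sketch in the same spirit: a Lloyd-style iteration (all names hypothetical) in which each quantization level is repeatedly moved to the mean of the pixels assigned to it, locally reducing the quantization error.

```python
# Hypothetical sketch of error-driven color quantization (Lloyd/k-means style);
# the 1997 paper's exact local error optimization is not reproduced here.
import numpy as np

def quantize_colors(pixels, k=16, iters=10, seed=0):
    """pixels: (N, 3) float array of RGB values; returns (palette, labels)."""
    rng = np.random.default_rng(seed)
    palette = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest quantization level.
        d = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Locally optimize each level to minimize error within its cell.
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                palette[j] = members.mean(axis=0)
    return palette, labels
```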

Proceedings ArticleDOI
16 Sep 1994
TL;DR: Results of a scheme to encode video sequences of digital image data based on a quadtree still-image fractal method, showing near real-time software-only decoding, resolution independence, high compression ratios, and low compression times compared with standard fixed-image fractal schemes.
Abstract: We present results of a scheme to encode video sequences of digital image data based on a quadtree still-image fractal method. The scheme encodes each frame using image pieces, or vectors, from its predecessor; hence it can be thought of as a VQ scheme in which the code book is derived from the previous image. We present results showing: near real-time (5-12 frames/sec) software-only decoding; resolution independence; high compression ratios (25-244:1); and low compression times (2.4-66 sec/frame) compared with standard fixed-image fractal schemes.

62 citations
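As a hedged illustration of the VQ idea above, where the code book comes from the previous frame, here is a minimal Python sketch. It does plain block matching only; the paper's quadtree partitioning and fractal transforms are omitted, and all names are hypothetical.

```python
# Illustrative frame-to-frame VQ: encode each block of the current frame as
# the index of its closest block in the previous frame. Assumes grayscale
# frames whose dimensions are multiples of the block size.
import numpy as np

def encode_frame(curr, prev, block=8):
    h, w = curr.shape
    # Build the code book from non-overlapping blocks of the previous frame.
    codebook = np.array([prev[y:y+block, x:x+block].ravel()
                         for y in range(0, h, block)
                         for x in range(0, w, block)])
    indices = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            v = curr[y:y+block, x:x+block].ravel()
            indices.append(int(np.argmin(((codebook - v) ** 2).sum(axis=1))))
    return indices
```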

Journal ArticleDOI
Lionel Gueguen
TL;DR: A new compact representation is proposed for the fast query/classification of compound structures from very high resolution optical remote sensing imagery; it relies on multiscale segmentation of the input image and quantization of image structures, pooled into visual word distributions that characterize compound structures.
Abstract: With the increased spatial resolution of current sensor constellations, more details are captured about our changing planet, enabling the recognition of a greater range of land use/land cover classes. While pixel- and object-based classification approaches are widely used for extracting information from imagery, recent studies have shown the importance of spatial contexts for discriminating more specific and challenging classes. This paper proposes a new compact representation for the fast query/classification of compound structures from very high resolution optical remote sensing imagery. This bag-of-features representation relies on the multiscale segmentation of the input image and the quantization of image structures pooled into visual word distributions for the characterization of compound structures. A compressed form of the visual word distributions is described, allowing adaptive and fast queries/classification of image patterns. The proposed representation and the query methodology are evaluated for the classification of the UC Merced 21-class data set, for the detection of informal settlements, and for the discrimination of challenging agricultural classes. The results show that the proposed representation competes with state-of-the-art techniques. In addition, the complexity analysis demonstrates that the representation requires about 5% of the image storage space while allowing queries at a speed down to 1 s per 1000 km² per CPU for 2-m multispectral data.

61 citations
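The bag-of-features step described above, quantizing local descriptors into visual words and pooling them into distributions, can be illustrated roughly as follows. This sketch uses a plain k-means vocabulary and random placeholder data, not the paper's multiscale segmentation or its compressed representation.

```python
# Hedged sketch of the bag-of-features idea: local descriptors are quantized
# against a learned code book and pooled into a visual-word histogram.
import numpy as np
from scipy.cluster.vq import kmeans, vq

def visual_word_histogram(descriptors, codebook):
    """descriptors: (N, D) array of local image features."""
    words, _ = vq(descriptors, codebook)          # quantize to nearest word
    hist, _ = np.histogram(words, bins=np.arange(len(codebook) + 1))
    return hist / max(hist.sum(), 1)              # normalized distribution

# Learn a small vocabulary from training descriptors (random placeholder data).
train = np.random.rand(1000, 16)
codebook, _ = kmeans(train, 32)
h = visual_word_histogram(np.random.rand(200, 16), codebook)
```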

Journal ArticleDOI
01 Mar 2005
TL;DR: This paper presents an FPGA implementation of the parallel-beam backprojection algorithm used in CT that meets all of these requirements; it shows approximately 100 times speedup over software versions of the same algorithm running on a 1 GHz Pentium and is more flexible than an ASIC implementation.
Abstract: Medical image processing in general and computerized tomography (CT) in particular can benefit greatly from hardware acceleration. This application domain is marked by computationally intensive algorithms requiring the rapid processing of large amounts of data. To date, reconfigurable hardware has not been applied to the important area of image reconstruction. For efficient implementation and maximum speedup, fixed-point implementations are required. The associated quantization errors must be carefully balanced against the requirements of the medical community. Specifically, care must be taken so that very little error is introduced compared to floating-point implementations and the visual quality of the images is not compromised. In this paper, we present an FPGA implementation of the parallel-beam backprojection algorithm used in CT for which all of these requirements are met. We explore a number of quantization issues arising in backprojection and concentrate on minimizing error while maximizing efficiency. Our implementation shows approximately 100 times speedup over software versions of the same algorithm running on a 1 GHz Pentium, and is more flexible than an ASIC implementation. Our FPGA implementation can easily be adapted to both medical sensors with different dynamic ranges as well as tomographic scanners employed in a wider range of application areas including nondestructive evaluation and baggage inspection in airport terminals.

61 citations
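The core trade-off the paper studies, fixed-point quantization error versus efficiency, can be sketched in a few lines. This toy Python example (hypothetical bit width and data) only shows how projection samples might be quantized and the resulting error measured; it says nothing about the paper's actual FPGA design.

```python
# Minimal sketch of fixed-point quantization: scale floats to integers with a
# chosen number of fractional bits, then compare against the float reference.
import numpy as np

def to_fixed(x, frac_bits=12):
    """Quantize a float array to signed fixed point with frac_bits fraction bits."""
    scale = 1 << frac_bits
    return np.round(x * scale).astype(np.int64)

def from_fixed(q, frac_bits=12):
    return q.astype(np.float64) / (1 << frac_bits)

sino = np.random.rand(180, 256)                   # hypothetical sinogram
err = np.abs(from_fixed(to_fixed(sino)) - sino).max()
print(f"max quantization error with 12 fractional bits: {err:.2e}")
```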

Journal ArticleDOI
Bolun Zheng, Yaowu Chen, Xiang Tian, Fan Zhou, Xuesong Liu
TL;DR: An implicit dual-domain convolutional network (IDCN) with a pixel position labeling map and quantization tables as inputs is proposed; it outperforms the state-of-the-art methods, and its flexible variant IDCN-f handles a wide range of compression qualities with only a small trade-off in performance.
Abstract: Several dual-domain convolutional neural network-based methods show outstanding performance in reducing image compression artifacts. However, they are unable to handle color images, as the compression processes for grayscale and color images differ. Moreover, these methods train a specific model for each compression quality and thus require multiple models to cover different compression qualities. To address these problems, we propose an implicit dual-domain convolutional network (IDCN) with a pixel position labeling map and quantization tables as inputs. We propose an extractor-corrector framework-based dual-domain correction unit (DCU) as the basic component of the IDCN; the implicit dual-domain translation allows the IDCN to handle color images with discrete cosine transform (DCT)-domain priors. A flexible version of IDCN (IDCN-f) was also developed to handle a wide range of compression qualities. Experiments for both objective and subjective evaluations on benchmark datasets show that IDCN is superior to the state-of-the-art methods, and IDCN-f handles a wide range of compression qualities with a small trade-off in performance; further, it demonstrates great potential for practical applications.

61 citations
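For context on the quantization tables IDCN takes as input: in JPEG-style coding, an 8x8 block of DCT coefficients is divided element-wise by such a table and rounded, and the table is what the network receives as side information. A minimal sketch follows, assuming a flat hypothetical table rather than a standard JPEG one.

```python
# Sketch of JPEG-style DCT quantization: forward 2D DCT, element-wise divide
# by a quantization table, round; dequantization reverses the steps.
import numpy as np
from scipy.fftpack import dct, idct

def quantize_block(block, qtable):
    coeffs = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
    return np.round(coeffs / qtable)

def dequantize_block(q, qtable):
    coeffs = q * qtable
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

qtable = np.full((8, 8), 16.0)                    # hypothetical flat table
block = np.random.rand(8, 8) * 255 - 128          # one centered 8x8 block
recon = dequantize_block(quantize_block(block, qtable), qtable)
```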


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 84% related
Image segmentation: 79.6K papers, 1.8M citations, 84% related
Feature (computer vision): 128.2K papers, 1.7M citations, 84% related
Image processing: 229.9K papers, 3.5M citations, 83% related
Robustness (computer science): 94.7K papers, 1.6M citations, 81% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2022    8
2021    354
2020    283
2019    294
2018    259
2017    295