Topic

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over its lifetime, 7,977 publications have been published on this topic, receiving 126,632 citations.


Papers
Journal Article
TL;DR: In this paper, the authors demonstrate and investigate resolution improvement of optical quantization using soliton self-frequency shift (SSFS) and optical coding using optical interconnection for all-optical analog-to-digital conversion (ADC).
Abstract: We demonstrate and investigate resolution improvement of optical quantization using soliton self-frequency shift (SSFS) and optical coding using optical interconnection for all-optical analog-to-digital conversion (ADC). Incorporating spectral compression into the optical quantization improves the bit resolution in proportion to the spectral compression ratio while maintaining throughput. The proposed scheme consists of optical quantization using SSFS and self-phase modulation (SPM)-induced spectral compression, and optical coding using optical interconnection based on a binary conversion table. In the optical quantization, the powers of input signals are discriminated by referring to their center wavelengths after the SSFS. Compressing the spectral width emphasizes the differences between these center wavelengths and thereby increases the number of resolution bits. The optical interconnection generates a bit-parallel binary code by appropriate allocation of a level identification signal, which is produced by the optical quantization. Experimental results show an eight-period transfer function, i.e., four-bit read-out operation of the proposed scheme in binary code. Simulation results indicate that the proposed optical quantization has the potential for 100-GS/s operation with 4-bit resolution, which could surpass electrical bandwidth limitations.
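
To make the quantize-then-encode flow concrete, here is a minimal numerical sketch, not the authors' implementation: input power is mapped to a wavelength shift standing in for the SSFS, the shift is binned into one of 16 levels (the level identification), and the level index is converted to a 4-bit parallel code, mimicking the interconnection's binary conversion table. The linear power-to-shift model and the SHIFT_PER_LEVEL constant are illustrative assumptions.

```python
# Illustrative sketch: power -> SSFS wavelength shift -> level -> binary code.
NUM_LEVELS = 16        # 4-bit resolution, as in the reported simulation
SHIFT_PER_LEVEL = 2.5  # nm of SSFS per level (hypothetical value)

def quantize(power: float) -> int:
    """Discriminate input power by the center wavelength after SSFS."""
    shift = power * 40.0                   # toy linear power-to-shift model (nm)
    level = int(shift // SHIFT_PER_LEVEL)  # bin the shifted center wavelength
    return min(max(level, 0), NUM_LEVELS - 1)

def encode(level: int) -> list[int]:
    """Binary conversion table: level identification -> bit-parallel code."""
    return [(level >> b) & 1 for b in reversed(range(4))]

for p in (0.05, 0.33, 0.61, 0.97):
    lvl = quantize(p)
    print(f"power={p:.2f} -> level {lvl:2d} -> bits {encode(lvl)}")
```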

93 citations

Journal Article
TL;DR: A detection and correction approach to transmission errors in JPEG images using the sequential discrete cosine transform (DCT)-based mode of operation is proposed; it can recover high-quality JPEG images from the corresponding corrupted images at bit error rates up to approximately 0.4%.
Abstract: A detection and correction approach to transmission errors in JPEG images using the sequential discrete cosine transform (DCT)-based mode of operation is proposed. The objective is to eliminate transmission errors in JPEG images. Here, a transmission error may be either a single-bit error or a burst error containing N successive error bits. For an entropy-coded JPEG image, a single transmission error in a codeword will not only affect the underlying codeword but may also affect subsequent codewords; consequently, a single error in an entropy-coded system may result in significant degradation. To cope with this synchronization problem, the proposed approach enables the restart capability of JPEG images, i.e., the eight unique restart markers (synchronization codewords) are periodically inserted into the JPEG compressed bitstream. Transmission errors in a JPEG image are detected sequentially, both while the image is being decoded and after it has been decoded. When a transmission error, or equivalently a corrupted restart interval, is detected, the proposed approach performs a sequence of bit inversions and redecoding operations on the corrupted restart interval and selects the "best" feasible redecoding solution using a proposed cost function for error correction. The approach can recover high-quality JPEG images from the corresponding corrupted images at bit error rates (BERs) up to approximately 0.4%, which shows its feasibility.
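
The correction loop lends itself to a compact sketch. The version below enumerates single-bit inversions only, and decode_interval and cost are hypothetical stand-ins for a real JPEG entropy decoder and the paper's cost function:

```python
def correct_interval(bits: bytearray, decode_interval, cost):
    """Try every single-bit inversion in a corrupted restart interval and
    return the redecoded blocks with the lowest cost (None if none decode)."""
    best_blocks, best_cost = None, float("inf")
    for i in range(len(bits) * 8):
        bits[i // 8] ^= 1 << (i % 8)      # invert one bit
        blocks = decode_interval(bits)    # hypothetical decoder; None on failure
        if blocks is not None:
            c = cost(blocks)              # e.g., block-boundary smoothness
            if c < best_cost:
                best_blocks, best_cost = blocks, c
        bits[i // 8] ^= 1 << (i % 8)      # restore the bit and continue
    return best_blocks
```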

93 citations

Proceedings Article
03 Oct 2005
TL;DR: A system-level error-tolerance scheme is proposed for systems in which a linear transform is combined with quantization, taking as an example the discrete cosine transform (DCT), which is part of a large number of existing image and video compression systems.
Abstract: In this paper, we propose a system-level error tolerance scheme for systems where a linear transform is combined with quantization. These are key components in multimedia compression systems, e.g., video and image codecs. Using the concept of acceptable degradation, our scheme classifies hardware faults into acceptable and unacceptable faults. We propose analysis techniques that allow us to estimate the faults' impact on compression performance, and in particular on the quality of decoded images/video. We consider as an example the discrete cosine transform (DCT), which is part of a large number of existing image and video compression systems. We propose methods to establish thresholds of acceptable degradation and corresponding testing algorithms for DCT-based systems. Our results for a JPEG encoder using a typical DCT architecture show that over 50% of single stuck-at interconnection faults in one of its 1D DCT modules lead to imperceptible quality degradation in the decoded images, over the complete range of compression rates at which JPEG can operate.
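
A toy software illustration of the acceptable-degradation idea (a sketch, not the paper's testing algorithm): run a DCT-quantize-IDCT pipeline with and without a stuck-at fault injected on one transform coefficient, and classify the fault by the PSNR between the two decoded outputs. The quantization step and the 40-dB threshold are assumed values.

```python
import numpy as np
from scipy.fftpack import dct, idct

def codec(block, fault=None):
    """DCT -> uniform quantization -> inverse DCT; optionally inject a
    stuck-at fault that pins one transform coefficient to a fixed value."""
    coeffs = dct(dct(block.T, norm="ortho").T, norm="ortho")
    if fault is not None:
        u, v, stuck_value = fault
        coeffs[u, v] = stuck_value
    quantized = np.round(coeffs / 16) * 16   # assumed quantization step of 16
    return idct(idct(quantized.T, norm="ortho").T, norm="ortho")

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, (8, 8))
clean = codec(block)
faulty = codec(block, fault=(7, 7, 0.0))     # stuck-at-0 on one coefficient

mse = np.mean((clean - faulty) ** 2)
psnr = float("inf") if mse == 0 else 10 * np.log10(255.0**2 / mse)
print(f"PSNR vs. fault-free decoding: {psnr:.1f} dB; "
      f"acceptable: {psnr > 40.0}")          # 40 dB is an assumed threshold
```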

93 citations

Proceedings Article
29 Oct 2012
TL;DR: The proposed scalar quantization achieves a relative improvement of 42% in mean average precision over the baseline (hierarchical visual vocabulary tree approach), and also outperforms the state-of-the-art Hamming Embedding approach and the soft-assignment method.
Abstract: The Bag-of-Words (BoW) model based on SIFT has been widely used in large-scale image retrieval applications. Feature quantization plays a crucial role in the BoW model: it generates visual words from high-dimensional SIFT features so as to fit the inverted file structure used for indexing. Traditional feature quantization approaches suffer from several problems: 1) high computational cost, since visual word generation (codebook construction) is time-consuming, especially with a large number of features; 2) limited reliability, since different collections of images may produce totally different codebooks and quantization error is hard to control; and 3) update inefficiency, since once the codebook is constructed it is not easy to update. In this paper, a novel feature quantization algorithm, scalar quantization, is proposed. With scalar quantization, a SIFT feature is quantized to a descriptive and discriminative bit-vector, of which the first tens of bits are taken as the code word. Our quantizer is independent of any particular collection of images. In addition, the result of scalar quantization naturally adapts to the classic inverted file structure for image indexing. Moreover, the quantization error can be flexibly reduced and controlled by efficiently enumerating nearest neighbors of code words. The performance of scalar quantization has been evaluated in partial-duplicate Web image search on a database of one million images. Experiments reveal that the proposed scalar quantization achieves a relative improvement of 42% in mean average precision over the baseline (hierarchical visual vocabulary tree approach), and also outperforms the state-of-the-art Hamming Embedding approach and the soft-assignment method.
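
A rough sketch of the pipeline follows. The median-thresholding rule used to produce the bit-vector is an assumption made to keep the example self-contained, not necessarily the authors' exact quantization rule; the leading 32 bits stand in for the "first tens of bits" used as the code word.

```python
import numpy as np

CODE_BITS = 32   # leading bits of the bit-vector used as the code word

def binarize(sift: np.ndarray) -> np.ndarray:
    """128-d SIFT descriptor -> bit-vector (median thresholding is an
    assumed rule, chosen only to make the sketch concrete)."""
    return (sift > np.median(sift)).astype(np.uint8)

def code_word(bits: np.ndarray) -> int:
    """Pack the leading CODE_BITS bits into an integer inverted-file key."""
    return int("".join(map(str, bits[:CODE_BITS])), 2)

def neighbor_codes(code: int) -> list[int]:
    """Enumerate code words within Hamming distance 1 to trade off
    quantization error against lookup cost."""
    return [code ^ (1 << b) for b in range(CODE_BITS)]

desc = np.random.default_rng(1).random(128).astype(np.float32)
key = code_word(binarize(desc))
print(f"code word: {key:#010x}, {len(neighbor_codes(key))} 1-bit neighbors")
```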

93 citations

Journal Article
TL;DR: A chip has been designed and tested to demonstrate the feasibility of an ultra-low-power, two-dimensional inverse discrete cosine transform (IDCT) computation unit in a standard 3.3-V process; the chip meets the sample-rate requirements for MPEG-2 MP@ML.
Abstract: A chip has been designed and tested to demonstrate the feasibility of an ultra-low-power, two-dimensional inverse discrete cosine transform (IDCT) computation unit in a standard 3.3-V process. A data-driven computation algorithm that exploits the relative occurrence of zero-valued DCT coefficients coupled with clock gating has been used to minimize switched capacitance. In addition, circuit and architectural techniques such as deep pipelining have been used to lower the voltage and reduce the energy dissipation per sample. A Verilog-based power tool has been developed and used for architectural exploration and power estimation. The chip has a measured power dissipation of 4.65 mW at 1.3 V and 14 MHz, which meets the sample rate requirements for MPEG-2 MP@ML. The power dissipation improves significantly at lower bit rates (coarser quantization), which makes this implementation ideal for emerging quality-on-demand protocols that trade off energy efficiency and video quality.
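
The data-driven zero-skipping idea can be sketched in software (this is an illustration of the principle, not the chip's architecture): accumulate 2-D IDCT basis contributions only for nonzero coefficients, so the work, like the chip's switched capacitance, scales with how coarsely the block was quantized.

```python
import numpy as np

def basis(u: int, v: int) -> np.ndarray:
    """8x8 2-D DCT-II basis image for frequency pair (u, v)."""
    cu = np.sqrt(0.5) if u == 0 else 1.0
    cv = np.sqrt(0.5) if v == 0 else 1.0
    n = np.arange(8)
    bu = np.cos((2 * n + 1) * u * np.pi / 16)
    bv = np.cos((2 * n + 1) * v * np.pi / 16)
    return 0.25 * cu * cv * np.outer(bu, bv)

def idct_skip_zeros(coeffs: np.ndarray) -> np.ndarray:
    """Accumulate basis images only for nonzero coefficients; zeros cost
    nothing, mirroring the chip's data-driven zero skipping."""
    out = np.zeros((8, 8))
    for u, v in zip(*np.nonzero(coeffs)):
        out += coeffs[u, v] * basis(u, v)
    return out

# Coarsely quantized blocks are mostly zeros, so little work is done:
coeffs = np.zeros((8, 8))
coeffs[0, 0], coeffs[0, 1] = 720.0, -32.0   # two surviving coefficients
print(idct_skip_zeros(coeffs).round(1))
```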

93 citations


Network Information
Related Topics (5)

Topic                            Papers    Citations   Related
Feature extraction               111.8K    2.1M        84%
Image segmentation               79.6K     1.8M        84%
Feature (computer vision)        128.2K    1.7M        84%
Image processing                 229.9K    3.5M        83%
Robustness (computer science)    94.7K     1.6M        81%
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2022    8
2021    354
2020    283
2019    294
2018    259
2017    295