
Quantization (image processing)

About: Quantization (image processing) is a research topic. Over the lifetime, 7977 publications have been published within this topic receiving 126632 citations.


Papers
Patent
14 Nov 2003
TL;DR: In this paper, a system and method for variable bit rate encoding using a complexity ratio is presented: complex pictures are allocated a larger bit budget relative to simple pictures, so their quality can be maintained while reducing the overall size of the encoded video stream.
Abstract: A system and method are provided for variable bit rate encoding using a complexity ratio. The quantization parameter is calculated from a complexity ratio, which is equal to a local complexity divided by a global complexity. Complex pictures are allocated a larger bit budget relative to simple pictures. With the larger bit budget, the quality of complex pictures can be maintained while reducing the overall size of the encoded video stream.

53 citations
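
The patent abstract gives the governing idea (quantization parameter driven by the ratio of local to global complexity) but no concrete formula. Below is a minimal Python sketch of that idea; the gradient-based complexity measure, the base QP of 26, and the linear QP adjustment are illustrative assumptions, not the patented method.

    def complexity(picture):
        # Stand-in complexity measure: mean absolute horizontal gradient.
        total, count = 0.0, 0
        for row in picture:
            for a, b in zip(row, row[1:]):
                total += abs(a - b)
                count += 1
        return total / max(count, 1)

    def quantization_parameter(local_complexity, global_complexity, base_qp=26):
        # A complexity ratio > 1 marks a complex picture; lowering QP gives
        # it a larger bit budget (finer quantization, more bits).
        ratio = local_complexity / global_complexity if global_complexity else 1.0
        qp = round(base_qp - 6 * (ratio - 1.0))
        return max(0, min(51, qp))  # clamp to the H.264-style 0..51 QP range

Dividing a local measure by the global average makes the allocation relative: pictures near average complexity keep the base QP, and only the outliers are rebalanced.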

Book ChapterDOI
21 May 2001
TL;DR: A blind watermarking method is integrated into the JPEG2000 coding pipeline; it is robust to compression and other image-processing attacks and is demonstrated in two application scenarios: image authentication and copyright protection.
Abstract: In this paper, we propose a blind watermarking method integrated in the JPEG2000 coding pipeline. Prior to the entropy coding stage, the binary watermark is placed in the independent code-blocks using Quantization Index Modulation (QIM). The quantization strategy allows data to be embedded in the detail subbands of low resolution as well as in the approximation image. Watermark recovery is performed without reference to the original image during image decompression. The proposed embedding scheme is robust to compression and other image processing attacks. We demonstrate two application scenarios: image authentication and copyright protection.

53 citations
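
Quantization Index Modulation, the embedding primitive named in the abstract, can be sketched in a few lines. This is the illustrative scalar version (dither modulation with two interleaved lattices); the step size delta is an assumed parameter, and the paper's code-block-wise embedding inside the JPEG2000 pipeline is not reproduced here.

    def qim_embed(coeff, bit, delta=8.0):
        # Quantize the coefficient onto the lattice assigned to the bit:
        # multiples of delta for bit 0, offset by delta/2 for bit 1.
        offset = (delta / 2) * bit
        return delta * round((coeff - offset) / delta) + offset

    def qim_extract(coeff, delta=8.0):
        # Blind extraction: re-quantize with both lattices and keep the
        # bit whose lattice point is nearer (no original image needed).
        d0 = abs(coeff - qim_embed(coeff, 0, delta))
        d1 = abs(coeff - qim_embed(coeff, 1, delta))
        return 0 if d0 <= d1 else 1

Because extraction only compares distances to the two lattices, recovery needs neither the original image nor the embedded data, which is what makes the scheme blind.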

Journal ArticleDOI
TL;DR: No chaotic operation is needed for image diffusion, which improves efficiency; the complete cryptosystem is built using the Baker map for image permutation, and the analysis demonstrates the superior security and high efficiency of the proposed scheme.

53 citations
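
The Baker map named in the summary is a standard chaotic permutation: the unit square is cut into vertical strips that are stretched to full width, flattened, and stacked. Below is a minimal Python sketch of the two-strip discretization for an N×N image; the even strip split is an assumption, since the summary does not give the paper's exact partition.

    def baker_permute(img):
        # One round of the discretized Baker map on an N x N image, N even.
        # Each vertical strip of width N/2 is stretched to full width and
        # half height, then the two results are stacked.
        n = len(img)
        assert n % 2 == 0 and all(len(row) == n for row in img)
        out = [[0] * n for _ in range(n)]
        for y in range(n):
            for x in range(n):
                if x < n // 2:
                    nx, ny = 2 * x + y % 2, y // 2                      # lower half
                else:
                    nx, ny = 2 * (x - n // 2) + y % 2, y // 2 + n // 2  # upper half
                out[ny][nx] = img[y][x]
        return out

Applying several rounds scatters neighbouring pixels across the whole image; the scheme pairs this chaotic permutation with a non-chaotic diffusion stage.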

Journal ArticleDOI
TL;DR: A reversed-pruning strategy is proposed that reduces the number of parameters of AlexNet by a factor of 13× without accuracy loss on the ImageNet dataset, together with an efficient storage technique that reduces the cache overhead of the convolutional and fully connected layers.
Abstract: Field programmable gate array (FPGA) is widely considered a promising platform for convolutional neural network (CNN) acceleration. However, the large number of parameters in CNNs causes heavy computing and memory burdens for FPGA-based CNN implementations. To solve this problem, this paper proposes an optimized compression strategy and realizes an FPGA-based accelerator for CNNs. Firstly, a reversed-pruning strategy is proposed which reduces the number of parameters of AlexNet by a factor of 13× without accuracy loss on the ImageNet dataset. Peak-pruning is further introduced to achieve better compressibility. Moreover, quantization gives another 4× reduction with negligible loss of accuracy. Secondly, an efficient storage technique is presented that reduces the cache overhead of the convolutional layer and the fully connected layer, respectively. Finally, the effectiveness of the proposed strategy is verified by an accelerator implemented on a Xilinx ZCU104 evaluation board. By improving existing pruning techniques and the storage format of sparse data, we significantly reduce the size of AlexNet by 28×, from 243 MB to 8.7 MB. In addition, the overall performance of our accelerator achieves 9.73 fps for the compressed AlexNet. Compared with central processing unit (CPU) and graphics processing unit (GPU) platforms, our implementation achieves 182.3× and 1.1× improvements in latency and throughput, respectively, on the convolutional (CONV) layers of AlexNet, with 822.0× and 15.8× improvements in energy efficiency, respectively.

53 citations
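
A minimal sketch of the two generic compression steps the abstract combines: magnitude pruning followed by uniform 8-bit quantization. The paper's reversed-pruning and peak-pruning refinements and its sparse storage format are not reproduced here, and the keep ratio below is an illustrative assumption.

    def prune_by_magnitude(weights, keep_ratio=1 / 13):
        # Zero all but the largest-magnitude fraction of weights
        # (roughly the 13x parameter reduction reported for AlexNet).
        k = max(1, int(len(weights) * keep_ratio))
        threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
        return [w if abs(w) >= threshold else 0.0 for w in weights]

    def quantize_int8(weights):
        # Uniform 8-bit quantization: 32-bit floats become int8 codes plus
        # one scale factor, giving roughly the 4x reduction cited above.
        scale = max(abs(w) for w in weights) / 127 or 1.0
        return [round(w / scale) for w in weights], scale

Note that the two factors do not simply multiply to the reported 28×: storing sparse weights requires index overhead, which is exactly what the paper's storage format is designed to shrink.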

Journal ArticleDOI
TL;DR: It is demonstrated that the proposed dense micro-block difference features have much lower dimensionality than the Scale Invariant Feature Transform (SIFT) and can be computed much faster than SIFT using integral images.
Abstract: This paper is devoted to the problem of texture classification. Motivated by recent advancements in compressive sensing and keypoint descriptors, a set of novel features called dense micro-block differences (DMD) is proposed. These features provide a highly descriptive representation of image patches by densely capturing granularities at multiple scales and orientations. Unlike most earlier work on local features, DMD does not involve any quantization, thus retaining the complete information. We demonstrate that DMD has much lower dimensionality than the Scale Invariant Feature Transform (SIFT) and can be computed much faster than SIFT using integral images. The proposed features are encoded using the Fisher vector method to obtain an image descriptor that considers high-order statistics. The proposed image representation is combined with a linear support vector machine classifier. Extensive experiments are conducted on five texture data sets (KTH-TIPS, UMD, KTH-TIPS-2a, Brodatz, and CUReT) using standard protocols. The results demonstrate that our approach outperforms the state of the art in texture classification.

53 citations
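
The speed claim rests on integral images: once the cumulative-sum table is built, any block sum costs four lookups, so a micro-block difference costs eight, independent of block size. A small Python sketch of that computation follows; block positions and sizes are arbitrary examples, not the paper's sampling scheme.

    def integral_image(img):
        # ii[y][x] holds the sum of img over the rectangle [0, y) x [0, x).
        h, w = len(img), len(img[0])
        ii = [[0] * (w + 1) for _ in range(h + 1)]
        for y in range(h):
            row_sum = 0
            for x in range(w):
                row_sum += img[y][x]
                ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
        return ii

    def block_sum(ii, y, x, size):
        # Sum of the size x size block at top-left (y, x): four lookups.
        return (ii[y + size][x + size] - ii[y][x + size]
                - ii[y + size][x] + ii[y][x])

    def micro_block_difference(ii, p, q, size):
        # Difference of mean intensities of two micro-blocks at p and q.
        area = size * size
        return (block_sum(ii, p[0], p[1], size)
                - block_sum(ii, q[0], q[1], size)) / area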


Network Information
Related Topics (5)
Topic                            Papers    Citations    Related
Feature extraction               111.8K    2.1M         84%
Image segmentation               79.6K     1.8M         84%
Feature (computer vision)        128.2K    1.7M         84%
Image processing                 229.9K    3.5M         83%
Robustness (computer science)    94.7K     1.6M         81%
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    8
2021    354
2020    283
2019    294
2018    259
2017    295