
Showing papers on "JPEG 2000" published in 2011


Journal ArticleDOI
TL;DR: It is established experimentally that the ITAD structure results in lower-complexity representations that enjoy greater sparsity when compared to other recent dictionary structures, and a global rate-distortion criterion is proposed that distributes the code bits across the various image blocks.
Abstract: We introduce a new image coder which uses the Iteration Tuned and Aligned Dictionary (ITAD) as a transform to code image blocks taken over a regular grid. We establish experimentally that the ITAD structure results in lower-complexity representations that enjoy greater sparsity when compared to other recent dictionary structures. We show that this superior sparsity can be exploited successfully for compressing images belonging to specific classes of images (e.g., facial images). We further propose a global rate-distortion criterion that distributes the code bits across the various image blocks. Our evaluation shows that the proposed ITAD codec can outperform JPEG2000 by more than 2 dB at 0.25 bpp and by 0.5 dB at 0.45 bpp, accordingly producing qualitatively better reconstructions.

90 citations


Journal ArticleDOI
TL;DR: It is shown that for cartoon-like images this codec can outperform the JPEG standard and even its more advanced successor JPEG2000, while still being able to encode and decode in real time.

84 citations


Journal ArticleDOI
TL;DR: A new image quality assessment method based on a hybrid of curvelet, wavelet, and cosine transforms, called the hybrid no-reference (HNR) model, which is an NR method applicable to arbitrary images without compromising the prediction accuracy of full-reference methods and is the only general NR method well suited for four types of filters.
Abstract: In this paper, we propose a new image quality assessment method based on a hybrid of curvelet, wavelet, and cosine transforms called the hybrid no-reference (HNR) model. From the properties of natural scene statistics, the peak coordinates of the transformed coefficient histogram of filtered natural images occupy well-defined clusters in peak coordinate space, which makes NR assessment possible. Compared to other methods, HNR has three benefits: 1) It is an NR method applicable to arbitrary images without compromising the prediction accuracy of full-reference methods; 2) as far as we know, it is the only general NR method well suited for four types of filters: noise, blur, JPEG2000, and JPEG compression; and 3) it can classify the filter types of the image and predict filter levels even when the image results from the application of two different filters. We tested HNR on a very intensive video image database (our image library) and the Laboratory for Image & Video Engineering database (a public library). Results are compared to state-of-the-art methods including peak SNR, structural similarity, visual information fidelity, and so on.

82 citations


Proceedings ArticleDOI
19 Dec 2011
TL;DR: Wavelet compression in JPEG 2000 is revisited by using a standards-based method to reduce large-scale data sizes for production scientific computing and to quantify compression effects, measuring bit rate versus maximum error as a quality metric to provide precision guarantees for scientific analysis on remotely compressed POP data.
Abstract: We revisit wavelet compression by using a standards-based method to reduce large-scale data sizes for production scientific computing. Many of the bottlenecks in visualization and analysis come from limited bandwidth in data movement, from storage to networks. The majority of the processing time for visualization and analysis is spent reading or writing large-scale data or moving data from a remote site in a remote-access scenario. Using wavelet compression in JPEG 2000, we provide a mechanism to trade data transfer time against data quality, so that a domain expert can improve data transfer time while quantifying compression effects on their data. By using a standards-based method, we are able to provide scientists with state-of-the-art wavelet compression from the signal processing and data compression community, suitable for use in a production computing environment. To quantify compression effects, we focus on measuring bit rate versus maximum error as a quality metric to provide precision guarantees for scientific analysis on remotely compressed POP (Parallel Ocean Program) data.
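As a rough illustration of the quality metric the paper emphasizes, the sketch below computes bit rate (bits per sample) and maximum absolute error for a reconstructed field; the array and byte-count names are placeholders, not the paper's tooling.

```python
import numpy as np

def compression_quality(original, reconstructed, compressed_num_bytes):
    """Report bit rate (bits per sample) and maximum absolute error,
    the quality pairing emphasized in the paper. Inputs are illustrative."""
    original = np.asarray(original, dtype=np.float64)
    reconstructed = np.asarray(reconstructed, dtype=np.float64)
    bit_rate = 8.0 * compressed_num_bytes / original.size
    max_error = float(np.max(np.abs(original - reconstructed)))
    return bit_rate, max_error

# Hypothetical usage with a decoded ocean-model field:
# rate, err = compression_quality(pop_field, decoded_field, len(jp2_bytes))
# print(f"{rate:.2f} bits/sample, max abs error {err:.3e}")
```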

69 citations


Journal ArticleDOI
TL;DR: Huffman and arithmetic coding algorithms are implemented and tested; the results show that the compression ratio of arithmetic coding is better than that of Huffman coding, while the performance of Huffman coding is higher than that of arithmetic coding.
Abstract: Compression is a technique to reduce the quantity of data without excessively reducing the quality of the multimedia data. The transmission and storage of compressed multimedia data is much faster and more efficient than that of the original uncompressed multimedia data. There are various techniques and standards for multimedia data compression, especially for image compression, such as the JPEG and JPEG2000 standards. These standards consist of different functions such as color space conversion and entropy coding. Arithmetic and Huffman coding are normally used in the entropy coding phase. In this paper we try to answer the following question: which entropy coding, arithmetic or Huffman, is more suitable from the compression ratio, performance, and implementation points of view? We have implemented and tested the Huffman and arithmetic algorithms. Our results show that the compression ratio of arithmetic coding is better than that of Huffman coding, while the performance of Huffman coding is higher than that of arithmetic coding. In addition, the implementation of Huffman coding is much easier than that of arithmetic coding.
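To make the entropy-coding comparison concrete, here is a minimal Huffman table builder in Python; it is only an illustrative sketch of prefix-code construction, not the paper's implementation, and a practical arithmetic coder (the other side of the comparison) requires considerably more per-symbol bookkeeping, which is the performance trade-off the authors measure.

```python
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict:
    """Build a Huffman code table {symbol: bitstring} for the symbols in `data`."""
    freq = Counter(data)
    # Each heap entry: (frequency, tie-breaker, {symbol: partial codeword}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol input
        _, _, table = heap[0]
        return {sym: "0" for sym in table}
    counter = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

if __name__ == "__main__":
    sample = b"abracadabra"
    table = huffman_code(sample)
    bits = sum(len(table[s]) for s in sample)
    print(table, f"-> {bits} bits vs {8 * len(sample)} bits uncompressed")
```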

52 citations


Journal ArticleDOI
01 Jun 2011
TL;DR: The fractional wavelet filter computes the wavelet transform of a 256×256 grayscale image using only 16-bit fixed-point arithmetic on a microcontroller with less than 1.5 kbyte of RAM, and typically gives negligible degradations in image quality.
Abstract: Existing image wavelet transform techniques exceed the computational and memory resources of low-complexity wireless sensor nodes. In order to enable multimedia wireless sensors to use image wavelet transform techniques to pre-process collected image sensor data, we introduce the fractional wavelet filter. The fractional wavelet filter computes the wavelet transform of a 256×256 grayscale image using only 16-bit fixed-point arithmetic on a microcontroller with less than 1.5 kbyte of RAM. We comprehensively evaluate the resource requirements (RAM, computational complexity, computing time) as well as the image quality of the fractional wavelet filter. We find that the fractional wavelet transform computed with fixed-point arithmetic typically gives negligible degradations in image quality. We also find that combining the fractional wavelet filter with a customized wavelet-based image coding system achieves image compression competitive with the JPEG2000 standard.
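For intuition about wavelet filtering with integer-only arithmetic on constrained hardware, the sketch below applies one 1-D level of the reversible CDF 5/3 lifting transform from the JPEG2000 family using integer operations only; it illustrates the general idea and is not the authors' fractional wavelet filter, which additionally restricts the word length to 16-bit fixed point and minimizes RAM usage.

```python
def lifting_53_forward(row):
    """One 1-D level of the reversible CDF 5/3 lifting transform, integer arithmetic only.
    Assumes an even-length input of at least 4 samples for brevity; JPEG2000
    handles odd lengths with symmetric extension."""
    x = list(row)
    n = len(x)
    assert n >= 4 and n % 2 == 0
    # Predict step: odd samples become high-pass (detail) coefficients.
    for i in range(1, n - 1, 2):
        x[i] -= (x[i - 1] + x[i + 1]) >> 1
    x[n - 1] -= x[n - 2]                 # symmetric extension at the right boundary
    # Update step: even samples become low-pass (approximation) coefficients.
    x[0] += (2 * x[1] + 2) >> 2          # symmetric extension at the left boundary
    for i in range(2, n, 2):
        x[i] += (x[i - 1] + x[i + 1] + 2) >> 2
    return x[0::2], x[1::2]              # (approximation, detail)

# Example: low, high = lifting_53_forward([10, 12, 14, 13, 11, 9, 8, 8])
```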

41 citations


BookDOI
TL;DR: A simple quantisation and coding scheme of colour MP decomposition based on Run Length Encoding (RLE) which can achieve comparable performance to JPEG 2000 even though the latter utilises careful data modelling at the coding stage.
Abstract: We present and evaluate a novel idea for scalable lossy colour image coding with Matching Pursuit (MP) performed in a transform domain. The idea is to exploit correlations in RGB colour space between image subbands after wavelet transformation rather than in the spatial domain. We propose a simple quantisation and coding scheme of colour MP decomposition based on Run Length Encoding (RLE) which can achieve comparable performance to JPEG 2000 even though the latter utilises careful data modelling at the coding stage. Thus, the obtained image representation has the potential to outperform JPEG 2000 with a more sophisticated coding algorithm.
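As a reminder of the core greedy step behind the coder described above, here is a generic Matching Pursuit sketch over an arbitrary unit-norm dictionary; the dictionary, atom count, and signal are placeholders, and the chapter's specific contribution (colour MP in the wavelet domain with RLE-based coding) is not reproduced here.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy Matching Pursuit: repeatedly pick the dictionary atom most
    correlated with the residual. `dictionary` has unit-norm atoms as columns;
    all sizes are illustrative."""
    residual = np.asarray(signal, dtype=np.float64).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual        # inner products with all atoms
        k = int(np.argmax(np.abs(correlations)))      # best-matching atom
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return coeffs, residual
```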

37 citations


Journal ArticleDOI
TL;DR: An effective method is proposed to detect the quantization table of contaminated digital images originally stored in JPEG format, based on recently developed work on JPEG compression error analysis, and a quantitative method is presented to reliably estimate the length of spatial modifications in gray-scale JPEG stegos using data fitting.
Abstract: Although many existing steganalysis works have shown that spatial ±1 steganography on JPEG pre-compressed images is relatively easier to detect than on never-compressed images, most experimental results seem not very convincing, since these methods usually assume that the quantization table previously used for the JPEG stegos is known before detection and/or that the length of the embedded message is fixed. Furthermore, there are just a few effective quantitative algorithms for further estimating the spatial modifications. In this letter, we first propose an effective method to detect the quantization table of contaminated digital images which are originally stored in JPEG format, based on our recently developed work on JPEG compression error analysis, and then we present a quantitative method to reliably estimate the length of spatial modifications in those gray-scale JPEG stegos by using data fitting. The extensive experimental results show that our estimators are very effective, and the order of magnitude of the prediction error, measured by the mean absolute difference, remains small.

34 citations


Journal ArticleDOI
TL;DR: This work investigates the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens, and suggests that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
Abstract: A major issue in telepathology is the extremely large and growing size of digitized “virtual” slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. “Visually lossless” compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
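For reference, one of the baseline metrics the study compares against (PSNR) can be computed as in the short sketch below; the peak value of 255 assumes 8-bit samples.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB, one of the reference metrics the VDM
    study compares its JND predictions against."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")                 # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```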

33 citations


Proceedings ArticleDOI
29 Nov 2011
TL;DR: A forensic algorithm to discriminate between original and forged regions in JPEG images, under the hypothesis that the tampered image presents a non-aligned double JPEG compression (NA-JPEG).
Abstract: In this paper, we present a forensic algorithm to discriminate between original and forged regions in JPEG images, under the hypothesis that the tampered image presents a non-aligned double JPEG compression (NA-JPEG). Unlike previous approaches, the proposed algorithm does not need to manually select a suspect region to test the presence or the absence of NA-JPEG artifacts. Based on a new statistical model, the probability for each 8 × 8 DCT block to be forged is automatically derived. Experimental results, considering different forensic scenarios, demonstrate the validity of the proposed approach.

31 citations


Journal ArticleDOI
TL;DR: The study examines the implications of JPEG (JPG) and JPEG 2000 (J2K) lossy compression for image classification of forests in Mediterranean areas and finds that the J2K compression standard is better than JPG when applied to image classification.

Journal ArticleDOI
TL;DR: To encode all samples in a stripe-column concurrently, a new technique named compact context coding is devised; as a result, high throughput is attained and the hardware requirement is also reduced.
Abstract: The embedded block coding with optimized truncation (EBCOT) is a key algorithm in the JPEG 2000 image compression system. Various applications, such as medical imaging, satellite imagery, digital cinema, and others, require a high-speed, high-performance EBCOT architecture. Though efficient EBCOT architectures have been proposed, the hardware requirement of these existing architectures is very high and their throughput is low. To solve this problem, we investigated the rate of concurrent context generation. Our study revealed that, in an image, the rate of generating four or more context pairs is about 68.9%. Therefore, to encode all samples in a stripe-column concurrently, a new technique named compact context coding is devised. As a consequence, high throughput is attained and the hardware requirement is also reduced. The performance of the MQ coder is improved by operating the renormalization and byte-out stages concurrently. The entire design of the EBCOT encoder is tested on a field programmable gate array platform. The implementation results show that the throughput of the proposed architecture is 163.59 MSamples/s, which is equivalent to encoding a 1080p (1920 × 1080, 4:2:2) high-definition TV picture sequence at 39 f/s. However, the bit plane coder (BPC) architecture alone operates at 315.06 MHz, which implies that it is 2.86 times faster than the fastest BPC design available so far. Moreover, it is capable of encoding digital cinema size (2048 × 1080) at 42 f/s. Thus, it satisfies the requirements of applications like cartography, medical imaging, satellite imagery, and others, which demand a high-speed real-time image compression system.

Book ChapterDOI
23 Oct 2011
TL;DR: The efficacy of the proposed anti-forensic scheme has been evaluated on two prominent double JPEG detection techniques, and the outcome reveals that the proposed scheme is mostly effective, especially in cases where the first quality factor is lower than the second quality factor.
Abstract: In this paper, a simple yet effective anti-forensic scheme capable of misleading double JPEG compression detection techniques is proposed. Based on image resizing with bilinear interpolation, the proposed operation aims at destroying the JPEG grid structure while preserving reasonably good image quality. Given a doubly compressed image, our attack modifies the image by JPEG decompressing, shrinking, and zooming the image with bilinear interpolation before JPEG compression with the same quality factor as used in the given image. The efficacy of the proposed scheme has been evaluated on two prominent double JPEG detection techniques, and the outcome reveals that the proposed scheme is mostly effective, especially in cases where the first quality factor is lower than the second quality factor.
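A minimal Pillow-based sketch of the kind of shrink-and-zoom recompression attack the chapter describes is shown below; the shrink factor and quality value are illustrative placeholders, and in the paper the recompression uses the same quality factor as the given image.

```python
from PIL import Image

def resize_recompress_attack(in_path, out_path, quality=75, shrink=0.95):
    """Disturb the 8x8 JPEG grid alignment by shrinking and re-expanding the
    decompressed image with bilinear interpolation, then recompressing.
    `shrink` and `quality` are illustrative values, not the paper's settings."""
    img = Image.open(in_path).convert("RGB")                    # JPEG decompression
    w, h = img.size
    small = img.resize((max(1, int(w * shrink)), max(1, int(h * shrink))),
                       resample=Image.BILINEAR)                 # shrink
    restored = small.resize((w, h), resample=Image.BILINEAR)    # zoom back to size
    restored.save(out_path, format="JPEG", quality=quality)     # recompress

# resize_recompress_attack("doubly_compressed.jpg", "attacked.jpg", quality=85)
```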

Book ChapterDOI
29 Aug 2011
TL;DR: JPEG XR is considered as a lossy sample data compression scheme in the context of iris recognition techniques, and it is shown that, apart from low-bitrate scenarios, JPEG XR is competitive with the current standard JPEG2000 while exhibiting significantly lower computational demands.
Abstract: JPEG XR is considered as a lossy sample data compression scheme in the context of iris recognition techniques. It is shown that, apart from low-bitrate scenarios, JPEG XR is competitive with the current standard JPEG2000 while exhibiting significantly lower computational demands.

Book ChapterDOI
08 Jun 2011
TL;DR: The impact of using different lossless compression algorithms when compressing biometric iris sample data from several public iris databases is investigated and polar iris images are examined, specifically after iris detection, iris extraction, and mapping to polar coordinates.
Abstract: The impact of using different lossless compression algorithms when compressing biometric iris sample data from several public iris databases is investigated. In particular, the application of dedicated lossless image codecs (lossless JPEG, JPEG-LS, PNG, and GIF), lossless variants of lossy codecs (JPEG2000, JPEG XR, and SPIHT), and a few general purpose file compression schemes is compared. We specifically focus on polar iris images (as a result after iris detection, iris extraction, and mapping to polar coordinates). The results are discussed in the light of the recent ISO/IEC FDIS 19794-6 standard and IREX recommendations.

Proceedings ArticleDOI
05 Jun 2011
TL;DR: The computation steps in JPEG2000 are examined, particularly in Tier-1, and novel, GPGPU-compatible, parallel processing methods for the sample-level coding of the images are developed.
Abstract: The JPEG2000 image compression standard provides superior features to the popular JPEG standard; however, the slow performance of software implementations of JPEG2000 has kept it from being widely adopted. More than 80% of the execution time for JPEG2000 is spent on the Tier-1 coding engine. While much effort over the past decade has been devoted to optimizing this component, its performance still remains slow. The major reason for this is that the Tier-1 coder consists of highly serial operations, each operating on individual bits in every single bit plane of the image samples. In addition, in the past there was no efficient hardware platform to provide massively parallel acceleration for Tier-1. However, the recent growth of general-purpose graphics processing units (GPGPUs) provides a great opportunity to solve the problem with thousands of parallel processing threads. In this paper, the computation steps in JPEG2000 are examined, particularly in Tier-1, and novel, GPGPU-compatible, parallel processing methods for the sample-level coding of the images are developed. The GPGPU-based parallel engine allows for significant speedup in execution time compared to the JasPer JPEG2000 compression software. Running on a single Nvidia GTX 480 GPU, the parallel wavelet engine achieves 100× speedup, the parallel bit plane coder achieves more than 30× speedup, and the overall Tier-1 coder achieves up to 17× speedup.

Book ChapterDOI
05 Sep 2011
TL;DR: The design of an interactive high-resolution image viewing architecture for mobile devices based on JPEG XR is presented, and display resolution, resolution scalability, and image tiling are investigated in order to optimize the coding parameters with the objective of improving the user experience.
Abstract: Services for high definition image browsing on mobile devices require a careful design, since the user experience heavily depends on network bandwidth, processing delay, display resolution, and image quality. Modern applications require coding technologies providing tools for resolution and quality scalability, for accessing spatial regions of interest (ROI), and for reducing the domain of the coding algorithm by decomposing large images into tiles. Some state-of-the-art technologies satisfying these requirements are JPEG2000 and JPEG XR. This paper presents the design of an interactive high-resolution image viewing architecture for mobile devices based on JPEG XR. Display resolution, resolution scalability, and image tiling are investigated in order to optimize the coding parameters with the objective of improving the user experience. Experimental tests are performed on a set of large images, and comparisons against accessing the images without parameter optimization are reported.

Journal ArticleDOI
TL;DR: An evaluation of the effect of two lossy image compression methods on fractal dimension (FD) calculation found that lossy compressed images with an appropriate compression level may be used for FD calculation.
Abstract: The aim of the study was to evaluate the effect of two lossy image compression methods on fractal dimension (FD) calculation. Ten periapical images of the posterior teeth with no restorations or previous root canal therapy were obtained using storage phosphor plates and were saved in TIF format. Then, all images were compressed with lossy JPEG and JPEG2000 compression methods at five compression levels, i.e., 90, 70, 50, 30, and 10. Compressed file sizes from all images and compression ratios were calculated. On each image, two regions of interest (ROIs) containing healthy trabecular bone in the posterior periapical area were selected. The FD of each ROI on the original and compressed images was calculated using the differential box counting method. Both image compression and analysis were performed by public domain software. Altogether, the FD of 220 ROIs was calculated. FDs were compared using ANOVA and Dunnett tests. The FD decreased gradually with compression level. A statistically significant decrease of the FD values was found for the JPEG 10, JPEG2000 10, and JPEG2000 30 compression levels (p < 0.05). At comparable file sizes, JPEG induced a smaller FD difference. In conclusion, lossy compressed images with an appropriate compression level may be used for FD calculation.
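For readers unfamiliar with the analysis step, the sketch below estimates the fractal dimension of a grayscale ROI with a basic differential box-counting implementation; the box sizes are illustrative, and the public-domain software used in the study may differ in details.

```python
import numpy as np

def fractal_dimension_dbc(img, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a 2-D grayscale array with the
    differential box-counting method. Box sizes are illustrative; the ROI
    should be larger than the biggest box."""
    img = np.asarray(img, dtype=np.float64)
    M = min(img.shape)
    G = 256.0                                    # number of gray levels (8-bit)
    log_inv_r, log_N = [], []
    for s in box_sizes:
        h = G * s / M                            # box height in intensity units
        rows, cols = img.shape[0] // s, img.shape[1] // s
        N_r = 0
        for i in range(rows):
            for j in range(cols):
                block = img[i * s:(i + 1) * s, j * s:(j + 1) * s]
                N_r += int(np.ceil(block.max() / h) - np.ceil(block.min() / h)) + 1
        log_inv_r.append(np.log(M / s))
        log_N.append(np.log(N_r))
    slope, _ = np.polyfit(log_inv_r, log_N, 1)   # FD is the regression slope
    return slope

# Hypothetical usage:
# roi = np.array(Image.open("periapical_roi.png").convert("L"))  # needs PIL
# print(fractal_dimension_dbc(roi))
```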

Book ChapterDOI
29 May 2011
TL;DR: To detect JPEG double compression, it is proposed to extract neighboring joint density features and marginal density features from the DCT coefficients and then to apply learning classifiers to the features for detection. Experimental results indicate that the proposed method delivers promising performance in uncovering JPEG-based double compression.
Abstract: Digital multimedia forensics is an emerging field that has important applications in law enforcement, the protection of public safety, and national security. As a popular image compression standard, the JPEG format is widely adopted; however, the tampering of JPEG images can be easily performed without leaving visible clues, and it is increasingly necessary to develop reliable methods to detect forgery in JPEG images. JPEG double compression is frequently used during image forgery, and it leaves a clue to the manipulation. To detect JPEG double compression, we propose in this paper to extract neighboring joint density features and marginal density features from the DCT coefficients, and then to apply learning classifiers to the features for detection. Experimental results indicate that the proposed method delivers promising performance in uncovering JPEG-based double compression. In addition, we analyze the relationship among compression quality factor, image complexity, and the performance of our double compression detection algorithm, and demonstrate that a complete evaluation of the detection performance of different algorithms should necessarily include both image complexity and the double compression quality factor.
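As a simplified illustration of a neighboring joint density feature, the sketch below builds a normalized co-occurrence matrix of horizontally adjacent block-DCT coefficients; the clipping threshold and the use of a plain DCT (rather than the decoded JPEG coefficient array) are simplifications, not the authors' exact feature set.

```python
import numpy as np
from scipy.fftpack import dct

def neighboring_joint_density(gray, T=4):
    """Toy neighboring joint density feature: a (2T+1)x(2T+1) normalized
    co-occurrence matrix of horizontally adjacent, rounded-and-clipped
    block-DCT coefficients of a 2-D grayscale array."""
    h, w = gray.shape[0] // 8 * 8, gray.shape[1] // 8 * 8
    shifted = gray[:h, :w].astype(np.float64) - 128.0
    coeffs = np.zeros((h, w))
    for i in range(0, h, 8):                     # block-wise 8x8 DCT
        for j in range(0, w, 8):
            block = shifted[i:i + 8, j:j + 8]
            coeffs[i:i + 8, j:j + 8] = dct(dct(block, axis=0, norm="ortho"),
                                           axis=1, norm="ortho")
    c = np.clip(np.round(coeffs), -T, T).astype(int)
    density = np.zeros((2 * T + 1, 2 * T + 1))
    left, right = c[:, :-1].ravel(), c[:, 1:].ravel()
    np.add.at(density, (left + T, right + T), 1) # count adjacent coefficient pairs
    return density / density.sum()
```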

Journal ArticleDOI
TL;DR: Simulation results are presented and show that, by analyzing the functional influence of each parameter of this distributed image compression algorithm, the proposed scheme optimizes network lifetime and significantly reduces the amount of required memory.
Abstract: When using wireless sensor networks for real-time data transmission, some critical points should be considered. Restricted computational power, reduced memory, narrow bandwidth, and limited energy supply impose strong constraints on sensor nodes. Therefore, maximizing network lifetime and minimizing energy consumption are always optimization goals. To overcome the computation and energy limitations of individual sensor nodes during image transmission, an energy-efficient image transport scheme is proposed, taking advantage of the JPEG2000 still image compression standard using MATLAB and the C code from JasPer. JPEG2000 provides a practical set of features not necessarily available in previous standards. These features are achieved using two techniques: the discrete wavelet transform (DWT) and embedded block coding with optimized truncation (EBCOT). The performance of the proposed image transport scheme is investigated with respect to image quality and energy consumption. Simulation results are presented and show that, by analyzing the functional influence of each parameter of this distributed image compression algorithm, the proposed scheme optimizes network lifetime and significantly reduces the amount of required memory.

Proceedings ArticleDOI
08 Apr 2011
TL;DR: An efficient Xilinx Spartan-3A DSP implementation of the 2D DWT (Discrete Wavelet Transform) using a polyphase filterbank architecture with Distributed Arithmetic (DA) to speed up wavelet computation is described.
Abstract: In this paper, we describe an efficient Xilinx Spartan-3A DSP implementation of the 2D DWT (Discrete Wavelet Transform) using a polyphase filterbank architecture with Distributed Arithmetic (DA) to speed up wavelet computation. Results show that the distributed arithmetic formulation yields a considerable performance gain while significantly reducing the consumption of logic resources. This architecture supports any image size and any level of decomposition. With minor changes this core can be implemented on any FPGA device. This irreversible discrete wavelet transform uses the 9/7 Daubechies coefficients, which are used for lossy compression in the JPEG2000 standard. The core can be plugged directly into any JPEG encoder or TV display controller for any Chrontel device (BT656 standard, CH7009 interface).

Proceedings ArticleDOI
15 May 2011
TL;DR: The proposed codec employs a simple yet efficient block classification algorithm to classify blocks as pictorial or textual; JPEG is used to encode the pictorial blocks and PNG is used for the textual blocks, and the codec significantly outperforms JPEG and PNG in terms of rate-distortion performance.
Abstract: In this paper, we present a browser-friendly hybrid JPEG/PNG codec for compound images. First we employ a simple yet efficient block classification algorithm to classify blocks as pictorial or textual. Then JPEG is used to encode the pictorial blocks and PNG is used for the textual blocks. Our evaluation results show that our codec significantly outperforms JPEG and PNG in terms of rate-distortion performance, and it also outperforms JPEG, JPEG2000, and DjVu in terms of visual quality. Moreover, since JPEG and PNG are natively supported by modern browsers, the coded images generated by the proposed coder can be natively supported by browsers and can be widely deployed in Web applications.
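A very rough sketch of block routing for such a hybrid codec is shown below; the distinct-color heuristic and threshold are placeholders rather than the paper's classification algorithm.

```python
import numpy as np

def classify_block(block, color_threshold=16):
    """Rough block classifier: blocks with few distinct colors are treated as
    textual (route to PNG), otherwise pictorial (route to JPEG). The rule and
    threshold are illustrative, not the paper's algorithm."""
    if block.ndim == 3:
        pixels = block.reshape(-1, block.shape[-1])   # RGB block
    else:
        pixels = block.reshape(-1, 1)                 # grayscale block
    distinct = len(np.unique(pixels, axis=0))
    return "textual" if distinct <= color_threshold else "pictorial"

# The classified blocks would then be assembled into two layers and saved with
# Pillow via Image.save(..., format="PNG") and Image.save(..., format="JPEG").
```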

Journal ArticleDOI
TL;DR: By using the proposed method, the quality of a spectrum can be improved by decoding the residual data, and the quality is comparable to that obtained by using JPEG2000.
Abstract: In this paper, we present a multispectral image (MSI) compression method using a lossless and lossy coding scheme, which focuses on the seamless coding of the RGB bit stream to enhance the usability of the MSI. The proposed method divides the MSI data into two components: RGB and residual. The RGB component is extracted from the MSI by using the XYZ color matching functions, a color conversion matrix, and a gamma curve. The original MSI is estimated from the encoded RGB data, and the difference between the original and the estimated MSI is referred to as the residual component in this paper. Next, the RGB and residual components are encoded by using JPEG2000, and progressive decoding is achieved from the losslessly encoded code stream. Experimental results show that a high-quality RGB image can be obtained at a low bit rate with primary encoding of the RGB component. In addition, by using the proposed method, the quality of a spectrum can be improved by decoding the residual data, and the quality is comparable to that obtained by using JPEG2000. The lossless compression ratio obtained by using this method is also similar to that obtained by using JPEG2000 with the integer Karhunen-Loeve transform.

Journal ArticleDOI
TL;DR: This paper presents a color image restoration algorithm derived by MAP estimation in which all components are estimated as a whole. Experimental results show that the proposed restoration algorithm is more effective than the previous one.

Journal ArticleDOI
TL;DR: It is proposed that different subsets of wavelet coefficients of a color image be subjected to different spectral transforms before the resultant coefficients are coded by an efficient wavelet coefficient coding scheme such as that used in JPEG2000 or color set partitioning in hierarchical trees (CSPIHT).
Abstract: Since different regions of a color image generally exhibit different spectral characteristics, the energy compaction of applying a single spectral transform to all regions is largely inefficient from a compression perspective. Thus, it is proposed that different subsets of wavelet coefficients of a color image be subjected to different spectral transforms before the resultant coefficients are coded by an efficient wavelet coefficient coding scheme such as that used in JPEG2000 or color set partitioning in hierarchical trees (CSPIHT). A quadtree represents the spatial partitioning of the set of high frequency coefficients of the color planes into spatially oriented subsets which may be further partitioned into smaller directionally oriented subsets. The partitioning decisions and decisions to employ fixed or signal-dependent bases for each subset are rate-distortion (R-D) optimized by employing a known analytical R-D model for these coefficient coding schemes. A compression system of asymmetric complexity, that integrates the proposed adaptive spectral transform with the CSPIHT coefficient coding scheme yields average coding gains of 0.3 dB and 0.9 dB in the Y component at 1.0 b/p and 2.5 b/p, respectively, and 0.9 dB and 1.35 dB in the U and V components at 1.0 b/p and 2.5 b/p, respectively, over a reference compression system that integrates the single spectral transform derived from the entire image with the CSPIHT coefficient coding scheme.

Journal ArticleDOI
TL;DR: Although the BCWT algorithm is a wavelet tree-based algorithm, its coding order differs from that of traditional wavelet tree-based algorithms, which allows the proposed line-based image codec to become more memory efficient than other line-based image codecs, including line-based JPEG2000, while still offering comparable rate-distortion performance and much lower system complexity.
Abstract: When compared to the traditional row-column wavelet transform, the line-based wavelet transform can achieve significant memory savings. However, the design of an image codec using the line-based wavelet transform is an intricate task because of the irregular order in which the wavelet coefficients are generated. The independent block coding feature of JPEG2000 makes it work effectively with the line-based wavelet transform. However, with wavelet tree-based image codecs, such as set partitioning in hierarchical trees, the memory usage of the codecs does not realize significant advantage with the line-based wavelet transform because many wavelet coefficients must be buffered before the coding starts. In this paper, the line-based wavelet transform was utilized to facilitate backward coding of wavelet trees (BCWT). Although the BCWT algorithm is a wavelet tree-based algorithm, its coding order differs from that of the traditional wavelet tree-based algorithms, which allows the proposed line-based image codec to become more memory efficient than other line-based image codecs, including line-based JPEG2000, while still offering comparable rate distortion performance and much lower system complexity.

Posted Content
TL;DR: In this paper, the authors tried to answer the following question: which entropy coding, arithmetic or Huffman, is more suitable from the compression ratio, performance, and implementation points of view?
Abstract: Compression is a technique to reduce the quantity of data without excessively reducing the quality of the multimedia data. The transmission and storage of compressed multimedia data is much faster and more efficient than that of the original uncompressed multimedia data. There are various techniques and standards for multimedia data compression, especially for image compression, such as the JPEG and JPEG2000 standards. These standards consist of different functions such as color space conversion and entropy coding. Arithmetic and Huffman coding are normally used in the entropy coding phase. In this paper we try to answer the following question: which entropy coding, arithmetic or Huffman, is more suitable from the compression ratio, performance, and implementation points of view? We have implemented and tested the Huffman and arithmetic algorithms. Our results show that the compression ratio of arithmetic coding is better than that of Huffman coding, while the performance of Huffman coding is higher than that of arithmetic coding. In addition, the implementation of Huffman coding is much easier than that of arithmetic coding.

Proceedings Article
01 Sep 2011
TL;DR: In order to investigate the impact of the quantization matrix on the performance of JPEG, a sample DCT was calculated and images were quantized using several quantization matrices; the results are compared with those of the standard quantization matrix.
Abstract: With the increase in imaging sensor resolution, captured images are becoming larger and larger, which requires higher image compression ratios. Discrete Cosine Transform (DCT) quantization and entropy encoding are the two main steps in the Joint Photographic Experts Group (JPEG) image compression standard. In order to investigate the impact of the quantization matrix on the performance of JPEG, a sample DCT was calculated and images were quantized using several quantization matrices. The results are compared with those of the standard quantization matrix. The performance of JPEG is also analyzed for different images with different compression factors.
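To make the experiment concrete, the sketch below quantizes an 8x8 block with the standard JPEG luminance table; swapping `Q50` for an alternative matrix reproduces the kind of comparison the paper performs (the helper names are illustrative).

```python
import numpy as np
from scipy.fftpack import dct, idct

# Standard JPEG luminance quantization table (Annex K of the JPEG standard).
Q50 = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=np.float64)

def quantize_block(block, Q=Q50):
    """Forward 8x8 DCT of a zero-shifted block followed by quantization."""
    shifted = block.astype(np.float64) - 128.0
    coeffs = dct(dct(shifted, axis=0, norm="ortho"), axis=1, norm="ortho")
    return np.round(coeffs / Q).astype(np.int32)

def dequantize_block(q_coeffs, Q=Q50):
    """Dequantize and inverse-DCT to reconstruct an 8x8 pixel block."""
    coeffs = q_coeffs.astype(np.float64) * Q
    block = idct(idct(coeffs, axis=1, norm="ortho"), axis=0, norm="ortho") + 128.0
    return np.clip(np.round(block), 0, 255).astype(np.uint8)
```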

Journal ArticleDOI
TL;DR: The proposed compression scheme has generally superior performance on images where there is a substantial amount of background, and the proposed quantization table and method for converting a 3D image cube into a 1D array provide better coding efficiency in run length coding.
Abstract: This paper presents a new JPEG-based lossy image compression method based on three-dimensional formation of the original image by spiral order scanning and the 3D discrete cosine transform (DCT). The proposed spiral scanning causes similar information blocks to fall in the same 8x8x8 cube. The proposed quantization table and method for converting a 3D image cube into a 1D array provide better coding efficiency in run length coding. Hence, better performance over a wide range of images is obtained. The performance of the proposed compression method is measured over various images, and it is observed that the proposed method has better performance than conventional JPEG compression, especially at low and high bit rates. In addition, the proposed compression scheme has generally superior performance on images where there is a substantial amount of background.