
Showing papers on "Lossless JPEG" published in 2014


Journal ArticleDOI
TL;DR: A novel artifact-reducing approach to JPEG decompression is proposed via sparse and redundant representations over a learned dictionary, and an effective two-step algorithm is developed that outperforms the total variation and weighted total variation decompression methods.
Abstract: The JPEG compression method is among the most successful compression schemes since it readily provides good compressed results at a rather high compression ratio. However, the decompressed result of the standard JPEG decompression scheme usually contains some visible artifacts, such as blocking artifacts and Gibbs artifacts (ringing), especially when the compression ratio is rather high. In this paper, a novel artifact-reducing approach for JPEG decompression is proposed via sparse and redundant representations over a learned dictionary. Indeed, an effective two-step algorithm is developed. The first step involves dictionary learning and the second step involves total variation regularization of the decompressed images. Numerical experiments demonstrate that the proposed method outperforms the total variation and weighted total variation decompression methods in terms of peak signal-to-noise ratio and structural similarity.
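
The two-step structure (dictionary learning, then total variation regularization) can be illustrated compactly. The sketch below is not the authors' implementation: it uses scikit-learn's MiniBatchDictionaryLearning for the patch dictionary and scikit-image's denoise_tv_chambolle for the TV step, and the patch size, dictionary size, sparsity and TV weight are arbitrary placeholder values.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d
from skimage.restoration import denoise_tv_chambolle

def two_step_deblock(decompressed, patch_size=(8, 8), n_atoms=128, tv_weight=0.05):
    """Step 1: sparse reconstruction over a learned patch dictionary.
       Step 2: total-variation regularization of the result."""
    img = decompressed.astype(np.float64) / 255.0
    patches = extract_patches_2d(img, patch_size)
    flat = patches.reshape(len(patches), -1)
    mean = flat.mean(axis=1, keepdims=True)
    # For brevity the dictionary is learned on all patches; subsampling would be used in practice.
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=4)
    codes = dico.fit(flat - mean).transform(flat - mean)
    recon = codes @ dico.components_ + mean
    step1 = reconstruct_from_patches_2d(recon.reshape(patches.shape), img.shape)
    # Step 2: TV smoothing to suppress the remaining blocking/ringing.
    step2 = denoise_tv_chambolle(step1, weight=tv_weight)
    return np.clip(step2 * 255.0, 0, 255).astype(np.uint8)
```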

179 citations


Journal ArticleDOI
TL;DR: This letter presents a no-reference quality assessment algorithm for JPEG compressed images (NJQA); testing on various image-quality databases demonstrates that NJQA either is competitive with or outperforms modern competing methods on JPEG images.
Abstract: This letter presents a no-reference quality assessment algorithm for JPEG compressed images (NJQA). Our method does not specifically aim to measure blockiness. Instead, quality is estimated by first counting the number of zero-valued DCT coefficients within each block, and then using a map, which we call the quality relevance map, to weight these counts. The quality relevance map for an image is a map that indicates which blocks are naturally uniform (or near-uniform) vs. which blocks have been made uniform (or near-uniform) via JPEG compression. Testing on various image-quality databases demonstrates that NJQA is either competitive with or outperforms modern competing methods on JPEG images.
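
The counting step at the core of the method is easy to sketch. The snippet below (illustrative only; it omits the quality relevance map weighting described above) counts near-zero 2-D DCT coefficients in each 8×8 block of a grayscale image; the zero threshold `eps` is an assumption.

```python
import numpy as np
from scipy.fft import dctn

def zero_coeff_counts(gray, block=8, eps=0.5):
    """Count (near-)zero 2-D DCT coefficients per 8x8 block.
    NJQA would further weight these counts by a quality relevance map."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block        # crop to a whole number of blocks
    counts = np.empty((h // block, w // block), dtype=int)
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(gray[i:i+block, j:j+block].astype(float), norm='ortho')
            counts[i // block, j // block] = int(np.sum(np.abs(c) < eps))
    return counts
```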

104 citations


Journal ArticleDOI
TL;DR: An effective error-based statistical feature extraction scheme is presented that can significantly outperform the state-of-the-art method in detecting double JPEG compression with the same quantization matrix.
Abstract: Detection of double JPEG compression plays an important role in digital image forensics. Some successful approaches have been proposed to detect double JPEG compression when the primary and secondary compressions have different quantization matrices. However, detecting double JPEG compression with the same quantization matrix is still a challenging problem. In this paper, an effective error-based statistical feature extraction scheme is presented to solve this problem. First, a given JPEG file is decompressed to form a reconstructed image. An error image is obtained by computing the differences between the inverse discrete cosine transform coefficients and pixel values in the reconstructed image. Two classes of blocks in the error image, namely, rounding error block and truncation error block, are analyzed. Then, a set of features is proposed to characterize the statistical differences of the error blocks between single and double JPEG compressions. Finally, the support vector machine classifier is employed to identify whether a given JPEG image is doubly compressed or not. Experimental results on three image databases with various quality factors have demonstrated that the proposed method can significantly outperform the state-of-the-art method.
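
The error image described above can be reproduced in miniature. The sketch below quantizes one 8×8 block with a single placeholder quantization step, decompresses it, and returns the gap between the real-valued IDCT output and the rounded, clipped pixels the decoder would store; blocks whose IDCT values stay inside [0, 255] yield rounding error, blocks that clip yield truncation error.

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_error(block_u8, qstep=16):
    """Per-pixel error between the real-valued IDCT of the dequantized
    coefficients and the stored (rounded, clipped) pixel values."""
    shifted = block_u8.astype(float) - 128.0
    q = np.round(dctn(shifted, norm='ortho') / qstep) * qstep   # quantize / dequantize
    idct_real = idctn(q, norm='ortho') + 128.0                  # real-valued reconstruction
    stored = np.clip(np.round(idct_real), 0, 255)               # what the decoder writes out
    return idct_real - stored    # rounding error if no clipping occurred, truncation error otherwise
```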

101 citations


Journal ArticleDOI
TL;DR: It is demonstrated that it is even possible to beat the quality of JPEG 2000 with EED if one uses specific subdivisions on rectangles and several important optimisations, including improved entropy coding, brightness and diffusivity optimisation, and interpolation swapping.
Abstract: Galic et al. (Journal of Mathematical Imaging and Vision 31:255-269, 2008) have shown that compression based on edge-enhancing anisotropic diffusion (EED) can outperform the quality of JPEG for medium to high compression ratios when the interpolation points are chosen as vertices of an adaptive triangulation. However, the reasons for the good performance of EED remained unclear, and they could not outperform the more advanced JPEG 2000. The goals of the present paper are threefold: Firstly, we investigate the compression qualities of various partial differential equations. This sheds light on the favourable properties of EED in the context of image compression. Secondly, we demonstrate that it is even possible to beat the quality of JPEG 2000 with EED if one uses specific subdivisions on rectangles and several important optimisations. These amendments include improved entropy coding, brightness and diffusivity optimisation, and interpolation swapping. Thirdly, we demonstrate how to extend our approach to 3-D and shape data. Experiments on classical test images and 3-D medical data illustrate the high potential of our approach.

91 citations


Journal ArticleDOI
TL;DR: A novel algorithm is proposed to reconstruct the processing history of an image by exploiting the effects of successive quantizations followed by dequantizations in the case of double JPEG compressed images.
Abstract: One of the most common problems in the image forensics field is the reconstruction of the history of an image or a video. The data related to the characteristics of the camera that carried out the shooting, together with the reconstruction of the (possible) further processing, allow us to have some useful hints about the originality of the visual document under analysis. For example, if an image has been subjected to more than one JPEG compression, we can state that the considered image is not the exact bitstream generated by the camera at the time of shooting. It is then useful to estimate the quantization steps of the first compression, which, in the case of JPEG images edited and then saved again in the same format, are no longer available in the embedded metadata. In this paper, we present a novel algorithm to achieve this goal in the case of double JPEG compressed images. The proposed approach copes with the case when the second quantization step is lower than the first one, exploiting the effects of successive quantizations followed by dequantizations. To improve the results of the estimation, a proper filtering strategy, together with a function devoted to finding the first quantization step, has been designed. Experimental results and comparisons with state-of-the-art methods confirm the effectiveness of the proposed approach.
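
A toy version of the estimation idea, not the paper's algorithm: for each candidate first quantization step q1, simulate double quantization of a generic coefficient model and keep the candidate whose histogram best matches the observed second-compression coefficients. The Laplacian coefficient model, the histogram range, and the squared-error distance are all placeholder choices, and the paper's filtering strategy is omitted.

```python
import numpy as np

def double_quantize(coeffs, q1, q2):
    """Simulate the two compressions: quantize by q1, dequantize, quantize by q2."""
    return np.round(np.round(coeffs / q1) * q1 / q2)

def estimate_q1(observed_q2_values, q2, q1_candidates=range(1, 17), rng=None):
    """observed_q2_values: quantized DCT values of one frequency after the second
    compression. Return the candidate q1 whose simulated histogram best matches it."""
    rng = rng or np.random.default_rng(0)
    bins = np.arange(-50.5, 51.5)
    obs, _ = np.histogram(observed_q2_values, bins=bins, density=True)
    best_q1, best_err = None, np.inf
    for q1 in q1_candidates:
        model = rng.laplace(scale=8.0, size=200_000)   # generic unquantized-coefficient model
        sim, _ = np.histogram(double_quantize(model, q1, q2), bins=bins, density=True)
        err = np.sum((obs - sim) ** 2)
        if err < best_err:
            best_q1, best_err = q1, err
    return best_q1
```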

56 citations


Journal ArticleDOI
TL;DR: One of the transformations, RDgDb, which requires just 2 integer subtractions per image pixel, on average yields the best compression ratios for JPEG 2000 and JPEG XR, while for a specific image set, or in the case of JPEG-LS, its compression ratios are either the best or within 0.1 bpp of the best.
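
The transform itself is small enough to write out, although its exact definition is not given in the summary; the sketch below assumes RDgDb keeps the red channel and codes the other two as successive differences (Dg = R - G, Db = G - B), which costs the quoted two integer subtractions per pixel and is exactly invertible.

```python
import numpy as np

def rdgdb_forward(rgb):
    """Assumed RDgDb forward transform: keep R, code Dg = R - G and Db = G - B.
    Two integer subtractions per pixel; a wide signed type holds the differences."""
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)
    return np.stack([r, r - g, g - b], axis=-1)

def rdgdb_inverse(t):
    """Exact inverse of the assumed forward transform."""
    r = t[..., 0]
    g = r - t[..., 1]
    b = g - t[..., 2]
    return np.stack([r, g, b], axis=-1).astype(np.uint8)
```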

56 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed JPEG anti-forensic method outperforms state-of-the-art methods, achieving a better tradeoff between JPEG forensic undetectability and the visual quality of processed images.
Abstract: This paper proposes a JPEG anti-forensic method, which aims at removing from a given image the footprints left by JPEG compression, in both the spatial domain and the DCT domain. With reasonable loss of image quality, the proposed method can defeat existing forensic detectors that attempt to identify traces of the image's JPEG compression history or of JPEG anti-forensic processing. In our framework, a total variation-based deblocking operation is first performed; the partly recovered DCT information is thereafter used to build an adaptive local dithering signal model, which is able to bring the DCT histogram of the processed image close to that of the original one. Then, a perceptual DCT histogram smoothing is carried out by solving a simplified assignment problem, where the cost function is established as the total perceptual quality loss due to the DCT coefficient modification. Second-round deblocking and de-calibration operations then bring the image statistics that are used by the JPEG forensic detectors back to normal status. Experimental results show that the proposed method outperforms state-of-the-art methods, achieving a better tradeoff between JPEG forensic undetectability and the visual quality of processed images. Moreover, the application of the proposed anti-forensic method in disguising double JPEG compression artifacts is proven to be feasible by experiments.

55 citations


Journal ArticleDOI
TL;DR: A new lossless color image compression algorithm based on hierarchical prediction and context-adaptive arithmetic coding is presented; it enables the use of upper, left, and lower pixels for pixel prediction, whereas conventional raster-scan prediction methods use only upper and left pixels.
Abstract: This paper presents a new lossless color image compression algorithm based on hierarchical prediction and context-adaptive arithmetic coding. For the lossless compression of an RGB image, the image is first decorrelated by a reversible color transform, and then the Y component is encoded by a conventional lossless grayscale image compression method. For encoding the chrominance images, we develop a hierarchical scheme that enables the use of upper, left, and lower pixels for the pixel prediction, whereas the conventional raster scan prediction methods use upper and left pixels. An appropriate context model for the prediction error is also defined, and arithmetic coding is applied to the error signal corresponding to each context. For several sets of images, it is shown that the proposed method further reduces the bit rates compared with JPEG 2000 and JPEG XR.
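
A much-simplified illustration of the hierarchical idea: assume the even rows of a chrominance plane are already coded, then predict each pixel of an odd row from its upper, left, and lower neighbours instead of only the upper and left ones available in a raster scan. The weighting below is an assumption, not the paper's predictor, and the context modelling and arithmetic coding are omitted.

```python
import numpy as np

def predict_odd_rows(chroma):
    """Toy hierarchical predictor: even rows are assumed already decoded, so an
    odd-row pixel can use its upper, left, and lower neighbours."""
    chroma = chroma.astype(np.int32)
    residual = np.zeros_like(chroma)
    for y in range(1, chroma.shape[0] - 1, 2):          # odd rows only
        for x in range(1, chroma.shape[1]):
            up, low, left = chroma[y - 1, x], chroma[y + 1, x], chroma[y, x - 1]
            pred = (up + low + 2 * left) // 4           # assumed weighting
            residual[y, x] = chroma[y, x] - pred        # residual goes to the entropy coder
    return residual
```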

55 citations


Journal ArticleDOI
TL;DR: A JPEG 2000-based codec framework is proposed that provides a generic architecture suitable for the compression of many types of off-axis holograms; it has a JPEG 2000 codec at its core, extended with fully arbitrary wavelet decomposition styles and directional wavelet transforms.
Abstract: With the advent of modern computing and imaging technologies, digital holography is becoming widespread in various scientific disciplines such as microscopy, interferometry, surface shape measurements, vibration analysis, data encoding, and certification. Therefore, designing an efficient data representation technology is of particular importance. Off-axis holograms have very different signal properties with respect to regular imagery, because they represent a recorded interference pattern with its energy biased toward the high-frequency bands. This causes traditional image coders, which assume an underlying 1/f^2 power spectral density distribution, to perform suboptimally for this type of imagery. We propose a JPEG 2000-based codec framework that provides a generic architecture suitable for the compression of many types of off-axis holograms. This framework has a JPEG 2000 codec at its core, extended with (1) fully arbitrary wavelet decomposition styles and (2) directional wavelet transforms. Using this codec, we report significant improvements in coding performance for off-axis holography relative to the conventional JPEG 2000 standard, with Bjontegaard delta-peak signal-to-noise ratio improvements ranging from 1.3 to 11.6 dB for lossy compression in the 0.125 to 2.00 bpp range, and bit-rate reductions of up to 1.6 bpp for lossless compression.

52 citations


Journal ArticleDOI
TL;DR: It is shown that the rate controller has excellent performance in terms of output-rate accuracy and rate-distortion characteristics, and is extremely competitive with state-of-the-art transform coding.
Abstract: Predictive coding is attractive for compression onboard spacecraft thanks to its low computational complexity, modest memory requirements, and the ability to accurately control quality on a pixel-by-pixel basis. Traditionally, predictive compression has focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders due to the dependencies between quantization and prediction in the feedback loop, and the lack of a signal representation that packs the signal's energy into few coefficients. In this paper, we show that it is possible to design a rate control scheme intended for onboard implementation. In particular, we propose a general framework to select quantizers in each spatial and spectral region of an image so as to achieve the desired target rate while minimizing distortion. The rate control algorithm enables lossy compression, near-lossless compression, and any in-between type of compression, e.g., lossy compression with a near-lossless constraint. While this framework is independent of the specific predictor used, in order to show its performance we tailor it in this paper to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that allows lossless, near-lossless, and lossy compression to be performed in a single package. We show that the rate controller has excellent performance in terms of output-rate accuracy and rate-distortion characteristics, and is extremely competitive with state-of-the-art transform coding.
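
A heavily simplified view of the quantizer-selection idea: per spatial block of prediction residuals, pick the finest quantization step whose estimated rate still fits that block's share of the target budget, which also bounds the maximum reconstruction error. The entropy-based rate proxy and the uniform per-block budget below are placeholders; the real scheme models the interaction between quantization and prediction in the feedback loop.

```python
import numpy as np

def entropy_bits(residual):
    """Crude rate estimate: empirical entropy of the quantized residuals, in bits."""
    vals, counts = np.unique(residual, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum()) * residual.size

def choose_quantizers(residual_blocks, target_bits, max_error=4):
    """For each block, pick the smallest near-lossless bound q (step 2q+1, so
    |error| <= q) whose estimated rate meets the per-block budget."""
    budget_per_block = target_bits / len(residual_blocks)
    steps = []
    for res in residual_blocks:
        chosen = max_error                               # fall back to the coarsest step
        for q in range(0, max_error + 1):                # finest (lossless) step first
            quantized = np.round(res / (2 * q + 1))
            if entropy_bits(quantized) <= budget_per_block:
                chosen = q                               # smallest q meeting the budget
                break
        steps.append(chosen)
    return steps
```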

52 citations


Proceedings ArticleDOI
01 Dec 2014
TL;DR: A novel statistical framework is proposed for the identification of previous multiple aligned compressions in JPEG images and the estimation of the quality factors applied, both in the case of double and triple JPEG encoding with different quality factors.
Abstract: The analysis of JPEG compressed images is one of the most studied problems in image forensics, because of the extensive use of the format and the characteristic traces left by such a coding operation. In this paper, we propose a novel statistical framework for the identification of previous multiple aligned compressions in JPEG images and the estimation of the quality factors applied. The method has been tested on different datasets and forensic scenarios, where up to three JPEG compressions are considered. Moreover, both in the case of double and triple JPEG encoding with different quality factors, the compression history of each image is estimated. The experiments show good performance and, in most cases, higher accuracies with respect to state-of-the-art methods.

Journal ArticleDOI
TL;DR: The proposed algorithm addresses all three types of artifacts that are prevalent in JPEG images (blocking and, around edges, blurring and aliasing) and enhances the quality of the image in two stages.
Abstract: Transform coding using the discrete cosine transform is one of the most popular techniques for image and video compression. However, at low bit rates, the coded images suffer from severe visual distortions. An innovative approach is proposed that deals with artifacts in JPEG compressed images. Our algorithm addresses all three types of artifacts that are prevalent in JPEG images: blocking and, around edges, blurring and aliasing. We enhance the quality of the image in two stages. First, we remove blocking artifacts via boundary smoothing and guided filtering. Then, we reduce blurring and aliasing around the edges via a local edge-regeneration stage. We compared the proposed algorithm with other modern JPEG artifact-removal algorithms. The results demonstrate that the proposed approach is competitive with, and in many cases outperforms, competing algorithms.

Proceedings ArticleDOI
14 Apr 2014
TL;DR: A framework for application of the recently introduced firefly algorithm to the quantization table selection problem for different image similarity metrics is presented.
Abstract: JPEG is the prevailing compression algorithm used for digital images. Compression ratio and quality depend on quantization tables, which are matrices of 64 integers. For many applications, the quality of compression has to be judged not by humans but by software systems that perform some processing on the compressed images, based on the success of that processing. Since there are many such applications, there is no unique best quantization table; one has to be selected for each application. Quantization table selection is an intractable combinatorial problem that can be successfully solved by swarm intelligence metaheuristics. In this paper we present a framework for applying the recently introduced firefly algorithm to the quantization table selection problem for different image similarity metrics.
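
The optimization loop can be sketched independently of the imaging application. Below is a minimal firefly-style search over 64-integer quantization tables: each candidate table is a firefly, brightness is a user-supplied fitness score, and dimmer fireflies move toward brighter ones with an attractiveness that decays with distance. The `fitness` callable is a placeholder for compressing with the candidate table and scoring the downstream processing; `beta0`, `gamma` and `alpha` are arbitrary defaults, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def firefly_qtable_search(fitness, n_fireflies=10, n_iter=50,
                          beta0=1.0, gamma=0.01, alpha=2.0):
    """Minimal firefly search over 8x8 quantization tables (64 ints in [1, 255]).
    `fitness(table) -> float` scores a candidate table (higher is better)."""
    tables = rng.integers(1, 256, size=(n_fireflies, 64)).astype(float)
    bright = np.array([fitness(t.astype(int)) for t in tables])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if bright[j] > bright[i]:                 # move i toward the brighter firefly j
                    r2 = np.sum((tables[i] - tables[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)    # attractiveness decays with distance
                    tables[i] += beta * (tables[j] - tables[i]) \
                                 + alpha * (rng.random(64) - 0.5)
                    tables[i] = np.clip(np.round(tables[i]), 1, 255)
                    bright[i] = fitness(tables[i].astype(int))
    best = int(np.argmax(bright))
    return tables[best].astype(int).reshape(8, 8)
```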

Journal ArticleDOI
TL;DR: GLS coding, a special form of ATC that attains synchronous compression and encryption, is used to modify JPEG and fill its security gap; the resulting scheme not only achieves good compression performance but also resists known/chosen-plaintext attacks efficiently.

Proceedings ArticleDOI
01 Oct 2014
TL;DR: A novel forensic detector of JPEG compression traces in images stored in an uncompressed format is proposed, based on a binary hypothesis test for which the confidence intervals can be derived theoretically, thus avoiding any training phase.
Abstract: Intrinsic statistical properties of natural uncompressed images can be used in image forensics for detecting traces of previous processing operations. In this paper, we extend the recent theoretical analysis of Benford-Fourier coefficients and propose a novel forensic detector of JPEG compression traces in images stored in an uncompressed format. The classification is based on a binary hypothesis test for which we can derive the confidence intervals theoretically, thus avoiding any training phase. Experiments on real images and comparisons with state-of-the-art techniques show that the proposed detector outperforms existing ones and overcomes issues due to dataset dependency.

Proceedings ArticleDOI
20 Nov 2014
TL;DR: It is demonstrated that profiles A and B lead to similar saturation of quality at higher bit rates, whereas profile C exhibits no saturation; profiles B and C also appear to be more dependent on the TMOs used for the base layer than profile A.
Abstract: The upcoming JPEG XT is under development for High Dynamic Range (HDR) image compression. This standard encodes a Low Dynamic Range (LDR) version of the HDR image generated by a Tone-Mapping Operator (TMO) using the conventional JPEG coding as a base layer and encodes the extra HDR information in a residual layer. This paper studies the performance of the three profiles of JPEG XT (referred to as profiles A, B and C) using a test set of six HDR images. Four TMO techniques were used for the base layer image generation to assess the influence of the TMOs on the performance of JPEG XT profiles. Then, the HDR images were coded with different quality levels for the base layer and for the residual layer. The performance of each profile was evaluated using Signal to Noise Ratio (SNR), Feature SIMilarity Index (FSIM), Root Mean Square Error (RMSE), and CIEDE2000 color difference objective metrics. The evaluation results demonstrate that profiles A and B lead to similar saturation of quality at the higher bit rates, while profile C exhibits no saturation. Profiles B and C appear to be more dependent on TMOs used for the base layer compared to profile A.

Book ChapterDOI
01 Oct 2014
TL;DR: A counter-forensic technique that makes multiple compression undetectable for any forensic detector based on the analysis of the histograms of quantized DCT coefficients is proposed.
Abstract: Detection of multiple JPEG compression of digital images has been attracting more and more interest in the field of multimedia forensics. On the other side, techniques to conceal the traces of multiple compression are being proposed as well. Motivated by a recent trend towards the adoption of universal approaches, we propose a counter-forensic technique that makes multiple compression undetectable for any forensic detector based on the analysis of the histograms of quantized DCT coefficients. Experimental results show the effectiveness of our approach in removing the artifacts of double and also triple compression, while maintaining a good quality of the image.

Journal ArticleDOI
01 Jan 2014 - Optik
TL;DR: The proposed JPEG image encryption can meet the security requirements for the storage and transmission of JPEG images in common application scenarios, and provides an effective and feasible way of encrypting JPEG images.

Proceedings ArticleDOI
04 May 2014
TL;DR: The lossless audio compression tool is presented, which utilizes a pre-processing procedure for flattening the amplitude envelope of the linear prediction residue, and an arithmetic coder that adopts a scaled probability template.
Abstract: The IEEE Standard for Advanced Audio Coding (IEEE 1857.2) is a new standard approved by IEEE in August 2013. The standard comprises both lossy and lossless audio compression tools. This paper presents the lossless audio compression tool, which utilizes a pre-processing procedure for flattening the amplitude envelope of the linear prediction residue, and an arithmetic coder that adopts a scaled probability template. The performance of the new IEEE lossless compressor is evaluated and compared with state-of-the-art lossless audio coders. Evaluation results show that the lossless compression performance of the IEEE compressor is about 5% higher than MPEG-4 ALS and 12% higher than FLAC.

Journal ArticleDOI
TL;DR: The proposed system implements a lossless codec using an entropy coder and concentrates on minimizing computation time by introducing parallel computing in the arithmetic coding stage, as it deals with multiple subslices.
Abstract: Medical image compression is one of the growing research fields in biomedical applications. Most medical images need to be compressed using lossless compression, as each pixel's information is valuable. With the wide pervasiveness of medical imaging applications in health-care settings and the increased interest in telemedicine technologies, it has become essential to reduce both the storage and transmission bandwidth requirements needed for archival and communication of related data, preferably by employing lossless compression methods. Furthermore, providing random access as well as resolution and quality scalability to the compressed data has become of great utility. Random access refers to the ability to decode any section of the compressed image without having to decode the entire data set. The proposed system implements a lossless codec using an entropy coder. 3D medical images are decomposed into 2D slices and subjected to a 2D stationary wavelet transform (SWT). The decimated coefficients are compressed in parallel using embedded block coding with optimized truncation of the embedded bit stream. These bit streams are decoded and reconstructed using the inverse SWT. Finally, the compression ratio (CR) is evaluated to prove the efficiency of the proposal. As an enhancement, the proposed system concentrates on minimizing computation time by introducing parallel computing in the arithmetic coding stage, as it deals with multiple subslices.
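
The data flow (slice, SWT-decompose, entropy-code, measure CR) can be sketched as follows, assuming PyWavelets is available. zlib stands in for the embedded block coder, and rounding the coefficients makes this an illustration of the pipeline rather than a strictly lossless codec.

```python
import zlib
import numpy as np
import pywt

def compress_slice(slice2d, wavelet='haar', level=2):
    """SWT-decompose one 2-D slice and entropy-code the coefficients.
    Slice dimensions must be divisible by 2**level; zlib stands in for EBCOT."""
    coeffs = pywt.swt2(slice2d.astype(float), wavelet, level=level)
    flat = np.concatenate([np.rint(a).astype(np.int32).ravel()   # rounding: illustrative only
                           for cA, (cH, cV, cD) in coeffs
                           for a in (cA, cH, cV, cD)])
    return zlib.compress(flat.tobytes(), level=9)

def compression_ratio(slice2d):
    """CR = original bytes / compressed bytes for one slice."""
    return slice2d.nbytes / len(compress_slice(slice2d))
```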

Proceedings ArticleDOI
09 Jan 2014
TL;DR: The fuzzy-based soft hybrid JPEG technique (FSHJPEG) gives a high compression ratio while preserving most of the image information; the image is reproduced with good quality, and blocking artifacts, ringing effects, and false contouring are reduced appreciably.
Abstract: In the last few years, rapid growth in technological development has been reported. This rapid growth demands fast and efficient processing, transmission, and storage of data. Although much work related to efficient processing and transmission of data has been reported in the literature, this cannot be achieved without also reducing data storage, since during processing and transmission most of the effort and time is spent either accessing or storing the data. Therefore, to cope with current technological demands, data should be in highly compressed form. One of the most important forms of data is the digital image, which is a two-dimensional signal. Digital images in their raw form require a huge amount of storage capacity, so a scheme is required that produces a high degree of compression while preserving critical image information. Although JPEG standards are already available for gray-image compression, this area is still open for algorithms that can provide better compression ratios while keeping the mean square error low. Zadeh proved that imprecise situations can be properly handled using fuzzy logic. This feature of fuzzy logic has been incorporated by introducing a novel data compression technique for gray images using fuzzy-logic-based fusion of the available JPEG and JPEG2K standards (FSHJPEG) to achieve a higher compression ratio than stand-alone JPEG and JPEG2K. The fuzzy-based soft hybrid JPEG technique (FSHJPEG) gives a high compression ratio while preserving most of the image information, and the image is reproduced with good quality. The new technique not only gives a high compression ratio but also reduces blocking artifacts, ringing effects, and false contouring appreciably. The compression ratio obtained using FSHJPEG is higher than that of currently used image compression standards, while most of the image information is preserved.

Journal ArticleDOI
Shuhui Wang, Tao Lin
TL;DR: In UC, several lossless coding tools, such as dictionary-entropy coders, run-length encoding (RLE), Hextile, and a few filters used in the portable network graphics format, are united into H.264-like intraframe hybrid video coding.
Abstract: This paper proposes a compound image coding method named united coding (UC). In UC, several lossless coding tools, such as dictionary-entropy coders, run-length encoding (RLE), Hextile, and a few filters used in the portable network graphics (PNG) format, are united into H.264-like intraframe hybrid video coding. The basic coding unit (BCU) has a size typically between 16×16 pixels and 64×64 pixels. All coders in UC are used to code each BCU. Then, the lossless coder that generates the minimum bit-rate (R) is chosen as the optimal lossless coder. Finally, the final optimal coder is chosen from the lossy intraframe hybrid coder and the optimal lossless coder using an R-D cost based optimization criterion. Moreover, the data coded by one lossless coder can be used as the dictionary of other lossless coders. Experimental results demonstrate that, compared with H.264, UC achieves up to 20 dB PSNR improvement and better visual picture quality for compound images with mixed text, graphics, and natural pictures. Compared with lossless coders such as gzip and PNG, UC can achieve 2-5 times higher compression ratios with just a minor loss while keeping partial-lossless picture quality. The partial-lossless nature of UC is indispensable for some typical applications, such as cloud computing and rendering, cloudlet-screen computing, and remote desktop, where lossless coding of partial image regions is demanded. On the other hand, the implementation complexity and cost increment of UC are moderate, typically less than 25% of a traditional hybrid coder such as H.264.
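
The per-BCU decision logic is easy to mimic. The sketch below runs two stand-in lossless coders (zlib for the dictionary-entropy coder and a trivial run-length coder) on each basic coding unit, keeps the one with the minimum bit count, and then compares it against a hypothetical lossy candidate using an R-D cost J = D + λR; the coders, the λ value, and the lossy candidate's rate and distortion are all illustrative stand-ins.

```python
import zlib
import numpy as np

def rle_bytes(block):
    """Very small run-length coder used only as a stand-in."""
    data = block.ravel()
    out = bytearray()
    run, prev = 1, data[0]
    for v in data[1:]:
        if v == prev and run < 255:
            run += 1
        else:
            out += bytes([run, int(prev)])
            run, prev = 1, v
    out += bytes([run, int(prev)])
    return bytes(out)

def choose_coder(bcu, lossy_bits, lossy_mse, lam=10.0):
    """Pick the best lossless coder by rate, then compare with the lossy
    candidate via the R-D cost J = D + lambda * R (D = 0 for lossless)."""
    candidates = {
        'dict-entropy (zlib stand-in)': len(zlib.compress(bcu.tobytes(), 9)) * 8,
        'RLE stand-in': len(rle_bytes(bcu)) * 8,
    }
    best_lossless = min(candidates, key=candidates.get)
    j_lossless = 0.0 + lam * candidates[best_lossless]
    j_lossy = lossy_mse * bcu.size + lam * lossy_bits
    return best_lossless if j_lossless <= j_lossy else 'lossy intra coder'
```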

Journal ArticleDOI
TL;DR: A simple lossless image compression method based on a combination of bit-plane slicing and adaptive predictive coding is adopted for compressing natural and medical images; it is characterized by guaranteed full reconstruction.
Abstract: In this paper, a simple lossless image compression method based on a combination of bit-plane slicing and adaptive predictive coding is adopted for compressing natural and medical images. The idea is to utilize the spatial domain efficiently after discarding the lowest-order bits, exploiting only the highest-order bits: the most significant bit plane (layer 7) is coded with adaptive predictive coding, while the other layers are coded with run-length coding. The test results show high system performance, achieving higher compression ratios for a lossless system characterized by guaranteed full reconstruction. General Terms: Bit-plane slicing along with adaptive predictive coding for lossless image compression.
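
The layering can be sketched as follows: slice an 8-bit image into bit planes, apply a toy predictive step to the most significant plane (layer 7), and run-length code the others. The row-difference predictor and the run counting below are placeholders for the paper's adaptive predictive coding and run-length coding.

```python
import numpy as np

def bit_planes(gray):
    """Slice an 8-bit grayscale image into 8 binary planes (plane 7 = MSB)."""
    return [((gray >> k) & 1).astype(np.uint8) for k in range(8)]

def code_planes(gray):
    """Layer 7: toy predictive step (horizontal differences). Other layers:
    count runs as a stand-in for run-length coding them."""
    planes = bit_planes(gray)
    msb_residual = np.diff(planes[7].astype(np.int8), axis=1, prepend=0)
    run_counts = []
    for p in planes[:7]:
        flat = p.ravel()
        runs = np.flatnonzero(np.diff(flat)).size + 1    # number of runs in the plane
        run_counts.append(runs)
    return msb_residual, run_counts
```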

Journal ArticleDOI
TL;DR: An efficient algorithm is presented for fusing a pair of long- and short-exposure images that works in the JPEG domain; it uses the spatial frequency analysis provided by the discrete cosine transform within JPEG to combine the uniform regions of the long-exposure image with the detailed regions of the short-exposure image, thereby reducing noise while providing sharp details.
Abstract: We present an efficient algorithm for fusing a pair of long- and short-exposure images that works in the JPEG domain. The algorithm uses the spatial frequency analysis provided by the discrete cosine transform within JPEG to combine the uniform regions of the long-exposure image with the detailed regions of the short-exposure image, thereby reducing noise while providing sharp details. Two additional features of the algorithm enable its implementation at low cost, and in real time, on a digital camera: the camera's response between exposures is equalized with a look-up table implementing a parametric sigmoidal function, and image fusion is performed by selective overwriting during the JPEG file save operation. The algorithm requires no more than a single JPEG macro-block of the short-exposure image to be maintained in RAM at any one time, and needs only a single pass over both the long- and short-exposure images. The performance of the algorithm is demonstrated with examples of image stabilization and high dynamic range image acquisition.
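
The block-selection rule can be illustrated without a full JPEG codec. In the simplified sketch below, each 8×8 block of the fused image is taken from the short-exposure frame when its non-DC DCT energy exceeds a threshold (a detailed region) and from the long-exposure frame otherwise (a uniform region). The threshold and the assumption that the exposures have already been equalized stand in for the paper's look-up table and in-save fusion machinery.

```python
import numpy as np
from scipy.fft import dctn

def fuse_blocks(long_exp, short_exp, block=8, hf_thresh=500.0):
    """Per-block fusion: detailed blocks come from the short exposure,
    uniform blocks from the long exposure. Assumes equalized exposures."""
    fused = long_exp.copy()
    h, w = long_exp.shape
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            c = dctn(short_exp[i:i+block, j:j+block].astype(float), norm='ortho')
            hf_energy = np.sum(c ** 2) - c[0, 0] ** 2        # energy outside the DC term
            if hf_energy > hf_thresh:
                fused[i:i+block, j:j+block] = short_exp[i:i+block, j:j+block]
    return fused
```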

Proceedings ArticleDOI
TL;DR: Profiles of JPEG XT are presented that are especially suited for hardware implementations because they require only integer or fixed-point logic; all functional blocks of a JPEG XT codec are implemented with integer or fixed-point arithmetic.
Abstract: JPEG XT (ISO/IEC 18477), the latest standardization initiative of the JPEG committee, defines an image compression standard backwards compatible with the well-known JPEG standard (ISO/IEC 10918-1). JPEG XT extends JPEG with features such as coding of images of higher bit-depth, coding of floating-point image formats, and lossless compression, all of which are backwards compatible with the legacy JPEG standard. In this work, the author presents profiles of JPEG XT that are especially suited for hardware implementations because they require only integer logic. All functional blocks of a JPEG XT codec are here implemented by integer or fixed-point logic. A performance analysis and comparison with other profiles of JPEG XT concludes the work.

13 Nov 2014
TL;DR: This work proposes a compression algorithm meeting these requirements through the use of modern information theory and signal processing tools, combined with simple methods to exploit spatial as well as temporal redundancies typically present in EEG signals.
Abstract: Current EEG applications imply the need for low-latency, low-power, high-fidelity data transmission and storage algorithms. This work proposes a compression algorithm meeting these requirements through the use of modern information theory and signal processing tools (such as universal coding, universal prediction, and fast online implementations of multivariate recursive least squares), combined with simple methods to exploit spatial as well as temporal redundancies typically present in EEG signals. The resulting compression algorithm requires O(1) operations per scalar sample and surpasses the current state of the art in near-lossless and lossless EEG compression ratios.
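
A minimal stand-in for the prediction stage: a fixed second-order linear predictor per channel whose integer residuals would then go to the entropy coder. The paper's adaptive multivariate recursive-least-squares predictor and its O(1)-per-sample implementation are not reproduced here.

```python
import numpy as np

def eeg_residuals(eeg):
    """eeg: integer array of shape (channels, samples). Predict each sample by
    linear extrapolation from the two previous ones and return the residuals,
    which typically have much lower entropy than the raw signal."""
    x = np.asarray(eeg, dtype=np.int32)
    pred = 2 * x[:, 1:-1] - x[:, :-2]          # x[n] ~ 2*x[n-1] - x[n-2]
    resid = x[:, 2:] - pred
    return resid                               # input to the entropy coder
```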

Journal ArticleDOI
TL;DR: Theoretical analysis and experimental results show that SMQ can achieve a balance between security and efficiency, while keeping compression performance and energy consumption comparable with standard JPEG 2000 coding.
Abstract: This paper presents a secure MQ coder (SMQ) for efficient selective encryption of JPEG 2000 images. Unlike existing schemes, where encryption overhead is proportional to the size of the plain image, SMQ selectively encrypts only a tiny, constant volume of data in JPEG 2000 coding regardless of image size. It is extremely fast and suitable for protecting JPEG 2000 images in wireless multimedia sensor networks (WMSNs). Theoretical analysis and experimental results show that SMQ can achieve a balance between security and efficiency, while keeping compression performance and energy consumption comparable with standard JPEG 2000 coding. Highlights: We propose an efficient encryption algorithm for protecting JPEG 2000 images. Our algorithm is quite fast and energy-saving when handling massive data in WMSNs. We solve the vulnerability to chosen-plaintext attack in a novel and effective way. The proposed algorithm and its superiorities are compared against the most related existing schemes.

01 Jan 2014
TL;DR: This paper presents an implementation of JPEG compression on a field-programmable gate array that minimises the logic resources of the FPGA and the latency at each stage of compression, targeting minimal FPGA resource usage without compromising encoded-image quality.
Abstract: This paper presents the implementation of JPEG compression on a field-programmable gate array. It minimises the logic resources of the FPGA and the latency at each stage of compression. The JPEG standard defines compression techniques for image data and permits image data to be stored and transferred with considerably reduced demands for storage space and bandwidth. The encoder compresses an image as a stream of 8×8 blocks, with each block processed individually. The encoder is implemented on a Xilinx Spartan-3 FPGA and targets minimal FPGA resource usage without compromising encoded-image quality.

Journal ArticleDOI
TL;DR: This paper presents a lossless DCT compression technique for two-dimensional images that results in comparable or better performance compared with the different modes of the lossless JPEG standard.
Abstract: 3 Abstract: Image Compression is a method, which reduces the amount of space required to store the image. The Discrete Cosine Transform (DCT) is a method that transforms a signal or image from spatial domain to frequency domain. This technique is widely used in image compression. In this paper, we present a lossless DCT compression technique for two-dimensional images. In several scenarios, the utilization of the presented technique for image compression results in comparable or better performance, when compared to the different modes of the lossless JPEG standard.

Journal ArticleDOI
TL;DR: An algorithm is presented for embedding filter coefficients in the bitstream, such that the embedded filter can be used to enhance the quality of the decoded image by applying the Wiener filter.