
Showing papers on "JPEG 2000" published in 2009


Journal ArticleDOI
TL;DR: JPEG XR is the newest image coding standard from the JPEG committee and achieves high image quality, on par with JPEG 2000, while requiring low computational resources and storage capacity.
Abstract: JPEG XR is the newest image coding standard from the JPEG committee. It primarily targets the representation of continuous-tone still images such as photographic images and achieves high image quality, on par with JPEG 2000, while requiring low computational resources and storage capacity. Moreover, it effectively addresses the needs of emerging high dynamic range imagery applications by including support for a wide range of image representation formats.

163 citations


Journal ArticleDOI
TL;DR: A quantitative comparison between the energy costs associated with direct transmission of uncompressed images and sensor platform-based JPEG compression followed by transmission of the compressed image data is presented.
Abstract: One of the most important goals of current and future sensor networks is energy-efficient communication of images. This paper presents a quantitative comparison between the energy costs associated with 1) direct transmission of uncompressed images and 2) sensor platform-based JPEG compression followed by transmission of the compressed image data. JPEG compression computations are mapped onto various resource-constrained platforms using a design environment that allows computation using the minimum integer and fractional bit-widths needed in view of other approximations inherent in the compression process and choice of image quality parameters. Advanced applications of JPEG, such as region of interest coding and successive/progressive transmission, are also examined. Detailed experimental results examining the tradeoffs in processor resources, processing/transmission time, bandwidth utilization, image quality, and overall energy consumption are presented.
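
The following is a back-of-the-envelope sketch of the tradeoff this paper quantifies: sending raw pixels costs radio energy per bit, while compressing first trades computation energy for fewer transmitted bits. All constants below are illustrative placeholders, not measurements from the paper.

```python
def transmission_energy(num_bits, energy_per_bit_nj=200.0):
    """Radio energy (nJ) to transmit a payload of `num_bits` (illustrative constant)."""
    return num_bits * energy_per_bit_nj

def compress_then_send_energy(num_pixels, bits_per_pixel=8, compression_ratio=10.0,
                              cycles_per_pixel=100.0, energy_per_cycle_nj=1.0,
                              energy_per_bit_nj=200.0):
    """Total energy (nJ) for on-node JPEG-style compression followed by transmission.

    The cycle count and per-cycle energy are stand-ins for the platform-dependent
    cost of the DCT, quantization, and entropy-coding stages.
    """
    compute = num_pixels * cycles_per_pixel * energy_per_cycle_nj
    radio = (num_pixels * bits_per_pixel / compression_ratio) * energy_per_bit_nj
    return compute + radio

if __name__ == "__main__":
    pixels = 320 * 240
    raw = transmission_energy(pixels * 8)
    jpeg = compress_then_send_energy(pixels)
    print(f"raw: {raw / 1e6:.1f} mJ, compress+send: {jpeg / 1e6:.1f} mJ")
```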

103 citations


Journal ArticleDOI
TL;DR: This paper proposes a practical approach of uniform down-sampling in image space that makes the sampling adaptive by spatially varying, directional low-pass prefiltering; it outperforms JPEG 2000 in PSNR at low to medium bit rates and achieves superior visual quality.
Abstract: Recently, many researchers have started to challenge a long-standing practice of digital photography, oversampling followed by compression, and to pursue more intelligent sparse sampling techniques. In this paper, we propose a practical approach of uniform down-sampling in image space while making the sampling adaptive by spatially varying, directional low-pass prefiltering. The resulting down-sampled, prefiltered image remains a conventional square sample grid and can therefore be compressed and transmitted without any change to current image coding standards and systems. The decoder first decompresses the low-resolution image and then upconverts it to the original resolution in a constrained least squares restoration process, using a 2-D piecewise autoregressive model and the knowledge of the directional low-pass prefiltering. The proposed compression approach of collaborative adaptive down-sampling and upconversion (CADU) outperforms JPEG 2000 in PSNR at low to medium bit rates and achieves superior visual quality as well. The superior low bit-rate performance of the CADU approach suggests that oversampling not only wastes hardware resources and energy but can also be counterproductive to image quality given a tight bit budget.
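
A minimal sketch of the down-sample/upconvert idea, not the authors' CADU method: a plain isotropic Gaussian stands in for the spatially varying, directional prefilter, and bicubic interpolation replaces the constrained least squares restoration with a piecewise autoregressive model. Compression of the low-resolution image (by any standard codec) is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def encode_downsample(img, factor=2, sigma=1.0):
    """Prefilter and uniformly down-sample a grayscale float image.

    The output is still a square sample grid, so any standard codec
    (JPEG, JPEG 2000) could compress it unchanged.
    """
    low = gaussian_filter(img, sigma=sigma)
    return low[::factor, ::factor]

def decode_upconvert(low, factor=2):
    """Upconvert back to the original resolution (bicubic, as a stand-in)."""
    return zoom(low, factor, order=3)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((128, 128))
    rec = decode_upconvert(encode_downsample(img))
    print("reconstruction shape:", rec.shape)
```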

99 citations


Journal ArticleDOI
01 Sep 2009
TL;DR: This paper proposes a method that combines JPEG-LS with an interframe coding scheme based on motion vectors to enhance the compression performance of using JPEG-LS alone, achieving average compression gains of 13.3% and 26.3% over the methods of using JPEG-LS and JPEG2000 alone, respectively.
Abstract: Hospitals and medical centers produce an enormous amount of digital medical images every day, especially in the form of image sequences, which require considerable storage space. One solution is the application of lossless compression. Among available methods, JPEG-LS has excellent coding performance. However, it only compresses a single picture with intracoding and does not utilize the interframe correlation among pictures. Therefore, this paper proposes a method that combines JPEG-LS with an interframe coding scheme based on motion vectors to enhance the compression performance over using JPEG-LS alone. Since the interframe correlation between two adjacent images in a medical image sequence is usually not as high as that in a general video sequence, the interframe coding is activated only when the interframe correlation is high enough. With six capsule endoscope image sequences under test, the proposed method achieves average compression gains of 13.3% and 26.3% over the methods of using JPEG-LS and JPEG2000 alone, respectively. Similarly, for an MRI image sequence, coding gains of 77.5% and 86.5% are obtained, respectively.
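
A minimal numpy sketch of the conditional inter/intra decision described above, using a correlation threshold and simple frame differencing in place of motion-compensated prediction; the threshold value and the `intra_codec` callable are hypothetical stand-ins (e.g. for an actual JPEG-LS encoder), not the paper's parameters.

```python
import numpy as np

CORR_THRESHOLD = 0.9  # illustrative value; the paper's threshold is not reproduced here

def frame_correlation(a, b):
    """Normalized correlation coefficient between two frames."""
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def encode_sequence(frames, intra_codec):
    """Encode a list of frames: inter-code (residual) only when correlation is high.

    `intra_codec` is a stand-in for a lossless still-image coder such as JPEG-LS;
    residual frames use plain differencing instead of motion compensation.
    """
    out = [("intra", intra_codec(frames[0]))]
    for prev, cur in zip(frames, frames[1:]):
        if frame_correlation(prev, cur) >= CORR_THRESHOLD:
            residual = cur.astype(np.int32) - prev.astype(np.int32)
            out.append(("inter", intra_codec(residual)))
        else:
            out.append(("intra", intra_codec(cur)))
    return out
```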

97 citations


BookDOI
07 Aug 2009
TL;DR: This book provides a comprehensive overview of the JPEG 2000 suite of standards, covering the core coding system and its extensions, Motion JPEG 2000, the JPM compound image format, JPSEC security, JPIP interactivity, JP3D volumetric coding, and JPWL wireless transmission, together with application chapters ranging from digital cinema and medical imaging to broadcast and remote sensing.
Abstract: Contributor Biographies. Foreword. Series Editor's Preface. Preface. Acknowledgments. List of Acronyms.
Part A.
1 JPEG 2000 Core Coding System (Part 1) (Majid Rabbani, Rajan L. Joshi, and Paul W. Jones): 1.1 Introduction. 1.2 JPEG 2000 Fundamental Building Blocks. 1.3 JPEG 2000 Bit-Stream Organization. 1.4 JPEG 2000 Rate Control. 1.5 Performance Comparison of the JPEG 2000 Encoder Options. 1.6 Additional Features of JPEG 2000 Part 1. Acknowledgments. References.
2 JPEG 2000 Extensions (Part 2) (Margaret Lepley, J. Scott Houchin, James Kasner, and Michael Marcellin): 2.1 Introduction. 2.2 Variable DC Offset. 2.3 Variable Scalar Quantization. 2.4 Trellis-Coded Quantization. 2.5 Precinct-Dependent Quantization. 2.6 Extended Visual Masking. 2.7 Arbitrary Decomposition. 2.8 Arbitrary Wavelet Transforms. 2.9 Multiple-Component Transform Extensions. 2.10 Nonlinear Point Transform. 2.11 Geometric Manipulation via a Code-Block Anchor Point (CBAP). 2.12 Single-Sample Overlap. 2.13 Region of Interest. 2.14 Extended File Format: JPX. 2.15 Extended Capabilities Signaling. Acknowledgments. References.
3 Motion JPEG 2000 and ISO Base Media File Format (Parts 3 and 12) (Joerg Mohr): 3.1 Introduction. 3.2 Motion JPEG 2000 and ISO Base Media File Format. 3.3 ISO Base Media File Format. 3.4 Motion JPEG 2000. References.
4 Compound Image File Format (Part 6) (Frederik Temmermans, Tim Bruylants, Simon McPartlin, and Louis Sharpe): 4.1 Introduction. 4.2 The JPM File Format. 4.3 Mixed Raster Content Model (MRC). 4.4 Streaming JPM Files. 4.5 Referencing JPM Files. 4.6 Metadata. 4.7 Boxes. 4.8 Profiles. 4.9 Conclusions. References.
5 JPSEC: Securing JPEG 2000 Files (Part 8) (Susie Wee and Zhishou Zhang): 5.1 Introduction. 5.2 JPSEC Security Services. 5.3 JPSEC Architecture. 5.4 JPSEC Framework. 5.5 What: JPSEC Security Services. 5.6 Where: Zone of Influence (ZOI). 5.7 How: Processing Domain and Granularity. 5.8 JPSEC Examples. 5.9 Summary. References.
6 JPIP - Interactivity Tools, APIs, and Protocols (Part 9) (Robert Prandolini): 6.1 Introduction. 6.2 Data-Bins. 6.3 JPIP Basics. 6.4 Client Request-Server Response. 6.5 Advanced Topics. 6.6 Conclusions. Acknowledgments. References.
7 JP3D - Extensions for Three-Dimensional Data (Part 10) (Tim Bruylants, Peter Schelkens, and Alexis Tzannes): 7.1 Introduction. 7.2 JP3D: Going Volumetric. 7.3 Bit-Stream Organization. 7.4 Additional Features of JP3D. 7.5 Compression Performances: JPEG 2000 Part 1 versus JP3D. 7.6 Implications for Other Parts of JPEG 2000. Acknowledgments. References.
8 JPWL - JPEG 2000 Wireless (Part 11) (Frederic Dufaux): 8.1 Introduction. 8.2 Background. 8.3 JPWL Overview. 8.4 Normative Parts. 8.5 Informative Parts. 8.6 Summary. Acknowledgments. References.
Part B.
9 JPEG 2000 for Digital Cinema (Siegfried Fössel): 9.1 Introduction. 9.2 General Requirements for Digital Cinema. 9.3 Distribution of Digital Cinema Content. 9.4 Archiving of Digital Movies. 9.5 Future Use of JPEG 2000 within Digital Cinema. 9.6 Conclusions. Acknowledgments. References.
10 Security Applications for JPEG 2000 Imagery (John Apostolopoulos, Frederic Dufaux, and Qibin Sun): 10.1 Introduction. 10.2 Secure Transcoding and Secure Streaming. 10.3 Multilevel Access Control. 10.4 Selective or Partial Encryption of Image Content. 10.5 Image Authentication. 10.6 Summary. Acknowledgments. References.
11 Video Surveillance and Defense Imaging (Touradj Ebrahimi and Frederic Dufaux): 11.1 Introduction. 11.2 Scrambling. 11.3 Overview of a Typical Video Surveillance System. 11.4 Overview of a Video Surveillance System Based on JPEG 2000 and ROI Scrambling.
12 JPEG 2000 Application in GIS and Remote Sensing (Bernard Brower, Robert Fiete, and Roddy Shuler): 12.1 Introduction. 12.2 Geographic Information Systems. 12.3 Recommendations for JPEG 2000 Encoding. 12.4 Other JPEG 2000 Parts to Consider. References.
13 Medical Imaging (Alexis Tzannes and Ron Gut): 13.1 Introduction. 13.2 Background. 13.3 DICOM and JPEG 2000 Part 1. 13.4 DICOM and JPEG 2000 Part 2. 13.5 Example Results. 13.6 Image Streaming, DICOM, and JPIP. References.
14 Digital Culture Imaging (Greg Colyer, Robert Buckley, and Athanassios Skodras): 14.1 Introduction. 14.2 The Digital Culture Context. 14.3 Digital Culture and JPEG 2000. 14.4 Application - National Digital Newspaper Program. Acknowledgments. References.
15 Broadcast Applications (Hans Hoffman, Adi Kouadio, and Luk Overmeire): 15.1 Introduction - From Tape-Based to File-Based Production. 15.2 Broadcast Production Chain Reference Model. 15.3 Codec Requirements for Broadcasting Applications. 15.4 Overview of State-of-the-Art HD Compression Schemes. 15.5 JPEG 2000 Applications. 15.6 Multigeneration Production Processes. 15.7 JPEG 2000 Comparison with SVC. 15.8 Conclusion. References.
16 JPEG 2000 in 3-D Graphics Terrain Rendering (Gauthier Lafruit, Wolfgang Van Raemdonck, Klaas Tack, and Eric Delfosse): 16.1 Introduction. 16.2 Tiling: The Straightforward Solution to Texture Streaming. 16.3 View-Dependent JPEG 2000 Texture Streaming and Mipmapping. 16.4 JPEG 2000 Quality and Decoding Time Scalability for Optimal Quality-Workload Tradeoff. 16.5 Conclusion. References.
17 Conformance Testing, Reference Software, and Implementations (Peter Schelkens, Yiannis Andreopoulos, and Joeri Barbarien): 17.1 Introduction. 17.2 Part 4 - Conformance Testing. 17.3 Part 5 - Reference Software. 17.4 Implementation of the Discrete Wavelet Transform as Suggested by the JPEG 2000 Standard. 17.5 JPEG 2000 Hardware and Software Implementations. 17.6 Conclusions. Acknowledgments. References.
18 Ongoing Standardization Efforts (Touradj Ebrahimi, Athanassios Skodras, and Peter Schelkens): 18.1 Introduction. 18.2 JPSearch. 18.3 JPEG XR. 18.4 Advanced Image Coding and Evaluation Methodologies (AIC). References.
Index.

96 citations


Journal ArticleDOI
TL;DR: An iterative algorithm is presented to jointly optimize run-length coding, Huffman coding, and quantization table selection that not only results in a compressed bitstream completely compatible with existing JPEG and MPEG decoders but is also computationally efficient.
Abstract: To maximize rate distortion performance while remaining faithful to the JPEG syntax, the joint optimization of the Huffman tables, quantization step sizes, and DCT indices of a JPEG encoder is investigated. Given Huffman tables and quantization step sizes, an efficient graph-based algorithm is first proposed to find the optimal DCT indices in the form of run-size pairs. Based on this graph-based algorithm, an iterative algorithm is then presented to jointly optimize run-length coding, Huffman coding, and quantization table selection. The proposed iterative algorithm not only results in a compressed bitstream completely compatible with existing JPEG and MPEG decoders, but is also computationally efficient. Furthermore, when tested over standard test images, it achieves the best JPEG compression results, to the extent that its own JPEG compression performance even exceeds the quoted PSNR results of some state-of-the-art wavelet-based image coders such as Shapiro's embedded zerotree wavelet algorithm at the common bit rates under comparison. Both the graph-based algorithm and the iterative algorithm can be applied to application areas such as web image acceleration, digital camera image compression, MPEG frame optimization, and transcoding, etc.
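
The paper's optimization operates over DCT indices expressed as run-size pairs. Below is a small sketch of how one 8x8 block of quantized coefficients is turned into such (run, size) symbols, following the baseline JPEG zigzag order and size categories; it is illustrative only and is not the authors' graph-based optimizer (ZRL handling for runs longer than 15 is omitted).

```python
# Zigzag scan order for an 8x8 block (baseline JPEG ordering).
ZIGZAG = sorted(((r, c) for r in range(8) for c in range(8)),
                key=lambda rc: (rc[0] + rc[1], rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def size_category(value):
    """JPEG 'size' category: number of bits needed to represent |value|."""
    return abs(int(value)).bit_length()

def run_size_pairs(block):
    """Convert one 8x8 block of quantized DCT coefficients into (run, size, value) symbols.

    AC coefficients only; a real JPEG encoder would emit ZRL symbols for runs
    of zeros longer than 15, which is omitted here for brevity.
    """
    ac = [block[r][c] for r, c in ZIGZAG][1:]  # skip the DC coefficient
    pairs, run = [], 0
    for coeff in ac:
        if coeff == 0:
            run += 1
        else:
            pairs.append((run, size_category(coeff), coeff))
            run = 0
    pairs.append((0, 0, 0))  # end-of-block marker (emitted unconditionally here)
    return pairs
```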

91 citations


Journal ArticleDOI
TL;DR: The implementation results show that the proposed 2-D DWT architecture can process 1080p HDTV pictures with five-level decomposition at 30 frames/s, and its hardware cost and internal memory requirements are smaller than those of other familiar architectures with the same throughput rate.
Abstract: In this paper, we present a high-performance and memory-efficient pipelined architecture with a parallel scanning method for 2-D lifting-based DWT in JPEG2000 applications. The proposed 2-D DWT architecture is composed of two 1-D DWT cores and a 2×2 transposing register array. The proposed 1-D DWT core consumes two input samples and produces two output coefficients per cycle, and its critical path is only one multiplier delay. Moreover, we utilize the parallel scanning method instead of the line-based scanning method to reduce the internal buffer size. For an N×N tile image with one-level 2-D DWT decomposition, only a temporal memory of size 4N and the 2×2 register array are required for the 9/7 filter to store the intermediate coefficients in the column 1-D DWT core, and the column-processed data can be rearranged in the transposing array. According to the comparison results, the hardware cost of the 1-D DWT core and the internal memory requirements of the proposed 2-D DWT architecture are smaller than those of other familiar architectures with the same throughput rate. The implementation results show that the proposed 2-D DWT architecture can process 1080p HDTV pictures with five-level decomposition at 30 frames/s.
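
For reference, a software sketch of the 1-D lifting steps of the CDF 9/7 wavelet that such hardware cores implement. The lifting coefficients are the values commonly quoted in the literature; boundary handling is simplified to edge replication (the JPEG 2000 standard uses symmetric extension), and the final scaling convention varies between formulations, so this is illustrative rather than a bit-exact reference.

```python
import numpy as np

# Lifting coefficients for the CDF 9/7 wavelet as commonly quoted in
# lifting-based implementations (irreversible JPEG 2000 path).
ALPHA, BETA, GAMMA, DELTA, K = (-1.586134342, -0.05298011854,
                                0.8829110762, 0.4435068522, 1.149604398)

def dwt97_1d(signal):
    """One level of the 1-D CDF 9/7 forward DWT via lifting.

    `signal` must have even length; symmetric extension is approximated by
    edge replication to keep the sketch short. Returns (lowpass, highpass).
    """
    x = np.asarray(signal, dtype=np.float64)
    s, d = x[0::2].copy(), x[1::2].copy()

    def right(v):   # v[n+1] with edge replication
        return np.concatenate([v[1:], v[-1:]])

    def left(v):    # v[n-1] with edge replication
        return np.concatenate([v[:1], v[:-1]])

    d += ALPHA * (s + right(s))   # predict 1
    s += BETA * (left(d) + d)     # update 1
    d += GAMMA * (s + right(s))   # predict 2
    s += DELTA * (left(d) + d)    # update 2
    return s * K, d / K           # one common scaling convention
```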

87 citations


Journal ArticleDOI
TL;DR: A modified version of the embedded block coder with optimized truncation (EBCOT), tailored according to the characteristics of the data, encodes the residual data generated after prediction to provide resolution and quality scalability.
Abstract: We propose a novel symmetry-based technique for scalable lossless compression of 3D medical image data. The proposed method employs the 2D integer wavelet transform to decorrelate the data and an intraband prediction method to reduce the energy of the sub-bands by exploiting the anatomical symmetries typically present in structural medical images. A modified version of the embedded block coder with optimized truncation (EBCOT), tailored according to the characteristics of the data, encodes the residual data generated after prediction to provide resolution and quality scalability. Performance evaluations on a wide range of real 3D medical images show an average improvement of 15% in lossless compression ratios when compared to other state-of-the-art lossless compression methods that also provide resolution and quality scalability, including 3D-JPEG2000, JPEG2000, and H.264/AVC intra-coding.

86 citations


Journal ArticleDOI
TL;DR: Levels at which lossy compression can be confidently used in diagnostic imaging applications are determined, and a table of recommended compression ratios for each modality and anatomical area investigated is provided, to be integrated into the Canadian Association of Radiologists standard for the use of lossy compression in medical imaging.
Abstract: New technological advancements, including multislice CT scanners and functional MRI, have dramatically increased the size and number of digital images generated by medical imaging departments. Despite the fact that the cost of storage is dropping, the savings are largely surpassed by the increasing volume of data being generated. While local area network bandwidth within a hospital is adequate for timely access to imaging data, efficiently moving the data between institutions requires wide area network bandwidth, which has limited availability at a national level. A solution to address those issues is the use of lossy compression, as long as there is no loss of relevant information. The goal of this study was to determine levels at which lossy compression can be confidently used in diagnostic imaging applications. In order to provide a fair assessment of existing compression tools, we tested and compared the two most commonly adopted DICOM compression algorithms: JPEG and JPEG 2000. We conducted an extensive pan-Canadian evaluation of lossy compression applied to seven anatomical areas and five modalities using two recognized techniques: objective methods of diagnostic accuracy and subjective assessment based on Just Noticeable Difference. Incorporating both diagnostic accuracy and subjective evaluation techniques enabled us to define a range of compression for each modality and body part tested. The results of our study suggest that at low levels of compression there was no significant difference between the performance of lossy JPEG and lossy JPEG 2000, and that both are appropriate for reporting on medical images. At higher levels, lossy JPEG proved to be more effective than JPEG 2000 in some cases, mainly neuro CT. More evaluation is required to assess the effect of compression on thin-slice CT. We provide a table of recommended compression ratios for each modality and anatomical area investigated, to be integrated into the Canadian Association of Radiologists standard for the use of lossy compression in medical imaging.

65 citations


Proceedings ArticleDOI
TL;DR: A procedure for the subjective evaluation of the new JPEG XR codec for still-picture compression is described in detail, and the obtained results show high consistency and allow an accurate comparison of codec performance.
Abstract: In this paper, a procedure for the subjective evaluation of the new JPEG XR codec for compression of still pictures is described in detail. The new algorithm has been compared to the existing JPEG and JPEG 2000 standards for the compression of high-resolution 24 bpp pictures, by means of a campaign of subjective quality assessment tests that followed the guidelines defined by the AIC JPEG ad hoc group. Sixteen subjects took part in the experiments at EPFL, and each subject participated in four test sessions, scoring a total of 208 test stimuli. A detailed procedure for the statistical analysis of the subjective data is also proposed and performed. The obtained results show high consistency and allow an accurate comparison of codec performance.
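
A minimal sketch of the kind of statistical analysis such campaigns typically perform: the mean opinion score with a 95% confidence interval per test stimulus. This is a generic illustration, not the paper's exact procedure, which also involves steps such as outlier screening; the example scores are made up.

```python
import numpy as np
from scipy import stats

def mos_with_ci(scores, confidence=0.95):
    """Mean opinion score and two-sided confidence interval for one stimulus.

    `scores` is a 1-D array of raw subjective scores from all observers.
    Uses the Student t distribution, as is common for small viewer panels.
    """
    scores = np.asarray(scores, dtype=np.float64)
    n = scores.size
    mos = scores.mean()
    half_width = stats.t.ppf((1 + confidence) / 2, n - 1) * scores.std(ddof=1) / np.sqrt(n)
    return mos, (mos - half_width, mos + half_width)

if __name__ == "__main__":
    # Hypothetical scores from a 16-subject panel on a 0-100 scale.
    raw = np.array([72, 68, 75, 70, 80, 65, 74, 71, 69, 77, 73, 70, 66, 78, 72, 74])
    print(mos_with_ci(raw))
```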

61 citations


Book ChapterDOI
03 Sep 2009
TL;DR: It is demonstrated that it is even possible to beat the quality of the much more advanced JPEG 2000 standard when one uses subdivisions on rectangles and a number of additional optimisations.
Abstract: Although widely used standards such as JPEG and JPEG 2000 exist in the literature, lossy image compression is still a subject of ongoing research. Galic et al. (2008) have shown that compression based on edge-enhancing anisotropic diffusion can outperform JPEG for medium to high compression ratios when the interpolation points are chosen as vertices of an adaptive triangulation. In this paper we demonstrate that it is even possible to beat the quality of the much more advanced JPEG 2000 standard when one uses subdivisions on rectangles and a number of additional optimisations. They include improved entropy coding, brightness rescaling, diffusivity optimisation, and interpolation swapping. Experiments on classical test images are presented that illustrate the potential of our approach.

Journal ArticleDOI
TL;DR: Based on the idea of second-generation image coding, a novel scheme for coding still images is presented, and it is demonstrated that the proposed method performs better than current methods such as JPEG, CMP, EZW, and JPEG2000.
Abstract: Based on the idea of second-generation image coding, a novel scheme for coding still images is presented. First, an image is partitioned with a pulse-coupled neural network; an improved chain code and the 2-D discrete cosine transform are then adopted to encode the shape and the color of its edges, respectively. To code its smooth and texture regions, an improved zero-tree strategy based on the second-generation wavelet is chosen. After that, the zero-tree chart is used to rearrange the quantized coefficients. Finally, some regulations are given according to the psychology of various users. Experiments under noiseless channels demonstrate that the proposed method performs better than current methods such as JPEG, CMP, EZW, and JPEG2000.

Journal ArticleDOI
TL;DR: Based on the JPEG 2000 image-compression standard, the JHelioviewer solar image visualization tool lets users browse petabyte-scale image archives as well as locate and manipulate specific data sets.
Abstract: All disciplines that work with image data, from astrophysics to medical research and historic preservation, increasingly require efficient ways to browse and inspect large sets of high-resolution images. Based on the JPEG 2000 image-compression standard, the JHelioviewer solar image visualization tool lets users browse petabyte-scale image archives as well as locate and manipulate specific data sets.

Proceedings ArticleDOI
04 May 2009
TL;DR: Experimental results show that the proposed scheme not only offers a significant gain over JPEG2000 on various types of depth maps but also produces depth maps without edge artifacts particularly suited to 3D warping and free viewpoint video applications.
Abstract: This paper presents a novel strategy for the compression of depth maps. The proposed scheme starts with a segmentation step which identifies and extracts edges and main objects, then it introduces an efficient compression strategy for the segmented regions' shape. In the subsequent step a novel algorithm is used to predict the surface shape from the segmented regions and a set of regularly spaced samples. Finally the few prediction residuals are efficiently compressed using standard image compression techniques. Experimental results show that the proposed scheme not only offers a significant gain over JPEG2000 on various types of depth maps but also produces depth maps without edge artifacts particularly suited to 3D warping and free viewpoint video applications.

Journal ArticleDOI
TL;DR: By employing a new comparison methodology and using transform coefficients as input to face recognition algorithms, it is shown that face recognition can be efficiently implemented directly in the compressed domain.

Journal ArticleDOI
TL;DR: This paper presents prominent extensions that have been proposed for the Consultative Committee for Space Data Systems Recommendation for Image Data Compression (CCSDS-122-B-1); reported results for hyperspectral data suggest that the proposal is competitive with the JPEG2000 standard.
Abstract: This paper presents prominent extensions that have been proposed for the Consultative Committee for Space Data Systems Recommendation for Image Data Compression (CCSDS-122-B-1). Thanks to the proposed extensions, the Recommendation gains several important featured advantages: It allows any number of spatial wavelet decomposition levels; it provides scalability by quality, position, resolution, and component; and it supports multi-/hyper-/ultraspectral data coding, allowing a spectral decorrelation if requested. As a consequence, compression performance is notably improved with respect to the Recommendation for a large variety of remote sensing images, both monoband and multi-/hyper-/ultraspectral images. Reported results for hyperspectral data suggest that our proposal is competitive with the JPEG2000 standard.

Proceedings ArticleDOI
16 Mar 2009
TL;DR: This work presents an SSIM-optimal JPEG 2000 rate allocation algorithm and adopts the SSIM index proposed by Sheikh and Bovik, which is simple enough to be implemented efficiently in rate control algorithms and yet correlates better with visual quality than MSE.
Abstract: In this work, we present an SSIM-optimal JPEG 2000 rate allocation algorithm. However, our aim is less to improve the visual performance of JPEG 2000 than to study the performance of the SSIM full-reference metric by means beyond correlation measurements. Full-reference image quality metrics assign a quality index to a pair consisting of a reference and a distorted image. The performance of a metric is then measured by the degree of correlation between the scores obtained from the metric and those from subjective tests. It is the aim of a rate allocation algorithm to minimize the distortion created by a lossy image compression scheme under a rate constraint. Noting this relation between objective function and performance evaluation allows us to define an alternative approach to evaluating the usefulness of a candidate metric: we judge the quality of a metric by its ability to define an objective function for rate control purposes, and evaluate images compressed in this scheme subjectively. It turns out that deficiencies of image quality metrics become much more easily visible, even in the literal sense, than under traditional correlation experiments. Our candidate metric in this work is the SSIM index proposed by Sheikh and Bovik, which is both simple enough to be implemented efficiently in rate control algorithms and yet correlates better with visual quality than MSE; our candidate compression scheme is the highly flexible JPEG 2000 standard.
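
For context, a compact implementation of the SSIM index in its standard global form, using the usual default constants (K1 = 0.01, K2 = 0.03). Rate allocators such as the one described above evaluate SSIM over local windows and average the resulting map; that windowing is omitted here to keep the sketch short, so this is illustrative rather than the paper's objective function.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Global SSIM between two grayscale images of equal shape.

    Standard SSIM applies this formula over local windows and averages the
    resulting map; a single global statistic is used here for brevity.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```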

Posted Content
TL;DR: JHelioviewer, as discussed by the authors, is visualization software for solar physics data based on the JPEG 2000 image compression standard that enables users to browse petabyte-scale image archives.
Abstract: Across all disciplines that work with image data - from astrophysics to medical research and historic preservation - there is a growing need for efficient ways to browse and inspect large sets of high-resolution images. We present the development of a visualization software for solar physics data based on the JPEG 2000 image compression standard. Our implementation consists of the JHelioviewer client application that enables users to browse petabyte-scale image archives and the JHelioviewer server, which integrates a JPIP server, metadata catalog and an event server. JPEG 2000 offers many useful new features and has the potential to revolutionize the way high-resolution image data are disseminated and analyzed. This is especially relevant for solar physics, a research field in which upcoming space missions will provide more than a terabyte of image data per day. Providing efficient access to such large data volumes at both high spatial and high time resolution is of paramount importance to support scientific discovery.

Proceedings ArticleDOI
07 Nov 2009
TL;DR: A new scheme based on compressed sensing is proposed to compress a depth map, together with a reconstruction scheme that recovers the original map from the subsamples using non-linear conjugate gradient minimization.
Abstract: We propose in this paper a new scheme based on compressed sensing to compress a depth map. We first subsample the depth map in the frequency domain to take advantage of its compressibility. We then derive a reconstruction scheme to recover the original map from the subsamples using a non-linear conjugate gradient minimization scheme. We preserve the discontinuities of the depth map at the edges and ensure its smoothness elsewhere by incorporating the Total Variation constraint in the minimization. The results we obtained on various test depth maps show that the proposed method leads to a lower error rate at high compression ratios when compared to standard image compression techniques like JPEG and JPEG 2000.
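
A sketch of the measurement and objective the abstract describes: random frequency-domain subsampling of the depth map, and the TV-regularized least-squares objective that a solver such as non-linear conjugate gradient would minimize. Only the objective evaluation is shown (the solver itself is omitted), and the sampling ratio and regularization weight are illustrative, not the paper's values.

```python
import numpy as np

def sample_frequencies(depth, keep_ratio=0.2, rng=None):
    """Keep a random subset of the 2-D FFT coefficients of the depth map."""
    rng = np.random.default_rng(rng)
    spectrum = np.fft.fft2(depth)
    mask = rng.random(depth.shape) < keep_ratio
    return spectrum * mask, mask

def total_variation(img):
    """Isotropic total variation of a 2-D array (edge-replicated differences)."""
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    return np.sqrt(dx ** 2 + dy ** 2).sum()

def objective(candidate, measured, mask, tv_weight=0.1):
    """Frequency-domain data-fidelity term plus the TV regularizer.

    This is the quantity a non-linear conjugate gradient solver would minimize
    to recover the depth map; the solver is not reproduced here.
    """
    residual = (np.fft.fft2(candidate) * mask) - measured
    return float(np.sum(np.abs(residual) ** 2) + tv_weight * total_variation(candidate))
```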

Book ChapterDOI
29 Aug 2009
TL;DR: This paper presents a lossy compression method for cartoon-like images that exploits information at image edges, which are extracted with the Marr-Hildreth operator followed by hysteresis thresholding, and outperforms the widely used JPEG standard.
Abstract: It is well known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
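
A small sketch of the decoding idea: given pixel values known along the stored edges, the remaining pixels are recovered by solving the Laplace equation (the steady state of homogeneous diffusion), here with plain Jacobi iterations. This is an illustration of the inpainting step only, not the paper's codec; the iteration count is arbitrary.

```python
import numpy as np

def homogeneous_diffusion_inpaint(known_mask, known_values, iterations=2000):
    """Fill unknown pixels by solving the Laplace equation with Jacobi iterations.

    `known_mask` is a boolean array marking pixels whose values were encoded
    (e.g. along extracted edges); `known_values` is a full-size array whose
    entries are only meaningful where `known_mask` is True.
    """
    known_values = np.asarray(known_values, dtype=np.float64)
    img = np.where(known_mask, known_values, known_values[known_mask].mean())
    for _ in range(iterations):
        # Average of the four neighbours (edge replication at the border).
        padded = np.pad(img, 1, mode="edge")
        neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                      padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        # Keep the encoded pixels fixed (Dirichlet data), relax the rest.
        img = np.where(known_mask, known_values, neighbours)
    return img
```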

Proceedings ArticleDOI
07 Nov 2009
TL;DR: Experimental results demonstrate that the proposed hashing-based RR quality measure system can accurately estimate the quality degradation due to JPEG and JPEG2000, the two most widely adopted compression techniques in today's network transmission services.
Abstract: Quality monitoring is of great importance for online media broadcasting services. Without access to the original reference image in most practical scenarios, reduced-reference (RR) image quality assessment is a good tradeoff and generally more reliable than no-reference (NR) metrics. In this paper, we propose employing image hashing features as side information to estimate image quality. With its monotone sensitivity to content quality degradation (e.g., due to compression), the proposed RR quality monitoring method based on our FJLT (Fast Johnson-Lindenstrauss transform) hashing provides two advantages: an accurate image quality estimate in terms of conventional objective quality measures such as PSNR, and a low data rate for delivering the partial reference information. Experimental results demonstrate that the proposed hashing-based RR quality measure system can accurately estimate the quality degradation due to JPEG and JPEG2000, the two most widely adopted compression techniques in today's network transmission services.


Proceedings ArticleDOI
19 Oct 2009
TL;DR: A novel method of JPEG steganalysis is proposed: based on an observation of the bi-variate generalized Gaussian distribution in the Discrete Cosine Transform (DCT) domain, neighboring joint density features on both the intra-block and inter-block levels are extracted.
Abstract: Detection of information hiding in JPEG images is actively pursued in the steganalysis community because JPEG is a widely used compression standard and several steganographic systems have been designed for covert communication in JPEG images. In this paper, we propose a novel method of JPEG steganalysis. Based on an observation of the bi-variate generalized Gaussian distribution in the Discrete Cosine Transform (DCT) domain, neighboring joint density features on both the intra-block and inter-block levels are extracted. Support Vector Machines (SVMs) are applied for detection. Experimental results indicate that this new method prominently improves on the current state of the art in detecting several steganographic systems in JPEG images. Our study also shows that it is more accurate to evaluate the detection performance in terms of both image complexity and information hiding ratio.
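
One plausible form of an intra-block neighboring joint density feature is sketched below: a normalized joint histogram of absolute values of horizontally adjacent quantized DCT coefficients within each 8x8 block. The paper's exact feature definition, and its inter-block counterpart, may differ in details such as directionality and binning.

```python
import numpy as np

def intra_block_joint_density(dct_coeffs, max_abs=5):
    """Joint density of absolute values of horizontally adjacent coefficients.

    `dct_coeffs` is a 2-D array of quantized DCT coefficients whose dimensions
    are multiples of 8 (as stored in a JPEG file). Values above `max_abs` are
    clipped into the last bin, a common practice for such features.
    """
    a = np.minimum(np.abs(dct_coeffs).astype(np.int64), max_abs)
    h, w = a.shape
    density = np.zeros((max_abs + 1, max_abs + 1), dtype=np.float64)
    for by in range(0, h, 8):
        for bx in range(0, w, 8):
            block = a[by:by + 8, bx:bx + 8]
            left, right = block[:, :-1].ravel(), block[:, 1:].ravel()
            np.add.at(density, (left, right), 1.0)
    return density / density.sum()
```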

Journal ArticleDOI
TL;DR: A dichotomic technique for searching for the optimal UEP strategy, which borrows ideas from existing algorithms, is presented, and a method of virtual interleaving is adopted for the transmission of high bit rate streams over packet loss channels, guaranteeing a large PSNR advantage over a plain transmission scheme.
Abstract: The transmission of JPEG 2000 images or video over wireless channels has to cope with the high probability and burstiness of errors introduced by Gaussian noise, linear distortions, and fading. At the receiver side, there is distortion due to the compression performed at the sender side and to the errors introduced in the data stream by the channel. Progressive source coding can be successfully exploited to protect different portions of the data stream with different channel code rates, based upon the relative importance that each portion has for the reconstructed image. Unequal error protection (UEP) schemes are generally adopted, which offer a close-to-optimal solution. In this paper, we present a dichotomic technique for searching for the optimal UEP strategy, which borrows ideas from existing algorithms, for the transmission of JPEG 2000 images and video over a wireless channel. Moreover, we also adopt a method of virtual interleaving for the transmission of high bit rate streams over packet loss channels, guaranteeing a large PSNR advantage over a plain transmission scheme. These two protection strategies can also be combined to maximize the error correction capabilities.

Book ChapterDOI
23 Sep 2009
TL;DR: A new method based on binary space partitions to simultaneously mesh and compress a depth map that is represented with a compressed adaptive mesh that can be directly applied to render the 3D scene.
Abstract: We propose in this paper a new method based on binary space partitions to simultaneously mesh and compress a depth map. The method divides the map adaptively into a mesh that has the form of a binary triangular tree (tritree). The nodes of the mesh are the sparse non-uniform samples of the depth map and are able to interpolate the other pixels with minimal error. We apply differential coding after that to represent the sparse disparities at the mesh nodes. We then use entropy coding to compress the encoded disparities. We finally benefit from the binary tree and compress the mesh via binary tree coding to condense its representation. The results we obtained on various depth images show that the proposed scheme leads to lower depth error rate at higher compression ratios when compared to standard compression techniques like JPEG 2000. Moreover, using our method, a depth map is represented with a compressed adaptive mesh that can be directly applied to render the 3D scene.

Journal ArticleDOI
TL;DR: A JPEG decompression algorithm which is designed to produce substantially higher quality images from the same standard JPEG encodings by incorporating a document image model into the decoding process which accounts for the wide variety of content in modern complex color documents.
Abstract: The JPEG standard is one of the most prevalent image compression schemes in use today. While JPEG was designed for use with natural images, it is also widely used for the encoding of raster documents. Unfortunately, JPEG's characteristic blocking and ringing artifacts can severely degrade the quality of text and graphics in complex documents. We propose a JPEG decompression algorithm which is designed to produce substantially higher quality images from the same standard JPEG encodings. The method works by incorporating a document image model into the decoding process which accounts for the wide variety of content in modern complex color documents. The method works by first segmenting the JPEG encoded document into regions corresponding to background, text, and picture content. The regions corresponding to text and background are then decoded using maximum a posteriori (MAP) estimation. Most importantly, the MAP reconstruction of the text regions uses a model which accounts for the spatial characteristics of text and graphics. Our experimental comparisons to the baseline JPEG decoding as well as to three other decoding schemes, demonstrate that our method substantially improves the quality of decoded images, both visually and as measured by PSNR.

Proceedings ArticleDOI
23 Oct 2009
TL;DR: The study shows that the proposed method for detecting resized and spliced JPEG images, which are widely used in image forgery, is highly effective, and that detection performance is related to both image complexity and the resize scale factor.
Abstract: Today's ubiquitous digital media are easily tampered with by, e.g., removing or adding objects from or into images without leaving any obvious clues. JPEG is one of the most widely used standards for digital images, and it can be easily doctored. It is therefore necessary to have reliable methods to detect forgery in JPEG images for applications in law enforcement, forensics, etc. In this paper, based on the correlation of neighboring Discrete Cosine Transform (DCT) coefficients, we propose a method to detect resized JPEG images and spliced images, which are widely used in image forgery. In detail, the neighboring joint density features of the DCT coefficients are extracted; then Support Vector Machines (SVMs) are applied to the features for detection. To improve the evaluation of resized-JPEG detection, we utilize the shape parameter of the generalized Gaussian distribution (GGD) of DCT coefficients to measure image complexity. The study shows that our method is highly effective in detecting resizing and splicing forgery in JPEG images. In the detection of resized JPEG images, the performance is related to both image complexity and the resize scale factor. At the same scale factor, the detection performance for high image complexity is, as can be expected, lower than that for low image complexity.
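
The GGD shape parameter mentioned above can be estimated from DCT coefficients by standard moment matching, as sketched below; this is a generic estimator, not necessarily the authors' exact procedure, and it assumes the empirical moment ratio falls inside the bracketed shape range.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

def ggd_shape(samples):
    """Moment-matching estimate of the generalized Gaussian shape parameter.

    Solves E|x| / sqrt(E[x^2]) = Gamma(2/b) / sqrt(Gamma(1/b) * Gamma(3/b))
    for the shape b. A small b indicates a peaky, heavy-tailed distribution,
    which the paper associates with image complexity in the DCT domain.
    """
    x = np.asarray(samples, dtype=np.float64).ravel()
    ratio = np.mean(np.abs(x)) / np.sqrt(np.mean(x ** 2))
    g = lambda b: gamma(2.0 / b) / np.sqrt(gamma(1.0 / b) * gamma(3.0 / b)) - ratio
    # Bracket [0.05, 10] covers typical DCT statistics; brentq fails outside it.
    return brentq(g, 0.05, 10.0)
```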

Proceedings ArticleDOI
30 Oct 2009
TL;DR: The impact of using different lossless compression algorithms on the compression ratios and timings when processing various biometric sample data is investigated.
Abstract: The impact of using different lossless compression algorithms when compressing biometric iris sample data from several public iris databases is investigated. In particular, we relate the application of dedicated lossless image codecs like lossless JPEG, JPEG-LS, PNG, and GIF, lossless variants of lossy codecs like JPEG2000, JPEG XR, and SPIHT, and a few general purpose compression schemes to rectilinear iris imagery. The results are discussed in the light of the recent ISO/IEC FDIS 19794-6 and ANSI/NIST-ITL 1-2011 standards and the IREX recommendations.

Journal ArticleDOI
TL;DR: In this paper, the authors conducted subjective tests using two representative still image coders, JPEG and JPEG 2000, and found that an observer would indeed prefer a lower spatial resolution (at a fixed viewing distance) in order to reduce the visibility of the compression artifacts.
Abstract: Most full-reference fidelity/quality metrics compare the original image to a distorted image at the same resolution assuming a fixed viewing condition. However, in many applications, such as video streaming, due to the diversity of channel capacities and display devices, the viewing distance and the spatiotemporal resolution of the displayed signal may be adapted in order to optimize the perceived signal quality. For example, at low bitrate coding applications an observer may prefer to reduce the resolution or increase the viewing distance to reduce the visibility of the compression artifacts. The tradeoff between resolution/viewing conditions and visibility of compression artifacts requires new approaches for the evaluation of image quality that account for both image distortions and image size. In order to better understand such tradeoffs, we conducted subjective tests using two representative still image coders, JPEG and JPEG 2000. Our results indicate that an observer would indeed prefer a lower spatial resolution (at a fixed viewing distance) in order to reduce the visibility of the compression artifacts, but not all the way to the point where the artifacts are completely invisible. Moreover, the observer is willing to accept more artifacts as the image size decreases. The subjective test results we report can be used to select viewing conditions for coding applications. They also set the stage for the development of novel fidelity metrics. The focus of this paper is on still images, but it is expected that similar tradeoffs apply to video.

Journal ArticleDOI
TL;DR: This paper presents a lossless bitplane-based method for efficient compression of microarray images based on arithmetic coding driven by image-dependent multibitplane finite-context models that produces an embedded bitstream that allows progressive, lossy-to-lossless decoding.
Abstract: The use of microarray expression data in state-of-the-art biology has been well established. The widespread adoption of this technology, coupled with the significant volume of data generated per experiment, in the form of images, has led to significant challenges in storage and query retrieval. In this paper, we present a lossless bitplane-based method for efficient compression of microarray images. This method is based on arithmetic coding driven by image-dependent multibitplane finite-context models. It produces an embedded bitstream that allows progressive, lossy-to-lossless decoding. We compare the compression efficiency of the proposed method with three image compression standards (JPEG2000, JPEG-LS, and JBIG) and also with the two most recent specialized methods for microarray image coding. The proposed method gives better results for all images of the test sets and confirms the effectiveness of bitplane-based methods and finite-context modeling for the lossless compression of microarray images.