
Showing papers on "JPEG 2000" published in 2007


Journal ArticleDOI
TL;DR: It is shown that the scheme based on the proposed low-complexity KLT significantly outperforms previous schemes in terms of rate-distortion performance, and an evaluation framework based on both reconstruction fidelity and impact on image exploitation is introduced.
Abstract: Transform-based lossy compression has a huge potential for hyperspectral data reduction. Hyperspectral data are 3-D, and the nature of their correlation is different in each dimension. This calls for a careful design of the 3-D transform to be used for compression. In this paper, we investigate the transform design and rate allocation stage for lossy compression of hyperspectral data. First, we select a set of 3-D transforms, obtained by combining in various ways wavelets, wavelet packets, the discrete cosine transform, and the Karhunen-Loève transform (KLT), and evaluate the coding efficiency of these combinations. Second, we propose a low-complexity version of the KLT, in which complexity and performance can be balanced in a scalable way, allowing one to design the transform that best matches a specific application. Third, we integrate this, as well as other existing transforms, into the framework of Part 2 of the Joint Photographic Experts Group (JPEG) 2000 standard, taking advantage of the high coding efficiency of JPEG 2000 and exploiting the interoperability of an international standard. We introduce an evaluation framework based on both reconstruction fidelity and impact on image exploitation, and evaluate the proposed algorithm by applying this framework to AVIRIS scenes. It is shown that the scheme based on the proposed low-complexity KLT significantly outperforms previous schemes in terms of rate-distortion performance. As for impact on exploitation, we consider multiclass hard classification, spectral unmixing, binary classification, and anomaly detection as benchmark applications.
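A minimal sketch of the spectral decorrelation step described above, assuming the KLT basis is estimated from a random subset of pixel spectra as one plausible way to trade complexity against coding gain; the cube dimensions, subset fraction, and function names are illustrative, and the spatial JPEG 2000 Part 2 coding stage is not shown.

```python
# Hedged sketch: spectral KLT for a hyperspectral cube, with the covariance
# estimated from a random sample of pixel spectra (a plausible low-complexity
# knob, not necessarily the paper's exact construction).
import numpy as np

def spectral_klt(cube, sample_fraction=0.05, rng=np.random.default_rng(0)):
    """cube: (rows, cols, bands) float array -> (coeff planes, mean, basis)."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    # Estimate the band covariance from a small random sample of spectra.
    n_sample = max(bands + 1, int(sample_fraction * pixels.shape[0]))
    sample = pixels[rng.choice(pixels.shape[0], n_sample, replace=False)]
    mean = sample.mean(axis=0)
    cov = np.cov(sample - mean, rowvar=False)
    # Eigenvectors sorted by decreasing variance form the KLT basis.
    eigval, eigvec = np.linalg.eigh(cov)
    basis = eigvec[:, ::-1]
    coeffs = (pixels - mean) @ basis            # decorrelated spectral planes
    return coeffs.reshape(rows, cols, bands), mean, basis

# Each output plane would then be coded spatially (e.g. with JPEG 2000 Part 2).
cube = np.random.rand(64, 64, 32)
planes, mean, basis = spectral_klt(cube)
```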

292 citations


Proceedings ArticleDOI
27 Feb 2007
TL;DR: A novel statistical model based on Benford's law for the probability distributions of the first digits of the block-DCT and quantized JPEG coefficients is presented, and a parametric logarithmic law, i.e., the generalized Benford's law, is formulated.
Abstract: In this paper, a novel statistical model based on Benford's law for the probability distributions of the first digits of the block-DCT and quantized JPEG coefficients is presented. A parametric logarithmic law, i.e., the generalized Benford's law, is formulated. Furthermore, some potential applications of this model in image forensics are discussed in this paper, which include the detection of JPEG compression for images in bitmap format, the estimation of the JPEG compression Q-factor for a JPEG-compressed bitmap image, and the detection of double-compressed JPEG images. The results of our extensive experiments demonstrate the effectiveness of the proposed statistical model.
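A small sketch of how the first-digit statistics in question could be gathered and compared against a logarithmic curve of the generalized-Benford form p(d) = N log10(1 + 1/(s + d^q)); the parameter values, quantization step, and function names below are illustrative assumptions rather than the paper's fitted model.

```python
# Hedged sketch: leading-digit histogram of quantized 8x8 block-DCT
# coefficients versus a generalized-Benford-style curve (illustrative only).
import numpy as np
from scipy.fftpack import dct

def block_dct_first_digits(img, q_step=8):
    """Normalized histogram of leading digits (1..9) of non-zero quantized AC coefficients."""
    h, w = (d - d % 8 for d in img.shape)
    digits = np.zeros(10)
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            block = img[y:y+8, x:x+8].astype(np.float64) - 128.0
            coeff = dct(dct(block.T, norm='ortho').T, norm='ortho')
            quant = np.round(coeff / q_step).astype(int)
            quant[0, 0] = 0                         # drop the DC term
            for v in np.abs(quant[quant != 0]):
                digits[int(str(v)[0])] += 1
    return digits[1:] / digits[1:].sum()

def generalized_benford(N=1.456, s=0.0372, q=1.0):   # placeholder parameters
    d = np.arange(1, 10, dtype=float)
    p = N * np.log10(1.0 + 1.0 / (s + d ** q))
    return p / p.sum()                               # renormalize for comparison

img = (np.random.rand(128, 128) * 255).astype(np.uint8)
print(block_dct_first_digits(img))
print(generalized_benford())
```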

287 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed ADL-based image coding technique outperforms JPEG 2000 in both PSNR and visual quality, with the improvement up to 2.0 dB on images with rich orientation features.
Abstract: We present a novel 2-D wavelet transform scheme of adaptive directional lifting (ADL) in image coding. Instead of alternately applying horizontal and vertical lifting, as in present practice, ADL performs lifting-based prediction in local windows in the direction of high pixel correlation. Hence, it adapts far better to the image orientation features in local windows. The ADL transform is achieved by existing 1-D wavelets and is seamlessly integrated into the global wavelet transform. The predicting and updating signals of ADL can be derived even at the fractional pixel precision level to achieve high directional resolution, while still maintaining perfect reconstruction. To enhance the ADL performance, a rate-distortion optimized directional segmentation scheme is also proposed to form and code a hierarchical image partition adapting to local features. Experimental results show that the proposed ADL-based image coding technique outperforms JPEG 2000 in both PSNR and visual quality, with improvements of up to 2.0 dB on images with rich orientation features.
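As a rough illustration of lifting-based prediction along a chosen direction, the sketch below applies one predict/update pass in which odd rows are predicted from even rows shifted by an integer offset d; the (5,3)-style weights, the single global direction, and the assumption of an even number of rows are simplifications of the actual ADL scheme, which selects directions per local window at fractional-pixel precision.

```python
# Hedged sketch: one level of lifting along columns with an integer
# directional shift d; perfect reconstruction holds by construction.
import numpy as np

def directional_lift(img, d=1):
    even, odd = img[0::2].astype(float), img[1::2].astype(float)
    up   = np.roll(even, -d, axis=1)                          # even row above, shifted
    down = np.roll(np.vstack([even[1:], even[-1:]]), d, axis=1)
    detail = odd - 0.5 * (up + down)                          # predict step
    upd = np.roll(detail, d, axis=1)
    approx = even + 0.25 * (upd + np.vstack([upd[:1], upd[:-1]]))  # update step
    return approx, detail

def directional_unlift(approx, detail, d=1):
    upd = np.roll(detail, d, axis=1)
    even = approx - 0.25 * (upd + np.vstack([upd[:1], upd[:-1]]))
    up   = np.roll(even, -d, axis=1)
    down = np.roll(np.vstack([even[1:], even[-1:]]), d, axis=1)
    odd = detail + 0.5 * (up + down)
    out = np.empty((even.shape[0] * 2, even.shape[1]))
    out[0::2], out[1::2] = even, odd
    return out

img = np.random.rand(64, 64)
a, dc = directional_lift(img, d=2)
assert np.allclose(directional_unlift(a, dc, d=2), img)       # perfect reconstruction
```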

238 citations


Proceedings ArticleDOI
15 Apr 2007
TL;DR: A novel method for the detection of image tampering operations in JPEG images by exploiting the blocking artifact characteristics matrix (BACM) to train a support vector machine (SVM) classifier for recognizing whether an image is an original JPEG image or it has been cropped from another JPEG image and re-saved as a JPEG image.
Abstract: One of the most common practices in image tampering involves cropping a patch from a source and pasting it onto a target. In this paper, we present a novel method for the detection of such tampering operations in JPEG images. The lossy JPEG compression introduces inherent blocking artifacts into the image and our method exploits such artifacts to serve as a 'watermark' for the detection of image tampering. We develop the blocking artifact characteristics matrix (BACM) and show that, for the original JPEG images, the BACM exhibits regular symmetrical shape; for images that are cropped from another JPEG image and re-saved as JPEG images, the regular symmetrical property of the BACM is destroyed. We fully exploit this property of the BACM and derive representation features from the BACM to train a support vector machine (SVM) classifier for recognizing whether an image is an original JPEG image or it has been cropped from another JPEG image and re-saved as a JPEG image. We present experiment results to show the efficacy of our method.
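The sketch below is a loose, simplified stand-in for the BACM idea: for each of the 64 possible 8x8 grid alignments it measures average pixel differences across the hypothesized block boundaries, producing an 8x8 map whose regularity is disturbed by cropping; the exact BACM definition and feature extraction in the paper differ, and the classifier step is only indicated in comments.

```python
# Hedged sketch: a simplified blockiness map over the 64 grid alignments,
# NOT the paper's exact BACM construction.
import numpy as np

def blockiness_map(img):
    img = img.astype(float)
    m = np.zeros((8, 8))
    for dy in range(8):
        for dx in range(8):
            cols = np.arange(dx, img.shape[1] - 1, 8)   # hypothesized vertical boundaries
            rows = np.arange(dy, img.shape[0] - 1, 8)   # hypothesized horizontal boundaries
            v = np.abs(img[:, cols + 1] - img[:, cols]).mean()
            h = np.abs(img[rows + 1, :] - img[rows, :]).mean()
            m[dy, dx] = v + h
    return m

img = np.random.rand(128, 128) * 255
features = blockiness_map(img).flatten()   # pooled over many labeled images in practice
# Such features could then train a classifier, e.g.:
#   from sklearn.svm import SVC
#   clf = SVC(kernel='rbf').fit(feature_matrix, labels)
```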

197 citations


Journal ArticleDOI
TL;DR: A SIMD algorithm is presented that performs the convolution-based DWT completely on a GPU, which brings us significant performance gain on a normal PC without extra cost.
Abstract: Discrete wavelet transform (DWT) has been heavily studied and developed in various scientific and engineering fields. Its multiresolution and locality nature facilitates applications requiring progressiveness and capturing high-frequency details. However, when dealing with enormous data volumes, its performance may drop drastically. On the other hand, with the recent advances in consumer-level graphics hardware, personal computers nowadays are usually equipped with a graphics processing unit (GPU)-based graphics accelerator that offers SIMD-based parallel processing power. This paper presents a SIMD algorithm that performs the convolution-based DWT completely on a GPU, which brings us significant performance gain on a normal PC without extra cost. Although the forward and inverse wavelet transforms are mathematically different, the proposed algorithm unifies them into an almost identical process that can be efficiently implemented on the GPU. Different wavelet kernels and boundary extension schemes can be easily incorporated by simply modifying input parameters. To demonstrate its applicability and performance, we apply it to wavelet-based geometric design, stylized image processing, texture-illuminance decoupling, and JPEG2000 image encoding.
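For reference, a CPU sketch of the separable convolution-based analysis step that this kind of algorithm maps onto GPU fragment programs; Haar filters and periodic extension are used purely for brevity, and the function names are not from the paper.

```python
# Hedged sketch: one level of a separable convolution-based 2-D DWT (Haar).
import numpy as np

LO = np.array([1.0, 1.0]) / np.sqrt(2.0)    # Haar low-pass
HI = np.array([1.0, -1.0]) / np.sqrt(2.0)   # Haar high-pass

def analyze_1d(x, filt):
    # periodic extension, convolution, then dyadic downsampling
    ext = np.concatenate([x, x[:len(filt) - 1]])
    return np.convolve(ext, filt, mode='valid')[::2]

def dwt2_level(img):
    rows_lo = np.apply_along_axis(analyze_1d, 1, img, LO)
    rows_hi = np.apply_along_axis(analyze_1d, 1, img, HI)
    ll = np.apply_along_axis(analyze_1d, 0, rows_lo, LO)
    lh = np.apply_along_axis(analyze_1d, 0, rows_lo, HI)
    hl = np.apply_along_axis(analyze_1d, 0, rows_hi, LO)
    hh = np.apply_along_axis(analyze_1d, 0, rows_hi, HH := HI)
    return ll, lh, hl, hh

img = np.random.rand(64, 64)
ll, lh, hl, hh = dwt2_level(img)    # each subband is 32x32
```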

101 citations


Book ChapterDOI
22 Aug 2007
TL;DR: This paper proposes a lossless data hiding technique for JPEG images based on histogram pairs that embeds data into the JPEG quantized 8x8 block DCT coefficients and can obtain higher payload than the prior arts.
Abstract: This paper proposes a lossless data hiding technique for JPEG images based on histogram pairs. It embeds data into the JPEG quantized 8x8 block DCT coefficients and can achieve good performance in terms of PSNR versus payload through manipulating histogram pairs with optimum threshold and optimum region of the JPEG DCT coefficients. It can obtain higher payload than the prior arts. In addition, the increase of JPEG file size after data embedding remains unnoticeable. These have been verified by our extensive experiments.
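A generic, textbook-style sketch of reversible histogram shifting on quantized DCT coefficients, included to illustrate the kind of histogram manipulation involved; the paper's histogram-pair method with optimum threshold and region selection is more elaborate, and the threshold T, function names, and toy data here are assumptions.

```python
# Hedged sketch: reversible embedding by histogram shifting at threshold T.
import numpy as np

def embed(coeffs, bits, T=1):
    c = coeffs.copy()
    out_bits = list(bits)
    c[c > T] += 1                         # open a gap next to bin T
    carriers = np.flatnonzero(coeffs == T)
    for i in carriers[:len(out_bits)]:
        c[i] += out_bits.pop(0)           # T -> T (bit 0) or T+1 (bit 1)
    return c

def extract(c, n_bits, T=1):
    bits, restored = [], c.copy()
    for i in np.flatnonzero((c == T) | (c == T + 1)):
        if len(bits) < n_bits:
            bits.append(int(c[i] == T + 1))
        restored[i] = T
    restored[restored > T + 1] -= 1       # undo the shift
    return bits, restored

coeffs = np.array([0, 1, 3, -2, 1, 1, 4, 0, 1, 2, 1, -1])   # toy quantized AC coefficients
msg = [1, 0, 1, 1, 0]
stego = embed(coeffs, msg, T=1)
rec_bits, rec = extract(stego, len(msg), T=1)
assert rec_bits == msg and np.array_equal(rec, coeffs)      # lossless recovery
```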

91 citations


Journal ArticleDOI
TL;DR: To ensure that the iris-matching algorithms studied are not degraded by image compression, it is recommended that normalized iris images should be exchanged at 512 × 80 pixel resolution, compressed by JPEG 2000 to 0.5 bpp.
Abstract: The resilience of identity verification systems to subsampling and compression of human iris images is investigated for three high-performance iris-matching algorithms. For evaluation, 2156 images from 308 irises from the extended Chinese Academy of Sciences Institute of Automation database were mapped into a rectangular format with 512 pixels circumferentially and 80 radially. For identity verification, the 48 rows that were closest to the pupil were taken and images were resized by subsampling their Fourier coefficients. Negligible degradation in verification is observed if at least 171 circumferential and 16 radial Fourier coefficients are preserved, which would correspond to sampling the polar image at 342 × 32 pixels. With JPEG2000 compression, improved matching performance is observed down to 0.3 b/pixel (bpp), attributed to noise reduction without a significant loss of texture. To ensure that the iris-matching algorithms studied are not degraded by image compression, it is recommended that normalized iris images should be exchanged at 512 × 80 pixel resolution, compressed by JPEG 2000 to 0.5 bpp. This achieves a smaller file size than the ANSI/INCITS 379-2004 iris image interchange format.
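A short sketch of the Fourier-coefficient subsampling described above: the normalized iris image is resized by keeping only the lowest 2-D Fourier coefficients; the crop layout and array sizes are illustrative.

```python
# Hedged sketch: downsizing a polar-unwrapped iris image by truncating its
# 2-D Fourier spectrum to the lowest keep_rows x keep_cols frequencies.
import numpy as np

def fourier_resize(img, keep_rows, keep_cols):
    spec = np.fft.fftshift(np.fft.fft2(img))
    r0 = (img.shape[0] - keep_rows) // 2
    c0 = (img.shape[1] - keep_cols) // 2
    cropped = spec[r0:r0 + keep_rows, c0:c0 + keep_cols]
    scale = (keep_rows * keep_cols) / (img.shape[0] * img.shape[1])
    return np.real(np.fft.ifft2(np.fft.ifftshift(cropped))) * scale

iris = np.random.rand(48, 512)           # 48 radial rows x 512 circumferential samples
small = fourier_resize(iris, 32, 342)    # roughly the 342 x 32 polar sampling cited above
```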

71 citations


Proceedings ArticleDOI
TL;DR: The overall rate-distortion performance of JPEG 2000, AVC/H.264 High 4:4:4 Intra and HD Photo is quite comparable across the three coding approaches, within an average range of ±10% in bitrate variation, and all three outperform conventional JPEG.
Abstract: In this paper, we report a study evaluating rate-distortion performance between JPEG 2000, AVC/H.264 High 4:4:4 Intra and HD Photo. A set of ten high definition color images with different spatial resolutions has been used. Both the PSNR and the perceptual MSSIM index were considered as distortion metrics. Results show that, for the material used to carry out the experiments, the overall performance, in terms of compression efficiency, is quite comparable for the three coding approaches, within an average range of ±10% in bitrate variations, with all three outperforming conventional JPEG.
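A sketch of computing the two distortion metrics used in the study with scikit-image (the codecs themselves are out of scope); synthetic arrays stand in for the original and decoded images, and the channel_axis argument assumes scikit-image 0.19 or later.

```python
# Hedged sketch: PSNR and mean SSIM on a reference/decoded image pair.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ref = (rng.random((256, 256, 3)) * 255).astype(np.uint8)                 # stand-in original
dec = np.clip(ref.astype(float) + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)

psnr = peak_signal_noise_ratio(ref, dec, data_range=255)
mssim = structural_similarity(ref, dec, channel_axis=-1, data_range=255)  # scikit-image >= 0.19
print(f'PSNR = {psnr:.2f} dB, MSSIM = {mssim:.4f}')
```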

69 citations


Journal ArticleDOI
TL;DR: A novel multiple description coding technique is proposed, based on optimal Lagrangian rate allocation, which enables easy tuning of the required coding redundancy and generated streams are fully compatible with Part 1 of the standard.
Abstract: In this paper, a novel multiple description coding technique is proposed, based on optimal Lagrangian rate allocation. The method assumes the coded data consists of independently coded blocks. Initially, all the blocks are coded at two different rates. Then blocks are split into two subsets with similar rate distortion characteristics; two balanced descriptions are generated by combining code blocks belonging to the two subsets encoded at opposite rates. A theoretical analysis of the approach is carried out, and the optimal rate distortion conditions are worked out. The method is successfully applied to the JPEG 2000 standard and simulation results show a noticeable performance improvement with respect to state-of-the art algorithms. The proposed technique enables easy tuning of the required coding redundancy. Moreover, the generated streams are fully compatible with Part 1 of the standard
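One plausible way to form the two balanced descriptions from independently coded blocks, assuming each block is available at a high and a low rate and carries a rate-distortion figure of merit; the Lagrangian optimization of the two rates is not shown and the dictionary layout is an assumption.

```python
# Hedged sketch: balanced pairing of code blocks into two descriptions.
def make_descriptions(blocks):
    """blocks: list of dicts with keys 'id', 'high', 'low', 'rd_gain'."""
    ordered = sorted(blocks, key=lambda b: b['rd_gain'], reverse=True)
    desc1, desc2 = [], []
    for i, b in enumerate(ordered):
        if i % 2 == 0:
            desc1.append((b['id'], b['high']))   # high-rate copy in description 1
            desc2.append((b['id'], b['low']))    # low-rate copy in description 2
        else:
            desc1.append((b['id'], b['low']))
            desc2.append((b['id'], b['high']))
    return desc1, desc2

blocks = [{'id': i, 'high': f'B{i}@hi', 'low': f'B{i}@lo', 'rd_gain': g}
          for i, g in enumerate([9.1, 4.2, 7.5, 1.3])]
d1, d2 = make_descriptions(blocks)   # each description mixes high- and low-rate copies
```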

61 citations


Journal ArticleDOI
TL;DR: A novel reduced-bit-rate approach is proposed for reducing the memory required to store a remote diagnosis and for transmitting it rapidly; the embedded DCT-CSPIHT image compression reduced the computational complexity to only a quarter of that of the wavelet-based subband decomposition and improved the quality of the reconstructed medical image.

57 citations


Journal ArticleDOI
TL;DR: A lossless wavelet-based image compression method with adaptive prediction that achieves a higher compression rate on CT and MRI images compared with several state-of-the-art methods.

Journal ArticleDOI
TL;DR: This letter presents an advanced discrete cosine transform (DCT)-based image compression method that combines advantages of several approaches and provides significantly better compression than JPEG and other DCT-based techniques.
Abstract: This letter presents an advanced discrete cosine transform (DCT)-based image compression method that combines advantages of several approaches. First, an image is divided into blocks of different sizes by a rate-distortion-based modified horizontal-vertical partition scheme. Statistical redundancy of quantized DCT coefficients of each image block is reduced by a bit-plane dynamical arithmetical coding with a sophisticated context modeling. Finally, a post-filtering removes blocking artifacts in decompressed images. The proposed method provides significantly better compression than JPEG and other DCT-based techniques. Moreover, it outperforms JPEG2000 and other wavelet-based image coders

Proceedings ArticleDOI
02 Jul 2007
TL;DR: An accurate end-to-end distortion model is proposed to analyze the influence of different channel packets on the distortion of received images; it provides a feasible way to optimally allocate unequal FEC to a JPEG2000 bitstream according to given channel conditions.
Abstract: JPEG2000, the latest international image compression standard, has many unique characteristics that differ from those of other well-known image compression schemes such as JPEG and SPIHT. As a result, how to robustly transmit JPEG2000 bitstreams is an important research topic. In this paper, we apply our previous IL-ULP (improved layered unequal loss protection) scheme to transmit JPEG2000 coded images. We propose an accurate end-to-end distortion model to analyze the influence of different channel packets on the distortion of received images, where we consider the distortion contribution from each coding pass in each code-block. Our end-to-end analysis provides a feasible way to optimally allocate unequal FEC to a JPEG2000 bitstream according to given channel conditions. Experimental results demonstrate that our proposed IL-ULP can achieve good performance for transmission of JPEG2000 bitstreams.
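A hedged sketch of greedy unequal FEC allocation driven by expected distortion reduction, under an assumed model of independent packet erasures and Reed-Solomon-style recovery; the loss rate, layer distortions, and function names are illustrative and not the paper's exact end-to-end model.

```python
# Hedged sketch: assign parity packets one at a time to the layer with the
# largest marginal expected-distortion gain.
from math import comb

def p_decodable(n_data, n_parity, p_loss):
    """Probability that at most n_parity of n_data+n_parity packets are lost."""
    n = n_data + n_parity
    return sum(comb(n, k) * p_loss**k * (1 - p_loss)**(n - k)
               for k in range(n_parity + 1))

def allocate_fec(delta_d, n_data, parity_budget, p_loss=0.1):
    parity = [0] * len(delta_d)
    for _ in range(parity_budget):
        def gain(i):
            cur = p_decodable(n_data, parity[i], p_loss)
            new = p_decodable(n_data, parity[i] + 1, p_loss)
            return delta_d[i] * (new - cur)
        parity[max(range(len(delta_d)), key=gain)] += 1
    return parity

# e.g. four layers whose loss would cost 40, 25, 10 and 5 MSE units
print(allocate_fec([40.0, 25.0, 10.0, 5.0], n_data=8, parity_budget=12))
```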

Journal ArticleDOI
TL;DR: It is found that image contrast and average gray level play important roles in image compression and quality evaluation; in the future, gray level and contrast effects should be considered in developing new objective metrics.
Abstract: Previous studies have shown that Joint Photographic Experts Group (JPEG) 2000 compression is better than JPEG at higher compression ratio levels. However, some findings revealed that this is not valid at lower levels. In this study, the qualities of compressed medical images in these ratio areas (∼20), including computed radiography, computed tomography head and body, mammographic, and magnetic resonance T1 and T2 images, were estimated using both a pixel-based (peak signal to noise ratio) and two 8 × 8 window-based [Q index and Moran peak ratio (MPR)] metrics. To diminish the effects of blocking artifacts from JPEG, jump windows were used in both window-based metrics. Comparing the image quality indices between jump and sliding windows, the results showed that blocking artifacts were produced from JPEG compression, even at low compression ratios. However, even after the blocking artifacts were omitted in JPEG compressed images, JPEG2000 outperformed JPEG at low compression levels. We found in this study that the image contrast and the average gray level play important roles in image compression and quality evaluation. There were drawbacks in all metrics that we used. In the future, the image gray level and contrast effect should be considered in developing new objective metrics.
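A sketch of the window-based Q index (the Wang-Bovik universal quality index) with a step parameter that switches between sliding windows and the non-overlapping "jump" windows used in the study; the epsilon term and the synthetic test data are assumptions.

```python
# Hedged sketch: 8x8 window-based Q index with jump (step=8) or sliding (step=1) windows.
import numpy as np

def q_index(x, y, win=8, step=8, eps=1e-12):
    x, y = x.astype(float), y.astype(float)
    vals = []
    for r in range(0, x.shape[0] - win + 1, step):
        for c in range(0, x.shape[1] - win + 1, step):
            a, b = x[r:r+win, c:c+win], y[r:r+win, c:c+win]
            ma, mb = a.mean(), b.mean()
            va, vb = a.var(), b.var()
            cov = ((a - ma) * (b - mb)).mean()
            vals.append(4 * cov * ma * mb / ((va + vb) * (ma**2 + mb**2) + eps))
    return float(np.mean(vals))

ref = np.random.rand(64, 64) * 255
deg = ref + np.random.randn(64, 64) * 5
print(q_index(ref, deg, step=8))   # jump windows, insensitive to 8x8 block boundaries
print(q_index(ref, deg, step=1))   # sliding windows
```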

Proceedings ArticleDOI
28 Jan 2007
TL;DR: JPEG compression performs surprisingly well at high bitrates in face recognition systems, given the low PSNR performance observed, although PSNR suggests JPEG would deliver worse recognition results in the case of face imagery.
Abstract: The impact of using different lossy compression algorithms on the matching accuracy of fingerprint and face recognition systems is investigated. In particular, we relate rate-distortion performance as measured in PSNR to the matching scores as obtained by the recognition systems. JPEG2000 and SPIHT are correctly predicted by PSNR to be the most suited compression algorithms to be used in fingerprint and face recognition systems. Fractal compression is identified as the least suited for use in the investigated recognition systems, although PSNR suggests JPEG would deliver worse recognition results in the case of face imagery. JPEG compression performs surprisingly well at high bitrates in face recognition systems, given the low PSNR performance observed.

Journal ArticleDOI
TL;DR: A new architecture of lifting processor for JPEG2000 is proposed and implemented with both FPGA and ASIC and includes a new cell structure that executes a unit of lifting calculation to satisfy the requirements of the lifting process of a repetitive arithmetic.
Abstract: In this paper, we proposed a new architecture of lifting processor for JPEG2000 and implemented it with both FPGA and ASIC. It includes a new cell structure that executes a unit of lifting calculation to satisfy the repetitive arithmetic requirements of the lifting process. After analyzing the operational sequence of the lifting arithmetic in detail and imposing causality for hardware implementation, the unit cell was optimized. A new simple lifting kernel was organized by repeatedly arranging the unit cells, and a lifting processor for Motion JPEG2000 was realized with this kernel. The proposed processor can handle any tile size and supports both lossy and lossless operation with the (9,7) and (5,3) filters, respectively. Also, it has the same throughput rate as the input and can continuously output the wavelet coefficients of the four subband types (LL, LH, HL, HH) simultaneously. The lifting processor was implemented in a 0.35 μm CMOS fabrication process, occupied about 90 000 gates, and operated stably at about 150 MHz.

Journal ArticleDOI
TL;DR: Two versions of the new CBA algorithms are introduced and compared and it is shown that by using a Laplacian probability model for the DCT coefficients as well as down-sampling the subordinate colors, the compression results are further improved.
Abstract: Most coding techniques for color image compression employ a de-correlation approach: the RGB primaries are transformed into a de-correlated color space, such as YUV or YCbCr, then the de-correlated color components are encoded separately. Examples of this approach are the JPEG and JPEG2000 image compression standards. A different method, of a correlation-based approach (CBA), is presented in this paper. Instead of de-correlating the color primaries, we employ the existing inter-color correlation to approximate two of the components as a parametric function of the third one, called the base component. We then propose to encode the parameters of the approximation function and part of the approximation errors. We use the DCT (discrete cosine transform) block transform to enhance the algorithm's performance. Thus the approximation of two of the color components based on the third color is performed for each DCT subband separately. We use the rate-distortion theory of subband transform coders to optimize the algorithm's bits allocation for each subband and to find the optimal color components transform to be applied prior to coding. This pre-processing stage is similar to the use of the RGB to YUV transform in JPEG and may further enhance the algorithm's performance. We introduce and compare two versions of the new algorithm and show that by using a Laplacian probability model for the DCT coefficients as well as down-sampling the subordinate colors, the compression results are further improved. Simulation results are provided showing that the new CBA algorithms are superior to presently available algorithms based on the common de-correlation approach, such as JPEG.
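A minimal sketch of the correlation-based idea: per DCT subband, one color component is approximated as an affine function of the base component across all 8x8 blocks, so only two parameters and a residual per subband would need coding; the rate-distortion optimized bit allocation and the optimized color pre-transform of the actual CBA algorithm are not shown, and the helper names are assumptions.

```python
# Hedged sketch: per-subband affine approximation of green from red coefficients.
import numpy as np
from scipy.fftpack import dct

def block_dct_planes(channel):
    h, w = (d - d % 8 for d in channel.shape)
    blocks = (channel[:h, :w].astype(float)
              .reshape(h // 8, 8, w // 8, 8).transpose(0, 2, 1, 3).reshape(-1, 8, 8))
    return dct(dct(blocks, axis=1, norm='ortho'), axis=2, norm='ortho')

def fit_subbands(base, other):
    cb, co = block_dct_planes(base), block_dct_planes(other)
    params, residual = np.zeros((8, 8, 2)), np.zeros_like(co)
    for u in range(8):
        for v in range(8):
            A = np.column_stack([cb[:, u, v], np.ones(cb.shape[0])])
            coef, *_ = np.linalg.lstsq(A, co[:, u, v], rcond=None)
            params[u, v] = coef                      # slope and offset for this subband
            residual[:, u, v] = co[:, u, v] - A @ coef
    return params, residual

rgb = np.random.rand(64, 64, 3) * 255
params, resid = fit_subbands(rgb[:, :, 0], rgb[:, :, 1])   # red as base, green approximated
```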

Journal ArticleDOI
TL;DR: This paper proposes a mesh streaming method based on the JPEG 2000 standard and integrates it into an existing multimedia streaming server, so that the method can directly benefit from current image and video streaming technologies.
Abstract: For PCs and even mobile devices, video and image streaming technologies, such as H.264 and JPEG/JPEG 2000, are already mature. However, the streaming technology for 3D models, or so-called mesh data, is still far from practical use. Therefore, in this paper, we propose a mesh streaming method based on the JPEG 2000 standard and integrate it into an existing multimedia streaming server, so that our mesh streaming method can directly benefit from current image and video streaming technologies. In this method, the mesh data of a 3D model is first converted into a JPEG 2000 image, and then, based on the JPEG 2000 streaming technique, the mesh data can be transmitted over the Internet as a mesh stream. Furthermore, we extend this mesh streaming method to deforming meshes, as the extension from a JPEG 2000 image to a Motion JPEG 2000 video, so that our method can transmit not only a static 3D model but also a 3D animation model. To increase the usability of our method, the mesh stream can also be inserted into an X3D scene as an extension node of X3D. Moreover, since this method is based on the JPEG 2000 standard, our system is well suited for integration into any existing client-server or peer-to-peer multimedia streaming system.
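A speculative sketch of one way mesh geometry could be packed into a 16-bit image for a JPEG 2000 codec to carry; the quantization depth, row layout, and function names are assumptions and not the mapping defined in the paper, and connectivity coding is omitted.

```python
# Hedged sketch: quantize vertex coordinates to 16 bits and lay them out as
# an image that a generic JPEG 2000 encoder could compress and stream.
import numpy as np

def mesh_to_image(vertices, width=256):
    v = np.asarray(vertices, dtype=np.float64)
    vmin, vmax = v.min(axis=0), v.max(axis=0)
    q = np.round((v - vmin) / (vmax - vmin) * 65535).astype(np.uint16)
    n_rows = int(np.ceil(len(q) / width))
    img = np.zeros((3 * n_rows, width), dtype=np.uint16)
    for axis in range(3):                             # x, y, z in separate row bands
        plane = np.zeros(n_rows * width, dtype=np.uint16)
        plane[:len(q)] = q[:, axis]
        img[axis * n_rows:(axis + 1) * n_rows] = plane.reshape(n_rows, width)
    return img, (vmin, vmax)          # bounds are needed to invert the quantization

verts = np.random.rand(1000, 3)
img, bounds = mesh_to_image(verts)    # img could now be fed to a JPEG 2000 encoder
```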

Patent
17 Apr 2007
TL;DR: In this article, a method and apparatus are described for segmenting an image, adaptively scaling an image, and automatically scaling and cropping an image based on codestream header data.
Abstract: A method and apparatus is described for segmenting an image, for adaptively scaling an image, and for automatically scaling and cropping an image based on codestream header data. In one embodiment, a file that can provide a header that contains multi-scale entropy distribution information on blocks of an image is received. For each block, the block is assigned to a scale from a set of scales that maximizes a cost function. The cost function is a product of a total likelihood and a prior. The total likelihood is a product of likelihoods of the blocks. The image is segmented by grouping together blocks that have been assigned equivalent scales. In one embodiment, the file represents an image in JPEG 2000 format.

Journal ArticleDOI
TL;DR: This paper addresses the problem of scene dependency and scene susceptibility in image quality assessments and proposes image analysis as a means to group test scenes, according to basic inherent scene properties that human observers refer to when they judge the quality of images.
Abstract: Image quality assessments have shown that both JPEG and JPEG2000 compression schemes are dependent on scene content. This paper addresses the problem of scene dependency and scene susceptibility in image quality assessments and proposes image analysis as a means to group test scenes, according to basic inherent scene properties that human observers refer to when they judge the quality of images. Experimental work is carried out to investigate the relationship between scene content and the subjective results obtained from experimental work carried out in [E. Allen, S. Triantaphillidou, and R. E. Jacobson, "Image quality comparison between JPEG and JPEG2000. I. Psychophysical Investigation", J. Imaging Sci. Technol. 51, 248 (2007)]. The content of the test images used in this work is analyzed using simple image analysis measures that quantify various image features, such as original scene contrast and global brightness, amount of dominant lines, scene busyness (defined here as a scene/image property indicating the presence or absence of detail), and flat areas within the scene. Preliminary results and conclusions are obtained and suggestions are made to form a basis for further studies on scene dependency and scene classification with respect to image quality measurements.

Journal ArticleDOI
TL;DR: Results on Airborne Visible/Infrared Imaging Spectrometer scenes show that the new lossy compression algorithm provides better rate-distortion performance, as well as improved anomaly detection performance, with respect to the state of the art.
Abstract: We propose a new lossy compression algorithm for hyperspectral images, which is based on the spectral Karhunen-Loeve transform, followed by spatial JPEG 2000, which employs a model of anomalous pixels during the compression process. Results on Airborne Visible/Infrared Imaging Spectrometer scenes show that the new algorithm provides better rate-distortion performance, as well as improved anomaly detection performance, with respect to the state of the art.

Journal ArticleDOI
TL;DR: A novel multi-level wavelet based fusion algorithm that combines information from fingerprint, face, iris, and signature images of an individual into a single composite image that reduces the memory size, increases the recognition accuracy using multi-modal biometric features, and withstands common attacks.

Journal ArticleDOI
TL;DR: A comparison of subjective image quality between JPEG and JPEG 2000 to establish whether JPEG 2000 does indeed demonstrate significant improvements in visual quality, and a particular focus of this work is the inherent scene dependency of the two algorithms and their influence on subjective imagequality results.
Abstract: The original JPEG compression standard is efficient at low to medium levels of compression with relatively low levels of loss in visual image quality and has found widespread use in the imaging industry. Excessive compression using JPEG however, results in well-known artifacts such as "blocking" and "ringing," and the variation in image quality as a result of differing scene content is well documented. JPEG 2000 has been developed to improve on JPEG in terms of functionality and image quality at lower bit rates. One of the more fundamental changes is the use of a discrete wavelet transform instead of a discrete cosine transform, which provides several advantages both in terms of the way in which the image is encoded and overall image quality. This study involves a comparison of subjective image quality between JPEG and JPEG 2000 to establish whether JPEG 2000 does indeed demonstrate significant improvements in visual quality. A particular focus of this work is the inherent scene dependency of the two algorithms and their influence on subjective image quality results. Further work on the characterization of scene content is carried out in a connected study [S. Triantaphillidou, E. Allen, and R. E. Jacobson, "Image quality comparison between JPEG and JPEG2000. II. Scene dependency, scene analysis, and classification"].

Journal Article
TL;DR: An objective image quality assessment is proposed to measure the quality of grayscale compressed images; it correlates well with subjective quality measurement (MOS) and requires little computation time.
Abstract: Measurement of image compression quality is important for image processing applications. In this paper, we propose an objective image quality assessment for grayscale compressed images that correlates well with subjective quality measurement (MOS) and requires little computation time. The new objective quality measurement is developed from a few fundamental objective measurements for evaluating compressed image quality under JPEG and JPEG2000. The reliability of each fundamental objective measurement with respect to the subjective measurement (MOS) is determined. From the experimental results, we found that the Maximum Difference measurement (MD) and a newly proposed measurement, Structural Content Laplacian Mean Square Error (SCLMSE), are suitable for evaluating the quality of JPEG2000 and JPEG compressed images, respectively. In addition, the MD and SCLMSE measurements are scaled to make them equivalent to MOS, rating compressed image quality from 1 to 5 (unacceptable to excellent). Keywords: JPEG, JPEG2000, objective image quality measurement, subjective image quality measurement, correlation coefficients.
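For concreteness, sketches of two of the underlying measures mentioned above, Maximum Difference (MD) and a Laplacian mean-square-error term of the kind that enters the proposed SCLMSE; the exact SCLMSE combination is not reproduced, and the test data are synthetic.

```python
# Hedged sketch: MD and a normalized Laplacian MSE between a reference and a
# degraded image (illustrative forms, not the paper's SCLMSE definition).
import numpy as np
from scipy.ndimage import laplace

def maximum_difference(ref, test):
    return float(np.max(np.abs(ref.astype(float) - test.astype(float))))

def laplacian_mse(ref, test, eps=1e-12):
    lr, lt = laplace(ref.astype(float)), laplace(test.astype(float))
    return float(np.sum((lr - lt) ** 2) / (np.sum(lr ** 2) + eps))

ref = np.random.rand(64, 64) * 255
test = ref + np.random.randn(64, 64) * 3
print(maximum_difference(ref, test), laplacian_mse(ref, test))
```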

Journal ArticleDOI
TL;DR: An error-resilient arithmetic coder with a forbidden symbol is used in order to improve the performance of the joint source/channel scheme and the practical relevance of the proposed joint decoding approach is demonstrated within the JPEG 2000 coding standard.
Abstract: In this paper, an innovative joint source/channel coding scheme is presented. The proposed approach enables iterative soft decoding of arithmetic codes by means of a soft-in soft-out decoder based on suboptimal search and pruning of a binary tree. An error-resilient arithmetic coder with a forbidden symbol is used in order to improve the performance of the joint source/channel scheme. The performance in the case of transmission across the AWGN channel is evaluated in terms of word error probability and compared to a traditional separated approach. The interleaver gain, the convergence property of the system, and the optimal source/channel rate allocation are investigated. Finally, the practical relevance of the proposed joint decoding approach is demonstrated within the JPEG 2000 coding standard. In particular, an iterative channel and JPEG 2000 decoder is designed and tested in the case of image transmission across the AWGN channel.

Book ChapterDOI
01 Jan 2007
TL;DR: This chapter analyzes the effects that standard image compression methods, JPEG (Wallace, 1991) and JPEG2000 (Skodras et al., 2001), have on well-known subspace appearance-based face recognition algorithms, including Principal Component Analysis (PCA) and Independent Component Analysis (ICA).
Abstract: With the growing number of face recognition applications in everyday life, image- and video-based recognition methods are becoming an important research topic (Zhao et al., 2003). Effects of pose, illumination and expression are the issues currently most studied in face recognition. So far, very little has been done to investigate the effects of compression on face recognition, even though the images are mainly stored and/or transported in a compressed format. Still-to-still image experimental setups are often researched, but only in uncompressed image formats. Still-to-video research (Zhou et al., 2003) mostly deals with issues of tracking and recognizing faces in the sense that still uncompressed images are used as a gallery and compressed video segments as probes. In this chapter we analyze the effects that standard image compression methods, JPEG (Wallace, 1991) and JPEG2000 (Skodras et al., 2001), have on three well-known subspace appearance-based face recognition algorithms: Principal Component Analysis (PCA) (Turk & Pentland, 1991), Linear Discriminant Analysis (LDA) (Belhumeur et al., 1996), and Independent Component Analysis (ICA) (Bartlett et al., 2002). We use McNemar's hypothesis test (Beveridge et al., 2001; Delac et al., 2006) when comparing recognition accuracy in order to determine whether the observed outcomes of the experiments are statistically significant or a matter of chance. Following the idea of reproducible research, a comprehensive description of our experimental setup is given, along with details on the choice of images used in the training and testing stages, exact preprocessing steps, and recognition algorithm parameter setup. The image database chosen for the experiments is the grayscale portion of the FERET database (Phillips et al., 2000) and its accompanying protocol for face identification, including standard image gallery and probe sets. Image compression is performed using standard JPEG and JPEG2000 coder implementations, and all experiments are done in the pixel domain (i.e. the images are compressed to a certain number of bits per pixel and then uncompressed prior to use in recognition experiments). The recognition system setup we test is twofold. In the first part, only probe images are compressed while training and gallery images are uncompressed (Delac et al., 2005). This setup mimics the expected first step in implementing compression in real-life face recognition applications: an image captured by a surveillance camera is compared against an existing high-quality gallery image. In the second part, a leap towards justifying fully compressed-domain face recognition is taken by using compressed images in both the training and testing stages (Delac, 2006). We will show that, contrary to common opinion, compression does not degrade performance and even improves it slightly in some cases. We will also suggest some prospective lines of further research based on our findings.

Journal ArticleDOI
TL;DR: An optimized content-aware authentication scheme for JPEG-2000 streams over lossy networks, where a received packet is consumed only when it is both decodable and authenticated, achieves its design goal in that the rate-distortion curve of the authenticated image is very close to the R-D curve when no authentication is required.
Abstract: This paper proposes an optimized content-aware authentication scheme for JPEG-2000 streams over lossy networks, where a received packet is consumed only when it is both decodable and authenticated. In a JPEG-2000 codestream, some packets are more important than others in terms of coding dependency and image quality. This naturally motivates allocating more redundant authentication information for the more important packets in order to maximize their probability of authentication and thereby minimize the distortion at the receiver. Towards this goal, with the awareness of its corresponding image content, we formulate an optimization framework to compute an authentication graph to maximize the expected media quality at the receiver, given specific authentication overhead and knowledge of network loss rate. System analysis and experimental results demonstrate that the proposed scheme achieves our design goal in that the rate-distortion (R-D) curve of the authenticated image is very close to the R-D curve when no authentication is required

Journal ArticleDOI
01 Apr 2007-Eye
TL;DR: The performance of the classic JPEG and JPEG2000 algorithms is equivalent when compressing digital images of DR lesions from 1.26 MB to 118 KB and 58 KB; at higher compression ratios JPEG2000 gives slightly better results, but these may be insufficient for screening purposes.
Abstract: Evaluation of the effect of JPEG and JPEG2000 image compression on the detection of diabetic retinopathy

Proceedings ArticleDOI
27 Mar 2007
TL;DR: By incorporating adaptive directional lifting and 2D piecewise autoregressive model into the encoder and decoder respectively, this work is able to improve the performance of JPEG 2000 image codec at low to modest bit rates.
Abstract: Considering that quincunx lattice is a more efficient spatial sampling scheme than square lattice, we investigate a new approach of image coding for quincunx sample arrangement. The key findings are: 1) adaptive directional lifting is particularly suited to decorrelate samples on quincunx lattice, and 2) quincunx samples can be processed by a 2D piecewise autoregressive model to reproduce the image of conventional square pixel grid, while preserving high frequency spatial features well. By incorporating these two techniques into the encoder and decoder respectively, we are able to improve the performance of JPEG 2000 image codec at low to modest bit rates. Since an image can be easily split into quincunx segments, this work has significance for multiple description image/video coding as well
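A small sketch of the quincunx (checkerboard) sample split the approach builds on, with a lossless merge back to the square grid; the adaptive directional lifting used to code each set and the 2D autoregressive interpolation back to the full grid are not shown.

```python
# Hedged sketch: split an image into its two quincunx sample sets and merge back.
import numpy as np

def quincunx_split(img):
    mask = (np.indices(img.shape).sum(axis=0) % 2) == 0   # checkerboard mask
    return img[mask], img[~mask], mask

def quincunx_merge(set0, set1, mask):
    out = np.empty(mask.shape, dtype=set0.dtype)
    out[mask], out[~mask] = set0, set1
    return out

img = np.random.rand(64, 64)
s0, s1, mask = quincunx_split(img)
assert np.array_equal(quincunx_merge(s0, s1, mask), img)   # lossless round trip
```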

Journal ArticleDOI
TL;DR: This letter proposes a method that exploits the data rate-distortion characteristics to generate multiple descriptions for JPEG 2000 with tunable redundancy levels and the identification of the best number of descriptions as a function of the network conditions.
Abstract: Multiple description coding (MDC) is a good way to combat packet losses in error-prone networks subject to packet erasures. However, redundancy tuning is often difficult, and this makes the generation of descriptions with good redundancy-rate-distortion performance a hard job. Moreover, the complexity of generating more than two descriptions represents a strong limitation to MDC. In this letter, we propose a method that exploits the data rate-distortion characteristics to generate multiple descriptions for JPEG 2000 with tunable redundancy levels. The identification of the best number of descriptions as a function of the network conditions is also addressed