
Showing papers on "JPEG 2000 published in 2005"


Journal ArticleDOI
TL;DR: It is claimed that natural scenes contain nonlinear dependencies that are disturbed by the compression process, and that this disturbance can be quantified and related to human perceptions of quality.
Abstract: Measurement of image or video quality is crucial for many image-processing algorithms, such as acquisition, compression, restoration, enhancement, and reproduction. Traditionally, image quality assessment (QA) algorithms interpret image quality as similarity with a "reference" or "perfect" image. The obvious limitation of this approach is that the reference image or video may not be available to the QA algorithm. The field of blind, or no-reference, QA, in which image quality is predicted without the reference image or video, has been largely unexplored, with algorithms focussing mostly on measuring the blocking artifacts. Emerging image and video compression technologies can avoid the dreaded blocking artifact by using various mechanisms, but they introduce other types of distortions, specifically blurring and ringing. In this paper, we propose to use natural scene statistics (NSS) to blindly measure the quality of images compressed by JPEG2000 (or any other wavelet based) image coder. We claim that natural scenes contain nonlinear dependencies that are disturbed by the compression process, and that this disturbance can be quantified and related to human perceptions of quality. We train and test our algorithm with data from human subjects, and show that reasonably comprehensive NSS models can help us in making blind, but accurate, predictions of quality. Our algorithm performs close to the limit imposed on useful prediction by the variability between human subjects.
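
As a rough illustration of the natural scene statistics idea, the sketch below computes simple per-subband statistics of a wavelet decomposition; heavy wavelet-domain compression tends to alter such statistics. The use of PyWavelets and this particular feature set are assumptions made for illustration only, not the authors' actual NSS model of nonlinear dependencies.

```python
import numpy as np
import pywt  # PyWavelets (assumed available)

def wavelet_nss_features(img, wavelet="db2", levels=3):
    """Toy NSS-style features: per-subband variance and kurtosis of wavelet
    coefficients of a grayscale image. Compression tends to flatten these
    statistics; the paper builds a much richer model of the joint
    (parent-child) coefficient dependencies."""
    coeffs = pywt.wavedec2(np.asarray(img, dtype=np.float64), wavelet, level=levels)
    feats = []
    for detail_level in coeffs[1:]:          # skip the approximation band
        for band in detail_level:            # (horizontal, vertical, diagonal)
            c = band.ravel()
            var = c.var()
            kurt = np.mean((c - c.mean()) ** 4) / (var ** 2 + 1e-12)
            feats.extend([var, kurt])
    return np.array(feats)
```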

612 citations


Journal ArticleDOI
TL;DR: This paper proposes two solutions for platform-based design of an H.264/AVC intra frame coder and, after comprehensive analysis of instructions and exploration of parallelism, proposes a system architecture with four-parallel intra prediction and mode decision to enhance the processing capability.
Abstract: Intra prediction with rate-distortion constrained mode decision is the most important technology in the H.264/AVC intra frame coder, which is competitive with the latest image coding standard JPEG2000 in terms of both coding performance and computational complexity. The predictor generation engine for intra prediction and the transform engine for mode decision are critical because the operations require a lot of memory access and occupy 80% of the computation time of the entire intra compression process. A low-cost general-purpose processor cannot process these operations in real time. In this paper, we propose two solutions for platform-based design of the H.264/AVC intra frame coder. One solution is a software implementation targeted at low-end applications. Context-based decimation of unlikely candidates, subsampling of matching operations, bit-width truncation to reduce the computations, and an interleaved full-search/partial-search strategy to stop the error propagation and to maintain the image quality are proposed and combined as our fast algorithm. Experimental results show that our method can reduce 60% of the computation used for intra prediction and mode decision while keeping the peak signal-to-noise ratio degradation below 0.3 dB. The other solution is a hardware accelerator targeted at high-end applications. After comprehensive analysis of instructions and exploration of parallelism, we propose a system architecture with four-parallel intra prediction and mode decision to enhance the processing capability. The Hadamard-based mode decision is modified into a discrete cosine transform-based version to reduce memory access by 40%. Two-stage macroblock pipelining is also proposed to double the processing speed and hardware utilization. The other features of our design are a reconfigurable predictor generator supporting all 13 intra prediction modes, a parallel multitransform and inverse transform engine, and a CAVLC bitstream engine. A prototype chip is fabricated with TSMC 0.25-µm CMOS 1P5M technology. Simulation results show that our implementation can process 16 megapixels (4096×4096) within 1 s, or equivalently 720×480 4:2:0 30 Hz video in real time, at an operating frequency of 54 MHz. The transistor count is 429 K, and the core size is only 1.855×1.885 mm².
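
As background for the mode-decision engine discussed above, a minimal sketch of a Hadamard-based (SATD) cost and an exhaustive mode search is given below; this is textbook H.264 practice rather than the authors' fast algorithm, the rate term of the RD cost is omitted, and the hardware solution replaces the Hadamard cost with a DCT-based one.

```python
import numpy as np

# 4x4 Hadamard matrix commonly used for SATD in H.264 mode decision
H4 = np.array([[1,  1,  1,  1],
               [1,  1, -1, -1],
               [1, -1, -1,  1],
               [1, -1,  1, -1]])

def satd_cost(block, prediction):
    """Sum of absolute transformed differences between a 4x4 block and one
    intra prediction candidate (common normalisation: divide by 2)."""
    diff = block.astype(np.int64) - prediction.astype(np.int64)
    t = H4 @ diff @ H4.T
    return int(np.abs(t).sum()) // 2

def best_intra_mode(block, candidate_predictions):
    """Pick the candidate prediction with the smallest SATD cost."""
    costs = [satd_cost(block, p) for p in candidate_predictions]
    best = int(np.argmin(costs))
    return best, costs[best]
```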

331 citations


Journal ArticleDOI
27 Jun 2005
TL;DR: The rapid development in the field during the past 40 years and current state-of-the-art strategies for coding images and videos are outlined, and novel techniques targeted at achieving higher compression gains, error robustness, and network/device adaptability are described and discussed.
Abstract: The objective of the paper is to provide an overview of recent trends and future perspectives in image and video coding. Here, I review the rapid development in the field during the past 40 years and outline current state-of-the-art strategies for coding images and videos. These and other coding algorithms are discussed in the context of the international JPEG, JPEG 2000, MPEG-1/2/4, and H.261/3/4 standards. Novel techniques targeted at achieving higher compression gains, error robustness, and network/device adaptability are described and discussed.

217 citations


Book ChapterDOI
28 Jan 2005

196 citations


Journal ArticleDOI
TL;DR: A high-performance and memory-efficient pipeline architecture which performs the one-level two-dimensional (2-D) discrete wavelet transform (DWT) in the 5/3 and 9/7 filters by cascading the three key components.
Abstract: In this paper, we propose a high-performance and memory-efficient pipeline architecture which performs the one-level two-dimensional (2-D) discrete wavelet transform (DWT) in the 5/3 and 9/7 filters. In general, the internal memory size of 2-D architecture highly depends on the pipeline registers of one-dimensional (1-D) DWT. Based on the lifting-based DWT algorithm, the primitive data path is modified and an efficient pipeline architecture is derived to shorten the data path. Accordingly, under the same arithmetic resources, the 1-D DWT pipeline architecture can operate at a higher processing speed (up to 200 MHz in 0.25-µm technology) than other pipelined architectures with direct implementation. The proposed 2-D DWT architecture is composed of two 1-D processors (column and row processors). Based on the modified algorithm, the row processor can partially execute each row-wise transform with only two column-processed data. Thus, the pipeline registers of 1-D architecture do not fully turn into the internal memory of 2-D DWT. For an N×M image, only 3.5N internal memory is required for the 5/3 filter, and 5.5N is required for the 9/7 filter to perform the one-level 2-D DWT decomposition with the critical path of one multiplier delay (i.e., N and M indicate the height and width of an image). The pipeline data path is regular and practicable. Finally, the proposed architecture implements the 5/3 and 9/7 filters by cascading the three key components.
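
For reference, the predict and update steps that such lifting-based hardware pipelines implement can be written compactly in software. The sketch below performs one level of the reversible 5/3 transform on a 1-D integer signal (even length and symmetric boundary extension assumed); applying it to every row and then every column gives the one-level 2-D decomposition discussed above.

```python
import numpy as np

def dwt53_1d(x):
    """One level of the reversible 5/3 lifting DWT on an even-length integer
    signal. Returns (lowpass, highpass)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict: highpass = odd - floor((left even + right even) / 2)
    right = np.append(even[1:], even[-1])          # symmetric extension
    odd -= (even + right) // 2
    # Update: lowpass = even + floor((left high + right high + 2) / 4)
    left = np.insert(odd[:-1], 0, odd[0])          # symmetric extension
    even += (left + odd + 2) // 4
    return even, odd
```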

147 citations


Journal ArticleDOI
TL;DR: A wavelet-based HDR still-image encoding method that maps the logarithm of each pixel value into integer values and then sends the results to a JPEG 2000 encoder to meet the HDR encoding requirement.
Abstract: The raw size of a high-dynamic-range (HDR) image brings about problems in storage and transmission. Many bytes are wasted in data redundancy and perceptually unimportant information. To address this problem, researchers have proposed some preliminary algorithms to compress the data, like RGBE/XYZE, OpenEXR, LogLuv, and so on. HDR images can have a dynamic range of more than four orders of magnitude while conventional 8-bit images retain only two orders of magnitude of the dynamic range. This distinction between an HDR image and a conventional image leads to difficulties in using most existing image compressors. JPEG 2000 supports up to 16-bit integer data, so it can already provide image compression for most HDR images. In this article, we propose a JPEG 2000-based lossy image compression scheme for HDR images of all dynamic ranges. We show how to fit HDR encoding into a JPEG 2000 encoder to meet the HDR encoding requirement. To achieve the goal of minimum error in the logarithm domain, we map the logarithm of each pixel value into integer values and then send the results to a JPEG 2000 encoder. Our approach is basically a wavelet-based HDR still-image encoding method.
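
A minimal sketch of the log-domain mapping described above, assuming a simple min/max normalisation onto the 16-bit integer range that JPEG 2000 supports (the paper's exact mapping constants and colour handling are not reproduced here):

```python
import numpy as np

def hdr_to_uint16(hdr, eps=1e-6):
    """Map linear HDR radiance values to 16-bit integers in the log domain,
    ready to be handed to a JPEG 2000 encoder. Returns the mapped image and
    the (lo, hi) log range needed to invert the mapping."""
    log_img = np.log2(np.maximum(np.asarray(hdr, dtype=np.float64), eps))
    lo, hi = log_img.min(), log_img.max()
    mapped = np.round((log_img - lo) / (hi - lo) * 65535.0)
    return mapped.astype(np.uint16), (lo, hi)

def uint16_to_hdr(mapped, lo_hi):
    """Approximate inverse of hdr_to_uint16 (exact up to quantization)."""
    lo, hi = lo_hi
    return np.exp2(mapped.astype(np.float64) / 65535.0 * (hi - lo) + lo)
```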

137 citations


Book ChapterDOI
19 Jun 2005
TL;DR: DCT-based image compression using blocks of size 32x32 is considered, and an effective method of bit-plane coding of quantized DCT coefficients is proposed that provides decoded image quality up to 1.9 dB higher than JPEG2000.
Abstract: DCT-based image compression using blocks of size 32x32 is considered. An effective method of bit-plane coding of quantized DCT coefficients is proposed. Parameters of post-filtering for removing blocking artifacts in decoded images are given. The efficiency of the proposed method for compressing test images is analyzed. It is shown that the proposed method is able to provide decoded image quality up to 1.9 dB higher than JPEG2000.
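
The transform-and-quantize step for one 32x32 block might look like the sketch below (tiling the image into blocks, the proposed bit-plane coder, and the deblocking post-filter are omitted; the use of SciPy's DCT here is an assumption):

```python
import numpy as np
from scipy.fft import dctn, idctn

def encode_block_32(block, step):
    """2-D DCT of one 32x32 block followed by uniform quantization."""
    coeffs = dctn(np.asarray(block, dtype=np.float64), norm="ortho")
    return np.round(coeffs / step).astype(np.int32)

def decode_block_32(q, step):
    """Dequantize and inverse-transform one 32x32 block."""
    return idctn(q * float(step), norm="ortho")
```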

104 citations


Patent
02 Dec 2005
TL;DR: In this paper, the authors proposed a method for high capacity embedding of data that is lossless (or distortion-free) because, after embedded information is extracted from a cover image, we revert to an exact copy of the original image before the embedding took place.
Abstract: Current methods of embedding hidden data in an image inevitably distort the original image by noise. This distortion cannot generally be removed completely because of quantization, bit-replacement, or truncation at the grayscales 0 and 255. The distortion, though often small, may make the original image unacceptable for medical applications, or for military and law enforcement applications where an image must be inspected under unusual viewing conditions (e.g., after filtering or extreme zoom). The present invention provides high-capacity embedding of data that is lossless (or distortion-free) because, after embedded information is extracted from a cover image, we revert to an exact copy of the original image before the embedding took place. This new technique is a powerful tool for a variety of tasks, including lossless robust watermarking, lossless authentication with fragile watermarks, and steganalysis. The technique is applicable to raw, uncompressed formats (e.g., BMP, PCX, PGM, RAS, etc.), lossy image formats (JPEG, JPEG2000, wavelet), and palette formats (GIF, PNG).

96 citations


Journal ArticleDOI
TL;DR: This paper presents a contrast-based quantization strategy for use in lossy wavelet image compression that attempts to preserve visual quality at any bit rate and yields images which are competitive in visual quality with results from current visually lossless approaches at high bit rates and which demonstrate improved visual quality over current visually lossy approaches at low bit rates.
Abstract: This paper presents a contrast-based quantization strategy for use in lossy wavelet image compression that attempts to preserve visual quality at any bit rate. Based on the results of recent psychophysical experiments using near-threshold and suprathreshold wavelet subband quantization distortions presented against natural-image backgrounds, subbands are quantized such that the distortions in the reconstructed image exhibit root-mean-squared contrasts selected based on image, subband, and display characteristics and on a measure of total visual distortion so as to preserve the visual system's ability to integrate edge structure across scale space. Within a single, unified framework, the proposed contrast-based strategy yields images which are competitive in visual quality with results from current visually lossless approaches at high bit rates and which demonstrate improved visual quality over current visually lossy approaches at low bit rates. This strategy operates in the context of both nonembedded and embedded quantization, the latter of which yields a highly scalable codestream which attempts to maintain visual quality at all bit rates; a specific application of the proposed algorithm to JPEG-2000 is presented.
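
A toy version of the contrast-targeted quantizer selection is sketched below: bisection finds the uniform step size whose quantization error reaches a prescribed RMS contrast for one subband. The threshold model based on image, subband, and display characteristics in the paper is not reproduced, so this is an assumption-laden illustration only.

```python
import numpy as np

def step_for_target_contrast(subband, mean_lum, target_contrast,
                             lo=0.01, hi=1000.0, iters=40):
    """Return a quantizer step whose error has (approximately) the target RMS
    contrast, defined here as error std divided by mean luminance. Assumes
    the target lies between the error contrasts obtained at `lo` and `hi`."""
    subband = np.asarray(subband, dtype=np.float64)

    def err_contrast(step):
        err = subband - step * np.round(subband / step)
        return err.std() / mean_lum

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if err_contrast(mid) < target_contrast:
            lo = mid        # error still below target: a coarser step is allowed
        else:
            hi = mid
    return 0.5 * (lo + hi)
```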

85 citations


Proceedings ArticleDOI
25 Jun 2005
TL;DR: A method based on the coevolutionary genetic algorithm introduced in [11] is used to evolve specialized wavelets for fingerprint images that consistently outperform the hand-designed wavelet currently used by the FBI to compress fingerprints.
Abstract: Wavelet-based image coders like the JPEG2000 standard are the state of the art in image compression. Unlike traditional image coders, however, their performance depends to a large degree on the choice of a good wavelet. Most wavelet-based image coders use standard wavelets that are known to perform well on photographic images. However, these wavelets do not perform as well on other common image classes, like scanned documents or fingerprints. In this paper, a method based on the coevolutionary genetic algorithm introduced in [11] is used to evolve specialized wavelets for fingerprint images. These wavelets are compared to the hand-designed wavelet currently used by the FBI to compress fingerprints. The results show that the evolved wavelets consistently outperform the hand-designed wavelet. Using evolution to adapt wavelets to classes of images can therefore significantly increase the quality of compressed images.

78 citations


Journal ArticleDOI
27 Jun 2005
TL;DR: This paper provides a survey of state-of-the-art hardware architectures for image and video coding with particular emphasis on efficient dedicated implementation for MPEG-4 video coding and JPEG 2000 still image coding.
Abstract: This paper provides a survey of state-of-the-art hardware architectures for image and video coding. Fundamental design issues are discussed with particular emphasis on efficient dedicated implementation. Hardware architectures for MPEG-4 video coding and JPEG 2000 still image coding are reviewed as design examples, and special approaches exploited to improve efficiency are identified. Further perspectives are also presented to address the challenges of hardware architecture design for advanced image and video coding in the future.

Journal ArticleDOI
TL;DR: A low-power, high-speed architecture which performs the two-dimensional forward and inverse discrete wavelet transform (DWT) for the set of filters in JPEG2000 is proposed using a line-based lifting scheme.
Abstract: A low-power, high-speed architecture which performs the two-dimensional forward and inverse discrete wavelet transform (DWT) for the set of filters in JPEG2000 is proposed, based on a line-based lifting scheme. It consists of one row processor and one column processor, each of which contains four sub-filters, and the row processor, which is time-multiplexed, operates in parallel with the column processor. Optimized shift-add operations are substituted for multiplications, and edge extension is implemented by an embedded circuit. The whole architecture, which is pipelined to increase speed and achieve higher hardware utilization, has been demonstrated on an FPGA. Two pixels per clock cycle can be encoded at 100 MHz. The architecture can be used as a compact and independent IP core for JPEG2000 VLSI implementations and various real-time image/video applications.

Journal ArticleDOI
D.T. Lee
27 Jun 2005
TL;DR: This paper attempts to summarize the lessons learned from the JPEG 2000 development experience and draw some conclusions on the success factors of this important standard.
Abstract: JPEG 2000 is a new image coding system that delivers superior compression performance and provides many advanced features in scalability, flexibility, and system functionalities that outperform all previous standards. It brings exciting possibilities to many imaging applications such as the Internet, wireless, security, and digital cinema. This paper gives an overview of this triumph in innovations and teamwork. It gives brief introductions to the four new parts that are under development by the JPEG committee. It attempts to summarize the lessons learned from the JPEG 2000 development experience and draw some conclusions on the success factors of this important standard.

Proceedings ArticleDOI
12 Jan 2005
TL;DR: A novel learning based method is proposed for No-Reference image quality assessment that aims to directly get the quality metric by means of learning.
Abstract: In this paper, a novel learning-based method is proposed for No-Reference image quality assessment. Instead of examining the exact prior knowledge for the given type of distortion and finding a suitable way to represent it, our method aims to obtain the quality metric directly by means of learning. First, training examples are prepared for both high-quality and low-quality classes; then a binary classifier is built on the training set; finally, the quality metric of an unlabeled example is given by the extent to which it belongs to these two classes. Different schemes to acquire examples from a given image, to build the binary classifier, and to model the quality metric are proposed and investigated. While most existing methods are tailored to some specific distortion type, the proposed method might provide a general solution for No-Reference image quality assessment. Experimental results on JPEG and JPEG2000 compressed images validate the effectiveness of the proposed method.
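
The classify-then-score idea can be prototyped in a few lines with scikit-learn. The random feature matrix and the SVM below are placeholders standing in for the feature-extraction and classifier schemes the paper investigates, so everything here is illustrative only.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder training data: rows are feature vectors extracted from images
# labelled high quality (1) or low quality (0).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))
y_train = rng.integers(0, 2, size=200)

clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

def quality_score(features):
    """Quality metric of an unlabeled image = the degree to which its feature
    vector belongs to the high-quality class."""
    return float(clf.predict_proba(np.asarray(features).reshape(1, -1))[0, 1])
```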

Book ChapterDOI
22 Aug 2005
TL;DR: This is the first comprehensive study of standard JPEG2000 compression effects on face recognition, as well as an extension of existing experiments for JPEG compression.
Abstract: In this paper we analyse the effects that JPEG and JPEG2000 compression have on subspace appearance-based face recognition algorithms. This is the first comprehensive study of standard JPEG2000 compression effects on face recognition, as well as an extension of existing experiments for JPEG compression. A wide range of bitrates (compression ratios) was used on probe images and results are reported for 12 different subspace face recognition algorithms. Effects of image compression on recognition performance are of interest in applications where image storage space and image transmission time are of critical importance. It will be shown that compression not only does not deteriorate performance but, in some cases, even improves it slightly. Some unexpected effects will be presented (like the ability of JPEG2000 to capture the information essential for recognizing changes caused by images taken later in time) and lines of further research are suggested.

Journal ArticleDOI
TL;DR: A shared key algorithm that works directly in the JPEG domain, thus enabling shared key image encryption for a variety of applications and retaining the storage advantage provided by JPEG compression standard is proposed.
Abstract: Several methods have been proposed for encrypting images by shared key encryption mechanisms since the work of Naor and Shamir. All the existing techniques are applicable to primarily non-compressed images in either monochrome or color domains. However, most imaging applications including digital photography, archiving, and Internet communications nowadays use images in the JPEG compressed format. Application of the existing shared key cryptographic schemes for these images requires conversion back into spatial domain. In this paper we propose a shared key algorithm that works directly in the JPEG domain, thus enabling shared key image encryption for a variety of applications. The scheme directly works on the quantized DCT coefficients and the resulting noise-like shares are also stored in the JPEG format. The decryption process is lossless preserving the original JPEG data. The experiments indicate that each share image is approximately the same size as the original JPEG image retaining the storage advantage provided by JPEG compression standard. Three extensions, one to improve the random appearance of the generated shares, another to obtain shares with asymmetric file sizes, and the third to generalize the scheme for n>2 share cases, are described as well.
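
One simple way to split quantized DCT coefficients into two noise-like shares that recombine losslessly is modular secret sharing, sketched below. This generic construction is chosen for illustration and is not claimed to be the paper's exact scheme.

```python
import numpy as np

def make_shares(quant_coeffs, key_seed, modulus=2048):
    """Split signed quantized DCT coefficients into two noise-like shares.
    Coefficients must lie in [-modulus//2, modulus//2)."""
    rng = np.random.default_rng(key_seed)
    offset = modulus // 2
    c = (np.asarray(quant_coeffs, dtype=np.int64) + offset) % modulus
    share1 = rng.integers(0, modulus, size=c.shape)
    share2 = (c - share1) % modulus
    return share1, share2

def combine_shares(share1, share2, modulus=2048):
    """Lossless reconstruction: add the shares modulo `modulus`, undo offset."""
    offset = modulus // 2
    return ((share1 + share2) % modulus) - offset
```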

Proceedings ArticleDOI
15 Jun 2005
TL;DR: This paper gives an overview of the study BioCompress that has been conducted at Fraunhofer IGD and evaluated the impact of lossy compression algorithms on the recognition performance of biometric recognition systems.
Abstract: A variety of widely accepted and efficient compression methods do exist for still images. To name a few, there are standardised schemes like JPEG and JPEG2000 which are well suited for photorealistic true colour and grey scale images and usually operated in lossy mode to achieve high compression ratios. These schemes are well suited for images that are processed within face recognition systems. In the case of forensic biometric systems, compression of fingerprint images has already been applied in automatic fingerprint identification systems (AFIS) applications, where the size of the digital fingerprint archives would be tremendous for uncompressed images. In these large scale applications wavelet scalar quantization has a long tradition as an effective encoding scheme. This paper gives an overview of the study BioCompress that has been conducted at Fraunhofer IGD on behalf of the Federal Office for Information Security (BSI). Based on fingerprint and face image databases and different biometric algorithms we evaluated the impact of lossy compression algorithms on the recognition performance of biometric recognition systems.

Proceedings ArticleDOI
14 Nov 2005
TL;DR: This paper proposes an encryption scheme maintaining the hierarchy of JPEG 2000 codestreams for flexible access control, in which only one managed key exists and a user permitted to access a reserved image quality is delivered only one key generated from the master key.
Abstract: This paper proposes an encryption scheme maintaining the hierarchy of JPEG 2000 codestreams for flexible access control. JPEG 2000 generates hierarchical codestreams and has flexible scalability in, for example, SNR, spatial resolution, or color components. In the proposed method, only one managed key (the master key) exists, and a user permitted to access a reserved image quality is delivered only one key generated from the master key; a user permitted to access a different quality is delivered a different key generated from the master key. The proposed access control method is available for all kinds of JPEG 2000 scalability. In contrast, some conventional methods need several keys to control image quality, and another conventional method is not suitable for JPEG 2000 scalability.
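
The "one delivered key per permitted quality" property can be obtained with a hash chain: the key for each layer is the hash of the key of the layer above, so a key for layer L lets its holder derive all lower-layer keys but none above it. The sketch below is a generic construction of this kind, not necessarily the paper's exact key-generation method.

```python
import hashlib

def quality_layer_keys(master_key: bytes, n_layers: int):
    """Derive one key per quality layer from a single master key via a hash
    chain. Index 0 is the lowest quality layer; handing out keys[L] grants
    access to layers 0..L only."""
    keys = [hashlib.sha256(master_key).digest()]          # highest layer key
    for _ in range(n_layers - 1):
        keys.append(hashlib.sha256(keys[-1]).digest())    # next lower layer
    return keys[::-1]
```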

Book ChapterDOI
20 Sep 2005
TL;DR: Image pre-filtering is shown to be expedient for improving coded image quality and/or increasing the compression ratio, and some recommendations on how to set the compression ratio to provide quasi-optimal quality of coded images are given.
Abstract: Lossy compression of noise-free images differs from that of noisy images. While in the first case image quality decreases as the compression ratio increases, in the second case the quality of the coded image, evaluated with respect to a noise-free reference, can improve over some range of compression ratios. This paper is devoted to the problem of lossy compression of noisy images, which arises, e.g., in the compression of remote sensing data. The efficiency of several approaches to this problem is studied. Image pre-filtering is shown to be expedient for improving coded image quality and/or increasing the compression ratio. Some recommendations on how to set the compression ratio to provide quasi-optimal quality of coded images are given. A novel DCT-based image compression method is briefly described and its performance is compared to JPEG and JPEG2000 for lossy coding of noisy images.

Journal ArticleDOI
TL;DR: A parallel architecture for the Embedded Block Coding (EBC) in JPEG 2000 is presented, based on the proposed word-level EBC algorithm, which can losslessly process 54 MSamples/s at 81 MHz and can support HDTV 720p resolution at 30 frames/s.
Abstract: This paper presents a parallel architecture for the Embedded Block Coding (EBC) in JPEG 2000. The architecture is based on the proposed word-level EBC algorithm. By processing all the bit planes in parallel, the state variable memories for the context formation (CF) can be completely eliminated. The length of the FIFO (first-in first-out) between the CF and the arithmetic encoder (AE) is optimized by a reconfigurable FIFO architecture. To reduce the hardware cost of the parallel architecture, we proposed a folded AE architecture. The parallel EBC architecture can losslessly process 54 MSamples/s at 81 MHz, which can support HDTV 720p resolution at 30 frames/s.

Proceedings ArticleDOI
06 Apr 2005
TL;DR: The proposed algorithm thus yields images that require the minimum bit-rate such that the reconstructed images are visually indistinguishable from the original images, thereby avoiding overly conservative or overly aggressive compression.
Abstract: A visually lossless compression algorithm for digitized radiographs, which predicts the maximum contrast that wavelet subband quantization distortions can exhibit in the reconstructed image such that the distortions are visually undetectable, is presented. Via a psychophysical experiment, contrast thresholds were measured for the detection of 1.15-18.4 cycles/degree wavelet subband quantization distortions in five digitized radiographs; results indicate that digitized radiographs impose image- and frequency-selective effects on detection. A quantization algorithm is presented which predicts the thresholds for individual images based on a model of visual masking. When incorporated into JPEG-2000 and applied to a suite of images, results indicate that digitized radiographs can be compressed in a visually lossless manner at an average compression ratio of 6.25:1, with some images requiring visually lossless ratios as low as 4:1 and as great as 9:1. The proposed algorithm thus yields images that require the minimum bit-rate such that the reconstructed images are visually indistinguishable from the original images. The primary utility of the proposed algorithm is its ability to provide image-adaptive visually lossless compression, thereby avoiding overly conservative or overly aggressive compression.

Journal ArticleDOI
TL;DR: A dedicated architecture of the block-coding engine was implemented in VHDL and synthesized for field-programmable gate array devices, and results show that the single engine can process about 22 million samples per second at a 66-MHz working frequency.
Abstract: JPEG 2000 offers critical advantages over other still image compression schemes at the price of increased computational complexity. Hardware-accelerated performance is a key to the successful development of real-time JPEG 2000 solutions for applications such as digital cinema and digital home theatre. Embedded block coding with optimized truncation plays the crucial role in the whole processing chain because it requires bit-level operations. In this paper, a dedicated architecture of the block-coding engine is presented. Square-based bit-plane scanning and an internal first-in first-out buffer are combined to speed up the context generation. A dynamic significance state restoring technique reduces the size of the state memories to 1 kbit. The pipeline architecture, enhanced by an inverse multiple branch selection method, is exploited to code two context-symbol pairs per clock cycle in the arithmetic coder module. The block-coding architecture was implemented in VHDL and synthesized for field-programmable gate array devices. Simulation results show that the single engine can process, on average, about 22 million samples per second at a 66-MHz working frequency.

Proceedings ArticleDOI
14 Nov 2005
TL;DR: A new metric using contrast signal-to-noise ratio (CSNR), which measures the ratio of the contrast information level of the distorted signal to the contrast level of the error signal, is proposed.
Abstract: Peak signal-to-noise ratio (PSNR) is commonly used as an objective quality metric in signal processing. However, PSNR correlates poorly with the subjective quality rating. In this paper, we propose a new metric using contrast signal-to-noise ratio (CSNR), which measures the ratio of the contrast information level of the distorted signal to the contrast level of the error signal. The performance of the proposed method has been verified using a database of images compressed with JPEG and JPEG 2000. The results show that it can achieve very good correlation with the subjective mean opinion scores in terms of prediction accuracy, monotonicity and consistency.
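
Under the assumed reading that "contrast" means RMS contrast normalised by the mean luminance of the reference, the metric can be sketched as follows (the paper's exact contrast definition may differ):

```python
import numpy as np

def csnr(distorted, reference):
    """Contrast signal-to-noise ratio in dB: contrast of the distorted signal
    over contrast of the error signal, both normalised by the reference's
    mean luminance."""
    distorted = np.asarray(distorted, dtype=np.float64)
    reference = np.asarray(reference, dtype=np.float64)
    mean_lum = reference.mean() + 1e-12
    c_signal = distorted.std() / mean_lum
    c_error = (distorted - reference).std() / mean_lum
    return 20.0 * np.log10(c_signal / (c_error + 1e-12))
```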

Patent
28 Mar 2005
TL;DR: In this paper, the authors proposed a method of JPEG compression of an image frame divided up into a plurality of non-overlapping, tiled 8×8 pixel blocks X_i.
Abstract: A method of JPEG compression of an image frame divided up into a plurality of non-overlapping, tiled 8×8 pixel blocks X_i. A global quantization matrix Q is determined by either selecting a standard JPEG quantization table or selecting a quantization table such that the magnitude of each quantization matrix coefficient Q[m,n] is inversely proportional to the aggregate visual importance in the image of the corresponding DCT basis vector. Next, a linear scaling factor S_i is selected for each block, bounded by user-selected values S_min and S_max. Transform coefficients Y_i, obtained from a discrete cosine transform of X_i, are quantized with the global table S_min·Q while emulating the effects of quantization with the local table S_i·Q, and the quantized coefficients T_i[m,n] and the global quantization table S_min·Q are entropy encoded, where S_min is a user-selected minimum scaling factor, to create a JPEG Part 1 image file. The algorithm is unique in that it allows the effect of variable quantization to be achieved while still producing a fully compliant JPEG Part 1 file.
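
The emulation step for one block can be sketched as follows: quantize with the coarser local table S_i·Q, then re-express the result on the finer global grid S_min·Q so that a standard decoder, which multiplies by S_min·Q, reconstructs the coarsely quantized values. Rounding and clipping details in the patent may differ from this assumption.

```python
import numpy as np

def emulate_variable_quantization(Y, Q, s_i, s_min):
    """Quantize one block's DCT coefficients Y so the stored values use the
    single global table s_min*Q (keeping the file JPEG Part 1 compliant)
    while the effective quantization is the coarser local table s_i*Q."""
    Y = np.asarray(Y, dtype=np.float64)
    Q = np.asarray(Q, dtype=np.float64)
    coarse = np.round(Y / (s_i * Q))             # effective local quantization
    # Re-express on the global grid: the decoder computes T * s_min * Q,
    # which approximately equals coarse * s_i * Q.
    return np.round(coarse * s_i / s_min).astype(np.int32)
```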

Proceedings ArticleDOI
18 Mar 2005
TL;DR: Improvements in compression of both inter- and intra-frame images by the matching pursuits (MP) algorithm are reported, and lower distortion is achieved on the residual images tested, and also on intra-frames at low bit rates.
Abstract: This paper reports improvements in compression of both inter- and intra-frame images by the matching pursuits (MP) algorithm. For both types of image, applying a 2D wavelet decomposition prior to MP coding is beneficial. The MP algorithm is then applied using various separable 1D codebooks. MERGE coding with precision limited quantization (PLQ) is used to yield a highly embedded data stream. For inter-frames (residuals), a codebook of only 8 bases with compact footprint is found to give improved fidelity at lower complexity than previous MP methods. Great improvement is also achieved in MP coding of still images (intra-frames). Compared to JPEG 2000, lower distortion is achieved on the residual images tested, and also on intra-frames at low bit rates.
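
The greedy core of matching pursuits is compact; the sketch below operates on a 1-D signal with a unit-norm dictionary and leaves out the wavelet pre-decomposition, separable codebooks, and MERGE/PLQ coding that the paper adds.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit: repeatedly pick the dictionary atom (unit-norm
    column) with the largest inner product with the residual. Returns the
    selected (index, coefficient) pairs and the final residual."""
    residual = np.asarray(signal, dtype=np.float64).copy()
    picks = []
    for _ in range(n_atoms):
        scores = dictionary.T @ residual
        k = int(np.argmax(np.abs(scores)))
        coeff = scores[k]
        residual -= coeff * dictionary[:, k]
        picks.append((k, coeff))
    return picks, residual
```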

Proceedings ArticleDOI
01 Jan 2005
TL;DR: The results show that JPEG-LS is the algorithm with the best performance, both in terms of compression ratio and compression speed in the application of compressing medical infrared images.
Abstract: Several popular lossless image compression algorithms were evaluated for the application of compressing medical infrared images. Lossless JPEG, JPEG-LS, JPEG2000, PNG, and CALIC were tested on an image dataset of 380+ thermal images. The results show that JPEG-LS is the algorithm with the best performance, both in terms of compression ratio and compression speed.

Journal ArticleDOI
TL;DR: This work explores how advanced correlation filters, such as the minimum average correlation energy filter and its variants, perform by using face images captured under different illumination conditions and encoded with different bit rates under the JPEG2000 wavelet-encoding standard.
Abstract: Face recognition on mobile devices, such as personal digital assistants and cell phones, is a big challenge owing to the limited computational resources available to run verifications on the devices themselves. One approach is to transmit the captured face images by use of the cell-phone connection and to run the verification on a remote station. However, owing to limitations in communication bandwidth, it may be necessary to transmit a compressed version of the image. We propose using the image compression standard JPEG2000, which is a wavelet-based compression engine used to compress the face images to low bit rates suitable for transmission over low-bandwidth communication channels. At the receiver end, the face images are reconstructed with a JPEG2000 decoder and are fed into the verification engine. We explore how advanced correlation filters, such as the minimum average correlation energy filter [Appl. Opt. 26, 3633 (1987)] and its variants, perform by using face images captured under different illumination conditions and encoded with different bit rates under the JPEG2000 wavelet-encoding standard. We evaluate the performance of these filters by using illumination variations from the Carnegie Mellon University’s Pose, Illumination, and Expression (PIE) face database. We also demonstrate the tolerance of these filters to noisy versions of images with illumination variations.
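
The minimum average correlation energy filter cited above has a closed-form frequency-domain solution, h = D^-1 X (X^H D^-1 X)^-1 u. A NumPy sketch for equal-size training images, with the peak-constraint vector u set to all ones, is given below; this is the textbook MACE formulation rather than the authors' full verification pipeline.

```python
import numpy as np

def mace_filter(images):
    """Build a MACE correlation filter (in the frequency domain) from a list
    of equal-size 2-D training images. Correlate a test image by multiplying
    its FFT with conj(h) and inverse-transforming."""
    X = np.stack([np.fft.fft2(im).ravel() for im in images], axis=1)   # d x N
    D = np.mean(np.abs(X) ** 2, axis=1) + 1e-12       # average power spectrum
    Dinv_X = X / D[:, None]
    A = X.conj().T @ Dinv_X                            # N x N matrix X^H D^-1 X
    u = np.ones(X.shape[1], dtype=complex)             # unit correlation peaks
    h = Dinv_X @ np.linalg.solve(A, u)
    return h.reshape(images[0].shape)
```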


Proceedings ArticleDOI
06 Dec 2005
TL;DR: A virtual microscope system, based on JPEG 2000, which utilizes extended depth of field (EDF) imaging is described, which shows that it can efficiently represent high-quality, high-resolution colour images of microscopic specimens with less than 1 bit per pixel.
Abstract: In this paper, we describe a virtual microscope system, based on JPEG 2000, which utilizes extended depth of field (EDF) imaging. Through a series of observer trials we show that EDF imaging improves both the local image quality of individual fields of view (FOV) and the accuracy with which the FOVs can be mosaiced (stitched) together. In addition, we estimate the required bit rate to adequately render a set of histology and cytology specimens at a quality suitable for on-line learning and collaboration. We show that, using JPEG 2000, we can efficiently represent high-quality, high-resolution colour images of microscopic specimens with less than 1 bit per pixel.

Journal ArticleDOI
TL;DR: A high-speed FPGA-based implementation of the Embedded Block Coding with Optimized Truncation (EBCOT) algorithm used in JPEG 2000 is presented, and an architecture based on parallel processing of the three coding passes is proposed to speed up the encoding.