
Showing papers on "JPEG published in 2003"


Book ChapterDOI
20 Oct 2003
TL;DR: An information-theoretic method for performing steganography and steganalysis using a statistical model of the cover medium is presented, which achieves a higher embedding efficiency and message capacity than previous methods while remaining secure against first order statistical attacks.
Abstract: This paper presents an information-theoretic method for performing steganography and steganalysis using a statistical model of the cover medium. The methodology is general, and can be applied to virtually any type of media. It provides answers for some fundamental questions which have not been fully addressed by previous steganographic methods, such as how large a message can be hidden without risking detection by certain statistical methods, and how to achieve this maximum capacity. Current steganographic methods have been shown to be insecure against fairly simple statistical attacks. Using the model-based methodology, an example steganography method is proposed for JPEG images which achieves a higher embedding efficiency and message capacity than previous methods while remaining secure against first order statistical attacks.

470 citations


Journal Article
TL;DR: A steganalytic method that can reliably detect messages (and estimate their size) hidden in JPEG images using the steganographic algorithm F5 is presented.
Abstract: In this paper, we present a steganalytic method that can reliably detect messages (and estimate their size) hidden in JPEG images using the steganographic algorithm F5. The key element of the method is estimation of the cover-image histogram from the stego-image. This is done by decompressing the stego-image, cropping it by four pixels in both directions to remove the quantization in the frequency domain, and recompressing it using the same quality factor as the stego-image. The number of relative changes introduced by F5 is determined using the least square fit by comparing the estimated histograms of selected DCT coefficients with those of the stego-image. Experimental results indicate that relative modifications as small as 10% of the usable DCT coefficients can be reliably detected. The method is tested on a diverse set of test images that include both raw and processed images in the JPEG and BMP formats.
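
A minimal sketch of the calibration step described above, under simplifying assumptions: the stego file name and the quality factor are placeholders, and unquantized block-DCT histograms are compared instead of the quantized-coefficient histograms and least-squares fit used in the paper.

```python
import numpy as np
from PIL import Image
from scipy.fft import dctn

def dct_histogram(pixels, mode=(2, 1), bins=np.arange(-20.5, 21.5)):
    """Histogram of one block-DCT mode over all 8x8 blocks of a grayscale image."""
    h, w = pixels.shape
    coeffs = []
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            block = pixels[y:y+8, x:x+8].astype(float) - 128.0
            coeffs.append(dctn(block, norm='ortho')[mode])
    return np.histogram(coeffs, bins=bins)[0]

stego = Image.open('stego.jpg').convert('L')   # hypothetical input file
quality = 75                                   # assumed to match the stego quality factor

# Crop by 4 pixels in both directions to break the original 8x8 quantization grid,
# then recompress with the same quality: the result approximates cover statistics.
cropped = stego.crop((4, 4, stego.width, stego.height))
cropped.save('calibrated.jpg', quality=quality)
calibrated = Image.open('calibrated.jpg').convert('L')

h_stego = dct_histogram(np.asarray(stego))
h_cover_estimate = dct_histogram(np.asarray(calibrated))
print('stego histogram      :', h_stego)
print('estimated cover hist.:', h_cover_estimate)
```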

433 citations


Journal ArticleDOI
Zhigang Fan, R.L. de Queiroz
TL;DR: A fast and efficient method is provided to determine whether an image has been previously JPEG compressed, and a method for the maximum likelihood estimation of JPEG quantization steps is developed.
Abstract: Sometimes image processing units inherit images in raster bitmap format only, so that processing is to be carried out without knowledge of past operations that may compromise image quality (e.g., compression). To carry out further processing, it is useful to not only know whether the image has been previously JPEG compressed, but to learn what quantization table was used. This is the case, for example, if one wants to remove JPEG artifacts or for JPEG re-compression. In this paper, a fast and efficient method is provided to determine whether an image has been previously JPEG compressed. After detecting a compression signature, we estimate compression parameters. Specifically, we developed a method for the maximum likelihood estimation of JPEG quantization steps. The quantizer estimation method is very robust, so that an estimated quantizer step size is only sporadically off, and when it is, it is off by one value.
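
A hedged sketch of the underlying idea (not the paper's exact maximum likelihood estimator): after earlier JPEG compression, block-DCT coefficients of the decoded bitmap cluster near multiples of the quantization step, so a simple best-fit search over candidate steps can both detect prior compression and estimate the step. The file name, the single DCT mode, and all thresholds are illustrative assumptions.

```python
import numpy as np
from PIL import Image
from scipy.fft import dctn

img = np.asarray(Image.open('suspect.bmp').convert('L'), dtype=float) - 128.0
mode = (1, 0)                                   # one low-frequency AC mode

coeffs = np.array([dctn(img[y:y+8, x:x+8], norm='ortho')[mode]
                   for y in range(0, img.shape[0] - 7, 8)
                   for x in range(0, img.shape[1] - 7, 8)])
coeffs = coeffs[np.abs(coeffs) > 0.5]           # near-zero values carry no information

def misfit(q):
    """Mean distance to the nearest multiple of q, normalized so that unquantized
    data scores about 1 and data quantized with step q scores near 0."""
    return np.mean(np.abs(coeffs - q * np.round(coeffs / q))) / (q / 4.0)

candidates = np.arange(2, 41)
scores = np.array([misfit(q) for q in candidates])
plausible = candidates[scores < 0.2]
if plausible.size:
    # Divisors of the true step also fit, so report the largest plausible step.
    print('previous JPEG compression detected; step for this mode ~', plausible.max())
else:
    print('no clear quantization signature for this mode')
```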

373 citations


Jan Lukás
01 Jan 2003
TL;DR: It is explained in this paper how double compression detection techniques and primary quantization matrix estimators can be used in steganalysis of JPEG files and in digital forensic analysis for detection of digital forgeries.
Abstract: In this report, we present a method for estimation of the primary quantization matrix from a double compressed JPEG image. We first identify characteristic features that occur in DCT histograms of individual coefficients due to double compression. Then, we present three different approaches that estimate the original quantization matrix from double compressed images. Finally, the most successful of them, a neural network classifier, is discussed and its performance and reliability are evaluated in a series of experiments on various databases of double compressed images. It is also explained in this paper how double compression detection techniques and primary quantization matrix estimators can be used in steganalysis of JPEG files and in digital forensic analysis for detection of digital forgeries.
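
A synthetic illustration of the double-quantization artifact such detectors rely on, sketched with Laplacian toy data rather than real per-mode DCT histograms; the report's feature extraction and neural network classifier are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=6.0, size=200_000)       # stand-in for one AC DCT mode

def quantize(x, q):
    return np.round(x / q).astype(int)

single = quantize(coeffs, 3)                        # compressed once with step 3
double = quantize(quantize(coeffs, 2) * 2, 3)       # step 2 first, then step 3

for name, data in (('single', single), ('double', double)):
    values, counts = np.unique(data, return_counts=True)
    hist = dict(zip(values.tolist(), counts.tolist()))
    print(f'{name:6s}:', [hist.get(v, 0) for v in range(-7, 8)])
# The doubly quantized histogram alternates between over- and under-populated
# bins, a periodic signature that also appears in the DCT histograms of
# double-compressed JPEG images.
```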

353 citations


Journal ArticleDOI
TL;DR: An image enhancement algorithm for images compressed using the JPEG standard is presented, based on a contrast measure defined within the discrete cosine transform (DCT) domain that does not affect the compressibility of the original image.
Abstract: An image enhancement algorithm for images compressed using the JPEG standard is presented. The algorithm is based on a contrast measure defined within the discrete cosine transform (DCT) domain. The advantages of the psychophysically motivated algorithm are 1) the algorithm does not affect the compressibility of the original image because it enhances the images in the decompression stage and 2) the approach is characterized by low computational complexity. The proposed algorithm is applicable to any DCT-based image compression standard, such as JPEG, MPEG-2, and H.261.
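
A toy sketch of enhancement in the DCT domain: each 8×8 block is transformed, its AC energy is boosted, and the block is inverse transformed, leaving the compressed bitstream untouched. The uniform gain is an illustrative stand-in for the paper's psychophysically motivated contrast measure, and the sketch runs on already-decoded pixels rather than inside the decoder.

```python
import numpy as np
from scipy.fft import dctn, idctn

def enhance(pixels, gain=1.3):
    img = pixels.astype(float) - 128.0
    out = img.copy()
    for y in range(0, img.shape[0] - 7, 8):
        for x in range(0, img.shape[1] - 7, 8):
            block = dctn(img[y:y+8, x:x+8], norm='ortho')
            dc = block[0, 0]
            block *= gain            # boost all coefficients ...
            block[0, 0] = dc         # ... but keep the DC term (mean brightness)
            out[y:y+8, x:x+8] = idctn(block, norm='ortho')
    return np.clip(out + 128.0, 0, 255).astype(np.uint8)
```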

317 citations


Journal ArticleDOI
TL;DR: An approach for filling-in blocks of missing data in wireless image transmission is presented, which aims to reconstruct the lost data using correlation between the lost block and its neighbors.
Abstract: An approach for filling-in blocks of missing data in wireless image transmission is presented. When compression algorithms such as JPEG are used as part of the wireless transmission process, images are first tiled into blocks of 8 × 8 pixels. When such images are transmitted over fading channels, the effects of noise can destroy entire blocks of the image. Instead of using common retransmission query protocols, we aim to reconstruct the lost data using correlation between the lost block and its neighbors. If the lost block contained structure, it is reconstructed using an image inpainting algorithm, while texture synthesis is used for the textured blocks. The switch between the two schemes is done in a fully automatic fashion based on the surrounding available blocks. The performance of this method is tested for various images and combinations of lost blocks. The viability of this method for image compression, in association with lossy JPEG, is also discussed.

243 citations


Patent
24 Oct 2003
TL;DR: In this article, the authors propose an Image Schema that defines the properties, behaviors, and relationships for Images in the system; the Schema also enforces rules about Images, for example, what data specific Images must contain and how specific Images can be extended.
Abstract: In an Item-based system, Images (e.g., JPEG, TIFF, bitmap, and so on) are treated as core platform objects (“Image Items” or, more simply, “Images”) and exist in an “Image Schema” that provides an extensible representation of an Image in the system—that is, the characteristics of an Image and how that Image relates to other Items (including but not limited to other Images) in the system. To this end, the Image Schema defines the properties, behaviors, and relationships for Images in the system, and the Schema also enforces rules about Images, for example, what data specific Images must contain, what data specific Images may optionally contain, how specific Images can be extended, and so on and so forth.

217 citations


Journal ArticleDOI
TL;DR: The results show that the proposed watermark scheme is robust to common signal distortions, including geometric manipulations, and robustness against scaling was achieved when the watermarked image size is scaled down to 0.4% of its original size.
Abstract: In recent years, digital watermarking techniques have been proposed to protect the copyright of multimedia data. Different watermarking schemes have been suggested for images. The goal of this paper is to develop a watermarking algorithm based on the discrete cosine transform (DCT) and image segmentation. The image is first segmented in different portions based on the Voronoi diagram and features extraction points. Then, a pseudorandom sequence of real numbers is embedded in the DCT domain of each image segment. Different experiments are conducted to show the performance of the scheme under different types of attacks. The results show that our proposed watermark scheme is robust to common signal distortions, including geometric manipulations. The robustness against Joint Photographic Experts Group (JPEG) compression is achieved for a compression ratio of up to 45, and robustness against average, median, and Wiener filters is shown for the 3×3 up to 9×9 pixel neighborhood. It is observed that robustness against scaling was achieved when the watermarked image size is scaled down to 0.4% of its original size.

179 citations


Journal ArticleDOI
TL;DR: Using general principles for developing steganalytic methods that can accurately estimate the number of changes to the cover image imposed during embedding, the secret message length is estimated for the most common embedding archetypes.
Abstract: The objective of steganalysis is to detect messages hidden in cover objects, such as digital images. In practice, the steganalyst is frequently interested in more than whether or not a secret message is present. The ultimate goal is to extract and decipher the secret message. However, in the absence of the knowledge of the stego technique and the stego and cipher keys, this task may be extremely time consuming or completely infeasible. Therefore, any additional information, such as the message length or its approximate placement in image features, could prove very valuable to the analyst. In this paper, we present general principles for developing steganalytic methods that can accurately estimate the number of changes to the cover image imposed during embedding. Using those principles, we show how to estimate the secret message length for the most common embedding archetypes, including the F5 and OutGuess algorithms for JPEG, EzStego algorithm with random straddling for palette images, and the classical LSB embedding with random straddling for uncompressed image formats. The paper concludes with an outline of ideas for future research such as estimating the steganographic capacity of embedding algorithms.

172 citations


Journal ArticleDOI
TL;DR: This work proposes an adaptive approach which performs blockiness reduction in both the DCT and spatial domains to reduce the block-to-block discontinuities and takes advantage of the fact that the original pixel levels in the same block provide continuity.
Abstract: One of the major drawbacks of the block-based DCT compression methods is that they may result in visible artifacts at block boundaries due to coarse quantization of the coefficients. We propose an adaptive approach which performs blockiness reduction in both the DCT and spatial domains to reduce the block-to-block discontinuities. For smooth regions, our method takes advantage of the fact that the original pixel levels in the same block provide continuity and we use this property and the correlation between the neighboring blocks to reduce the discontinuity of the pixels across the boundaries. For texture and edge regions, we apply an edge-preserving smoothing filter. Simulation results show that the proposed algorithm significantly reduces the blocking artifacts of still and video images as judged by both objective and subjective measures.
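
A minimal sketch of the spatial-domain part of such an approach, under assumed thresholds: small steps across 8-pixel block boundaries are treated as blocking artifacts and softened, while larger steps (likely real edges) are left alone. The paper's DCT-domain processing and edge-preserving filtering of textured regions are not reproduced.

```python
import numpy as np

def deblock_vertical(img, threshold=8.0):
    """Soften small steps across vertical 8-pixel block boundaries."""
    out = img.astype(float).copy()
    for x in range(8, out.shape[1], 8):        # boundary between columns x-1 and x
        left, right = out[:, x - 1], out[:, x]
        step = right - left
        smooth = np.abs(step) < threshold      # small discontinuity: treat as blockiness
        left[smooth] += step[smooth] / 4.0
        right[smooth] -= step[smooth] / 4.0
    return np.clip(out, 0, 255).astype(np.uint8)

# Horizontal boundaries can be handled the same way on the transposed image,
# e.g. deblock_vertical(deblock_vertical(img).T).T
```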

171 citations


Journal ArticleDOI
TL;DR: Experimental results show that embedded watermarks using the proposed techniques can give good image quality and are robust in varying degree to JPEG compression, low-pass filtering, noise contamination, and print-and-scan.
Abstract: Three novel blind watermarking techniques are proposed to embed watermarks into digital images for different purposes. The watermarks are designed to be decoded or detected without the original images. The first one, called single watermark embedding (SWE), is used to embed a watermark bit sequence into digital images using two secret keys. The second technique, called multiple watermark embedding (MWE), extends SWE to embed multiple watermarks simultaneously in the same watermark space while minimizing the watermark (distortion) energy. The third technique, called iterative watermark embedding (IWE), embeds watermarks into JPEG-compressed images. The iterative approach of IWE can prevent the potential removal of a watermark in the JPEG recompression process. Experimental results show that embedded watermarks using the proposed techniques can give good image quality and are robust in varying degree to JPEG compression, low-pass filtering, noise contamination, and print-and-scan.

Journal ArticleDOI
TL;DR: It is shown how down-sampling an image to a low resolution, then using JPEG at the lower resolution, and subsequently interpolating the result to the original resolution can improve the overall PSNR performance of the compression process.
Abstract: The most popular lossy image compression method used on the Internet is the JPEG standard. JPEG's good compression performance and low computational and memory complexity make it an attractive method for natural image compression. Nevertheless, as we go to low bit rates that imply lower quality, JPEG introduces disturbing artifacts. It is known that, at low bit rates, a down-sampled image, when JPEG compressed, visually beats the high-resolution image compressed via JPEG to the same number of bits. Motivated by this idea, we show how down-sampling an image to a low resolution, then using JPEG at the lower resolution, and subsequently interpolating the result to the original resolution can improve the overall PSNR performance of the compression process. We give an analytical model and a numerical analysis of the down-sampling, compression and up-sampling process that makes explicit the possible quality/compression trade-offs. We show that the image auto-correlation can provide a good estimate for establishing the down-sampling factor that achieves optimal performance. Given a specific budget of bits, we determine the down-sampling factor necessary to get the best possible recovered image in terms of PSNR.
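
A hedged sketch of the down-sample / compress / up-sample pipeline compared with plain JPEG; the file name, the fixed factor of 2, and the quality setting are assumptions, whereas the paper derives the optimal down-sampling factor from the image auto-correlation and a given bit budget.

```python
import io
import numpy as np
from PIL import Image

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def jpeg_roundtrip(img, quality):
    """Compress to an in-memory JPEG; return the decoded image and its size in bytes."""
    buf = io.BytesIO()
    img.save(buf, format='JPEG', quality=quality)
    return Image.open(io.BytesIO(buf.getvalue())), buf.tell()

orig = Image.open('input.png').convert('L')    # hypothetical input file
ref = np.asarray(orig)

# Plain JPEG at a low quality setting.
plain, plain_bytes = jpeg_roundtrip(orig, quality=20)

# Down-sample by 2, JPEG, then interpolate back to the original resolution.
small = orig.resize((orig.width // 2, orig.height // 2), Image.LANCZOS)
small_jpeg, small_bytes = jpeg_roundtrip(small, quality=20)
restored = small_jpeg.resize(orig.size, Image.BICUBIC)

print(f'plain JPEG      : {plain_bytes:6d} bytes, PSNR {psnr(ref, np.asarray(plain)):.2f} dB')
print(f'down/up-sampled : {small_bytes:6d} bytes, PSNR {psnr(ref, np.asarray(restored)):.2f} dB')
```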

Journal ArticleDOI
TL;DR: Methods and issues involved in the compression of CFA data before full color interpolation are discussed; the compression methods described operate on the same number of pixels as the sensor data.
Abstract: Many consumer digital color cameras use a single light sensitive sensor and a color filter array (CFA), with each pixel element recording intensity information of one color component. The captured data is interpolated into a full color image, which is then compressed in many applications. Carrying out color interpolation before compression introduces redundancy in the data. In this paper we discuss methods and issues involved in the compression of CFA data before full color interpolation. The compression methods described operate on the same number of pixels as the sensor data. To obtain improved image quality, median filtering is applied as post-processing. Furthermore, to assure low complexity, the CFA data is compressed by JPEG. Simulations have demonstrated that substantial improvement in image quality is achievable using these new schemes.

Proceedings ArticleDOI
24 Nov 2003
TL;DR: A new method to evaluate the quality of distorted images is presented, based on a comparison between the structural information extracted from the distorted image and from the original image; its results are highly correlated with human judgments (mean opinion score).
Abstract: This paper presents a new method to evaluate the quality of distorted images. This method is based on a comparison between the structural information extracted from the distorted image and from the original image. The interest of our method is that it uses reduced references containing perceptual structural information. First, a quick overview of image quality evaluation methods is given. Then the implementation of our human visual system (HVS) model is detailed. Finally, results are given for quality evaluation of JPEG and JPEG2000 coded images. They show that our method provides results which are highly correlated with human judgments (mean opinion score). This method has been implemented in an application available on the Internet.

Proceedings ArticleDOI
06 Apr 2003
TL;DR: The paper presents a digital color image watermarking scheme using a hypercomplex numbers representation and the quaternion Fourier transform (QFT) and the fact that perceptive QFT embedding can offer robustness to luminance filtering techniques is outlined.
Abstract: The paper presents a digital color image watermarking scheme using a hypercomplex numbers representation and the quaternion Fourier transform (QFT). Previous color image watermarking methods are first presented and the quaternion representation is then described. In this framework, RGB pixel values are associated with a unique quaternion number having three imaginary parts. The QFT is presented; this transform depends on an arbitrary unit pure quaternion, μ. The value of μ is selected to provide embedding spaces having robustness and/or perceptual properties. In our approach, μ is a function of the mean color value of a block and a perceptual component. A watermarking scheme based on the QFT and the quantization index modulation scheme is then presented. This scheme is evaluated for different color image filtering processes (JPEG, blur). The fact that perceptive QFT embedding can offer robustness to luminance filtering techniques is outlined.

Proceedings ArticleDOI
02 Nov 2003
TL;DR: This demonstration will show the video sensor networking technologies developed at the OGI School of Science and Engineering, which allow programmers to create application-specific filtering, power management, and event triggering mechanisms.
Abstract: Video-based sensor networks can provide important visual information in a number of applications including: environmental monitoring, health care, emergency response, and video security. This paper describes the Panoptes video-based sensor networking architecture, including its design, implementation, and performance. We describe a video sensor platform that can deliver high-quality video over 802.11 networks with a power requirement of approximately 5 watts. In addition, we describe the streaming and prioritization mechanisms that we have designed to allow it to survive long periods of disconnected operation. Finally, we describe a sample application and bitmapping algorithm that we have implemented to show the usefulness of our platform. Our experiments include an in-depth analysis of the bottlenecks within the system as well as power measurements for the various components of the system.

Journal ArticleDOI
TL;DR: A simple parallel algorithm for decoding a Huffman encoded file is presented, exploiting the tendency of Huffman codes to resynchronize quickly after possible decoding errors in most cases.
Abstract: A simple parallel algorithm for decoding a Huffman encoded file is presented, exploiting the tendency of Huffman codes to resynchronize quickly, i.e., to recover from possible decoding errors, in most cases. The average number of bits that have to be processed until synchronization is analyzed and shows good agreement with empirical data. As Huffman coding is also a part of the JPEG image compression standard, the suggested algorithm is then adapted to the parallel decoding of JPEG files.
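
A small toy demonstration of the self-synchronization property the algorithm exploits, using an invented four-symbol prefix code rather than a real JPEG Huffman table: a decoder that starts a few bits off usually re-aligns with the correct parse after a handful of symbols, which is what makes chunk-parallel decoding workable.

```python
import random

CODE = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}   # toy prefix (Huffman-like) code
DECODE = {v: k for k, v in CODE.items()}

def encode(msg):
    return ''.join(CODE[ch] for ch in msg)

def symbol_boundaries(bits, start):
    """Bit positions at which symbols end when greedily decoding from `start`."""
    ends, pos, cur = [], start, ''
    while pos < len(bits):
        cur += bits[pos]
        pos += 1
        if cur in DECODE:
            ends.append(pos)
            cur = ''
    return ends

random.seed(1)
msg = ''.join(random.choice('abcd') for _ in range(60))
bits = encode(msg)

correct = set(symbol_boundaries(bits, 0))
shifted = symbol_boundaries(bits, 2)                   # start 2 bits too late
sync_at = next((p for p in shifted if p in correct), None)
if sync_at is None:
    print('no re-synchronization inside this message')
else:
    print(f'decoder starting 2 bits off re-aligns with the correct parse after bit {sync_at}')
```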

01 Jul 2003
TL;DR: This proposal presents a variation of the YCoCg color space, including its simple transformation equations relative to RGB and its improved coding gain relative to both RGB and YCbCr, which has been applied to JPEG XR image compression and to texture compression through the YCoCg-DXT algorithm.
Abstract: At the latest JVT meeting we presented the YCoCg color space, including its simple transformation equations relative to RGB and its improved coding gain relative to both RGB and YCbCr. We also discussed the reversibility of the RGB to YCoCg conversion process in the case that two additional bits of precision are used for YCoCg relative to the precision used for source RGB data. In this proposal, we additionally present a variation of this color space, which we call YCoCg-R. The YCoCg color space has been applied to JPEG XR image compression and to texture compression through the YCoCg-DXT algorithm.
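
A short sketch of the reversible (YCoCg-R style) lifting transform the abstract refers to, in the commonly cited integer form; variable names and the round-trip test are mine, and Python's >> on negative integers is a floor shift, matching the usual integer-lifting convention.

```python
def rgb_to_ycocg_r(r, g, b):
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

# Round-trip check on a few 8-bit RGB samples: the transform is exactly invertible.
for rgb in [(0, 0, 0), (255, 255, 255), (255, 0, 0), (12, 200, 99)]:
    assert ycocg_r_to_rgb(*rgb_to_ycocg_r(*rgb)) == rgb
print('YCoCg-R round-trip is lossless on the samples above')
```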

Proceedings ArticleDOI
TL;DR: This paper identifies two limitations of the proposed approach and shows how they can be overcome to obtain accurate detection in every case and outlines a condition that must be satisfied by all secure high-capacity steganographic algorithms for JPEGs.
Abstract: In this paper, we present a general methodology for developing attacks on steganographic systems for the JPEG image format. The detection first starts by decompressing the JPEG stego image, geometrically distorting it (e.g., by cropping), and recompressing. Because the geometrical distortion breaks the quantized structure of DCT coefficients during recompression, the distorted/recompressed image will have many macroscopic statistics approximately equal to those of the cover image. We choose a macroscopic statistic S that also changes predictably with the embedded message length. By doing so, we estimate the unknown message length by comparing the values of S for the stego image and the cropped/recompressed stego image. The details of this detection methodology are explained on the F5 algorithm and OutGuess. The accuracy of the message length estimate is demonstrated on test images for both algorithms. Finally, we identify two limitations of the proposed approach and show how they can be overcome to obtain accurate detection in every case. The paper closes by outlining a condition that must be satisfied by all secure high-capacity steganographic algorithms for JPEGs.

Journal ArticleDOI
TL;DR: A novel statistical feature extraction algorithm is proposed to characterize the image content in its compressed domain by computing a set of moments directly from DCT coefficients, without full decompression or inverse DCT.

Patent
Wei-ge Chen, Chao He
14 Jul 2003
TL;DR: In this paper, a unified lossy and lossless audio compression scheme is proposed, which combines lossy and lossless audio compression within the same audio signal and employs mixed lossless coding of a transition frame between lossy and lossless coding frames to produce seamless transitions.
Abstract: A unified lossy and lossless audio compression scheme combines lossy and lossless audio compression within a same audio signal. This approach employs mixed lossless coding of a transition frame between lossy and lossless coding frames to produce seamless transitions. The mixed lossless coding performs a lapped transform and inverse lapped transform to produce an appropriately windowed and folded pseudo-time domain frame, which can then be losslessly coded. The mixed lossless coding also can be applied for frames that exhibit poor lossy compression performance.

Journal ArticleDOI
TL;DR: The results show how model observers can be successfully used to perform automated evaluation and optimization of diagnostic performance in clinically relevant visual tasks using real anatomic backgrounds.
Abstract: We compared the ability of three model observers (nonprewhitening matched filter with an eye filter, Hotelling and channelized Hotelling) in predicting the effect of JPEG and wavelet (CREW code) image compression on human visual detection of a simulated lesion in single frame digital x-ray coronary angiograms. All three model observers predicted the JPEG superiority present in human performance, although the nonprewhitening matched filter with an eye filter (NPWE) and the channelized Hotelling models were better predictors than the Hotelling model. The commonly used root mean square error and related peak signal to noise ratio metrics incorrectly predicted a JPEG inferiority. A particular image discrimination/perceptual difference model correctly predicted a JPEG advantage at low compression ratios but incorrectly predicted a JPEG inferiority at high compression ratios. In the second part of the paper, the NPWE model was used to perform automated simulated annealing optimization of the quantization matrix of the JPEG algorithm at 25:1 compression ratio. A subsequent psychophysical study resulted in improved human detection performance for images compressed with the NPWE optimized quantization matrix over the JPEG default quantization matrix. Together, our results show how model observers can be successfully used to perform automated evaluation and optimization of diagnostic performance in clinically relevant visual tasks using real anatomic backgrounds.

Proceedings ArticleDOI
09 Mar 2003
TL;DR: This paper proposes a fast and effective steganalytic technique based on statistical distributions of DCT coefficients which is aimed at two kinds of popular JSteg-like steganographic systems, sequential JSteg and random JSteg for JPEG images.
Abstract: Detection of hidden messages in images, also known as image steganalysis, is of great significance to network information security. In this paper, we propose a fast and effective steganalytic technique based on statistical distributions of DCT coefficients, aimed at two kinds of popular JSteg-like steganographic systems: sequential JSteg and random JSteg for JPEG images. Our approach can not only determine the existence of hidden messages in JPEG images reliably, but also estimate the amount of hidden messages accurately. Its advantages also include simplicity, computational efficiency and easy implementation of real-time detection. Experimental results show the superiority of our approach over other steganalytic techniques.
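
For orientation, a hedged sketch of the kind of first-order check on quantized DCT coefficients that this line of work builds on: a chi-square test on the pairs of values that LSB-style JSteg embedding tends to equalize. It is not the authors' estimator, and the coeffs array is assumed to be extracted from the JPEG by some other tool.

```python
import numpy as np
from scipy.stats import chi2

def jsteg_chi_square(coeffs, max_abs=20):
    """coeffs: 1-D array of quantized AC DCT coefficients read from the JPEG."""
    counts = {v: int(np.sum(coeffs == v)) for v in range(-max_abs, max_abs + 1)}
    obs, exp = [], []
    for k in range(-max_abs // 2, max_abs // 2):
        lo, hi = 2 * k, 2 * k + 1
        if lo in (0, 1) or hi in (0, 1):       # JSteg never embeds in values 0 and 1
            continue
        pair = counts[lo] + counts[hi]
        if pair > 4:                           # ignore sparsely populated pairs
            obs.append(counts[lo])
            exp.append(pair / 2.0)
    if not obs:
        return float('nan')
    stat = sum((o - e) ** 2 / e for o, e in zip(obs, exp))
    return chi2.sf(stat, df=max(len(obs) - 1, 1))   # near 1.0: pairs equalized, embedding likely
```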

Journal ArticleDOI
TL;DR: This paper details work undertaken on the application of an algorithm for visual attention (VA) to region of interest (ROI) coding in JPEG 2000 (JP2K), and describes how the output of the VA algorithm is post-processed so that an ROI is produced that can be efficiently coded using coefficient scaling in JP2K.

Patent
12 Nov 2003
TL;DR: In this article, the focal plane array based motion sensor of the hybrid simultaneous-mode MPEG X/JPEG X security video camera (100) is positioned to capture moving suspects.
Abstract: FIG. 1 is a diagram of an unmanned, fully automatic security installation with electronic pan and tilt functions. The focal plane array based motion sensor (120) of the hybrid simultaneous-mode MPEG X/JPEG X security video camera (100) is positioned to capture moving suspects, and the moving suspect (800) is shown. The local area network (LAN) cable (804) is shown leading away from the hybrid MPEG X/JPEG X security video camera (100), together with a security room personal computer viewing station (808) and, lastly, a digital computer tape video logging station (816).

Journal ArticleDOI
TL;DR: This paper describes an effective technique for image authentication, which can prevent malicious manipulations but allow JPEG lossy compression, and shows that the design of the authenticator depends on the number of recompression times and whether the image is decoded into integral values in the pixel domain during the recompression process.
Abstract: Image authentication verifies the originality of an image by detecting malicious manipulations. This goal is different from that of image watermarking which embeds into the image a signature surviving most manipulations. Most existing methods for image authentication treat all types of manipulation equally (i.e., as unacceptable). However, some applications demand techniques that can distinguish acceptable manipulations (e.g., compression) from malicious ones. In this paper, we describe an effective technique for image authentication, which can prevent malicious manipulations but allow JPEG lossy compression. The authentication signature is based on the invariance of the relationship between the DCT coefficients at the same position in separate blocks of an image. This relationship will be preserved when these coefficients are quantized in a JPEG compression process. Our proposed method can distinguish malicious manipulations from JPEG lossy compression regardless of how high the compression ratio is. We also show that, in different practical cases, the design of the authenticator depends on the number of recompression times, and whether the image is decoded into integral values in the pixel domain during the recompression process. Theoretical and experimental results indicate that this technique is effective for image authentication.
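
A simplified sketch of the invariant the scheme relies on: the sign of the difference between DCT coefficients at the same position in two blocks is preserved when both blocks are quantized with the same table. The block pairing, the chosen coefficient positions, and the key handling are illustrative assumptions, and the equal-coefficient corner case treated in the paper is ignored.

```python
import numpy as np
from scipy.fft import dctn

def block_dct(pixels):
    h, w = pixels.shape
    blocks = []
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            blocks.append(dctn(pixels[y:y+8, x:x+8].astype(float) - 128.0, norm='ortho'))
    return blocks

def signature(pixels, positions=((1, 2), (2, 1), (0, 3)), seed=42):
    """One bit per (block pair, coefficient position): sign of the difference."""
    blocks = block_dct(pixels)
    rng = np.random.default_rng(seed)          # secret key drives the block pairing
    order = rng.permutation(len(blocks))
    bits = []
    for i in range(0, len(order) - 1, 2):
        p, q = blocks[order[i]], blocks[order[i + 1]]
        bits.extend(int(p[u, v] >= q[u, v]) for (u, v) in positions)
    return np.array(bits, dtype=np.uint8)

# Usage idea: compute the signature of the original image, then recompute it after
# JPEG compression (bits should match) or after a local edit (bits in the affected
# block pairs flip), and compare.
```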

Proceedings ArticleDOI
TL;DR: A new image quality measure based on the Singular Value Decomposition is presented that can be used as a multidimensional or a scalar measure to predict the distortion introduced by a wide range of noise sources.
Abstract: The important criteria used in subjective evaluation of distorted images include the amount of distortion, the type of distortion, and the distribution of error. An ideal image quality measure should therefore be able to mimic the human observer. We present a new image quality measure that can be used as a multidimensional or a scalar measure to predict the distortion introduced by a wide range of noise sources. Based on the Singular Value Decomposition, it reliably measures the distortion not only within a distortion type at different distortion levels but also across different distortion types. The measure was applied to Lena using six types of distortion (JPEG, JPEG 2000, Gaussian blur, Gaussian noise, sharpening and DC-shifting), each with five distortion levels.
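
A hedged sketch of an SVD-based block comparison in this spirit: singular values of corresponding blocks of the reference and distorted images are compared, giving a per-block distortion map (the multidimensional measure) that is then collapsed into a scalar. The block size and the scalar summary are illustrative choices, not necessarily the paper's exact definition.

```python
import numpy as np

def svd_quality(ref, dist, block=8):
    """Return the per-block distortion map and a scalar summary."""
    h, w = ref.shape
    dmap = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            s_ref = np.linalg.svd(ref[y:y+block, x:x+block].astype(float), compute_uv=False)
            s_dst = np.linalg.svd(dist[y:y+block, x:x+block].astype(float), compute_uv=False)
            dmap.append(np.linalg.norm(s_ref - s_dst))
    dmap = np.array(dmap)
    # Scalar summary: spread of the block distortions around their median.
    return dmap, float(np.mean(np.abs(dmap - np.median(dmap))))
```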

Journal ArticleDOI
TL;DR: A novel heuristic for requantizing JPEG images which incorporates the well-known Laplacian distribution of the AC discrete cosine transform coefficients with an analysis of the error introduced by requantization is reported.
Abstract: We report a novel heuristic for requantizing JPEG images. The resulting images are generally smaller and often have improved perceptual image quality over a "blind" requantization approach, that is, one that does not consider the properties of the quantization matrices. The heuristic is supported by a detailed mathematical treatment which incorporates the well-known Laplacian distribution of the AC discrete cosine transform (DCT) coefficients with an analysis of the error introduced by requantization. We note that the technique is applicable to any image compression method which employs discrete cosine transforms and quantization.
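
A minimal sketch of requantization in the DCT domain, the operation the heuristic is concerned with: a coefficient index already quantized with step q1 is mapped onto a coarser grid with step q2 without full decompression. The "blind" rounding rule shown is the baseline such a heuristic would refine.

```python
import numpy as np

def requantize(levels, q1, q2):
    """levels: integer quantization indices stored in the JPEG (coefficient = level * q1)."""
    coeffs = levels.astype(np.int64) * q1          # dequantize
    return np.round(coeffs / q2).astype(np.int64)  # requantize with the coarser step q2

levels = np.array([-3, -1, 0, 1, 2, 5])
print(requantize(levels, q1=4, q2=6))   # -> [-2 -1  0  1  1  3]
```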

Proceedings ArticleDOI
06 Jul 2003
TL;DR: A novel content-based image authentication framework which embeds the authentication information into the host image using a lossless data hiding approach and can tolerate JPEG compression to a certain extent while rejecting common tampering to the image.
Abstract: In this paper, we present a novel content-based image authentication framework which embeds the authentication information into the host image using a lossless data hiding approach. In this framework the features of a target image are first extracted and signed using the digital signature algorithm (DSA). The authentication information is generated from the signature and the features are then inserted into the target image using a lossless data hiding algorithm. In this way, the unperturbed version of the original image can be obtained after the embedded data are extracted. An important advantage of our approach is that it can tolerate JPEG compression to a certain extent while rejecting common tampering with the image. The experimental results show that our framework works well with JPEG quality factors greater than or equal to 80, which are acceptable for most authentication applications.

Patent
05 Dec 2003
TL;DR: In this article, the authors proposed an image processing apparatus for carrying out image compression/decoding with a simple arithmetic operation, making the processing system compact, and obtaining a compression efficiency and image quality after decoding equivalent to those by the JPEG.
Abstract: PROBLEM TO BE SOLVED: To provide an image processing apparatus that carries out image compression/decoding with simple arithmetic operations, keeps the processing system compact, and obtains a compression efficiency and decoded image quality equivalent to those of JPEG. SOLUTION: The image processing apparatus includes a DC image generating section 3 that averages pixel values to obtain a DC value for each pixel block of 4×4 pixels. A first-stage Hadamard encoding section 5 divides the pixel block into sub pixel blocks of 2×2 pixels each, obtains the average pixel value of each sub pixel block, predicts an AC component to obtain a DC pixel value for the sub pixel blocks from the DC values of the pixel block being processed and of the adjacent pixel blocks, computes the difference between the average pixel value and the DC pixel value, applies a Hadamard transform to each pixel of the difference, and outputs a first Hadamard coefficient. A second-stage Hadamard encoding section 7 predicts an AC component to obtain the DC value of each pixel from the average pixel value, computes the per-pixel difference between that DC value and the pixel block, and obtains the Hadamard coefficient of the difference.
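
A brief sketch of just the Hadamard transform building block named in the abstract, applied to a 2×2 sub pixel block as a separable matrix product; the DC prediction and difference computations of the patented encoder are not reproduced.

```python
import numpy as np

H2 = np.array([[1, 1],
               [1, -1]], dtype=float)

def hadamard_2x2(block):
    """Forward 2-D Hadamard transform of a 2x2 block (orthonormal scaling)."""
    return H2 @ block @ H2.T / 2.0

def inv_hadamard_2x2(coeffs):
    """With this scaling the transform is its own inverse."""
    return H2 @ coeffs @ H2.T / 2.0

block = np.array([[10.0, 12.0],
                  [11.0, 15.0]])
coeffs = hadamard_2x2(block)
assert np.allclose(inv_hadamard_2x2(coeffs), block)
print(coeffs)   # coeffs[0, 0] is proportional to the block mean (the DC term)
```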