Showing papers on "JPEG published in 2009"


Journal ArticleDOI
Hany Farid1
TL;DR: A technique is described to detect whether part of an image was initially compressed at a lower quality than the rest of the image; the approach is applicable to images of both high and low quality and resolution.
Abstract: When creating a digital forgery, it is often necessary to combine several images, for example, when compositing one person's head onto another person's body. If these images were originally of different JPEG compression quality, then the digital composite may contain a trace of the original compression qualities. To this end, we describe a technique to detect whether part of an image was initially compressed at a lower quality than the rest of the image. This approach is applicable to images of both high and low quality and resolution.
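The core idea lends itself to a compact sketch: re-save the image at a range of qualities and look for regions whose recompression error dips at a quality lower than the rest of the image. The Python sketch below (using Pillow and NumPy) illustrates that general recompression-difference idea under an assumed block size and quality range; it is not Farid's exact implementation, and the file name is a placeholder.

```python
import io
import numpy as np
from PIL import Image

def jpeg_ghost_map(img, quality, block=16):
    """Per-block mean squared difference between img and its re-save at `quality`."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf)
    a = np.asarray(img.convert("L"), dtype=np.float64)
    b = np.asarray(recompressed.convert("L"), dtype=np.float64)
    d = (a - b) ** 2
    h, w = d.shape
    h, w = h - h % block, w - w % block
    return d[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))

img = Image.open("composite.jpg")                      # placeholder file name
maps = {q: jpeg_ghost_map(img, q) for q in range(50, 100, 5)}
# Blocks whose difference reaches its minimum at a lower quality than the
# surrounding image are candidates for a region previously compressed at that
# lower quality.
```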

427 citations


Journal ArticleDOI
TL;DR: This paper proposes detecting tampered images by examining the double quantization effect hidden among the discrete cosine transform (DCT) coefficients; to date, the proposed approach is the only one that can automatically locate the tampered region.
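The double quantization effect referred to here shows up as a periodic pattern of peaks and near-empty bins in the histogram of requantized coefficients. The toy sketch below uses synthetic data to make the effect visible; it is not the paper's detector, and the quantization steps are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=20.0, size=100_000)             # synthetic DCT-like coefficients

q1, q2 = 7, 5                                              # first and second quantization steps
once  = np.round(coeffs / q2) * q2                         # single compression with step q2
twice = np.round((np.round(coeffs / q1) * q1) / q2) * q2   # step q1 followed by step q2

bins = np.arange(-100, 101, q2)
hist_once, _  = np.histogram(once,  bins)
hist_twice, _ = np.histogram(twice, bins)
# hist_twice shows the characteristic periodic peaks and gaps that a detector
# can test for, frequency by frequency, to flag and localize tampered blocks.
print(hist_once[:15])
print(hist_twice[:15])
```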

340 citations


Journal ArticleDOI
01 Mar 2009
TL;DR: The experimental results show that the estimated visual quality of the proposed RCGA-ELM emulates the mean opinion score very well and indicate that the proposed schemes significantly improve the performance of the ELM classifier under imbalanced conditions for image quality assessment.
Abstract: In this paper, we present a machine learning approach to measure the visual quality of JPEG-coded images. The features for predicting the perceived image quality are extracted by considering key human visual sensitivity (HVS) factors such as edge amplitude, edge length, background activity and background luminance. Image quality assessment involves estimating the functional relationship between HVS features and subjective test scores. The quality of the compressed images is estimated without referring to their original images (a 'No Reference' metric). Here, the problem of quality estimation is transformed into a classification problem and solved using the extreme learning machine (ELM) algorithm. In ELM, the input weights and the bias values are randomly chosen and the output weights are analytically calculated. The generalization performance of the ELM algorithm for classification problems with imbalance in the number of samples per quality class depends critically on the input weights and the bias values. Hence, we propose two schemes, namely the k-fold selection scheme (KS-ELM) and the real-coded genetic algorithm (RCGA-ELM), to select the input weights and the bias values such that the generalization performance of the classifier is maximized. Results indicate that the proposed schemes significantly improve the performance of the ELM classifier under imbalanced conditions for image quality assessment. The experimental results show that the estimated visual quality of the proposed RCGA-ELM emulates the mean opinion score very well. The experimental results are compared with the existing JPEG no-reference image quality metric and the full-reference structural similarity image quality metric.
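As the abstract notes, the ELM step itself is simple: hidden-layer weights and biases are drawn at random and only the output weights are solved analytically. The sketch below shows that basic step with NumPy on placeholder feature and target arrays; the paper's contribution, the KS-ELM and RCGA-ELM selection of the random parameters, is not reproduced here.

```python
import numpy as np

def elm_train(X, T, n_hidden=50, seed=0):
    """Basic ELM: random input weights/biases, output weights by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))   # random input weights
    b = rng.uniform(-1, 1, n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer responses
    beta = np.linalg.pinv(H) @ T                     # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Placeholder data: 4 HVS features per image, 5 quality classes (one-hot targets).
X = np.random.rand(200, 4)
T = np.eye(5)[np.random.randint(0, 5, 200)]
W, b, beta = elm_train(X, T)
predicted_class = elm_predict(X, W, b, beta).argmax(axis=1)
```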

228 citations


Journal ArticleDOI
TL;DR: The block artifact grids (BAGs) are extracted blindly with a new extraction algorithm, and abnormal BAGs can then be detected with a marking procedure; the phenomenon of grid mismatch or grid blank can be taken as evidence in such forensic analysis.

166 citations


Journal ArticleDOI
TL;DR: JPEG XR is the newest image coding standard from the JPEG committee and achieves high image quality, on par with JPEG 2000, while requiring low computational resources and storage capacity.
Abstract: JPEG XR is the newest image coding standard from the JPEG committee. It primarily targets the representation of continuous-tone still images such as photographic images and achieves high image quality, on par with JPEG 2000, while requiring low computational resources and storage capacity. Moreover, it effectively addresses the needs of emerging high dynamic range imagery applications by including support for a wide range of image representation formats.

163 citations


BookDOI
28 Dec 2009
TL;DR: A survey of recent results in the compression of remotely sensed 3D data is provided, with particular interest in hyperspectral imagery, where lossy methods trade off the compression achieved against the quality of the decompressed image.
Abstract: Hyperspectral Data Compression provides a survey of recent results in the field of compression of remotely sensed 3D data, with a particular interest in hyperspectral imagery. Chapter 1 addresses compression architecture, and reviews and compares compression methods. Chapters 2 through 4 focus on lossless compression (where the decompressed image must be bit for bit identical to the original). Chapter 5, contributed by the editors, describes a lossless algorithm based on vector quantization with extensions to near lossless and possibly lossy compression for efficient browsing and pure pixel classification. Chapter 6 deals with near lossless compression, while Chapter 7 considers lossy techniques constrained by almost perfect classification. Chapters 8 through 12 address lossy compression of hyperspectral imagery, where there is a tradeoff between compression achieved and the quality of the decompressed image. Chapter 13 examines artifacts that can arise from lossy compression.

119 citations


Journal ArticleDOI
TL;DR: A quantitative comparison between the energy costs associated with direct transmission of uncompressed images and sensor platform-based JPEG compression followed by transmission of the compressed image data is presented.
Abstract: One of the most important goals of current and future sensor networks is energy-efficient communication of images. This paper presents a quantitative comparison between the energy costs associated with 1) direct transmission of uncompressed images and 2) sensor platform-based JPEG compression followed by transmission of the compressed image data. JPEG compression computations are mapped onto various resource-constrained platforms using a design environment that allows computation using the minimum integer and fractional bit-widths needed in view of other approximations inherent in the compression process and choice of image quality parameters. Advanced applications of JPEG, such as region of interest coding and successive/progressive transmission, are also examined. Detailed experimental results examining the tradeoffs in processor resources, processing/transmission time, bandwidth utilization, image quality, and overall energy consumption are presented.
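The comparison at the heart of the abstract can be written down as simple arithmetic: compression pays off when the transmission energy it saves exceeds the computation energy it adds. The per-bit and per-pixel costs in the sketch below are hypothetical placeholders, not measurements from the paper.

```python
# Hypothetical energy costs; real values depend on the radio and the platform.
E_TX_PER_BIT   = 0.5e-6   # joules per transmitted bit (assumed)
E_OP_PER_PIXEL = 2.0e-6   # joules of JPEG computation per pixel (assumed)

def energy_raw(width, height, bpp=8):
    """Energy to transmit the uncompressed image."""
    return width * height * bpp * E_TX_PER_BIT

def energy_jpeg(width, height, compression_ratio, bpp=8):
    """Energy to compress on the sensor node and transmit the compressed data."""
    compute  = width * height * E_OP_PER_PIXEL
    transmit = width * height * bpp / compression_ratio * E_TX_PER_BIT
    return compute + transmit

w, h = 320, 240
print("raw transmission:", energy_raw(w, h), "J")
print("compress + send :", energy_jpeg(w, h, compression_ratio=20), "J")
```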

103 citations


Journal ArticleDOI
01 Sep 2009
TL;DR: This paper proposes a method that combines JPEG-LS with interframe coding using motion vectors to enhance the compression performance of JPEG-LS alone, achieving average compression gains of 13.3% and 26.3% over using JPEG-LS and JPEG 2000 alone, respectively.
Abstract: Hospitals and medical centers produce an enormous amount of digital medical images every day, especially in the form of image sequences, which requires considerable storage space. One solution could be the application of lossless compression. Among available methods, JPEG-LS has excellent coding performance. However, it only compresses a single picture with intracoding and does not utilize the interframe correlation among pictures. Therefore, this paper proposes a method that combines JPEG-LS with an interframe coding scheme based on motion vectors to enhance the compression performance of using JPEG-LS alone. Since the interframe correlation between two adjacent images in a medical image sequence is usually not as high as that in a general video sequence, the interframe coding is activated only when the interframe correlation is high enough. With six capsule endoscope image sequences under test, the proposed method achieves average compression gains of 13.3% and 26.3% over the methods of using JPEG-LS and JPEG 2000 alone, respectively. Similarly, for an MRI image sequence, coding gains of 77.5% and 86.5% are correspondingly obtained.
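The gating logic described above, switching to interframe coding only when adjacent frames are sufficiently correlated, can be sketched as follows. The correlation threshold and the coder callables are placeholders rather than the paper's values or routines, and the motion search is omitted.

```python
import numpy as np

CORR_THRESHOLD = 0.9   # hypothetical activation threshold

def frame_correlation(prev, curr):
    return np.corrcoef(prev.astype(np.float64).ravel(),
                       curr.astype(np.float64).ravel())[0, 1]

def encode_sequence(frames, intra_coder, inter_coder):
    prev = None
    for curr in frames:
        if prev is not None and frame_correlation(prev, curr) > CORR_THRESHOLD:
            # interframe path: code the residual (the paper adds motion compensation)
            inter_coder(curr.astype(np.int16) - prev.astype(np.int16))
        else:
            # intraframe path: plain JPEG-LS coding of the frame
            intra_coder(curr)
        prev = curr

frames = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(3)]
encode_sequence(frames, intra_coder=lambda f: None, inter_coder=lambda r: None)
```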

97 citations


BookDOI
07 Aug 2009
TL;DR: This book provides a reference for the JPEG 2000 suite of standards, covering the core coding system, its extensions (including Motion JPEG 2000, the JPM compound image format, JPSEC security, JPIP interactivity, JP3D volumetric coding, and JPWL wireless transmission), and applications ranging from digital cinema and medical imaging to broadcasting and remote sensing.
Abstract: Contributor Biographies. Foreword. Series Editor's Preface. Preface. Acknowledgments. List of Acronyms. Part A. 1 JPEG 2000 Core Coding System (Part 1) ( Majid Rabbani, Rajan L. Joshi, and Paul W. Jones ). 1.1 Introduction. 1.2 JPEG 2000 Fundamental Building Blocks. 1.3 JPEG 2000 Bit-Stream Organization. 1.4 JPEG 2000 Rate Control. 1.5 Performance Comparison of the JPEG 2000 Encoder Options. 1.6 Additional Features of JPEG 2000 Part 1. Acknowledgments. References. 2 JPEG 2000 Extensions (Part 2) ( Margaret Lepley, J. Scott Houchin, James Kasner, and Michael Marcellin ). 2.1 Introduction. 2.2 Variable DC Offset. 2.3 Variable Scalar Quantization. 2.4 Trellis-Coded Quantization. 2.5 Precinct-Dependent Quantization. 2.6 Extended Visual Masking. 2.7 Arbitrary Decomposition. 2.8 Arbitrary Wavelet Transforms. 2.9 Multiple-Component Transform Extensions. 2.10 Nonlinear Point Transform. 2.11 Geometric Manipulation via a Code-Block Anchor Point (CBAP). 2.12 Single-Sample Overlap. 2.13 Region of Interest. 2.14 Extended File Format: JPX. 2.15 Extended Capabilities Signaling. Acknowledgments. References. 3 Motion JPEG 2000 and ISO Base Media File Format (Parts 3 and 12) ( Joerg Mohr ). 3.1 Introduction. 3.2 Motion JPEG 2000 and ISO Base Media File Format. 3.3 ISO Base Media File Format. 3.4 Motion JPEG 2000. References. 4 Compound Image File Format (Part 6) ( Frederik Temmermans, Tim Bruylants, Simon McPartlin, and Louis Sharpe ). 4.1 Introduction. 4.2 The JPM File Format. 4.3 Mixed Raster Content Model (MRC). 4.4 Streaming JPM Files. 4.5 Referencing JPM Files. 4.6 Metadata. 4.7 Boxes. 4.8 Profiles. 4.9 Conclusions. References. 5 JPSEC: Securing JPEG 2000 Files (Part 8) ( Susie Wee and Zhishou Zhang ). 5.1 Introduction. 5.2 JPSEC Security Services. 5.3 JPSEC Architecture. 5.4 JPSEC Framework. 5.5 What: JPSEC Security Services. 5.6 Where: Zone of Influence (ZOI). 5.7 How: Processing Domain and Granularity. 5.8 JPSEC Examples. 5.9 Summary. References. 6 JPIP - Interactivity Tools, APIs, and Protocols (Part 9) ( Robert Prandolini ). 6.1 Introduction. 6.2 Data-Bins. 6.3 JPIP Basics. 6.4 Client Request-Server Response. 6.5 Advanced Topics. 6.6 Conclusions. Acknowledgments. References. 7 JP3D - Extensions for Three-Dimensional Data (Part 10) ( Tim Bruylants, Peter Schelkens, and Alexis Tzannes ). 7.1 Introduction. 7.2 JP3D: Going Volumetric. 7.3 Bit-Stream Organization. 7.4 Additional Features of JP3D. 7.5 Compression performances: JPEG 2000 Part 1 versus JP3D. 7.6 Implications for Other Parts of JPEG 2000. Acknowledgments. References. 8 JPWL - JPEG 2000 Wireless (Part 11) ( Frederic Dufaux ). 8.1 Introduction. 8.2 Background. 8.3 JPWL Overview. 8.4 Normative Parts. 8.5 Informative Parts. 8.6 Summary. Acknowledgments. References. Part B. 9 JPEG 2000 for Digital Cinema ( Siegfried F o ssel ). 9.1 Introduction. 9.2 General Requirements for Digital Cinema. 9.3 Distribution of Digital Cinema Content. 9.4 Archiving of Digital Movies. 9.5 Future Use of JPEG 2000 within Digital Cinema. 9.6 Conclusions. Acknowledgments. References. 10 Security Applications for JPEG 2000 Imagery ( John Apostolopoulos, Frederic Dufaux, and Qibin Sun ). 10.1 Introduction. 10.2 Secure Transcoding and Secure Streaming. 10.3 Multilevel Access Control. 10.4 Selective or Partial Encryption of Image Content. 10.5 Image Authentication. 10.6 Summary. Acknowledgments. References. 11 Video Surveillance and Defense Imaging ( Touradj Ebrahimi and Frederic Dufaux ). 11.1 Introduction. 11.2 Scrambling. 
11.3 Overview of a Typical Video Surveillance System. 11.4 Overview of a Video Surveillance System Based on JPEG 2000 and ROI Scrambling. 12 JPEG 2000 Application in GIS and Remote Sensing ( Bernard Brower, Robert Fiete, and Roddy Shuler ). 12.1 Introduction. 12.2 Geographic Information Systems. 12.3 Recommendations for JPEG 2000 Encoding. 12.4 Other JPEG 2000 Parts to Consider. References. 13 Medical Imaging ( Alexis Tzannes and Ron Gut ). 13.1 Introduction. 13.2 Background. 13.3 DICOM and JPEG 2000 Part 1. 13.4 DICOM and JPEG 2000 Part 2. 13.5 Example Results. 13.6 Image Streaming, DICOM, and JPIP. References. 14 Digital Culture Imaging ( Greg Colyer, Robert Buckley, and Athanassios Skodras ). 14.1 Introduction. 14.2 The Digital Culture Context. 14.3 Digital Culture and JPEG 2000. 14.4 Application - National Digital Newspaper Program. Acknowledgments. References. 15 Broadcast Applications ( Hans Hoffman, Adi Kouadio, and Luk Overmeire ). 15.1 Introduction - From Tape-Based to File-Based Production. 15.2 Broadcast Production Chain Reference Model. 15.3 Codec Requirements for Broadcasting Applications. 15.4 Overview of State-of-the-Art HD Compression Schemes. 15.5 JPEG 2000 Applications. 15.6 Multigeneration Production Processes. 15.7 JPEG 2000 Comparison with SVC. 15.8 Conclusion. References. 16 JPEG 2000 in 3-D Graphics Terrain Rendering ( Gauthier Lafruit, Wolfgang Van Raemdonck, Klaas Tack, and Eric Delfosse ). 16.1 Introduction. 16.2 Tiling: The Straightforward Solution to Texture Streaming. 16.3 View-Dependent JPEG 2000 Texture Streaming and Mipmapping. 16.4 JPEG 2000 Quality and Decoding Time Scalability for Optimal Quality-Workload Tradeoff. 16.5 Conclusion. References. 17 Conformance Testing, Reference Software, and Implementations ( Peter Schelkens, Yiannis Andreopoulos, and Joeri Barbarien ). 17.1 Introduction. 17.2 Part 4 - Conformance Testing. 17.3 Part 5 - Reference Software. 17.4 Implementation of the Discrete Wavelet Transform as Suggested by the JPEG 2000 Standard. 17.5 JPEG 2000 Hardware and Software Implementations. 17.6 Conclusions. Acknowledgments. References. 18 Ongoing Standardization Efforts ( Touradj Ebrahimi, Athanassios Skodras, and Peter Schelkens ). 18.1 Introduction. 18.2 JPSearch. 18.3 JPEG XR. 18.4 Advanced Image Coding and Evaluation Methodologies (AIC). References. Index.

96 citations


Journal ArticleDOI
TL;DR: An iterative algorithm is presented to jointly optimize run-length coding, Huffman coding, and quantization table selection; it produces a compressed bitstream completely compatible with existing JPEG and MPEG decoders while remaining computationally efficient.
Abstract: To maximize rate distortion performance while remaining faithful to the JPEG syntax, the joint optimization of the Huffman tables, quantization step sizes, and DCT indices of a JPEG encoder is investigated. Given Huffman tables and quantization step sizes, an efficient graph-based algorithm is first proposed to find the optimal DCT indices in the form of run-size pairs. Based on this graph-based algorithm, an iterative algorithm is then presented to jointly optimize run-length coding, Huffman coding, and quantization table selection. The proposed iterative algorithm not only results in a compressed bitstream completely compatible with existing JPEG and MPEG decoders, but is also computationally efficient. Furthermore, when tested over standard test images, it achieves the best JPEG compression results, to the extent that its own JPEG compression performance even exceeds the quoted PSNR results of some state-of-the-art wavelet-based image coders such as Shapiro's embedded zerotree wavelet algorithm at the common bit rates under comparison. Both the graph-based algorithm and the iterative algorithm can be applied to application areas such as web image acceleration, digital camera image compression, MPEG frame optimization, and transcoding, etc.
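A full graph-based run-size optimization is beyond a short example, but the rate-distortion principle driving it can be illustrated in a few lines: a quantized coefficient is kept only if the distortion it removes is worth its bit cost. The sketch below is a deliberately simplified, hypothetical illustration with a crude constant-bits rate model; the paper instead optimizes run-size pairs on a graph and iteratively updates the Huffman and quantization tables.

```python
import numpy as np

def rd_threshold_block(coeffs, q, lam=50.0, bits_per_nonzero=6.0):
    """Keep a quantized DCT coefficient only if it lowers distortion + lam * rate."""
    quantized = np.round(coeffs / q)
    keep_distortion = (coeffs - quantized * q) ** 2   # squared error if the coefficient is kept
    drop_distortion = coeffs ** 2                     # squared error if it is zeroed out
    keep = keep_distortion + lam * bits_per_nonzero < drop_distortion
    return np.where(keep, quantized, 0.0)

block = np.random.randn(8, 8) * 40                    # stand-in for an 8x8 DCT block
q = np.full((8, 8), 16.0)                             # stand-in quantization table
print(rd_threshold_block(block, q))
```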

91 citations


Proceedings ArticleDOI
01 Dec 2009
TL;DR: This paper investigates the important case of resampling detection in re-compressed JPEG images and shows how blocking artifacts of the previous compression step can help to increase the otherwise drastically reduced detection performance in JPEG compressed images.
Abstract: Resampling detection has become a standard tool in digital image forensics. This paper investigates the important case of resampling detection in re-compressed JPEG images. We show how blocking artifacts of the previous compression step can help to increase the otherwise drastically reduced detection performance in JPEG compressed images. We give a formulation of how affine transformations of JPEG compressed images affect state-of-the-art resampling detectors and derive a new efficient detection variant, which better suits this relevant detection scenario. The principal appropriateness of using JPEG pre-compression artifacts for the detection of resampling in re-compressed images is backed by experimental evidence on a large image set and for a variety of different JPEG qualities.

Proceedings ArticleDOI
29 Jul 2009
TL;DR: A no-reference (NR) perceptual quality assessment method for JPEG-coded stereoscopic images, based on segmented local features of artifacts and disparity, is proposed; results indicate that the model performs quite well over a wide range of image content and distortion levels.
Abstract: Three-dimensional (3D) imaging has attracted considerable attention recently due to its increasingly wide range of applications. Consequently, perceived quality is a critically important issue in assessing the performance of all 3D imaging applications. Perceived distortion and depth of any stereoscopic image are strongly dependent on local features, such as edge, flat and texture areas. In this paper, we propose a no-reference (NR) perceptual quality assessment method for JPEG-coded stereoscopic images based on segmented local features of artifacts and disparity. The method evaluates the local feature information of the stereoscopic image pair, such as edge, flat and texture areas, together with the blockiness and zero-crossing rate within image blocks, to estimate artifacts and disparity. The results on our subjective stereoscopic image database indicate that the model performs quite well over a wide range of image content and distortion levels.
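The block-level measurements the abstract names, blockiness across coding-block boundaries and the zero-crossing rate of local differences, can be sketched for one view of the stereo pair as follows. Only the horizontal direction is shown, the feature weighting, pooling, and disparity analysis of the paper are omitted, and the array is a placeholder.

```python
import numpy as np

def horizontal_features(gray):
    g = gray.astype(np.float64)
    d = g[:, 1:] - g[:, :-1]                     # horizontal pixel differences
    blockiness = np.mean(np.abs(d[:, 7::8]))     # differences straddling 8x8 block boundaries
    zero_crossing_rate = np.mean(d[:, :-1] * d[:, 1:] < 0)
    return blockiness, zero_crossing_rate

gray = np.random.randint(0, 256, (480, 640))     # placeholder for one decoded view
print(horizontal_features(gray))
```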

Journal ArticleDOI
TL;DR: Simulations on compressed images and videos show improvement in artifact reduction of the proposed adaptive fuzzy filter over other conventional spatial or temporal filtering approaches.
Abstract: A fuzzy filter adaptive to both a sample's activity and the relative position between samples is proposed to reduce the artifacts in compressed multidimensional signals. For JPEG images, the fuzzy spatial filter is based on the directional characteristics of ringing artifacts along strong edges. For compressed video sequences, the motion-compensated spatiotemporal filter (MCSTF) is applied to intraframe and interframe pixels to deal with both spatial and temporal artifacts. A new metric which considers the tracking characteristic of human eyes is proposed to evaluate flickering artifacts. Simulations on compressed images and videos show improvement in artifact reduction of the proposed adaptive fuzzy filter over other conventional spatial or temporal filtering approaches.
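The basic mechanism of such a filter, weighting each neighbor by memberships that depend on intensity similarity and spatial distance, can be sketched as below. This is essentially a bilateral-style stand-in with assumed membership widths; the paper's filter additionally adapts to edge direction for JPEG ringing and adds motion compensation for video.

```python
import numpy as np

def fuzzy_spatial_filter(gray, radius=2, sigma_intensity=20.0, sigma_spatial=2.0):
    g = gray.astype(np.float64)
    out = g.copy()
    dy, dx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_spatial = np.exp(-(dy ** 2 + dx ** 2) / (2 * sigma_spatial ** 2))
    for y in range(radius, g.shape[0] - radius):
        for x in range(radius, g.shape[1] - radius):
            patch = g[y - radius:y + radius + 1, x - radius:x + radius + 1]
            w_intensity = np.exp(-((patch - g[y, x]) ** 2) / (2 * sigma_intensity ** 2))
            weights = w_intensity * w_spatial            # combined fuzzy membership
            out[y, x] = np.sum(weights * patch) / np.sum(weights)
    return out

noisy = np.random.randint(0, 256, (64, 64))
smoothed = fuzzy_spatial_filter(noisy)
```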

Journal ArticleDOI
TL;DR: Levels at which lossy compression can be confidently used in diagnostic imaging applications are determined and a table of recommended compression ratios for each modality and anatomical area investigated is provided to be integrated in the Canadian Association of Radiologists standard for the use oflossy compression in medical imaging.
Abstract: New technological advancements, including multislice CT scanners and functional MRI, have dramatically increased the size and number of digital images generated by medical imaging departments. Despite the fact that the cost of storage is dropping, the savings are largely surpassed by the increasing volume of data being generated. While local area network bandwidth within a hospital is adequate for timely access to imaging data, efficiently moving the data between institutions requires wide area network bandwidth, which has limited availability at a national level. A solution to address those issues is the use of lossy compression, as long as there is no loss of relevant information. The goal of this study was to determine levels at which lossy compression can be confidently used in diagnostic imaging applications. In order to provide a fair assessment of existing compression tools, we tested and compared the two most commonly adopted DICOM compression algorithms: JPEG and JPEG 2000. We conducted an extensive pan-Canadian evaluation of lossy compression applied to seven anatomical areas and five modalities using two recognized techniques: objective methods of diagnostic accuracy and subjective assessment based on Just Noticeable Difference. Incorporating both diagnostic accuracy and subjective evaluation techniques enabled us to define a range of compression ratios for each modality and body part tested. The results of our study suggest that at low levels of compression there was no significant difference between the performance of lossy JPEG and lossy JPEG 2000, and that both are appropriate to use for reporting on medical images. At higher levels, lossy JPEG proved to be more effective than JPEG 2000 in some cases, mainly neuro CT. More evaluation is required to assess the effect of compression on thin-slice CT. We provide a table of recommended compression ratios for each modality and anatomical area investigated, to be integrated in the Canadian Association of Radiologists standard for the use of lossy compression in medical imaging.

Proceedings ArticleDOI
22 Sep 2009
TL;DR: The purpose of this research was to develop and test a methodology for evaluating a digital image from a fundus camera in real-time and giving the operator feedback as to the quality of the image and achieved a 100 percent sensitivity and 96 percent specificity in identifying “rejected” images.
Abstract: Real-time medical image quality is a critical requirement in a number of healthcare environments, including ophthalmology where studies suffer loss of data due to unusable (ungradeable) retinal images. Several published reports indicate that from 10% to 15% of images are rejected from studies due to image quality. With the transition of retinal photography to lesser trained individuals in clinics, image quality will suffer unless there is a means to assess the quality of an image in real-time and give the photographer recommendations for correcting technical errors in the acquisition of the photograph. The purpose of this research was to develop and test a methodology for evaluating a digital image from a fundus camera in real-time and giving the operator feedback as to the quality of the image. By providing real-time feedback to the photographer, corrective actions can be taken and loss of data or inconvenience to the patient eliminated. The methodology was tested against image quality as perceived by the ophthalmologist. We successfully applied our methodology on over 2,000 images from four different cameras acquired through dilated and undilated imaging conditions. We showed that the technique was equally effective on uncompressed and compressed (JPEG) images. We achieved a 100 percent sensitivity and 96 percent specificity in identifying “rejected” images.

Proceedings ArticleDOI
TL;DR: A procedure for subjective evaluation of the new JPEG XR codec for compression of still pictures is described in detail; the obtained results show high consistency and allow an accurate comparison of codec performance.
Abstract: In this paper a procedure for subjective evaluation of the new JPEG XR codec for compression of still pictures is described in detail. The new algorithm has been compared to the existing JPEG and JPEG 2000 standards for the compression of high-resolution 24-bpp pictures, by means of a campaign of subjective quality assessment tests which followed the guidelines defined by the AIC JPEG ad-hoc group. Sixteen subjects took part in experiments at EPFL and each subject participated in four test sessions, scoring a total of 208 test stimuli. A detailed procedure for statistical analysis of the subjective data is also proposed and performed. The obtained results show high consistency and allow an accurate comparison of codec performance.
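For orientation, the per-stimulus statistics typically computed in such campaigns are the mean opinion score and a confidence interval over subjects; the sketch below shows that standard computation, not necessarily the exact statistical procedure used in the paper, and the ratings are hypothetical.

```python
import numpy as np

def mos_and_ci(scores):
    """Mean opinion score and 95% confidence interval for one test stimulus."""
    scores = np.asarray(scores, dtype=np.float64)
    mos = scores.mean()
    ci = 1.96 * scores.std(ddof=1) / np.sqrt(len(scores))
    return mos, ci

# Hypothetical ratings of a single stimulus by 16 subjects (5-point scale).
print(mos_and_ci([4, 5, 4, 3, 4, 5, 4, 4, 3, 4, 5, 4, 4, 4, 3, 5]))
```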

Book ChapterDOI
03 Sep 2009
TL;DR: It is demonstrated that it is even possible to beat the quality of the much more advanced JPEG 2000 standard when one uses subdivisions on rectangles and a number of additional optimisations.
Abstract: Although widely used standards such as JPEG and JPEG 2000 exist in the literature, lossy image compression is still a subject of ongoing research. Galic et al. (2008) have shown that compression based on edge-enhancing anisotropic diffusion can outperform JPEG for medium to high compression ratios when the interpolation points are chosen as vertices of an adaptive triangulation. In this paper we demonstrate that it is even possible to beat the quality of the much more advanced JPEG 2000 standard when one uses subdivisions on rectangles and a number of additional optimisations. They include improved entropy coding, brightness rescaling, diffusivity optimisation, and interpolation swapping. Experiments on classical test images are presented that illustrate the potential of our approach.

01 Jan 2009
TL;DR: Experimental results show no visible difference between the watermarked frames and the original frames and show the robustness against a wide range of attacks such as MPEG coding, JPEG coding, Gaussian noise addition, histogram equalization, gamma correction, contrast adjustment, sharpen filter, cropping, resizing, and rotation.
Abstract: This paper presents a novel technique for embedding a binary logo watermark into video frames. The proposed scheme is an imperceptible and robust hybrid video watermarking scheme. PCA is applied to each block of the two bands (LL and HH) which result from the discrete wavelet transform of every video frame. The watermark is embedded into the principal components of the LL blocks and the HH blocks in different ways. Combining the two transforms improves the performance of the watermarking algorithm. The scheme is tested by applying various attacks. Experimental results show no visible difference between the watermarked frames and the original frames, and show robustness against a wide range of attacks such as MPEG coding, JPEG coding, Gaussian noise addition, histogram equalization, gamma correction, contrast adjustment, sharpen filter, cropping, resizing, and rotation.

Proceedings ArticleDOI
29 May 2009
TL;DR: SubJPEG, presented in this paper, is a state-of-the-art multi-standard 65nm CMOS JPEG encoding coprocessor that enables ultra-wide VDD scaling with only 1.3pJ/operation energy consumption.
Abstract: Many digital ICs can benefit from sub/near-threshold operation that provides ultra-low energy/operation for long battery lifetime. In addition, sub/near-threshold operation largely mitigates the transient current, hence lowering the ground bounce noise. This also helps to improve the performance of sensitive analog circuits on the chip, such as delay-locked loops (DLLs), which is crucial for the functioning of large digital circuits. However, aggressive voltage scaling causes throughput and reliability degradation. This paper presents SubJPEG, a state-of-the-art multi-standard 65nm CMOS JPEG encoding coprocessor that enables ultra-wide VDD scaling. With a 0.45V power supply, it delivers 15fps 640×480 VGA operation with only 1.3pJ/operation energy consumption per DCT and quantization computation. This coprocessor is very suitable for applications such as digital cameras, portable wireless devices and medical imaging. To the best of our knowledge, this is the largest sub-threshold processor so far.

Journal ArticleDOI
01 Sep 2009
TL;DR: This paper proposes a technique based on bit sequence matching to identify fragments created by the same Huffman code tables and addresses the construction of a pseudo header needed for recovery of stand-alone file fragments.
Abstract: Recovery of fragmented files proves to be a challenging task for encoded files like JPEG. In this paper, we consider techniques for addressing two issues related to fragmented JPEG file recovery. The first issue concerns more efficient identification of the next fragment of a file undergoing recovery. The second issue concerns the recovery of file fragments which cannot be linked to an existing image header or for which there is no available image header. Current file recovery approaches are not well suited to deal with these practical issues. In addressing these problems, we utilize JPEG file format specifications. More specifically, we propose a technique based on bit sequence matching to identify fragments created with the same Huffman code tables. We also address the construction of a pseudo header needed for recovery of stand-alone file fragments. Some experimental results are provided to support our claims.

Book ChapterDOI
15 Dec 2009
TL;DR: The method uses block-matching procedures: the image is first divided into equal-sized blocks, and an improved singular value decomposition is applied to each block to yield a reduced-dimension representation, forming a singular-value feature matrix that is lexicographically sorted.
Abstract: Digital images are easy to tamper with and edit due to the availability of image-editing software. The most common way to tamper with a digital image is copy-paste forgery, which is used to conceal objects or produce a non-existing scene. To detect copy-paste forgery, we divide the image into blocks as the basic features for detection, and transform every block into a lower-dimensional feature vector for comparison. The number of blocks and the dimension of the features are the major factors affecting the computational complexity. In this paper, we modify previous methods by using fewer cumulative offsets for block matching. The experimental results show that our method can successfully detect the forged part even when the forged image is saved in a lossy format such as JPEG. The performance of the proposed method is demonstrated on several forged images.
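The block-matching pipeline outlined above, overlapping blocks, a low-dimensional SVD feature per block, lexicographic sorting, and comparison of neighbors in the sorted order, can be sketched as follows. The block size, feature length, thresholds, and synthetic test image are illustrative assumptions, and the cumulative-offset refinement of the paper is omitted.

```python
import numpy as np

def copy_move_candidates(gray, block=8, k=4, dist_thresh=1.0, min_offset=16):
    g = gray.astype(np.float64)
    feats, coords = [], []
    for y in range(0, g.shape[0] - block + 1, 2):
        for x in range(0, g.shape[1] - block + 1, 2):
            s = np.linalg.svd(g[y:y + block, x:x + block], compute_uv=False)
            feats.append(s[:k])                          # truncated singular values as feature
            coords.append((y, x))
    feats, coords = np.array(feats), np.array(coords)
    order = np.lexsort(feats.T[::-1])                    # lexicographic sort of feature vectors
    pairs = []
    for i, j in zip(order[:-1], order[1:]):              # similar blocks end up adjacent
        if (np.linalg.norm(feats[i] - feats[j]) < dist_thresh
                and np.linalg.norm(coords[i] - coords[j]) >= min_offset):
            pairs.append((tuple(coords[i]), tuple(coords[j])))
    return pairs

gray = np.random.randint(0, 256, (64, 64))
gray[40:56, 40:56] = gray[8:24, 8:24]                    # plant a copied region
print(len(copy_move_candidates(gray)))
```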

Journal ArticleDOI
TL;DR: Based on the idea of second-generation image coding, a novel scheme for coding still images is presented; it is demonstrated that the proposed method performs better than current methods such as JPEG, CMP, EZW and JPEG 2000.
Abstract: Based on the idea of second-generation image coding, a novel scheme for coding still images is presented. First, the image is partitioned with a pulse-coupled neural network; then an improved chain code and the 2D discrete cosine transform are adopted to encode the shape and the color of its edges, respectively. To code its smooth and texture regions, an improved zero-tree strategy based on the second-generation wavelet is chosen. After that, the zero-tree chart is used to rearrange the quantized coefficients. Finally, some rules are given according to the psychology of various users. Experiments under noiseless channels demonstrate that the proposed method performs better than current methods such as JPEG, CMP, EZW and JPEG 2000.

Proceedings ArticleDOI
01 Jan 2009
TL;DR: This paper focuses on the specific artifacts introduced when a JPEG image is edited and re-saved, and proposes an automatic method capable of detecting them.
Abstract: Verifying the integrity of digital images and detecting the traces of tampering without using any protecting pre-extracted or pre-embedded information has an important role in image forensics and crime detection. When altering a JPEG image, it is typically loaded into photo-editing software and, after manipulations are carried out, the image is re-saved. This operation typically introduces specific artifacts into the image. In this paper we focus on these artifacts and propose an automatic method capable of detecting them.

Journal ArticleDOI
TL;DR: This research examined whether fixed pattern noise or more specifically Photo Response Non‐Uniformity (PRNU) can be used to identify the source camera of heavily JPEG compressed digital photographs of resolution 640 × 480 pixels.
Abstract: In this research, we examined whether fixed pattern noise, or more specifically Photo Response Non-Uniformity (PRNU), can be used to identify the source camera of heavily JPEG compressed digital photographs of resolution 640 × 480 pixels. We extracted PRNU patterns from both reference and questioned images using a two-dimensional Gaussian filter and compared these patterns by calculating the correlation coefficient between them. Both the closed-set and open-set problems were addressed; in the closed set, high accuracies of 83% for single images and 100% for around 20 simultaneously identified questioned images were achieved. The correct source camera was chosen from a set of 38 cameras of four different types. For the open-set problem, decision levels were obtained for several numbers of simultaneously identified questioned images. The corresponding false rejection rates were unsatisfactory for single images but improved for simultaneous identification of multiple images.
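The comparison pipeline described above, a Gaussian-filter noise residual, a reference pattern averaged over images from the candidate camera, and a correlation-coefficient decision statistic, can be sketched as below. The filter width and the placeholder images are assumptions; the paper's exact extraction and decision levels are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(gray, sigma=3.0):
    """PRNU-style residual: image minus its Gaussian-smoothed version."""
    g = gray.astype(np.float64)
    return g - gaussian_filter(g, sigma)

def prnu_correlation(reference_images, questioned_image):
    pattern = np.mean([noise_residual(im) for im in reference_images], axis=0)
    residual = noise_residual(questioned_image)
    return np.corrcoef(pattern.ravel(), residual.ravel())[0, 1]

# Placeholder data standing in for reference and questioned 640x480 photographs.
reference = [np.random.randint(0, 256, (480, 640)) for _ in range(10)]
questioned = np.random.randint(0, 256, (480, 640))
print(prnu_correlation(reference, questioned))
# A questioned image is attributed to the camera whose reference pattern yields
# the highest correlation (closed set) or one exceeding a decision level (open set).
```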

Journal ArticleDOI
TL;DR: By employing a new comparison methodology and using transform coefficients as input to face recognition algorithms, it is shown that face recognition can be implemented efficiently directly in the compressed domain.

Journal ArticleDOI
TL;DR: The matrices of the APBT based on the WT, DCT and IDCT are derived and can be used in image compression instead of the conventional DCT; the quantization table is simplified and the transform coefficients can be quantized uniformly.
Abstract: This paper proposes the new concepts of the all phase biorthogonal transform (APBT) and the dual biorthogonal basis vectors. In light of all phase digital filtering theory, three kinds of all phase biorthogonal transforms based on the Walsh transform (WT), the discrete cosine transform (DCT) and the inverse discrete cosine transform (IDCT) are proposed. The matrices of the APBT based on the WT, DCT and IDCT are derived, and they can be used in image compression instead of the conventional DCT. Compared with the DCT-based JPEG (DCT-JPEG) image compression algorithm at the same bit rates, the PSNR and visual quality of the reconstructed images using these transforms are comparable to DCT, outperforming DCT-JPEG especially at low bit rates. The advantage is that the quantization table is simplified and the transform coefficients can be quantized uniformly. Therefore, the computing time becomes shorter and the hardware implementation easier.

Journal ArticleDOI
TL;DR: In this paper, QR bar code and image processing techniques are used to construct a nested steganography scheme that can conceal lossless and lossy secret data in a cover image simultaneously and is robust to JPEG attacks.
Abstract: In this paper, QR bar code and image processing techniques are used to construct a nested steganography scheme. Two types of secret data, lossless and lossy, are embedded into a cover image. The lossless data is text that is first encoded by the QR barcode; there is no distortion between the extracted data and the original data. The lossy data is an image; a face image is suitable for our case. Because the extracted text is lossless, the error correction rate of the QR encoding must be carefully designed. We found that a 25% error correction rate is suitable for our goal. In image embedding, because the image can sustain minor perceptible distortion, we adopt lower-nibble discarding of the face image bytes to reduce the secret data. When the image is extracted, we use a median filter to filter out the noise and obtain a smoother image quality. Simulations show that our scheme is robust to JPEG attacks. Compared to other steganography schemes, our proposed method has three advantages: (i) the nested scheme is an enhanced security system not previously developed; (ii) our scheme can conceal lossless and lossy secret data in a cover image simultaneously; and (iii) the QR barcode used as secret data can widely extend this method's application fields.
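The lossy branch described above, discarding the lower nibble of the face image before embedding and median-filtering the reconstruction after extraction, can be sketched as follows. The embedding into the cover image itself is omitted, and the 3×3 median window is an assumption.

```python
import numpy as np
from scipy.ndimage import median_filter

def reduce_secret(face):
    # keep only the upper nibble of each byte (the lossy reduction of the secret image)
    return face.astype(np.uint8) >> 4

def reconstruct_secret(nibbles):
    # shift back to the 8-bit range and smooth the quantization noise with a median filter
    return median_filter(nibbles.astype(np.uint8) << 4, size=3)

face = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # placeholder secret image
restored = reconstruct_secret(reduce_secret(face))
```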

Proceedings ArticleDOI
07 Nov 2009
TL;DR: A new scheme based on compressed sensing is proposed to compress a depth map, together with a reconstruction scheme that recovers the original map from the subsamples using non-linear conjugate gradient minimization.
Abstract: We propose in this paper a new scheme based on compressed sensing to compress a depth map. We first subsample the entity in the frequency domain to take advantage of its compressibility. We then derive a reconstruction scheme to recover the original map from the subsamples using a non-linear conjugate gradient minimization scheme. We preserve the discontinuities of the depth map at the edges and ensure its smoothness elsewhere by incorporating the Total Variation constraint in the minimization. The results we obtained on various test depth maps show that the proposed method leads to lower error rate at high compression ratio when compared to standard image compression techniques like JPEG and JPEG 2000.

Book ChapterDOI
29 Aug 2009
TL;DR: This paper presents a lossy compression method for cartoon-like images that exploits information at image edges with the Marr-Hildreth operator followed by hysteresis thresholding and outperforms the widely-used JPEG standard.
Abstract: It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
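The decoding step named above, solving the Laplace equation with the stored edge-adjacent values held fixed, can be sketched with a plain Jacobi iteration. The iteration count, the random test data, and the periodic boundary handling via np.roll are simplifications, not the paper's solver.

```python
import numpy as np

def laplace_inpaint(values, known_mask, n_iters=2000):
    """Steady state of homogeneous diffusion: unknown pixels converge to the
    average of their four neighbours while known pixels stay fixed."""
    u = values.astype(np.float64).copy()
    u[~known_mask] = u[known_mask].mean()          # rough initialization of unknowns
    for _ in range(n_iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(known_mask, values, avg)      # re-impose the stored values
    return u

img = np.random.rand(64, 64)
mask = np.random.rand(64, 64) < 0.1                # pretend 10% of pixels were stored
print(np.abs(laplace_inpaint(img, mask) - img).mean())
```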

Proceedings ArticleDOI
07 Nov 2009
TL;DR: The paper presents a dictionary construction method for spatial texture prediction based on sparse approximations that considers a locally adaptive dictionary, A, formed by atoms derived from texture patches present in a causal neighborhood of the block to be predicted.
Abstract: The paper presents a dictionary construction method for spatial texture prediction based on sparse approximations. Sparse approximations have been recently considered for image prediction using static dictionaries such as a DCT or DFT dictionary. These approaches rely on the assumption that the texture is periodic, hence the use of a static dictionary formed by pre-defined waveforms. However, in real images, there are more complex and non-periodic textures. The main idea underlying the proposed spatial prediction technique is instead to consider a locally adaptive dictionary, A, formed by atoms derived from texture patches present in a causal neighborhood of the block to be predicted. The sparse spatial prediction method is assessed against the sparse prediction method based on a static DCT dictionary. The spatial prediction method is then assessed in a complete image coding scheme where the prediction residue is encoded using a coding approach similar to JPEG.
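A compact way to see the mechanics is the sketch below: atoms are normalized texture patches taken from a causal (already decoded) region, and the block is approximated with a few of them by orthogonal matching pursuit. The patch size, step, sparsity level, and random stand-in data are assumptions; the paper's exact neighborhood definition and the coding of the prediction residue are not reproduced.

```python
import numpy as np

def build_dictionary(causal_region, patch=8, step=2):
    """Columns are normalized patches taken from the causal neighborhood."""
    atoms = []
    for y in range(0, causal_region.shape[0] - patch + 1, step):
        for x in range(0, causal_region.shape[1] - patch + 1, step):
            a = causal_region[y:y + patch, x:x + patch].astype(np.float64).ravel()
            norm = np.linalg.norm(a)
            if norm > 0:
                atoms.append(a / norm)
    return np.array(atoms).T

def omp_predict(A, target, sparsity=4):
    """Greedy sparse approximation of `target` with columns of A."""
    target = target.astype(np.float64)
    residual, support = target.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, target, rcond=None)
        residual = target - sub @ coef
    return sub @ coef                               # the predicted (flattened) block

causal = np.random.rand(32, 64)                     # stand-in for already decoded samples
A = build_dictionary(causal)
block = np.random.rand(8, 8).ravel()                # block to be predicted
prediction = omp_predict(A, block)
```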