
Showing papers on "Digital watermarking published in 2008"


Journal ArticleDOI
TL;DR: This paper demonstrates the effectiveness of the proposed framework for nonintrusive digital image forensics; the absence of camera-imposed fingerprints from a test image indicates that the image is not a camera output and was possibly generated by another image production process.
Abstract: Digital imaging has experienced tremendous growth in recent decades, and digital camera images have been used in a growing number of applications. With such increasing popularity and the availability of low-cost image editing software, the integrity of digital image content can no longer be taken for granted. This paper introduces a new methodology for the forensic analysis of digital camera images. The proposed method is based on the observation that many processing operations, both inside and outside acquisition devices, leave distinct intrinsic traces on digital images, and these intrinsic fingerprints can be identified and employed to verify the integrity of digital data. The intrinsic fingerprints of the various in-camera processing operations can be estimated through a detailed imaging model and its component analysis. Further processing applied to the camera-captured image is modelled as a manipulation filter, for which a blind deconvolution technique is applied to obtain a linear time-invariant approximation and to estimate the intrinsic fingerprints associated with these post-camera operations. The absence of camera-imposed fingerprints from a test image indicates that the test image is not a camera output and is possibly generated by another image production process. Any changes or inconsistencies among the estimated camera-imposed fingerprints, or the presence of new types of fingerprints, suggest that the image has undergone some kind of processing after the initial capture, such as tampering or steganographic embedding. Through analysis and extensive experimental studies, this paper demonstrates the effectiveness of the proposed framework for nonintrusive digital image forensics.

281 citations


Journal ArticleDOI
TL;DR: By using the proposed algorithm, a 90%-tampered image can be recovered to a dim yet still recognizable condition (PSNR ~20 dB).

274 citations


01 Jan 2008
TL;DR: This study presents a robust, semi-blind watermarking scheme that embeds information into the low-frequency AC coefficients of the Discrete Cosine Transform (DCT), using the DC values of neighboring blocks to predict the AC coefficients of the center block.
Abstract: This study presents a robust, semi-blind watermarking scheme that embeds information into the low-frequency AC coefficients of the Discrete Cosine Transform (DCT). Since imperceptibility is the most significant issue in watermarking, the DC value is kept unchanged. The proposed methods utilize the DC values of the neighboring blocks to predict the AC coefficients of the center block. The low-frequency AC coefficients are modified to carry the watermark information. The Least Mean Squares (LMS) algorithm is employed to yield the intermediate filters, which cooperate with the neighboring DC coefficients to precisely predict the original AC coefficients. Two watermarking methods, namely Watermark Embedding in the Predicted AC Coefficient (WEPAC) and Watermark Embedding in the Original AC Coefficient (WEOAC), are presented in this study. Moreover, a range of attacks is applied to demonstrate the robustness of the proposed methods.
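As a rough, self-contained illustration of block-DCT embedding in a low-frequency AC coefficient (not the authors' WEPAC/WEOAC pipeline, which predicts the AC coefficients from neighboring DC values with LMS-trained filters), a minimal Python/NumPy sketch might look like this; the block size, coefficient position, and strength are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_block_dct(image, bits, coeff_pos=(0, 1), strength=8.0, block=8):
    """Embed one bit per 8x8 block by nudging a low-frequency AC coefficient.

    Toy sketch only: the published WEPAC/WEOAC methods instead predict the AC
    coefficients from neighboring DC values and modify the prediction, while
    keeping the DC term blk[0, 0] unchanged for imperceptibility.
    """
    img = image.astype(np.float64).copy()
    h, w = img.shape
    idx = 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if idx >= len(bits):
                return img
            blk = dctn(img[y:y + block, x:x + block], norm='ortho')
            # Push the chosen low-frequency AC coefficient up or down,
            # depending on the watermark bit; the DC term is left alone.
            blk[coeff_pos] += strength if bits[idx] else -strength
            img[y:y + block, x:x + block] = idctn(blk, norm='ortho')
            idx += 1
    return img

# Usage: mark a random test image with a short bit string.
host = np.random.randint(0, 256, (64, 64)).astype(np.float64)
marked = embed_block_dct(host, bits=[1, 0, 1, 1, 0])
```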

270 citations


Journal ArticleDOI
TL;DR: The experimental results show that the high visual quality of stego-images, the data embedding capacity, and the robustness of the proposed lossless data hiding scheme against compression are acceptable for many applications, including semi-fragile image authentication.
Abstract: Recently, among various data hiding techniques, a new subset, lossless data hiding, has received increasing interest. Most of the existing lossless data hiding algorithms are, however, fragile in the sense that the hidden data cannot be extracted correctly after compression or other incidental alteration has been applied to the stego-image. The only existing semi-fragile (referred to as robust in this paper) lossless data hiding technique, which is robust against high-quality JPEG compression, is based on modulo-256 addition to achieve losslessness. In this paper, we first point out that this technique suffers from the annoying salt-and-pepper noise caused by using modulo-256 addition to prevent overflow/underflow. We then propose a novel robust lossless data hiding technique, which does not generate salt-and-pepper noise. By identifying a robust statistical quantity based on the patchwork theory and employing it to embed data, differentiating the bit-embedding process based on the pixel group's distribution characteristics, and using error correction codes and a permutation scheme, this technique has achieved both losslessness and robustness. It has been successfully applied to many images, thus demonstrating its generality. The experimental results show that the high visual quality of stego-images, the data embedding capacity, and the robustness of the proposed lossless data hiding scheme against compression are acceptable for many applications, including semi-fragile image authentication. Specifically, it has been successfully applied to authenticate losslessly compressed JPEG2000 images, followed by possible transcoding. It is expected that this new robust lossless data hiding algorithm can be readily applied in the medical field, law enforcement, remote sensing and other areas, where the recovery of original images is desired.
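For intuition about the patchwork-style statistic mentioned above, the following hedged sketch shifts the mean difference between two pseudorandomly chosen pixel groups to encode one robust bit. It is a toy approximation only: the paper's scheme additionally differentiates the bit-embedding process by pixel-group distribution, adds error correction and permutation, and guarantees exact reversibility without the salt-and-pepper noise of modulo-256 embedding (the clipping below is deliberately lossy for simplicity).

```python
import numpy as np

def patchwork_embed(image, key, bit, shift=3):
    """Embed one bit by shifting the mean difference of two pixel groups."""
    rng = np.random.default_rng(key)
    flat = image.astype(np.int16).flatten()
    perm = rng.permutation(flat.size)
    a, b = perm[:flat.size // 2], perm[flat.size // 2:]
    delta = shift if bit else -shift
    flat[a] = np.clip(flat[a] + delta, 0, 255)  # lossy clipping, unlike the paper
    flat[b] = np.clip(flat[b] - delta, 0, 255)
    return flat.reshape(image.shape).astype(np.uint8)

def patchwork_detect(image, key):
    """Recover the bit by thresholding the patchwork statistic (mean difference)."""
    rng = np.random.default_rng(key)
    flat = image.astype(np.float64).flatten()
    perm = rng.permutation(flat.size)
    a, b = perm[:flat.size // 2], perm[flat.size // 2:]
    return int(flat[a].mean() - flat[b].mean() > 0)
```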

214 citations


Journal ArticleDOI
TL;DR: This paper presents two vector watermarking schemes based on the use of complex and quaternion Fourier transforms and demonstrates, for the first time, how to embed watermarks in the frequency domain in a manner consistent with the human visual system.
Abstract: This paper presents two vector watermarking schemes based on the use of complex and quaternion Fourier transforms and demonstrates, for the first time, how to embed watermarks in the frequency domain in a manner consistent with the human visual system. Watermark casting is performed by estimating the just-noticeable distortion of the images, to ensure watermark invisibility. The first method encodes the chromatic content of a color image into the CIE chromaticity coordinates, while the achromatic content is encoded as the CIE tristimulus value. Color watermarks (yellow and blue) are embedded in the frequency domain of the chromatic channels by using the spatiochromatic discrete Fourier transform, which first encodes the two chromatic components as complex values and then applies a single discrete Fourier transform. The most interesting characteristic of the scheme is the possibility of performing watermarking in the frequency domain of the chromatic components. The second method encodes the components of color images as quaternions, and watermarks are embedded as vectors in the frequency domain of the channels by using the quaternion Fourier transform. Robustness is achieved by embedding a watermark in the coefficient with positive frequency, which spreads it to all color components in the spatial domain, and invisibility is satisfied by modifying the coefficient with negative frequency, such that the combined effect of the two is insensitive to human eyes. Experimental results demonstrate that the two proposed algorithms perform better than two existing algorithms: an AC-based and a discrete cosine transform-based scheme.

210 citations


Journal ArticleDOI
TL;DR: The experimental results show that the proposed blind watermarking algorithm is quite effective against JPEG compression, low-pass filtering, and Gaussian noise; the PSNR value of a watermarked image is greater than 40 dB.
Abstract: This paper proposes a blind watermarking algorithm based on significant-difference quantization of wavelet coefficients for copyright protection. Every seven nonoverlapping wavelet coefficients of the host image are grouped into a block. The two largest coefficients in a block are called significant coefficients in this paper, and their difference is called the significant difference. The local maximum wavelet coefficient in a block is quantized by comparing the significant difference of that block with the average significant difference over all blocks. The maximum wavelet coefficients are quantized so that the significant difference for watermark bit 0 and watermark bit 1 exhibits a large energy gap, which can be used for watermark extraction. During extraction, an adaptive threshold value is designed to extract the watermark from the watermarked image under different attacks. We compare the adaptive threshold value to the significant difference that was quantized in a block to determine the watermark bit. The experimental results show that the proposed method is quite effective against JPEG compression, low-pass filtering, and Gaussian noise; the PSNR value of a watermarked image is greater than 40 dB.
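A minimal sketch of the significant-difference idea, under simplifying assumptions (a single Haar DWT level and a fixed margin instead of the paper's adaptive, average-based threshold), could look like this in Python with PyWavelets:

```python
import numpy as np
import pywt  # PyWavelets

def embed_sigdiff(image, bits, margin=12.0):
    """Toy 'significant difference' embedding in one wavelet subband.

    Coefficients are grouped in sevens; for bit 1 the largest coefficient is
    raised so its gap to the second largest is large, for bit 0 it is pulled
    down to roughly the second largest.  The published algorithm instead
    quantizes against the average significant difference over all blocks and
    extracts blindly with an adaptive threshold.
    """
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(np.float64), 'haar')
    coeffs = cH.flatten()
    for i, bit in enumerate(bits):
        grp = coeffs[i * 7:(i + 1) * 7]
        if grp.size < 7:
            break
        order = np.argsort(np.abs(grp))
        largest, second = order[-1], order[-2]
        sign = 1.0 if grp[largest] >= 0 else -1.0
        if bit:   # enforce a large significant difference
            grp[largest] = sign * (np.abs(grp[second]) + margin)
        else:     # collapse the significant difference
            grp[largest] = sign * np.abs(grp[second])
    cH = coeffs.reshape(cH.shape)
    return pywt.idwt2((cA, (cH, cV, cD)), 'haar')
```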

209 citations


Book
17 Dec 2008
TL;DR: This new edition of this best-selling book on cryptography and information hiding delineates a number of different methods to hide information in all types of digital media files, and includes 5 completely new chapters that introduce newer, more sophisticated and refined cryptographic algorithms and techniques capable of withstanding the evolved forms of attack.
Abstract: Cryptology is the practice of hiding digital information by means of various obfuscatory and steganographic techniques. The application of said techniques facilitates message confidentiality and sender/receiver identity authentication, and helps to ensure the integrity and security of computer passwords, ATM card information, digital signatures, DVD and HD DVD content, and electronic commerce. Cryptography is also central to digital rights management (DRM), a group of techniques for technologically controlling the use of copyrighted material that is being widely implemented and deployed at the behest of corporations that own and create revenue from the hundreds of thousands of mini-transactions that take place daily on programs like iTunes. This new edition of our best-selling book on cryptography and information hiding delineates a number of different methods to hide information in all types of digital media files. These methods include encryption, compression, data embedding and watermarking, data mimicry, and scrambling. During the last 5 years, the continued advancement and exponential increase of computer processing power have enhanced the efficacy and scope of electronic espionage and content appropriation. Therefore, this edition has amended and expanded outdated sections in accordance with new dangers, and includes 5 completely new chapters that introduce newer, more sophisticated and refined cryptographic algorithms and techniques (such as fingerprinting, synchronization, and quantization) capable of withstanding the evolved forms of attack. Each chapter is divided into sections, first providing an introduction and high-level summary for those who wish to understand the concepts without wading through technical explanations, and then presenting concrete examples and greater detail for those who want to write their own programs. This combination of practicality and theory allows programmers and system designers to not only implement tried and true encryption procedures, but also consider probable future developments in their designs, thus fulfilling the need for preemptive caution that is becoming ever more explicit as the transference of digital media escalates.
* Includes 5 completely new chapters that delineate the most current and sophisticated cryptographic algorithms, allowing readers to protect their information against even the most evolved electronic attacks.
* Conceptual tutelage in conjunction with detailed mathematical directives allows the reader to not only understand encryption procedures, but also to write programs which anticipate future security developments in their design.
* Grants the reader access to online source code which can be used to directly implement proven cryptographic procedures such as data mimicry and reversible grammar generation into their own work.

205 citations



Journal ArticleDOI
TL;DR: A survey and comparison of emerging techniques for image authentication is presented; methods are classified according to the service they provide, that is, strict or selective authentication, tamper detection, localization and reconstruction capabilities, and robustness against different desired image processing operations.
Abstract: Image authentication techniques have recently gained great attention due to their importance for a large number of multimedia applications. Digital images are increasingly transmitted over non-secure channels such as the Internet. Therefore, military, medical and quality-control images must be protected against attempts to manipulate them; such manipulations could falsify the decisions based on these images. To protect the authenticity of multimedia images, several approaches have been proposed. These approaches include conventional cryptography, fragile and semi-fragile watermarking, and digital signatures that are based on the image content. The aim of this paper is to present a survey and a comparison of emerging techniques for image authentication. Methods are classified according to the service they provide, that is, strict or selective authentication, tamper detection, localization and reconstruction capabilities, and robustness against different desired image processing operations. Furthermore, we introduce the concept of image content and discuss the most important requirements for an effective image authentication system design. Different algorithms are described, and we focus on their comparison according to the properties cited above.

180 citations


Book ChapterDOI
01 Jan 2008
TL;DR: This chapter focuses on a specific data-hiding application—steganography; in contrast to digital watermarking, its main property is the statistical undetectability of the embedded data.
Abstract: This chapter focuses on a specific data-hiding application—steganography. As opposed to digital watermarking, the main property of steganography is statistical undetectability of embedded data. The payload is usually unrelated to the cover Work, which only serves as a decoy. The information-theoretic definition of steganographic security (Cachin's definition) is the most widely used definition in practice. It is usually applied in a simplified form by accepting a model for the cover Work. Secure steganographic schemes must take into account steganalytic methods. One possibility is to replace the embedding operation of LSB flipping (F5) to avoid introducing easily detectable artifacts. Another possibility is to mask the embedding distortion as a naturally occurring phenomenon, such as during image acquisition (stochastic modulation). Alternatively, one can design schemes that preserve some vital statistical characteristics of the cover image (OutGuess) or a model of the cover that is recoverable from the stego Work (model-based steganography). In a typical steganographic scheme, the placement of embedding changes (the selection rule) is shared between the sender and the recipient. However, there are many situations when this information cannot be shared, such as in adaptive steganography, selection rules determined from side information, or in public-key steganography. The problem of nonshared selection rules is equivalent to writing in memory with defective cells and can be efficiently approached using sparse linear codes, known as LT codes.
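As a concrete reference point for the embedding operations the chapter discusses, the sketch below contrasts naive LSB replacement with LSB matching (±1 embedding). It is an illustrative toy, not the chapter's F5, OutGuess, stochastic-modulation, or model-based algorithms; the point is only that replacement introduces a structural asymmetry that steganalysis can detect, while ±1 embedding avoids that particular artifact.

```python
import numpy as np

def lsb_replace(pixels, bits):
    """Naive LSB replacement: overwrite the least significant bit.

    Its asymmetry (even values can only increase, odd values only decrease)
    creates the pairs-of-values artifact exploited by early steganalysis."""
    out = pixels.copy()
    out[:len(bits)] = (out[:len(bits)] & 0xFE) | np.asarray(bits, dtype=out.dtype)
    return out

def lsb_match(pixels, bits, rng=None):
    """LSB matching (+/-1 embedding): if the LSB already matches the message
    bit, do nothing; otherwise randomly add or subtract 1."""
    rng = rng or np.random.default_rng(0)
    out = pixels.astype(np.int16)
    for i, b in enumerate(bits):
        if (out[i] & 1) != b:
            out[i] = np.clip(out[i] + rng.choice([-1, 1]), 0, 255)
    return out.astype(np.uint8)

# Usage on a small random cover signal.
cover = np.random.default_rng(1).integers(0, 256, 32, dtype=np.uint8)
stego = lsb_match(cover, [1, 0, 1, 1])
```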

175 citations


Journal ArticleDOI
TL;DR: A new SVD-based digital watermarking scheme for ownership protection that solves the problem of false-positive detection and is extremely robust against geometrical distortion attacks.

Journal ArticleDOI
TL;DR: This paper gives a comprehensive survey on 3-D mesh watermarking, which is considered an effective solution to the above two emerging problems.
Abstract: Three-dimensional (3-D) meshes have been used more and more in industrial, medical and entertainment applications during the last decade. Many researchers, from both the academic and the industrial sectors, have become aware of the intellectual property protection and authentication problems arising with the increasing use of such meshes. This paper gives a comprehensive survey on 3-D mesh watermarking, which is considered an effective solution to these two emerging problems. Our survey covers an introduction to the relevant state of the art, an attack-centric investigation, and a list of existing problems and potential solutions. First, the particular difficulties encountered while applying watermarking to 3-D meshes are discussed. Then we give a presentation and an analysis of the existing algorithms, distinguishing between fragile techniques and robust techniques. Since attacks play an important role in the design of 3-D mesh watermarking algorithms, we also provide an attack-centric viewpoint on this state of the art. Finally, some future research directions are pointed out, especially on ways of devising robust and blind algorithms and on some new, potentially promising watermarking feature spaces.

Journal ArticleDOI
TL;DR: This paper presents an image watermarking scheme based on two statistical features (the histogram shape and the mean) of the Gaussian-filtered low-frequency component of images; the features are mathematically invariant to scaling the size of images and robust to interpolation errors during geometric transformations as well as to common image processing operations.
Abstract: Watermark resistance to geometric attacks is an important issue in the image watermarking community. Most countermeasures proposed in the literature usually focus on the problem of global affine transforms such as rotation, scaling and translation (RST), but few are resistant to challenging cropping and random bending attacks (RBAs). The main reason is that in the existing watermarking algorithms, those exploited robust features are more or less related to the pixel position. In this paper, we present an image watermarking scheme by the use of two statistical features (the histogram shape and the mean) in the Gaussian filtered low-frequency component of images. The two features are: 1) mathematically invariant to scaling the size of images; 2) independent of the pixel position in the image plane; 3) statistically resistant to cropping; and 4) robust to interpolation errors during geometric transformations, and common image processing operations. As a result, the watermarking system provides a satisfactory performance for those content-preserving geometric deformations and image processing operations, including JPEG compression, low-pass filtering, cropping and RBAs.
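The two features themselves are easy to state in code. The hedged sketch below only extracts them (Gaussian-filtered low-frequency component, normalized histogram shape, and mean); the paper's actual embedding rule, which modifies relative bin populations, is not reproduced, and the filter width and bin count are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def histogram_features(image, sigma=2.0, bins=64):
    """Return the position-independent features: the normalized histogram
    shape of the Gaussian-filtered low-frequency component, and its mean."""
    low = gaussian_filter(image.astype(np.float64), sigma=sigma)
    hist, _ = np.histogram(low, bins=bins, range=(0, 255))
    shape = hist / hist.sum()   # histogram shape, independent of pixel position
    return shape, low.mean()
```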

Journal ArticleDOI
TL;DR: This paper proposes a novel fragile watermarking scheme capable of perfectly recovering the original image from its tampered version using a lossless data hiding method.
Abstract: This paper proposes a novel fragile watermarking scheme capable of perfectly recovering the original image from its tampered version. In the scheme, a tailor-made watermark consisting of reference-bits and check-bits is embedded into the host image using a lossless data hiding method. On the receiver side, by comparing the extracted and calculated check-bits, one can identify the tampered image-blocks. Then, the reliable reference-bits extracted from other blocks are used to exactly reconstruct the original image. Although content replacement may destroy a portion of the embedded watermark data, as long as the tampered area is not too extensive, the original image information can be restored without any error.

Journal ArticleDOI
TL;DR: A new adaptive digital image watermarking method that is built on image features such as brightness, edges, and region activity, and extended to the DCT domain by searching for the extreme value of a quadratic function subject to bounds on the variables.

Journal ArticleDOI
TL;DR: Experimental results have demonstrated that the proposed method is capable of hiding more secret data while maintaining imperceptible stego-image quality degradation.

Journal ArticleDOI
TL;DR: This paper presents a mechanism for proof of ownership based on the secure embedding of a robust imperceptible watermark in relational data, formulates the watermarking of relational databases as a constrained optimization problem, and discusses efficient techniques to solve the optimization problem and to handle the constraints.
Abstract: Proving ownership rights on outsourced relational databases is a crucial issue in today's internet-based application environments and in many content distribution applications. In this paper, we present a mechanism for proof of ownership based on the secure embedding of a robust imperceptible watermark in relational data. We formulate the watermarking of relational databases as a constrained optimization problem and discuss efficient techniques to solve the optimization problem and to handle the constraints. Our watermarking technique is resilient to watermark synchronization errors because it uses a partitioning approach that does not require marker tuples. Our approach overcomes a major weakness in previously proposed watermarking techniques. Watermark decoding is based on a threshold-based technique characterized by an optimal threshold that minimizes the probability of decoding errors. We implemented a proof-of-concept prototype of our watermarking technique and showed by experimental results that our technique is resilient to tuple deletion, alteration, and insertion attacks.
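A minimal sketch of the marker-free partitioning idea, assuming a primary-key column and a shared secret key (the paper's optimization-based bit encoding and threshold decoding are not reproduced here):

```python
import hashlib
import hmac

def assign_partition(primary_key, secret_key, num_partitions):
    """Assign a tuple to a partition via a keyed hash of its primary key.

    Because the assignment depends only on the key attribute and the secret,
    no marker tuples are needed, which is what makes such schemes resilient
    to watermark synchronization errors under tuple insertion and deletion."""
    digest = hmac.new(secret_key, str(primary_key).encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], 'big') % num_partitions

# Usage: group the tuples of a toy relation into 4 partitions.
rows = [(101, 'alice', 7.5), (102, 'bob', 3.2), (103, 'carol', 9.1)]
partition_of = {r[0]: assign_partition(r[0], b'secret-key', 4) for r in rows}
```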

Journal ArticleDOI
TL;DR: A novel optimal watermarking scheme based on singular-value decomposition (SVD) using a genetic algorithm (GA) is presented, which shows significant improvement in both transparency and robustness under attacks.
Abstract: In this paper, a novel optimal watermarking scheme based on singular-value decomposition (SVD) using genetic algorithm (GA) is presented. The singular values (SVs) of the host image are modified by multiple scaling factors to embed the watermark image. Modifications are optimised using GA to obtain the highest possible robustness without losing the transparency. Experimental results show both the significant improvement in transparency and the robustness under attacks.
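A hedged sketch of the additive SVD embedding step is given below; the per-value scaling factors are simply supplied, whereas the paper searches for them with a genetic algorithm to balance transparency and robustness, and the matching extraction step is omitted.

```python
import numpy as np

def svd_embed(host, watermark, alphas):
    """Embed a watermark image by perturbing the host's singular values,
    each by the corresponding watermark singular value times its own factor."""
    Uh, Sh, Vh = np.linalg.svd(host.astype(np.float64), full_matrices=False)
    _, Sw, _ = np.linalg.svd(watermark.astype(np.float64), full_matrices=False)
    n = min(len(Sh), len(Sw), len(alphas))
    Sh_marked = Sh.copy()
    Sh_marked[:n] = Sh[:n] + np.asarray(alphas)[:n] * Sw[:n]
    return Uh @ np.diag(Sh_marked) @ Vh

# Usage with uniform (non-optimized) scaling factors.
host = np.random.rand(64, 64)
logo = np.random.rand(64, 64)
marked = svd_embed(host, logo, alphas=np.full(64, 0.05))
```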

Journal ArticleDOI
TL;DR: This paper transfers the shape of the time-domain audio histogram to the low-frequency subband by segmenting an audio signal into portions according to the bin width of the time-domain histogram, and DWT-filtering the concatenation of the portions in each bin.

Journal ArticleDOI
TL;DR: The proposed near-lossless method is proven to effectively detect a tampered medical image and recover the original ROI image.
Abstract: Digital medical images are very easy to modify for illegal purposes. For example, microcalcification in mammography is an important diagnostic clue, and it can be wiped off intentionally for insurance purposes or added intentionally into a normal mammogram. In this paper, we propose two methods for tamper detection and recovery in medical images. A 1024 × 1024 x-ray mammogram was chosen to test the ability of tamper detection and recovery. At first, a medical image is divided into several blocks. For each block, an adaptive robust digital watermarking method combined with the modulo operation is used to hide both the authentication message and the recovery information. In the first method, each block is embedded with the authentication message and the recovery information of other blocks. Because the recovered block is too small and excessively compressed, the concept of region of interest (ROI) is introduced into the second method. If there are no tampered blocks, the original image can be obtained with only the stego image. When the ROI, such as microcalcification in mammography, is tampered with, an approximate image will be obtained from other blocks. From the experimental results, the proposed near-lossless method is proven to effectively detect a tampered medical image and recover the original ROI image. In this study, an adaptive robust digital watermarking method combined with the operation of modulo 256 was chosen to achieve information hiding and image authentication. With the proposed method, any random change to the stego image will be detected with high probability.

Journal ArticleDOI
TL;DR: It is shown that circular watermarking has robustness comparable to that of the insecure classical spread spectrum, and information leakage measures are proposed to highlight the security level of the new spread-spectrum modulations.
Abstract: It has recently been discovered that using pseudorandom sequences as carriers in spread-spectrum techniques for data hiding is not at all a sufficient condition for ensuring data-hiding security. Using proper and realistic a priori hypotheses on the message distribution, it is possible to accurately estimate the secret carriers by casting this estimation problem into a blind source separation problem. After reviewing relevant works on spread-spectrum security for watermarking, we further develop this topic to introduce the concept of security classes, which broadens previous notions in watermarking security and fills the gap with steganography security as defined by Cachin. We define four security classes, namely, in order of increasing security: insecurity, key security, subspace security, and stego-security. To illustrate these views, we present two new modulations for truly secure watermarking in the watermark-only-attack (WOA) framework. The first one is called natural watermarking and can be made either stego-secure or subspace secure. The second is called circular watermarking and is key secure. We show that circular watermarking has robustness comparable to that of the insecure classical spread spectrum. We also propose information leakage measures to highlight the security level of our new spread-spectrum modulations.
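For reference, the sketch below implements the classical additive spread-spectrum modulation whose carriers the paper shows can be estimated under the WOA framework; the secure natural and circular modulations proposed in the paper are not implemented here, and the carrier generation and detector are deliberately simplistic.

```python
import numpy as np

def ss_embed(x, message_bits, key, strength=1.0):
    """Classical additive spread spectrum: each bit antipodally modulates one
    secret pseudorandom carrier, and the carriers are summed into the host."""
    rng = np.random.default_rng(key)
    carriers = rng.standard_normal((len(message_bits), x.size))
    carriers /= np.linalg.norm(carriers, axis=1, keepdims=True)
    symbols = 2 * np.asarray(message_bits) - 1        # {0,1} -> {-1,+1}
    return x + strength * (symbols @ carriers).reshape(x.shape)

def ss_decode(y, num_bits, key):
    """Correlation detector: project the received signal on each carrier."""
    rng = np.random.default_rng(key)
    carriers = rng.standard_normal((num_bits, y.size))
    carriers /= np.linalg.norm(carriers, axis=1, keepdims=True)
    return (carriers @ y.flatten() > 0).astype(int)
```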

Journal ArticleDOI
TL;DR: This novel approach allows image owners to adjust the strength of watermarks through a threshold, so that the robustness of the watermark can be enhanced while the lossless requirement on the data is preserved, making it suitable for medical and artistic images.

Journal ArticleDOI
TL;DR: This paper proposes an end-to-end, statistical approach for data authentication that provides inherent support for in-network processing and shows that the proposed scheme can successfully authenticate the sensory data with high confidence.

Journal ArticleDOI
TL;DR: A four-scanning attack aimed at Lin et al.'s watermarking method is presented to create tampered images, and, in case encryption is used to protect their 3-tuple watermark, a blind attack that tampers with watermarked images without being detected is also proposed.

Journal Article
TL;DR: Building on the experience gained from studying the algorithms proposed in the literature, two distinct wavelet-based watermarking schemes were implemented, and it is concluded that Joo's technique is more robust to standard noise attacks than Dote's technique.
Abstract: In this paper, we start by characterizing the most important and distinguishing features of wavelet-based watermarking schemes, studying the overwhelming number of algorithms proposed in the literature. The copyright-protection application scenario is considered and, building on the experience gained, two distinct watermarking schemes were implemented. A detailed comparison and the obtained results are presented and discussed. We conclude that Joo's [1] technique is more robust to standard noise attacks than Dote's [2] technique. Keywords—Digital image, Copyright protection, Watermarking, Wavelet transform.

Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed analysis-by-synthesis echo hiding scheme is superior to the conventional schemes in terms of robustness, security, and perceptual quality.
Abstract: Audio watermarking using echo hiding has fairly good perceptual quality. However, security and the tradeoff between robustness and imperceptibility are still relevant issues. This paper presents the echo hiding scheme in which the analysis-by-synthesis approach, interlaced kernels, and frequency hopping are adopted to achieve high robustness, security, and perceptual quality. The amplitudes of the embedded echoes are adequately adapted during the embedding process by considering not only the characteristics of the host signals, but also cases in which the watermarked audio signals have suffered various attacks. Additionally, the interlaced kernels are introduced such that the echo positions of the interlaced kernels for embedding "zero" and "one" are interchanged alternately to minimize the influence of host signals and various attacks on the watermarked data. Frequency hopping is employed to increase the robustness and security of the proposed echo hiding scheme in which each audio segment for watermarking is established by combining the fractions selected from all frequency bands based on a pseudonoise sequence as a secret key. Experimental results indicate that the proposed analysis-by-synthesis echo hiding scheme is superior to the conventional schemes in terms of robustness, security, and perceptual quality.
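A basic single-echo kernel, shown below as a hedged sketch, conveys the underlying mechanism: the bit is encoded in the echo delay and read back from a cepstral peak. The paper's contributions (analysis-by-synthesis amplitude adaptation, interlaced kernels, and frequency hopping with a pseudonoise key) are not reproduced, and the delays and attenuation are assumed values.

```python
import numpy as np

def echo_embed(segment, bit, d0=100, d1=150, alpha=0.3):
    """Embed one bit by adding an attenuated, delayed copy of the segment;
    the delay (d0 vs. d1 samples) encodes the bit value."""
    delay = d1 if bit else d0
    echoed = segment.astype(np.float64).copy()
    echoed[delay:] += alpha * segment[:-delay]
    return echoed

def echo_detect(segment, d0=100, d1=150):
    """Decide the bit from the real cepstrum: an echo at delay d produces a
    peak at quefrency d."""
    spectrum = np.fft.rfft(segment)
    cepstrum = np.fft.irfft(np.log(np.abs(spectrum) + 1e-12))
    return int(cepstrum[d1] > cepstrum[d0])
```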

Journal ArticleDOI
TL;DR: It is shown that by limiting the watermark to nonzero-quantized AC residuals in P-frames, the video bit-rate increase can be held to reasonable values, and a watermark detection algorithm is developed that has controllable performance.
Abstract: Most video watermarking algorithms embed the watermark in I-frames, but refrain from embedding in P- and B-frames, which are highly compressed by motion compensation. However, P-frames appear more frequently in the compressed video and their watermarking capacity should be exploited, despite the fact that embedding the watermark in P-frames can increase the video bit rate significantly. This paper gives a detailed overview of a common approach for embedding the watermark in I-frames. This common approach is adapted to use P-frames for video watermarking. We show that by limiting the watermark to nonzero-quantized AC residuals in P-frames, the video bit-rate increase can be held to reasonable values. Since the nonzero-quantized AC residuals in P-frames correspond to nonflat areas that are in motion, temporal and texture masking are exploited at the same time. We also propose embedding the watermark in nonzero-quantized AC residuals with spatial masking capacity in I-frames. Since the locations of the nonzero-quantized AC residuals are lost after decoding, we develop a watermark detection algorithm that does not depend on this knowledge. Our video watermark detection algorithm has controllable performance. We demonstrate the robustness of our proposed algorithm to several different attacks.

Patent
17 Dec 2008
TL;DR: In this article, the authors present a method to obtain first data representing a first chrominance channel of a color image or video, where the first data comprises a watermark signal embedded therein.
Abstract: The present invention relates generally to digital watermarking. One claim recites a method including: obtaining first data representing a first chrominance channel of a color image or video, where the first data comprises a watermark signal embedded therein; obtaining second data representing a second chrominance channel of the color image or video, the second data comprising the watermark signal embedded therein but with a signal polarity that is inversely related to the polarity of the watermark signal in the first data; combining the second data with the first data in a manner that reduces image or video interference relative to the watermark signal, said act of combining yielding third data; using at least a processor or electronic processing circuitry, processing the third data to obtain the watermark signal; and once obtained, providing information associated with the watermark signal. Of course, additional combinations and claims are provided as well.
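The core of the claim can be sketched as follows; the choice of chrominance channels (e.g. Cb and Cr) and the simple differencing combiner are assumptions for illustration, not the patent's exact processing.

```python
import numpy as np

def embed_chroma_pair(chroma_a, chroma_b, watermark, strength=2.0):
    """Embed the same watermark with opposite polarity in two chrominance
    channels of an image (channel choice is an illustrative assumption)."""
    wa = chroma_a.astype(np.float64) + strength * watermark
    wb = chroma_b.astype(np.float64) - strength * watermark
    return wa, wb

def combine_for_detection(wa, wb):
    """Differencing the two channels cancels much of the correlated image
    content while the oppositely signed watermark copies reinforce."""
    return (wa - wb) / 2.0
```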

Journal ArticleDOI
TL;DR: The proposed decoding scheme is able to cope with the alterations in features introduced by a new attack and achieves promising improvement in terms of bit correct ratio in comparison to the existing decoding scheme.

Journal ArticleDOI
TL;DR: This paper presents a robust image watermarking scheme for multimedia copyright protection that is more secure and robust to various attacks, viz., JPEG2000 compression, JPEG compression, rotation, scaling, cropping, row-column blanking, row-column copying, salt-and-pepper noise, filtering and gamma correction.
Abstract: This paper presents a robust image watermarking scheme for multimedia copyright protection. In this work, the host image is partitioned into four sub-images. A watermark image, such as a logo, is embedded in two of these sub-images, in both the D (singular, diagonal matrix) and U (left singular, orthogonal matrix) components of the Singular Value Decomposition (SVD) of the two sub-images. The watermark image is embedded in the D component using dither quantization. A copy of the watermark is embedded in the columns of the U matrix by comparing the coefficients of the U matrix with respect to the watermark image. If extraction of the watermark from the D matrix is incomplete, there is a fair probability that it can be extracted from the U matrix. The proposed algorithm is more secure and robust to various attacks, viz., JPEG2000 compression, JPEG compression, rotation, scaling, cropping, row-column blanking, row-column copying, salt-and-pepper noise, filtering and gamma correction. Superior experimental results are observed with the proposed algorithm over a recent scheme proposed by Chung et al. in terms of Bit Error Rate (BER), Normalized Cross-correlation (NC) and Peak Signal to Noise Ratio (PSNR).
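A generic quantization-index-modulation (QIM) style sketch of dither quantization applied to singular values is shown below; the step size, dither, and the companion rule for the U-matrix columns are not taken from the paper and should be read as assumptions.

```python
import numpy as np

def dither_quantize_embed(singular_values, bits, step=20.0, dither=None):
    """Embed bits into singular values by snapping each value to the even or
    odd quantizer lattice (offset by step/2) selected by the watermark bit."""
    sv = np.asarray(singular_values, dtype=np.float64).copy()
    d = np.zeros_like(sv) if dither is None else np.asarray(dither, dtype=np.float64)
    for i, b in enumerate(bits):
        q = np.round((sv[i] - d[i] - b * step / 2.0) / step)
        sv[i] = q * step + b * step / 2.0 + d[i]
    return sv

# Usage: embed four bits into the leading singular values of a random matrix.
_, S, _ = np.linalg.svd(np.random.rand(8, 8))
S_marked = dither_quantize_embed(S, [1, 0, 1, 1])
```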