Author
Jeffrey J. Rodriguez
Other affiliations: Motorola, University of Texas at Austin
Bio: Jeffrey J. Rodriguez is an academic researcher from the University of Arizona. The author has contributed to research in topics such as image segmentation. The author has an h-index of 27 and has co-authored 146 publications receiving 3490 citations. Previous affiliations of Jeffrey J. Rodriguez include Motorola & University of Texas at Austin.
Papers published on a yearly basis
Papers
TL;DR: The experimental results for many standard test images show that prediction-error expansion doubles the maximum embedding capacity when compared to difference expansion, and there is a significant improvement in the quality of the watermarked image, especially at moderate embedding capacities.
Abstract: Reversible watermarking enables the embedding of useful information in a host signal without any loss of host information. Tian's difference-expansion technique is a high-capacity, reversible method for data embedding. However, the method suffers from undesirable distortion at low embedding capacities and lack of capacity control due to the need for embedding a location map. We propose a histogram shifting technique as an alternative to embedding the location map. The proposed technique improves the distortion performance at low embedding capacities and mitigates the capacity control problem. We also propose a reversible data-embedding technique called prediction-error expansion. This new technique better exploits the correlation inherent in the neighborhood of a pixel than the difference-expansion scheme. Prediction-error expansion and histogram shifting combine to form an effective method for data embedding. The experimental results for many standard test images show that prediction-error expansion doubles the maximum embedding capacity when compared to difference expansion. There is also a significant improvement in the quality of the watermarked image, especially at moderate embedding capacities.
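The core embed/extract arithmetic of prediction-error expansion can be sketched in a few lines. This is a minimal single-pixel illustration assuming the predictor output is given; overflow/underflow handling and the histogram-shifting step of the full method are omitted, and the function names are illustrative:

```python
# Minimal sketch of prediction-error expansion (PEE) embedding, assuming a
# given predictor value; overflow handling is deliberately omitted.

def pee_embed(pixel, predicted, bit):
    """Expand the prediction error and insert one bit in its LSB."""
    error = pixel - predicted
    expanded = 2 * error + bit      # left-shift the error, append the bit
    return predicted + expanded     # watermarked pixel value

def pee_extract(marked_pixel, predicted):
    """Recover both the embedded bit and the original pixel exactly."""
    expanded = marked_pixel - predicted
    bit = expanded % 2              # embedded bit is the LSB
    error = expanded // 2           # floor division restores the error
    return predicted + error, bit   # (original pixel, bit)

# Round-trip check: embedding then extracting is lossless
marked = pee_embed(120, 118, 1)     # error = 2 -> expanded = 5 -> marked = 123
original, bit = pee_extract(marked, 118)
print(marked, original, bit)        # -> 123 120 1
```

Because extraction needs only the predictor value (recomputed from unmodified neighbors) and the marked pixel, the original host is recovered bit-exactly, which is what makes the scheme reversible.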
1,229 citations
24 Oct 2004
TL;DR: This work proposes a new reversible watermarking algorithm that exploits the redundancy in the image to achieve very high data embedding rates while keeping the resulting distortion low.
Abstract: Reversible watermarking has become a highly desirable subset of fragile watermarking for sensitive digital imagery in application domains such as military and medical because of the ability to embed data with zero loss of host information. This reversibility enables the recovery of the original host content upon verification of the authenticity of the received content. We propose a new reversible watermarking algorithm. The algorithm exploits the correlation inherent among the neighboring pixels in an image region using a predictor. The prediction-error at each location is calculated and, depending on the amount of information to be embedded, locations are selected for embedding. Data embedding is done by expanding the prediction-error values. A compressed location map of the embedded locations is also embedded along with the information bits. Our algorithm exploits the redundancy in the image to achieve very high data embedding rates while keeping the resulting distortion low.
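The location-selection step described above can be illustrated as follows; the magnitude threshold and the use of zlib as the location-map compressor are assumptions for the sketch, not the paper's exact construction:

```python
# Illustrative sketch: choose embeddable locations by prediction-error
# magnitude, then compress the binary location map for embedding.
# Threshold and compressor choice are hypothetical.
import zlib

def select_locations(errors, threshold):
    """Mark locations whose |prediction error| is small enough to expand."""
    return [1 if abs(e) <= threshold else 0 for e in errors]

def compress_map(location_map):
    """Pack the binary map into bytes and compress it (toy one-byte-per-flag packing)."""
    packed = bytes(location_map)
    return zlib.compress(packed)

errors = [0, 3, -1, 12, 2, -9, 1, 0]     # prediction errors at eight locations
loc_map = select_locations(errors, threshold=2)
print(loc_map)                           # -> [1, 0, 1, 0, 1, 0, 1, 1]
blob = compress_map(loc_map)
print(len(blob) > 0)                     # -> True
```

Smaller errors expand to smaller distortion, so thresholding on error magnitude is how the embedding rate can be traded against image quality.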
204 citations
TL;DR: Examination of structure-poor OCT images reveals that they frequently display a characteristic texture that is due to speckle, which shows that texture analysis of OCT images may be capable of differentiating tissue types without reliance on visible structures.
Abstract: Optical coherence tomography (OCT) acquires cross-sectional images of tissue by measuring back-reflected light. Images from in vivo OCT systems typically have a resolution of 10 to 15 µm, and are thus best suited for visualizing structures in the range of tens to hundreds of microns, such as tissue layers or glands. Many normal and abnormal tissues lack visible structures in this size range, so it may appear that OCT is unsuitable for identification of these tissues. However, examination of structure-poor OCT images reveals that they frequently display a characteristic texture that is due to speckle. We evaluated the application of statistical and spectral texture analysis techniques for differentiating tissue types based on the structural and speckle content in OCT images. Excellent correct classification rates were obtained when images had slight visual differences (mouse skin and fat, correct classification rates of 98.5 and 97.3%, respectively), and reasonable rates were obtained with nearly identical-appearing images (normal versus abnormal mouse lung, correct classification rates of 64.0 and 88.6%, respectively). This study shows that texture analysis of OCT images may be capable of differentiating tissue types.
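Statistical texture analysis of the kind referenced above often starts from first-order intensity statistics of an image patch. The features below (mean, variance, a variance-derived smoothness measure) are a generic illustration, not the study's exact feature set:

```python
# Minimal sketch of first-order statistical texture features of the kind used
# to characterize speckle; names and normalization are illustrative.

def texture_features(patch):
    """Compute mean, variance, and smoothness from a flat list of intensities."""
    n = len(patch)
    mean = sum(patch) / n
    var = sum((p - mean) ** 2 for p in patch) / n
    smoothness = 1 - 1 / (1 + var)       # near 1 for rough (high-variance) texture
    return {"mean": mean, "variance": var, "smoothness": smoothness}

speckled = [10, 200, 30, 180, 20, 190, 40, 170]   # high-contrast speckle
uniform  = [100, 101, 99, 100, 100, 101, 99, 100] # structure-poor, low variance
print(texture_features(speckled)["variance"] > texture_features(uniform)["variance"])  # -> True
```

A classifier trained on such per-patch feature vectors can then separate tissue types even when no structures are visible at the imaging resolution.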
200 citations
TL;DR: A novel image inpainting algorithm that is capable of reproducing the underlying textural details using a nonlocal texture measure and also smoothing pixel intensity seamlessly in order to achieve natural-looking inpainted images is proposed.
Abstract: Nonlocal texture similarity and local intensity smoothness are both essential for solving most image inpainting problems. In this paper, we propose a novel image inpainting algorithm that is capable of reproducing the underlying textural details using a nonlocal texture measure and also smoothing pixel intensity seamlessly in order to achieve natural-looking inpainted images. For matching texture, we propose a Gaussian-weighted nonlocal texture similarity measure to obtain multiple candidate patches for each target patch. To compute the pixel intensity, we apply the $\alpha $ -trimmed mean filter to the candidate patches to inpaint the target patch pixel-by-pixel. The proposed algorithm is compared with four current image inpainting algorithms under different scenarios, including object removal, texture synthesis, and error concealment. Experimental results show that the proposed algorithm outperforms the existing algorithms when inpainting large missing regions in images with texture and geometric structures.
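The α-trimmed mean used to fuse the candidate patches is a standard robust estimator; a minimal sketch, with a hypothetical trim fraction, is:

```python
# Sketch of the alpha-trimmed mean used to fuse candidate-patch pixel values;
# the trim fraction alpha = 0.25 is an illustrative choice.

def alpha_trimmed_mean(values, alpha=0.25):
    """Discard the lowest and highest alpha fraction of values, average the rest."""
    ordered = sorted(values)
    trim = int(alpha * len(ordered))     # samples dropped at each end
    kept = ordered[trim:len(ordered) - trim]
    return sum(kept) / len(kept)

# Candidate-patch intensities for one target pixel, including two outliers
candidates = [12, 250, 14, 13, 15, 0, 14, 13]
print(alpha_trimmed_mean(candidates))    # -> 13.5 (extremes at each end trimmed)
```

Trimming before averaging keeps one badly matched candidate patch from dragging the inpainted intensity away from the consensus of the good matches.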
118 citations
TL;DR: A multi-polarization fringe projection (MPFP) imaging technique that eliminates saturated points and enhances the fringe contrast by selecting the proper polarized channel measurements is proposed.
Abstract: Traditional fringe-projection three-dimensional (3D) imaging techniques struggle to estimate the shape of high dynamic range (HDR) objects where detected fringes are of limited visibility. Moreover, saturated regions of specular reflections can completely block any fringe patterns, leading to lost depth information. We propose a multi-polarization fringe projection (MPFP) imaging technique that eliminates saturated points and enhances the fringe contrast by selecting the proper polarized channel measurements. The developed technique can be easily extended to include measurements captured under different exposure times to obtain more accurate shape rendering for very HDR objects.
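A per-pixel channel-selection rule of the kind described can be sketched as follows; the specific rule (keep the brightest unsaturated polarized measurement) and the saturation level are assumptions for illustration:

```python
# Hypothetical per-pixel channel selection for multi-polarization fringe
# projection: keep the brightest measurement that is not saturated, so
# specular highlights do not wipe out the fringe pattern.

SATURATION = 255                         # assumed 8-bit sensor full scale

def select_channel(channel_values):
    """Return the largest unsaturated measurement, or the minimum if all saturate."""
    unsaturated = [v for v in channel_values if v < SATURATION]
    if unsaturated:
        return max(unsaturated)          # best signal level without clipping
    return min(channel_values)           # all channels clipped: least bad option

print(select_channel([255, 240, 180]))   # -> 240 (saturated channel rejected)
print(select_channel([255, 255, 255]))   # -> 255 (no usable alternative)
```

The same selection logic extends naturally to measurements taken at several exposure times, as the abstract notes for very high dynamic range objects.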
109 citations
Cited by
TL;DR: It is proved analytically and shown experimentally that the peak signal-to-noise ratio of the marked image generated by this method versus the original image is guaranteed to be above 48 dB, which is much higher than that of all reversible data hiding techniques reported in the literature.
Abstract: A novel reversible data hiding algorithm, which can recover the original image without any distortion from the marked image after the hidden data have been extracted, is presented in this paper. This algorithm utilizes the zero or the minimum points of the histogram of an image and slightly modifies the pixel grayscale values to embed data into the image. It can embed more data than many of the existing reversible data hiding algorithms. It is proved analytically and shown experimentally that the peak signal-to-noise ratio (PSNR) of the marked image generated by this method versus the original image is guaranteed to be above 48 dB. This lower bound of PSNR is much higher than that of all reversible data hiding techniques reported in the literature. The computational complexity of our proposed technique is low and the execution time is short. The algorithm has been successfully applied to a wide range of images, including commonly used images, medical images, texture images, aerial images and all of the 1096 images in CorelDraw database. Experimental results and performance comparison with other reversible data hiding schemes are presented to demonstrate the validity of the proposed algorithm.
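The peak/zero histogram mechanism can be sketched on a flat pixel list; this toy version assumes the peak bin lies to the left of the zero bin, whereas a full implementation handles both orderings and records the (peak, zero) pair as side information:

```python
# Toy sketch of histogram-shifting embedding: shift the bins between the peak
# and the zero point by one, then reuse the emptied slot next to the peak.

def hs_embed(pixels, peak, zero, bits):
    """Embed bits at peak-valued pixels after shifting (peak, zero) right by 1."""
    out, i = [], 0
    for p in pixels:
        if peak < p < zero:
            out.append(p + 1)            # make room next to the peak
        elif p == peak and i < len(bits):
            out.append(p + bits[i])      # bit 0 -> stay at peak, bit 1 -> peak+1
            i += 1
        else:
            out.append(p)
    return out

pixels = [5, 5, 6, 7, 5]                 # assume peak value 5 and zero value 8
print(hs_embed(pixels, peak=5, zero=8, bits=[1, 0, 1]))  # -> [6, 5, 7, 8, 6]
```

Because no pixel moves by more than one grayscale level, the worst-case distortion is bounded, which is the source of the guaranteed PSNR floor the abstract cites.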
2,240 citations
TL;DR: A heuristic method has been developed for registering two sets of 3-D curves obtained by using an edge-based stereo system, or two dense 3-D maps obtained by using a correlation-based stereo system, and it is efficient and robust, and yields an accurate motion estimate.
Abstract: A heuristic method has been developed for registering two sets of 3-D curves obtained by using an edge-based stereo system, or two dense 3-D maps obtained by using a correlation-based stereo system. Geometric matching in general is a difficult unsolved problem in computer vision. Fortunately, in many practical applications, some a priori knowledge exists which considerably simplifies the problem. In visual navigation, for example, the motion between successive positions is usually approximately known. From this initial estimate, our algorithm computes observer motion with very good precision, which is required for environment modeling (e.g., building a Digital Elevation Map). Objects are represented by a set of 3-D points, which are considered as the samples of a surface. No constraint is imposed on the form of the objects. The proposed algorithm is based on iteratively matching points in one set to the closest points in the other. A statistical method based on the distance distribution is used to deal with outliers, occlusion, appearance and disappearance, which allows us to do subset-subset matching. A least-squares technique is used to estimate 3-D motion from the point correspondences, which reduces the average distance between points in the two sets. Both synthetic and real data have been used to test the algorithm, and the results show that it is efficient and robust, and yields an accurate motion estimate.
2,177 citations
01 Jan 1992
TL;DR: In this article, a least-squares technique is used to estimate 3D motion from the point correspondences, which reduces the average distance between curves in two sets, and yields an accurate motion estimate.
Abstract: Geometric matching in general is a difficult unsolved problem in computer vision. Fortunately, in many practical applications, some a priori knowledge exists which considerably simplifies the problem. In visual navigation, for example, the motion between successive positions is usually either small or approximately known, but a more precise registration is required for environment modeling. The algorithm described in this report meets this need. Objects are represented by free-form curves, i.e., arbitrary space curves of the type found in practice. A curve is available in the form of a set of chained points. The proposed algorithm is based on iteratively matching points on one curve to the closest points on the other. A least-squares technique is used to estimate 3-D motion from the point correspondences, which reduces the average distance between curves in two sets. Both synthetic and real data have been used to test the algorithm, and the results show that it is efficient and robust, and yields an accurate motion estimate. The algorithm can be easily extended to solve similar problems such as 2-D curve matching and 3-D surface matching.
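The iterate-and-match idea behind both abstracts above can be sketched compactly in 2-D: match each moving point to the closest reference point, solve the least-squares rotation and translation (Kabsch/Procrustes), apply it, and repeat. Outlier rejection, the distance-distribution statistics, and the 3-D case are omitted from this sketch:

```python
# Compact 2-D iterative-closest-point sketch. Correspondences come from
# nearest-neighbor matching; the rigid transform is the least-squares
# (Kabsch/Procrustes) solution at each iteration.
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ src @ R.T + t."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)        # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

def icp(src, dst, iterations=20):
    cur = src.copy()
    for _ in range(iterations):
        # match each moving point to its closest reference point
        d = np.linalg.norm(cur[:, None] - dst[None], axis=2)
        matched = dst[d.argmin(1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

dst = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
theta = 0.1                               # small known motion, as in navigation
R0 = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
src = dst @ R0.T + np.array([0.05, -0.02])
print(np.allclose(icp(src, dst), dst, atol=1e-6))  # -> True
```

The example relies on the same a priori knowledge the abstract mentions: the initial motion is small, so nearest-neighbor matching finds the correct correspondences and the iteration converges.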
1,986 citations
23 Nov 2007
TL;DR: This new edition now contains essential information on steganalysis and steganography, and digital watermark embedding is given a complete update with new processes and applications.
Abstract: Digital audio, video, images, and documents are flying through cyberspace to their respective owners. Unfortunately, along the way, individuals may choose to intervene and take this content for themselves. Digital watermarking and steganography technology greatly reduces the instances of this by limiting or eliminating the ability of third parties to decipher the content that they have taken. The many techniques of digital watermarking (embedding a code) and steganography (hiding information) continue to evolve as applications that necessitate them do the same. The authors of this second edition provide an update on the framework for applying these techniques that they provided researchers and professionals in the first well-received edition. Steganography and steganalysis (the art of detecting hidden information) have been added to a robust treatment of digital watermarking, as many in each field research and deal with the other. New material includes watermarking with side information, QIM, and dirty-paper codes. The revision and inclusion of new material by these influential authors has created a must-own book for anyone in this profession.
*This new edition now contains essential information on steganalysis and steganography
*New concepts and new applications including QIM introduced
*Digital watermark embedding is given a complete update with new processes and applications
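The QIM (quantization index modulation) concept mentioned above can be sketched as embedding one bit by quantizing a sample onto one of two interleaved lattices; the step size here is an illustrative robustness/distortion trade-off, not a value from the book:

```python
# Hedged sketch of basic quantization index modulation (QIM): the embedded bit
# selects which of two offset quantizer lattices the sample snaps to.

def qim_embed(x, bit, step=8.0):
    """Quantize x to the lattice offset by bit * step/2."""
    offset = bit * step / 2.0
    return round((x - offset) / step) * step + offset

def qim_detect(y, step=8.0):
    """Decide which lattice y is closer to."""
    d0 = abs(y - qim_embed(y, 0, step))
    d1 = abs(y - qim_embed(y, 1, step))
    return 0 if d0 <= d1 else 1

y = qim_embed(130.0, 1)      # snaps to the nearest bit-1 lattice point
print(y, qim_detect(y))      # -> 132.0 1
print(qim_detect(y + 1.5))   # -> 1 (bit survives small noise)
```

Detection needs no access to the original host, and the bit survives any perturbation smaller than a quarter of the step size, which is the dirty-paper-coding flavor of robustness the book's new material covers.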
1,773 citations