scispace - formally typeset

Showing papers on "Top-hat transform published in 2010"


Posted Content
TL;DR: An overview of underlying concepts, along with algorithms commonly used for image enhancement, is provided, with particular reference to point processing methods and histogram processing.
Abstract: The principal objective of image enhancement is to process an image so that the result is more suitable than the original image for a specific application. Digital image enhancement techniques provide a multitude of choices for improving the visual quality of images. The appropriate choice of such techniques is greatly influenced by the imaging modality, the task at hand, and the viewing conditions. This paper provides an overview of underlying concepts, along with algorithms commonly used for image enhancement, focusing on spatial domain techniques with particular reference to point processing methods and histogram processing.
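Histogram equalization, one of the histogram-processing methods surveyed above, can be sketched in a few lines of NumPy (a minimal illustration, not the paper's code):

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image.

    Classic point processing: gray levels are remapped through the
    normalized cumulative histogram so output levels spread more evenly
    over [0, 255].
    """
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                  # first non-zero CDF value
    # Standard equalization mapping, scaled back to 0..255.
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]

# A dark, low-contrast ramp (levels 0..63) gets stretched over the full range.
dark = np.tile(np.arange(0, 64, dtype=np.uint8), (8, 1))
eq = equalize_histogram(dark)
```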

363 citations


Book
03 May 2010
TL;DR: A textbook covering the fundamentals of image processing (enhancement, mathematical morphology, segmentation, distance transformation, representation, feature extraction, and pattern recognition) together with applications in face analysis, document processing, watermarking, steganography, and solar image analysis.
Abstract: PART I: FUNDAMENTALS.
1 INTRODUCTION. 1.1 The World of Signals. 1.2 Digital Image Processing. 1.3 Elements of an Image Processing System. Appendix 1.A Selected List of Books on Image Processing and Computer Vision from Year 2000. References.
2 MATHEMATICAL PRELIMINARIES. 2.1 Laplace Transform. 2.2 Fourier Transform. 2.3 Z-Transform. 2.4 Cosine Transform. 2.5 Wavelet Transform.
3 IMAGE ENHANCEMENT. 3.1 Grayscale Transformation. 3.2 Piecewise Linear Transformation. 3.3 Bit Plane Slicing. 3.4 Histogram Equalization. 3.5 Histogram Specification. 3.6 Enhancement by Arithmetic Operations. 3.7 Smoothing Filter. 3.8 Sharpening Filter. 3.9 Image Blur Types and Quality Measures.
4 MATHEMATICAL MORPHOLOGY. 4.1 Binary Morphology. 4.2 Opening and Closing. 4.3 Hit-or-Miss Transform. 4.4 Grayscale Morphology. 4.5 Basic Morphological Algorithms. 4.6 Morphological Filters.
5 IMAGE SEGMENTATION. 5.1 Thresholding. 5.2 Object (Component) Labeling. 5.3 Locating Object Contours by the Snake Model. 5.4 Edge Operators. 5.5 Edge Linking by Adaptive Mathematical Morphology. 5.6 Automatic Seeded Region Growing. 5.7 A Top-Down Region Dividing Approach.
6 DISTANCE TRANSFORMATION AND SHORTEST PATH PLANNING. 6.1 General Concept. 6.2 Distance Transformation by Mathematical Morphology. 6.3 Approximation of Euclidean Distance. 6.4 Decomposition of Distance Structuring Element. 6.5 The 3D Euclidean Distance. 6.6 The Acquiring Approaches. 6.7 The Deriving Approaches. 6.8 The Shortest Path Planning. 6.9 Forward and Backward Chain Codes for Motion Planning. 6.10 A Few Examples.
7 IMAGE REPRESENTATION AND DESCRIPTION. 7.1 Run-Length Coding. 7.2 Binary Tree and Quadtree. 7.3 Contour Representation. 7.4 Skeletonization by Thinning. 7.5 Medial Axis Transformation. 7.6 Object Representation and Tolerance.
8 FEATURE EXTRACTION. 8.1 Fourier Descriptor and Moment Invariants. 8.2 Shape Number and Hierarchical Features. 8.3 Corner Detection. 8.4 Hough Transform. 8.5 Principal Component Analysis. 8.6 Linear Discriminant Analysis. 8.7 Feature Reduction in Input and Feature Spaces.
9 PATTERN RECOGNITION. 9.1 The Unsupervised Clustering Algorithm. 9.2 Bayes Classifier. 9.3 Support Vector Machine. 9.4 Neural Networks. 9.5 The Adaptive Resonance Theory Network. 9.6 Fuzzy Sets in Image Analysis.
PART II: APPLICATIONS.
10 FACE IMAGE PROCESSING AND ANALYSIS. 10.1 Face and Facial Feature Extraction. 10.2 Extraction of Head and Face Boundaries and Facial Features. 10.3 Recognizing Facial Action Units. 10.4 Facial Expression Recognition in JAFFE Database.
11 DOCUMENT IMAGE PROCESSING AND CLASSIFICATION. 11.1 Block Segmentation and Classification. 11.2 Rule-Based Character Recognition System. 11.3 Logo Identification. 11.4 Fuzzy Typographical Analysis for Character Preclassification. 11.5 Fuzzy Model for Character Classification.
12 IMAGE WATERMARKING. 12.1 Watermarking Classification. 12.2 Spatial Domain Watermarking. 12.3 Frequency-Domain Watermarking. 12.4 Fragile Watermark. 12.5 Robust Watermark. 12.6 Combinational Domain Digital Watermarking.
13 IMAGE STEGANOGRAPHY. 13.1 Types of Steganography. 13.2 Applications of Steganography. 13.3 Embedding Security and Imperceptibility. 13.4 Examples of Steganography Software. 13.5 Genetic Algorithm-Based Steganography.
14 SOLAR IMAGE PROCESSING AND ANALYSIS. 14.1 Automatic Extraction of Filaments. 14.2 Solar Flare Detection. 14.3 Solar Corona Mass Ejection Detection.
INDEX.

237 citations


Journal ArticleDOI
TL;DR: The quantitative peak signal-to-noise ratio (PSNR) and visual results show the superiority of the proposed technique over the conventional bicubic interpolation, wavelet zero padding, and Irani and Peleg based image resolution enhancement techniques.
Abstract: In this letter, a satellite image resolution enhancement technique based on interpolation of the high-frequency subband images obtained by dual-tree complex wavelet transform (DT-CWT) is proposed. DT-CWT is used to decompose an input low-resolution satellite image into different subbands. Then, the high-frequency subband images and the input image are interpolated, followed by combining all these images to generate a new high-resolution image by using inverse DT-CWT. The resolution enhancement is achieved by using directional selectivity provided by the CWT, where the high-frequency subbands in six different directions contribute to the sharpness of the high-frequency details such as edges. The quantitative peak signal-to-noise ratio (PSNR) and visual results show the superiority of the proposed technique over the conventional bicubic interpolation, wavelet zero padding, and Irani and Peleg based image resolution enhancement techniques.

198 citations


Journal ArticleDOI
TL;DR: Experiments showed that this novel image enhancement approach can not only enhance an image's details but can also preserve its edge features effectively.
Abstract: Low contrast and poor quality are main problems in the production of medical images. By using the wavelet transform and Haar transform, a novel image enhancement approach is proposed. First, a medical image was decomposed with wavelet transform. Secondly, all high-frequency sub-images were decomposed with Haar transform. Thirdly, noise in the frequency field was reduced by the soft-threshold method. Fourthly, high-frequency coefficients were enhanced by different weight values in different sub-images. Then, the enhanced image was obtained through the inverse wavelet transform and inverse Haar transform. Lastly, the image's histogram was stretched by nonlinear histogram equalisation. Experiments showed that this method can not only enhance an image's details but can also preserve its edge features effectively.
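The core steps above (Haar decomposition, soft-threshold noise reduction, and detail weighting) can be sketched as a simplified one-level NumPy illustration; the threshold and gain values are assumptions for demonstration, not the authors' settings:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition of an even-sized image."""
    a = (img[0::2] + img[1::2]) / 2.0          # row-pair averages
    d = (img[0::2] - img[1::2]) / 2.0          # row-pair details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0       # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0       # horizontal details
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0       # vertical details
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0       # diagonal details
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((2 * h, 2 * w))
    out[0::2], out[1::2] = a + d, a - d
    return out

def soft_threshold(x, t):
    """Shrink coefficients toward zero by t (soft-threshold denoising)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Denoise the high-frequency subbands, then boost them before reconstruction.
rng = np.random.default_rng(0)
img = np.outer(np.arange(8.0), np.ones(8)) + 0.1 * rng.standard_normal((8, 8))
ll, lh, hl, hh = haar2d(img)
gain, t = 1.5, 0.05                            # illustrative weight/threshold
enhanced = ihaar2d(ll, gain * soft_threshold(lh, t),
                   gain * soft_threshold(hl, t),
                   gain * soft_threshold(hh, t))
```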

117 citations


Journal ArticleDOI
TL;DR: The experimental results demonstrate that the ripplet transform can provide efficient representation of edges in images and holds great potential for image processing such as image restoration, image denoising and image compression.

99 citations


Journal ArticleDOI
TL;DR: The ambience of the image (warm or cold color impression) is maintained after enhancement, no additional light sources are added to the scene, and no halo or blocking effects are amplified by overenhancement.
Abstract: A new algorithm of Natural Enhancement of Color Image (NECI) is proposed, inspired by the multiscale Retinex model. There are four steps to realize this enhancement. First, the image appearance is rendered by content-dependent global mapping for light cast correction, and then a modified Retinex filter is applied to enhance the local contrast. Histogram rescaling is used afterwards for normalization purposes. Finally, the texture details of the image are enhanced by emphasizing the high-frequency components of the image using multichannel decomposition of the Cortex Transform. In the contrast enhancement step, the luminance channel is enhanced first, and then a weighting map is calculated from the luminance enhancement information and applied to the chrominance channel in the CIELCh color space, which enables a proportional enhancement of chrominance. This avoids the problem of unbalanced enhancement in classical RGB independent channel operation. In this work, it is believed that image enhancement should avoid dramatic modifications to the image, such as light condition changes, color temperature alteration, or additional artifacts being introduced or amplified. Disregarding the light conditions of the scene usually leads to unnaturally sharpened images or dramatic white balance changes. In the proposed method, the ambience of the image (warm or cold color impression) is maintained after enhancement, no additional light sources are added to the scene, and no halo effect or blocking effect is amplified due to overenhancement. It thus realizes a natural enhancement of the color image. Different types of natural scene images have been tested, and encouraging performance is obtained with the proposed method.
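The Retinex-style local contrast step can be illustrated with a single-scale sketch (log of the image minus log of its Gaussian-blurred surround); the sigma value and plain edge-replicate padding are assumptions for illustration, not the authors' modified filter:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with simple edge-replicate padding."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()                               # kernel sums to 1
    p = np.pad(img, radius, mode='edge')
    rows = np.apply_along_axis(np.convolve, 1, p, k, mode='valid')
    return np.apply_along_axis(np.convolve, 0, rows, k, mode='valid')

def single_scale_retinex(img, sigma=2.0):
    """log(image) - log(surround): suppresses slowly varying illumination."""
    img = img.astype(float) + 1.0              # avoid log(0)
    return np.log(img) - np.log(gaussian_blur(img, sigma))

# A uniform image has no local contrast, so the Retinex output is ~0.
flat = single_scale_retinex(np.full((12, 12), 50.0))
```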

64 citations


Patent
Yuuta Hamada1, Takahiro Asai1
06 Jul 2010
TL;DR: In this paper, an image processing apparatus consisting of a generating unit and a superimposing unit is described, where the generating unit generates an image carrier representing a machine-readable image pattern by using information to be embedded into a first image.
Abstract: An image processing apparatus includes a generating unit and a superimposing unit. The generating unit generates an image carrier representing a machine-readable image pattern by using information to be embedded into a first image. The superimposing unit makes a second image translucent, the second image being an image of the image carrier, and superimposes the second image onto the first image in such a manner that the second image can be mechanically read and the first image can be read by a user.

37 citations


Patent
04 Nov 2010
TL;DR: In this paper, an image processing apparatus includes: an image synthesis unit generating a synthesized image by inputting images photographed at different positions and connecting strip areas cut from the images.
Abstract: An image processing apparatus includes: an image synthesis unit generating a synthesized image by inputting images photographed at different positions and connecting strip areas cut from the images. The image synthesis unit generates a left-eye synthesized image applied to display a 3-dimensional image by connecting and synthesizing left-eye image strips set in the images and generates a right-eye synthesized image applied to display a 3-dimensional image by connecting and synthesizing right-eye image strips set in the images. The image synthesis unit performs a process of setting the left-eye image strip and the right-eye image strip in an allowable range of set positions of the left-eye image strip and the right-eye image strip used to generate the left-eye synthesized image and the right-eye synthesized image, which are at different observing points, applicable to display the 3-dimensional images by acquiring the allowable range from a memory or calculating the allowable range.

37 citations


Patent
Seung-Ki Cho1, Hyun-Seok Hong1, Hee-chul Han1, Yang-lim Choi1, Yong-Ju Lee1 
29 Dec 2010
TL;DR: In this article, an apparatus and method for obtaining a high dynamic range (HDR) image is presented: a single sensor image captured with different exposure times in different regions is separated by exposure time, each separated image is restored to the resolution of the original capture, and the restored images are synthesized into the HDR result.
Abstract: Provided is an apparatus and method for obtaining a high dynamic range (HDR) image. The apparatus includes an image sensor generating a first image by applying different exposure times for units of different predetermined regions, an image separating unit separating the first image into images, each of which is composed of regions having an identical exposure time, an image restoring unit restoring the separated images in such a way that each of the separated images has a resolution that is the same as a resolution of the first image, and an image synthesizing unit synthesizing the restored images into a second image.

35 citations


Patent
30 Jul 2010
TL;DR: In this paper, the adopting ratio (weight coefficient) between the high image quality processing using the tensor projection method and the high image quality processing using another method is determined according to the degree of deviation from the input condition of the input image, and these processes are combined as appropriate.
Abstract: The present invention determines the adopting ratio (weight coefficient) between the high image quality processing using the tensor projection method and the high image quality processing using another method according to the degree of deviation from the input condition of the input image, and combines these processes as appropriate. This allows a satisfactory reconstruction image to be acquired even in a case of deviation from the input condition, and avoids deterioration of the high-quality image due to deterioration of the reconstruction image by the projective operation.

34 citations


Patent
Koshi Hatakeyama1
09 Dec 2010
TL;DR: In this article, an image processing method generates a first image by restoring both the amplitude and phase components of an input image, generates a second image by restoring the phase component only, and then composes the difference information between the two images with the second image according to a restoration level adjustment factor.
Abstract: An image processing method includes the steps of generating a first image through restoration processing of an amplitude component and a phase component of an input image, generating a second image that has an equal state of the phase component to that of the first image and a different state of the amplitude component from that of the first image through restoration processing of the phase component without restoration processing of the amplitude component of the input image, obtaining difference information between the first image and the second image, setting a restoration level adjustment factor used to adjust a restoration degree in the restoration processing, and generating a restoration adjusted image by composing the difference information with the second image according to the restoration level adjustment factor.

Journal ArticleDOI
TL;DR: The Multi-Wavelet Transform of image signals produces a non-redundant image representation, which provides better spatial and spectral localization of image information than the discrete wavelet transform.
Abstract: The fast development of digital image processing has led to the growth of feature extraction from images, which in turn has led to the development of image fusion. Image fusion is defined as the process of combining two or more different images into a new single image retaining important features from each image with extended information content. There are two approaches to image fusion, namely spatial fusion and transform fusion. In spatial fusion, the pixel values from the source images are directly summed and averaged to form the pixel of the composite image at that location. Transform fusion uses a transform to represent the source images at multiple scales. The most widely used transform for image fusion at multiple scales is the Wavelet Transform, since it minimizes structural distortions. However, the wavelet transform suffers from a lack of shift invariance and poor directionality, disadvantages which are overcome by the Stationary Wavelet Transform and the Dual Tree Wavelet Transform. The conventional convolution-based implementation of the discrete wavelet transform has high computational and memory requirements; lifting wavelets have been developed to overcome these drawbacks. The Multi-Wavelet Transform of image signals produces a non-redundant image representation, which provides better spatial and spectral localization of image information than the discrete wavelet transform. There are three levels of image fusion, namely pixel level, area level, and region level. This paper evaluates the performance of all levels of multi-focused image fusion using the Discrete Wavelet Transform, Stationary Wavelet Transform, Lifting Wavelet Transform, Multi-Wavelet Transform, Dual Tree Discrete Wavelet Transform, and Dual Tree Complex Wavelet Transform in terms of various performance measures.
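The two fusion approaches described above can be contrasted in a minimal sketch: spatial fusion averages pixels directly, while transform fusion applies a selection rule to subband coefficients. The max-absolute-value rule shown is a common choice assumed here for illustration; the paper compares transforms rather than fixing one rule:

```python
import numpy as np

def fuse_spatial(a, b):
    """Spatial (pixel-level) fusion: sum the source pixels and average."""
    return (a.astype(float) + b.astype(float)) / 2.0

def fuse_coeffs(ca, cb):
    """Transform-domain fusion rule: keep the coefficient of larger
    magnitude, assuming stronger detail coefficients mark in-focus content."""
    return np.where(np.abs(ca) >= np.abs(cb), ca, cb)

# Two detail subbands where each source is sharp in a different half:
left_sharp  = np.array([[5.0, 4.0, 0.1, 0.0]])
right_sharp = np.array([[0.2, 0.0, 3.0, 6.0]])
fused = fuse_coeffs(left_sharp, right_sharp)   # keeps the sharp half of each
```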

Patent
ji youn Han1, Chang-Seog Ko1
12 Nov 2010
TL;DR: In this paper, a three-dimensional image providing method and a display apparatus using the same are provided: when a particular manipulation is input (5630) from a user in the 2D mode, whether the input image is a 2D image or a 3D image is detected.
Abstract: A three-dimensional (3D) image providing method and a display apparatus using the same are provided. According to the 3D image providing method, if a particular manipulation is input (5630) from a user in the two-dimensional (2D) mode, whether an input image is a 2D image or a 3D image is detected. If the input image is the 3D image (5650), the display mode is changed to the 3D mode and the 3D image is displayed (5660). If the input image is the 2D image, the input 2D image is converted to a 3D image (5655) and the converted 3D image is displayed by changing the display mode to the 3D mode (5660). Thus, regardless of whether the input image is the 2D image or the 3D image, users can execute the 3D mode using the single manipulation.

Patent
01 Dec 2010
TL;DR: In this paper, a 3D imaging device is provided with a single set of imaging optics (12, 14), an imaging element (16), and an image processing unit (24), where the imaging element separately performs photoelectric conversion on the images transmitted through the first and second regions, outputting first image data and second image data.
Abstract: The disclosed 3D imaging device (10) is provided with a single set of imaging optics (12, 14), an imaging element (16), and an image processing unit (24). An image of a photographic subject passes through first and second regions that divide the imaging optics in a predetermined direction, undergoing pupil division and forming separate images on the imaging element (16). The imaging element separately performs photoelectric conversion on the images transmitted through the first and second regions, outputting first image data and second image data. The image processing unit (24) performs first image processing on the first image data and second image processing, which differs from the first image processing, on the second image data, in such a way as to decrease the difference in image quality between the processed first and second image data.

Proceedings ArticleDOI
14 Jul 2010
TL;DR: A technique is presented which produces an accurate fused image using the discrete wavelet transform (DWT) for feature extraction and Genetic Algorithms (GAs) to obtain a more optimized combined image.
Abstract: Image fusion is the process of combining the most relevant information from multiple source images to obtain an accurate fused image. In this paper, we fuse visual and thermal satellite images. In order to provide enhanced information, we have investigated image fusion techniques to obtain the most accurate information. This paper presents a technique which produces an accurate fused image using the discrete wavelet transform (DWT) for feature extraction and Genetic Algorithms (GAs) to obtain a more optimized combined image. The performance of the proposed image fusion scheme is evaluated with mutual information (MI) and root mean square error (RMSE), and it is also compared to the fused images generated by the Pixel Level GA based Image Fusion (PLGA_IF) and Discrete Wavelet Transform based Image Fusion (DWT_IF) techniques. Simulation results conducted with DWT and GA show that the proposed method outperforms the existing image fusion algorithms.

Proceedings ArticleDOI
03 Aug 2010
TL;DR: An overview of directional transforms of 2-D image coding is presented as well as a discussion of some existing problems and their potential solutions.
Abstract: Transform-based image coding has been the mainstream for many years, as witnessed from the early effort in JPEG to the recent advances in HD Photo. Traditionally, a 2-D transform used in image coding is always implemented separately along the vertical and horizontal directions, respectively. However, it is usually true that many image blocks contain oriented structures (e.g., edges) and/or textures that do not follow either the vertical or horizontal direction. The traditional 2-D transform thus may not be the most appropriate one for these image blocks. This well-known fact has recently triggered several attempts towards the development of directional transforms so as to better preserve the directional information in an image block. Some of these directional transforms have been applied in image coding, demonstrating a significant coding gain. This paper presents an overview of these directional transforms as well as a discussion of some existing problems and their potential solutions.

Journal ArticleDOI
TL;DR: The concept of direction image multiresolution is discussed, which is derived as a property of the 2-D discrete Fourier transform when it is split by 1-D transforms, and the resolution map is introduced as a result of uniting all direction images into log2 N series.
Abstract: We discuss the concept of direction image multiresolution, which is derived as a property of the 2-D discrete Fourier transform when it is split by 1-D transforms. The N×N image, where N is a power of 2, is considered as a unique set of splitting-signals in paired representation, which is the unitary 2-D frequency and 1-D time representation. The number of splitting-signals is 3N−2, and they have different durations, carry the spectral information of the image in disjoint subsets of frequency points, and can be calculated from the projection data along one of 3N/2 angles. The paired representation leads to the image composition by a set of 3N−2 direction images, which defines the directed multiresolution and contains periodic components of the image. We also introduce the concept of the resolution map, as a result of uniting all direction images into log2 N series. In the resolution map, all different periodic components (or structures) of the image are packed into an N×N matrix, which can be used for image processing in enhancement, filtration, and compression.

Proceedings ArticleDOI
01 Dec 2010
TL;DR: An image splicing blind detection approach based on moment features and Hilbert-Huang Transform is proposed, which extracts two groups of features from the first order histogram of the image DWT (Discrete Wavelet Transform) coefficients and the Hilbert- Huang Transform of theimage.
Abstract: Image splicing is considered to be simpler and more common than other image tampering technologies, and the image splicing problem can be tackled as a two-class classification problem under the pattern recognition framework. By analyzing the different characteristics of real and spliced images, an image splicing blind detection approach based on moment features and the Hilbert-Huang Transform is proposed. This algorithm extracts two groups of features from the first-order histogram of the image DWT (Discrete Wavelet Transform) coefficients and the Hilbert-Huang Transform of the image. The features are then fed into a Support Vector Machine to classify real and spliced images. Experimental results show that the average accuracy rate reaches 85.8696%, and real-time performance is improved.

Proceedings ArticleDOI
03 Dec 2010
TL;DR: A multiscale top-hat transform based image enhancement algorithm is proposed that efficiently enhances contrast and image details while producing very few noise regions.
Abstract: To efficiently enhance images, a multiscale top-hat transform based algorithm is proposed in this paper. Firstly, the multiscale white and black regions are extracted by using structuring elements with the same shape and increasing sizes. Then, two types of multiscale image features, namely the multiscale white and black image regions at each scale and the multiscale detail image regions between neighboring scales, are used to form the final extracted white and black image regions. Finally, the image is enhanced by enlarging the contrast between the extracted white and black image regions. Experimental results on images from different applications verified that the proposed algorithm efficiently enhances contrast and image details while producing very few noise regions, and could therefore be widely used in different applications.
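The white/black top-hat extraction and contrast step can be sketched with flat square structuring elements. This is a simplified variant that keeps only the per-scale maxima and omits the paper's between-scale detail regions:

```python
import numpy as np

def _windows(img, size):
    """All size x size sliding windows of img, with edge-replicate padding."""
    pad = size // 2
    p = np.pad(img, pad, mode='edge')
    return [p[i:i + img.shape[0], j:j + img.shape[1]]
            for i in range(size) for j in range(size)]

def erode(img, size):   return np.min(_windows(img, size), axis=0)
def dilate(img, size):  return np.max(_windows(img, size), axis=0)
def opening(img, size): return dilate(erode(img, size), size)
def closing(img, size): return erode(dilate(img, size), size)

def multiscale_tophat_enhance(img, sizes=(3, 5, 7)):
    """Multiscale top-hat enhancement: add the strongest white top-hat
    (bright details) and subtract the strongest black top-hat (dark
    details) found across the increasing structuring-element sizes."""
    img = img.astype(float)
    white = np.max([img - opening(img, s) for s in sizes], axis=0)
    black = np.max([closing(img, s) - img for s in sizes], axis=0)
    return img + white - black

# A lone bright detail doubles in contrast against a flat background.
spike = np.zeros((9, 9))
spike[4, 4] = 10.0
enhanced = multiscale_tophat_enhance(spike)
```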

Patent
Byong Min Kang1, Hwasup Lim1
27 Jul 2010
TL;DR: In this article, an image processing apparatus, method and computer-readable medium may extract a target object area from an input color image, based on an input depth image and the input colour image.
Abstract: Provided is an image processing apparatus, method and computer-readable medium. The image processing apparatus, method and computer-readable medium may extract a target object area from an input color image, based on an input depth image and the input color image. For the above image processing, the image processing may extract a silhouette area of a target object from the input depth image and refine the silhouette area of the target object based on the input color image.

Proceedings ArticleDOI
13 Jul 2010
TL;DR: A new method of medical image enhancement is presented that improves the visual quality of digital images, including images that exhibit dark shadows due to the limited dynamic range of imaging.
Abstract: This paper presents a new method of medical image enhancement that improves the visual quality of digital images, including images that exhibit dark shadows due to the limited dynamic range of imaging. A nonlinear image enhancement technique is used in the transform domain, enhancing the image by way of transform coefficient histogram matching. Processing includes global dynamic range correction and local contrast enhancement, which is able to enhance the luminance in dark shadows while keeping the overall tonality consistent with that of the input image. Logarithmic transform histogram matching is used, exploiting the fact that the relation between stimulus and perception is logarithmic. A transform-based measure of enhancement is used as a tool for evaluating the contrast performance of the proposed enhancement technique. The performance of the algorithm is compared quantitatively to classical histogram equalization using the aforementioned measure of enhancement. A number of experimental results over x-ray and facial images are presented to show the performance of the proposed algorithm alongside classical histogram equalization.
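The logarithmic stimulus-perception relation the method relies on corresponds to the classic global log transform, sketched below. The normalization constant is a common textbook choice; this is not the paper's full histogram-matching procedure:

```python
import numpy as np

def log_enhance(img, max_out=255.0):
    """Global log-transform dynamic range correction: s = c * log(1 + r).

    The logarithm compresses the bright range and expands the dark range,
    lifting detail out of dark shadows. Assumes img.max() > 0.
    """
    img = img.astype(float)
    c = max_out / np.log(1.0 + img.max())      # map the peak to max_out
    return c * np.log1p(img)

# Dark values are lifted strongly while the peak stays at 255.
x = np.array([[0.0, 10.0, 255.0]])
y = log_enhance(x)
```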

Proceedings ArticleDOI
Jun Wang1, Ying Tan1
07 Jul 2010
TL;DR: An applicable genetic programming (GP) approach is given to solve binary image analysis and gray scale image enhancement problems: given a section of a binary image and the corresponding goal image, it automatically produces a mathematical morphological operation sequence that transforms the target into the goal.
Abstract: This paper gives an applicable genetic programming (GP) approach to solve binary image analysis and gray scale image enhancement problems. Given a section of a binary image and the corresponding goal image, the algorithm automatically produces a mathematical morphological operation sequence that transforms the target into the goal. When the operation sequence is applied to the whole image, the objective of image analysis is achieved. With a well-defined chromosome structure and evolution strategy, the effectiveness of evolution is promoted and more complex morphological operations can be composed in a short sequence. In addition, the algorithm is applied to infrared finger vein gray scale images to enhance the region of interest. Its effect is examined in an identity authentication application, where the accuracy of authentication is improved.

Journal ArticleDOI
TL;DR: Extensive experimental results and comparisons with some representative image watermarking methods confirm the excellent performance of the proposed method in robustness against various geometric distortions as well as common image processing operations.
Abstract: Watermarking aims to hide particular information into some carrier but does not change the visual cognition of the carrier itself. Local features are good candidates to address the watermark synchronization error caused by geometric distortions and have attracted great attention for content-based image watermarking. This paper presents a novel feature point-based image watermarking scheme against geometric distortions. Scale invariant feature transform (SIFT) is first adopted to extract feature points and to generate a disk for each feature point that is invariant to translation and scaling. For each disk, orientation alignment is then performed to achieve rotation invariance. Finally, watermark is embedded in middle-frequency discrete Fourier transform (DFT) coefficients of each disk to improve the robustness against common image processing operations. Extensive experimental results and comparisons with some representative image watermarking methods confirm the excellent performance of the proposed method in robustness against various geometric distortions as well as common image processing operations.

Journal ArticleDOI
TL;DR: Pseudo-color images are compared to their original monochrome images, the three different types of transforms are compared to one another, and the post-processing techniques are compared.
Abstract: In digital image processing, image enhancement is employed to give a better look to an image. Color is one of the best ways to visually enhance an image. Pseudo-color refers to coloring an image by mapping gray scale values to a three-dimensional color space. In this paper we used a pseudo-color technique in the frequency domain to enhance ultrasound images, using three different types of transforms: the Fourier transform, the Discrete Cosine transform, and the Walsh-Hadamard transform. After obtaining these pseudo-color images, we applied a high-frequency emphasis filter or histogram stretch as a post process. We used a subjective study to compare images. First, we compared pseudo-color images to their original monochrome images. Second, we compared the three different types of transforms. Lastly, we compared the post-processing techniques.
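Pseudo-coloring at its simplest is a gray-to-RGB mapping. The spatial-domain palette below (dark to blue, mid to green, bright to red) is an illustrative assumption and only stands in for the paper's frequency-domain method:

```python
import numpy as np

def pseudo_color(gray):
    """Map 8-bit gray levels to RGB in [0, 1].

    Illustrative palette (an assumption, not the paper's transform):
    dark -> blue, mid-gray -> green, bright -> red.
    """
    g = gray.astype(float) / 255.0
    r = np.clip(2.0 * g - 1.0, 0.0, 1.0)       # ramps up over bright half
    b = np.clip(1.0 - 2.0 * g, 0.0, 1.0)       # ramps down over dark half
    grn = 1.0 - r - b                          # peaks at mid-gray
    return np.stack([r, grn, b], axis=-1)

# Dark pixel -> blue, mid pixel -> green, bright pixel -> red.
rgb = pseudo_color(np.array([[0, 128, 255]], dtype=np.uint8))
```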

Patent
Kanichi Koyama1, Yukio Mori1
16 Apr 2010
TL;DR: In this article, an image processing device which uses a main image and a sub-image shot at different times to generate an output image is presented. But the main image is blurred based on a variation in the position of and the size of the specific subject between the main and the sub-images.
Abstract: There is provided an image processing device which uses a main image and a sub-image shot at different times to generate an output image. The image processing device includes a subject detection portion which detects a specific subject from each of the main image and the sub-image and detects the position and the size of the specific subject on the main image and the position and the size of the specific subject on the sub-image, and generates the output image by causing the main image to be blurred based on a variation in the position of and a variation in the size of the specific subject between the main image and the sub-image.

Proceedings ArticleDOI
11 Jul 2010
TL;DR: The experimental results show that the ripplet transform can provide efficient representation of images that contain edges and holds great potential for image denoising and image compression.
Abstract: Current image representation schemes have limited capability to represent 2D singularities (e.g., edges in an image). The wavelet transform represents 1D singularities better than the Fourier transform, and the more recently invented ridgelet and curvelet transforms resolve 2D singularities better than the wavelet transform. To further improve the representation of 2D singularities, this paper proposes a new transform called the ripplet transform Type II (ripplet-II). The new transform is able to capture 2D singularities along a family of curves in images; in fact, the ridgelet transform is a special case of the ripplet-II transform with degree 1. The ripplet-II transform can be used for feature extraction owing to its efficiency in representing edges and textures. Experiments in texture classification and image retrieval demonstrate that the ripplet-II-based scheme outperforms wavelet- and ridgelet-based approaches.

Journal ArticleDOI
TL;DR: The proposed approach combines the relevant features of the input images and produces a composite image that is rich in information content for the human eye and introduces no distortion in applications under low-light and/or non-uniform lighting conditions.
Abstract: Most existing image enhancement algorithms work on a single image, so their performance is limited by the sensor with which the image was taken; in some cases they fail entirely to provide the necessary enhancement. This paper proposes a composite-image approach for enhancing still images. The proposed approach combines the relevant features of the input images and produces a composite image that is rich in information content for the human eye. The input images are first decomposed into multiple resolutions using the contourlet transform, which provides a better representation than conventional transforms. The transform coefficients are then combined according to predefined fusion rules, and the result is obtained by applying the inverse contourlet transform to the composite coefficients. The results are encouraging, and the algorithm introduces no distortion in applications under low-light and/or non-uniform lighting conditions. The composite image also retains almost all of the salient features of the input images.
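The fusion rules themselves are not spelled out in the abstract. One common choice, assumed here only for illustration, is the max-absolute rule: at each position, keep the transform coefficient with the larger magnitude, on the premise that larger coefficients carry the more salient detail:

```python
def fuse_max_abs(coeffs_a, coeffs_b):
    """Fuse two equal-length lists of transform coefficients by
    keeping, at each index, the coefficient of larger magnitude.
    A real contourlet fusion would apply this per subband.
    """
    if len(coeffs_a) != len(coeffs_b):
        raise ValueError("coefficient lists must have equal length")
    return [a if abs(a) >= abs(b) else b
            for a, b in zip(coeffs_a, coeffs_b)]
```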

Proceedings ArticleDOI
23 Oct 2010
TL;DR: To improve on traditional medical image fusion algorithms, which often lose detail, the source medical images are decomposed by a lifting wavelet, and the high- and low-frequency coefficients are weighted according to different fusion rules.
Abstract: Image fusion is a technique that integrates the information of multiple images into one image, making the fused image more accurate, more comprehensive, and more reliable. To improve on traditional medical image fusion algorithms, which often cause loss of detail, the source medical images are decomposed by a lifting wavelet, and the high- and low-frequency coefficients are weighted according to different fusion rules. Finally, the target image is obtained through the inverse wavelet transform. Simulation results show that the algorithm is feasible and effective.
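A minimal sketch of the lifting idea the abstract relies on, using Haar-style predict/update steps on a 1D signal (not the specific lifting wavelet of the paper): the signal is split into even and odd samples, the odds are predicted from the evens (yielding detail coefficients), and the evens are updated with the details (yielding the approximation). The inverse undoes the steps in reverse order, so the scheme is perfectly invertible.

```python
def lifting_haar_forward(signal):
    """One level of a Haar-style lifting transform on an even-length signal."""
    if len(signal) % 2:
        raise ValueError("signal length must be even")
    even, odd = signal[0::2], signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]         # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update step
    return approx, detail

def lifting_haar_inverse(approx, detail):
    """Invert the lifting steps and re-interleave the samples."""
    even = [a - d / 2 for a, d in zip(approx, detail)]  # undo update
    odd = [e + d for e, d in zip(even, detail)]         # undo predict
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out
```

In a fusion setting, the `approx` and `detail` coefficients of two source images would be combined by the weighted fusion rules before applying the inverse transform.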

Journal ArticleDOI
TL;DR: An enhancement approach for remote sensing images based on orthogonal wavelet analysis and pseudo-color image processing is presented; the enhanced images are greatly improved in both visual effect and noise characteristics.
Abstract: Image enhancement techniques based on wavelet analysis are applicable only to black-and-white images, while pseudo-color image processing alone cannot adequately handle some of the detail information in an image. In this paper, an enhancement approach for remote sensing images based on orthogonal wavelet analysis and pseudo-color image processing is presented. The enhanced remote sensing images are greatly improved in both visual effect and noise characteristics. The method is simple yet flexible, requires little computation, is fast, and is easy to operate. It has great potential in the research and application of remote sensing image enhancement.

Patent
05 Aug 2010
TL;DR: In this paper, a first image captured by a first camera can be aligned with at least a segment of a second image captured with a second camera, where the images have an overlapping field of view.
Abstract: A first image captured by a first camera can be aligned with at least a segment of a second image captured by a second camera, where the images have an overlapping field of view. Image characteristic values indicative of image characteristics at positions within the overlapping field of view are determined for each image. The difference in position between corresponding image characteristic values in the overlapping field of view of the first image and of the second image is then determined, and a transform is applied to the first image to adjust its orientation relative to the second image. The first and second images can be aligned when the difference in position between corresponding image characteristic values in the two images is within a predetermined amount.
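A rough sketch of the alignment test described above. The function name, the point representation, and the default tolerance are illustrative assumptions, not the patent's definitions:

```python
import math

def aligned(points_a, points_b, tolerance=1.0):
    """Check whether corresponding characteristic points from two
    overlapping images are within a positional tolerance.

    points_a / points_b are lists of (x, y) positions of corresponding
    image characteristics in the overlapping field of view. Alignment
    is declared when every pairwise distance is at most `tolerance`
    (the patent's 'predetermined amount').
    """
    if len(points_a) != len(points_b):
        raise ValueError("need the same number of corresponding points")
    return all(math.dist(pa, pb) <= tolerance
               for pa, pb in zip(points_a, points_b))
```

In practice this check would run after each candidate transform of the first image, iterating until the residual differences fall below the tolerance.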