scispace - formally typeset

Showing papers on "Top-hat transform published in 2009"


Patent
23 Sep 2009
TL;DR: In this paper, an image processing apparatus with a plurality of addition means repeatedly adds pixels of a differential image at a second, higher resolution to an input image, generating a second-resolution image as the result after a predetermined number of addition passes, each pass taking as inputs the first-resolution image and the second-resolution image produced by the immediately preceding pass.
Abstract: An image processing apparatus includes a plurality of addition means and an image processing means. The addition means performs addition processing of adding pixels of a differential image at a second resolution representing a difference between an inputted image at a first resolution and an image at the second resolution higher than the first resolution as pixels of an inputted image at the second resolution. The image processing means is configured to perform second and subsequent addition processing, and generate an image of the second resolution as a processing result by performing the addition processing for a predetermined number of times. The addition processing is performed with inputs of an image at the first resolution and an image at the second resolution obtained by an immediately preceding addition processing, which are different from each other.

146 citations


Journal ArticleDOI
TL;DR: A triple-image encryption scheme using the fractional Fourier transform is proposed, expanding the design of a multiple-image algorithm; in theory, all image information is preserved when the images are decrypted with the correct key.

86 citations


Journal ArticleDOI
TL;DR: Changing the appearance of an image can be a complex and non-intuitive task, since one must take spatial image considerations into account along with the color constraints.
Abstract: Changing the appearance of an image can be a complex and non-intuitive task. Often the target image colors and look are only known vaguely, and many trials are needed to reach the desired result. Moreover, the effect of a specific change on an image is difficult to envision, since one must take spatial image considerations into account along with the color constraints. The tools provided by today's image processing applications can become highly technical and non-intuitive, with various gauges and knobs. In this paper we introduce a method for changing image appearance by navigation, focusing on recoloring images. The user visually navigates a high-dimensional space of possible color manipulations of an image, either exploring it for inspiration or refining choices by navigating into subregions of this space toward a specific goal. This navigation is enabled by modeling the chroma channels of an image's colors using a Gaussian Mixture Model (GMM). The Gaussians model both color and spatial image coordinates, and provide a high-dimensional parameterization space of a rich variety of color manipulations. The user's actions are translated into transformations of the model's parameters, which recolor the image. This approach provides both inspiration and intuitive navigation in the complex space of image color manipulations.
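The Gaussian modeling of chroma can be sketched in miniature. Below, a single-Gaussian simplification (not the paper's full spatial GMM) models one chroma channel by its mean and standard deviation, and a recoloring is expressed as a move of those parameters; all names are illustrative.

```python
import math

def recolor_channel(chroma, new_mean, new_std=None):
    """Re-express a chroma channel under a shifted Gaussian model.

    One-component simplification of the paper's GMM: fit mean/std to the
    channel, standardize each pixel, then map it under the target Gaussian.
    """
    n = len(chroma)
    mean = sum(chroma) / n
    var = sum((c - mean) ** 2 for c in chroma) / n
    std = math.sqrt(var) or 1.0  # guard against a constant channel
    if new_std is None:
        new_std = std
    return [(c - mean) / std * new_std + new_mean for c in chroma]

# Shift a toy channel's mean from 100 to 140 while keeping its spread.
pixels = [90.0, 100.0, 110.0]
shifted = recolor_channel(pixels, 140.0)
```

Navigating the paper's parameterization space then amounts to smooth changes of such per-component parameters, over both color and spatial coordinates.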

77 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed lossless, robust copyright-protection scheme based on cryptography and watermarking is effective and robust against common image processing and geometric distortions.

68 citations


Patent
22 Oct 2009
TL;DR: In this article, a two-dimensional array of pixel values of a current block to be encoded is transformed into a two-dimensional array of transform coefficients, a scan order is determined depending on that array, and the coefficients are scanned in that order into a one-dimensional array for coding.
Abstract: Images are coded with higher efficiency while maintaining the same image quality. An image coding method of coding an image on a block basis, including: transforming (S1201) a two-dimensional array of pixel values of a current block to be encoded into a two-dimensional array of transform coefficients; determining (S1202), depending on the two-dimensional array of the transform coefficients, a scan order for scanning the transform coefficients of the two-dimensional array; scanning (S1203) the transform coefficients of the two-dimensional array sequentially according to the scan order, to generate a one-dimensional array of the transform coefficients; and coding (S1204) the transform coefficients of the one-dimensional array.
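The determining/scanning steps above turn a 2-D coefficient array into a 1-D array along a chosen order. As an illustration, here is the common fixed zigzag order; the patent's point is that the order is chosen adaptively, and zigzag is just one candidate.

```python
def zigzag_order(n):
    """Return (row, col) pairs visiting an n-by-n block in zigzag order.

    Anti-diagonals (constant r+c) are visited in turn; odd diagonals run
    top-to-bottom, even diagonals bottom-to-top, as in JPEG.
    """
    return sorted(
        ((r, c) for r in range(n) for c in range(n)),
        key=lambda rc: (rc[0] + rc[1],
                        rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]),
    )

def scan(block):
    """Flatten a 2-D coefficient block into a 1-D list along the zigzag."""
    return [block[r][c] for r, c in zigzag_order(len(block))]

# This block's values are chosen so the zigzag reads them in order 1..9.
coeffs = [[1, 2, 6],
          [3, 5, 7],
          [4, 8, 9]]
flat = scan(coeffs)
```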

67 citations


Proceedings ArticleDOI
01 Dec 2009
TL;DR: It is shown that savings of 25% in the number of arithmetic operations can easily be achieved using the proposed transform operator without noticeable degradations in the reconstructed images.
Abstract: In this paper, we propose an efficient 8×8 transform matrix for image compression by appropriately introducing some zeros in the 8×8 signed DCT matrix. We show that the proposed transform is orthogonal, which is a highly desirable property. In order to make this novel transform more attractive for recent real-time applications, we develop an efficient algorithm for its fast computation. By using this algorithm, the proposed transform requires only 18 additions to transform an 8-point sequence. Compared to the existing 8×8 approximated DCT matrices, it is shown that savings of 25% in the number of arithmetic operations can easily be achieved using the proposed transform operator without noticeable degradations in the reconstructed images. We also present simulation results using some standard test images to show the efficiency of the proposed transform in image compression.
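The idea of a multiplierless approximation can be illustrated with the signed DCT the authors start from: keeping only the signs of the 8-point DCT-II entries yields a matrix of ±1 that can be applied with additions alone. The paper's contribution (zeroing selected entries to restore orthogonality and reach an 18-addition count) is not reproduced here.

```python
import numpy as np

# Build the 8-point DCT-II basis, then keep only the signs of its entries:
# the "signed DCT" that multiplierless approximations start from.
k = np.arange(8).reshape(-1, 1)   # frequency index
n = np.arange(8).reshape(1, -1)   # sample index
dct = np.cos((2 * n + 1) * k * np.pi / 16)
signed = np.sign(dct)             # entries in {-1, +1}: additions only

# A constant 8-point block compacts all its energy into coefficient 0,
# since every other row has balanced +1/-1 entries.
y = signed @ np.ones(8)
```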

59 citations


Journal ArticleDOI
TL;DR: In this article, an improved Otsu's method was employed for segmenting the gray images of foreign fibers, and a modified closing operation in mathematics morphology was proposed to fill up the image gaps of the wire-like foreign fibers caused by segmentation, and an area threshold method was suggested to remove the small objects generated by pseudo-foreign-fibers.

54 citations


Proceedings ArticleDOI
Marius Tico1, Kari Pulli1
07 Nov 2009
TL;DR: An image enhancement algorithm based on fusing the visual information present in two images of the same scene, captured with different exposure times to exploit the differences between the image degradations that affect the two images.
Abstract: We present an image enhancement algorithm based on fusing the visual information present in two images of the same scene, captured with different exposure times. The main idea is to exploit the differences between the image degradations that affect the two images. On one hand the short-exposed image is less affected by motion blur, whereas the long-exposed image is less affected by noise. Different fusion rules are designed for the luminance and chrominance components so as to preserve the desirable properties from each input image. We also present a method for estimating the brightness transfer function between the input images for photometric calibration of the short-exposed image with respect to the long-exposed image. As no global blur PSF is assumed, our method can deal with blur from both camera and object motions. We demonstrate the algorithm by a series of experiments and simulations.
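A per-pixel luminance fusion rule in this spirit can be sketched as follows; the weighting below is a hypothetical agreement-based rule, not the authors' exact design.

```python
def fuse_luminance(short_exp, long_exp, sigma=10.0):
    """Per-pixel blend of a short- and a long-exposure luminance signal.

    Hypothetical rule: where the two exposures agree, the cleaner long
    exposure dominates; where they disagree (likely motion blur in the
    long exposure) the sharp short exposure takes over.
    """
    fused = []
    for s, l in zip(short_exp, long_exp):
        w = 1.0 / (1.0 + (abs(s - l) / sigma) ** 2)  # agreement weight
        fused.append(w * l + (1.0 - w) * s)
    return fused

short = [10.0, 200.0, 12.0]   # sharp but noisy
long_ = [10.0, 120.0, 12.0]   # clean but blurred at the edge pixel
out = fuse_luminance(short, long_)
```

The middle pixel, where the exposures disagree strongly, stays close to the short-exposure value; the others follow the long exposure.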

52 citations


Patent
11 Dec 2009
TL;DR: In this paper, an image processing apparatus has a noise reduction processing control portion which controls the contents of image processing for obtaining the third image from the first image according to the noise level in the first one.
Abstract: An image processing apparatus outputs an output image by synthesizing a first image obtained by shooting, a second image obtained by shooting with an exposure time longer than the exposure time of the first image, and a third image obtained by reducing noise in the first image. The image processing apparatus has a noise reduction processing control portion which controls the contents of image processing for obtaining the third image from the first image according to the noise level in the first image.

45 citations


Journal ArticleDOI
TL;DR: This article incorporates color channel information into the classification process and shows that this leads to superior classification results, compared to luminance-channel-only-based image analysis.
Abstract: In this article, we discuss the discriminative power of a set of image features, extracted from detail subbands of the Gabor wavelet transform and the dual-tree complex wavelet transform for the purpose of computer-assisted zoom-endoscopy image classification. We incorporate color channel information into the classification process and show that this leads to superior classification results, compared to luminance-channel-only-based image analysis.

40 citations


Patent
26 Feb 2009
TL;DR: Based on a difference image of three temporally consecutive frames, a region where a moving object is displayed and a background region are extracted from the central frame; the background region is then blurred, with the degree of blurring increasing with the ratio of the moving-object region in the frame.
Abstract: Based on a difference image of three temporally consecutive frame images, a moving object region where a moving object is displayed and a background region are extracted from the central frame image. Sharpening processing such as contrast enhancement is performed on the image of the moving object region, while blurring processing such as averaging is performed on the image of the background region. The background region is blurred so that the degree of blurring increases as the ratio of the moving object region in the frame image becomes higher.

Journal ArticleDOI
TL;DR: This paper proposes a general framework based on the decision tree for mining and processing image data, which can be used to create new image processing methodologies, refine existing image processing methods, or act as a powerful image filter.
Abstract: Valuable information can be hidden in images; however, little research discusses data mining on them. In this paper, we propose a general framework based on the decision tree for mining and processing image data. Pixel-wise image features are extracted and transformed into a database-like table on which various data mining algorithms can operate. Each tuple of the transformed table has a feature descriptor formed by a set of features in conjunction with the target label of a particular pixel. With the label feature, we can adopt decision tree induction to learn relationships between attributes and the target label from image pixels, and to construct a model for pixel-wise image processing according to a given training image dataset. Both experimental and theoretical analyses were performed in this study. Their results show that the proposed model can be very efficient and effective for image processing and image mining. It is anticipated that by using the proposed model, various existing data mining and image processing methods could work together in different ways. Our model can also be used to create new image processing methodologies, refine existing image processing methods, or act as a powerful image filter.
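Decision-tree induction over pixel features can be illustrated at its smallest scale with a one-level tree (a stump) on a single scalar feature; the full framework uses many features per pixel and deeper trees.

```python
def best_stump(features, labels):
    """Learn a one-level decision tree (stump) over a scalar pixel feature.

    A minimal stand-in for decision-tree induction: try every midpoint
    threshold and keep the one with the fewest misclassified pixels.
    """
    pairs = sorted(zip(features, labels))
    candidates = [(a + b) / 2 for (a, _), (b, _) in zip(pairs, pairs[1:])]
    best = None
    for t in candidates:
        errors = sum((f > t) != bool(lab) for f, lab in pairs)
        if best is None or errors < best[1]:
            best = (t, errors)
    return best  # (threshold, training errors)

# Toy data: intensity feature per pixel, label 1 = "foreground".
intensity = [12, 30, 25, 200, 180, 220]
label     = [0,  0,  0,  1,   1,   1]
threshold, errors = best_stump(intensity, label)
```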

Patent
21 Jul 2009
TL;DR: In this article, the authors propose a method of volume-panorama imaging processing, which generates a volume-panorama image by successively splicing image frames from an image sequence, obtained in real time or stored in a medium, based on the fact that immediately adjacent image frames have the largest correlation.
Abstract: The present invention discloses a method of volume-panorama imaging processing, which generates a volume-panorama image by successively splicing respective image frames from an image sequence, obtained in real time or stored in a medium, based upon the fact that immediately adjacent image frames have the largest correlation. The method comprises the steps of: reading the image sequence, and first initializing an aligned image and a spliced image; dividing the i-th image frame Fi into a plurality of sub-regions; calculating a motion vector of the i-th image frame with respect to the aligned image; fitting the motion vector to calculate a transform coefficient; splicing Fi to the current spliced image based upon the transform coefficient, and updating the aligned image; entering a self-adaptive selection of the next image frame until the end of the splicing; and outputting the current spliced image as the resultant image. Additionally, when the image Fi is spliced, a double-filtering architecture may be adopted to reduce alignment error: one filter selects characteristic points, and the other selects the valid motion vectors of the selected characteristic points. According to the present invention, volume-panorama imaging can be done quickly and accurately, so that the image reliability meets the very high requirements of ultrasonic medical diagnosis.

Proceedings ArticleDOI
01 Dec 2009
TL;DR: The techniques that help in improving the quality of image edges and in solving various complex image processing tasks, such as segmentation, feature extraction, classification and image generation, are dealt with.
Abstract: In this modern era, image transmission and processing play a major role. It would not be possible to retrieve information from satellite and medical images without the help of image processing techniques. Image edge enhancement is the art of examining images to identify objects and judge their significance. The proposed work uses the Artificial Bee Colony algorithm, which has proved to be a powerful unbiased optimization technique for sampling a large solution space. Because of its unbiased stochastic sampling, it was quickly adapted in image processing and thus for image edge enhancement as well. This paper deals with the techniques that help in improving the quality of image edges and in solving various complex image processing tasks such as segmentation, feature extraction, classification and image generation. The edge enhancement is done using smoothing filters hybridized by the Artificial Bee Colony optimization algorithm, and is compared with the genetic algorithm.

Patent
09 Sep 2009
TL;DR: An image processing apparatus capable of executing filter processing with a desired blurring degree, selected in accordance with an application, from a multi-valued image captured from an object surface, is described in this paper.
Abstract: An image processing apparatus capable of executing filter processing with a desired blurring degree selected in accordance with an application from a multi-valued image captured from an object surface. The image processing apparatus comprises: a first filter processing device for executing smoothing processing on the multi-valued image; a second filter processing device for creating a reduced image from the multi-valued image with an image reduction ratio, executing smoothing processing on the reduced image, and creating an enlarged image of the smoothed image with an image enlargement ratio corresponding to the image reduction ratio; and an image display device for displaying an image processed by the first or the second filter processing device.

Patent
Shimpei Fukumoto1, Yukio Mori1
19 May 2009
TL;DR: In this paper, a merged image is generated by merging together a first image obtained by shooting with a reduced exposure time, a second image obtained by shooting with an increased exposure time, and a third image obtained by filtering a high-frequency component out of the first image.
Abstract: A merged image is generated by merging together a first image obtained by shooting with a reduced exposure time, a second image obtained by shooting with an increased exposure time, and a third image obtained by filtering out a high-frequency component from the first image. Here, a merging ratio at which the second and third images are merged together is determined by use of differential values obtained from the second and third images. Also, a merging ratio at which the first image and the second and third images (a fourth image) are merged together is determined by use of edge intensity values obtained from an image based on the first image.

Book ChapterDOI
24 Sep 2009
TL;DR: Two methods to generate a bird’s-eye image from the original input image are recalled and a modified version of the Euclidean distance transform called real orientation distance transform (RODT) is proposed.
Abstract: Lane detection and tracking is a significant component of vision-based driver assistance systems (DAS). Low-level image processing is the first step in such a component. This paper suggests three useful techniques for low-level image processing in lane detection situations: bird’s-eye view mapping, a specialized edge detection method, and the distance transform. The first two techniques have been widely used in DAS, while the distance transform is a method newly exploited in DAS, that can provide useful information in lane detection situations. This paper recalls two methods to generate a bird’s-eye image from the original input image, it also compares edge detectors. A modified version of the Euclidean distance transform called real orientation distance transform (RODT) is proposed. Finally, the paper discusses experiments on lane detection and tracking using these technologies.
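A distance transform itself is easy to sketch. The two-pass version below uses the city-block (L1) metric rather than the Euclidean metric the RODT modifies, but the propagation structure is the same.

```python
def cityblock_distance_transform(grid):
    """Two-pass city-block distance transform of a binary grid.

    Cells with 1 are features (distance 0); every other cell receives its
    L1 distance to the nearest feature. The paper's RODT modifies a
    Euclidean transform; this sketch uses the simpler L1 metric.
    """
    h, w = len(grid), len(grid[0])
    INF = h + w  # upper bound on any L1 distance in the grid
    d = [[0 if grid[r][c] else INF for c in range(w)] for r in range(h)]
    for r in range(h):              # forward pass: propagate from top/left
        for c in range(w):
            if r > 0:
                d[r][c] = min(d[r][c], d[r - 1][c] + 1)
            if c > 0:
                d[r][c] = min(d[r][c], d[r][c - 1] + 1)
    for r in range(h - 1, -1, -1):  # backward pass: from bottom/right
        for c in range(w - 1, -1, -1):
            if r < h - 1:
                d[r][c] = min(d[r][c], d[r + 1][c] + 1)
            if c < w - 1:
                d[r][c] = min(d[r][c], d[r][c + 1] + 1)
    return d

# A single "lane marking" pixel at (0, 1).
lane = [[0, 1, 0],
        [0, 0, 0]]
dist = cityblock_distance_transform(lane)
```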

01 Jan 2009
TL;DR: A Fast Discrete Curvelet Transform image fusion technique based on the wrapping algorithm has been implemented, analyzed and compared with a wavelet-based fusion technique.
Abstract: The term fusion means, in general, an approach to extracting information acquired in several domains. The goal of image fusion (IF) is to integrate complementary multi-sensor, multi-temporal and/or multi-view information into one new image containing information whose quality cannot be achieved otherwise. The term "quality", its meaning and its measurement depend on the particular application. In this paper, a Fast Discrete Curvelet Transform image fusion technique based on the wrapping algorithm has been implemented, analyzed and compared with a wavelet-based fusion technique. Fusing images taken at different resolutions and intensities and by different techniques helps physicians extract features that may not normally be visible in a single image from one modality. This work aims at the fusion of two images containing varied information, and the proposed algorithm handles registration as well as fusion in a single pass. An attempt has been made to fuse MRI with CT, and MR/MR images of preclinical data. In magnetic resonance imaging (MRI), three bands of images (an "MRI triplet") are available: T1-, T2- and PD-weighted images. The three images of an MRI triplet provide complementary structural information, so for diagnosis and subsequent analysis it is useful to combine the three-band images into one. The fused image can significantly benefit medical diagnosis and further image processing such as visualization (colorization), segmentation, classification and computer-aided diagnosis (CAD). The approach is further optimized using quantitative fusion metrics such as entropy, difference entropy, standard deviation, the image quality index (IQI) and the ratio spatial frequency error (rSFe).
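One of the listed metrics, entropy, can be computed directly from the gray-level histogram; a minimal sketch:

```python
import math

def entropy(image):
    """Shannon entropy (bits) of an image's gray-level histogram,
    one of the quantitative fusion metrics mentioned above."""
    hist = {}
    for v in image:
        hist[v] = hist.get(v, 0) + 1
    n = len(image)
    return -sum(c / n * math.log2(c / n) for c in hist.values())

# Four equally frequent gray levels carry 2 bits of entropy.
e = entropy([0, 85, 170, 255])
```

A fused image with higher entropy than either input is usually taken to carry more combined information.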

Proceedings ArticleDOI
29 Aug 2009
TL;DR: The presented region-based fusion approach is more robust than traditional pixel-based techniques, reducing blurring effects, sensitivity to the misregistration problem, and noise effects in the input images.
Abstract: In the last few years image fusion has gained considerable attention, as it can provide remarkable outputs for many image applications (i.e., detection of hidden objects). Images with different specifications (resolution, spectral, and spatial) can be fused to produce an output image that combines the best features of all input images. The quality of the output image varies based on the application. In this paper, a new region-based image fusion technique using the Contourlet Transform (CT) is proposed. The presented fusion approach combines the visual information from a visual colored image with information about hidden objects from an IR image. The fused output image is better for human and machine interpretation, as it preserves the original chromaticity of the visual input image. The input images are segmented into small regions more suitable for the proposed algorithm, with the segmentation performed in the frequency domain. The presented region-based fusion approach is more robust than traditional pixel-based techniques, reducing blurring effects, sensitivity to the misregistration problem, and noise effects in the input images. Experimental results demonstrate the capability of the presented fusion technique in detecting hidden weapons and objects. Moreover, the algorithm preserves a very high percentage of the input image's spectral components.

Proceedings ArticleDOI
Aamer Mohamed1, F. Khellfi1, Ying Weng1, Jianmin Jiang1, Stan Ipson1 
07 Sep 2009
TL;DR: A new simple method of Discrete Cosine Transform (DCT) feature extraction that is used to accelerate the speed and decrease the storage needed in the image retrieving process and in this way improves the performance of image retrieval.
Abstract: This paper proposes a new simple method of Discrete Cosine Transform (DCT) feature extraction that accelerates retrieval and decreases the storage needed in the image retrieval process. Image features are accessed and extracted directly from the JPEG compressed domain. The method extracts and constructs a feature vector of histogram quantization from partial DCT coefficients, counting the number of coefficients that share the same DCT value over all image blocks. The database images and the query image are each divided into non-overlapping 8×8 pixel blocks, each of which is associated with a feature vector of histogram quantization derived directly from the DCT. Users can select any image as the query. The retrieved images are those from the database that bear the closest resemblance to the query image, with similarity ranked by Euclidean distance between feature vectors. The experimental results are significant and promising, and show that our approach can easily identify main objects while to some extent reducing the influence of background in the image, thereby improving the performance of image retrieval.
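The final ranking step, Euclidean distance between histogram feature vectors, can be sketched as follows; the feature values and image names are hypothetical.

```python
def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def rank_by_similarity(query_feat, database):
    """Rank (name, feature) entries by distance to the query, closest first."""
    return sorted(database, key=lambda item: euclidean(query_feat, item[1]))

# Hypothetical 3-bin DCT-coefficient histograms per database image.
db = [("img_a", [5, 1, 0]),
      ("img_b", [4, 2, 1]),
      ("img_c", [0, 0, 9])]
query = [5, 2, 0]
ranking = [name for name, _ in rank_by_similarity(query, db)]
```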

Proceedings ArticleDOI
30 Oct 2009
TL;DR: This article presents a wavelet-transform-based multi-mode medical image fusion algorithm that incorporates the edge characteristics of sub-images: the source medical images are wavelet-transformed, fused with operators chosen according to the edge features of the transformed sub-images, and reconstructed by the inverse transform.
Abstract: This article presents a wavelet-transform-based multi-mode medical image fusion algorithm combined with the edge characteristics of sub-images. The multi-source medical images to be integrated are first wavelet-transformed; appropriate fusion operators are then applied according to the edge features of the transformed sub-images and the human visual system's varying sensitivity to image content; finally, the fused image is reconstructed by the inverse transform. In fusion experiments on brain MRI-PET images, the method is shown to combine anatomical and functional information more effectively and to better retain the edge characteristics of the original images. Keywords: multi-mode medical image; image fusion; wavelet transformation; edge feature

Patent
Masamichi Osugi1
31 Mar 2009
TL;DR: In this article, the authors propose an image search apparatus consisting of a characteristic partial image detection unit and a search unit, which detects a characteristic image of each search target image based on a dissimilarity level of a partial image at a corresponding position among a plurality of search-target images.
Abstract: An image search apparatus provides searching for a search-target image corresponding to an input image from among a plurality of search-target images. The image search apparatus includes a characteristic partial image detection unit and a search unit. The characteristic partial image detection unit detects a characteristic partial image of each search-target image based on a dissimilarity level of a partial image at a corresponding position among a plurality of search-target images. The search unit respectively calculates a level of coincidence between a characteristic partial image of each search-target image detected by the characteristic partial image detection unit and a partial image of an input image. The search unit further searches for a search-target image corresponding to an input image from among a plurality of search-target images based on the coincidence level.

Patent
28 Jul 2009
TL;DR: In this paper, a high-resolution image obtaining apparatus is proposed to divide an input image frame into a background region and foreground region and apply an optimized resolution enhancement algorithm to each region.
Abstract: Disclosed is a high resolution image obtaining apparatus and method. The high resolution image obtaining apparatus may divide an input image frame into a background region and foreground region and apply an optimized resolution enhancement algorithm to each region, thereby effectively obtaining a high resolution image frame with respect to the input image frame.

Patent
11 Aug 2009
TL;DR: In this article, an image identification method for classifying block images of input image data into one of the multiple predetermined categories according to feature quantity in each block image is proposed, which includes an image production step, an image feature quantity processing step, and a separating hyperplane processing step of learning separating hyperplanes that indicate boundaries of each category.
Abstract: An image identification method for classifying block images of input image data into one of multiple predetermined categories according to the feature quantity of each block image. The method includes: an image production step of dividing image data into multiple blocks to produce block images; an image feature quantity processing step of computing the feature quantity of each block image from its color space information and frequency components; a separating hyperplane processing step of learning separating hyperplanes that indicate the boundaries of each category, by reading in training image data with labeled categories for each block and processing the image feature quantity for each block of the training images; and a category classification step of classifying each block image to a category according to its distance from the separating hyperplane of each category, by executing the block image production step and the image feature quantity processing step on a newly acquired image to obtain the image feature quantities of its block images.

Patent
09 Dec 2009
TL;DR: In this paper, the authors propose a method to obtain a high-definition restored image while suppressing noise amplification due to image restoration processing, which includes steps of acquiring an input image g_m generated by an imaging system, generating a first image fd1_m by restoring both the amplitude and phase components of the input image, and generating a second image fd2_m whose phase component is in the same state as the first image's while its amplitude component differs, by restoring the phase component only.
Abstract: PROBLEM TO BE SOLVED: To obtain a high-definition restored image while suppressing noise amplification due to image restoration processing. SOLUTION: This image processing method includes steps of acquiring an input image g_m generated by an imaging system; generating a first image fd1_m through restoration processing of both the amplitude component and the phase component of the input image; generating a second image fd2_m, whose phase component is in the same state as the first image's while its amplitude component differs, through restoration processing of the phase component without restoration of the amplitude component; obtaining difference information S_m between the first and second images; setting a restoration level adjustment factor μ used to adjust the degree of restoration; and generating a restoration-adjusted image f_m by composing the difference information with the second image according to the adjustment factor.

Patent
07 Aug 2009
TL;DR: In this article, the first input images are shot at a shutter speed equal to or faster than the super-resolution limit shutter speed, which is the lower-limit shutter speed that enables superresolution processing to make the resolution of the output image equal or higher than that of the input images.
Abstract: A super-resolution processing portion has a high-resolution image generation portion that fuses a plurality of first input images together to generate a high-resolution image. The first input images are shot at a shutter speed equal to or faster than the super-resolution limit shutter speed, which is the lower-limit shutter speed that enables super-resolution processing to make the resolution of the output image equal to or higher than that of the input images. According to the amount of exposure, one of the following different methods for super-resolution processing is selected: a first method that yields as the output image the high-resolution image; a second method that yields as the output image a weighted added image resulting from weighted addition of the high-resolution image and an image based on an averaged image; and a third method that yields as the output image a weighted added image resulting from weighted addition of the high-resolution image and an image based on a second input image.

Proceedings ArticleDOI
Tudor Barbu1
31 Aug 2009
TL;DR: This paper provides a content-based digital image retrieval system that uses the query-by-example technique and relevance feedback; a Gabor-filter-based image feature extraction is proposed first.
Abstract: This paper provides a content-based digital image retrieval system. Our CBIR system uses the query-by-example technique and relevance feedback. A Gabor-filter-based image feature extraction is proposed first: 3D image feature vectors using even-symmetric 2D Gabor filters are computed for the images of a large collection and for the input image. At each step an input image is selected from the output set obtained in the previous step, and the most similar images from the collection are retrieved.
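An even-symmetric Gabor filter is simply a Gaussian-windowed cosine. The 1-D sketch below shows the kernel's form; the paper's 2-D filters add an orientation to the carrier.

```python
import math

def even_gabor_1d(frequency, sigma, half_width):
    """Sample an even-symmetric (cosine-phase) 1-D Gabor filter:

        g(x) = exp(-x^2 / (2 * sigma^2)) * cos(2 * pi * frequency * x)

    A 2-D even-symmetric kernel is built analogously from an oriented
    cosine carrier under a 2-D Gaussian window.
    """
    xs = range(-half_width, half_width + 1)
    return [math.exp(-x * x / (2 * sigma * sigma))
            * math.cos(2 * math.pi * frequency * x)
            for x in xs]

kernel = even_gabor_1d(frequency=0.1, sigma=2.0, half_width=4)
```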

18 May 2009
TL;DR: This report develops the discrete shearlet transform (DST), which provides an efficient multiscale directional representation, and shows that the implementation of the transform is built in the discrete framework based on a multiresolution analysis.
Abstract: It is now widely acknowledged that analyzing the intrinsic geometrical features of an underlying image is essential in image processing, and several directional image representation schemes have been proposed to achieve this. In this report, we develop the discrete shearlet transform (DST), which provides an efficient multiscale directional representation. We also show that the implementation of the transform is built in the discrete framework based on a multiresolution analysis. We further assess the performance of the DST in image denoising and approximation applications. In image approximation, our adaptive approximation scheme using the DST significantly outperforms the wavelet transform (by up to 3.0 dB) and other competing transforms. In image denoising, the DST also compares favorably with other existing methods in the literature.

Journal Article
HE Xiao-hai1
TL;DR: The experimental results show that features extracted by the SIFT method adapt well and remain accurate under illumination change, translation, and rotation, with a match accuracy above 90%, which is useful for image recognition and image reconstruction.
Abstract: With the aim of improving the stability and reliability of image matching, the Scale Invariant Feature Transform (SIFT) algorithm is applied to image feature extraction and image matching. SIFT finds feature vectors across different scale spaces and extracts image features and descriptions that are invariant to scale changes and rotations and robust to illumination variation and affine transformation. In this paper, the SIFT method is used to detect the keypoints of an image and their features. The features are then matched with a nearest-neighbor criterion based on confidence. Finally, features in images taken under different illumination conditions, focal lengths, and shooting angles are extracted and matched. The experimental results show that features extracted by the SIFT method adapt well and remain accurate under illumination change, translation, and rotation, with a match accuracy above 90%, which is useful for image recognition and image reconstruction.
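The nearest-neighbor matching "criterion based on confidence" resembles the standard distance-ratio test; here is a toy sketch on two-dimensional descriptors (real SIFT descriptors are 128-dimensional).

```python
def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with a distance-ratio confidence test,
    in the spirit of SIFT matching. Descriptors here are toy vectors."""
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

    matches = []
    for i, a in enumerate(desc_a):
        ds = sorted((dist(a, b), j) for j, b in enumerate(desc_b))
        (d1, j), (d2, _) = ds[0], ds[1]
        if d1 < ratio * d2:          # accept only confident matches
            matches.append((i, j))
    return matches

# Two descriptors in image A, three candidates in image B.
A = [[0.0, 0.0], [5.0, 5.0]]
B = [[0.1, 0.0], [5.0, 5.1], [0.2, 0.1]]
m = match_descriptors(A, B)
```

A match is kept only when its nearest neighbour is clearly closer than the second-nearest, which filters ambiguous correspondences.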

Proceedings ArticleDOI
11 Dec 2009
TL;DR: Niblack's method of segmentation is studied further and gives the most acceptable results of all the thresholding techniques for segmenting text documents.
Abstract: Image segmentation is a major step in image analysis and processing, and it can be performed by several methods. In this work, Niblack's method, one of the local thresholding techniques, is studied further. Its output is significant and gives the most acceptable results of all the thresholding techniques for segmenting text documents. Here the same method is applied to images, keeping one of the variables (the weight k of Niblack's method) constant while varying the other (the window size) from image to image. The output image is better segmented, but the background is noisy. Improvements in the resulting images are demonstrated by applying the morphological operations of opening and closing, which are combinations of the two fundamental morphological operations, dilation and erosion. Dilation thickens objects in a binary image by adding pixels to object boundaries, while erosion shrinks them.
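Niblack's local threshold is T = m + k·s, where m and s are the mean and standard deviation of the window around a pixel; here is a minimal sketch for a single window (k = -0.2 is a common choice, not necessarily the paper's).

```python
def niblack_threshold(window, k=-0.2):
    """Niblack's local threshold T = m + k*s for one window of gray values
    (m, s: window mean and standard deviation; k is the weight the text
    holds constant while varying the window size)."""
    n = len(window)
    m = sum(window) / n
    s = (sum((v - m) ** 2 for v in window) / n) ** 0.5
    return m + k * s

def binarize(window, k=-0.2):
    """Mark pixels above the local threshold as background (1)."""
    t = niblack_threshold(window, k)
    return [1 if v > t else 0 for v in window]

# Dark text pixels (low values) on a brighter background.
window = [200, 198, 202, 40, 199, 201]
mask = binarize(window)
```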