
Showing papers on "Standard test image published in 1998"


Patent
13 Jan 1998
TL;DR: The image correlation method and apparatus as discussed by the authors correlates or matches a test image with a template by partitioning the template into a number of labels, determining the total number of pixels NT that form the template, and determining the number of pixels Ni that form each label i. The image correlation apparatus also includes comparison means for comparing the test image to the template.
Abstract: The image correlation method and apparatus correlates or matches a test image with a template. The image correlation apparatus includes an image processor for partitioning the template into a number of labels, for determining the total number of pixels NT which form the template and for determining the number of pixels Ni which form each of the labels i. The image correlation apparatus also includes comparison means for comparing the test image to the template. The comparison means determines, for each predetermined gray level j, the number of pixels of the test image Nj,i representative of a predetermined gray level j which correspond to a predetermined label i of the template. The comparison means also determines, for each predetermined gray level j, the number of pixels of the test image Nj representative of a predetermined gray level j which correspond to the template. The image correlation apparatus further includes correlation means for determining the correlation X between the test image and the template according to a predetermined equation which is based, at least in part, upon Nj,i, Nj, Ni and NT. The image correlation apparatus can also include an address generator for creating a number of relative offsets between the template and the test image. Thus, the test image can be compared to the template at each relative offset and the relative offset which provides the greatest correlation therebetween can be determined. Consequently, the test image and the template can be effectively matched such that a preselected object designated within the template can be located and identified within the test image.
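The abstract does not reproduce the patented equation for X, so the sketch below uses a mutual-information-style statistic over the same counts (N_j,i, N_j, N_i, N_T) purely as an illustrative stand-in; the label array layout, gray-level count, and function name are assumptions.

```python
import numpy as np

def label_graylevel_correlation(test_img, template_labels, n_levels=256):
    """Hedged sketch: score how well a test image matches a labeled template.

    `template_labels` is an integer array the same shape as `test_img`, where
    label 0 marks pixels outside the template and labels 1..K mark its
    partitions.  The patent only says X depends on N_{j,i}, N_j, N_i and N_T;
    the mutual-information-style formula below is an illustrative stand-in,
    not the patented equation.
    """
    inside = template_labels > 0
    NT = inside.sum()                              # total template pixels
    levels = test_img[inside].astype(int)          # gray level j per pixel
    labels = template_labels[inside]               # label i per pixel
    K = int(labels.max())

    # N_{j,i}: joint counts of (gray level j, label i) over the template
    Nji = np.zeros((n_levels, K + 1))
    np.add.at(Nji, (levels, labels), 1)
    Nj = Nji.sum(axis=1)                           # count per gray level
    Ni = Nji.sum(axis=0)                           # count per label

    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = Nji * NT / np.outer(Nj, Ni)
        terms = np.where(Nji > 0, Nji / NT * np.log(ratio), 0.0)
    return terms.sum()
```

In use, this score would be evaluated at each relative offset produced by the address generator, keeping the offset that maximizes X.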

130 citations


Proceedings ArticleDOI
04 Jan 1998
TL;DR: The seminal method of Swain and Ballard to discount changing illumination is extended, based on the first stage of the simplest color indexing method, which uses angular invariants between color image and edge image channels.
Abstract: Several color object recognition methods that are based on image retrieval algorithms attempt to discount changes of illumination in order to increase performance when test image illumination conditions differ from those that obtained when the image database was created. Here we extend the seminal method of Swain and Ballard to discount changing illumination. The new method is based on the first stage of the simplest color indexing method, which uses angular invariants between color image and edge image channels. That method first normalizes image channels, and then effectively discards much of the remaining information. Here we adopt the color-normalization stage as an adequate color constancy step. Further, we replace 3D color histograms by 2D chromaticity histograms. Treating these as images, we implement the method in a compressed histogram-image domain using a combination of wavelet compression and Discrete Cosine Transform (DCT) to fully exploit the technique of low-pass filtering for efficiency. Results are very encouraging, with substantially better performance than other methods tested. The method is also fast, in that the indexing process is entirely carried out in the compressed domain and uses a feature vector of only 36 or 72 values.
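A minimal sketch of the indexing idea as described: band-ratio chromaticities, a 2D chromaticity histogram treated as an image, and a small block of low-frequency transform coefficients as the feature vector. The bin count, the DCT-only reduction (the paper combines wavelet compression with the DCT), and the function names are assumptions.

```python
import numpy as np
from scipy.fft import dctn

def chromaticity_dct_signature(rgb, bins=16, keep=36):
    """Hedged sketch: 2D chromaticity histogram compressed to `keep`
    low-frequency DCT coefficients (36 or 72 values in the paper)."""
    rgb = rgb.reshape(-1, 3).astype(float) + 1e-6
    s = rgb.sum(axis=1)
    r, g = rgb[:, 0] / s, rgb[:, 1] / s            # chromaticities (r, g)

    hist, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
    hist /= hist.sum()                             # normalise the histogram

    coeffs = dctn(hist, norm="ortho")              # low-pass in the DCT domain
    k = int(np.ceil(np.sqrt(keep)))                # keep a low-frequency block
    return coeffs[:k, :k].ravel()[:keep]
```

Signatures of a query and a database image can then be compared directly in the compressed domain, e.g. with a Euclidean distance.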

104 citations


Journal ArticleDOI
TL;DR: The role of bilateral symmetry in face recognition is investigated in two psychophysical experiments using a Same/Different paradigm and the hypothesis that the ability to identify mirror symmetric patterns is used for viewpoint generalization is confirmed.

97 citations


Journal ArticleDOI
TL;DR: The phase information of the encrypted image is evaluated and the reasons for its binarization are given, and it is shown that postprocessing of the decrypted image can improve the quality of the recovered images.
Abstract: We investigate the performance of an image encryption technique that uses random-phase encoding in both the input plane and the Fourier plane, using partial information of the encrypted image. We first investigate the phase-only information of the encrypted data for decryption. A binary version of the phase-only information is also considered for decryption. Binary images are well suited for optical display and practical implementation. Using partial information of the encrypted image, a reconstructed complex image is generated, which is used for decryption. Tests are performed for both gray-scale and binary images. We show that the phase information of the encrypted image is very important in the reconstruction of the decrypted image. Computer simulations show that for the images tested here, binarization of the encrypted image can recover the original image with low mean squared error. © 1998 Society of Photo-Optical Instrumentation Engineers. (S0091-3286(98)04302-5) The fault-tolerance properties of this technique were investigated in Ref. 4. In this paper, we evaluate the phase information of the encrypted image and investigate the effect of its binarization on the recovered image. We give the reasons for the binarization of the encrypted image, and tackle the problems raised by it. It is shown that postprocessing of the decrypted image can improve the quality of the recovered images. We show, for the images tested here, that recovered images of very good quality can be obtained.
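A minimal numpy sketch of double random phase encoding and of decryption from partial (phase-only or binarized-phase) information, in the spirit of the experiments described; the image size, the binarisation rule, and the MSE normalisation are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def drpe_encrypt(img, p_in, p_fourier):
    """Double random phase encoding: random phase in the input plane and
    another random phase in the Fourier plane."""
    x = img * np.exp(2j * np.pi * p_in)
    return np.fft.ifft2(np.fft.fft2(x) * np.exp(2j * np.pi * p_fourier))

def drpe_decrypt(enc, p_fourier):
    """Undo the Fourier-plane phase and take the magnitude; the input-plane
    random phase vanishes with the modulus."""
    return np.abs(np.fft.ifft2(np.fft.fft2(enc) * np.exp(-2j * np.pi * p_fourier)))

# Hedged sketch of the experiment: decrypt from partial information only.
img = rng.random((64, 64))                       # stand-in test image
p_in, p_f = rng.random((64, 64)), rng.random((64, 64))
enc = drpe_encrypt(img, p_in, p_f)

phase_only = np.exp(1j * np.angle(enc))          # keep only the encrypted phase
binary_phase = np.exp(1j * np.pi * (np.angle(enc) > 0))  # binarised phase

for partial in (phase_only, binary_phase):
    rec = drpe_decrypt(partial, p_f)
    mse = np.mean((rec / rec.max() - img) ** 2)  # crude normalised comparison
    print(f"MSE of recovered image: {mse:.4f}")
```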

69 citations


Patent
20 May 1998
TL;DR: A method for detecting motion between a reference image and a test image is described: acquiring the images; aligning them; dividing them into blocks; masking certain blocks; differencing corresponding blocks; median filtering the differences; low-pass filtering the outputs of the median filter; generating a normalized histogram for each output of the low-pass filter; generating a Gaussian noise model; calculating the distance between the noise model and each normalized histogram; comparing each distance to a user-definable threshold; and deciding that motion has occurred if a certain number of the distances are at or above the threshold.
Abstract: A device for and method of detecting motion between a reference image and a test image by acquiring the images; aligning the images; dividing the images into blocks; masking certain blocks; differencing corresponding blocks; median filtering the differences; low-pass filtering the outputs of the median filter; generating a normalized histogram for each output of the low-pass filter; generating a model of gaussian noise; calculating the distance between the noise model and each normalized histogram; comparing each distance calculated to a user-definable threshold; and determining if motion has occurred between the images if a certain number of distance calculations are at or above the user-definable threshold. If a scene is to be continuously monitored and no motion occurred between the previous reference image and the previous test image then a new test image is acquired and compared against the previous reference image as described above. If the scene is to be continuously monitored and motion has occurred between the previous reference image and the previous test image then replace the previous reference image with the previous test image, acquire a new test image, and compare the new test image to the new reference image as described above.
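A condensed sketch of the block-histogram test described above; alignment and block masking are omitted, and the filter sizes, noise standard deviation, bin count, and both thresholds are assumptions rather than the patent's values.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def detect_motion(ref, test, block=32, sigma_noise=2.0,
                  dist_thresh=0.5, min_blocks=3, bins=32):
    """Hedged sketch: decide whether motion occurred between two frames by
    comparing per-block difference histograms to a Gaussian noise model."""
    diff = test.astype(float) - ref.astype(float)
    diff = median_filter(diff, size=3)           # suppress impulsive noise
    diff = uniform_filter(diff, size=3)          # simple low-pass filter

    # Normalized histogram of a pure Gaussian sensor-noise model
    edges = np.linspace(-4 * sigma_noise, 4 * sigma_noise, bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    noise_hist = np.exp(-0.5 * (centers / sigma_noise) ** 2)
    noise_hist /= noise_hist.sum()

    hits = 0
    h, w = diff.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            blk = diff[y:y + block, x:x + block]
            hist, _ = np.histogram(blk, bins=edges)
            hist = hist / max(hist.sum(), 1)
            if np.linalg.norm(hist - noise_hist) >= dist_thresh:
                hits += 1                        # block differs from the noise model
    return hits >= min_blocks                    # overall motion decision
```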

69 citations


Patent
Wenjun Zeng
30 Sep 1998
TL;DR: A method for embedding and extracting visually imperceptible indicia in an image is proposed, which includes testing a test image for embedded visually imperceptible indicia.
Abstract: A method for embedding and extracting visually imperceptible indicia in an image includes embedding visually imperceptible indicia in an original image; testing a test image for the embedded visually imperceptible indicia; and extracting the visually imperceptible indicia from the test image to determine if the test image is a copy of the original image.
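The abstract does not describe the embedding algorithm itself, so the sketch below uses a generic additive spread-spectrum mark only to illustrate the embed/test workflow; it is not the patent's scheme, and the strength and threshold values are arbitrary assumptions.

```python
import numpy as np

def embed_mark(img, key, strength=2.0):
    """Generic spread-spectrum sketch (not the patented method): add a
    key-seeded pseudo-random pattern at low amplitude."""
    pattern = np.random.default_rng(key).standard_normal(img.shape)
    return img.astype(float) + strength * pattern

def test_for_mark(test_img, key, threshold=0.5):
    """Correlate the test image against the key's pattern; a high score
    suggests the test image is a copy of the marked original."""
    pattern = np.random.default_rng(key).standard_normal(test_img.shape)
    x = test_img.astype(float) - test_img.mean()
    score = (x * pattern).mean() / pattern.std()
    return score, score > threshold
```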

67 citations


Journal ArticleDOI
TL;DR: An algorithm that detects grey level transitions at multiple scales of resolution to improve edge detection and localisation in ultrasound images of the prostate is investigated, illustrating an edge detection method suitable as a pre-processing step in the interpretation of medical images.

60 citations


Journal ArticleDOI
TL;DR: The authors describe current research and development on a robotic visual servoing system for assembly of LIGA (lithography, galvanoforming, and molding) parts and can visually servo a 100 micron outside diameter LIGA gear to a desired x,y reference position as determined from a synthetic image of the gear.
Abstract: The authors describe current research and development on a robotic visual servoing system for assembly of LIGA (lithography, galvanoforming, and molding) parts. The workcell consists of an AMTI robot, precision stage, long working distance microscope, and LIGA fabricated tweezers for picking up the parts. Fourier optics methods are used to generate synthetic microscope images from CAD drawings. These synthetic images are used off-line to test image processing routines under varying magnifications and depths of field. They also provide reference image features which are used to visually servo the part to the desired position. Currently, we can visually servo a 100 micron outside diameter LIGA gear to a desired x,y reference position as determined from a synthetic image of the gear.

55 citations


Journal ArticleDOI
T. Ida, Y. Sambonsugi
TL;DR: Fractal coding was applied to image segmentation and contour detection and the proposed methods are expected to enable compressed codes to be used directly for image processing.
Abstract: Fractal coding was applied to image segmentation and contour detection. The encoding method was the same as in conventional fractal coding, and the compressed code, which we call the fractal code, was used for image segmentation and contour detection instead of image reconstruction. An image can be segmented by calculating the basin of attraction on a mapping that is a set of local maps from the domain block to the range block. The local maps are parameterized using the fractal code, and contours of the objects in the image are detected by the inverse mapping from the range block to the domain block. Some objects in the test image Lena were segmented, and the contours were detected well. The proposed methods are expected to enable compressed codes to be used directly for image processing.

54 citations


Journal ArticleDOI
TL;DR: The Centroid method is presented, which extracts and uses similarity patterns that consistently appear across all images to reduce set redundancy and achieve higher lossless compression in sets of similar images.

52 citations


Proceedings ArticleDOI
16 Aug 1998
TL;DR: An automatic mosaicing process for document images is described, using an image pyramid and sequential similarity to reduce computation time; results are presented for binarised document images captured with a digital camera.
Abstract: If it is impossible to capture all the image in one scan with the available equipment, a montage can be made from separately scanned pieces. We describe an automatic mosaicing process for document images. The image shifts are found by a correlation technique, using an image pyramid and sequential similarity to reduce computation time. Image placement and overlap is used to reject incorrect solutions. We present results for binarised document images with data captured using a digital camera.
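A rough sketch of pyramid-based shift estimation between two overlapping document scans; the level count, search radius, and the plain mean-absolute-difference cost (standing in for the paper's sequential similarity test with early exit) are assumptions.

```python
import numpy as np

def shrink(img):
    """One pyramid level: halve resolution by 2x2 averaging."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w].astype(float)
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                   img[0::2, 1::2] + img[1::2, 1::2])

def estimate_shift(a, b, levels=3, search=4):
    """Coarse-to-fine estimate of the translation of scan `b` relative to
    scan `a`, refined over a small search window at each level."""
    pyr = [(a.astype(float), b.astype(float))]
    for _ in range(levels - 1):
        pyr.append((shrink(pyr[-1][0]), shrink(pyr[-1][1])))

    dy = dx = 0
    for la, lb in reversed(pyr):                 # coarsest level first
        dy, dx = 2 * dy, 2 * dx
        best, best_cost = (dy, dx), np.inf
        for sy in range(dy - search, dy + search + 1):
            for sx in range(dx - search, dx + search + 1):
                ov = lb[max(sy, 0):, max(sx, 0):]
                ref = la[max(-sy, 0):, max(-sx, 0):]
                h = min(ov.shape[0], ref.shape[0])
                w = min(ov.shape[1], ref.shape[1])
                if h < 8 or w < 8:
                    continue                     # overlap too small to judge
                cost = np.abs(ov[:h, :w] - ref[:h, :w]).mean()
                if cost < best_cost:
                    best_cost, best = cost, (sy, sx)
        dy, dx = best
    return dy, dx
```

The estimated shift can then be checked against the overlap size to reject implausible solutions before compositing the montage.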

Patent
Tadayuki Kajiwara
17 Nov 1998
TL;DR: A test pattern generator unit generates a test image on the photosensitive body or the intermediate transfer body, and a second density detector unit is provided downstream of the toner transfer section with respect to the rotation direction of the photosensitive body or intermediate transfer body.
Abstract: An image forming apparatus performs the gamma correction based on an image density on a photosensitive body or an intermediate transfer body. In the image forming apparatus, a test pattern generator unit generates a test image on the photosensitive body or the intermediate transfer body. A first density detector unit is provided upstream of a toner transfer section with respect to a rotation direction of the photosensitive body or the intermediate transfer body, and detects an image density on the photosensitive body or the intermediate transfer body. A second density detector unit is provided downstream of the toner transfer section with respect to the rotation direction of the photosensitive body or the intermediate transfer body, and detects the image density on the photosensitive body or the intermediate transfer body. A corrector unit corrects image data. A control unit calculates correction data for use in correcting the image data based on image density data outputted from the first and second density detector units to set the correction data in the corrector unit.

Journal ArticleDOI
TL;DR: This work investigates under what general conditions illumination change can be described using a simple linear transform among RGB channels, for a multi-colored object, and adduces a different underlying principle from that usually suggested.

Patent
17 Jul 1998
TL;DR: A computerized method of detecting regions of interest in a digital image optimizes and adapts a computer-aided detection (CAD) scheme for detecting regions of interest, based on global image characteristics.
Abstract: A computerized method of detecting regions of interest in a digital image optimizes and adapts a computer-aided scheme for detecting regions of interest in images. The optimization is based on global image characteristics. For each image in a database of images having known regions of interest, global image features are measured and an image characteristic index is established based on these global image features. All the images in the database are divided into a number of image groups based on the image characteristic index of each image in the database, and the CAD scheme is optimized for each image group. Once the CAD scheme is optimized, to process a digital image, an image-characteristics-based classification criterion is established for that image, and then global image features of the digitized image are determined. The digitized image is then assigned an image characteristics rating based on the determined global image features, and the image is assigned to an image group based on the image rating. Then regions of interest depicted in the image are determined using a detection scheme adapted for the assigned image group.

Patent
Hu Shane Ching-Feng
05 Nov 1998
TL;DR: In this paper, a high precision sub-pixel spatial alignment of digital images, one from a reference video signal and another from a corresponding test video signal, uses an iterative process and incorporates spatial resampling along with basic correlation and estimation of fractional pixel shift.
Abstract: A high precision sub-pixel spatial alignment of digital images, one from a reference video signal and another from a corresponding test video signal, uses an iterative process and incorporates spatial resampling along with basic correlation and estimation of fractional pixel shift. The corresponding images from the reference and test video signals are captured and a test block is overlaid on them at the same locations to include texture from the images. FFTs are performed within the test block in each image, and the FFTs are cross-correlated to develop a peak value representing a shift position between the images. A curve is fitted to the peak and neighboring values to find the nearest integer pixel shift position. The test block is shifted in the test image by the integer pixel shift position, and the FFT in the test image is repeated and correlated with the FFT from the reference image. The curve fitting is repeated to obtain a fractional pixel shift position value that is combined with the integer pixel shift value to update the test block position again in the test image. The steps are repeated until an end condition is achieved, at which point the value of the pixel shift position for the test block in the test image relative to the reference image is used to align the two images with high precision sub-pixel accuracy.
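A single-pass sketch of the core alignment step (FFT cross-correlation, integer peak, parabolic fit for the fractional part); the patent wraps this in an iterative loop with spatial resampling and block repositioning, which is omitted here.

```python
import numpy as np

def subpixel_shift(ref_block, test_block):
    """Hedged sketch: estimate a sub-pixel shift between two equal-size
    blocks via FFT cross-correlation and a parabolic peak fit."""
    F = np.fft.fft2(ref_block)
    G = np.fft.fft2(test_block)
    corr = np.real(np.fft.ifft2(F * np.conj(G)))
    corr = np.fft.fftshift(corr)

    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    cy, cx = corr.shape[0] // 2, corr.shape[1] // 2

    def parabolic(cm1, c0, cp1):
        # Offset of the vertex of a parabola through three samples
        denom = cm1 - 2 * c0 + cp1
        return 0.0 if denom == 0 else 0.5 * (cm1 - cp1) / denom

    fy = parabolic(corr[py - 1, px], corr[py, px],
                   corr[(py + 1) % corr.shape[0], px])
    fx = parabolic(corr[py, px - 1], corr[py, px],
                   corr[py, (px + 1) % corr.shape[1]])
    # Estimated (dy, dx); the sign convention depends on correlation direction
    return (py - cy) + fy, (px - cx) + fx
```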

Patent
19 May 1998
TL;DR: A method and system for generating enhanced binary image data from greyscale input image data is proposed, comprising (a) receiving first image data, the greyscale image data defining an input image, (b) performing a high-frequency boost operation on the first image data to produce second image data, (c) performing a linear interpolation operation on the second image data to produce third image data having a higher resolution than the second image data, (d) performing a contrast enhancement operation on the third image data to produce fourth image data, and (e) thresholding the fourth image data to produce binary output image data.
Abstract: An image processing method and system for generating enhanced binary image data from greyscale input image data. The method includes the steps of (a) receiving first image data, the first image data being greyscale image data defining an input image, (b) performing a high frequency boost operation on the first image data to produce second image data, (c) performing a linear interpolation operation on the second image data to produce third image data, the third image data having a resolution higher than the resolution of the second image data, (d) performing a contrast enhancement operation on the third image data to produce fourth image data, and (e) thresholding the fourth image data to produce fifth image data, the fifth image data being binary image data defining an output image. The techniques find application, for example, in over-the-desk scanning of documents, and in video-conferencing.
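A minimal sketch of the five-step pipeline using common stand-ins (unsharp masking for the high-frequency boost, a percentile stretch and a global mean threshold); the patent's specific filters, scale factor, and threshold rule are not given in the abstract, so these choices are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def enhance_and_binarise(grey, upscale=2.0, boost=1.5):
    """Hedged sketch of the described pipeline: boost, interpolate,
    stretch contrast, threshold to a binary output image."""
    g = grey.astype(float)

    # (b) high-frequency boost via unsharp-mask style sharpening
    lowpass = gaussian_filter(g, sigma=1.0)
    boosted = g + boost * (g - lowpass)

    # (c) linear interpolation to a higher resolution
    upsampled = zoom(boosted, upscale, order=1)

    # (d) contrast enhancement: stretch the 2nd..98th percentile range
    lo, hi = np.percentile(upsampled, (2, 98))
    stretched = np.clip((upsampled - lo) / max(hi - lo, 1e-6), 0, 1)

    # (e) threshold to binary output
    return (stretched > stretched.mean()).astype(np.uint8)
```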

Proceedings ArticleDOI
21 Jun 1998
TL;DR: In tests on a classification task using a data set of over 1000 images, PBSIM shows significantly higher accuracy than algorithms based upon color histograms, as well as previously reported results for another approach based upon bloblike features.
Abstract: We present a new algorithm called PBSIM for computing image similarity, based upon a novel method of extracting bloblike features from images. In tests on a classification task using a data set of over 1000 images, PBSIM shows significantly higher accuracy than algorithms based upon color histograms, as well as previously reported results for another approach based upon bloblike features.

Proceedings ArticleDOI
16 Aug 1998
TL;DR: In this paper, a new combination of color clustering and spectral color encoding and decoding for multispectral images is presented, based on extracting relevant color information, and some quantitative quality measures for multispectral color images are also presented.
Abstract: Image compression has been one of the mainstream research topics in image processing. The research usually focuses on compressing images that are visible to humans. Images are usually gray-level images or RGB color images. Advances in technology enable one to make the detailed processing of spectral color features in the images. Therefore, compression of images with many spectral color channels, called multispectral images, is required. Many methods used in traditional lossy image compression can be reused also in the compression of multispectral images. In this paper a new combination of clustering of colors, manipulating spectral color encoding and decoding for multispectral images is presented. The approach is based on extracting relevant color information. Furthermore, some quantitative quality measures for multispectral images are presented.

Patent
21 Dec 1998
TL;DR: A method of automatically compressing and decompressing a digital image is described, comprising the following steps: acquiring a digital image through an image acquisition system, generating a look-up table (companding function) based upon the noise characteristics of the image acquisition system, applying the companding function to the image, processing the image using a lossless compression algorithm, reconstructing the image with the associated decompression algorithm, and applying the inverse of the companding function.
Abstract: A method of automatically compressing and decompressing a digital image that is comprised of the following steps: acquiring a digital image through an image acquisition system; generating a look-up-table (companding function) based upon noise characteristics of the image acquisition system; applying the companding function to the image; processing the image using a lossless compression algorithm; reconstructing the image using the associated decompression algorithm; and applying the inverse of the companding function to the image.
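A small sketch of the idea: build a companding LUT whose step size tracks an assumed sensor noise model, then hand the companded image to a stock lossless coder (zlib here, purely as a stand-in); the noise model, its parameters, and the inverse-LUT approximation are assumptions, not the patent's specifics.

```python
import numpy as np
import zlib

def make_companding_lut(max_code=4095, gain=1.0, read_noise=2.0):
    """Hedged sketch: space output codes according to an assumed noise model
    sigma(x) = sqrt(gain*x + read_noise^2), roughly one output step per
    noise standard deviation."""
    x = np.arange(max_code + 1, dtype=float)
    sigma = np.sqrt(gain * x + read_noise ** 2)
    lut = np.cumsum(1.0 / sigma)                  # step size proportional to 1/sigma
    return np.round(lut / lut[-1] * 255).astype(np.uint8)

def compress(image, lut):
    """`image` is an integer array with values in [0, len(lut) - 1]."""
    companded = lut[image]                        # apply the companding function
    return zlib.compress(companded.tobytes())     # stand-in lossless coder

def decompress(blob, lut, shape):
    companded = np.frombuffer(zlib.decompress(blob), np.uint8).reshape(shape)
    inverse = np.searchsorted(lut, companded)     # approximate inverse LUT
    return inverse.astype(np.uint16)
```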

Patent
30 Jun 1998
TL;DR: A method for automatically classifying test images based on their similarities with a dictionary of example target and non-target images is proposed, which operates by receiving a test image and then initializing variables for an iteration count and for the linear expansion of the test image.
Abstract: A method for automatically classifying test images based on their similares with a dictionary of example target and non-target images. The method operates by receiving a test image and then initializing variables for an iteration count and for the linear expansion of the test image. The test image is then projected onto each one of the target and non-target images in the dictionary, wherein a maximum scaling coefficient is selected for each iteration. A residue is then generated, and the linear expansion of the test image is increased until a predetermined number of iterations have been performed. Once this predetermined number of iterations have been performed, the sum of the scaling coefficients belonging to the target examples in the dictionary is compared to the sum of the scaling coefficients belonging to the non-target examples in the dictionary to determine whether the image is a target signal or a non-target signal.
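A compact sketch of the projection/residue loop described above, in the style of matching pursuit; the dictionary normalisation, iteration count, and the use of absolute coefficient sums are assumptions.

```python
import numpy as np

def classify(test_vec, dictionary, is_target, n_iters=10):
    """Hedged sketch: repeatedly project the residue onto unit-norm
    dictionary examples, keep the largest coefficient each iteration, then
    compare summed coefficients of target vs. non-target examples.

    `dictionary` has one flattened example image per row; `is_target` is a
    boolean array marking the target examples.
    """
    D = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    residue = test_vec.astype(float).copy()
    coeffs = np.zeros(len(D))

    for _ in range(n_iters):
        proj = D @ residue                       # projections onto examples
        k = np.argmax(np.abs(proj))              # maximum scaling coefficient
        coeffs[k] += proj[k]
        residue = residue - proj[k] * D[k]       # update the residue

    target_sum = np.abs(coeffs[is_target]).sum()
    nontarget_sum = np.abs(coeffs[~is_target]).sum()
    return target_sum > nontarget_sum            # True -> classified as target
```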

Patent
14 Jul 1998
TL;DR: A memory is provided for storing control parameters used to adjust operations of the printer, desired image characteristic parameters, and previously measured image parameters; a processor is responsive to a performance assessment procedure for causing the print engine to create a toned test image on the photoreceptor.
Abstract: An image forming apparatus in accordance with the invention includes a print engine with a photoreceptor, a laser exposure device for creating an image on the photoreceptor, and one or more toning stations for toning the photoreceptor after imaging. A system for enabling adjustment of the apparatus performance characteristics further includes a sensor for detecting characteristics of a toned image on the photoreceptor. A memory is provided for storing control parameters used to adjust operations of the printer, desired image characteristic parameters, and previously measured image parameters. A processor is responsive to a performance assessment procedure for causing the print engine to create a toned test image on the photoreceptor. The processor then compares signals that are indicative of characteristic parameters of the toned test image with desired image characteristic parameters from the memory. Thereafter, in accordance with the comparison, the processor adjusts printer control parameters to bring the characteristics of the printed image closer to those which are dictated by the desired image characteristic parameters.

Patent
18 Jun 1998
TL;DR: The problem of improving compressibility and image quality is addressed for an image processor which compresses and outputs image data after correcting it.
Abstract: PROBLEM TO BE SOLVED: To improve compressibility and image quality with respect to an image processor which compresses and outputs image data after correcting it. SOLUTION: Input image data are divided into a character area and a non-character area through character area discrimination (S205 and S207). The character area is subjected to character compression (S217) after character area processing (S215) such as resolution conversion, character correction and binarization. As for the non-character area, a characteristic of a local area is discriminated (S209) and a macro area for compression including the local area is discriminated based on the discriminated characteristic (S211). A line drawing area is subjected to line drawing processing (S219) and lossless compression (S223) according to the characteristic of the macro area (S213), and a photographic area is subjected to photographic area processing (S223) and lossy compression (S225). After that, the compressed data are integrated (S227) and output (S229).

Patent
26 Nov 1998
TL;DR: The problem of obtaining high-quality print results regardless of printer model, by automatically performing image data processing that takes account of the model-dependent characteristics of the image data acquisition means, is addressed.
Abstract: PROBLEM TO BE SOLVED: To obtain high-quality print results independently of differences between printer models by automatically conducting image data processing that takes account of the model-dependent characteristics of the image data acquisition means. SOLUTION: The characteristics of an image data acquisition means (in this case, a digital camera), e.g. peculiarities in specific processing operations, are checked in advance, and image data processing contents that take the characteristics of the digital camera into account are set, per camera model name, in an image data processing contents storage section 13. An image data read section 11 then reads image data obtained by a digital camera, a model discrimination section 12 discriminates the model name, an image data processing section 14 selects the image data processing contents corresponding to the discrimination result and processes the image data accordingly, and a print processing section 15 performs print processing. The image data processing performed here means image data correction processing, such as color correction corresponding to the model, and magnification/reduction processing.

Patent
30 Sep 1998
TL;DR: In this paper, the authors proposed a method and system for quickly comparing a target image with candidate images in a database, and for extracting those images in the database that best match the target image.
Abstract: A method and system for quickly comparing a target image (110) with candidate images in a database, and for extracting those images in the database that best match the target image. The method uses a fundamental comparison technique (170) based on the decomposition of the images into 'blobs'. A given image is modified so as to reduce detail. Cohesive regions of the reduced-detail image are transformed into uniform-color blobs. Statistics are generated (15) for each such blob, characterizing, for example, its area, color, location and shape, and also, optionally, measures of the texture of the corresponding area in the original image. An image-similarity score is computed for any pair of images from the blob-specific image statistics. The image-similarity measure is computed by placing the blobs of the target image in one-to-one correspondence (510) with blobs of the candidate image, generating blob-similarity scores (520) over these paired blobs from the pre-computed blob-specific statistics of the images, and generating an overall image similarity score (600) as a function of the blob-similarity scores.
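A rough sketch of the blob-extraction stage only (blur to reduce detail, quantise colours, label cohesive regions, collect per-blob statistics); the smoothing size, colour quantisation, and minimum area are assumptions, and the patent's shape and texture measures and the one-to-one blob pairing that yields the final similarity score are omitted.

```python
import numpy as np
from scipy.ndimage import label, uniform_filter

def blob_stats(img, n_colors=8, min_area=50):
    """Hedged sketch: decompose an 8-bit RGB image (H x W x 3) into
    uniform-color blobs and return per-blob area, mean colour and centroid."""
    smooth = uniform_filter(img.astype(float), size=(7, 7, 1))  # reduce detail
    quant = (smooth / 256 * n_colors).astype(int)               # quantise colours
    key = quant[..., 0] * n_colors**2 + quant[..., 1] * n_colors + quant[..., 2]

    blobs = []
    for value in np.unique(key):
        lab, n = label(key == value)            # cohesive regions of one colour
        for i in range(1, n + 1):
            ys, xs = np.nonzero(lab == i)
            if ys.size < min_area:
                continue                        # ignore tiny fragments
            blobs.append(dict(area=ys.size,
                              color=img[ys, xs].mean(axis=0),
                              centroid=(ys.mean(), xs.mean())))
    return blobs
```

An image-similarity score could then be built by pairing blobs between two images and combining per-pair similarity scores, as the abstract describes.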

Proceedings ArticleDOI
12 Oct 1998
TL;DR: In this article, the results obtained with Wiener-type filters are compared to those obtained through the use of a multiscale spatially adaptive filter, and the degraded test image is obtained by numerical turbulent wavefront simulation.
Abstract: Atmospheric turbulence imposes a strong limit for observation on long propagation paths. For standard video frequencies, the image of a distant object observed through turbulence is blurred. The degradation extent depends on the turbulence strength, which is characterized by the value of the Fried parameter r0. Knowing r0 allows us to estimate the turbulence transfer function. The image can then be processed by means of a classical deconvolution filter. Here, the results obtained with Wiener-type filters are compared to those obtained through the use of a multiscale spatially adaptive filter. The degraded test image is obtained by numerical turbulent wavefront simulation.
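A minimal sketch of the Wiener-type baseline the paper compares against, with a simplified long-exposure turbulence transfer function whose cutoff frequency scales with the Fried parameter r0; the constant noise-to-signal ratio and the pixel-based frequency units are assumptions.

```python
import numpy as np

def turbulence_otf(shape, f_cut):
    """Simplified long-exposure turbulence MTF, exp(-3.44 (f/f_cut)^(5/3));
    f_cut scales with r0 and is given here in cycles per pixel."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    f = np.hypot(fy, fx)
    return np.exp(-3.44 * (f / f_cut) ** (5 / 3))

def wiener_deconvolve(degraded, otf, nsr=1e-2):
    """Classical Wiener deconvolution with a constant noise-to-signal ratio."""
    G = np.fft.fft2(degraded)
    W = np.conj(otf) / (np.abs(otf) ** 2 + nsr)   # Wiener filter in Fourier space
    return np.real(np.fft.ifft2(W * G))
```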

Proceedings ArticleDOI
24 Apr 1998
TL;DR: The proposed CBIR approach has significantly increased the accuracy in obtaining results for image retrieval, and was able to distinguish between pictures that fooled previous CBIR engines.
Abstract: Content-based image retrieval (CBIR) enables a user to extract an image, based on a query, from a database containing a vast number of pictures. This concept may be applied to many fields of interest, including forensic science and image archiving. Current CBIR systems, however, are inaccurate. The purpose of this research project was to improve the accuracy of CBIR. The image's structural properties were examined to distinguish one image from another. From the gray levels of an image, a gradient can be computed at each pixel. Pixels with a gradient magnitude larger than the threshold are assigned a value of 1. These binary digits are added across the horizontal, vertical, and diagonal directions to compute three projections. These vectors are then compared with the vectors of the image to be matched using the Euclidean distance formula. These numbers are then stored in a bookmark so that the image needs to be examined only once. A program has been developed for Matlab on a Sun Sparc computer with Unix OpenWindows that performs this method of projecting gradients. Three databases were amassed for testing the proposed system's accuracy: 82 digital camera pictures, 1000 photographic images, and a set of object-oriented photos. The program achieved 100% accuracy on all images submitted to the database, and was able to distinguish between pictures that fooled previous CBIR engines. More important, though, was the program's ability to find certain similar scenarios in the database. This CBIR approach has significantly increased the accuracy in obtaining results for image retrieval.
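A short sketch of the projection signature described above (thresholded gradient magnitude summed along rows, columns, and diagonals, compared by Euclidean distance); the gradient operator and the threshold value are assumptions.

```python
import numpy as np

def gradient_projection_signature(grey, threshold=30.0):
    """Hedged sketch: threshold the gradient magnitude to a binary edge map,
    then sum it along the horizontal, vertical and diagonal directions."""
    g = grey.astype(float)
    gy, gx = np.gradient(g)
    edges = (np.hypot(gx, gy) > threshold).astype(int)

    horizontal = edges.sum(axis=1)               # one value per row
    vertical = edges.sum(axis=0)                 # one value per column
    diagonal = np.array([np.trace(edges, offset=k)
                         for k in range(-edges.shape[0] + 1, edges.shape[1])])
    return horizontal, vertical, diagonal

def signature_distance(sig_a, sig_b):
    """Euclidean distance between signatures of two same-size images."""
    return sum(np.linalg.norm(a - b) for a, b in zip(sig_a, sig_b))
```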

Patent
11 Aug 1998
TL;DR: A control action for compensating image quality is executed by selecting an area in which a test patch image, formed in order to always maintain the formed image quality at a fixed condition, is stably formed.
Abstract: PROBLEM TO BE SOLVED: To execute a control action for compensating image quality by selecting an area in which a test patch image, formed in order to always maintain the formed image quality at a fixed condition, is stably formed. SOLUTION: In order to maintain the image quality at the fixed condition, an image is formed from the test patch image 19. By detecting the density of the image 19, the developing bias voltage of one of the process means, for example the developing device 4 (4a-4d), which serves as the density control means arranged in the image forming device and by which the reference (standard) image quality can be obtained, is controlled. A test image is formed by toner around the whole circumference of a transfer drum 5 under the fixed condition. The density of the test image is then detected by a density detection sensor 16. Based on the detected result, the area where the test patch image is to be formed is selected by a CPU 6. As the criterion for this selection, an area where the toner density exhibits an excellent value is chosen, so that the forming of the test patch image is stabilized. Thus, the control action for compensating the image quality is further enhanced.


Patent
12 Mar 1998
TL;DR: A method for characterizing a response function of an output device is proposed, which comprises the steps of producing a test image on the output device with a set of one or more test patches having known code values; obtaining one or more captured images of the test image using a digital camera; determining colorimetric values for each of the test patches in the captured images using a known response function of the digital camera; and determining a response function of the output device that relates the known code values to the determined colorimetric values.
Abstract: A method for characterizing a response function of an output device, the method comprises the steps of producing a test image on the output device with a set of one or more test patches having known code values; obtaining one or more captured images of the test image using a digital camera; determining colorimetric values for each of the test patches in the captured images using a known response function of the digital camera; and determining a response function of the output device which relates the known code values to the determined colorimetric values.

Patent
15 Jun 1998
TL;DR: A profile correction part 10 reads information from a correction history information storage part 9 and displays past correction history for the color outputs at the grating points on the display device 1, so that the operator can set new correction values with reference to earlier corrections.
Abstract: PROBLEM TO BE SOLVED: To correct a color conversion table more efficiently by referring to correction history information for a specific color when correcting the color conversion table of a color converting means, so that a specific input color matches the corresponding color of the output image. SOLUTION: An operator compares a test image on a display device 1 with its printed counterpart and selects colors that do not match between the two images. A profile correction part 10 reads information from a correction history information storage part 9 and displays past correction history information regarding the color outputs at the grating points on the display device 1. Taking the past correction history into account, the operator sets the output value considered optimum at each grating point. The profile correction part 10 additionally stores the corrected grating point and its output information in the correction history information storage part 9. Consequently, a new correction value can be set by referring to the history of changes up to the previous correction, which facilitates the operation. COPYRIGHT: (C)2000,JPO