
Showing papers on "Standard test image published in 1990"


Journal ArticleDOI
P R Lennard
06 Sep 1990-Nature
TL;DR: The availability of user-friendly scientific image analysis software for the Macintosh II has made the application of digital imaging techniques both practical and cost-effective in many areas of research.
Abstract: The availability of user-friendly scientific image analysis software for the Macintosh II has made the application of digital imaging techniques both practical and cost-effective in many areas of research.

66 citations


Patent
03 Oct 1990
TL;DR: In this article, an automatic defect correction drive utilizing feedback in an image display device is presented. The defect correction is performed using a test image constituted of bright points of known positions distributed on a screen, which is then analyzed via an image acquisition device to deduce from it the scanning, focusing and amplitude corrections to be applied to the display, so that the test pixels displayed on the screen have their expected positions and characteristics.
Abstract: An automatic defect correction drive utilizing feedback in an image display device. According to the present invention, during an acquisition phase, a test image constituted of bright points of known positions distributed on a screen is displayed. This displayed image is then analyzed via an image acquisition device, to deduce from it the scanning, focusing and amplitude corrections to be applied to the display, so that the test pixels displayed on the screen have their expected positions and characteristics. These corrections are then interpolated for the intermediate pixels between test points, and then, during a continuation phase, these corrections are updated.
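The correction-interpolation step this abstract describes can be pictured with a small numerical sketch: corrections measured at a sparse grid of bright test points are interpolated to every intermediate pixel. The grid layout, the use of SciPy's RegularGridInterpolator, and all parameter values below are illustrative assumptions, not details from the patent.

```python
# Illustrative sketch (not the patent's implementation): corrections measured at
# a sparse grid of bright test points are interpolated to every screen pixel.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def dense_correction_maps(test_xs, test_ys, corr_x, corr_y, width, height):
    """corr_x[j, i], corr_y[j, i]: corrections measured at test point (test_xs[i], test_ys[j])."""
    yy, xx = np.mgrid[0:height, 0:width]
    pts = np.stack([yy.ravel(), xx.ravel()], axis=-1)
    maps = []
    for corr in (corr_x, corr_y):
        f = RegularGridInterpolator((test_ys, test_xs), corr,
                                    bounds_error=False, fill_value=None)
        maps.append(f(pts).reshape(height, width))
    return maps  # per-pixel x and y corrections

# Hypothetical 5 x 5 grid of test points on a 640 x 480 display.
xs, ys = np.linspace(0, 639, 5), np.linspace(0, 479, 5)
rng = np.random.default_rng(0)
dx = rng.uniform(-2, 2, (5, 5))    # stand-ins for measured corrections
dy = rng.uniform(-2, 2, (5, 5))
dense_dx, dense_dy = dense_correction_maps(xs, ys, dx, dy, 640, 480)
```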

56 citations


Journal ArticleDOI
TL;DR: In this paper, edge-preserving smoothing techniques are compared by considering a test image which contains a central disk-shaped region with a step or a ramp edge against a uniform background.

46 citations


Patent
15 Mar 1990
TL;DR: An image processing apparatus having a third-order nonlinear stimulated photon echo medium is described in this article, which can store large numbers of images in the form of a Fourier transformed pattern by spectral modulation.
Abstract: An image processing apparatus having a third-order nonlinear stimulated photon echo medium (1). The photon echo medium (1) can store large numbers of images in the form of a Fourier transformed pattern by spectral modulation. The spectral modulation is carried out by sending optical pulses and an optical pulse train having (or not having) image information to the medium (1) so that the populations in the ground and excited states are modulated after the passage of the pulse trains and pulses. The Fourier transformed pattern is converted back to temporal modulation, consisting of a sequence of echo pulses that reproduce the original data pulse train. By using the apparatus, ultrafast operations such as convolution and correlation between a number of reference images and a test image can be achieved.

19 citations


Patent
25 May 1990
TL;DR: In this paper, a color correction device corrects the color signal which is output by the reading device, and a control device 1101 is provided which allows the color correction device to perform different color correction processing depending upon whether the routine copy image reading mode or the test pattern reading mode is in use.
Abstract: PURPOSE: To enable reading of a density variation unit with sufficiently high accuracy, whatever color test image may be used, without sacrificing color reproducibility even in the routine copying mode, by performing chromatic correction processing at the time of routine copy reading that differs from that at the time of test image reading, or vice versa. CONSTITUTION: A reading device 1014, which reads a color copy and outputs a color signal for recording, consists of a light source which emits light to the surface of a medium for recording and a sensor which receives the reflected light. A density variation correction device 1020 corrects the drive parameters of a recording head during a recording process in accordance with a density variation which is read from a test pattern. A color correction device 1017 corrects the color signal output by the reading device 1014, and its output is supplied to a recording head 1001 when in the routine copying mode. In addition, a control device 1101 is provided which allows the color correction device to perform different color correction processing depending upon whether the routine copy image reading mode or the test pattern reading mode is selected. COPYRIGHT: (C)1992,JPO&Japio

12 citations


Proceedings ArticleDOI
27 Nov 1990
TL;DR: In this article, a defect-detection method based on image processing technologies is presented for automatic inspection of color-printed matter, where a reference pattern containing allowable ranges is expressed in an index space which is a three-dimensional flag table constructed from a gray-level axis and two planar coordinate axes.
Abstract: The authors present a novel defect-detection method based on image processing technologies, aimed at automatic inspection of color-printed matter. In this method, a reference pattern containing allowable ranges is expressed in an index space which is a three-dimensional flag table constructed from a gray-level axis and two planar coordinate axes. Each pixel of the test image is inspected by taking its three-dimensional address in the index space and then referring to the corresponding flag in the index space. This method allows images to be inspected at high speed using human-like judgment criteria. The validity of the method is demonstrated by experiments using a prototype for inspecting the printed surface of prepaid cards.
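A rough way to picture the index-space idea is as a boolean table over (row, column, gray level): flags are set from reference prints with an allowable gray-level tolerance, and a test pixel is a defect candidate when its own three-dimensional address is not flagged. The tolerance, image sizes and function names below are invented for illustration; the paper's actual construction of the allowable ranges may differ.

```python
# Hedged sketch of the index-space idea: a 3-D boolean flag table over
# (row, column, gray level). Flags are set from reference prints with an
# allowable gray-level tolerance; a test pixel is flagged as a defect when
# its own (row, column, gray) address is not marked allowable.
import numpy as np

def build_index_space(reference_images, levels=256, tolerance=8):
    h, w = reference_images[0].shape
    allowed = np.zeros((h, w, levels), dtype=bool)
    for ref in reference_images:
        lo = np.clip(ref.astype(int) - tolerance, 0, levels - 1)
        hi = np.clip(ref.astype(int) + tolerance, 0, levels - 1)
        for g in range(levels):
            allowed[..., g] |= (lo <= g) & (g <= hi)
    return allowed

def inspect(test_image, allowed):
    rows, cols = np.indices(test_image.shape)
    defects = ~allowed[rows, cols, test_image.astype(int)]
    return defects  # boolean defect map, True where the address is not allowable
```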

9 citations


Journal ArticleDOI
TL;DR: This paper shows the existence of an implementation of a repeated median filter which achieves the same degree of smoothing but consistently yields less edge distortion for binary images, and demonstrates that the algorithm yields an output image which is significantly closer to the original image than the outputs of the repeated standard or regular recursive algorithms.

9 citations


Proceedings ArticleDOI
05 Nov 1990
TL;DR: The JPEG algorithm for compression of grayscale and color images was implemented in software and tested on nine ISO color test images; image-optimized Huffman codes were generated for each image and their performance was compared with the proposed default Huffman code.
Abstract: The JPEG algorithm for compression of grayscale and color images was implemented in software and tested on nine ISO color test images. Image-optimized Huffman codes were generated for each image and the performance of these codes was compared with the proposed default Huffman code as well as with a code optimized for the average of the nine images. Statistics were generated for the run lengths and bits of significance required to encode the quantized DC and AC components of the Discrete Cosine Transforms (DCTs) for the Luminance and Chrominance image arrays. Statistics were also formed to determine the fraction of the code used to locate the positions of the non-zero spectral samples as opposed to the values of those samples.
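For orientation, the "image-optimized" part can be sketched as nothing more than building a Huffman code from the observed frequencies of (run, size) symbols and measuring the average code length. The symbol statistics and helper names below are invented; they are not the paper's data or the JPEG default tables.

```python
# Rough sketch of image-optimized entropy coding: build a Huffman code from the
# observed frequencies of (run, size) symbols and compute the average code length.
import heapq
from collections import Counter

def huffman_code_lengths(freqs):
    """Return {symbol: code length in bits} for a Huffman code built from freqs."""
    heap = [(f, i, (sym,)) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    lengths = Counter()
    counter = len(heap)
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        for sym in s1 + s2:
            lengths[sym] += 1          # every merge adds one bit to the members' codes
        heapq.heappush(heap, (f1 + f2, counter, s1 + s2))
        counter += 1
    return dict(lengths)

# Hypothetical (run, size) symbol counts gathered from one image's quantized DCTs.
freqs = {(0, 1): 5000, (0, 2): 3000, (1, 1): 1500, (0, 3): 800, (2, 1): 400, "EOB": 6000}
lengths = huffman_code_lengths(freqs)
total = sum(freqs.values())
avg_bits = sum(freqs[s] * lengths[s] for s in freqs) / total
print(f"optimized average symbol length: {avg_bits:.2f} bits")
```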

9 citations


Patent
22 Jan 1990
TL;DR: In this paper, a multiplicity of frames or image fields are output from a camera which scans repeated occurrences of the event, and selected data representing individual portions of frames are accumulated in essentially random order and used to construct a composite image of the high speed event.
Abstract: The invention is a method of imaging a high speed event. A multiplicity of frames, or image fields, are output from a camera which scans repeated occurrences of the event. Selected data representing individual portions of frames are accumulated in essentially random order. The selected data are used to construct a composite image of the high speed event.

7 citations


Proceedings ArticleDOI
01 Sep 1990
TL;DR: A novel algorithm has been developed to filter edge noise from the difference images that has reduced edge noise by 98% over the unfiltered image and can be implemented using off-the-shelf hardware.
Abstract: A digital machine-inspection system is being developed at Oak Ridge National Laboratory to detect flaws on printed graphic images. The inspection is based on subtraction of a digitized test image from a reference image to determine the location, number, extent, and contrast of potential flaws. When performing subtractive analysis on the digitized information, two sources of errors in the amplitude of the difference image can develop: (1) spatial misregistration of the reference and test sample, or (2) random fluctuations in the printing process. Variations in printing and registration between samples will generate topological artifacts related to surface structure, which is referred to as edge noise in the difference image. Most feature extraction routines require that the difference image be relatively free of noise to perform properly. A novel algorithm has been developed to filter edge noise from the difference images. The algorithm relies on the a priori assumption that edge noise will be located near locations having a strong intensity gradient in the reference image. The filter is based on the structure of the reference image and is used to attenuate edge features in the difference image. The filtering algorithm, consisting of an image multiplication, a global intensity threshold, and an erosion/dilation, has reduced edge noise by 98% over the unfiltered image and can be implemented using off-the-shelf hardware.
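The three-step filter named at the end of the abstract (image multiplication, global intensity threshold, erosion/dilation) can be sketched roughly as follows. The gradient-based attenuation mask, the use of a Gaussian gradient, and the parameter values are assumptions standing in for whatever the authors actually used.

```python
# Sketch of the described edge-noise suppression, assuming grayscale NumPy
# images: weight the difference image by a mask derived from the reference
# gradient, apply a global threshold, then clean up with an erosion/dilation.
import numpy as np
from scipy import ndimage

def filter_edge_noise(reference, test, grad_scale=1.0, defect_thresh=30):
    diff = np.abs(test.astype(float) - reference.astype(float))
    grad = ndimage.gaussian_gradient_magnitude(reference.astype(float), sigma=1.0)
    # Attenuation mask: near 0 on strong reference edges, near 1 on flat regions.
    mask = 1.0 / (1.0 + grad_scale * grad)
    attenuated = diff * mask                         # image multiplication
    candidates = attenuated > defect_thresh          # global intensity threshold
    cleaned = ndimage.binary_opening(candidates)     # erosion followed by dilation
    return cleaned                                   # boolean map of potential flaws
```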

6 citations


Proceedings ArticleDOI
01 May 1990
TL;DR: In this paper, the impact of the order of acquisition of different views on the L2 norm of the image-domain reconstruction error is determined for band-limited temporal variation, and a novel technique for lowering the sampling rate requirement while preserving image quality is proposed and investigated.
Abstract: This paper addresses the tomographic imaging of time-varying distributions, when the temporal variation during acquisition of the data is high, precluding Nyquist rate sampling. This paper concentrates on the open (and hitherto unstudied) problem of nonperiodic temporal variation, which cannot be reduced to the time-invariant case by synchronous acquisition. The impact of the order of acquisition of different views on the L2 norm of the image-domain reconstruction error is determined for band-limited temporal variation. Based on this analysis, a novel technique for lowering the sampling rate requirement while preserving image quality is proposed and investigated. This technique involves an unconventional projection sampling order which is designed to minimize the L2 image-domain reconstruction error of a representative test image. A computationally efficient design procedure reduces the image data into a Grammian matrix which is independent of the sampling order. Further savings in the design procedure are realized by using a Zernike polynomial series representation for the test image. To illustrate the approach, reconstructions of a computer phantom using the best and conventional linear sampling orders are compared, showing a seven-fold decrease in the error norm by using the best scheme. The results indicate the potential for efficient acquisition and tomographic reconstruction of time-varying data. Applications of the techniques are foreseen in X-ray computer tomography and magnetic resonance imaging.

Proceedings ArticleDOI
01 Sep 1990
TL;DR: In this article, median-based filtering techniques are compared by considering a test image which contains a central disk-shaped region with a step or a ramp edge against a uniform background.
Abstract: Median-based filtering techniques are compared by considering a test image which contains a central disk-shaped region with a step or a ramp edge against a uniform background. Free parameters are the amplitude of the added Gaussian noise, the edge slope and the number of filtering iterations. The quantitative comparison measure is the normalized squared error between the filtered noisy image and the noise-free image, computed on the flat image regions and on the transition region separately.
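A comparison of this kind is easy to reproduce in outline: synthesize the disk test image, add Gaussian noise, iterate a median filter, and compute a normalized squared error separately on the flat and transition regions. The image size, noise level, filter window and the exact normalization below are placeholder choices, not the paper's.

```python
# Minimal recreation of the kind of comparison described: disk test image,
# additive Gaussian noise, iterated median filtering, and a normalized squared
# error computed separately on flat and transition regions.
import numpy as np
from scipy import ndimage

def disk_test_image(size=128, radius=32, fg=200.0, bg=50.0):
    yy, xx = np.mgrid[0:size, 0:size]
    r = np.hypot(yy - size / 2, xx - size / 2)
    return np.where(r <= radius, fg, bg), r

def normalized_sq_error(filtered, clean, region):
    return np.sum((filtered[region] - clean[region]) ** 2) / np.sum(clean[region] ** 2)

clean, r = disk_test_image()
rng = np.random.default_rng(0)
noisy = clean + rng.normal(0.0, 20.0, clean.shape)

filtered = noisy
for _ in range(3):                              # number of filtering iterations
    filtered = ndimage.median_filter(filtered, size=5)

transition = np.abs(r - 32) <= 3                # band around the step edge
flat = ~transition
print(normalized_sq_error(filtered, clean, flat),
      normalized_sq_error(filtered, clean, transition))
```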

Patent
02 Apr 1990
TL;DR: In this paper, a system and method for processing image data in a graphics display system using parallel processing is described, in which a source image is convolved by a kernel function to enhance selected features and assist in image analysis.
Abstract: A system and method for processing image data in a graphics display system using parallel processing. A source image represented as an array of image pixel values is convolved by a kernel function to enhance selected features and assist in image analysis. Bi-linear interpolation is implemented to preserve picture quality when a source image is enlarged or reduced. A series of processing elements (150) connected in parallel are used to speed image transformation and interpolation. The application of convolution is divided between the processors so that the processing with adjacent pixels overlaps and output image pixels are generated in fewer machine cycles. A pipeline architecture continually processes values from the parallel processors through validation and conversion to generate the final image.
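Serial NumPy/SciPy stand-ins for the two operations the patent parallelizes, convolution by a kernel and bilinear interpolation for resizing, are sketched below; they show the arithmetic only, not the pipelined multi-processor architecture, and the kernel and image sizes are arbitrary.

```python
# Illustrative (serial) versions of convolution and bilinear interpolation.
import numpy as np
from scipy import ndimage

def convolve_image(image, kernel):
    return ndimage.convolve(image.astype(float), kernel, mode="nearest")

def bilinear_resize(image, out_h, out_w):
    in_h, in_w = image.shape
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, in_h - 1), np.minimum(x0 + 1, in_w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    img = image.astype(float)
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

sharpen = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float)  # example kernel
src = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(float)
enhanced = convolve_image(src, sharpen)
enlarged = bilinear_resize(enhanced, 128, 128)
```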

Proceedings ArticleDOI
01 Sep 1990
TL;DR: As part of the development of a real-time IR target processor test bed, a number of image processing algorithms were developed, simulated in software, and evaluated for implementation.
Abstract: As part of the development of a real-time IR target processor test bed, a number of image processing algorithms were developed, simulated in software, and evaluated for implementation. Algorithms performing image pre-processing, target localization, segmentation and target/clutter discrimination were evaluated using an IR image data base. The algorithms selected are being implemented on the test bed using commercially available board-level components, and are capable of processing imagery at real-time rates (30 frames/sec).

Patent
13 Dec 1990
TL;DR: In this paper, a test picture with 4:3 aspect ratio is compared with stored values of time, and if no agreement is found the deflection current is adjusted. Control pulses may be called up from memory.
Abstract: The camera forms one image corresponding to an aspect ratio of 4:3 and another image corresponding to a ratio of 16:9. CCD images are distorted optically, e.g. by a cylindrical lens, so that video intervals between known points are the same for both. With a picture tube, a test picture with 4:3 aspect ratio is compared with stored values of time, and if no agreement is found the deflection current is adjusted. Control pulses may be called up from memory. USE - In EDTV or HDTV cameras with picture tubes or CCD image sensors.

Patent
01 Oct 1990
TL;DR: In this article, the authors proposed a method to automatically correct defects through feedback in an image display apparatus, which consists, in an acquisition phase, in displaying a test image consisting of bright points with known positions distributed over the screen and in analysing this displayed image via an image recall device.
Abstract: In order to automatically correct defects through feedback in an image display apparatus, the invention consists, in an acquisition phase, in displaying a test image consisting of bright points with known positions distributed over the screen, in analysing this displayed image via an image recall device in order to deduce therefrom the corrections for scanning, for focusing and for amplitude to be applied to the display means, so that the test pixels displayed on the screen have the expected positions and characteristics, and in interpolating these corrections for the intermediate pixels between test points; and, in a tracking phase, in updating these corrections. The invention applies in particular to the displaying of images by projection or back-projection, possibly from collections of projectors, and to displaying on tubes.

Proceedings ArticleDOI
01 Jul 1990
TL;DR: A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution, which has been found to produce a low mean-square-error and a high compression ratio.
Abstract: A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

Proceedings ArticleDOI
01 Aug 1990
TL;DR: Under this system, distributed processing in the image compression processor and the image reconstruction displays reduces the load on the host computer, and supplies an environment where the control routines for PACS and the hospital information system (HIS) can co-operate.
Abstract: We previously developed an image reconstruction display for reconstructing images compressed by our hybrid compression algorithm. The hybrid algorithm, which improves image quality, applies Discrete Cosine Transform coding (DCT) and Block Truncation Coding (BTC) adaptively to an image, according to its local properties. This reconstruction display receives the compressed data from the host computer through a BMC channel and quickly reconstructs good quality images using a pipeline-based microprocessor. This paper describes a prototype of a system for compression and reconstruction of medical images. It also describes the architecture of the image compression processor, one of the components of the system. This system consists of the image compression processor, a host main-frame computer and reconstruction displays. Under this system, distributed processing in the image compression processor and the image reconstruction displays reduces the load on the host computer, and supplies an environment where the control routines for PACS and the hospital information system (HIS) can co-operate. The compression processor consists of a maximum of four parallel compression units with communication ports. In this architecture, the hybrid algorithm, which includes serial operations, can be processed at high speed by communicating the internal data. In experiments, the compression system proved effective: the compression processor compressed a 1k x 1k image in about 2 seconds using four compression units. The three reconstruction displays showed the image at almost the same time. Display took less than 7 seconds for the compressed image, compared with 28 seconds for the original image.
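One half of the hybrid coder, Block Truncation Coding, is simple enough to sketch: each block is binarized at its mean and reconstructed with two levels that preserve the block mean and standard deviation. This is the textbook BTC step, shown only to make the abstract concrete; the authors' adaptive DCT/BTC switching is not reproduced here.

```python
# A bare-bones Block Truncation Coding (BTC) pass over a grayscale image.
import numpy as np

def btc_block(block):
    mean, std = block.mean(), block.std()
    bitmap = block >= mean
    q, m = bitmap.sum(), bitmap.size
    if q in (0, m):                               # flat block: single level
        return np.full_like(block, mean, dtype=float)
    a = mean - std * np.sqrt(q / (m - q))         # reconstruction level for '0' pixels
    b = mean + std * np.sqrt((m - q) / q)         # reconstruction level for '1' pixels
    return np.where(bitmap, b, a)

def btc_encode_decode(image, block=4):
    h, w = image.shape
    out = np.zeros((h, w), dtype=float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = btc_block(
                image[y:y + block, x:x + block].astype(float))
    return out
```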

Journal ArticleDOI
TL;DR: If a test image and a complementary stored image together with a complementary test image and stored image are compared optically, the emerging light intensity is minimum for the best match (i.e., shortest Hamming distance), and this allows the suggested architecture to do efficient parallel comparison and to point out the best matched test image.
Abstract: A simple optoelectronic architecture is defined with capabilities of matching a test image (1-D or 2-D) with stored images. It is shown that if a test image and a complementary stored image together with a complementary test image and stored image are compared optically, the emerging light intensity is minimum for the best match (i.e., shortest Hamming distance). This allows the suggested architecture to do efficient parallel comparison and to point out the best matched test image. If the image is presented in a two color scheme (say red–blue) and the memory images are stored in the complementary color scheme transparency (i.e., blue–red), then a white light source allows comparison and detection of the best matched image without the need for a separate set of complementary images. A time-varying light intensity (or a time-varying thresholding voltage) source and an integrating-threshold device which switches its state when light input falls below a threshold value are two ingredients which are used to select the best match in parallel. Grey level detection can also be implemented by the scheme. The architecture is also ideally suited to finding the closeness of match of two images.
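A purely digital analogue of the optical comparison may help: overlaying the test image on the complement of a stored image (and the complement of the test image on the stored image) passes light only where the two disagree, so the total transmitted intensity is proportional to the Hamming distance, and the minimum identifies the best match. The array sizes and random memory contents below are illustrative only.

```python
# Digital stand-in for the optical comparison: transmitted "intensity" counts
# the pixels where test and stored images disagree (the Hamming distance).
import numpy as np

def transmitted_intensity(test, stored):
    # test, stored: binary (0/1) integer arrays of equal shape
    disagreement = (test & ~stored & 1) | (~test & stored & 1)
    return disagreement.sum()              # proportional to Hamming distance

def best_match(test, memory):
    intensities = [transmitted_intensity(test, s) for s in memory]
    return int(np.argmin(intensities)), intensities

rng = np.random.default_rng(2)
memory = [rng.integers(0, 2, (16, 16)) for _ in range(4)]
test = memory[2].copy()
test[0, :3] ^= 1                           # flip a few pixels
idx, scores = best_match(test, memory)
print(idx, scores)                         # image 2 remains the closest stored image
```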

Proceedings ArticleDOI
17 Jun 1990
TL;DR: The suggested architecture carries out efficient parallel comparison and points out the best matched test image using a TV screen and transparencies and is ideally suited to finding the closeness of match of two images for quality control-type operations.
Abstract: While true and complement images have the same information content, it is advantageous to retain both kinds of images for associative memory. This allows the implementation of a very simple optical computer which can perform real-time image matching. If a test image and a complementary stored image are compared optically with a complementary test image and a stored image, the emerging light intensity is proportional to the Hamming distance between the images. The suggested architecture then carries out efficient parallel comparison and points out the best matched test image using a TV screen and transparencies. A time-varying light-intensity (or a time-varying thresholding voltage) source and a thresholding device are used to select the best match in parallel. The architecture is also ideally suited to finding the closeness of match of two images for quality control-type operations. It can also be used to provide feedback to search for the best match.

Proceedings ArticleDOI
01 Mar 1990
TL;DR: In this article, a 2D local operator is described for computing the local curvature of intensity isocontours in a digital image, which directly estimates the average local curvatures of the isointensity contours, and does not require the explicit detection of edges.
Abstract: A 2-D local operator is described for computing the local curvature of intensity isocontours in a digital image. The operator directly estimates the average local curvature of the isointensity contours, and does not require the explicit detection of edges. In a manner similar to the Hueckel operator, a series of 2D basis functions defined over a circular local neighborhood extract a set of coefficients from the image at each point of investigation. These coefficients describe an approximation to a circular arc assumed to pass through the neighborhood center, and the curvature is taken as the inverse of the estimated arc radius. The optimal set of basis functions for approximating this particular target pattern is shown to be the Fourier series. Discretization of the continuous basis functions can create anisotropy problems for the local operator; however, these problems can be overcome either by using a set of correction functions, or by choosing a discrete function which closely approximates the circular neighborhood. The method is validated using known geometric shapes and is shown to be accurate in estimating both curvature and the orientation of the isocontours. When applied to a test image the curvature operator provides regional curvature measurements compatible with visible edges in the image.
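For contrast with the operator described above, the classical isophote-curvature formula computed directly from Gaussian image derivatives is sketched below. It is not the paper's Fourier-basis arc-fitting method, only the standard derivative-based alternative; the sign convention and smoothing scale are arbitrary choices.

```python
# Standard isophote (iso-intensity contour) curvature from image derivatives;
# shown only for orientation, not the paper's basis-function operator.
import numpy as np
from scipy import ndimage

def isophote_curvature(image, sigma=2.0):
    img = image.astype(float)
    Ix  = ndimage.gaussian_filter(img, sigma, order=(0, 1))
    Iy  = ndimage.gaussian_filter(img, sigma, order=(1, 0))
    Ixx = ndimage.gaussian_filter(img, sigma, order=(0, 2))
    Iyy = ndimage.gaussian_filter(img, sigma, order=(2, 0))
    Ixy = ndimage.gaussian_filter(img, sigma, order=(1, 1))
    num = Ixx * Iy**2 - 2.0 * Ixy * Ix * Iy + Iyy * Ix**2
    den = (Ix**2 + Iy**2) ** 1.5 + 1e-12
    return num / den       # curvature estimate of the iso-intensity contour at each pixel
```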

Proceedings ArticleDOI
09 Aug 1990
TL;DR: A two-level system for the segmentation of texture images into regions of common textural properties is described, which includes a purely numeric texture analyzer and a knowledge-based segmentor that uses rules derived from knowledge of the image-forming process to arrive at a segmentation.
Abstract: A two-level system for the segmentation of texture images into regions of common textural properties is described. The first level is a purely numeric texture analyzer that uses texture energy measures to transform the image into feature measure planes. The second level is a knowledge-based segmentor that uses rules derived from knowledge of the image-forming process to arrive at a segmentation. Two different control schemes that can be used to guide the segmentation process are described. These are based on parallel region growing and iterative quadtree splitting, respectively. An illustration of the performance of the system with both control schemes on a real test image is presented
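The first, numeric level can be pictured with Laws-style texture energy measures: convolve the image with small separable masks and average the local response magnitude to obtain one feature plane per mask. The particular masks and window size below are common defaults, not necessarily those used by the authors, and the knowledge-based second level is not shown.

```python
# A possible first-level texture analyzer: Laws-style texture energy planes.
import numpy as np
from scipy import ndimage

L5 = np.array([1, 4, 6, 4, 1], dtype=float)       # level
E5 = np.array([-1, -2, 0, 2, 1], dtype=float)     # edge
S5 = np.array([-1, 0, 2, 0, -1], dtype=float)     # spot

def texture_energy_planes(image, window=15):
    img = image.astype(float)
    img = img - ndimage.uniform_filter(img, window)     # remove local mean
    planes = []
    for row_mask, col_mask in [(L5, E5), (E5, L5), (E5, S5), (S5, S5)]:
        kernel = np.outer(row_mask, col_mask)
        response = ndimage.convolve(img, kernel, mode="nearest")
        planes.append(ndimage.uniform_filter(np.abs(response), window))
    return np.stack(planes, axis=-1)      # H x W x n_features feature measure planes
```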

Proceedings ArticleDOI
01 Jul 1990
TL;DR: A series of digital test images have been developed for display on either a GE CT9800 or a GE Signa MRI scanner, captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software.
Abstract: As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images have been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using it. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give horizontal or vertical streaking. While many of these results are significant from an engineering standpoint alone, there are clinical implications and some anatomy or pathology may not be visualized if an image capture system is used improperly.
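The slew-rate pattern described near the end (one-pixel black/white lines preceded by ten-pixel equilibration strips) is straightforward to synthesize; the exact layout, dimensions and gray values below are guesses at the intent rather than a copy of the actual test image.

```python
# Hypothetical synthesis of a slew-rate test pattern: ten-pixel equilibration
# strips followed by alternating one-pixel black/white vertical lines.
import numpy as np

def slew_rate_pattern(height=256, width=256, strip=10, black=0, white=255):
    row = np.empty(width, dtype=np.uint8)
    x = 0
    while x < width:
        row[x:x + strip] = black          # equilibration strip (black)
        x += strip
        row[x:x + strip] = white          # equilibration strip (white)
        x += strip
        for i in range(strip):            # alternating one-pixel lines
            if x >= width:
                break
            row[x] = black if i % 2 == 0 else white
            x += 1
    return np.tile(row, (height, 1))      # repeat the row down the image
```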

Patent
07 Mar 1990
TL;DR: In this article, the authors propose a test printing method which prevents the consumption of the ink sheet and recording paper in large quantities during test printing, by performing the heating by a second heating means corresponding to a part of the signal data selected by a signal selection means in the test recording of an image.
Abstract: PURPOSE: To prevent the consumption of the ink sheet and recording paper in large quantities by test printing, by performing the heating due to the second heating means corresponding to a part of the signal data selected by a signal selection means in the test recording of an image. CONSTITUTION: A large-sized printer 100 and a small-sized printer 200 are respectively loaded with ink sheets 30, 230, and recording papers 32, 232 are also set in the respective printers. When an image NTSC signal is inputted from a terminal 10, the image signal is converted to an RGB signal by a decoder 14 and an image is displayed on a CRT 46 through a frame memory 18, a D/A converter 43 and an encoder 44. When a test printing indication is inputted from an input operation part 36 by an operator, a switch 54 is changed over to the small-sized printer 200, the data of the RGB signal are successively read to form a signal for printing a small image through a color correction circuit 20 and a gradation control part 24, and the respective color inks are superposed by a thermal head 228 to form a test image. Therefore, when a large-size print is obtained, the consumption of the ink sheet and the recording paper in large quantities by test printing can be prevented.

01 Nov 1990
TL;DR: In this article, a hierarchical space-time image representation is discussed, and its properties are used in order to define a spectrum based model for the class of simple linear features, such as straight lines and edges.
Abstract: A hierarchical space-time image representation - the Multiresolution Fourier Transform (MFT) - is discussed, and its properties are used in order to define a spectrum-based model for the class of simple linear features, such as straight lines and edges. The model is used to derive a method of identifying these features and estimating their parameters, i.e. position and orientation. Results are presented for this process using a test image. The effect of white noise is considered, and a correlation measure is defined to show the effect of the noise upon the model. A method of further improving the results for a noisy image, by smoothing with oriented ellipses, is also considered.

Proceedings ArticleDOI
01 Mar 1990
TL;DR: An algorithm has been developed which allows a simple optoelectronic architecture to have capabilities of matching a test image (1-D or 2-D) with stored images, and it is shown that if a test image and a complementary stored image together with a complementary test image and stored image are compared optically, the emerging light intensity is minimum for the best match.
Abstract: An algorithm has been developed which allows a simple optoelectronic architecture to have capabilities of matching a test image (1-D or 2-D) with stored images. It is shown that if a test image and a complementary stored image together with a complementary test image and stored image are compared optically, the emerging light intensity is minimum for the best match. The suggested architecture then carries out efficient parallel comparison, and points out the best matched test image using a TV screen and transparencies. A time-varying light intensity (or a time-varying thresholding voltage) source and a device which switches its state when light input falls below a threshold value, like a night-light, are two ingredients which are used to select the best match in parallel. The architecture is also ideally suited to finding the closeness of match of two images for quality control type operations.

Proceedings ArticleDOI
03 Apr 1990
TL;DR: The results show that the third system, texture-energy measurement augmented by a knowledge-based quadtree-splitting process, provides a better segmentation of the test image than the other two.
Abstract: Three knowledge-based texture image segmentation systems are described, and their performance on a real test image is analyzed. The image is a small piece of a seismic section and the objective is to segment it into zones of common signal texture. The first system is based on a run length statistics algorithm extended by a decision process which incorporates heuristic rules to influence the segmentation. The second and third systems are based on texture energy measurement algorithms augmented by two different knowledge-based classification processes. The knowledge-based process of the second system is controlled by a parallel region growing scheme, and that of the third system is controlled by an iterative quadtree-splitting scheme. The results show that the third system, texture-energy measurement augmented by a knowledge-based quadtree-splitting process, provides a better segmentation of the test image than the other two.
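The iterative quadtree-splitting control scheme mentioned here is, in its generic form, a recursive split of blocks that are too inhomogeneous in the feature plane; a bare sketch follows. The homogeneity test (block standard deviation), the thresholds and the toy data are assumptions, and the knowledge-based rules of the actual system are omitted.

```python
# Generic quadtree splitting on a feature plane: split any block whose feature
# values are too inhomogeneous, down to a minimum block size. Assumes the image
# side length is a power of two.
import numpy as np

def quadtree_split(features, y, x, size, max_std, min_size, regions):
    block = features[y:y + size, x:x + size]
    if size <= min_size or block.std() <= max_std:
        regions.append((y, x, size))           # accept block as one homogeneous region
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            quadtree_split(features, y + dy, x + dx, half, max_std, min_size, regions)

# Toy 64 x 64 feature plane with two texture "zones".
rng = np.random.default_rng(3)
feat = np.where(np.add.outer(np.arange(64), np.arange(64)) > 64, 5.0, 1.0)
feat += rng.normal(0, 0.1, feat.shape)
regions = []
quadtree_split(feat, 0, 0, 64, max_std=0.5, min_size=4, regions=regions)
print(len(regions), "homogeneous blocks")
```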

Patent
18 Sep 1990
TL;DR: In this article, a test original obtained by drawing parallel lines with equal intervals on a transparent sheet is read out by a transmission type scanner and stored in the RAM of a computer, and the positional information of the parallel lines of the test image is obtained from the stored information.
Abstract: PURPOSE: To completely remove distortion and to obtain an accurate original image by detecting the characteristic information of an image receiving means and correcting the image of the original based upon the detected result. CONSTITUTION: A test original obtained by drawing parallel lines with equal intervals on a transparent sheet is read out by a transmission type scanner and stored in the RAM of a computer, and the positional information of the parallel lines of the test image is obtained from the stored information. After selecting the positional information closest to the center of the image out of the stored information, a deviation value is determined, the positional information of the original parallel lines is estimated by applying the least squares method, and a correction value is calculated from the detected position information and the estimated position information. Then, the error between two points close to both ends of the test original is calculated, the deviation value is changed, a deviation value minimizing the difference between the respective correction values and the difference between errors is found, and the estimated position information is obtained by applying the least squares method again, so that the optimum correction value is used as the correcting characteristic information of the scanner. Thus, correct positional information can be obtained.
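The least-squares step can be illustrated compactly: since the test original's lines are equally spaced, their detected positions should lie on a straight line position_k = a + b*k; fitting (a, b) gives the ideal positions, and the residuals become correction values. The measured positions below are invented for the example.

```python
# Hedged sketch: fit the detected line positions of an equally spaced test
# original by least squares and take the residuals as scanner corrections.
import numpy as np

def line_corrections(detected_positions):
    det = np.asarray(detected_positions, dtype=float)
    k = np.arange(len(det), dtype=float)
    A = np.stack([np.ones_like(k), k], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, det, rcond=None)
    estimated = a + b * k                    # ideal, distortion-free positions
    corrections = estimated - det            # per-line correction values
    return estimated, corrections

detected = [10.2, 30.1, 49.8, 70.3, 89.9, 110.4]   # invented measured positions
est, corr = line_corrections(detected)
print(np.round(corr, 2))
```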

01 Jan 1990
TL;DR: A two-level system for the segmentation of texture images into regions of common textural properties is described, consisting of a purely numeric Texture Analyzer and a Knowledge-Based Segmentor which applies heuristic rules, derived from knowledge of the image forming process, to arrive at a segmentation.
Abstract: In this paper we describe a two-level system for the segmentation of texture images into regions of common textural properties. The first level is a purely numeric Texture Analyzer which uses texture energy measures to transform the image into feature measure planes. The second level is a Knowledge-Based Segmentor which applies heuristic rules, derived from knowledge of the image forming process, to arrive at a segmentation. We discuss two different control schemes that can be used to guide the segmentation process. These are based on parallel region growing and iterative quadtree splitting respectively. An illustration of the performance of the system with both control schemes on a real test image is also presented.