
Showing papers on "Standard test image published in 1995"


Proceedings ArticleDOI
01 Jul 1995
TL;DR: Two problems associated with the detection and classification of motion in image sequences obtained from a static camera are considered, and an algorithm based on hysteresis thresholding is shown to give acceptably good results over a number of test image sets.
Abstract: The paper considers two problems associated with the detection and classification of motion in image sequences obtained from a static camera. Motion is detected by differencing a reference and the "current" image frame, and therefore requires a suitable reference image and the selection of an appropriate detection threshold. Several threshold selection methods are investigated, and an algorithm based on hysteresis thresholding is shown to give acceptably good results over a number of test image sets. The second part of the paper examines the problem of detecting shadow regions within the image which are associated with the object motion. This is based on the notion of a shadow as a semi-transparent region in the image which retains a (reduced contrast) representation of the underlying surface pattern, texture or grey value. The method uses a region growing algorithm whose growing criterion is based on a fixed attenuation of the photometric gain over the shadow region, in comparison to the reference image.

375 citations
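
A minimal sketch of the hysteresis-thresholding step described above, assuming grayscale frames as NumPy arrays; this is an illustrative reconstruction in Python, not the authors' code, and the two thresholds are free parameters.

import numpy as np
from scipy import ndimage

def hysteresis_motion_mask(reference, current, t_low, t_high):
    """Difference against a reference frame, then keep weak-difference
    pixels only where they connect to a strong-difference pixel."""
    diff = np.abs(current.astype(float) - reference.astype(float))
    strong = diff >= t_high
    weak = diff >= t_low                    # superset of `strong`
    labels, n = ndimage.label(weak)         # connected weak regions
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True  # regions touching a strong pixel
    keep[0] = False                         # background label stays off
    return keep[labels]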


Patent
07 Jun 1995
TL;DR: In this paper, a method for processing an image, consisting of a foreground and a background, to produce a highly compressed and accurate representation of the image, including the steps of scanning the image to create a digital image, comparing the digital image against a codebook of stored digital images, matching the digital image with one of the stored digital images of the codebook, producing an index code identifying the stored digital image as having matched the digital image, and subtracting the stored digital image from the digital image to produce a second digital image representing the foreground.
Abstract: A method for processing an image, consisting of a foreground and a background, to produce a highly compressed and accurate representation of the image, including the steps of scanning the image to create a digital image of the image, comparing the digital image against a codebook of stored digital images; matching the digital image with one of the stored digital images of the codebook; producing an index code identifying the background of the stored digital image as having matched the digital image; subtracting the stored digital image from the digital image to produce a second digital image representing the foreground of the stored digital image; and storing the second digital image with the index code. Techniques are also provided to enable merge/purge of the database(s) thereby created.

283 citations


Patent
27 Mar 1995
TL;DR: In this article, a standard DOS-FAT data file structure was proposed to handle not only compressed image data but also original image data which was obtained in a digital electronic still camera by a lot of types of personal computers.
Abstract: Image data are recorded in a memory card in a standard DOS-FAT data file structure. When the compression mode is set, original image data is compressed in accordance with a JPEG system, and compressed image data obtained and fixed information in a JPEG header are written in the memory card so as to form a JPEG file. When the uncompression mode is set, the original image data and fixed information in a TIFF header are written into the memory card so as to form a TIFF file. Accordingly, it becomes possible to handle not only compressed image data but also original image data which is obtained in a digital electronic still camera by a lot of types of personal computers.

106 citations


Patent
22 Feb 1995
TL;DR: In this article, a method for Golden Template Comparison (GTC) is provided that can be used to efficiently perform flaw and defect detection on a two-dimensional test image that is at least rotated and/or scaled and/or sub-pixel translated.
Abstract: A method for Golden Template Comparison (GTC) is provided that can be used to efficiently perform flaw and defect detection on a two-dimensional test image that is at least rotated and/or scaled and/or sub-pixel translated. Run-time inspection speed and accuracy are substantially improved by retrieving a golden template image that is rotated and/or scaled and/or translated in a manner substantially similar to the test image. This is accomplished by storing, in an array, a varied plurality of golden template images, each golden template image being characterized by a different combination of at least rotation and/or scale and/or sub-pixel translation. The array is indexed by the respective quantized rotation and/or quantized scale and/or sub-pixel translation of each version of the golden template image. The array can be either one-dimensional or multi-dimensional. At run-time, the values of the rotation and/or scale and/or sub-pixel translation of each test image are measured, and then quantized, thereby providing a unique index into the multi-dimensional array of reference and threshold images. The reference and threshold images stored at the memory location corresponding to the index are retrieved and then used for comparison with the test image to provide a difference image to be analyzed for flaws or defects.

86 citations
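
A minimal sketch of the indexed-template retrieval described above, in Python; the quantization steps and the render function are assumptions for illustration, not values from the patent.

import numpy as np

ROT_STEP, SCALE_STEP = 0.5, 0.01      # quantization bin widths (assumed)

def quantize(value, step):
    """Map a measured parameter onto its quantization-bin index."""
    return int(round(value / step))

def build_template_array(golden, rotations, scales, render):
    """Pre-render one golden template per (rotation, scale) bin;
    `render(img, rot, scale)` is a hypothetical geometric transform."""
    return {(quantize(r, ROT_STEP), quantize(s, SCALE_STEP)): render(golden, r, s)
            for r in rotations for s in scales}

def inspect(test_image, rot, scale, templates, threshold):
    """Retrieve the template whose pose matches the measured test-image
    pose and compare; the thresholded difference image marks defects."""
    reference = templates[(quantize(rot, ROT_STEP), quantize(scale, SCALE_STEP))]
    difference = np.abs(test_image.astype(float) - reference.astype(float))
    return difference > threshold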


Book ChapterDOI
10 Aug 1995
TL;DR: This paper describes a protocol for systematically evaluating the performance of dashed-line detection algorithms that includes a test image generator which creates random line patterns subject to prespecified constraints.
Abstract: This paper describes a protocol for systematically evaluating the performance of dashed-line detection algorithms. It includes a test image generator which creates random line patterns subject to prespecified constraints. The generator also outputs ground truth data for each line in the image. The output of the dashed line detection algorithm is then compared to these ground truths and evaluated using a set of criteria.

46 citations
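
A sketch of such a generator in Python, assuming the prespecified constraints are simply a line count and dash/gap lengths; the parameter ranges are illustrative, not the protocol's.

import numpy as np

rng = np.random.default_rng(0)

def generate_dashed_lines(shape, n_lines, dash_len, gap_len):
    """Draw random dashed lines; return the image plus per-line
    ground truth (endpoints and dash/gap lengths)."""
    img = np.zeros(shape, dtype=np.uint8)
    ground_truth = []
    for _ in range(n_lines):
        x0, y0 = rng.uniform(0, shape[1]), rng.uniform(0, shape[0])
        angle = rng.uniform(0, np.pi)
        length = rng.uniform(50, 200)               # illustrative range
        x1, y1 = x0 + length * np.cos(angle), y0 + length * np.sin(angle)
        t = np.linspace(0.0, 1.0, int(length))
        on = (t * length) % (dash_len + gap_len) < dash_len  # dash phase
        xs = np.clip((x0 + t * (x1 - x0)).astype(int), 0, shape[1] - 1)
        ys = np.clip((y0 + t * (y1 - y0)).astype(int), 0, shape[0] - 1)
        img[ys[on], xs[on]] = 255
        ground_truth.append({"endpoints": ((x0, y0), (x1, y1)),
                             "dash": dash_len, "gap": gap_len})
    return img, ground_truth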


Patent
Tomonari Yamauchi, Kazuya Yamada, Taro Terao, Takashi Nagao, Toshiya Yamada
06 Jan 1995
TL;DR: An image edit processing apparatus for editing digital images, and an image output apparatus for printing out the processed digital image, are described in this article; image data is read by an image scanner, and such editorial jobs as enlargement/reduction, change of resolution, rotation, composition, and correction of tone or brightness are carried out on the read image data by a personal computer, a workstation, or a print service, so that the edit processing is carried out efficiently at high speed.
Abstract: An image edit processing apparatus for editing digital images, and an image output apparatus for printing out the processed digital image. In the apparatus, image data is read by an image scanner, and such editorial jobs as enlargement/reduction, change of resolution, rotation, composition, and correction of tone or brightness are carried out on the read image data by a personal computer, a workstation, or a print service, so that the edit processing is carried out efficiently at high speed.

45 citations


Patent
02 Nov 1995
TL;DR: In this article, a computerized method of detecting regions of interest in a digital image optimizes and adapts a computer-aided detection (CAD) scheme on the basis of global image characteristics.
Abstract: A computerized method of detecting regions of interest in a digital image optimizes and adapts a computer-aided detection (CAD) scheme for detecting regions of interest in images. The optimization is based on global image characteristics. For each image in a database of images having known regions of interest, global image features are measured and an image characteristic index is established based on these global image features. All the images in the database are divided into a number of image groups based on the image characteristic index of each image in the database, and the CAD scheme is optimized for each image group. Once the CAD scheme is optimized, to process a digital image, an image-characteristics-based classification criterion is established for that image, and then the global image features of the digitized image are determined. The digitized image is then assigned an image characteristics rating based on the determined global image features, and the image is assigned to an image group based on the image rating. Then regions of interest depicted in the image are determined using a detection scheme adapted for the assigned image group.

41 citations


Patent
George Stephen Zabele1
17 Nov 1995
TL;DR: In this paper, a system and methods for controlling print quality of a printed product of a printer are presented, which may comprise an image acquisition unit for acquiring an electronic test image of printing on the printed product after printing by the printer.
Abstract: A system and methods for controlling print quality of a printed product of a printer, which may comprise an image acquisition unit for acquiring an electronic test image of printing on the printed product after printing by the printer, an image processor for comparing the test image of the printing with a prototype image of desired printing and determining a match between the printing on the test image and the printing on the prototype image, and means for generating an alarm representative of a print quality problem when the match between the printing on the test image and the printing on the prototype image does not satisfy a predetermined condition. The printed product is not required to include any reference marks.

36 citations


Patent
Dan S. Bloomberg
15 Dec 1995
TL;DR: In this paper, an efficient image processing technique automatically analyzes an image scanned at 300 or greater dpi and measures an image characteristic of the input image from which it is possible to determine whether the image has ever been previously scanned or printed at low resolution at some time in its history.
Abstract: An efficient image processing technique automatically analyzes an image scanned at 300 or greater dpi and measures an image characteristic of the input image from which it is possible to determine whether the image has ever been previously scanned or printed at low resolution at some time in its history. The technique is effective in classifying an image that was at one time embodied in paper form and scanned at a vertical resolution of 100 dpi or less, such as a facsimile document scanned in standard mode, or at 200 pixels/inch (referred to as "fine fax mode"). The technique performs measurements on the pixels included in the vertical or horizontal edges of symbols contained in the input image, and produces a distribution of the measurements. A numerical interpretation of the measurement distribution data is used to classify the image. The invention is computationally efficient because it may be applied to only a small percentage (e.g., 7%) of a document image as long as the subimage selected contains symbols such as characters. The invention may be incorporated into a document image management system where identification of documents that contain the artifacts of low resolution document images could be used to improve subsequent processing of the image, such as, for example, in an OCR system.

34 citations
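
One plausible reading of the edge measurement, sketched in Python: collect vertical run lengths at symbol edges and score how strongly they cluster at multiples of the upsampling factor (3 for 100-dpi content rescanned at 300 dpi). The scoring rule is an assumption, not the patented classifier.

import numpy as np

def vertical_run_histogram(bitmap):
    """Histogram of run lengths between value changes down each column,
    i.e. the vertical extent of edge steps on symbol boundaries."""
    runs = []
    for col in bitmap.T:                       # iterate over columns
        change = np.flatnonzero(np.diff(col.astype(int)))
        if change.size >= 2:
            runs.extend(np.diff(change))       # lengths between flips
    return np.bincount(np.asarray(runs, dtype=int))

def looks_low_resolution(bitmap, factor=3):
    """Score: fraction of runs whose length is a multiple of `factor`."""
    hist = vertical_run_histogram(bitmap)
    if hist.sum() == 0:
        return 0.0
    multiples = np.arange(hist.size) % factor == 0
    return float(hist[multiples].sum() / hist.sum())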


Proceedings ArticleDOI
28 Mar 1995
TL;DR: Variable-rate tree-structured VQ is applied to the coefficients obtained from an orthogonal wavelet decomposition, which makes the decision not to code vectors from the higher bands based on a distortion/rate tradeoff rather than a strict thresholding criterion.
Abstract: Variable-rate tree-structured VQ is applied to the coefficients obtained from an orthogonal wavelet decomposition. After encoding a vector, we examine the spatially corresponding vectors in the higher subbands to see whether or not they are "significant", that is, above some threshold. One bit of side information is sent to the decoder to inform it of the result. When the higher bands are encoded, those vectors which were earlier marked as insignificant are not coded. An improved version of the algorithm makes the decision not to code vectors from the higher bands based on a distortion/rate tradeoff rather than a strict thresholding criterion. Results of this method on the test image "Lena" yielded a PSNR of 30.15 dB at 0.174 bits per pixel.

26 citations
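
A sketch of the strict-threshold variant in Python, assuming even image dimensions and a one-level Haar decomposition; the paper's improved version replaces the fixed threshold with a distortion/rate tradeoff.

import numpy as np

def haar2d(img):
    """One level of a separable Haar decomposition; returns the
    LL, LH, HL, HH quarter-size subbands (even dimensions assumed)."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2          # horizontal low-pass
    hi = (a[:, 0::2] - a[:, 1::2]) / 2          # horizontal high-pass
    ll = (lo[0::2, :] + lo[1::2, :]) / 2
    lh = (lo[0::2, :] - lo[1::2, :]) / 2
    hl = (hi[0::2, :] + hi[1::2, :]) / 2
    hh = (hi[0::2, :] - hi[1::2, :]) / 2
    return ll, lh, hl, hh

def significance_map(subband, block=4, threshold=8.0):
    """One side-information bit per block x block vector: 1 if the
    vector is significant; insignificant vectors are simply not coded."""
    bits = np.zeros((subband.shape[0] // block, subband.shape[1] // block),
                    dtype=np.uint8)
    for i in range(bits.shape[0]):
        for j in range(bits.shape[1]):
            v = subband[i*block:(i+1)*block, j*block:(j+1)*block]
            bits[i, j] = np.max(np.abs(v)) >= threshold
    return bits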


Journal ArticleDOI
TL;DR: The optically generated joint Fourier transform (JFT) of a test image and a reference image is processed using a new method: the JFT is recorded twice, and strong correlation peaks are obtained, and correlations within the test image are suppressed.
Abstract: The optically generated joint Fourier transform (JFT) of a test image and a reference image is processed using a new method: the JFT is recorded twice. In the second recording the reference image is phase shifted by π with respect to the first recording. The two JFTs are subtracted and binarized with a threshold of zero. Strong correlation peaks are obtained, and correlations within the test image are suppressed. Some results of optical implementation are presented, using a ferroelectric liquid crystal display with 128 × 128 pixels for data input. The phase shift of the reference was implemented by the contrast-inverted reference input on the binary light-modulating device. Processing of the JFT is done by a CCD camera, a frame grabber, and a personal computer.
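
A numerical analogue of the two-exposure scheme, sketched with NumPy FFTs; the side-by-side input geometry and gap width stand in for the optical setup rather than reproduce it.

import numpy as np

def joint_input(test, ref, gap=16):
    """Place test and reference side by side, as on the input device."""
    h, w = test.shape
    plane = np.zeros((h, 2 * w + gap))
    plane[:, :w] = test
    plane[:, w + gap:] = ref
    return plane

def correlation_plane(test, ref):
    """Record the joint power spectrum twice (second time with the
    reference phase-shifted by pi, i.e. sign-flipped), subtract,
    binarize about zero, and transform again: the autocorrelation
    terms cancel and only the cross-correlation peaks survive."""
    jps_plus = np.abs(np.fft.fft2(joint_input(test, ref))) ** 2
    jps_minus = np.abs(np.fft.fft2(joint_input(test, -ref))) ** 2
    binary = np.where(jps_plus - jps_minus > 0.0, 1.0, -1.0)
    return np.abs(np.fft.fft2(binary))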

Proceedings ArticleDOI
31 Jan 1995
TL;DR: This paper presents results from the experience with CANDID (comparison algorithm for navigating digital image databases), which was designed to facilitate image retrieval by content using a query-by-example methodology.
Abstract: This paper presents results from our experience with CANDID (comparison algorithm for navigating digital image databases), which was designed to facilitate image retrieval by content using a query-by-example methodology. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized similarity measure between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to a user-provided example image. Results for three test applications are included.
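
A toy version of the signature matching in Python, assuming a gray-level histogram as the global signature in place of CANDID's texture/shape/color features, and a normalized inner product as the similarity measure.

import numpy as np

def signature(image, bins=64):
    """Global signature: a normalized gray-level histogram standing in
    for the density of CANDID's texture/shape/color feature vectors."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def similarity(sig_a, sig_b):
    """Normalized inner product between two signatures."""
    denom = np.linalg.norm(sig_a) * np.linalg.norm(sig_b)
    return float(np.dot(sig_a, sig_b) / denom) if denom else 0.0

def query_by_example(example, database):
    """Rank database images (a dict of name -> image) by similarity."""
    q = signature(example)
    return sorted(database.items(),
                  key=lambda kv: similarity(q, signature(kv[1])),
                  reverse=True)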

Patent
Toru Kasamatsu
28 Sep 1995
TL;DR: In this paper, an image detection and background processing device and method are presented, having an image reader for reading a whole document and converting the read document to multi-valued digital image data.
Abstract: An image detection and background processing device and method having an image reader for reading a whole document and converting the read document to multi-valued digital image data. The image forming apparatus detects the density of a background in the document based on the digital image data obtained by the image reader, sets a reference value based on the background density as well as the distribution of the digital image data, and distinguishes an image area of the document from the background by comparing the reference value and the image data so as to execute a process concerning the distinguished image area (for example, an automatic magnification selection process or an automatic paper selection process). The apparatus and method include analysis capability for determining whether the background area can be accurately discriminated from the image area based on the image data.
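
An illustrative reconstruction of the background/image-area split in Python, assuming a light background with darker print; the reference-setting rule is a stand-in for the patent's unspecified one.

import numpy as np

def image_area_mask(gray, offset=16):
    """Take the dominant histogram peak as the background density,
    set a reference value below it using the data spread, and mark
    darker pixels as image area."""
    hist, edges = np.histogram(gray, bins=256, range=(0, 256))
    background = edges[np.argmax(hist)]        # most frequent level
    reference = background - offset - 0.25 * gray.std()
    return gray < reference                    # True where image area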

Patent
21 Mar 1995
TL;DR: In this paper, a method of calibrating a color image reproduction system in the field is described, which includes a color input scanner for generating a color digital image, a digital image processor for applying a color transform to the color digital images, and a digital color printer for printing the transformed digital images.
Abstract: A method of calibrating a color image reproduction system in the field is provided. The color image reproduction system includes a color input scanner for generating a color digital image, a digital image processor for applying a color transform to the color digital image to produce a transformed color digital image, and a digital color printer for printing the transformed color digital image. The calibration method includes the steps of: a) providing a set of calibration tools developed on a representative system, the tools including a scanning target, a first reference file recording the response of a scanner to the scanning target in the representative system, an image file for producing a test pattern, and a second reference file recording the response of the scanner in the representative system to the test pattern printed on the printer in the representative system; and b) employing the calibration tools to calibrate the color image reproduction system in the field, by: 1) scanning the scanning target in the color scanner to produce a first test file; 2) employing the first test file and the first reference file, to generate a scanner calibration table; 3) sending the image file to the color printer to produce a second test pattern; 4) scanning the second test pattern in the color scanner and processing the output of the color scanner through the scanner calibration table to produce a second test file; and 5) employing the second test file and the second reference file to generate a printer calibration table.

Book ChapterDOI
01 Jan 1995
TL;DR: This chapter presents a different classification method using archetypes, where the method for generating a set of archetypes is described, and the archetypes are used to classify ranges and domains in an iterated transformation image compression encoder.
Abstract: Determining a good set of transformations that encode an image well is time consuming because for each range an extensive search through the candidate domains is required ([47]). The purpose of classification is to reduce the number of domains that have to be checked in order to find an acceptable covering. References [11] and [44] use a classification scheme based on the idea that by orienting blocks in a canonical form (based on brightness), and then subdividing these primary classes further by the location of strong edges, it should be possible to find good coverings with minimal computation. In fact, this type of classification method performs quite well. In this chapter, a different classification method using archetypes is presented (see [11]). The method for generating a set of archetypes is described, and the archetypes are then used to classify ranges and domains in an iterated transformation image compression encoder. Fidelity versus encoding time data are presented, and compared with a more conventional classification scheme.

01 Jan 1995
TL;DR: It is concluded from this study that it is preferable to use the highest resolution available and to compress the image to the maximum extent allowed given the constraints of accuracy and image file size.
Abstract: The effects of JPEG compression on automated DTM extraction via the approach of feature-based matching are investigated. JPEG lossy compression involves a truncation of higher spatial frequency data, a process which influences the accuracy of computation of grey-value gradients in the feature determination phases of image matching. Resulting accuracy effects on DTM heights obtained via the MATCH-T software system are investigated using a single stereomodel of 1:18,000 scale aerial photography at digital image resolutions of 15, 30, 45 and 60 µm. Heighting errors are computed for a range of compression ratios from 2:1 to about 40:1, illustrating that the impact of compression on DTM accuracy can be significant. It is concluded from this study that it is preferable to use the highest resolution available and to compress the image to the maximum extent allowed given the constraints of accuracy and image file size.

Patent
31 Aug 1995
TL;DR: In this article, an image forming apparatus includes an image bearing member for carrying a toner image, an image forming unit for forming a toner test image on the image bearing member, and a density detecting unit for detecting a density of the toner test image transferred to the transfer material carrying member.
Abstract: An image forming apparatus includes an image bearing member for carrying a toner image; an image forming unit for forming a toner test image on the image bearing member; a transfer material carrying member for carrying a transfer material, wherein the toner test image is transferred onto a transfer material carried on the transfer material carrying member or onto the transfer material carrying member itself; and a density detecting unit for detecting a density of the toner test image transferred to the transfer material carrying member. A transfer intensity is smaller when the toner test image for density detection is transferred onto the transfer material carrying member than when the toner test image is transferred onto the transfer material carried on the transfer material carrying member. The transfer intensity also changes depending on whether the transferred toner test image is the first color toner test image or the second color toner test image, and depending on an ambient condition sensor.

Proceedings ArticleDOI
09 May 1995
TL;DR: The proposed measure is capable of differentiating between blocks not only according to block pixel values but also according to their distribution within the block, which leads to a much better image segmentation and consequently to higher image compression ratios with lower image degradation.
Abstract: This paper is concerned with segmenting light intensity images for the sake of compressing them using lossy compression techniques. Among the most commonly used techniques for image segmentation is quad-tree partitioning. In this technique, block variance based criteria are usually used to measure the smoothness of the segmented blocks and to consequently classify them. Block variance, however, does not consider the pixel value distribution within the block. Instead of using the block variance as a segmentation and classification measure, we propose using the mean squared deviation from the neighboring pixels mean. The proposed measure is capable of differentiating between blocks not only according to block pixel values but also according to their distribution within the block. This leads to a much better image segmentation and consequently to higher image compression ratios with lower image degradation. The results show the superiority of the proposed measure over the block variance measure.
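
A sketch of the proposed measure and the quad-tree split it drives, in Python, assuming square blocks with even dimensions; the split limit is a free parameter.

import numpy as np
from scipy import ndimage

def msd_from_neighbor_mean(block):
    """Mean squared deviation of each pixel from the mean of its eight
    neighbors; unlike plain variance, this reflects how pixel values
    are distributed across the block."""
    a = block.astype(float)
    kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]]) / 8.0
    neighbor_mean = ndimage.convolve(a, kernel, mode='nearest')
    return float(np.mean((a - neighbor_mean) ** 2))

def quadtree(block, limit, min_size=4):
    """Recursively split a block while the smoothness measure exceeds
    `limit`; returns the list of leaf blocks."""
    if min(block.shape) <= min_size or msd_from_neighbor_mean(block) <= limit:
        return [block]
    h, w = block.shape[0] // 2, block.shape[1] // 2
    return (quadtree(block[:h, :w], limit, min_size) +
            quadtree(block[:h, w:], limit, min_size) +
            quadtree(block[h:, :w], limit, min_size) +
            quadtree(block[h:, w:], limit, min_size))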

Journal ArticleDOI
TL;DR: Non-ROC study designs that are highly sensitive to small differences among similar images can be used to select processing algorithms for digital image compression.

Proceedings Article
01 Jan 1995
TL;DR: An automatic method for detecting breast tumors in scintimammograms using Kohonen's "novelty filter" and classifying non-tumor images as "normal" or "diffuse increased uptake" mammograms is proposed.
Abstract: 99mTc-sestamibi scintimammograms provide a powerful non-invasive means for detecting breast cancer at early stages. This paper describes an automatic method for detecting breast tumors in such mammograms. The proposed method not only detects tumors but also classifies non-tumor images as "normal" or "diffuse increased uptake" mammograms. The detection method makes use of Kohonen's "novelty filter". In this technique an orthogonal vector basis is created from a normal set of images. Test images presented to the detection method are described as a linear combination of the images in the vector basis. Assuming that the image basis is representative of normal patterns, then it can be expected that there should be no major differences between a normal test image and its corresponding linear combination image. However, if the test image presents an abnormal pattern, then it is expected that the "abnormalities" will show as the difference between the original test image and the image built from the vector basis. In other words, the existing abnormality cannot be explained by the set of normal images and comes up as a "novelty." An important part of the proposed method is the set of steps taken for standardizing images before they can be used as part of the vector basis. Standardization is the keystone to the success of the proposed method, as the novelty filter is very sensitive to changes in shape and alignment.
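
A minimal sketch of the novelty filter in Python, assuming pre-standardized (aligned, intensity-normalized) images; the orthogonal basis is taken from the SVD of the stacked normal images.

import numpy as np

def novelty_filter(normals, test):
    """Build an orthonormal basis from normal images (U of the SVD),
    project the test image onto it, and return the residual: the part
    of the test image that the normal set cannot explain."""
    X = np.stack([n.ravel().astype(float) for n in normals], axis=1)
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    x = test.ravel().astype(float)
    projection = U @ (U.T @ x)     # best reconstruction from normals
    return (x - projection).reshape(test.shape)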

Patent
21 Sep 1995
TL;DR: In this article, a process for calibrating a laser beam scanning control is described, where a light-sensitive medium is irradiated at predetermined positions by a laser-beam in order to generate a test image, then partial digital images of sections of the test image are generated and the digital partial images are assembled into a total digital image.
Abstract: A process is disclosed for calibrating a laser beam scanning control. A light-sensitive medium (5) is irradiated at predetermined positions by a laser beam (2) in order to generate a test image (20), then partial digital images of sections (21) of the test image (20) are generated and the digital partial images are assembled into a total digital image of the test image (20). The data for correcting the laser beam (2) scanning control (4) are calculated on the basis of a comparison between real positions of the laser beam (2) on the total digital image and predetermined set co-ordinates.

Patent
31 Mar 1995
TL;DR: In this paper, an inspection system and method through which the actual operation of an image processing apparatus can be dealt with satisfactorily, and in which the image processing equipment can be inspected in a short period of time.
Abstract: Disclosed are an inspection system and method through which the actual operation of an image processing apparatus can be dealt with satisfactorily, and in which the image processing apparatus can be inspected in a short period of time. A work station sends part of a control program, which has been stored in a hard disk, to an interface. In accordance with the control program, the interface sets the operating mode and an inspection image processing area of an item under inspection and requests, from the work station, test image data to be inputted to the item under inspection. From plural items of image data that have been stored in the hard disk, the work station loads the image data requested by the interface, as well as reference data corresponding to this image data, to the interface and a reference-data memory. The interface inputs this test image data to the item under inspection and sends the image data outputted by the item under inspection to the work station. The work station compares the image data outputted by the item under inspection with the reference data stored in the reference-data memory and judges whether the item under inspection is acceptable or defective.

Proceedings ArticleDOI
17 Apr 1995
TL;DR: It is suggested that two previously proposed measures of image quality, mean square error (MSE) and normalized nearest neighbor difference (NNND), may lead to erroneous conclusions in evaluations and/or optimizations of image compression algorithms.
Abstract: Image quality associated with image compression has been either arbitrarily evaluated through visual inspection, loosely defined in terms of some subjective criteria such as image sharpness or blockiness, or measured by arbitrary measures such as the mean square error between the uncompressed and compressed image. The present paper psychophysically evaluated the effect of three different compression algorithms (JPEG, full-frame, and wavelet) on human visual detection of computer-simulated low-contrast lesions embedded in real medical image noise from patient coronary angiograms. Performance identifying the signal-present location, as measured by the d' index of detectability, decreased for all three algorithms by approximately 30% and 62% for the 16:1 and 30:1 compression ratios respectively. We evaluated the ability of two previously proposed measures of image quality, mean square error (MSE) and normalized nearest neighbor difference (NNND), to determine the best compression algorithm. The MSE predicted significantly higher image quality for the JPEG algorithm at the 16:1 compression ratio and for both JPEG and full-frame at the 30:1 compression ratio. The NNND predicted significantly higher image quality for the full-frame algorithm at both compression ratios. These findings suggest that these two measures of image quality may lead to erroneous conclusions in evaluations and/or optimizations of image compression algorithms.

Proceedings ArticleDOI
17 Apr 1995
TL;DR: A new finite state VQ (FSVQ) scheme is proposed to make use of the correspondence between the image interblock correlation and the geometrical closeness of the codevectors in the ordered super codebook to significantly reduce the computational complexity at the encoder and preserves the advantages of a simple VQ decoder.
Abstract: The new interframe video coding algorithm is presented using the topological ordering property of a self-organizing vector quantization (VQ). This algorithm utilizes the Kohonen learning algorithm to train a super VQ codebook which transforms the statistical characteristics of the training motion-compensated frame difference video signals into a 2D topologically ordered array. A new finite state VQ (FSVQ) scheme is proposed to make use of the correspondence between the image interblock correlation and the geometrical closeness of the codevectors in the ordered super codebook. A small state codebook is dynamically predicted purely based on the positions of codevectors used to encode the neighboring image blocks in the current frame as well as in the previous frame. Thus, this new FSVQ significantly reduces the computational complexity at the encoder and preserves the advantages of a simple VQ decoder. The experimental results show that the prediction accuracy ranges from 70 to 95%, depending on the moving information in a frame. It achieves an average bit rate of 0.082 bits per pixel with high image quality (37.86 dB) for the standard test image sequence 'Miss America'. This algorithm is amenable to VLSI implementation because of its simple design, low memory requirement, and low computational complexity.

Proceedings ArticleDOI
21 Nov 1995
TL;DR: This paper considers the problem of segmenting 2D objects from intensity fovea images based on learning and applies the Karhunen-Loeve projection to the training set to obtain a set of eigenvectors and construct a space decomposition tree to achieve logarithmic retrieval time complexity.
Abstract: In this paper, we consider the problem of segmenting 2D objects from intensity fovea images based on learning. During the training, we apply the Karhunen-Loeve projection to the training set to obtain a set of eigenvectors and also construct a space decomposition tree to achieve logarithmic retrieval time complexity. The eigenvectors are used to reconstruct the test fovea image. Then we apply a spring network model to the reconstructed image to generate a polygon mask. After applying the mask to the test image, we search the space decomposition tree to find the nearest neighbor to segment the object from background. The system is tested to segment 25 classes of different hand shapes. The experimental results show 97% correct rate for the hands presented in the training (because of the background effect) and 93% correct rate for the hands that have not been used in the training phase.

Proceedings ArticleDOI
09 May 1995
TL;DR: Two new face recognition systems are proposed using auto-associative backpropagation neural network feature extractors on facial regions in conjunction with key facial structure measurements and a third proposed system combines the first two systems using confidence measurements to select a best match.
Abstract: Two new face recognition systems are proposed using auto-associative backpropagation neural network feature extractors on facial regions in conjunction with key facial structure measurements. A third proposed system combines the first two systems using confidence measurements to select a best match. A Cottrell/Fleming face recognition network and a structural face data network are also implemented and evaluated. A training set of 60 images and a test set of 12 images, acquired under uncontrolled conditions, were used to evaluate system performances. The first two proposed systems correctly selected 67 percent of the training images when presented with the test image set. The third proposed system achieved a recognition rate of 75 percent. By comparison, the Cottrell/Fleming network and the structural data network achieved recognition rates of 25 percent and 8 percent, respectively.

Patent
10 Nov 1995
TL;DR: In this paper, the density and color of output images are made uniform when the same original is read, independently of the device reading the original image, by generating a correction value based on an image signal obtained by reading a test image and setting the value in the color correction means corresponding to the image forming means that formed the test image.
Abstract: PURPOSE: To make the density and color of output images uniform when the same original is read, independently of the device reading the original image, by generating a correction value based on an image signal obtained by reading a test image and setting the value in the color correction means corresponding to the image forming means that formed the test image. CONSTITUTION: When a copying machine is calibrated, a pattern generator 1161 outputs a prescribed pattern. Gradation correction devices 1164-1167 are made up of, e.g., an LUT comprising a RAM, to correct an output characteristic of the image output device. A control section 1165 sets a correction parameter of a density correction circuit based on the result of reading a prescribed color test image by the reading section, prints out a color test image output from the pattern generator 1161, and sets the correction values of the gradation correction devices 1164-1167 based on an image signal obtained by reading the printed color test image. Thus, the correction is implemented regardless of which copying machine output the test image.

Patent
Hans-E. Dipl.-Phys. Korth
23 Aug 1995
TL;DR: In this paper, first image data from a transmitter is digitized as a reference image; the data of each following image are digitized and compared with the reference image to determine the image differences, which are then sorted into a priority list according to their perceptual image information.
Abstract: The method includes the steps of digitizing first image data from a transmitter as a reference image. The data of a following image are digitized and compared with the reference image to determine the image differences. The image differences are sorted into a priority list according to their perceptual image information. The image differences are transmitted to a receiver in order of their priority, and the current image at the receiver and transmitter is updated.
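
A sketch of the priority-ordered update stream in Python, with mean absolute block difference standing in for the "perceptual image information" used for sorting; the block size is an assumption.

import numpy as np

def prioritized_block_updates(reference, current, block=8):
    """Score each block of the new frame against the reference and
    emit updates ordered so the largest changes are sent first."""
    updates = []
    h, w = reference.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            r = reference[y:y+block, x:x+block].astype(float)
            c = current[y:y+block, x:x+block].astype(float)
            score = float(np.abs(c - r).mean())   # salience proxy
            if score > 0.0:
                updates.append((score, (y, x), c))
    updates.sort(key=lambda u: u[0], reverse=True)
    return updates    # transmit in this order; receiver patches its copy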

11 May 1995
TL;DR: If LaRC continues the research effort begun this summer, it may be one of the first organizations to develop an integrated approach to imaging and could serve as a model for other organizations in government and the private sector.
Abstract: An electronic photography facility has been established in the Imaging & Photographic Technology Section, Visual Imaging Branch, at the NASA Langley Research Center (LaRC). The purpose of this facility is to provide the LaRC community with access to digital imaging technology. In particular, capabilities have been established for image scanning, direct image capture, optimized image processing for storage, image enhancement, and optimized device dependent image processing for output. Unique approaches include: evaluation and extraction of the entire film information content through scanning; standardization of image file tone reproduction characteristics for optimal bit utilization and viewing; education of digital imaging personnel on the effects of sampling and quantization to minimize image processing related information loss; investigation of the use of small kernel optimal filters for image restoration; characterization of a large array of output devices and development of image processing protocols for standardized output. Currently, the laboratory has a large collection of digital image files which contain essentially all the information present on the original films. These files are stored at 8-bits per color, but the initial image processing was done at higher bit depths and/or resolutions so that the full 8-bits are used in the stored files. The tone reproduction of these files has also been optimized so the available levels are distributed according to visual perceptibility. Look up tables are available which modify these files for standardized output on various devices, although color reproduction has been allowed to float to some extent to allow for full utilization of output device gamut.

Patent
24 Jul 1995
TL;DR: In this paper, a method and apparatus for recognizing meandering of a web are presented, in which all pixel addresses and pixel data of a printed matter serving as a reference are received and it is determined whether each pixel represents a region in which an abrupt change in density occurs, i.e., an edge.
Abstract: According to a method and apparatus for recognizing meandering of a web, all pixel addresses and all pixel data of a printed matter serving as a reference are received, and it is determined whether each pixel represents a region in which an abrupt change in density occurs, i.e., an edge. The pixel address and pixel data of each pixel determined to be an edge are stored as the reference image data. A difference value is calculated between the pixel data of the test image data, obtained from the web on which a test object is printed, and the pixel data of the reference data, with the pixel addresses of the reference data and the corresponding pixel addresses of the test image data placed in one-to-one correspondence. A meandering state of the web is recognized on the basis of each difference value.