
Showing papers on "Subpixel rendering published in 1996"


Journal ArticleDOI
TL;DR: A novel observation model based on motion compensated subsampling is proposed for a video sequence and Bayesian restoration with a discontinuity-preserving prior image model is used to extract a high-resolution video still given a short low-resolution sequence.
Abstract: The human visual system appears to be capable of temporally integrating information in a video sequence in such a way that the perceived spatial resolution of a sequence appears much higher than the spatial resolution of an individual frame. While the mechanisms in the human visual system that do this are unknown, the effect is not too surprising given that temporally adjacent frames in a video sequence contain slightly different, but unique, information. This paper addresses the use of both the spatial and temporal information present in a short image sequence to create a single high-resolution video frame. A novel observation model based on motion compensated subsampling is proposed for a video sequence. Since the reconstruction problem is ill-posed, Bayesian restoration with a discontinuity-preserving prior image model is used to extract a high-resolution video still given a short low-resolution sequence. Estimates computed from a low-resolution image sequence containing a subpixel camera pan show dramatic visual and quantitative improvements over bilinear, cubic B-spline, and Bayesian single frame interpolations. Visual and quantitative improvements are also shown for an image sequence containing objects moving with independent trajectories. Finally, the video frame extraction algorithm is used for the motion-compensated scan conversion of interlaced video data, with a visual comparison to the resolution enhancement obtained from progressively scanned frames.
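The motion-compensated subsampling idea can be illustrated with a toy forward model: each low-resolution frame is a shifted copy of the unknown high-resolution image, block-averaged down to sensor resolution. This is a minimal numpy sketch under strong assumptions (integer shifts on the high-resolution grid, pure box-filter subsampling, no noise); the function name is ours, not the paper's.

```python
import numpy as np

def observe(hr, dy, dx, factor):
    """Toy observation model: shift the high-resolution image by
    (dy, dx) HR pixels, then box-average factor x factor blocks down
    to the sensor resolution. Shifts that are not multiples of
    `factor` act as subpixel shifts at the low-resolution scale."""
    shifted = np.roll(np.roll(hr, dy, axis=0), dx, axis=1)
    h = shifted.shape[0] - shifted.shape[0] % factor
    w = shifted.shape[1] - shifted.shape[1] % factor
    return (shifted[:h, :w]
            .reshape(h // factor, factor, w // factor, factor)
            .mean(axis=(1, 3)))
```

Recovering the high-resolution image from a stack of such observations is the ill-posed inverse problem that motivates the Bayesian prior described above.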

1,058 citations


Book ChapterDOI
01 Jan 1996
TL;DR: This paper compares the suitability and efficacy of five algorithms for determining the peak position of a line or light stripe to subpixel accuracy in terms of accuracy, robustness and computational speed.
Abstract: This paper compares the suitability and efficacy of five algorithms for determining the peak position of a line or light stripe to subpixel accuracy. The algorithms are compared in terms of accuracy, robustness and computational speed. In addition to empirical testing, a theoretical comparison is also presented to provide a framework for analysis of the empirical results.
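The abstract does not name the five algorithms, but a standard member of this family is the three-point parabolic fit, which refines the integer peak location using the two neighbouring samples. A minimal sketch (the function name is an assumption):

```python
import numpy as np

def parabolic_peak(profile):
    """Subpixel peak of a 1-D intensity profile: take the maximum
    sample and fit a parabola through it and its two neighbours."""
    i = int(np.argmax(profile))
    if i == 0 or i == len(profile) - 1:
        return float(i)               # border peak: nothing to fit
    ym, y0, yp = profile[i - 1], profile[i], profile[i + 1]
    return i + 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)
```

For a Gaussian line profile, applying the same formula to the logarithm of the (positive) intensities makes the estimator exact rather than approximate, which is why Gaussian and parabolic estimators are usually compared side by side.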

191 citations


Book ChapterDOI
01 Dec 1996
TL;DR: Here, images are precomputed for twigs and branches at various levels in the hierarchical structure of a tree, and adaptively combined depending on the position of the new viewpoint.
Abstract: Chen and Williams [2] show how precomputed z-buffer images from different fixed viewing positions can be reprojected to produce an image for a new viewpoint. Here, images are precomputed for twigs and branches at various levels in the hierarchical structure of a tree, and adaptively combined depending on the position of the new viewpoint. The precomputed images contain multiple z levels to avoid missing pixels in the reconstruction, subpixel masks for antialiasing, and colors and normals for shading after reprojection.

123 citations


Proceedings ArticleDOI
18 Jun 1996
TL;DR: It is shown that the cross power spectrum of two images, containing subpixel shifts, is a polyphase decomposition of a Dirac delta function.
Abstract: A method of registering images at subpixel accuracy has been proposed, which does not resort to interpolation. The method is based on the phase correlation method and is remarkably robust to correlated noise and uniform variations of luminance. We have shown that the cross power spectrum of two images, containing subpixel shifts, is a polyphase decomposition of a Dirac delta function. By estimating the sum of polyphase components one can then determine sub-pixel shifts along each axis.
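The integer-shift core of phase correlation is easy to sketch with numpy; the paper's contribution, estimating subpixel shifts from the polyphase structure of the delta function, is not reproduced here. Names are ours:

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer shift taking image `a` to image `b` from
    the normalized cross power spectrum, whose inverse transform is
    ideally a Dirac delta at the displacement."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cps = np.conj(A) * B
    cps /= np.maximum(np.abs(cps), 1e-12)      # keep phase only
    corr = np.real(np.fft.ifft2(cps))          # ideally a delta function
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks in the upper half of each axis to negative shifts
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))
```

Normalizing away the magnitude is what makes the method robust to uniform luminance changes: only the phase difference, which encodes the displacement, survives.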

115 citations


Patent
28 Oct 1996
TL;DR: In this paper, a plurality of position sensitive detector elements arranged to receive an image are used to determine the total light intensity within a pixel and the centroid of light intensity in a subpixel.
Abstract: An image detection and pixel processing system includes a plurality of position sensitive detector elements arranged to receive an image. Each position sensitive detector element provides information for determining both a total light intensity value within the position sensitive detector element and a centroid of light intensity indicative of light intensity position within the position sensitive detector element. An image processing assembly receives information from the plurality of position detector elements with the image processing assembly relating a pixel and its encompassed subpixel area to each corresponding position detector element. The total light intensity within the pixel and the centroid of light intensity within the subpixel is determined, with the image processing assembly rendering each subpixel area as an edge when magnitude of the centroid of light intensity is large.

88 citations


Journal ArticleDOI
TL;DR: In this article, a feedforward neural network model based on the multilayer perceptron structure and trained using the backpropagation algorithm responds to subpixel class composition in both simulated and real data.

73 citations


Patent
17 Dec 1996
TL;DR: In this paper, a digital micromirror device has an array of mirrors that correspond in number to the number of subpixels of the dielectric filter each of the mirrors provide "on" and "off" positions for selectively transmitting a desired color of light from the mirror to an image screen.
Abstract: A projection apparatus provides a projection engine that supplies polarized light to a dielectric filter (or diffraction grating). The dielectric filter provides an array of pixels that each pass selected colors of light and reflect other colors. Each pixel is subpixelated (for example, into red, green, and blue subpixels) so that a single subpixel passes a selected color (for example, red) and reflects the other colors (for example, green and blue). A digital micromirror device has an array of mirrors equal in number to the subpixels of the dielectric filter. Each mirror provides "on" and "off" positions for selectively transmitting a desired color of light from the mirror to an image screen. The image screen receives light reflected by selected mirrors of the micromirror array when those mirrors are in the "on" position. The micromirror device can be controlled with a computer, television signal, video signal, or the like.

61 citations


Journal ArticleDOI
Lawrence O'Gorman1
TL;DR: The main conclusion of this paper is that, to achieve better precision, measurement of a straight-edged region should be made at an angle askew to the sampling axis and this should be at a certain length that is a function of this skew angle.
Abstract: The precision by which a region is located or measured on the image plane is limited by the sampling density. In this paper, the worst-case precision errors are determined for calculating the average image location of an edge, line, and straight-edged region. For each case, it is shown how the worst-case error can be minimized as a function of the geometric parameters. These results can be used to determine the worst case error by which the location of a known shape is measured. Another application is to design shapes for use in registration, such as fiducial marks used in electronic assembly. The main conclusion of this paper is that, to achieve better precision, measurement of a straight-edged region should be made at an angle askew to the sampling axis (not 0, 45, or 90 degrees) and this should be at a certain length that is a function of this skew angle.

53 citations


Patent
28 Oct 1996
TL;DR: In this article, a pixel and its encompassed subpixel area are associated with a plurality of macrodetectors, and the image processing assembly is capable of rendering each subpixel as an edge when magnitude of the centroid of light intensity is greater than a predetermined threshold.
Abstract (of EP0771102): An image detection and pixel processing system (10) includes a plurality of detector elements (22) for receiving an image. The detector elements are subdivided into a plurality of macrodetectors, with each macrodetector constituting four or more detector elements, and with each macrodetector providing information for determining both a total light intensity value within the macrodetector and a centroid of light intensity indicative of light intensity position within the macrodetector. An image processing assembly (30) receives information from the plurality of macrodetectors, with the image processing assembly relating a pixel and its encompassed subpixel area to each corresponding macrodetector, and further determining the total light intensity within the pixel and the centroid of light intensity within the subpixel. The image processing assembly is capable of rendering each subpixel area as an edge when the magnitude of the centroid of light intensity is greater than a predetermined threshold.

52 citations


Journal ArticleDOI
Xiaoming Li1, C.A. Gonzales1
TL;DR: This locally quadratic functional model decomposes the motion estimation optimization at subpixel resolutions into a two-stage pipelinable process: full search at full-pixel resolution and interpolation at any subpixel resolution.
Abstract: Accurate motion estimation is essential to effective motion-compensated video signal processing, and subpixel resolutions are required for high-quality applications. It is observed that around the optimum point of the motion estimation process the error criterion function is well modeled as a quadratic function of the motion vector offsets. This locally quadratic functional model decomposes the motion estimation optimization at subpixel resolutions into a two-stage pipelinable process: full search at full-pixel resolution and interpolation at any subpixel resolution. Practical approximation formulas lead to explicit computation of both motion vectors and error criterion values at subpixel resolutions.
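The two-stage scheme can be sketched for one common special case: fit a 1-D parabola along each axis through the error values around the best full-pixel match. This is a separable simplification of the locally quadratic model, not the paper's exact formulas; names are ours:

```python
import numpy as np

def subpel_offset(err):
    """Given a 3x3 window of error-criterion values centred on the best
    full-pixel motion vector, return a (dy, dx) subpixel refinement by
    fitting a 1-D parabola along each axis."""
    def fit(m, c, p):              # minimum of parabola through 3 samples
        denom = m - 2.0 * c + p
        return 0.0 if denom == 0 else 0.5 * (m - p) / denom
    return (fit(err[0, 1], err[1, 1], err[2, 1]),
            fit(err[1, 0], err[1, 1], err[1, 2]))
```

Because the fit reuses error values the full search has already computed, the refinement adds almost nothing to the pipeline, which is the point of the two-stage decomposition.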

49 citations


Journal ArticleDOI
TL;DR: It was found that for this tracking application, extremely accurate subpixel stabilization was a requirement for proper operation and in this application, Algorithm 3 performed significantly better than the other two algorithms.
Abstract: This paper compares three image stabilization algorithms when used as preprocessors for a target tracking application. These algorithms vary in computational complexity, accuracy, and ability. Algorithm 1 is capable of only pixel-level realignment of imagery, while Algorithms 2 and 3 are capable of full subpixel stabilization with respect to translation, rotation, and scale. The algorithms are evaluated on their performance in the stabilization of one synthetic forward looking infrared (FLIR) data set and two real FLIR imagery data sets. The evaluation tools incorporated include mean absolute error of the output data set and the overall performance of an automatic target acquisition system (developed at the Army Research Laboratory) that uses the algorithms as a front end preprocessor. We found that for this tracking application, extremely accurate subpixel stabilization was a requirement for proper operation. We also found that in this application, Algorithm 3 performed significantly better than the other two algorithms.

Proceedings ArticleDOI
18 Jun 1996
TL;DR: While reconstruction based on edge brightness and contrast alone introduces significant artifact, restitution of the local blur signal is shown to produce perceptually accurate reconstructions and it is shown that local scale control allows excellent precision even for highly blurred edges.
Abstract: We have recently proposed a scale-adaptive algorithm for reliable edge detection and blur estimation. The algorithm produces a contour code which consists of estimates of position, brightness, contrast and blur for each edge point in the image. Here we address two questions: 1. Can scale adaptation be used to achieve precise localization of blurred edges? 2. How much of the perceptual content of an image is carried by the 1-D contour code? We report an efficient algorithm for subpixel localization, and show that local scale control allows excellent precision even for highly blurred edges. We further show how local scale control can quantitatively account for human visual acuity of blurred edge stimuli. To address the question of perceptual content, we report an algorithm for inverting the contour code to reconstruct an estimate of the original image. While reconstruction based on edge brightness and contrast alone introduces significant artifact, restitution of the local blur signal is shown to produce perceptually accurate reconstructions.

Journal ArticleDOI
TL;DR: The authors present a new approach to automated optical inspection (AOI) of circular features that combines image fusion with subpixel edge detection and parameter estimation that shows greatest improvement over traditional methods in cases where the feature size is small relative to the resolution of the imaging device.
Abstract: The authors present a new approach to automated optical inspection (AOI) of circular features that combines image fusion with subpixel edge detection and parameter estimation. In their method, several digital images are taken of each part as it moves past a camera, creating an image sequence. These images are fused to produce a high-resolution image of the features to be inspected. Subpixel edge detection is performed on the high-resolution image, producing a set of data points that is used for ellipse parameter estimation. The fitted ellipses are then back-projected into 3-space in order to obtain the sizes of the circular features being inspected, assuming that the depth is known. The method is accurate, efficient, and easily implemented. The authors present experimental results for real intensity images of circular features of varying sizes. Their results demonstrate that their algorithm shows greatest improvement over traditional methods in cases where the feature size is small relative to the resolution of the imaging device.
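As an illustration of the parameter-estimation step, the following fits a circle (rather than the ellipse the authors use) to subpixel edge points by algebraic least squares, the classic Kasa fit; the function name is ours:

```python
import numpy as np

def fit_circle(pts):
    """Kasa algebraic least-squares circle fit: solve x^2 + y^2 =
    a*x + b*y + c in the least-squares sense, then the centre is
    (a/2, b/2) and r^2 = c + cx^2 + cy^2."""
    pts = np.asarray(pts, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    sol, *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    cx, cy = sol[0] / 2.0, sol[1] / 2.0
    return cx, cy, np.sqrt(sol[2] + cx**2 + cy**2)
```

The linear formulation makes the fit a single `lstsq` call; a full ellipse fit adds more unknowns but follows the same algebraic pattern.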

Patent
16 Feb 1996
TL;DR: In this paper, an image-forming device consisting of a pixel clock generator and a clock period corrector was proposed to prevent the interference between the correction period of main scanning magnification variation and a screen period effectively.
Abstract: PROBLEM TO BE SOLVED: To provide an image-forming device capable of effectively preventing interference between the correction period of main-scanning magnification variation and the screen period. SOLUTION: The image-forming device forms a latent image by exposure-scanning the surface of a photoreceptor in the main scanning direction with modulated light beams. It comprises a subpixel clock generator 32 that generates a subpixel clock with a fixed period, a pixel clock generator 33 that generates the pixel clock prescribing the pixel width in the main scanning direction, and a clock period corrector 36 that corrects the period of the pixel clock generated by the pixel clock generator 33. The pixel clock generator 33 assembles each pixel clock from a plurality of subpixel clocks generated by the subpixel clock generator 32. To correct the pixel clock period, the clock period corrector 36 increases or decreases, by a prescribed number, the number of subpixel clocks used to generate a pixel clock relative to a preset reference number, and this prescribed number can be changed arbitrarily. COPYRIGHT: (C) 2007, JPO & INPIT

Journal ArticleDOI
TL;DR: This article provides a global registration method that is robust in the presence of noise and local distortions between pairs of images, using a two-stage approach, comprising an optional Fourier phase-matching method to carry out preregistration, followed by an iterative procedure.

Patent
05 Aug 1996
TL;DR: In this article, an antialiasing system is implemented in a graphics system of a computer, which is configured to receive from steppers (edge and span) new color values and new depth dimensions at a plurality of subpixel locations.
Abstract: An antialiasing system is implemented in a graphics system of a computer. A memory control is associated with the graphics system for controlling a frame buffer. The antialiasing system is situated in the memory control and is configured to receive from steppers (edge and span) new color values and new depth dimensions z at a plurality of subpixel locations. In turn, the antialiasing system analyzes color data pertaining to each pixel in the frame buffer, and if necessary, updates the color data. The color data is unique and minimizes memory requirements and accesses. Specifically, the color data includes a current display value that corresponds to the pixel, a reference color value that corresponds to one subpixel location, a reference depth dimension that corresponds with the one subpixel location, and reconstruction indicia that correspond with other subpixel locations and that can be utilized to derive respective depth dimensions and colors for the other subpixel locations. The reconstruction indicia include a hint and a depth dimension difference. The hint indicates color information, and the depth dimension difference represents a difference between the reference depth dimension and a respective depth dimension.

Patent
16 Sep 1996
TL;DR: In this article, a single polarization state of light is transmitted from the backlighting structure to section of the LCD panel where both spatial intensity and spectral filtering of the transmitted polarized light simultaneously occurs on a subpixel basis.
Abstract: An LCD panel employing a novel scheme of systemic light recycling. A single polarization state of light is transmitted from the backlighting structure to a section of the LCD panel, where both spatial intensity and spectral filtering of the transmitted polarized light occur simultaneously on a subpixel basis. At each subpixel location, spectral bands of light not transmitted to the display surface during spectral filtering are reflected without absorption back along the projection axis into the backlighting structure. At a subcomponent level within the LCD panel, spectral components of transmitted polarized light not used at any particular subpixel structure location are effectively reflected, either directly or indirectly, back into the backlighting structure.

Proceedings ArticleDOI
07 May 1996
TL;DR: DCT-based techniques to estimate subpel motion at different desired levels of accuracy in the DCT domain without interpolation are developed, enabling simplification of the heavily loaded feedback loop of conventional hybrid video coder design, resulting in a high-throughput, low-complexity fully DCT- based coder.
Abstract: Inter-pixel interpolation is required for most subpixel motion estimation schemes but it undesirably increases the overall complexity and data flow and deteriorates the estimation accuracy. We develop DCT-based techniques to estimate subpel motion at different desired levels of accuracy in the DCT domain without interpolation by establishing subpel sinusoidal orthogonal principles and showing that subpixel motion information is preserved in the DCT of a shifted signal under some conditions in the form of pseudo phases. Though applicable to other areas, the resulting algorithms from these techniques for video coding are flexible and scalable with very low complexity O(N/sup 2/) compared to O(N/sup 4/) for block matching methods. Importantly, DCT-based methods enable simplification of the heavily loaded feedback loop of conventional hybrid video coder design, resulting in a high-throughput, low-complexity fully DCT-based coder. Finally, simulation results show a comparable performance of the proposed algorithms with block matching methods.

Journal ArticleDOI
TL;DR: In this paper, the authors presented a detector with high spatial resolution that uses an intensified CCD to observe the light output of a thin scintillator, which can be chosen to be sensitive either to neutrons or X-rays.
Abstract: We present a detector with high spatial resolution that uses an intensified CCD to observe the light output of a thin scintillator, which can be chosen to be sensitive either to neutrons or X-rays. The CCD is read out at high pixel rates and the resulting video signal is analyzed in real-time to obtain the coordinates of the centers of the scintillation events. A fast centroiding algorithm based on digital signal processing techniques results in a resolution of 60 μm. More sophisticated algorithms have been tested to yield subpixel resolution (40μm). Count rates of about 50 000 neutrons per second on a sensitive area of 25 mm diameter have been achieved.
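The fast centroiding step amounts to a centre-of-mass computation over a small window around each scintillation event. A minimal sketch (ignoring the background subtraction and thresholding a real detector pipeline would need; the function name is ours):

```python
import numpy as np

def event_centroid(window):
    """Centre-of-mass (row, column) centroid of an event window; the
    detector's real-time centroiding reduces to this weighted average."""
    w = np.asarray(window, dtype=float)
    ys, xs = np.mgrid[0:w.shape[0], 0:w.shape[1]]
    total = w.sum()
    return (ys * w).sum() / total, (xs * w).sum() / total
```

Because the centroid is a continuous quantity even though the samples are discrete, it naturally yields subpixel coordinates, which is how the detector reaches 40 μm on a coarser pixel grid.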

Proceedings ArticleDOI
16 Sep 1996
TL;DR: It is shown that for the ARL ATA application, extremely accurate subpixel stabilization is a requirement for proper operation and it is concluded that algorithm 3 performs significantly better than the other two algorithms.
Abstract: This paper considers the characterization of three different image stabilization algorithms when used as a preprocessor for a computer vision application. These algorithms vary in computational complexity, accuracy, and performance. The first algorithm (developed by the Army Research Laboratory (ARL)) is capable of image alignment to an accuracy of one pixel. Algorithms 2 and 3 (developed by the University of Maryland) are capable of full subpixel stabilization with respect to translation, rotation, and scale. The evaluation tools incorporated include mean square error of the output data set and the overall performance of an automatic target acquisition (ATA) system (developed at ARL) that uses the algorithms as a front-end preprocessor. We show that for the ARL ATA application, extremely accurate subpixel stabilization is a requirement for proper operation. Based on experiments, we conclude that algorithm 3 performs significantly better than the other two algorithms.

Patent
24 Sep 1996
TL;DR: In this article, a histogram of the image is generated and the actual background and black reference values are determined, and a gray level value representing a pixel is received and interpolated to generate subpixel gray level values which correspond to a second resolution.
Abstract: A method and system implements a high addressability characteristic into an error diffusion process. A histogram of the image is generated and the actual background and black reference values are determined. A gray level value representing a pixel is received. The gray level value has a first resolution which corresponds to an original input resolution. The gray level value is interpolated to generate subpixel gray level values which correspond to a second resolution. The second resolution is higher than the first resolution and corresponds to the high addressability characteristic. A threshold circuit thresholds the interpolated gray level value and generates an error value as a result of the threshold using the determined background and black reference values. The error value has a resolution corresponding to the first resolution. A portion of the error value is diffused to adjacent pixels on a next scanline.
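A highly simplified 1-D sketch of the idea: interpolate each input pixel into `factor` subpixel values, threshold each subpixel, and carry the pixel-resolution error forward. All names and the forward-only diffusion are our assumptions; the patent diffuses a portion of the error to adjacent pixels on the next scanline as well:

```python
def diffuse_row(row, factor, black=0.0, white=255.0):
    """Threshold one scanline with a high-addressability `factor`:
    each gray pixel becomes `factor` interpolated subpixel values,
    each subpixel is thresholded, and the residual error (kept at the
    original pixel resolution) is carried to the next pixel."""
    out, err = [], 0.0
    mid = (black + white) / 2.0
    for i, v in enumerate(row):
        nxt = row[i + 1] if i + 1 < len(row) else v
        bits = [1 if v + (nxt - v) * s / factor + err >= mid else 0
                for s in range(factor)]
        rendered = black + (white - black) * sum(bits) / factor
        err = (v + err) - rendered     # error diffused forward
        out.append(bits)
    return out
```

Keeping the error at the first (input) resolution while thresholding at the second (subpixel) resolution is the key structural feature the abstract describes.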

Journal ArticleDOI
TL;DR: Canon's first commercial FLC display as discussed by the authors is a colour panel with a 15″ (38 cm) diagonal and a resolution of 1240 × 1024 picture elements; each element (230 μm × 230 μm) can produce 16 different colours owing to its subdivision into four parts.
Abstract: Having presented a range of powerful FLC display prototypes (among them a 24″ monochrome, a 21″ colour and several different 15″ screens), Canon Inc. in Tokyo is now manufacturing their first commercial FLC product. It is a colour panel with a 15″ (38 cm) diagonal and a resolution of 1240 × 1024 picture elements. Each such element (230 μm × 230 μm) can produce 16 different colours owing to its subdivision into four parts. When writing a picture, a large number of hues can be simulated (32000 or 26000 are stated for the two versions marketed) by a so-called error diffusion technique. In general this gives a very good rendition of colour images, but in certain cases the differently coloured single dots, which can be seen when the observer is very close to the screen, may be disturbing. The origin of this inconvenience is of course the fact that each subpixel has only two states; it cannot produce a continuous grey scale.

Proceedings ArticleDOI
06 Nov 1996
TL;DR: In this paper, the authors describe an on-going study effort whose objective is to demonstrate the unique attributes and added contributions of hyperspectral data in comparison with simulated multi-spectral data sets for detection, discrimination, material identification, functional identification and abundance estimation problems.
Abstract: Land cover and land use classification, and area estimation of spatially resolved objects have been successfully derived from remotely sensed imagery such as Landsat multispectral data for over 20 years. For subpixel objects, multispectral instruments may not provide sufficient distinct spectral information to reliably decompose a pixel into substances which make up that pixel. Hyperspectral imaging instruments with their large number of registered spectral bands may provide a capability to produce improved detection and classification of spatially resolved objects. These instruments are also designed to allow more reliable spectral decomposition of a pixel into pure substances, therefore permitting subpixel target material detection and abundance estimation. This paper describes an on-going study effort whose objective is to demonstrate the unique attributes and added contributions of hyperspectral data in comparison with simulated multispectral data sets for detection, discrimination, material identification, functional identification and abundance estimation problems. Methodologies used to perform these nonliteral exploitation tasks are described in this paper. Also presented in this paper are results obtained from applying these exploitation techniques to the Hyperspectral Digital Imagery Collection Experiment (HYDICE) sensor data and the simulated multispectral data sets for small to subpixel targets against a desert background. Performance comparison is made in terms of detection success rate, false alarms, and the number of correctly identified targets. These performance measures are also presented in this paper. 
This study is currently being extended to data collected by the HYDICE sensor in other types of background environment, such as forests, to allow an assessment of the effect of complex backgrounds on the relative utility of hyperspectral and multispectral data for key exploitation tasks. © (1996) SPIE, The International Society for Optical Engineering.
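The spectral decomposition of a pixel into pure substances reduces, in its simplest linear-mixture form, to a least-squares problem. A minimal sketch (unconstrained; a production unmixer would add nonnegativity and sum-to-one constraints, and all names are ours):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Estimate subpixel abundances by modelling the pixel spectrum as
    a linear mixture of endmember spectra and solving least squares."""
    E = np.asarray(endmembers, dtype=float).T    # bands x materials
    a, *_ = np.linalg.lstsq(E, np.asarray(pixel, dtype=float),
                            rcond=None)
    return a
```

The reason hyperspectral data helps here is dimensional: with hundreds of bands the system is strongly overdetermined, so abundance estimates are far better conditioned than with a handful of multispectral bands.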

Proceedings ArticleDOI
25 Aug 1996
TL;DR: A new approach combines a genetic algorithm with subpixel accuracy to improve detection accuracy and convergence speed, and an algorithm based on a standardized cost function and the similarity between instances enables simultaneous detection of multiple shapes.
Abstract: Detecting specific shapes in an image is an important problem in computer vision. A minimal subset is the smallest number of points (pixels) necessary to define a unique instance of a geometric primitive. Genetic algorithms have been studied for extracting certain types of geometric primitives; however, existing methods fall short in detection accuracy, convergence speed, and simultaneous detection of multiple shapes. In this paper, we propose a new approach that improves detection accuracy and convergence speed for geometric shapes by combining a genetic algorithm with subpixel accuracy (GA&SA). We also present an algorithm that implements simultaneous detection of multiple shapes, based on a standardized cost function and the similarity between instances, taking advantage of the genetic algorithm's "population search". We have confirmed the practical usefulness of these methods through experiments.

Proceedings ArticleDOI
TL;DR: The applied analysis spectral analytical process (AASAP) as discussed by the authors is a suite of algorithms that perform environmental correction, signature derivation, and subpixel classification, and it has been used to detect stands of Loblolly Pine in a landsat TM scene that contained a variety of species of southern yellow pine.
Abstract: An effective process for the automatic classification of subpixel materials in multispectral imagery has been developed. The applied analysis spectral analytical process (AASAP) isolates the contribution of specific materials of interest (MOI) within mixed pixels. AASAP consists of a suite of algorithms that perform environmental correction, signature derivation, and subpixel classification. Atmospheric and sun angle correction factors are extracted directly from imagery, allowing signatures produced from a given image to be applied to other images. AASAP signature derivation extracts a component of the pixel spectra that is most common to the training set to produce a signature spectrum and nonparametric feature space. The subpixel classifier applies a background estimation technique to a given pixel under test to produce a residual. A detection occurs when the residual falls within the signature feature space. AASAP was employed to detect stands of Loblolly Pine in a Landsat TM scene that contained a variety of species of southern yellow pine. An independent field evaluation indicated that 85% of the detections contained over 20% Loblolly, and that 91% of the known Loblolly stands were detected. For another application, a crop signature derived from a scene in Texas detected occurrences of the same crop in scenes from Kansas and Mexico. AASAP has also been used to locate subpixel occurrences of soil contamination, wetlands species, and lines of communication.

Proceedings ArticleDOI
08 Mar 1996
TL;DR: In this article, a semi-automated building assessment method (SABAM) was proposed for estimating building edges with sub-pixel accuracy, which is based on an earlier manual point method which determined building height using shadow length analysis.
Abstract: This paper describes a semi-automated building assessment method (SABAM) for estimating building edges with sub-pixel accuracy. The semi-automated approach is based on an earlier manual point method which determined building height using shadow length analysis. The manual method was then semi-automated using a sub-pixel edge detection algorithm to obtain more precise building edges and reduce human interpretation. Edge locations have been evaluated to within 1/100th of a pixel using gradient descent.

Patent
19 Jun 1996
TL;DR: In this article, a system and method for processing image data converts a pixel of image data having a first resolution to a plurality of subpixels, the plurality of subpixels representing a second resolution, the second resolution being higher than the first resolution.
Abstract: A system and method for processing image data converts a pixel of image data having a first resolution to a plurality of subpixels, the plurality of subpixels representing a second resolution, the second resolution being higher than the first resolution. The plurality of subpixels are thresholded to generate a group of subpixel values for each pixel and a threshold error value. It is then determined whether the group of subpixel values from the thresholding process produces a pattern containing an isolated subpixel. If so, the group of subpixel values is modified to produce a pattern without an isolated subpixel. The modification process produces a subpixel error value which is diffused in the slow-scan direction to adjacent pixels.

Proceedings ArticleDOI
TL;DR: In this article, the effect of the shape and size of the sensing element, the optics, and the viewed object on the performance of an IR sensor system is analyzed for small unresolved or partially resolved objects.
Abstract: For small unresolved or partially resolved objects, the prediction of the performance of an infrared (IR) sensor system depends not only on the optics and the atmosphere, but also on the geometry of the sensing elements within the detector. The apparent scintillation caused by subpixel displacements of the detector from the optimum position is investigated. The effect is linked to the system point spread function (PSF) as well as the shape and size of the object. The effects of the shape and size of the sensing element, the optics, and the viewed object are simulated. The simulation also evaluates the effect that vibrations or pointing accuracy have on the performance of the sensor system for different types of targets. Results are presented that show that the variation in contrast of objects due to the shape and size of the active portion of a detector and the position of the target projection on the detector can be significant, and should be taken into account when estimating system performance. Although these effects are most significant in wide field of view surveillance systems, narrow field of view systems are also described and evaluated. © 1997 Society of Photo-Optical Instrumentation Engineers. Subject terms: scintillation; unresolved; subpixel; fill factor; contrast.
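The fill-factor scintillation effect can be demonstrated with a toy 1-D model: a point target blurred by a Gaussian PSF, integrated only over the active fraction of one detector pitch, at varying subpixel phase. All parameters (50% fill, 0.6-pixel PSF width, the function name) are illustrative assumptions, not the paper's simulation:

```python
import numpy as np

def detector_response(phase, fill=0.5, sigma=0.6, pitch=1.0, oversample=201):
    """Signal one detector element collects from a point target whose
    blurred image (Gaussian PSF, std 'sigma' in pixel units) is centred
    'phase' pixels from the element centre. Only the active fraction
    'fill' of the pitch integrates light, so the response drops as the
    target drifts toward the dead zone between elements.
    """
    half = fill * pitch / 2.0
    x = np.linspace(-half, half, oversample)       # active area samples
    psf = np.exp(-((x - phase) ** 2) / (2.0 * sigma ** 2))
    return float(psf.mean() * 2.0 * half)          # integrate over the area

# Contrast swings as the target drifts by a fraction of a pixel:
signals = [detector_response(p) for p in np.linspace(0.0, 0.5, 6)]
```

A plot of `signals` against phase shows the apparent scintillation: identical targets read differently depending purely on where they land within a pixel.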

Journal ArticleDOI
01 Aug 1996
TL;DR: In this paper, the authors proposed an image-processing-based approach towards the development of a super-high-resolution image acquisition system, aiming at a particular class of application where a user indicates a region of interest on an observed image in advance, and applies a prototypal temporal integration imaging method.
Abstract: The authors propose an image-processing-based approach towards the development of a super-high-resolution image acquisition system. Imaging methods based on this approach fall into two main categories: spatial integration imaging and temporal integration imaging. With regard to spatial integration imaging, the authors have recently presented a method for acquiring an improved-resolution image by integrating multiple images taken simultaneously with multiple different cameras. Here they extend that work to a particular class of application in which a user indicates a region of interest (ROI) on an observed image in advance, and apply a prototypal temporal integration imaging method. The method does not involve global image segmentation; instead, it uses a subpixel registration algorithm which describes an image warp within the ROI, with subpixel accuracy, as a deformation of quadrilateral patches. It then performs subpixel registration by warping an observed image with the warping function recovered from the deformed quadrilateral patches. Experimental simulations demonstrate that temporal integration imaging is promising as a basic means of high-resolution imaging.

Proceedings ArticleDOI
14 Oct 1996
TL;DR: This paper presents an accurate 3D measurement method by a stereo camera system with a digital image processing subpixel technique, and the 3D position of the object is determined by the photogrammetric principle.
Abstract: Accurate measurement of the 3D position of an object is very important for industrial quality control, robot vision, 3D movement analysis, and similar tasks. Close-range photogrammetry is one of the common techniques for this purpose. Coupled with digital image processing techniques, close-range photogrammetry has developed rapidly over the last decade and found more and more applications. In this paper, we present an accurate 3D measurement method using a stereo camera system with a digital image processing subpixel technique. With this method, the stereo camera system is accurately calibrated and rectified with a rectangular grid. The specified object masks are extracted accurately and automatically by a pattern recognition method. Finally, the object masks on each camera image are matched automatically, and the 3D position of the object is determined by the photogrammetric principle.
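For a calibrated and rectified stereo pair, the photogrammetric principle reduces to textbook triangulation: depth is inversely proportional to disparity, so subpixel-accurate mask positions give subpixel-accurate disparity and hence finer depth. This is only the standard formula under assumed rectified geometry, not the paper's full calibration pipeline:

```python
def triangulate(x_left, x_right, y, focal, baseline):
    """3-D position from a rectified stereo pair.

    x_left, x_right, y: image coordinates in pixels, measured relative
                        to each camera's principal point (same y in both
                        images after rectification)
    focal:              focal length in pixels
    baseline:           camera separation in scene units (e.g. metres)
    Returns (X, Y, Z) in the left camera's frame.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("target must lie in front of both cameras")
    Z = focal * baseline / disparity   # depth from disparity
    X = x_left * Z / focal             # back-project the left-image point
    Y = y * Z / focal
    return X, Y, Z
```

With a 0.1 m baseline and 1000-pixel focal length, a 5-pixel disparity puts the point 20 m away; resolving disparity to 1/20 pixel would refine that depth estimate accordingly.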