
Showing papers on "Subpixel rendering published in 1995"


Journal ArticleDOI
TL;DR: The algorithm is based on a source model emphasizing the visual integrity of detected edges and incorporates a novel edge fitting operator that has been developed for this application, and produces an image of increased resolution with noticeably sharper edges and lower mean-squared reconstruction error than that produced by linear techniques.
Abstract: In this paper, we present a nonlinear interpolation scheme for still image resolution enhancement. The algorithm is based on a source model emphasizing the visual integrity of detected edges and incorporates a novel edge fitting operator that has been developed for this application. A small neighborhood about each pixel in the low-resolution image is first mapped to a best-fit continuous space step edge. The bilevel approximation serves as a local template on which the higher resolution sampling grid can then be superimposed (where disputed values in regions of local window overlap are averaged to smooth errors). The result is an image of increased resolution with noticeably sharper edges and, in all tried cases, lower mean-squared reconstruction error than that produced by linear techniques.
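
As a rough illustration of the idea (not the authors' edge-fitting operator), the sketch below approximates each 3x3 window by a bilevel template thresholded at the window mean, replicates that template onto a finer grid, and averages the overlapping estimates; the threshold rule and all names are illustrative assumptions.

```python
import numpy as np

def edge_fit_upsample(img, factor=2):
    # Toy illustration only: approximate each 3x3 window by a bilevel template
    # (thresholded at the window mean), sample that template on a finer grid,
    # and average estimates where windows overlap. Border pixels stay at zero.
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(out)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = img[i - 1:i + 2, j - 1:j + 2]
            thr = win.mean()
            mask = win > thr
            hi = win[mask].mean() if mask.any() else thr
            lo = win[~mask].mean() if (~mask).any() else thr
            # replicate the bilevel template onto the fine grid under the window
            template = np.where(mask, hi, lo)
            fine = np.kron(template, np.ones((factor, factor)))
            r, c = (i - 1) * factor, (j - 1) * factor
            out[r:r + 3 * factor, c:c + 3 * factor] += fine
            weight[r:r + 3 * factor, c:c + 3 * factor] += 1.0
    weight[weight == 0] = 1.0
    return out / weight
```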

492 citations


Journal ArticleDOI
TL;DR: A subpixel addressing mechanism (called linear interpolation) is utilized for intermediate pixel addressing in the differentiation step, which results in improved accuracy of corner localization and reduced computational complexity.

207 citations


Patent
18 Oct 1995
TL;DR: In this paper, a method and apparatus for interpolating pixels to obtain subpels for use by a video decompression processor is described, where a prediction area is defined from which subpels are necessary to decompress a portion of a video image.
Abstract: A method and apparatus are disclosed for interpolating pixels to obtain subpels for use by a video decompression processor. A prediction area is defined from which subpels are necessary to decompress a portion of a video image. Instead of reading all of the pixels from the prediction area and then processing them together to perform the necessary interpolation, portions of the pixel data are read and simultaneously averaged using in-place computation in order to reduce hardware requirements. Rounding of subpixel results is achieved using the carry input of conventional adders to add a binary "1" to the averaged pixels, which are subsequently truncated to provide the interpolated subpels.
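
The rounding trick described above has a simple software analogue: add a binary 1 to the sum before truncating. A hedged sketch of half-pel averaging with that rounding follows; the pixel layout and function names are assumptions, not the patent's circuitry.

```python
def half_pel(a: int, b: int) -> int:
    # Rounded average of two pixel values. Adding 1 before the shift mirrors
    # feeding a '1' into the adder's carry input in hardware; the right shift
    # is the truncation step that yields the interpolated subpel.
    return (a + b + 1) >> 1

def centre_pel(p00: int, p01: int, p10: int, p11: int) -> int:
    # Rounded average of four neighbouring pixels for the diagonal half-pel position.
    return (p00 + p01 + p10 + p11 + 2) >> 2

print(half_pel(100, 103))          # -> 102
print(centre_pel(100, 103, 98, 101))  # -> 101
```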

182 citations


Patent
24 Mar 1995
TL;DR: In this article, a relatively coarse resolution is used for subpixel correction, so that the DDAs (and other functional blocks) do not have to perform a full multiply: instead they merely perform simple add operations (addition of partial products) to derive the necessary offset from the delta-X values, using a proportionality constant provided by the rasterizer.
Abstract: A graphics processing system in which sub-pixel correction is implemented in a new and more economical way. A half-pixel offset is originally imposed on both the X and Y axes, so the sub-pixel correction value may be positive or negative. A relatively coarse resolution is used for subpixel correction, so that the DDAs (and other functional blocks) do not have to perform a full multiply: instead they merely perform simple add operations (addition of partial products) to derive the necessary offset from the delta-X values, using a proportionality constant provided by the rasterizer. The DDAs preferably include a "guard band" in their calculations, so that values which exceed the maximum (e.g. 255, for 8 bits of subpixel resolution) do not wrap to zero; instead the output is held at the maximum until the computed value comes down below the maximum.
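
A minimal sketch of the two ideas named in the abstract: saturating ("guard band") accumulation instead of wrap-around, and a shift-and-add stand-in for the full multiply. The constants and names are hypothetical, not taken from the patent.

```python
def saturating_add(acc: int, delta: int, max_val: int = 255) -> int:
    # "Guard band" behaviour: values above the maximum are held at the maximum
    # instead of wrapping to zero; values below zero are held at zero.
    return max(0, min(max_val, acc + delta))

def coarse_offset(delta_x: int, k_numer: int, k_shift: int) -> int:
    # Approximate offset = delta_x * K without a full multiply, by summing
    # partial products for the set bits of k_numer; K = k_numer / 2**k_shift
    # stands in for the proportionality constant supplied by the rasterizer
    # (names and bit widths are illustrative).
    acc = 0
    for bit in range(8):
        if (k_numer >> bit) & 1:
            acc += delta_x << bit      # add one partial product
    return acc >> k_shift
```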

168 citations


Patent
19 Jun 1995
TL;DR: In this article, a system for rendering visual images that combines sophisticated anti-aliasing and pixel blending techniques with control pipelining in hardware embodiment is presented, where a highly-parallel rendering pipeline performs sophisticated polygon edge interpolation, pixel blending and pixel rendering operations in hardware.
Abstract: A system for rendering visual images that combines sophisticated anti-aliasing and pixel blending techniques with control pipelining in hardware embodiment. A highly-parallel rendering pipeline performs sophisticated polygon edge interpolation, pixel blending and anti-aliasing rendering operations in hardware. Primitive polygons are transformed to subpixel coordinates and then sliced and diced to create "pixlink" elements mapped to each pixel. An oversized frame buffer memory allows the storage of many pixlinks for each pixel. Z-sorting is avoided through the use of a linked-list data object for each pixlink vector in a pixel stack. Because all image data values for X, Y, Z, R, G, B and pixel coverage A are maintained in the pixlink data object, sophisticated blending operations are possible for anti-aliasing and transparency. Data parallelism in the rendering pipeline overcomes the processor efficiency problem arising from the computation-intensive rendering algorithms used in the system of this invention. Single state machine control is made possible through linked data/control pipelining.

126 citations


Patent
Johji Mamiya1
14 Dec 1995
TL;DR: In this article, a method and apparatus for displaying an enlarged image on a liquid crystal display apparatus capable of displaying colors is described, which can enlarge an image at an arbitrary ratio and display the outline of the enlarged image smoothly.
Abstract: A method and apparatus is provided for displaying an enlarged image on a liquid crystal display apparatus capable of displaying colors, and in particular, a liquid crystal display method and apparatus that can enlarge an image at an arbitrary ratio and display the outline of the enlarged image smoothly. On a display panel of a color liquid crystal display apparatus in which display dots each comprising an array of three subpixels displaying red (R), green (G), and blue (B) are arranged in a matrix, three pieces of row-direction original display brightness data to be displayed in three subpixels are extended and subjected to predetermined weights of brightness to form enlarged display brightness data. This data is sequentially output to the subpixels to extend the original image in the row direction of the display panel before display.
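
A loose sketch of weighted subpixel enlargement in the row direction, assuming linear weights and the usual R, G, B ordering of subpixels within a dot; the patent's actual weighting scheme is not reproduced here.

```python
import numpy as np

def enlarge_row(row_rgb, ratio):
    # row_rgb: (w, 3) array of per-dot R, G, B brightness.
    # Each output subpixel samples its own colour channel at its own horizontal
    # position (R, G, B sit at offsets 0, 1/3, 2/3 of a dot), using linear
    # weights between the two nearest source dots.
    row_rgb = np.asarray(row_rgb, dtype=float)
    w = row_rgb.shape[0]
    w_out = int(round(w * ratio))
    out = np.zeros((w_out, 3))
    for c, phase in enumerate((0.0, 1 / 3, 2 / 3)):
        x = (np.arange(w_out) + phase) / ratio     # source position of each output subpixel
        i0 = np.clip(np.floor(x).astype(int), 0, w - 1)
        i1 = np.clip(i0 + 1, 0, w - 1)
        f = x - np.floor(x)
        out[:, c] = (1 - f) * row_rgb[i0, c] + f * row_rgb[i1, c]
    return out
```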

109 citations


Journal ArticleDOI
TL;DR: In this paper, the authors introduce and analyze techniques for the reduction of aliased signal energy in a staring infrared imaging system; the techniques, referred to as microscanning, exploit subpixel shifts between time frames of an image sequence.
Abstract: We introduce and analyze techniques for the reduction of aliased signal energy in a staring infrared imaging system. A standard staring system uses a fixed two-dimensional detector array that corresponds to a fixed spatial sampling frequency determined by the detector pitch or spacing. Aliasing will occur when sampling a scene containing spatial frequencies exceeding half the sampling frequency. This aliasing can significantly degrade the image quality. The aliasing reduction schemes presented here, referred to as microscanning, exploit subpixel shifts between time frames of an image sequence. These multiple images are used to reconstruct a single frame with reduced aliasing. If the shifts are controlled, using a mirror or beam steerer for example, one can obtain a uniformly sampled microscanned image. The reconstruction in this case can be accomplished by a straightforward interlacing of the time frames. If the shifts are uncontrolled, the effective sampling may be nonuniform and reconstruction becomes more complex. A sampling model is developed and the aliased signal energy is analyzed for the microscanning techniques. Finally, a number of experimental results are presented that illustrate the performance of the microscanning methods.
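
For the controlled case, the reconstruction really is just interlacing. A minimal sketch, assuming four frames captured with half-pixel shifts in a fixed order:

```python
import numpy as np

def interlace_microscan(frames):
    # frames: list of 4 low-resolution frames captured with controlled subpixel
    # shifts of (0, 0), (0, 1/2), (1/2, 0), (1/2, 1/2) pixels (assumed order).
    # Controlled microscanning makes the combined sampling uniform, so the
    # high-resolution frame is reconstructed by straightforward interlacing.
    f00, f01, f10, f11 = [np.asarray(f, dtype=float) for f in frames]
    h, w = f00.shape
    hi = np.zeros((2 * h, 2 * w))
    hi[0::2, 0::2] = f00
    hi[0::2, 1::2] = f01
    hi[1::2, 0::2] = f10
    hi[1::2, 1::2] = f11
    return hi
```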

89 citations


Journal ArticleDOI
TL;DR: In this paper, a Gaussian filter is used to spatially degrade a set of fine spatial resolution images [based on, e.g., Landsat Thematic Mapper (TM)], each of which represents the spatial structure and extent of a land cover type.

81 citations


Proceedings ArticleDOI
08 May 1995
TL;DR: In this article, a piecewise spatial warping technique is used to calculate a set of correction lookup tables which store the row and column subpixel spatial shifts, based on a reference grid image.
Abstract: An algorithm has been developed to correct spatial distortion in image intensifier-based projections for three-dimensional (3D) angiographic reconstruction. A piecewise spatial warping technique is used to calculate a set of correction lookup tables which store the row and column subpixel spatial shifts, based on a reference grid image. Pixel amplitudes in the corrected image are determined from bilinear interpolation of the four surrounding pixels in the observed image. The method has been tested using an x-ray imaging chain with a 30-cm image intensifier positioned at various angular orientations and x-ray source distances. Prior to distortion correction, the maximum error between observed and expected reference point locations was found to be 14 mm. After correction, the maximum and mean errors were 0.23 mm and 0.053 mm, respectively.
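
A small sketch of the correction step described above: per-pixel subpixel shifts from the lookup tables are applied, and the corrected amplitude is the bilinear interpolation of the four surrounding observed pixels. Array shapes and the clipping behaviour are assumptions.

```python
import numpy as np

def correct_distortion(observed, row_shift, col_shift):
    # observed: distorted image; row_shift/col_shift: per-pixel subpixel shifts
    # (the correction lookup tables). Each corrected pixel is the bilinear
    # interpolation of the four observed pixels surrounding the shifted location.
    h, w = observed.shape
    rr, cc = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    r = np.clip(rr + row_shift, 0, h - 1.001)
    c = np.clip(cc + col_shift, 0, w - 1.001)
    r0, c0 = np.floor(r).astype(int), np.floor(c).astype(int)
    fr, fc = r - r0, c - c0
    img = observed.astype(float)
    return ((1 - fr) * (1 - fc) * img[r0, c0] +
            (1 - fr) * fc * img[r0, c0 + 1] +
            fr * (1 - fc) * img[r0 + 1, c0] +
            fr * fc * img[r0 + 1, c0 + 1])
```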

80 citations


Patent
27 Oct 1995
TL;DR: In this paper, each pixel and its encompassed subpixel area are related to a corresponding macrodetector, and the image processing assembly renders a subpixel area as an edge when the magnitude of the centroid of light intensity exceeds a predetermined threshold.
Abstract: An image detection and pixel processing system includes a plurality of detector elements for receiving an image. The detector elements are subdivided into a plurality of macrodetectors, with each macrodetector constituting four or more detector elements, and with each macrodetector providing information for determining both a total light intensity value within the macrodetector and a centroid of light intensity indicative of light intensity position within the macrodetector. An image processing assembly receives information from the plurality of macrodetectors, with the image processing assembly relating a pixel and its encompassed subpixel area to each corresponding macrodetector, and further determining the total light intensity within the pixel and the centroid of light intensity within the subpixel. The image processing assembly is capable of rendering each subpixel area as an edge when the magnitude of the centroid of light intensity is greater than a predetermined threshold.
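
A compact sketch of the per-macrodetector quantities described above (total intensity, centroid of intensity, and the threshold test); the 2x2 grouping implied in the comments and the threshold value are illustrative assumptions.

```python
import numpy as np

def macrodetector_features(block, threshold=0.1):
    # block: intensities of the detector elements forming one macrodetector
    # (e.g. a 2x2 group). Returns the total intensity, the centroid of intensity
    # measured from the macrodetector centre, and whether the subpixel area
    # would be rendered as an edge (centroid magnitude above the threshold).
    block = np.asarray(block, dtype=float)
    total = block.sum()
    if total == 0:
        return 0.0, (0.0, 0.0), False
    ys, xs = np.mgrid[0:block.shape[0], 0:block.shape[1]]
    cy = (ys * block).sum() / total - (block.shape[0] - 1) / 2
    cx = (xs * block).sum() / total - (block.shape[1] - 1) / 2
    return total, (cy, cx), np.hypot(cx, cy) > threshold
```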

73 citations


Journal ArticleDOI
TL;DR: An algorithm is presented that exploits spatiotemporal coherence between frames to significantly decrease the rendering time of ray traced animations, with significant savings even for reflective and refractive objects.
Abstract: Reprojection techniques can create approximate ray traced animation frames. Extending an existing algorithm yields exact frames and full ray tracing, with up to 92 percent savings in rendering time. We present an algorithm that exploits spatiotemporal coherence between frames to significantly decrease the rendering time of ray traced animations. This method produces inferred ray traced images of any scene that can be ray traced using a point sampled method. The images created by the algorithm are not approximated frames created from weighted averages of other frames, nor are they frames patched together from near frame pixel values. The algorithm guarantees that a color seen in a subpixel would be returned by a ray passing somewhere through that subpixel, but not necessarily through the center. This algorithm efficiently creates frames of any view that can be ray traced. While the savings increase with the complexity of the rendered objects and the preponderance of diffuse objects, significant savings occur with reflective and refractive objects. However, the technique requires the ray tracing method to be point sample oriented.

Patent
18 Jul 1995
TL;DR: In this article, a method of displaying characters on a pixel oriented grayscale display device having a predetermined pixel resolution employing parametric, geometric glyph descriptors is disclosed, which supports a client process that passes a request for a particular font and a physical character height for the displayed characters as well as the physical resolution expressed in pixels for unit length.
Abstract: A method of displaying characters on a pixel-oriented grayscale display device having a predetermined pixel resolution, employing parametric, geometric glyph descriptors, is disclosed. The process supports a client process that passes a request for a particular font and a physical character height for the displayed characters, as well as the physical resolution expressed in pixels per unit length. A character space height value in pixels is determined and compared to selected values to determine whether the character space height in physical pixels falls into one of three distinct ranges. If within the smallest range, no hinting or grid fitting is performed and the physical pixel coordinates of a scaled glyph descriptor are scan converted using subpixel coordinates. The "on" subpixels within each pixel are counted to provide a grayscale value for illuminating that particular pixel. If the character space height is in the highest range, the same process is performed after the scaled glyph descriptor is hinted to physical pixel boundaries. Character space heights in the mid range are hinted to the physical pixel boundary but scan converted using a conventional scan converter for the physical pixel space. The "on" pixels that result from this scan conversion are then illuminated to the maximum grayscale value while "off" pixels from the conversion are left off. The values of the variables that define the ranges are user selectable and may be varied in response to other parameters.
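
The grayscale step for the small and large ranges amounts to counting "on" subpixels per physical pixel. A hedged sketch, assuming an n x n subpixel grid whose dimensions divide the bitmap evenly:

```python
import numpy as np

def coverage_grayscale(subpixel_bitmap, n=4, levels=255):
    # subpixel_bitmap: binary array at n times the pixel resolution in each axis,
    # produced by scan-converting the glyph outline on a subpixel grid (assumed
    # to divide evenly into n x n blocks). The grayscale value of each physical
    # pixel is proportional to the count of "on" subpixels it contains.
    h, w = subpixel_bitmap.shape
    blocks = subpixel_bitmap.reshape(h // n, n, w // n, n)
    counts = blocks.sum(axis=(1, 3))
    return np.round(counts * levels / (n * n)).astype(np.uint8)
```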

Patent
10 Apr 1995
TL;DR: In this article, a bistable ferroelectric liquid crystal display with uniformly spaced greyscale levels is presented, where each pixel is blanked then strobed two or more times in each frame time; the relative times between blanking and strobing, at least four different time periods, are varied to give the desired greyscales.
Abstract: The invention provides a ferroelectric liquid crystal display with uniformly spaced greyscale levels. The invention uses a bistable ferroelectric liquid crystal display formed by a layer of chiral smectic liquid crystal material between two cell walls. The walls carry e.g. line and column electrodes to give an x,y matrix of addressable pixels, and are surface treated to provide bistable operation. Each pixel may be divided into subpixels thereby giving spatial weighting for greyscale. Temporal weighting of greyscale is obtained by switching a pixel to a dark state for time T1 and a light state for time T2. When T1 and T2 are not equal, four different greyscales are obtainable; i.e. dark, dark grey, light grey, and light. The present invention provides a required uniform spacing of greyscale levels by addressing each pixel two or more times in one frame time. Each pixel is blanked then strobed, two or more times in each frame time; the relative times between blanking and strobing, at least four different time periods, are varied to give the desired greyscale levels. The temporal and spatial weighting may be combined to increase the number of obtainable greyscales. Further, the relative intensity between adjacent subpixels may be adjusted to vary the apparent size of the smallest subpixel; this is useful when subpixel size is near to manufacturing limits.

Proceedings ArticleDOI
21 Nov 1995
TL;DR: In this article, the authors compare two classes of algorithms for estimating subpixel, rigid-body translation between two images: optical flow and block matching, and show that these two algorithms are equivalent for subpixel displacements.
Abstract: We compare two classes of algorithms for estimating subpixel, rigid-body translation between two images. One class is based on optical flow. Optical flow algorithms determine translations between images from estimates of spatial and temporal derivatives of brightness. The other class is based on block matching. Block matching algorithms determine translations between images by minimizing the difference between shifted (warped) versions of the original images. We show that these two classes of algorithms are equivalent for subpixel displacements. Specifically, we show that all block matching algorithms that use bilinear interpolation can be recast into equivalent optical flow formulations, and that all algorithms based on optical flow using first-order derivative estimators can be recast into equivalent block matching formulations.
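
A minimal sketch of the optical-flow side of the comparison: a single global translation estimated by least squares from first-order derivatives. For small shifts this coincides with one Gauss-Newton step of block matching with bilinear interpolation, which is the equivalence the paper establishes. The function name and the derivative estimator are assumptions.

```python
import numpy as np

def flow_translation(img1, img2):
    # Least-squares estimate of a single rigid-body subpixel translation from
    # spatial and temporal derivative estimates (the optical-flow formulation).
    img1, img2 = img1.astype(float), img2.astype(float)
    gy, gx = np.gradient(img1)     # spatial derivatives (rows, columns)
    gt = img2 - img1               # temporal derivative estimate
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = -np.array([np.sum(gx * gt), np.sum(gy * gt)])
    dx, dy = np.linalg.solve(A, b)
    return dx, dy   # estimated translation in pixels (small-shift assumption)
```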

Journal ArticleDOI
TL;DR: Digital image correlation is used to find, with subpixel accuracy, the disparities between corresponding points in a pair of images for each of two camera geometries: a parallel optical-axis model and a converging optical-axis model.
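
A sketch of one common way to obtain subpixel disparities by correlation: an integer SSD search followed by a parabolic fit of the scores around the minimum. This is a standard refinement, not necessarily the method used in the paper; shapes and names are assumptions.

```python
import numpy as np

def disparity_subpixel(left_patch, right_strip):
    # Slide left_patch along a search strip of equal height in the other image,
    # score each integer disparity by the sum of squared differences, and refine
    # the best integer match with a parabolic fit for subpixel accuracy.
    L = left_patch.astype(float)
    w = L.shape[1]
    scores = np.array([np.sum((L - right_strip[:, d:d + w].astype(float)) ** 2)
                       for d in range(right_strip.shape[1] - w + 1)])
    d0 = int(np.argmin(scores))
    if 0 < d0 < len(scores) - 1:
        s_m, s_0, s_p = scores[d0 - 1], scores[d0], scores[d0 + 1]
        denom = s_m - 2 * s_0 + s_p
        if denom != 0:
            return d0 + 0.5 * (s_m - s_p) / denom
    return float(d0)
```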

Patent
Kalluri R. Sarma1
28 Sep 1995
TL;DR: In this article, a multi-domain halftone grayscale active matrix liquid crystal display utilizing a mono-grap configuration for achieving a very wide viewing angle at a low-cost is presented.
Abstract: A multi-domain halftone grayscale active matrix liquid crystal display utilizing a mono-grap configuration for achieving a very wide viewing angle at a low-cost. The substrate of the display has areas having a rub or an alignment of different directions from the direction of the rub or alignment in other adjacent areas. Each pixel of the gray scale matrix has a plurality of subpixels such that each subpixel turns on at a different voltage than that of the other subpixels. The areas involving different rubs or alignments on the substrates may be areas of pixels or subpixels. At least one compensating retardation layer is formed on the display. The technique of enabling the combining of multiple domains and compensation with halftone grayscale pixels results in a display having significantly wider viewing angles than either a multi-domain display or a halftone grayscale display, whether compensated for or not.

Patent
Kei Kawase1
25 Jul 1995
TL;DR: In this article, a computer graphics system is described for generating pixel data corresponding to a plurality of pixels to be displayed, which includes a frame buffer having entries associated with each of the pixels.
Abstract: A computer graphics system is disclosed for generating pixel data corresponding to a plurality of pixels to be displayed. The computer graphics system includes a frame buffer having entries associated with each of the pixels. Each of the entries includes a subpixel data field, a plurality of display data fields, and a control field. For each entry the sub-pixel data field stores data corresponding to a set of sub-pixels, at least one of the plurality of display data fields stores data determined by filtering of the data of the sub-pixel data field of the entry, and the control field stores data representing a relationship between the sub-pixel data field of the entry and each of the plurality of display data fields of the entry.
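
A hedged sketch of one frame-buffer entry as a data structure; the 4x4 subpixel count, the two display fields, and the box-filtering step are illustrative assumptions rather than the patent's layout.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PixelEntry:
    # One frame-buffer entry: a field holding the sub-pixel samples, several
    # display data fields, and a control field recording how each display field
    # relates to the sub-pixel field. Field sizes are illustrative.
    subpixels: List[int] = field(default_factory=lambda: [0] * 16)  # 4x4 samples
    display: List[int] = field(default_factory=lambda: [0, 0])      # filtered values
    control: int = 0                                                 # relationship flags

    def refilter(self, slot: int = 0) -> None:
        # Derive one display value by filtering (box-averaging) the sub-pixel samples.
        self.display[slot] = sum(self.subpixels) // len(self.subpixels)
```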

Proceedings ArticleDOI
05 Jul 1995
TL;DR: In this paper, the authors describe three image matching algorithms designed specifically to process images taken with large base-to-height ratios. They include a new match score, a subpixel interpolation scheme, and a multi-resolution unwarping technique.
Abstract: When a terrain elevation map is computed from widely separated images, the perspective distortion may result in a large number of false matches and poor reconstruction accuracy. This paper describes three image matching algorithms designed specifically to process images taken with large base-to-height ratios. They include a new match score, a subpixel interpolation scheme, and a multi-resolution unwarping technique. The algorithms are incorporated into a stereo analysis package and the system is tested by processing a single pair of high altitude images with a base-to-height ratio of 0.63 and a sequence of simulated images with base-to-height ratios that varied between 0.25 and 2.25. Analysis of the simulated data shows that when these techniques are implemented the reconstruction accuracy remains independent of the base-to-height ratio.

Journal ArticleDOI
TL;DR: By adding uniformly distributed independent random noise it is shown that estimation bias may be removed and that the estimation variance is inversely proportional to the length of the line segment.
Abstract: This paper concerns the problem of obtaining subpixel estimates of the locations of straight edges in binary digital images using dithering. By adding uniformly distributed independent random noise it is shown that estimation bias may be removed and that the estimation variance is inversely proportional to the length of the line segment. The sensitivity to incorrect dither amplitude is calculated, and implementation is discussed.
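
A small simulation of the dithering idea: adding uniform noise before binarization makes the expected binary output linear in the subpixel edge position, so averaging along the segment gives an unbiased estimate whose variance shrinks with segment length. Parameter names are illustrative.

```python
import numpy as np

def dithered_edge_estimate(coverage, n_rows=200, dither_amp=0.5, rng=None):
    # coverage: true subpixel edge position expressed as the fraction of the
    # boundary pixel that is covered (0..1). Each row's boundary pixel is
    # binarized after adding independent uniform dither; averaging the binary
    # outputs along the line segment estimates the coverage without bias, with
    # variance falling as 1/n_rows.
    rng = rng or np.random.default_rng(0)
    noise = rng.uniform(-dither_amp, dither_amp, size=n_rows)
    binary = (coverage + noise > 0.5).astype(float)
    return binary.mean()

print(dithered_edge_estimate(0.37, n_rows=10000))   # close to 0.37
```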

Journal ArticleDOI
TL;DR: In this article, a spectral mixture analysis approach and a low probability detection routine based on orthogonal subspace projection were used to detect two different volcanic tuff units, one basaltic and one rhyolitic, in two different scenes of data measured by the airborne visible/infrared imaging spectrometer (AVIRIS).
Abstract: High spectral resolution imagery produced by imaging spectrometers enables the discrimination of geologic materials whose surface expression is subpixel in scale. Moreover, the use of such data makes it possible to distinguish materials which are characterized only by subtle differences in the spectral continuum. We define the “continuum” as the reflectance or radiance spanning the space between spectral features. The capability to distinguish subpixel targets will prove invaluable in studies of the geology of the Earth and planets from airborne and spaceborne imaging spectrometers. However, subpixel targets can only be uniquely identified in a truly optimal sense through the application of data reduction techniques that model the spectral contribution of both target and background materials. Two such techniques are utilized herein. They are a spectral mixture analysis approach and a low probability detection routine based on orthogonal subspace projection. These techniques were applied to the problem of detecting two different volcanic tuff units, one basaltic and one rhyolitic, in two different scenes of data measured by the airborne visible/infrared imaging spectrometer (AVIRIS). These tuff units have limited exposures from an overhead perspective and have spectral signatures which differ from those of background materials only in terms of subtle slope changes in the reflectance continuum. Of the two approaches, it was found that the low probability detection algorithm was more effective in highlighting those pixels that contained the target tuff units while suppressing the response of undesired background materials.
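
Minimal sketches of the two data-reduction techniques named above: linear spectral unmixing by least squares and an orthogonal-subspace-projection detector. Matrix shapes are assumptions, and no sum-to-one or non-negativity constraints are enforced here.

```python
import numpy as np

def unmix(pixel, endmembers):
    # Linear spectral mixture analysis: solve pixel ~ endmembers @ abundances
    # by least squares. endmembers: (n_bands, n_materials) matrix of target and
    # background spectra; pixel: (n_bands,) measured spectrum.
    abundances, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return abundances

def osp_detector(pixel, target, background):
    # Orthogonal subspace projection: project the pixel onto the orthogonal
    # complement of the background subspace, then correlate with the target
    # spectrum. background: (n_bands, n_background) matrix; target: (n_bands,).
    P = np.eye(len(pixel)) - background @ np.linalg.pinv(background)
    return float(target @ P @ pixel)
```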

Patent
31 Jul 1995
TL;DR: In this paper, a grey level value representing a pixel is interpolated to generate subpixel grey level values which correspond to a second resolution, and a threshold circuit thresholds the interpolated grey-level value.
Abstract: A method and system implements a high addressability characteristic into an error diffusion process. A grey level value representing a pixel is received. The grey level value is interpolated to generate subpixel grey level values which correspond to a second resolution. A threshold circuit thresholds the interpolated grey level value. In parallel to the interpolation circuit and threshold circuit is an error circuit which generates a plurality of possible error values. One of the plurality of possible error values is selected based on the number of subpixels exceeding a threshold value. A portion of the selected error value is then diffused to adjacent pixels on a next scanline.
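
A toy 1-D version of the process described above, assuming a 0-255 input range, n_sub subpixels per pixel, and all of the error diffused to the next pixel (a real error-diffusion kernel spreads it over several neighbours on the next scanline as well).

```python
import numpy as np

def high_addressability_1d(line, n_sub=4, threshold=128):
    # Each input pixel plus its diffused error is linearly interpolated into
    # n_sub subpixel gray values, each subpixel is thresholded, and the error is
    # the difference between the corrected value and the rendered value implied
    # by the number of "on" subpixels. Sketch only, not the patented circuit.
    vals = np.asarray(line, dtype=float)
    out = np.zeros((len(vals), n_sub), dtype=np.uint8)
    err = 0.0
    for i, v in enumerate(vals):
        corrected = v + err
        nxt = vals[i + 1] if i + 1 < len(vals) else corrected
        subs = corrected + (np.arange(n_sub) + 0.5) / n_sub * (nxt - corrected)
        on = subs > threshold
        out[i] = on
        rendered = on.sum() * 255.0 / n_sub
        err = corrected - rendered
    return out
```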

Patent
11 Jan 1995
TL;DR: A color/texture interpolator (CTI) for rendering antialiased vectors comprises an x-stepper circuit that generates the pixel addresses of the pixels composing a vector, a filter memory storing predetermined filter values addressed by the vector's minor-axis subpixel addresses and slope, and a color interpolator that generates a color value for each pixel.
Abstract: A color/texture interpolator (CTI) for use in rendering antialiased vectors in a computer graphics system comprises: an x-stepper circuit for receiving input data defining a vector to be rendered and generating respective pixel addresses of pixels composing the vector; a filter memory for storing predetermined filter values addressed according to the vector's minor axis subpixel addresses and slope; and a color interpolator for generating a color value for each pixel composing the vector.

Proceedings ArticleDOI
20 Jun 1995
TL;DR: A new general-purpose algorithm determines the optimal geometric match between contours, that is, the transformation yielding a minimal deformation, and achieves subpixel-precision matching.
Abstract: This paper introduces a new general purpose algorithm that allows the optimal geometric match between contours to be determined, that is the transformation yielding a minimal deformation is obtained. The algorithm relies only on the geometric properties of the contours and does not call for any other constraint, so that it is particularly suitable when no parameterization of the deformation is available or desirable. Contour deformation is explicitly incorporated in the computation, allowing for a thorough use of all geometric information available. Moreover, no discretization is involved in the computation, resulting in two main advantages: first, the algorithm is robust to differences in the segmentation of contours and allows the matching of polygonal approximations of contours with very little loss of precision; second, subpixel precision matching can be achieved.

Proceedings ArticleDOI
22 May 1995
TL;DR: In this article, the authors introduce and analyze techniques for the reduction of aliasing power in a staring infrared imaging system, which is an optical technique utilizing subpixel shifts between multiple time frames in an image sequence.
Abstract: In this paper, we will introduce and analyze techniques for the reduction of aliased power in a staring infrared imaging system. A standard staring system uses a fixed two dimensional detector array which implies a fixed spatial sampling frequency corresponding to the detector-to-detector spacing. Aliasing will occur when sampling a scene containing spatial frequencies exceeding half of this sampling frequency. Most natural scenes are not band limited and aliasing can significantly degrade the quality and utility of the resulting image. The alias reduction schemes presented here are based upon microscanning, which is an optical technique utilizing subpixel shifts between multiple time frames in an image sequence. These multiple images are used to reconstruct a single frame with reduced aliasing. The microscanning techniques presented here are divided into the categories of controlled and uncontrolled techniques. If the microscanning is controlled, using a microscan mirror or beam steerer for example, one can obtain a uniformly sampled microscanned image. The reconstruction in this case is a relatively simple task. In an uncontrolled case, the sampling may be nonuniform and the reconstruction becomes more complicated. Experimental results are presented which illustrate that the quality of a microscanned image can be dramatically improved over a non-microscanned image as well as demonstrate the utility of the applied algorithms.

Journal ArticleDOI
TL;DR: In this article, a virtual camera subpixel detector is proposed for surface investigation by means of the reflection high energy electron diffraction (RHEED) method, where the RHEED intensity diffraction patterns are approximated by analytical functions which fit the simple diffraction intensity profiles well.

Journal ArticleDOI
TL;DR: In this article, a new method is presented to estimate the abundance of different cover types within individual pixels, and several linear and non-linear multivariate calibration techniques are compared with respect to their ability to establish a relation between pixel values and fractions of ground cover.
Abstract: This paper is concerned with subpixel modelling of land cover estimation. A new method is presented to estimate the abundance of different cover types within individual pixels. Several linear and non-linear multivariate calibration techniques are compared with respect to their ability to establish a relation between pixel values and fractions of ground cover. The method is demonstrated using Landsat-TM imagery and data from Dutch heathlands. By using an optimization procedure for matching field data to image data, a solution was found for the problem of positioning errors in the training set formation.


Journal ArticleDOI
01 Oct 1995
TL;DR: In this paper, a curve is represented as a set of interconnected Hermite splines forming a snake generated from the subpixel edge information that minimizes the global energy functional integral over the set.
Abstract: One approach to the detection of curves at subpixel accuracy involves the reconstruction of such features from subpixel edge data points. A new technique is presented for reconstructing and segmenting curves with subpixel accuracy using deformable models. A curve is represented as a set of interconnected Hermite splines forming a snake generated from the subpixel edge information that minimises the global energy functional integral over the set. While previous work on the minimisation was mostly based on the Euler-Lagrange transformation, the authors use the finite element method to solve the energy minimisation equation. The advantages of this approach over the Euler-Lagrange transformation approach are that the method is straightforward, leads to positive m-diagonal symmetric matrices, and has the ability to cope with irregular geometries such as junctions and corners. The energy functional integral solved using this method can also be used to segment the features by searching for the location of the maxima of the first derivative of the energy over the elementary curve set.

Proceedings ArticleDOI
22 May 1995
TL;DR: In this work the RBF network is first trained with the given image, satisfying the constraint of the gray value at each pixel, and each pixel is then divided into subpixels.
Abstract: Image interpolation is performed using radial basis function (RBF) neural networks. In this work the RBF network is first trained with the given image, satisfying the constraint of the gray value at each pixel. With the desired magnification ratio, each pixel is then divided into subpixels. The subpixel gray values are calculated using the trained network. Two-dimensional Gaussian basis functions are used as the neurons in the hidden layer.
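
A minimal sketch of the approach, assuming one Gaussian basis function per pixel and an exact linear solve for the weights so the gray-value constraint holds at every pixel; this is practical only for small images and is not the trained network described in the paper.

```python
import numpy as np

def rbf_upsample(img, factor=2, sigma=1.0):
    # Place one Gaussian basis function at every pixel centre, solve for weights
    # so the network reproduces the given gray value at each pixel, then evaluate
    # the network at the subpixel positions of the enlarged grid.
    # Memory and solve cost grow quickly with image size (O(N^2) and O(N^3)).
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    centres = np.column_stack([ys.ravel(), xs.ravel()]).astype(float)
    d2 = ((centres[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2 * sigma ** 2))
    weights = np.linalg.solve(G, img.astype(float).ravel())
    yy, xx = np.mgrid[0:h * factor, 0:w * factor]
    pts = np.column_stack([yy.ravel(), xx.ravel()]) / factor   # subpixel positions
    d2p = ((pts[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return (np.exp(-d2p / (2 * sigma ** 2)) @ weights).reshape(h * factor, w * factor)
```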

Journal ArticleDOI
TL;DR: A local-neighborhood pixel-based adaptive algorithm to track image features, both spatially and temporally, over a sequence of monocular images and it is shown how the algorithm has been used to extract the Focus of Expansion and to compute the time-to-contact using real image sequences of unstructured, unknown environments.