Topic
Subpixel rendering
About: Subpixel rendering is a research topic. Over its lifetime, 3885 publications have appeared within this topic, receiving 82789 citations.
Papers published on a yearly basis
Papers
TL;DR: In this paper, the point spread function (PSF) and crosstalk (CTK) measurements of focal plane CMOS Active Pixel Sensor (APS) arrays were performed using sub-micron spot light stimulation.
Abstract: This paper presents the pioneering use of our unique Sub-micron Scanning System (SSS) for point spread function (PSF) and crosstalk (CTK) measurements of focal plane CMOS Active Pixel Sensor (APS) arrays. The system combines near-field optical and atomic force microscopy measurements with standard electronic analysis, and enables full PSF extraction for imagers via sub-micron spot light stimulation — a capability unique to our system. Other systems provide Modulation Transfer Function (MTF) measurements and cannot acquire the true PSF, which limits sensor evaluation and performance grading. A full PSF is required for better knowledge of the sensor and its specific faults, and for research, to enable better optimization of pixel design and imager performance.
In this work, the full PSF was obtained from thorough scanning of different "L"-shaped active-area pixel designs (responsivity variation measured on a subpixel scale), and the crosstalk distributions of the different APS arrays were calculated. The obtained PSF reveals a pronounced asymmetry of diffusion within the array, caused mostly by the particular pixel architecture and the arrangement of pixels within the array. We show that a reliable estimate of the CTK in the imager is possible; using the PSF for CTK measurements determines not only its magnitude (which regular optical measurements can also do) but also uncovers its main causes, enabling design optimization for each potential pixel application.
15 citations
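As an illustration of how a fully measured PSF yields a crosstalk estimate (rather than just an MTF-style magnitude), the sketch below integrates a subpixel-sampled PSF over pixel-sized tiles: the energy landing in the stimulated pixel's tile is signal, the rest is crosstalk, and an asymmetric PSF produces asymmetric neighbor fractions. The function name, the toy Gaussian PSF, and the grid dimensions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def crosstalk_from_psf(psf, pixel_size):
    """Integrate a subpixel-sampled PSF over pixel-sized tiles.

    psf: 2D array sampled on a subpixel grid, centered on the stimulated pixel.
    pixel_size: pixel pitch in subpixel samples (assumed to divide psf.shape).
    Returns per-pixel energy fractions; the center entry is the signal collected
    by the stimulated pixel, everything else is crosstalk.
    """
    h, w = psf.shape
    ny, nx = h // pixel_size, w // pixel_size
    tiles = psf[:ny * pixel_size, :nx * pixel_size].reshape(
        ny, pixel_size, nx, pixel_size).sum(axis=(1, 3))
    return tiles / psf.sum()

# Toy example: an asymmetric Gaussian PSF over a 3x3 pixel neighborhood,
# 10 subpixel samples per pixel, spreading more horizontally than vertically.
yy, xx = np.mgrid[0:30, 0:30]
psf = np.exp(-(((xx - 14.5) / 6.0) ** 2 + ((yy - 14.5) / 4.0) ** 2))
fractions = crosstalk_from_psf(psf, pixel_size=10)
total_crosstalk = 1.0 - fractions[1, 1]
```

Because the toy PSF is wider in x than in y, the horizontal neighbors receive more energy than the vertical ones — the kind of diffusion asymmetry the paper reports the PSF can expose, which an MTF measurement alone would not localize.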
30 Oct 1995
TL;DR: A new algorithm for deinterlacing of interlaced scanned video sequences is presented, based on local spectral analysis of even and odd image fields, which demonstrates that the resulting frames contain less spatial aliasing and sharper edges than frames generated using commonly used approaches such as median filtering and linear interpolation.
Abstract: A new algorithm for deinterlacing of interlaced scanned video sequences is presented. The proposed algorithm is based on local spectra analysis of even and odd image fields. The local spectral analysis is performed based on overlapped block decomposition and motion compensation. The aliasing relationship of matching blocks in the interlaced fields and the sampling lattice information obtained by subpixel registration are utilized to reconstruct deinterlaced frames. Simulation results demonstrate that the resulting frames contain less spatial aliasing and sharper edges than frames generated using commonly used approaches such as median filtering and linear interpolation.
15 citations
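The subpixel registration step this entry relies on can be sketched with phase correlation refined by a parabolic peak fit — a standard technique, shown here only as an illustration; the paper's block-based spectral pipeline is more elaborate, and every name below is an assumption of this sketch.

```python
import numpy as np

def subpixel_shift(ref, moved):
    """Estimate the (dy, dx) shift of `moved` relative to `ref` via phase
    correlation, refined to subpixel precision with a parabolic fit."""
    # Normalized cross-power spectrum: unit phasors carrying the shift.
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(vals, i, n):
        # Parabolic interpolation around the integer peak (wrapped indices).
        ym, y0, yp = vals[(i - 1) % n], vals[i], vals[(i + 1) % n]
        denom = ym - 2 * y0 + yp
        return i + (0.5 * (ym - yp) / denom if denom != 0 else 0.0)

    dy = refine(corr[:, peak[1]], peak[0], corr.shape[0])
    dx = refine(corr[peak[0], :], peak[1], corr.shape[1])
    # Map wrapped indices to signed shifts.
    dy = dy - corr.shape[0] if dy > corr.shape[0] / 2 else dy
    dx = dx - corr.shape[1] if dx > corr.shape[1] / 2 else dx
    return dy, dx

# Recover a known integer shift on a random test image.
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
dy, dx = subpixel_shift(img, np.roll(img, (2, 3), axis=(0, 1)))
```

In the deinterlacing setting, the estimated shift between matching blocks of the even and odd fields supplies the sampling-lattice information used to reconstruct the frame.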
21 Feb 2007
TL;DR: In this article, a liquid crystal display is presented in which each pixel region is divided into two subpixel regions, each formed by a transistor, a liquid crystal capacitance, and a storage capacitance; the transistors of the two subpixels are coupled to different scanning lines, and one of the two transistors is coupled to the data line through a transistor set between two adjacent scanning lines, generating two different voltages in the pixel.
Abstract: A liquid crystal display in which each pixel region is divided into two subpixel regions, each formed by a transistor together with a liquid crystal capacitance and a storage capacitance. The transistors of the two subpixels are coupled to separate scanning lines, and one of the two transistors is coupled to the data line through a transistor set between two adjacent scanning lines, so that two different voltages are generated within the pixel.
14 citations
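The patent text does not give formulas, but one common way such two-subpixel schemes produce distinct voltages is a capacitive divider: the indirectly driven subpixel receives only a fraction of the data voltage set by the coupling capacitance against its own LC and storage capacitances. The sketch below is that purely hypothetical model, not a circuit taken from the patent; all names and component values are assumptions.

```python
def coupled_subpixel_voltage(v_data, c_couple, c_lc, c_st):
    """Voltage reaching the indirectly driven subpixel if the coupling path
    acts as a series capacitance C_couple feeding the subpixel's LC plus
    storage capacitance (hypothetical capacitive-divider model)."""
    return v_data * c_couple / (c_couple + c_lc + c_st)

v_main = 5.0  # directly driven subpixel holds the full data voltage
v_sub = coupled_subpixel_voltage(
    5.0, c_couple=0.3e-12, c_lc=0.2e-12, c_st=0.5e-12)
# v_sub < v_main: the two subpixels hold two different voltages,
# which is the multi-domain effect such pixel divisions aim for.
```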
30 Mar 1994
TL;DR: In this article, a system for rendering visual images that combines sophisticated anti-aliasing and pixel blending techniques with control pipelining in hardware embodiment is presented, where a highly-parallel rendering pipeline performs sophisticated polygon edge interpolation, pixel blending and pixel rendering operations in hardware.
Abstract: A system for rendering visual images that combines sophisticated anti-aliasing and pixel blending techniques with control pipelining in hardware embodiment. A highly-parallel rendering pipeline performs sophisticated polygon edge interpolation, pixel blending and anti-aliasing rendering operations in hardware. Primitive polygons are transformed to subpixel coordinates and then sliced and diced to create "pixlink" elements mapped to each pixel. An oversized frame buffer memory allows the storage of many pixlinks for each pixel. Z-sorting is avoided through the use of a linked-list data object for each pixlink vector in a pixel stack. Because all image data values for X, Y, Z, R, G, B and pixel coverage A are maintained in the pixlink data object, sophisticated blending operations are possible for anti-aliasing and transparency. Data parallelism in the rendering pipeline overcomes the processor efficiency problem arising from the computation-intensive rendering algorithms used in the system of this invention. Single state machine control is made possible through linked data/control pipelining.
14 citations
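The per-pixel "pixlink" stack described above enables coverage-based blending across all fragments that touch a pixel. The sketch below illustrates that idea with a minimal fragment record and a front-to-back composite; the field names are illustrative, not the patent's, and where the patent avoids an explicit Z-sort by keeping an ordered linked list, a plain sort stands in here.

```python
from dataclasses import dataclass

@dataclass
class Pixlink:
    """One fragment stored for a pixel: depth, color, and coverage alpha.
    (Names are illustrative; the patent keeps X, Y, Z, R, G, B and A.)"""
    z: float
    rgb: tuple
    coverage: float  # fraction of the pixel the fragment covers, 0..1

def composite(pixlinks, background=(0.0, 0.0, 0.0)):
    """Front-to-back blend of a pixel's fragment list, using coverage as
    alpha -- the kind of anti-aliased/transparency blending a per-pixel
    fragment stack makes possible."""
    color = [0.0, 0.0, 0.0]
    remaining = 1.0  # transmittance left for fragments farther back
    for frag in sorted(pixlinks, key=lambda f: f.z):  # nearest first
        w = remaining * frag.coverage
        for i in range(3):
            color[i] += w * frag.rgb[i]
        remaining *= 1.0 - frag.coverage
    for i in range(3):
        color[i] += remaining * background[i]
    return tuple(color)

# A half-covering red edge fragment in front of a solid green fragment.
fragments = [Pixlink(2.0, (0.0, 1.0, 0.0), 1.0),
             Pixlink(1.0, (1.0, 0.0, 0.0), 0.5)]
pixel = composite(fragments)
```

The red edge contributes half the pixel and lets half of the green surface behind it show through — exactly the smooth edge an unblended (aliased) write would lose.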
TL;DR: This work proposes a method for hardware-accelerated volume rendering of medical data sets to multiview lenticular displays, offering interactive manipulation throughout, based on buffering GPU-accelerated direct volume rendered visualizations of the individual views from their respective focal spot positions, and composing the output signal for the multiview lenticular screen in a second pass.
Abstract: The generation of multiview stereoscopic images of large volume rendered data demands an enormous amount of calculations. We propose a method for hardware accelerated volume rendering of medical data sets to multiview lenticular displays, offering interactive manipulation throughout. The method is based on buffering GPU-accelerated direct volume rendered visualizations of the individual views from their respective focal spot positions, and composing the output signal for the multiview lenticular screen in a second pass. This compositing phase is facilitated by the fact that the view assignment per subpixel is static, and therefore can be precomputed. We decoupled the resolution of the individual views from the resolution of the composited signal, and adjust the resolution on-the-fly, depending on the available processing resources, in order to maintain interactive refresh rates. The optimal resolution for the volume rendered views is determined by means of an analysis of the lattice of the output signal for the lenticular screen in the Fourier domain.
14 citations
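The key observation above — that the view assignment per subpixel is static and can be precomputed — can be sketched as a lookup table built once and reused every frame. The slant/pitch formula and all parameter values below are illustrative assumptions for a generic slanted-lenticular panel, not the display or mapping used in the paper.

```python
import numpy as np

def view_map(height, width, n_views, slant=1.0 / 3.0, pitch=4.5):
    """Precompute the static view index for every RGB subpixel of a slanted
    lenticular panel (illustrative slant and pitch, measured in subpixels)."""
    y = np.arange(height)[:, None, None]
    # Absolute subpixel column: 3 subpixels (R, G, B) per pixel column.
    x = np.arange(width)[None, :, None] * 3 + np.arange(3)[None, None, :]
    phase = (x - y * slant) % pitch          # position under the lenticule
    return np.floor(phase * n_views / pitch).astype(int) % n_views

def compose(views, vmap):
    """Second pass: pick each output subpixel from its assigned view.
    `views` has shape (n_views, height, width, 3)."""
    yy, xx, cc = np.indices(vmap.shape)
    return views[vmap, yy, xx, cc]

H, W, N = 6, 8, 4
vmap = view_map(H, W, N)                     # built once; reused per frame
views = np.stack([np.full((H, W, 3), v) for v in range(N)])
out = compose(views, vmap)
```

Because `vmap` never changes, only the (possibly reduced-resolution) view renders and this cheap gather run per frame — which is what lets the paper trade view resolution against interactive refresh rates.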