Topic

Subpixel rendering

About: Subpixel rendering is a research topic. Over its lifetime, 3,885 publications have been published on this topic, receiving 82,789 citations.


Papers
Proceedings ArticleDOI
TL;DR: In this paper, the point spread function (PSF) and crosstalk (CTK) measurements of focal plane CMOS Active Pixel Sensor (APS) arrays were performed using sub-micron spot light stimulation.
Abstract: This paper presents the pioneering use of our unique Sub-micron Scanning System (SSS) for point spread function (PSF) and crosstalk (CTK) measurements of focal-plane CMOS Active Pixel Sensor (APS) arrays. The system combines near-field optical and atomic force microscopy measurements with standard electronic analysis and enables full PSF extraction for imagers via sub-micron spot light stimulation, a capability unique to our system. Other systems provide Modulation Transfer Function (MTF) measurements and cannot acquire the true PSF, which limits the evaluation of the sensor and its performance grading. A full PSF is required for better knowledge of the sensor and its specific faults, and for research, to enable better optimization of pixel design and imager performance. In this work, thorough scanning of different "L"-shaped active-area pixel designs (responsivity variation measurements on a subpixel scale) yielded the full PSF, from which the crosstalk distributions of the different APS arrays were calculated. The obtained PSF reveals a pronounced asymmetry of the diffusion within the array, caused mostly by the particular pixel architecture and the arrangement of pixels within the array. We show that a reliable estimate of the CTK in the imager is possible; using the PSF for CTK measurement determines not only its magnitude (which regular optical measurements can also provide) but also its main causes, enabling design optimization for each potential pixel application.
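As an illustration of how a measured PSF can be turned into crosstalk figures, the sketch below integrates a 2D PSF map over the footprints of the stimulated pixel and its eight neighbors. The names, the `pitch` parameter, and the uniform-grid assumption are illustrative; the paper's own extraction procedure is not reproduced here.

```python
# Hedged sketch: turning a measured PSF map into per-neighbor crosstalk fractions.
# Assumptions (not from the paper): `psf` is a 2D responsivity map on a uniform
# sub-micron grid centered on the stimulated pixel and covering at least a 3x3
# pixel neighborhood; `pitch` is the pixel pitch expressed in grid samples.
import numpy as np

def crosstalk_fractions(psf: np.ndarray, pitch: int) -> dict:
    """Integrate the PSF over each pixel footprint and normalize by the total response."""
    h, w = psf.shape
    cy, cx = h // 2, w // 2          # grid center = center of the stimulated pixel
    half = pitch // 2
    total = psf.sum()
    fractions = {}
    for dy in (-1, 0, 1):            # 3x3 neighborhood around the stimulated pixel
        for dx in (-1, 0, 1):
            y0 = cy + dy * pitch - half
            x0 = cx + dx * pitch - half
            window = psf[y0:y0 + pitch, x0:x0 + pitch]
            fractions[(dy, dx)] = float(window.sum() / total)
    return fractions
```

Differences between mirror-image neighbor pairs (for example (0, -1) versus (0, 1)) then quantify the diffusion asymmetry that the abstract attributes to the pixel architecture and layout.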

15 citations

Proceedings ArticleDOI
30 Oct 1995
TL;DR: A new algorithm for deinterlacing interlaced video sequences is presented, based on local spectral analysis of the even and odd image fields; simulations demonstrate that the resulting frames contain less spatial aliasing and sharper edges than frames generated with commonly used approaches such as median filtering and linear interpolation.
Abstract: A new algorithm for deinterlacing of interlaced scanned video sequences is presented. The proposed algorithm is based on local spectral analysis of even and odd image fields. The local spectral analysis is performed based on overlapped block decomposition and motion compensation. The aliasing relationship of matching blocks in the interlaced fields and the sampling lattice information obtained by subpixel registration are utilized to reconstruct deinterlaced frames. Simulation results demonstrate that the resulting frames contain less spatial aliasing and sharper edges than frames generated using commonly used approaches such as median filtering and linear interpolation.
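The abstract relies on subpixel registration of matching blocks, but the exact estimator is not spelled out here. The sketch below uses a common alternative, phase correlation with parabolic peak refinement, purely to illustrate how sub-sample shifts between fields can be obtained.

```python
# Illustrative sketch (not the paper's exact method): subpixel block registration
# via phase correlation with parabolic peak refinement, a common way to obtain
# the sub-sample shifts needed for motion-compensated deinterlacing.
import numpy as np

def subpixel_shift(block_a: np.ndarray, block_b: np.ndarray) -> tuple[float, float]:
    """Estimate the (dy, dx) shift of block_b relative to block_a."""
    A = np.fft.fft2(block_a)
    B = np.fft.fft2(block_b)
    cross_power = A * np.conj(B)
    cross_power /= np.abs(cross_power) + 1e-12      # whiten the spectrum
    corr = np.real(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(axis_vals, idx, n):
        # Parabolic interpolation around the integer peak for sub-sample accuracy.
        prev, cur, nxt = axis_vals[(idx - 1) % n], axis_vals[idx], axis_vals[(idx + 1) % n]
        denom = prev - 2 * cur + nxt
        return idx + 0.5 * (prev - nxt) / denom if denom != 0 else float(idx)

    dy = refine(corr[:, peak[1]], peak[0], corr.shape[0])
    dx = refine(corr[peak[0], :], peak[1], corr.shape[1])
    # Wrap the circular shifts into a signed range.
    dy = dy - corr.shape[0] if dy > corr.shape[0] / 2 else dy
    dx = dx - corr.shape[1] if dx > corr.shape[1] / 2 else dx
    return float(dy), float(dx)
```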

15 citations

Patent
21 Feb 2007
TL;DR: In this patent, a liquid crystal display divides each pixel region into two subpixel regions, each formed by a transistor, a liquid crystal capacitor, and a storage capacitor; the two subpixel transistors are coupled to different scanning lines, and one of them is coupled to the data line through a transistor placed between two adjacent scanning lines, so that two different voltages are generated within the pixel.
Abstract: A liquid crystal display is characterized by dividing each pixel region into two subpixel regions, each formed by a transistor, a liquid crystal capacitor, and a storage capacitor. The transistors of the two subpixels are coupled to different scanning lines, and one of the two transistors is coupled to the data line through a transistor placed between two adjacent scanning lines, so as to generate two different voltages within the pixel.

14 citations

Patent
30 Mar 1994
TL;DR: In this patent, a system for rendering visual images is presented that combines sophisticated anti-aliasing and pixel-blending techniques with control pipelining in a hardware embodiment; a highly parallel rendering pipeline performs polygon edge interpolation, pixel blending, and anti-aliasing rendering operations in hardware.
Abstract: A system for rendering visual images that combines sophisticated anti-aliasing and pixel blending techniques with control pipelining in hardware embodiment. A highly-parallel rendering pipeline performs sophisticated polygon edge interpolation, pixel blending and anti-aliasing rendering operations in hardware. Primitive polygons are transformed to subpixel coordinates and then sliced and diced to create "pixlink" elements mapped to each pixel. An oversized frame buffer memory allows the storage of many pixlinks for each pixel. Z-sorting is avoided through the use of a linked-list data object for each pixlink vector in a pixel stack. Because all image data values for X, Y, Z, R, G, B and pixel coverage A are maintained in the pixlink data object, sophisticated blending operations are possible for anti-aliasing and transparency. Data parallelism in the rendering pipeline overcomes the processor efficiency problem arising from the computation-intensive rendering algorithms used in the system of this invention. Single state machine control is made possible through linked data/control pipelining.
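The patent stores multiple "pixlink" fragments per pixel, each carrying color, depth, and coverage A, and resolves them into a final color. A minimal sketch of that idea follows; the class names and the coverage-weighted resolve rule are assumptions, and unlike the patent's linked-list scheme it simply sorts by depth at resolve time for clarity.

```python
# Minimal illustrative sketch of per-pixel fragment ("pixlink") lists resolved by
# coverage-weighted nearest-first blending. The patent's actual data layout and
# resolve logic are not given here; names and the resolve rule are assumptions.
from dataclasses import dataclass, field

@dataclass
class Pixlink:
    z: float                               # fragment depth
    rgb: tuple[float, float, float]        # fragment color
    coverage: float                        # fraction of the pixel covered (the "A" value)

@dataclass
class PixelStack:
    links: list = field(default_factory=list)

    def add(self, link: Pixlink) -> None:
        self.links.append(link)            # insertion order is arbitrary; no sort on insert

    def resolve(self) -> tuple:
        """Blend fragments nearest-first, weighting each by the still-uncovered area."""
        out = [0.0, 0.0, 0.0]
        remaining = 1.0                    # uncovered portion of the pixel so far
        for link in sorted(self.links, key=lambda p: p.z):   # sort only at resolve time
            w = min(link.coverage, remaining)
            for c in range(3):
                out[c] += w * link.rgb[c]
            remaining -= w
            if remaining <= 0.0:
                break
        return tuple(out)
```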

14 citations

Journal ArticleDOI
Daniel Ruijters
TL;DR: This work proposes a method for hardware-accelerated volume rendering of medical data sets to multiview lenticular displays, offering interactive manipulation throughout; it is based on buffering GPU-accelerated direct volume rendered visualizations of the individual views from their respective focal spot positions and composing the output signal for the multiview lenticular screen in a second pass.
Abstract: The generation of multiview stereoscopic images of large volume rendered data demands an enormous amount of calculations. We propose a method for hardware accelerated volume rendering of medical data sets to multiview lenticular displays, offering interactive manipulation throughout. The method is based on buffering GPU-accelerated direct volume rendered visualizations of the individual views from their respective focal spot positions, and composing the output signal for the multiview lenticular screen in a second pass. This compositing phase is facilitated by the fact that the view assignment per subpixel is static, and therefore can be precomputed. We decouple the resolution of the individual views from the resolution of the composited signal and adjust it on the fly, depending on the available processing resources, in order to maintain interactive refresh rates. The optimal resolution for the volume rendered views is determined by means of an analysis of the lattice of the output signal for the lenticular screen in the Fourier domain.
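Because the view assignment per subpixel is static, it can be precomputed once and reused to composite every output frame from the buffered views. The sketch below illustrates that two-pass structure; the slanted-lenticular style mapping formula and all parameter names are assumptions rather than the paper's calibration, and the buffered views may be rendered at a lower resolution than the panel, as the abstract describes.

```python
# Hedged sketch of the two-pass idea: each (x, y, color) subpixel of the lenticular
# panel is statically assigned to one of N views, so the assignment is precomputed
# once as a lookup table and reused to composite every frame. The mapping formula
# and parameter names below are illustrative assumptions.
import numpy as np

def precompute_view_map(width: int, height: int, n_views: int,
                        lens_pitch_subpix: float, slant: float) -> np.ndarray:
    """Return a (height, width, 3) array of view indices, one per RGB subpixel."""
    y, x = np.mgrid[0:height, 0:width]
    view_map = np.empty((height, width, 3), dtype=np.int32)
    for c in range(3):                       # R, G, B subpixels sit at offsets 0, 1, 2
        phase = (3 * x + c - slant * y) / lens_pitch_subpix
        view_map[..., c] = np.floor(phase * n_views).astype(np.int32) % n_views
    return view_map

def composite(views: np.ndarray, view_map: np.ndarray) -> np.ndarray:
    """Second pass: gather each output subpixel from its assigned (possibly lower-res) view."""
    n, vh, vw, _ = views.shape               # views: (n_views, vh, vw, 3)
    height, width, _ = view_map.shape
    ys = (np.arange(height) * vh // height)[:, None]   # nearest-neighbor upscaling
    xs = (np.arange(width) * vw // width)[None, :]
    out = np.empty((height, width, 3), dtype=views.dtype)
    for c in range(3):
        out[..., c] = views[view_map[..., c], ys, xs, c]
    return out
```

The per-frame cost of the second pass is just this gather, which is why the resolution of the buffered views can be lowered independently of the panel resolution when processing resources run short.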

14 citations


Network Information
Related Topics (5)
Pixel: 136.5K papers, 1.5M citations (91% related)
Image processing: 229.9K papers, 3.5M citations (89% related)
Image segmentation: 79.6K papers, 1.8M citations (85% related)
Convolutional neural network: 74.7K papers, 2M citations (84% related)
Wavelet: 78K papers, 1.3M citations (82% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    87
2022    209
2021    120
2020    179
2019    189
2018    263