Topic

Subpixel rendering

About: Subpixel rendering is a research topic. Over its lifetime, 3,885 publications have been published on this topic, receiving 82,789 citations.


Papers
Journal ArticleDOI
TL;DR: A method for overcoming the pixel-limited resolution of digital imagers is presented, which combines optical point-spread function engineering with subpixel image shifting and places an optimized pseudorandom phase mask in the aperture stop of a conventional imager.
Abstract: We present a method for overcoming the pixel-limited resolution of digital imagers. Our method combines optical point-spread function engineering with subpixel image shifting. We place an optimized pseudorandom phase mask in the aperture stop of a conventional imager and demonstrate the improved performance that can be achieved by combining multiple subpixel-shifted images. Simulation results show that the pseudorandom phase-enhanced lens (PRPEL) imager achieves as much as 50% resolution improvement over a conventional multiframe imager. The PRPEL imager also improves the reconstruction root-mean-squared error by as much as 20%. We present experimental results that validate the predicted PRPEL imager performance.
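
A minimal sketch of the multiframe combination step the abstract refers to, assuming a plain shift-and-add reconstruction: low-resolution frames captured with known subpixel shifts are placed onto a finer grid and averaged. The PRPEL phase-mask optics and the paper's actual reconstruction algorithm are not modelled; the frame shifts, the 2x upsampling factor, and the nearest-cell accumulation rule are illustrative assumptions.

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Combine low-resolution frames with known subpixel shifts onto a finer grid.

    frames : list of 2-D arrays, all the same shape
    shifts : list of (dy, dx) shifts in low-res pixel units (0.5 = half a pixel)
    factor : integer upsampling factor of the high-res grid
    """
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Nearest high-res cell for every low-res sample of this frame.
        rows = (np.arange(h)[:, None] * factor + round(dy * factor)) % (h * factor)
        cols = (np.arange(w)[None, :] * factor + round(dx * factor)) % (w * factor)
        acc[rows, cols] += frame
        weight[rows, cols] += 1.0
    return acc / np.maximum(weight, 1e-12)

# Usage: four frames shifted by half a pixel exactly fill a 2x finer grid.
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
frames = [truth[::2, ::2], truth[::2, 1::2], truth[1::2, ::2], truth[1::2, 1::2]]
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
high_res = shift_and_add(frames, shifts, factor=2)   # recovers `truth` in this toy case
```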

70 citations

Proceedings ArticleDOI
17 Oct 2005
TL;DR: Application of the proposed measure to a number of benchmark stereo image pairs reveals its superiority over existing correlation-based techniques used for subpixel accuracy.
Abstract: Invariance of the similarity measure to photometric distortions and the capability to produce subpixel accuracy are two desired, and often required, features in most stereo vision applications. In this paper we propose a new correlation-based measure that incorporates both requirements. Specifically, by applying an appropriate interpolation scheme to the candidate windows of the matching image and using the classical zero-mean normalized cross-correlation function, we introduce a suitable measure. Although the proposed measure is a nonlinear function of the subpixel displacement parameter, its maximization admits a closed-form solution, which reduces the complexity of its use in matching techniques. Application of the proposed measure to a number of benchmark stereo image pairs reveals its superiority over existing correlation-based techniques used for subpixel accuracy.
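
A minimal sketch of correlation-based matching with subpixel refinement, using 1-D windows for brevity. It uses the classical zero-mean normalized cross-correlation mentioned in the abstract, but the subpixel step here is an ordinary three-point parabola fit around the integer peak rather than the paper's closed-form maximization; the window size and search range are illustrative assumptions.

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_subpixel(ref, target, center, half=5, search=10):
    """Estimate the displacement of ref's window around `center` inside `target`."""
    win = ref[center - half:center + half + 1]
    scores = np.array([zncc(win, target[center + d - half:center + d + half + 1])
                       for d in range(-search, search + 1)])
    k = int(np.argmax(scores))
    d = float(k - search)
    if 0 < k < len(scores) - 1:
        # Parabolic (three-point) refinement around the integer-pixel peak.
        s_m, s_0, s_p = scores[k - 1], scores[k], scores[k + 1]
        denom = s_m - 2.0 * s_0 + s_p
        if denom != 0:
            d += 0.5 * (s_m - s_p) / denom
    return d

# Usage: recover a non-integer displacement between two 1-D signals.
x = np.linspace(0, 4 * np.pi, 400)
ref = np.sin(x)
target = np.sin(x - 0.03)                      # shift of roughly 0.95 samples
print(match_subpixel(ref, target, center=200))
```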

69 citations

Journal ArticleDOI
TL;DR: It was found that the proposed HNN with an FSRM method can separate more real changes from noise and produce more accurate LCCD results than the state-of-the-art methods.
Abstract: In this paper, a new subpixel resolution land cover change detection (LCCD) method based on the Hopfield neural network (HNN) is proposed. The new method borrows information from a known fine spatial resolution land cover map (FSRM) representing one date to perform subpixel mapping (SPM) from a coarse spatial resolution image acquired on another, closer date. It is implemented by using the thematic information in the FSRM to modify the initialization of the neuron values in the original HNN. The predicted SPM result was compared to the original FSRM to achieve subpixel resolution LCCD. The proposed method was compared with the original, unmodified HNN method as well as six state-of-the-art LCCD methods. To explore the effect of uncertainty in spectral unmixing, which mainly originates from the spectral separability of the input coarse image and the point spread function (PSF) of the sensor, a set of synthetic multispectral images with different class separabilities and PSFs was used in the experiments. It was found that the proposed LCCD method (i.e., HNN with an FSRM) can separate more real changes from noise and produce more accurate LCCD results than the state-of-the-art methods. The advantage of the proposed method is more evident when the class separability is small and the variance of the PSF is large, that is, when the uncertainty in spectral unmixing is large. Furthermore, the utilization of an FSRM can expedite the HNN-based processing required for LCCD. The advantage of the proposed method was also validated by applying it to a set of real Landsat-Moderate Resolution Imaging Spectroradiometer (MODIS) images.
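
A minimal sketch of the linear spectral unmixing step that produces the class fraction estimates on which subpixel mapping methods such as the HNN operate. The Hopfield network itself and the FSRM-based initialization proposed in the paper are not reproduced; the endmember spectra and the simple least-squares solver with clipping and renormalization are illustrative assumptions.

```python
import numpy as np

def unmix(pixel_spectrum, endmembers):
    """Estimate class fractions for one coarse pixel under the linear mixing model.

    pixel_spectrum : (bands,) observed spectrum of the coarse pixel
    endmembers     : (bands, classes) pure-class spectra, one column per class
    """
    fractions, *_ = np.linalg.lstsq(endmembers, pixel_spectrum, rcond=None)
    fractions = np.clip(fractions, 0.0, None)               # enforce non-negativity
    total = fractions.sum()
    return fractions / total if total > 0 else fractions    # enforce sum-to-one

# Usage: two classes, four spectral bands, a mixed pixel that is 70% class A.
E = np.array([[0.10, 0.60],
              [0.20, 0.55],
              [0.30, 0.50],
              [0.40, 0.45]])                                 # columns are endmember spectra
observed = E @ np.array([0.7, 0.3]) + 0.005 * np.random.default_rng(1).standard_normal(4)
print(unmix(observed, E))                                    # approximately [0.7, 0.3]
```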

69 citations

Patent
18 Jul 1995
TL;DR: In this patent, a method of displaying characters on a pixel-oriented grayscale display device having a predetermined pixel resolution, employing parametric geometric glyph descriptors, is disclosed; it supports a client process that passes a request for a particular font, a physical character height for the displayed characters, and the physical resolution expressed in pixels per unit length.
Abstract: A method of displaying characters on a pixel-oriented grayscale display device having a predetermined pixel resolution, employing parametric geometric glyph descriptors, is disclosed. The process supports a client process that passes a request for a particular font, a physical character height for the displayed characters, and the physical resolution expressed in pixels per unit length. A character space height value in pixels is determined and compared to selected values to determine whether the character space height in physical pixels falls into one of three distinct ranges. If it falls within the smallest range, no hinting or grid fitting is performed and the physical pixel coordinates of a scaled glyph descriptor are scan-converted using subpixel coordinates. The "on" subpixels within each pixel are counted to provide a grayscale value for illuminating that pixel. If the character space height is in the highest range, the same process is performed after the scaled glyph descriptor is hinted to physical pixel boundaries. Character space heights in the mid range are hinted to the physical pixel boundary but scan-converted using a conventional scan converter for the physical pixel space. The on pixels that result from this scan conversion are then illuminated at the maximum grayscale value, while off pixels from the conversion are left off. The values of the variables that define the ranges are user-selectable and may be varied in response to other parameters.
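
A minimal sketch of the coverage-counting step described above: the glyph is scan-converted on a finer subpixel grid, and the number of "on" subpixels inside each physical pixel determines that pixel's grayscale level. The 4x4 subpixel grid, the filled-circle test shape, and the 0-255 output range are illustrative assumptions; the patent's hinting and range-selection logic is not modelled.

```python
import numpy as np

def coverage_to_grayscale(subpixel_mask, factor=4, levels=256):
    """Reduce a binary subpixel mask to per-pixel grayscale by counting coverage."""
    h, w = subpixel_mask.shape
    assert h % factor == 0 and w % factor == 0
    blocks = subpixel_mask.reshape(h // factor, factor, w // factor, factor)
    on_counts = blocks.sum(axis=(1, 3))            # "on" subpixels in each pixel
    return np.round(on_counts / (factor * factor) * (levels - 1)).astype(np.uint8)

# Usage: "scan-convert" a filled circle at 4x subpixel resolution, then downsample.
factor, size = 4, 16
yy, xx = np.mgrid[0:size * factor, 0:size * factor]
cx = cy = size * factor / 2
subpixels = ((xx - cx) ** 2 + (yy - cy) ** 2) <= (size * factor * 0.4) ** 2
gray = coverage_to_grayscale(subpixels, factor=factor)   # 16x16 grayscale bitmap
```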

69 citations

Journal ArticleDOI
TL;DR: Wang et al. propose a super-resolution-based change detection network (SRCDNet) that employs a super-resolution (SR) module containing a generator and a discriminator to directly learn SR images through adversarial learning and overcome the resolution difference between bi-temporal images.
Abstract: Change detection, which aims to distinguish surface changes based on bi-temporal images, plays a vital role in ecological protection and urban planning. Since high-resolution (HR) images typically cannot be acquired continuously over time, bi-temporal images with different resolutions are often adopted for change detection in practical applications. Traditional subpixel-based methods for change detection using images with different resolutions may lead to substantial error accumulation when HR images are employed, because of intraclass heterogeneity and interclass similarity. Therefore, it is necessary to develop a novel change detection method for images with different resolutions that is more suitable for HR images. To this end, we propose a super-resolution-based change detection network (SRCDNet) with a stacked attention module. SRCDNet employs a super-resolution (SR) module containing a generator and a discriminator to directly learn SR images through adversarial learning and overcome the resolution difference between bi-temporal images. To enhance the useful information in multi-scale features, a stacked attention module consisting of five convolutional block attention modules (CBAMs) is integrated into the feature extractor. The final change map is obtained through a metric learning-based change decision module, in which a distance map between bi-temporal features is calculated. The experimental results demonstrate the superiority of the proposed method, which not only outperforms all baselines, with the highest F1 scores of 87.40% on the building change detection dataset and 92.94% on the change detection dataset, but also obtains the best accuracies in experiments performed with images having a 4x and 8x resolution difference. The source code of SRCDNet will be available at this https URL.
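
A minimal sketch of the change-decision idea described above: a per-pixel distance map between bi-temporal feature maps is computed and thresholded into a binary change map. The SR module, the CBAM attention blocks, and the learned metric of SRCDNet are not reproduced; the random features and the fixed threshold are illustrative assumptions.

```python
import numpy as np

def change_map(feat_t1, feat_t2, threshold=1.0):
    """feat_t1, feat_t2 : (channels, height, width) feature maps of the two dates."""
    dist = np.sqrt(((feat_t1 - feat_t2) ** 2).sum(axis=0))   # per-pixel Euclidean distance
    return dist, dist > threshold                            # distance map, binary change map

# Usage: identical features except for one changed block.
rng = np.random.default_rng(0)
f1 = rng.standard_normal((8, 32, 32))
f2 = f1.copy()
f2[:, 10:20, 10:20] += 1.0                                   # simulate a changed region
dist, changed = change_map(f1, f2, threshold=1.0)
print(int(changed.sum()), "pixels flagged as changed")       # 100 in this toy example
```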

69 citations


Network Information
Related Topics (5)
Pixel: 136.5K papers, 1.5M citations (91% related)
Image processing: 229.9K papers, 3.5M citations (89% related)
Image segmentation: 79.6K papers, 1.8M citations (85% related)
Convolutional neural network: 74.7K papers, 2M citations (84% related)
Wavelet: 78K papers, 1.3M citations (82% related)
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    87
2022    209
2021    120
2020    179
2019    189
2018    263