
Showing papers on "Subpixel rendering published in 2010"


Journal ArticleDOI
TL;DR: In this paper, the impact of image filtering and of skipping features detected at the highest scales on the performance of the SIFT operator for SAR image registration is analyzed, based on multisensor, multitemporal, and different-viewpoint SAR images.
Abstract: The SIFT operator's success for computer vision applications makes it an attractive alternative to the intricate feature based SAR image registration problem. The SIFT operator processing chain is capable of detecting and matching scale and affine invariant features. For SAR images, the operator is expected to detect stable features at lower scales where speckle influence diminishes. To adapt the operator performance to SAR images we analyse the impact of image filtering and of skipping features detected at the highest scales. We present our analysis based on multisensor, multitemporal and different viewpoint SAR images. The operator shows potential to become a robust alternative for point feature based registration of SAR images as subpixel registration consistency was achieved for most of the tested datasets. Our findings indicate that operator performance in terms of repeatability and matching capability is affected by an increase in acquisition differences within the imagery. We also show that the proposed adaptations result in a significant speed-up compared to the original SIFT operator.
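As a rough illustration of the kind of processing chain the abstract describes, the sketch below registers a SAR image pair with a generic SIFT + ratio-test + RANSAC pipeline, assuming OpenCV. The speckle pre-filter and the idea of discarding coarse-scale keypoints are only approximated; all function names, thresholds, and the Gaussian pre-filter are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: generic SIFT-based registration of a SAR image pair (OpenCV assumed).
import cv2
import numpy as np

def register_sar_pair(reference, moving, ratio=0.75):
    """Estimate an affine transform mapping `moving` onto `reference` (uint8 images)."""
    # Simple stand-in for the speckle filtering analysed in the paper.
    ref = cv2.GaussianBlur(reference, (5, 5), 1.0)
    mov = cv2.GaussianBlur(moving, (5, 5), 1.0)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref, None)
    kp2, des2 = sift.detectAndCompute(mov, None)
    # The paper also skips features from the highest scales; a crude analogue
    # would be to drop keypoints with very large `kp.size` before matching.

    # Lowe's ratio test keeps only distinctive matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]

    src = np.float32([kp2[m.trainIdx].pt for m in good])
    dst = np.float32([kp1[m.queryIdx].pt for m in good])

    # RANSAC rejects remaining outliers; inlier residuals give a (sub)pixel
    # registration-consistency estimate.
    A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                      ransacReprojThreshold=1.0)
    return A, inliers
```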

140 citations


Journal ArticleDOI
TL;DR: The new SinMod method extracts motion from magnetic resonance imaging (MRI)-tagged (MRIT) image sequences and performs better than HARP with respect to accuracy of displacement detection, noise reduction, and avoidance of artifacts.
Abstract: The new SinMod method extracts motion from magnetic resonance imaging (MRI)-tagged (MRIT) image sequences. Image intensity in the environment of each pixel is modeled as a moving sine wavefront. Displacement is estimated at subpixel accuracy. Performance is compared with the harmonic-phase analysis (HARP) method, which is currently the most common method used to detect motion in MRIT images. SinMod can handle line tags, as well as speckle patterns. In artificial images (tag distance six pixels), SinMod detects displacements accurately (error < pixels). Effects of noise are suppressed effectively. Sharp transitions in motion at the boundary of an object are smeared out over a width of 0.6 tag distance. For MRIT images of the heart, SinMod appears less sensitive to artifacts, especially later in the cardiac cycle when image quality deteriorates. For each pixel, the quality of the sine-wave model in describing local image intensity is quantified objectively. If local quality is low, artifacts are avoided by averaging motion over a larger environment. Summarizing, SinMod is just as fast as HARP, but it performs better with respect to accuracy of displacement detection, noise reduction, and avoidance of artifacts.

138 citations


Patent
Kang Hoon1
20 Oct 2010
TL;DR: In this article, an image display device consists of a plurality of pixels configured to display a 2D image or a 3D image and a driving circuit configured to apply a data voltage in a two-dimensional (2D) or three-dimensional (3D) image format, with a luminance compensation voltage applied to a fourth subpixel in the 2D mode.
Abstract: An image display device includes an image display panel including a plurality of pixels configured to display a 2D image or a 3D image, a driving circuit configured to apply a data voltage in a 2D image format or a data voltage in a 3D image format to the image display panel, a controller configured to control the driving circuit in a 2D mode for displaying the 2D image or in a 3D mode for displaying the 3D image, and a patterned retarder configured to convert light from the image display panel to alternately have a first polarization and a second polarization, wherein each pixel includes first to fourth subpixels, and the data voltage in the 2D image format is applied to the first to third subpixels and a luminance compensation voltage is applied to the fourth subpixel in the 2D mode, while the data voltage in the 3D image format is applied to the first to third subpixels and a dark gray voltage is applied to the fourth subpixel in the 3D mode.

133 citations


Journal ArticleDOI
TL;DR: The proposed registration scheme has been tested using data from the Compact High Resolution Imaging Spectrometer (CHRIS) onboard the Project for On-Board Autonomy (Proba) satellite and demonstrates that the proposed method works well in areas with little variation in topography.
Abstract: Subpixel image registration is the key to successful image fusion and superresolution enhancement of multiangle satellite data. Multiangle image registration poses two main challenges: 1) Images captured at large view angles are susceptible to resolution change and blurring, and 2) local geometric distortion caused by topographic effects and/or platform instability may be important. In this paper, we propose a two-step nonrigid automatic registration scheme for multiangle satellite images. In the first step, control points (CPs) are selected in a preregistration process based on the scale-invariant feature transform (SIFT). However, the number of CPs obtained in this first step may be too few and/or CPs may be unevenly distributed. To remediate these problems, in a second step, the preliminary registered image is subdivided into chips of 64 × 64 pixels, and each chip is matched with a corresponding chip in the reference image using normalized cross correlation (NCC). By doing so, more CPs with better spatial distribution are obtained. Two criteria are applied during the generation of CPs to identify outliers. Selected SIFT and NCC CPs are used for defining a nonrigid thin-plate-spline model. The proposed registration scheme has been tested using data from the Compact High Resolution Imaging Spectrometer (CHRIS) onboard the Project for On-Board Autonomy (Proba) satellite. Experimental results demonstrate that the proposed method works well in areas with little variation in topography. Application in areas with more pronounced relief would require the use of orthorectified image data in order to achieve subpixel registration accuracy.
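The chip-matching second step lends itself to a compact sketch: a 64 × 64 chip is correlated against a search window in the reference image with normalized cross correlation (scikit-image's match_template assumed), and the peak is refined to subpixel precision with parabola fits. Window sizes and names are assumptions, not the paper's implementation.

```python
# Generic NCC chip matching with subpixel peak refinement; not the paper's code.
import numpy as np
from skimage.feature import match_template

def subpixel_peak(cc):
    """Refine the integer maximum of a correlation surface with 1-D parabola fits."""
    iy, ix = np.unravel_index(np.argmax(cc), cc.shape)
    dy = dx = 0.0
    if 0 < iy < cc.shape[0] - 1:
        c0, c1, c2 = cc[iy - 1, ix], cc[iy, ix], cc[iy + 1, ix]
        if c0 - 2 * c1 + c2 != 0:
            dy = 0.5 * (c0 - c2) / (c0 - 2 * c1 + c2)
    if 0 < ix < cc.shape[1] - 1:
        c0, c1, c2 = cc[iy, ix - 1], cc[iy, ix], cc[iy, ix + 1]
        if c0 - 2 * c1 + c2 != 0:
            dx = 0.5 * (c0 - c2) / (c0 - 2 * c1 + c2)
    return iy + dy, ix + dx

def match_chip(reference, chip, top_left, search_margin=16):
    """Subpixel offset of `chip` (cut out at `top_left`) within `reference`."""
    r0, c0 = top_left
    h, w = chip.shape
    win = reference[max(r0 - search_margin, 0):r0 + h + search_margin,
                    max(c0 - search_margin, 0):c0 + w + search_margin]
    cc = match_template(win, chip)               # NCC surface
    peak_r, peak_c = subpixel_peak(cc)
    # Offset relative to the chip's nominal position in the reference image.
    return peak_r - min(search_margin, r0), peak_c - min(search_margin, c0)
```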

94 citations


Patent
08 Jun 2010
TL;DR: In this article, the pixel centers of the pixels in each pixel group are arranged in a regular two-dimensional array having one dimension parallel to the first direction, and the pixels within a pixel group are separated by an inter-pixel separation in that direction.
Abstract: A display, including a substrate having a display area including first and second non-overlapping pixel groups and a gutter located between the first and second pixel groups, the gutter having a dimension in a first direction separating the first and second pixel groups, and each pixel group includes a plurality of pixels, each pixel having three or more differently colored sub-pixels; and wherein the pixel centers of the pixels in each pixel group are arranged in a regular two-dimensional array having one dimension parallel to the first direction, and wherein the pixels within a pixel group are separated by an inter-pixel separation in the first direction; and one or more electrical elements arranged within the gutter, each subpixel being connected to one of the one or more electrical elements, wherein the gutter dimension is greater than the inter-pixel separation, so that artifacts in a displayed image are reduced.

92 citations


Journal ArticleDOI
TL;DR: This paper proposes hybrid endmember selective detectors in which different sets of endmembers are used for different pixels, to ensure that the true composition of endmembers in each pixel is applied in the detection procedure.
Abstract: Subpixel target detection is a challenge in hyperspectral image analysis. As the spatial resolution of hyperspectral imagery is usually limited, subpixel targets only occupy part of the pixel area. In such cases, the spatial characteristics of the targets are hard to acquire, and the only information we can use comes from spectral characteristics. Several kinds of methods based on spectral characteristics have been proposed in the past. One is the linear unmixing method, which can provide the abundances of different endmembers in the hyperspectral imagery, including the target abundance. Another focuses on providing statistically reliable rules to separate subpixel targets from their backgrounds. Recently, hybrid detectors combining the aforementioned two methods were put forward, which can not only extract the quantitative information of the endmembers but also feed this quantitative information into an adaptive matched subspace detector or adaptive cosine/coherent estimate detector to separate the target pixels from the background with statistically reliable rules. However, in these methods, all the endmembers are used to construct the statistical rule, while in most cases only some of the endmembers are actually contained in the pixels. This paper proposes hybrid endmember selective detectors in which different sets of endmembers are used for different pixels to ensure that the true composition of endmembers in each pixel is applied in the detection procedure. Three different types of hyperspectral data were used in our experiments, and our proposed hybrid endmember selective detectors showed better performance than the current hybrid detectors in all the experiments.
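A minimal sketch of the unmixing ingredient only, assuming per-pixel nonnegative least squares and a simple abundance threshold to decide which endmembers are treated as present in a pixel; the statistical matched-subspace/ACE detection stage of the hybrid detectors is not reproduced, and the threshold is an illustrative assumption.

```python
# Per-pixel abundance estimation with a crude "selective" endmember mask.
import numpy as np
from scipy.optimize import nnls

def selective_abundances(pixel, endmembers, threshold=0.05):
    """
    pixel:      (bands,) spectrum of one pixel
    endmembers: (bands, n_endmembers) matrix of endmember spectra
    Returns sum-normalized abundances and a boolean mask of retained endmembers.
    """
    a, _ = nnls(endmembers, pixel)          # nonnegativity constraint
    if a.sum() > 0:
        a = a / a.sum()                     # approximate sum-to-one constraint
    keep = a >= threshold                   # endmembers deemed present in this pixel
    return a, keep
```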

89 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: A new segmentation-based approach for disparity optimization in stereo vision is presented that segments either the left color image or the calculated texture image, and it is shown that a stripe-based modification of Semi-Global Matching significantly reduces memory consumption at nearly constant matching quality and thus enables embedded realization.
Abstract: This paper introduces a new segmentation-based approach for disparity optimization in stereo vision. The main contribution is a significant enhancement of the matching quality at occlusions and textureless areas by segmenting either the left color image or the calculated texture image. The local cost calculation is done with a Census-based correlation method and is compared with the standard sum of absolute differences. The confidence of a match is measured, and only non-confident or non-textured pixels are estimated by calculating a disparity plane for the corresponding segment. The quality of the locally optimized matches is increased by a modified Semi-Global Matching (SGM) step with subpixel accuracy. In contrast to standard SGM, not the whole image but horizontal stripes of the image are used for disparity optimization. It is shown that this modification significantly reduces the memory consumption at nearly constant matching quality and thus enables embedded realization. Using the Middlebury ranking as the evaluation criterion, it is shown that the proposed algorithm performs well in comparison to pure Census correlation. It reaches a top-ten rank when subpixel accuracy is considered. Furthermore, the matching quality of the algorithm, especially of the texture-based plane fitting, is shown on two real-world scenes where a significant enhancement could be achieved.
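The Census-based local cost mentioned above can be sketched in a few lines: a 5 × 5 census transform of both images and the per-pixel Hamming distance between the codes for a candidate disparity. The stripe-wise SGM optimization and the plane fitting are not reproduced; sizes and the wrap-around border handling are simplifications, not the authors' settings.

```python
# Census transform and Hamming matching cost for one candidate disparity.
import numpy as np

def census_transform(img, radius=2):
    """Encode each pixel as a bit string of comparisons with its neighborhood."""
    codes = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            codes = (codes << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return codes

def census_cost(left, right, disparity):
    """Per-pixel Hamming distance between left and disparity-shifted right codes."""
    cl = census_transform(left)
    cr = np.roll(census_transform(right), disparity, axis=1)
    diff = np.bitwise_xor(cl, cr)
    # Popcount by unpacking each 64-bit code into its bytes.
    bits = np.unpackbits(diff.view(np.uint8).reshape(*diff.shape, 8), axis=-1)
    return bits.sum(axis=-1)
```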

88 citations


Patent
28 Oct 2010
TL;DR: In this article, a pixel circuit comprises a charge store and a readout circuit for each subpixel region, and circuitry is configured to select a plurality of subpixel elements from different pixels that correspond to the same waveband for simultaneous reading.
Abstract: In various exemplary embodiments, optically sensitive devices comprise a plurality of pixel regions. Each pixel region includes an optically sensitive layer over a substrate and has subpixel regions for separate wavebands. A pixel circuit comprises a charge store and a read out circuit for each subpixel region. Circuitry is configured to select a plurality of subpixel elements from different pixels that correspond to the same waveband for simultaneous reading to a shared read out circuit.

88 citations


Journal ArticleDOI
TL;DR: A theoretical analysis of this model is carried out to predict its performance, in particular with respect to the contrast level of the image and the number of changed pixels, and its capacity to detect changes affecting more than 25 percent of a given pixel under average conditions.
Abstract: This paper presents a new method for unsupervised subpixel change detection using image series. The method is based on the definition of a probabilistic criterion capable of assessing the level of coherence of an image series relative to a reference classification with a finer resolution. In contrast to approaches based on an a priori model of the data, the model developed here is based on the rejection of a nonstructured model, called the a-contrario model, by the observation of structured data. This coherence measure is the core of a stochastic algorithm which automatically selects the image subdomain representing the most likely changes. A theoretical analysis of this model is carried out to predict its performance, in particular regarding the contrast level of the image as well as the number of changed pixels in the image. Numerical simulations are also presented that confirm the high robustness of the method and its capacity to detect changes affecting more than 25 percent of a considered pixel under average conditions. An application to land-cover change detection is then provided using time series of satellite images.

88 citations


Proceedings ArticleDOI
14 Mar 2010
TL;DR: A fast subpixel motion estimation method is proposed for motion deblurring, where conventional motion estimation algorithms used in video coding are too complex; the new algorithm does not require any interpolation and does not produce motion-compensated frames.
Abstract: We propose a fast subpixel motion estimation method for motion deblurring, where conventional motion estimation algorithms used in video coding are too complex. The new algorithm is a combination of block matching and optical flow. It does not require any interpolation and it does not provide motion-compensated frames. Thus it is much faster than conventional methods. Statistical results show that the new algorithm performs quickly and accurately. It also demonstrates performance comparable to the benchmark full-search algorithm, yet requires significantly less time.
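A hedged sketch of the general idea of combining block matching with an optical-flow step: an integer displacement from an SAD search, followed by a single Lucas-Kanade-style least-squares step on the image gradients for the fractional part, so no interpolation is needed. Block and search sizes are illustrative assumptions, not the authors' settings.

```python
# Integer block matching plus one gradient-based refinement step.
import numpy as np

def estimate_block_motion(prev, curr, top_left, block=16, search=8):
    r, c = top_left
    ref = prev[r:r + block, c:c + block].astype(np.float64)

    # 1) Integer displacement by exhaustive SAD search.
    best, best_dv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[r + dy:r + dy + block, c + dx:c + dx + block]
            if cand.shape != ref.shape:
                continue
            sad = np.abs(cand.astype(np.float64) - ref).sum()
            if sad < best:
                best, best_dv = sad, (dy, dx)
    dy, dx = best_dv

    # 2) Subpixel refinement: one Lucas-Kanade least-squares step around the
    #    integer match, using image gradients instead of interpolation.
    cand = curr[r + dy:r + dy + block, c + dx:c + dx + block].astype(np.float64)
    gy, gx = np.gradient(cand)
    it = ref - cand                                  # temporal difference
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    v, *_ = np.linalg.lstsq(A, it.ravel(), rcond=None)
    return dy + v[1], dx + v[0]
```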

79 citations


Journal ArticleDOI
TL;DR: A stereo-matching algorithm, based upon the robust phase correlation method, is demonstrated that is capable of directly measuring disparities to 1/50th-pixel accuracy and precision, potentially allowing DEM generation from images that would otherwise not be deemed suitable for the purpose.
Abstract: To obtain depth-from-stereo imagery, it is traditionally required that the baseline separation between images (or the base-to-height ratio) be very large in order to ensure the largest image disparity range for effective measurement. Typically, a B/H ratio in the range of 0.6-1 is preferred. As a consequence, most existing stereo-matching algorithms are designed to measure disparities reliably with only integer-pixel precision. However, wide baselines may increase the possibility of occlusion occurring between highly contrasting relief, posing a serious problem for digital elevation model (DEM) generation in urban and highly dissected mountainous areas. A narrow-baseline stereo configuration can alleviate the problem significantly but requires very precise measurements of disparity at subpixel levels. In this paper, we demonstrate a stereo-matching algorithm, based upon the robust phase correlation method, that is capable of directly measuring disparities to 1/50th-pixel accuracy and precision. The algorithm enables complete and dense surface shape information to be retrieved from images with unconventionally low B/H ratios (e.g., less than 0.01), potentially allowing DEM generation from images that would otherwise not be deemed suitable for the purpose.
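For orientation, subpixel shift measurement by phase correlation can be demonstrated with a library routine: scikit-image's upsampled cross correlation reaches roughly 1/50 pixel with upsample_factor=50. This shows the generic technique, not the authors' robust phase-correlation variant; the synthetic shift below is purely for demonstration.

```python
# Generic subpixel phase correlation (scikit-image and SciPy assumed).
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
reference = rng.random((256, 256))
moving = nd_shift(reference, (0.34, -1.18))      # known fractional shift

est, error, _ = phase_cross_correlation(reference, moving, upsample_factor=50)
# Per-axis magnitudes are about (0.34, 1.18); the sign follows the library's
# registration convention (shift needed to align `moving` with `reference`).
print(est)
```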

Journal ArticleDOI
TL;DR: Experimental results indicate that the accuracy of the proposed method for estimating land-surface subpixel temperature is significantly higher than that of a traditional method that uses the NDVI as an input parameter, and the average error of subpixel temperature is decreased by 2-3 K with the proposed method.
Abstract: Among multisource data fusion methods, the potential advantages of remote sensing of solar-reflective visible and near-infrared (VNIR; 400-900 nm) data and thermal-infrared (TIR) data have not been fully exploited. Usually, a linear unmixing method is used for the purpose, which results in low estimation accuracy of subpixel land-surface temperature (LST). In this paper, we propose a novel method to estimate subpixel LST. This approach uses the characteristics of high-spatial-resolution Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) VNIR data and the low-spatial-resolution TIR data simulated from the ASTER temperature product to generate high-spatial-resolution temperature data at a subpixel scale. First, the land-surface parameters (e.g., leaf area index, normalized difference vegetation index (NDVI), soil water content index, and reflectance) were extracted from VNIR data and field measurements. Then, the extracted high-resolution land-surface parameters and the LST were simulated into coarse resolutions. Second, a genetic algorithm combined with a self-organizing feature map artificial neural network (ANN) was utilized to create relationships between the land-surface parameters and the corresponding LSTs separately for different land-cover types at coarse spatial-resolution scales. Finally, the ANN-trained relationships were applied in the estimation of subpixel temperatures (at high spatial resolution) from high-spatial-resolution land-surface parameters. The two sets of data with different spatial resolutions were simulated using an aggregate resampling algorithm. Experimental results indicate that the accuracy of our method for estimating land-surface subpixel temperature is significantly higher than that of a traditional method that uses the NDVI as an input parameter, and the average error of subpixel temperature is decreased by 2-3 K with our method. This method is a simple and convenient approach to estimate subpixel LST from high spatial-temporal resolution data quickly and effectively.
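The downscaling logic reduces to: learn a parameters-to-LST regression at the coarse scale, then apply it to the fine-scale parameters. In the sketch below a plain scikit-learn MLP stands in for the paper's GA-optimized self-organizing feature map network, and the per-land-cover stratification is omitted; array names and shapes are assumptions.

```python
# Coarse-to-fine LST downscaling with a stand-in regressor (scikit-learn assumed).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def downscale_lst(coarse_params, coarse_lst, fine_params):
    """
    coarse_params: (n_coarse_pixels, n_features)  e.g. NDVI, LAI, reflectance
    coarse_lst:    (n_coarse_pixels,)             LST at coarse resolution
    fine_params:   (n_fine_pixels, n_features)    same features at fine resolution
    Returns estimated LST for every fine-resolution (subpixel) element.
    """
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 16),
                                       max_iter=2000, random_state=0))
    model.fit(coarse_params, coarse_lst)          # relationship at the coarse scale
    return model.predict(fine_params)             # transferred to the subpixel scale
```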

Journal ArticleDOI
TL;DR: In this article, the authors measured the in-plane linear displacements of microelectromechanical systems with subnanometer accuracy by observing periodic micropatterns with a charge-coupled device camera attached to an optical microscope.
Abstract: In-plane linear displacements of microelectromechanical systems are measured with subnanometer accuracy by observing the periodic micropatterns with a charge-coupled device camera attached to an optical microscope. The translation of the microstructure is retrieved from the video by phase-shift computation using discrete Fourier transform analysis. This approach is validated through measurements on silicon devices featuring steep-sided periodic microstructures. The results are consistent with the electrical readout of a bulk micromachined capacitive sensor, demonstrating the suitability of this technique for both calibration and sensing. Using a vibration isolation table, a standard deviation of σ = 0.13 nm could be achieved, enabling a measurement resolution of 0.5 nm (4σ) and a subpixel resolution better than 1/100 pixel.
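The phase-shift computation behind this measurement can be illustrated in one dimension: the displacement of a periodic pattern is read from the phase of its fundamental spatial frequency in the DFT. The profile, period, and synthetic shift below are assumptions for demonstration only, not the authors' setup.

```python
# 1-D phase-shift displacement retrieval for a periodic pattern.
import numpy as np

def periodic_displacement(profile_a, profile_b, period_px):
    """Displacement (in pixels) of a periodic 1-D profile between two frames."""
    n = len(profile_a)
    k = int(round(n / period_px))            # index of the fundamental frequency
    fa, fb = np.fft.rfft(profile_a), np.fft.rfft(profile_b)
    dphi = np.angle(fb[k] * np.conj(fa[k]))  # phase shift of the fundamental
    return -dphi / (2 * np.pi) * period_px   # phase -> displacement in pixels

# Example with a synthetic 12-pixel-period pattern shifted by 0.07 px.
x = np.arange(600)
a = np.sin(2 * np.pi * x / 12.0)
b = np.sin(2 * np.pi * (x - 0.07) / 12.0)
print(periodic_displacement(a, b, 12.0))     # close to 0.07
```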

Journal ArticleDOI
TL;DR: Based on the research results of different subpixel algorithms, a first-order Newton-Raphson iteration method and gradient-based method are recommended for 3D-DIC measurement.
Abstract: The three-dimensional digital image correlation (3D-DIC) method is rapidly developing and is being widely applied to engineering and manufacturing. Despite its extensive use, the error caused by different image matching algorithms is seldom discussed. An algorithm for 3D speckle image generation is proposed, and the performances of different subpixel correlation algorithms are studied. The advantage is that there is no interpolation bias of texture in the simulation before and after deformation, and the error from the interpolation of speckle can be omitted in this algorithm. An error criterion for 3D reconstruction is proposed. 3D speckle images were simulated, and the performance of four subpixel algorithms is addressed. Based on the research results of different subpixel algorithms, a first-order Newton–Raphson iteration method and gradient-based method are recommended for 3D-DIC measurement.

Journal ArticleDOI
TL;DR: Experiments on simulated and real-world data show excellent performance of the proposed multiframe SR reconstruction method, which simultaneously estimates a subpixel precise polygon boundary as well as a high-resolution intensity description of a small moving object subject to a modified total variation constraint.
Abstract: Multiframe super-resolution (SR) reconstruction of small moving objects against a cluttered background is difficult for two reasons: a small object consists completely of “mixed” boundary pixels and the background contribution changes from frame-to-frame. We present a solution to this problem that greatly improves recognition of small moving objects under the assumption of a simple linear motion model in the real-world. The presented method not only explicitly models the image acquisition system, but also the space-time variant fore- and background contributions to the “mixed” pixels. The latter is due to a changing local background as a result of the apparent motion. The method simultaneously estimates a subpixel precise polygon boundary as well as a high-resolution (HR) intensity description of a small moving object subject to a modified total variation constraint. Experiments on simulated and real-world data show excellent performance of the proposed multiframe SR reconstruction method.

Patent
Arnz Michael1
30 Mar 2010
TL;DR: In this paper, a discrete intensity profile of the edge, having profile pixels, is derived from the image pixels, and a continuous profile function of the edges is determined based on the profile pixels.
Abstract: The position of an edge of a marker structure in an image of the marker structure is determined with subpixel accuracy. A discrete intensity profile of the edge, having profile pixels, is derived from the image pixels, and a continuous profile function of the edge is determined based on the profile pixels. Profile pixels whose intensity values are near an intensity threshold value are selected as evaluation pixels. Based on the evaluation pixels, a curve of continuous intensity is calculated. A position coordinate at which the intensity value of the continuous intensity curve matches the threshold value is selected as a first position coordinate, and the distance is determined between the first position coordinate and the position coordinate of the evaluation pixel that, from among the evaluation pixels previously selected, has the intensity value closest to the threshold value. The determined distance is compared to a predetermined threshold, and if the distance is greater than the threshold, a shift is effected, and the process iteratively repeats the steps of selecting the adjacent profile pixels, calculating the curve of continuous intensity, and so forth. If the distance is not greater than the threshold, the position of the edge in the captured image is determined with subpixel accuracy from all the distances determined in step g).
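A minimal analogue of threshold-based subpixel edge location, assuming a 1-D intensity profile: the edge coordinate is taken where the linearly interpolated profile crosses the threshold. The patent's iterative selection of evaluation pixels and its distance test are not reproduced.

```python
# Subpixel edge position from a threshold crossing of a 1-D intensity profile.
import numpy as np

def subpixel_edge_position(profile, threshold):
    """Return the subpixel coordinate where `profile` first crosses `threshold`."""
    profile = np.asarray(profile, dtype=float)
    above = profile >= threshold
    crossings = np.flatnonzero(above[1:] != above[:-1])
    if crossings.size == 0:
        return None                                   # no edge in this profile
    i = crossings[0]
    # Linear interpolation between the two pixels straddling the threshold.
    return i + (threshold - profile[i]) / (profile[i + 1] - profile[i])

print(subpixel_edge_position([10, 12, 30, 90, 110, 112], 60))   # 2.5
```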

Patent
08 Jun 2010
TL;DR: In this article, a vertical alignment liquid crystal panel based on a transverse electric field drive system is provided which shows few changes in color when looked squarely at, where a liquid crystal layer is sandwiched between substrates (10, 20), and the substrate (10) is provided with an insulating layer (25) having at least two regions that are different in relative permittivity from each other in a pixel (6) composed of a red subpixel (6R), a green subpixel (6G), and a blue subpixel (6B).
Abstract: A vertical alignment liquid crystal panel based on a transverse electric field drive system is provided which shows few changes in color when looked squarely at. A liquid crystal panel (2) is a vertical alignment liquid crystal panel based on a transverse electric field drive system, which carries out a display by driving, with a transverse electric field, a liquid crystal layer (50) sandwiched between substrates (10, 20), and the substrate (10) is provided with an insulating layer (25) having at least two regions that are different in relative permittivity from each other in a pixel (6) composed of a red subpixel (6R), a green subpixel (6G), and a blue subpixel (6B). Those regions of the insulating layer (25) which correspond to the blue, green, and red subpixel (6B, 6G, 6R) in the liquid crystal panel (2) have relative permittivities of 3, 3 to 7, and 4 to 7, respectively.

Journal ArticleDOI
TL;DR: This paper proposes an integrated approach to estimate the HR depth and the SR image from multiple LR stereo observations and demonstrates the efficacy of the proposed method in not only being able to bring out image details but also in enhancing the HR depth over its LR counterpart.
Abstract: Under stereo settings, the twin problems of image superresolution (SR) and high-resolution (HR) depth estimation are intertwined. The subpixel registration information required for image superresolution is tightly coupled to the 3D structure. The effects of parallax and pixel averaging (inherent in the downsampling process) preclude a priori estimation of pixel motion for superresolution. These factors also compound the correspondence problem at low resolution (LR), which in turn affects the quality of the LR depth estimates. In this paper, we propose an integrated approach to estimate the HR depth and the SR image from multiple LR stereo observations. Our results demonstrate the efficacy of the proposed method in not only being able to bring out image details but also in enhancing the HR depth over its LR counterpart.

Journal ArticleDOI
TL;DR: The application of the developed processing system showed that the algorithm achieved better than 1/3 FOV geolocation accuracy for AVHRR 1-km scenes, and was designed for processing daytime data as it intensively employs observations from optical solar bands, the near-infrared channel in particular.
Abstract: Precise geolocation is one of the fundamental requirements for satellite imagery to be suitable for climate applications. The Global Climate Observing System and the Committee on Earth Observing Satellites identified the requirement for the accuracy of geolocation of satellite data for climate applications as 1/3 field of view (FOV). This requirement for the series of the Advanced Very High Resolution Radiometer (AVHRR) on the National Oceanic and Atmospheric Administration platforms cannot be met without implementing the ground control point (GCP) correction, particularly for historical data, because of limited accuracy of orbit modeling and knowledge of satellite attitude angles. This paper presents a new method for precise georeferencing of the AVHRR imagery developed as part of the new Canadian AVHRR processing system (CAPS) designed for generating a high-quality AVHRR satellite climate data record at 1-km spatial resolution. The method works in swath projection and uses the following: 1) reference monthly images from the Moderate Resolution Imaging Spectroradiometer at 250-m resolution; 2) orthorectification to correct for surface elevation; and 3) a novel image matching technique in swath projection to achieve subpixel resolution. The method is designed for processing daytime data as it intensively employs observations from optical solar bands, the near-infrared channel in particular. The application of the developed processing system showed that the algorithm achieved better than 1/3 FOV geolocation accuracy for AVHRR 1-km scenes. It has a very high efficiency rate (>97%) due to the dense and uniform GCP coverage of the study area (5700 × 4800 km²), covering all of Canada, the northern U.S., Alaska, Greenland, and surrounding oceans.

Journal ArticleDOI
TL;DR: The measurement results of a large-scale waterwheel blade by XJTUDP show that this photogrammetry system can be applied to industrial measurements.
Abstract: A digital photogrammetry measurement system (XJTUDP) for close-range industrial measurement is developed in this work. Studies are carried out on key technologies of a photogrammetry measurement system, such as high-accuracy measurement of marker-point centers based on subpixel edge fitting, coded-point design and automatic coded-point detection, digital camera calibration, and automatic image-point matching algorithms. The 3-D coordinates of object points are reconstructed using collinearity equations, image orientation based on coplanarity equations, a direct linear transformation solution, epipolar constraints, 3-D reconstruction, and a bundle adjustment solution. Through the use of circular coded points, the newly developed measurement system first locates the positions of the camera automatically. Matching and reconstruction of the uncoded points are resolved using the epipolar geometry of the multiple camera positions. The normal vector of the marker points is used to eliminate the error caused by the thickness of the marker points. The XJTUDP and TRITOP systems are both tested on the basis of the VDI/VDE 2634 guidelines. Results show that their precision is better than 0.1 mm/m. The measurement results of a large-scale waterwheel blade by XJTUDP show that this photogrammetry system can be applied to industrial measurements.

Journal ArticleDOI
TL;DR: In this article, a flat-panel display with a slanted subpixel arrangement was developed for a multi-view three-dimensional (3D) display, where a set of 3M × N subpixels correspond to one of the cylindrical lenses, which constitutes a lenticular lens, to construct each 3-D pixel of a multiview display that offers M × N views.
Abstract: A flat-panel display with a slanted subpixel arrangement has been developed for a multi-view three-dimensional (3-D) display. A set of 3M × N subpixels (M × N subpixels for each R, G, and B color) corresponds to one of the cylindrical lenses, which constitutes a lenticular lens, to construct each 3-D pixel of a multi-view display that offers M × N views. Subpixels of the same color in each 3-D pixel have different horizontal positions, and the R, G, and B subpixels are repeated in the horizontal direction. In addition, the ray-emitting areas of the subpixels within a 3-D pixel are continuous in the horizontal direction for each color. One of the vertical edges of each subpixel has the same horizontal position as the opposite vertical edge of another subpixel of the same color. Cross-talk among viewing zones is theoretically zero. This structure is suitable for providing a large number of views. A liquid-crystal panel having this slanted subpixel arrangement was fabricated to construct a mobile 3-D display with 16 views and a 3-D resolution of 256 × 192. A 3-D pixel is comprised of 12 × 4 subpixels (M = 4 and N = 4). The screen size was 2.57 in.

Patent
12 Jul 2010
TL;DR: In this article, the gap between adjacent visible-light-transmitting sections of a parallax barrier is determined using the average number of subpixels that configure one pixel for three-dimensional display in one horizontal row, the width of a subpixel of the display, the distance from a predetermined diagonal-direction moire-canceling location to the parallax barrier, and the number of viewpoints of the image used for displaying a stereoscopic image.
Abstract: Moire arising in an autostereoscopic display utilizing a parallax barrier method is cancelled. The gap between visible-light-transmitting sections that are adjacent in the horizontal direction of a parallax barrier is determined using: the average number of subpixels, which configure one pixel for three-dimensional display, in one row in the horizontal direction; the width of a subpixel, which forms a display; the distance from a predetermined diagonal-direction moire canceling location to the parallax barrier; the number of viewpoints of an image used for displaying a stereoscopic image; and the distance (Z) from the image display surface of the aforementioned display to the aforementioned parallax barrier.

Journal ArticleDOI
TL;DR: The results suggest that subpixel water fractions can be accurately estimated when high-resolution satellite data or intensively interpreted training datasets are not available, which increases the ability to map small water bodies or small changes in lake size at a regional scale.
Abstract: Small bodies of water can be mapped with moderate-resolution satellite data using methods where water is mapped as subpixel fractions using field measurements or high-resolution images as training datasets. A new method, developed from a regression-tree technique, uses a 30 m Landsat image for training the regression tree that, in turn, is applied to the same image to map subpixel water. The self-trained method was evaluated by comparing the percent-water map with three other maps generated from established percent-water mapping methods: (1) a regression-tree model trained with a 5 m SPOT 5 image, (2) a regression-tree model based on endmembers and (3) a linear unmixing classification technique. The results suggest that subpixel water fractions can be accurately estimated when high-resolution satellite data or intensively interpreted training datasets are not available, which increases our ability to map small water bodies or small changes in lake size at a regional scale.
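Loosely, the fraction-mapping idea can be sketched as: aggregate a binary water mask and the reflectance bands to a coarser grid, fit a regression tree that predicts water fraction from the coarse spectra, and apply the model back at full resolution. scikit-learn's DecisionTreeRegressor stands in for the regression-tree model, the paper's self-training setup differs in detail, and names and the aggregation factor are assumptions.

```python
# Subpixel water-fraction mapping with a stand-in regression tree.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def block_average(arr, factor):
    """Average a 2-D array over non-overlapping factor x factor blocks."""
    h = (arr.shape[0] // factor) * factor
    w = (arr.shape[1] // factor) * factor
    a = arr[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return a.mean(axis=(1, 3))

def train_fraction_model(bands, water_mask, factor=4):
    """bands: (n_bands, H, W) reflectances; water_mask: (H, W) binary water map."""
    X = np.stack([block_average(b, factor).ravel() for b in bands], axis=1)
    y = block_average(water_mask.astype(float), factor).ravel()   # water fraction
    return DecisionTreeRegressor(min_samples_leaf=20, random_state=0).fit(X, y)

# Applying the trained model to the full-resolution band stack then yields a
# per-pixel percent-water estimate:
#   fractions = model.predict(np.stack([b.ravel() for b in bands], axis=1))
```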

Journal ArticleDOI
TL;DR: In this article, the inherent accuracy of the methods and the effects of various sources of error, such as noise, bias mismatch, and blurring, were evaluated and the best methods for shift measurements were based on the square difference function and the absolute difference function squared, with subpixel accuracy accomplished by use of two-dimensional quadratic interpolation.
Abstract: Context. Solar Shack–Hartmann wavefront sensors measure differential wavefront tilts as the relative shift between images from different subapertures. There are several methods in use for measuring these shifts. Aims. We evaluate the inherent accuracy of the methods and the effects of various sources of error, such as noise, bias mismatch, and blurring. We investigate whether Z-tilts or G-tilts are measured. Methods. We test the algorithms on two kinds of artificial data sets, one corresponding to images with known shifts and one corresponding to seeing with different r0. Results. Our results show that the best methods for shift measurements are based on the square difference function and the absolute difference function squared, with subpixel accuracy accomplished by use of two-dimensional quadratic interpolation. These methods measure Z-tilts rather than G-tilts.
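The recommended shift measurement can be sketched as follows: evaluate the squared-difference function on a grid of integer shifts, then refine its minimum with a two-dimensional quadratic fit over the 3 × 3 neighborhood of the best integer shift. Window sizes and the wrap-around handling are simplifications, not the paper's implementation.

```python
# Squared-difference shift estimation with a 2-D quadratic (paraboloid) refinement.
import numpy as np

def sdf_shift(reference, image, max_shift=4):
    """Shift (dy, dx) that best aligns `image` with `reference`, to subpixel precision."""
    shifts = range(-max_shift, max_shift + 1)
    sdf = np.empty((len(shifts), len(shifts)))
    for i, dy in enumerate(shifts):
        for j, dx in enumerate(shifts):
            diff = reference - np.roll(np.roll(image, dy, axis=0), dx, axis=1)
            sdf[i, j] = np.sum(diff ** 2)

    iy, ix = np.unravel_index(np.argmin(sdf), sdf.shape)
    iy = int(np.clip(iy, 1, sdf.shape[0] - 2))        # keep the 3x3 patch inside
    ix = int(np.clip(ix, 1, sdf.shape[1] - 2))
    patch = sdf[iy - 1:iy + 2, ix - 1:ix + 2]

    # Least-squares fit of f(x, y) = a + bx + cy + dx^2 + exy + fy^2 over the
    # 3x3 patch, then the analytic minimum of the fitted paraboloid.
    ys, xs = np.mgrid[-1:2, -1:2]
    A = np.stack([np.ones(9), xs.ravel(), ys.ravel(),
                  xs.ravel() ** 2, (xs * ys).ravel(), ys.ravel() ** 2], axis=1)
    a, b, c, d, e, f = np.linalg.lstsq(A, patch.ravel(), rcond=None)[0]
    sub = np.linalg.solve([[2 * d, e], [e, 2 * f]], [-b, -c])   # (x*, y*)
    return iy + sub[1] - max_shift, ix + sub[0] - max_shift
```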

Journal ArticleDOI
TL;DR: Comparisons of the experimental results show that the proposed FSFPD can preserve edges and structural details of ultrasound images well while removing speckle noise and demonstrate that the discrimination rate of breast cancers has been highly improved after employing the proposed method.

Journal ArticleDOI
TL;DR: The method is greatly simplified compared with the phase-stepping method and can significantly reduce the time-consuming scanning and possibly the unnecessary dose, and can open the way to further widespread application of phase contrast imaging, e.g., into clinical practice.
Abstract: A method for x-ray phase contrast imaging is introduced in which only one absorption grating and a microfocus x-ray source in a tabletop setup are used. The method is based on precise subpixel position determination of the x-ray pattern projected by the grating directly from the pattern image. For retrieval of the phase gradient and absorption image (both images obtained from one exposure), it is necessary to measure only one projection of the investigated object. Thus, our method is greatly simplified compared with the phase-stepping method and our method can significantly reduce the time-consuming scanning and possibly the unnecessary dose. Furthermore, the technique works with a fully polychromatic spectrum and gives ample variability in object magnification. Consequently, the approach can open the way to further widespread application of phase contrast imaging, e.g., into clinical practice. The experimental results on a simple testing object as well as on complex biological samples are presented.

Journal ArticleDOI
TL;DR: In this incoherent on-chip imaging modality, the object of interest is directly positioned onto a nanostructured thin metallic film, where the emitted light from the object plane diffracts over a short distance to be sampled by a detector-array without the use of any lenses.
Abstract: We introduce the use of nanostructured surfaces for lensfree on-chip microscopy. In this incoherent on-chip imaging modality, the object of interest is directly positioned onto a nanostructured thin metallic film, where the emitted light from the object plane, after being modulated by the nanostructures, diffracts over a short distance to be sampled by a detector-array without the use of any lenses. The detected far-field diffraction pattern then permits rapid reconstruction of the object distribution on the chip at the subpixel level using a compressive sampling algorithm. This imaging modality based on nanostructured substrates could especially be useful to create lensfree fluorescent microscopes on a compact chip.

Journal ArticleDOI
TL;DR: In this article, a theoretical analysis of the systematic error of the subpixel centroid estimation algorithm utilizing frequency domain analysis under the consideration of sampling frequency limitation and sampling window limitation is presented, and the dependence of systematic error on Gaussian width of star image, actual star centroid location and the number of sampling pixels is derived.
Abstract: Subpixel centroid estimation is the most important star-image location method for a star tracker. This paper presents a theoretical analysis of the systematic error of the subpixel centroid estimation algorithm utilizing frequency-domain analysis under the consideration of sampling frequency limitation and sampling window limitation. An explicit expression for the systematic error of centroid estimation is obtained, and the dependence of the systematic error on the Gaussian width of the star image, the actual star centroid location, and the number of sampling pixels is derived. A systematic error compensation algorithm for star centroid estimation is proposed based on the result of the theoretical analysis. Simulation results show that after compensation, the residual systematic errors of 3-pixel and 5-pixel window centroid estimation are less than 2×10⁻³ and 2×10⁻⁴ pixels, respectively.
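For reference, the plain windowed centroid estimator whose systematic error is analyzed here can be sketched as an intensity-weighted mean over an n × n window around the brightest pixel; the frequency-domain compensation proposed in the paper is not reproduced, and the background handling is an assumption.

```python
# Windowed intensity-weighted centroid of a star image.
import numpy as np

def window_centroid(image, window=5, background=0.0):
    """Star centroid (row, col) from an n x n window around the peak pixel.

    Assumes the star lies away from the image border.
    """
    half = window // 2
    pr, pc = np.unravel_index(np.argmax(image), image.shape)
    patch = image[pr - half:pr + half + 1, pc - half:pc + half + 1].astype(float)
    patch = np.clip(patch - background, 0, None)      # remove a constant background
    rows, cols = np.mgrid[-half:half + 1, -half:half + 1]
    total = patch.sum()
    return pr + (rows * patch).sum() / total, pc + (cols * patch).sum() / total
```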

Proceedings ArticleDOI
13 Jun 2010
TL;DR: This work implemented in software an algorithm that successfully executes on planar surfaces of diffuse reflectance properties at almost two frames per second with subpixel accuracy, proving the viability of the concept and paving the way for future optimization and generalization.
Abstract: Projector-camera systems use computer vision to analyze their surroundings and display feedback directly onto real world objects, as embodied by spatial augmented reality. To be effective, the display must remain aligned even when the target object moves, but the added illumination causes problems for traditional algorithms. Current solutions consider the displayed content as interference and largely depend on channels orthogonal to visible light. They cannot directly align projector images with real world surfaces, even though this may be the actual goal. We propose instead to model the light emitted by projectors and reflected into cameras, and to consider the displayed content as additional information useful for direct alignment. We implemented in software an algorithm that successfully executes on planar surfaces of diffuse reflectance properties at almost two frames per second with subpixel accuracy. Although slow, our work proves the viability of the concept, paving the way for future optimization and generalization.

Journal ArticleDOI
TL;DR: A forest cover heterogeneity map is produced that contains more detailed information on canopy heterogeneity at the CHRIS subpixel scale than is possible to realize from a single-source optical data set.
Abstract: The Compact High Resolution Imaging Spectrometer (CHRIS) mounted onboard the Project for Onboard Autonomy (PROBA) spacecraft is capable of sampling reflected radiation at five viewing angles over the visible and near-infrared regions of the solar spectrum with high spatial resolution. We combined the spectral domain with the angular domain of CHRIS data in order to map the surface heterogeneity of an Alpine coniferous forest during winter. In the spectral domain, linear spectral unmixing of the nadir image resulted in a canopy cover map. In the angular domain, pixelwise inversion of the Rahman-Pinty-Verstraete (RPV) model at a single wavelength at the red edge (722 nm) yielded a map of the Minnaert-k parameter that provided information on surface heterogeneity at a subpixel scale. However, the interpretation of the Minnaert-k parameter is not always straightforward because fully vegetated targets typically produce the same type of reflectance anisotropy as non-vegetated targets. Merging both maps resulted in a forest cover heterogeneity map, which contains more detailed information on canopy heterogeneity at the CHRIS subpixel scale than is possible to realize from a single-source optical data set.