
Showing papers on "Subpixel rendering" published in 2014


Proceedings ArticleDOI
29 Sep 2014
TL;DR: A semi-direct monocular visual odometry algorithm that is precise, robust, and faster than current state-of-the-art methods is proposed and applied to micro-aerial-vehicle state estimation in GPS-denied environments.
Abstract: We propose a semi-direct monocular visual odometry algorithm that is precise, robust, and faster than current state-of-the-art methods. The semi-direct approach eliminates the need for costly feature extraction and robust matching techniques for motion estimation. Our algorithm operates directly on pixel intensities, which results in subpixel precision at high frame-rates. A probabilistic mapping method that explicitly models outlier measurements is used to estimate 3D points, which results in fewer outliers and more reliable points. Precise and high frame-rate motion estimation brings increased robustness in scenes of little, repetitive, and high-frequency texture. The algorithm is applied to micro-aerial-vehicle state-estimation in GPS-denied environments and runs at 55 frames per second on the onboard embedded computer and at more than 300 frames per second on a consumer laptop. We call our approach SVO (Semi-direct Visual Odometry) and release our implementation as open-source software.
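
For intuition, the core of such a direct method is the minimization of photometric error over subpixel motion parameters. The Python sketch below is not the authors' SVO code; it is a minimal illustration (patch size, convergence threshold, and the pure-translation motion model are arbitrary assumptions) of refining a 2D patch position to subpixel precision with Gauss-Newton steps computed directly from pixel intensities.

```python
import numpy as np
from scipy.ndimage import map_coordinates, sobel

def refine_translation(ref, cur, patch_center, half=7, t0=(0.0, 0.0), iters=10):
    """Refine a 2D translation t so that cur(x + t) photometrically matches ref(x)
    over a small patch, to subpixel precision.

    ref, cur     : 2D float arrays (grayscale images)
    patch_center : integer (row, col) centre of the patch in `ref`, assumed to lie
                   far enough from the border for the patch to fit
    t0           : initial (row, col) translation guess, e.g. from coarse tracking
    """
    r0, c0 = patch_center
    rows, cols = np.mgrid[r0 - half:r0 + half + 1, c0 - half:c0 + half + 1]
    template = ref[r0 - half:r0 + half + 1, c0 - half:c0 + half + 1].astype(float)

    # Precompute image gradients of the current frame (normalized Sobel).
    grad_r = sobel(cur, axis=0) / 8.0
    grad_c = sobel(cur, axis=1) / 8.0

    t = np.asarray(t0, dtype=float)
    for _ in range(iters):
        coords = np.stack([rows + t[0], cols + t[1]])
        warped = map_coordinates(cur, coords, order=1)        # bilinear intensity sampling
        jr = map_coordinates(grad_r, coords, order=1)
        jc = map_coordinates(grad_c, coords, order=1)
        residual = (warped - template).ravel()
        J = np.stack([jr.ravel(), jc.ravel()], axis=1)        # d(residual)/d(t)
        delta, *_ = np.linalg.lstsq(J, -residual, rcond=None) # Gauss-Newton step
        t += delta
        if np.linalg.norm(delta) < 1e-4:                      # converged well below a pixel
            break
    return t
```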

1,814 citations


Book ChapterDOI
02 Sep 2014
TL;DR: A structured lighting system is presented for creating high-resolution stereo datasets of static indoor scenes with highly accurate ground-truth disparities, using novel techniques for efficient 2D subpixel correspondence search and self-calibration of cameras and projectors with modeling of lens distortion.
Abstract: We present a structured lighting system for creating high-resolution stereo datasets of static indoor scenes with highly accurate ground-truth disparities. The system includes novel techniques for efficient 2D subpixel correspondence search and self-calibration of cameras and projectors with modeling of lens distortion. Combining disparity estimates from multiple projector positions, we are able to achieve a disparity accuracy of 0.2 pixels on most observed surfaces, including in half-occluded regions. We contribute 33 new 6-megapixel datasets obtained with our system and demonstrate that they present new challenges for the next generation of stereo algorithms.

1,071 citations


Journal ArticleDOI
TL;DR: The problem of view synthesis is formulated as a continuous inverse problem, which allows us to correctly take into account foreshortening effects caused by scene geometry transformations, and all optimization problems are solved with state-of-the-art convex relaxation techniques.
Abstract: We develop a continuous framework for the analysis of 4D light fields, and describe novel variational methods for disparity reconstruction as well as spatial and angular super-resolution. Disparity maps are estimated locally using epipolar plane image analysis without the need for expensive matching cost minimization. The method works fast and with inherent subpixel accuracy since no discretization of the disparity space is necessary. In a variational framework, we employ the disparity maps to generate super-resolved novel views of a scene, which corresponds to increasing the sampling rate of the 4D light field in both the spatial and angular directions. In contrast to previous work, we formulate the problem of view synthesis as a continuous inverse problem, which allows us to correctly take into account foreshortening effects caused by scene geometry transformations. All optimization problems are solved with state-of-the-art convex relaxation techniques. We test our algorithms on a number of real-world examples as well as our new benchmark data set for light fields, and compare results to a multiview stereo method. The proposed method is both faster and more accurate. Data sets and source code are provided online for additional evaluation.
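
To make the epipolar-plane-image (EPI) idea concrete: a scene point traces a straight line in the EPI whose slope equals its disparity, so estimating the local line orientation yields a subpixel disparity without any discrete disparity search. The sketch below recovers that orientation from the structure tensor; it is a simplified single-orientation illustration, not the authors' variational implementation, and the smoothing scale `sigma` is an arbitrary choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def epi_disparity(epi, sigma=1.5, eps=1e-8):
    """Estimate per-pixel disparity from one epipolar plane image (EPI).

    epi : 2D float array indexed as (view s, image column x).  A scene point traces
          a straight line in the EPI whose slope dx/ds equals its disparity.
    """
    gs = sobel(epi, axis=0) / 8.0              # derivative along the view axis s
    gx = sobel(epi, axis=1) / 8.0              # derivative along the spatial axis x

    # Smoothed 2x2 structure tensor at every EPI pixel.
    J = np.empty(epi.shape + (2, 2))
    J[..., 0, 0] = gaussian_filter(gs * gs, sigma)
    J[..., 0, 1] = J[..., 1, 0] = gaussian_filter(gs * gx, sigma)
    J[..., 1, 1] = gaussian_filter(gx * gx, sigma)

    # The eigenvector of the smallest eigenvalue points along the EPI lines
    # (direction of least intensity change); its slope is the disparity.
    _, vecs = np.linalg.eigh(J)                # eigenvalues in ascending order
    v_s = vecs[..., 0, 0]                      # s-component of that eigenvector
    v_x = vecs[..., 1, 0]                      # x-component
    v_s = np.where(np.abs(v_s) < eps, eps, v_s)
    return v_x / v_s                           # dx/ds, in pixels per view step
```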

575 citations


Journal ArticleDOI
TL;DR: A novel supervised metric learning (SML) algorithm is proposed, which can effectively learn a distance metric for hyperspectral target detection, by which target pixels are easily detected in positive space while the background pixels are pushed into negative space as far as possible.
Abstract: The detection and identification of target pixels such as certain minerals and man-made objects from hyperspectral remote sensing images is of great interest for both civilian and military applications. However, due to the restriction in the spatial resolution of most airborne or satellite hyperspectral sensors, the targets often appear as subpixels in the hyperspectral image (HSI). The observed spectral feature of the desired target pixel (positive sample) is therefore a mixed signature of the reference target spectrum and the spectra of the background pixels (negative samples), which belong to various land cover classes. In this paper, we propose a novel supervised metric learning (SML) algorithm, which can effectively learn a distance metric for hyperspectral target detection, by which target pixels are easily detected in positive space while the background pixels are pushed into negative space as far as possible. The proposed SML algorithm first maximizes the distance between the positive and negative samples by an objective function of the supervised distance maximization. Then, by considering the variety of the background spectral features, we put a similarity propagation constraint into the SML to simultaneously link the target pixels with positive samples, as well as the background pixels with negative samples, which helps to reject false alarms in the target detection. Finally, a manifold smoothness regularization is imposed on the positive samples to preserve their local geometry in the obtained metric. Based on the public data sets of mineral detection in an Airborne Visible/Infrared Imaging Spectrometer image and fabric and vehicle detection in a Hyperspectral Mapper image, quantitative comparisons of several HSI target detection methods, as well as some state-of-the-art metric learning algorithms, were performed. All the experimental results demonstrate the effectiveness of the proposed SML algorithm for hyperspectral target detection.

183 citations


Journal ArticleDOI
TL;DR: A coded aperture with subpixel features is employed to modulate the energy spectrum of coherently scattered photons and recover the object properties using an iterative inversion algorithm based on compressed sensing theory.
Abstract: We present a method for realizing snapshot, depth-resolved material identification using only a single, energy-sensitive pixel. To achieve this result, we employ a coded aperture with subpixel features to modulate the energy spectrum of coherently scattered photons and recover the object properties using an iterative inversion algorithm based on compressed sensing theory. We demonstrate high-fidelity object estimation at x-ray wavelengths for a variety of compression ratios exceeding unity.

79 citations


Journal ArticleDOI
TL;DR: An adaptive subpixel mapping framework based on a multiagent system for remote-sensing imagery is proposed, and experimental results indicate that the proposed algorithm outperforms the other two subpixel mapping algorithms in reconstructing the different structures in mixed pixels.
Abstract: The existence of mixed pixels is a major problem in remote-sensing image classification. Although the soft classification and spectral unmixing techniques can obtain an abundance of different classes in a pixel to solve the mixed pixel problem, the subpixel spatial attribution of the pixel will still be unknown. The subpixel mapping technique can effectively solve this problem by providing a fine-resolution map of class labels from coarser spectrally unmixed fraction images. However, most traditional subpixel mapping algorithms treat all mixed pixels as an identical type, either boundary-mixed pixel or linear subpixel, leading to incomplete and inaccurate results. To improve the subpixel mapping accuracy, this paper proposes an adaptive subpixel mapping framework based on a multiagent system for remote-sensing imagery. In the proposed multiagent subpixel mapping framework, three kinds of agents, namely, feature detection agents, subpixel mapping agents and decision agents, are designed to solve the subpixel mapping problem. Experiments with artificial images and synthetic remote-sensing images were performed to evaluate the performance of the proposed subpixel mapping algorithm in comparison with the hard classification method and other subpixel mapping algorithms: subpixel mapping based on a back-propagation neural network and the spatial attraction model. The experimental results indicate that the proposed algorithm outperforms the other two subpixel mapping algorithms in reconstructing the different structures in mixed pixels.

76 citations


Journal ArticleDOI
TL;DR: A novel subpixel registration algorithm with Gaussian windows is proposed, which implicitly optimizes the subset sizes by adjusting the shape of the Gaussian windows in a self-adaptive fashion, with the aid of a so-called weighted zero-normalized sum-of-squared-difference correlation criterion.

75 citations


Journal ArticleDOI
TL;DR: A class allocation method for STHSPM algorithms is presented that allocates classes in units of class (UOC); UOC provides effective and real-time allocation and produces higher SPM accuracy than UOS and HAVF.
Abstract: There is a type of algorithm for subpixel mapping (SPM), namely, the soft-then-hard SPM (STHSPM) algorithm, that first estimates soft attribute values for land cover classes at the subpixel scale level and then allocates classes (i.e., hard attribute values) for subpixels according to the soft attribute values. This paper presents a novel class allocation approach for STHSPM algorithms, which allocates classes in units of class (UOC). First, a visiting order for all classes is predetermined, and the number of subpixels belonging to each class is calculated using the coarse fraction data. Then, according to the visiting order, the subpixels belonging to the class currently being visited are determined by comparing the soft attribute values of this class, and the remaining subpixels are used for the allocation of the next class. The process is terminated when each subpixel is allocated to a class. UOC was tested on three remote sensing images with five STHSPM algorithms: back-propagation neural network, Hopfield neural network, subpixel/pixel spatial attraction model, kriging, and indicator cokriging. UOC was also compared with three existing allocation methods, i.e., the linear optimization technique (LOT), sequential assignment in units of subpixel (UOS), and a method that assigns subpixels with the highest soft attribute values first (HAVF). Results show that for all STHSPM algorithms, UOC is able to produce higher SPM accuracy than UOS and HAVF; compared with LOT, UOC is able to achieve at least comparable accuracy but needs much less computing time. Hence, UOC provides an effective and real-time class allocation method for STHSPM algorithms.
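
The allocation loop can be paraphrased directly from this description. The sketch below is a hedged re-implementation based only on the abstract (integer class labels are assumed, and the soft attribute maps, per-class subpixel counts, and visiting order are assumed to be supplied by an upstream STHSPM algorithm); it is not the authors' code.

```python
import numpy as np

def allocate_units_of_class(soft_maps, class_counts, visiting_order):
    """Allocate subpixel class labels in units of class (UOC).

    soft_maps      : dict {class label (int): 2D array of soft attribute values
                     at the subpixel scale}, all arrays the same shape
    class_counts   : dict {class label: number of subpixels that class should
                     receive, derived from the coarse fraction data}
    visiting_order : list of class labels, visited in this fixed order
    Returns a 2D integer array of class labels (-1 where nothing was assigned).
    """
    shape = next(iter(soft_maps.values())).shape
    labels = np.full(shape, -1, dtype=int)
    unassigned = np.ones(shape, dtype=bool)

    for cls in visiting_order:
        scores = np.where(unassigned, soft_maps[cls], -np.inf).ravel()
        n = min(class_counts[cls], int(unassigned.sum()))
        if n <= 0:
            continue
        # The n still-unassigned subpixels with the highest soft values get this class;
        # the rest remain available for the classes visited later.
        winners = np.argpartition(scores, -n)[-n:]
        rows, cols = np.unravel_index(winners, shape)
        labels[rows, cols] = cls
        unassigned[rows, cols] = False
    return labels
```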

74 citations


Patent
17 Jun 2014
TL;DR: In this paper, the authors use a micro light emitting diode (LED) in an active matrix display to emit light and a sensing IR diode to sense light, and a display panel includes a display substrate having a display region, an array of subpixel circuits, and array of selection devices.
Abstract: Exemplary methods and systems use a micro light emitting diode (LED) in an active matrix display to emit light and a sensing IR diode to sense light. A display panel includes a display substrate having a display region, an array of subpixel circuits, and an array of selection devices. Each subpixel circuit includes a driving circuit to operate a corresponding infrared (IR) emitting LED in a light emission mode. Each selection device may be coupled to a corresponding sensing IR diode to operate the corresponding sensing IR diode in a light sensing mode.

66 citations


Book ChapterDOI
06 Sep 2014
TL;DR: Quantitative comparisons to OpenCV’s checkerboard detector show that the proposed method detects up to 80% more checkerboards and detects corner points more accurately, even under strong perspective distortion as often present in wide baseline stereo setups.
Abstract: We present a new checkerboard detection algorithm which is able to detect checkerboards at extreme poses, or checkerboards which are highly distorted due to lens distortion even on low-resolution images. On the detected pattern we apply a surface fitting based subpixel refinement specifically tailored for checkerboard X-junctions. Finally, we investigate how the accuracy of a checkerboard detector affects the overall calibration result in multi-camera setups. The proposed method is evaluated on real images captured with different camera models to show its wide applicability. Quantitative comparisons to OpenCV’s checkerboard detector show that the proposed method detects up to 80% more checkerboards and detects corner points more accurately, even under strong perspective distortion as often present in wide baseline stereo setups.

58 citations


Journal ArticleDOI
TL;DR: The proposed technique is capable of performing a full-field 3D shape measurement with high accuracy even in the presence of discontinuities and multiple separate regions.
Abstract: 3D shape measurement has emerged as a very useful tool in numerous fields because of its wide and ever-increasing applications. In this paper, we present a passive, fast, and accurate 3D shape measurement technique using a stereo vision approach. The technique first employs a scale-invariant feature transform algorithm to detect point matches at a number of discrete locations despite the discontinuities in the images. Then an automated image registration algorithm is applied to find full-field point matches with subpixel accuracy. After that, the 3D shapes of the objects can be reconstructed from the obtained point matches and the camera information. The proposed technique is capable of performing a full-field 3D shape measurement with high accuracy even in the presence of discontinuities and multiple separate regions. The validity is verified by experiments.

Journal ArticleDOI
TL;DR: In this article, a variational surface reconstruction method is proposed to increase the lateral resolution of the DEM such that it reaches that of the underlying images, and an illumination-independent image registration scheme is developed.

Journal ArticleDOI
TL;DR: An example-based SRM model using support vector regression (SVR_SRM) is proposed, which can generate fine resolution land cover maps with more detailed spatial information and higher accuracy at different spatial scales.
Abstract: Super-resolution mapping (SRM) is a promising technique to generate a fine resolution land cover map from coarse fractional images by predicting the spatial locations of different land cover classes at subpixel scale. In most cases, SRM is accomplished by using the spatial dependence principle, which is a simple method to describe the spatial patterns of different land cover classes. However, the spatial dependence principle used in existing SRM models does not fully reflect the real-world situations, making the resultant fine resolution land cover map often have uncertainty. In this paper, an example-based SRM model using support vector regression (SVR_SRM) was proposed. Without directly using an explicit formulation to describe the prior information about the subpixel spatial pattern, SVR_SRM generates a fine resolution land cover map from coarse fractional images, by learning the nonlinear relationships between the coarse fractional pixels and corresponding labeled subpixels from the selected best-match training data. Based on the experiments of two subset images of National Land Cover Database (NLCD) 2001 and a subset of real hyperspectral Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) image, the performance of SVR_SRM was evaluated by comparing with the traditional pixel-based hard classification (HC) and several existing typical SRM algorithms. The results show that SVR_SRM can generate fine resolution land cover maps with more detailed spatial information and higher accuracy at different spatial scales.

Journal ArticleDOI
TL;DR: Preliminary analysis of example images has shown that this system has potential for crop condition assessment, pest detection, and other agricultural applications.
Abstract: This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One camera captures normal color images, while the other is modified to obtain near-infrared (NIR) images. The color camera is also equipped with a GPS receiver to allow geotagged images. A remote control is used to trigger both cameras simultaneously. Images are stored in 14-bit RAW and 8-bit JPEG files in CompactFlash cards. The second-order transformation was used to align the color and NIR images to achieve subpixel alignment in four-band images. The imaging system was tested under various flight and land cover conditions and optimal camera settings were determined for airborne image acquisition. Images were captured at altitudes of 305–3050 m (1000–10,000 ft) and pixel sizes of 0.1–1.0 m were achieved. Four practical application examples are presented to illustrate how the imaging system was used to estimate cotton canopy cover, detect cotton root rot, and map henbit and giant reed infestations. Preliminary analysis of example images has shown that this system has potential for crop condition assessment, pest detection, and other agricultural applications.
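
The "second-order transformation" used to align the color and NIR bands is a quadratic polynomial warp fitted to matched control points. The generic least-squares version below is only an illustration (the control-point arrays and function names are hypothetical, and this is not the authors' processing chain).

```python
import numpy as np

def fit_second_order_transform(src_pts, dst_pts):
    """Fit a second-order (quadratic) polynomial mapping src -> dst.

    src_pts, dst_pts : (N, 2) arrays of matched control points, N >= 6.
    Returns a (6, 2) coefficient matrix C such that
        [x', y'] = [1, x, y, x*x, x*y, y*y] @ C
    """
    x, y = src_pts[:, 0], src_pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    C, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)   # least-squares fit
    return C

def apply_second_order_transform(C, pts):
    """Map points through the fitted quadratic transform (subpixel output)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    return A @ C
```

One band is then resampled onto the other's grid using the fitted mapping, which is what makes subpixel band-to-band alignment achievable from a handful of well-distributed control points.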

Book ChapterDOI
06 Sep 2014
TL;DR: A comprehensive statistical evaluation of selected state-of-the-art stereo matching approaches on an extensive dataset is presented and reference values for the precision limits actually achievable in practice are established.
Abstract: Modern applications of stereo vision, such as advanced driver assistance systems and autonomous vehicles, require highest precision when determining the location and velocity of potential obstacles. Subpixel disparity accuracy in selected image regions is therefore essential. Evaluation benchmarks for stereo correspondence algorithms, such as the popular Middlebury and KITTI frameworks, provide important reference values regarding dense matching performance, but do not sufficiently treat local subpixel matching accuracy. In this paper, we explore this important aspect in detail. We present a comprehensive statistical evaluation of selected state-of-the-art stereo matching approaches on an extensive dataset and establish reference values for the precision limits actually achievable in practice. For a carefully calibrated camera setup under real-world imaging conditions, a consistent error limit of 1/10 pixel is determined. We present guidelines on algorithmic choices derived from theory which turn out to be relevant to achieving this limit in practice.
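
A common route to fractional-pixel disparity is to interpolate the matching cost around the winning integer disparity. The parabola fit below is a standard textbook refinement shown only as background; it is not tied to any specific algorithm evaluated in the paper.

```python
import numpy as np

def subpixel_disparity(costs, d_best):
    """Refine an integer disparity by fitting a parabola to the matching cost.

    costs  : 1D array of matching costs indexed by integer disparity
    d_best : index of the minimum cost (the winning integer disparity)
    Returns the disparity with a fractional offset in (-0.5, 0.5).
    """
    if d_best <= 0 or d_best >= len(costs) - 1:
        return float(d_best)                 # no neighbours to interpolate with
    c_m, c_0, c_p = costs[d_best - 1], costs[d_best], costs[d_best + 1]
    denom = c_m - 2.0 * c_0 + c_p
    if denom <= 0:
        return float(d_best)                 # degenerate (flat or non-convex) cost
    offset = 0.5 * (c_m - c_p) / denom       # vertex of the fitted parabola
    return d_best + offset
```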

Journal ArticleDOI
TL;DR: The aim of this letter is to present a spatio-temporal pixel-swapping algorithm (STPSA), in which both spatial and temporal contextual information from previous land cover maps or observed samples is integrated and utilized to improve subpixel mapping accuracy.
Abstract: The aim of this letter is to present a spatio-temporal pixel-swapping algorithm (STPSA), based on conventional pixel-swapping algorithms (PSAs), in which both spatial and temporal contextual information from previous land cover maps or observed samples is integrated and utilized to improve subpixel mapping accuracy. Unlike conventional pixel-swapping algorithms, STPSA is capable of utilizing prior information, which was previously ignored, to predict the attractiveness of pairs of subpixels. This algorithm involves three main steps and operates in an iterative manner: 1) it predicts the maximum and minimum attractiveness of each pair of subpixels; 2) ranks the swapping scores based on the attractiveness of all the pairs; and 3) swaps the locations of the pair of subpixels with the maximum score to increase the objective function. Experiments with actual satellite images have demonstrated that the proposed algorithm performs better than other algorithms. This better performance stems from the fact that the prior information used in other algorithms is restricted to the fraction (percentage) level rather than the real subpixel level.

Journal ArticleDOI
TL;DR: This paper proposes a method to create “epipolar” geometry for arbitrary stereo configurations of any SAR sensor through appropriate geometric image transformations, so that the semiglobal matching (SGM) algorithm, which is restricted to epipolar geometry and is thus known to be highly efficient, can be applied.
Abstract: For stereometric processing of optical image pairs, the concept of epipolar geometry is widely used. It helps to reduce the complexity of image matching, which can be seen to be the most crucial step within a workflow to generate digital elevation models. In this paper, it is shown that this concept is also applicable to the cocircular geometry of synthetic aperture radar (SAR) image pairs. First, it is proven that, for any feasible SAR acquisition, the deviation from true epipolar geometry is within subpixel range and therefore acceptably small. Based on this, we propose a method to create “epipolar” geometry for arbitrary stereo configurations of any SAR sensor through appropriate geometric image transformations. Consequently, the semiglobal matching (SGM) algorithm can be applied, which is restricted to epipolar geometry and is thus known to be highly efficient. This innovative approach, integrating both epipolar transformation and SGM, has been applied to a TerraSAR-X stereo data set. Its benefit has been demonstrated in a comparative assessment with respect to results, which have been previously achieved on the same test data using state-of-the-art stereometric methods.

Journal ArticleDOI
TL;DR: This letter first detects image patches within bright PT by using a sinc-like template on a single SAR image and then performs offset tracking on them to obtain the pixel shifts; the proposed PT offset tracking is shown to significantly increase the cross-correlation and thus improve both efficiency and reliability.
Abstract: Offset tracking is an important complement for measuring large ground displacements in both the azimuth and range dimensions where synthetic aperture radar (SAR) interferometry is unfeasible. Subpixel offsets can be obtained by searching for the cross-correlation peak calculated from the match patches uniformly distributed on two SAR images. However, it has its limitations, including redundant computation and incorrect estimations on decorrelated patches. In this letter, we propose a simple strategy that performs offset tracking on detected point-like targets (PT). We first detect image patches within bright PT by using a sinc-like template from a single SAR image and then perform offset tracking on them to obtain the pixel shifts. Compared with the standard method, the application to the 2010 M 7.2 El Mayor-Cucapah earthquake shows that the proposed PT offset tracking can significantly increase the cross-correlation and thus result in both efficiency and reliability improvements.
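
The offset-tracking step itself, finding the cross-correlation peak between two co-located patches and refining it to subpixel precision, can be sketched generically as follows. This is a simplified single-patch illustration with an arbitrary search radius, not the authors' SAR processing chain.

```python
import numpy as np

def patch_offset(master, slave, max_shift=8):
    """Estimate the (row, col) shift between two co-located image patches by
    normalized cross-correlation over integer shifts, refined to subpixel
    precision with a parabola fit along each axis.  Returns (dr, dc, peak_cc)."""
    h, w = master.shape
    # Correlate the interior of `master` against shifted windows of `slave`
    # so every tested shift stays inside the patch (no wrap-around).
    tpl = master[max_shift:h - max_shift, max_shift:w - max_shift].astype(float)
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-12)
    size = 2 * max_shift + 1
    cc = np.empty((size, size))
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            win = slave[max_shift + dr:h - max_shift + dr,
                        max_shift + dc:w - max_shift + dc].astype(float)
            win = (win - win.mean()) / (win.std() + 1e-12)
            cc[dr + max_shift, dc + max_shift] = np.mean(tpl * win)

    r, c = np.unravel_index(np.argmax(cc), cc.shape)

    def refine(c_m, c_0, c_p):
        """Subpixel offset of a correlation maximum from three samples."""
        denom = c_m - 2.0 * c_0 + c_p
        return 0.0 if denom >= 0 else 0.5 * (c_m - c_p) / denom

    dr = r - max_shift + (refine(cc[r - 1, c], cc[r, c], cc[r + 1, c])
                          if 0 < r < size - 1 else 0.0)
    dc = c - max_shift + (refine(cc[r, c - 1], cc[r, c], cc[r, c + 1])
                          if 0 < c < size - 1 else 0.0)
    return dr, dc, cc[r, c]
```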

Journal ArticleDOI
TL;DR: This paper targets automatic restoration of video clips that are corrupted by fence-like occlusions during capture; several novel tools are developed, including soft fence detection, a weighted truncated optical flow method, and a robust temporal median filter.
Abstract: This paper describes and provides an initial solution to a novel video editing task, i.e., video de-fencing. It targets automatic restoration of the video clips that are corrupted by fence-like occlusions during capture. Our key observation lies in the visual parallax between fences and background scenes, which is caused by the fact that the former are typically closer to the camera. Unlike in traditional image inpainting, fence-occluded pixels in the videos tend to appear later in the temporal dimension and are therefore recoverable via optimized pixel selection from relevant frames. To eventually produce fence-free videos, major challenges include cross-frame subpixel image alignment under diverse scene depth, and correct pixel selection that is robust to dominating fence pixels. Several novel tools are developed in this paper, including soft fence detection, a weighted truncated optical flow method, and a robust temporal median filter. The proposed algorithm is validated on several real-world video clips with fences.

Journal ArticleDOI
TL;DR: Experimental results showed that more accurate SPM results can be generated with MSI than with a single observed coarse image in conventional ICK-based SPM, and that the accuracy of the proposed method is higher than that of the existing Hopfield neural network (HNN)-based SPM and the HNN with MSI.
Abstract: Subpixel mapping (SPM) is a technique for predicting the spatial distribution of land cover classes in remote sensing images at a finer spatial resolution level than those of the input images. Indicator cokriging (ICK) has been found to be an effective and efficient SPM method. The accuracy of this model, however, is limited by insufficient constraints. In this paper, the accuracy of the ICK-based SPM model is enhanced by using additional information gained from multiple shifted images (MSIs). First, each shifted image is utilized to compute the conditional probability of class occurrence at any fine spatial resolution pixel (i.e., subpixel) using ICK, and a set of conditional probability maps for all classes are generated for each image. The multiple ICK-derived conditional probability maps are then integrated, according to the estimated subpixel shifts of MSI. Lastly, class allocation at the subpixel scale is implemented to produce SPM results. The proposed algorithm was tested on two synthetic coarse spatial resolution remote sensing images and a set of real Moderate Resolution Imaging Spectroradiometer (MODIS) data. It was evaluated both visually and quantitatively. The experimental results showed that more accurate SPM results can be generated with MSI than with a single observed coarse image in conventional ICK-based SPM. In addition, the accuracy of the proposed method is higher than that of the existing Hopfield neural network (HNN)-based SPM and the HNN with MSI.

Journal ArticleDOI
TL;DR: A revised HNN-based SRM with anisotropic spatial dependence model (HNNA) is proposed and results showed that the HNNA can generate more accurate superresolution maps than a traditional HNN model.
Abstract: Superresolution mapping (SRM) based on the Hopfield neural network (HNN) is a technique that produces land cover maps with a finer spatial resolution than the input land cover fraction images. In HNN-based SRM, it is assumed that the spatial dependence of land cover classes is homogeneous. HNN-based SRM uses an isotropic spatial dependence model and gives equal weights to neighboring subpixels in the neighborhood system. However, the spatial dependence directions of different land cover classes are discarded. In this letter, a revised HNN-based SRM with anisotropic spatial dependence model (HNNA) is proposed. The Sobel operator is applied to detect the gradient magnitude and direction of each fraction image at each coarse-resolution pixel. The gradient direction is used to determine the direction of subpixel spatial dependence. The gradient magnitude is used to determine the weights of neighboring subpixels in the neighborhood system. The HNNA was examined on synthetic images with artificial shapes, a synthetic IKONOS image, and a real Landsat multispectral image. Results showed that the HNNA can generate more accurate superresolution maps than a traditional HNN model.

Journal ArticleDOI
TL;DR: An auto-stereoscopic 3D display method with a narrow structure pitch and densely packed viewpoints is presented, in which a lenticular lens array whose pitch covers 5.333 subpixels and a novel subpixel arrangement method are designed.
Abstract: An auto-stereoscopic three-dimensional (3D) display method with a narrow structure pitch and densely packed viewpoints is presented. Normally, the number of views is proportional to the structure pitch of the lenticular lens array, and increasing the density of views decreases the spatial display resolution. Here, a lenticular lens array with one pitch covering 5.333 subpixels and a novel subpixel arrangement method are designed, and a 32-view 3D display is demonstrated. Compared with the traditional 6-view 3D display, the angular resolution and the displayed depth of field are significantly improved.

Journal ArticleDOI
TL;DR: A locally adaptive unmixing (LAU) method is presented to extract lake-water area from 250 m MODIS images; it is not only locally adaptive in endmember selection but also independent of the number of bands.

Journal ArticleDOI
TL;DR: An optimized stereo-matching method is presented for an active binocular three-dimensional measurement system; it projects a binary fringe pattern in combination with a series of N binary band-limited patterns and evaluates a similarity measure using logical comparison instead of a complicated floating-point operation.
Abstract: In this paper, we develop an optimized stereo-matching method used in an active binocular three-dimensional measurement system. A traditional dense stereo-matching algorithm is time-consuming due to a long search range and the high complexity of the similarity evaluation. We project a binary fringe pattern in combination with a series of N binary band-limited patterns. In order to prune the search range, we execute an initial matching before exhaustive matching, and we evaluate the similarity measure using logical comparison instead of a complicated floating-point operation. Finally, an accurate point cloud can be obtained by triangulation methods and subpixel interpolation. The experimental results verify the computational efficiency and matching accuracy of the method.
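
The "logical comparison" idea amounts to packing the N binary observations of each pixel into an integer code and comparing candidate matches with XOR plus a bit count (a Hamming distance), so no floating-point similarity measure is needed. The sketch below illustrates that idea under assumed array layouts; it is not the authors' implementation.

```python
import numpy as np

def binary_codes(patterns, threshold=0.5):
    """Pack N thresholded pattern observations per pixel into one 64-bit code.

    patterns : array of shape (N, H, W) with the N captured patterns (assumes N <= 62).
    """
    bits = (patterns > threshold).astype(np.uint64)
    weights = (1 << np.arange(patterns.shape[0])).astype(np.uint64)
    return np.tensordot(weights, bits, axes=1)            # (H, W) integer codes

def hamming_cost(code_left_row, code_right_row, d):
    """Matching cost for disparity d on one scanline: number of differing bits
    between left codes and right codes shifted by d (pure integer logic)."""
    xored = code_left_row[d:] ^ code_right_row[:len(code_right_row) - d]
    bytes_view = xored.view(np.uint8).reshape(len(xored), -1)
    return np.unpackbits(bytes_view, axis=1).sum(axis=1)  # popcount per pixel
```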

Journal ArticleDOI
TL;DR: In this letter, multiple subpixel shifted images (MSIs) were utilized to increase the accuracy of subpixel mapping (SPM), based on fast bilinear and bicubic interpolation.
Abstract: In this letter, multiple subpixel shifted images (MSIs) were utilized to increase the accuracy of subpixel mapping (SPM), based on the fast bilinear and bicubic interpolation. First, each coarse spatial resolution image of MSI is soft classified to obtain class fraction images. Using bilinear or bicubic interpolation, all fraction images of MSI are upsampled to the desired fine spatial resolution. The multiple fine spatial resolution images for each class are then integrated. Finally, the integrated fine spatial resolution images are used to allocate hard class labels to subpixels. Experiments on two remote sensing images showed that, with MSI, both bilinear and bicubic interpolation-based SPMs are more accurate. The new methods are fast and do not need any prior spatial structure information.
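
A minimal single-image version of this idea (omitting the multiple-shifted-image integration, which additionally needs the estimated subpixel shifts) upsamples each class fraction image with bilinear or bicubic interpolation and assigns every subpixel the class with the largest interpolated fraction. The sketch below is illustrative only, not the authors' method.

```python
import numpy as np
from scipy.ndimage import zoom

def interpolation_spm(fraction_images, scale):
    """Simple soft-then-hard subpixel mapping from coarse class fractions.

    fraction_images : array of shape (n_classes, H, W) with per-class fractions
                      from soft classification / spectral unmixing
    scale           : integer zoom factor (subpixels per coarse pixel side)
    Returns an (H*scale, W*scale) map of winning class indices.
    """
    upsampled = np.stack([
        zoom(f, scale, order=1)      # order=1 -> bilinear; order=3 -> bicubic
        for f in fraction_images
    ])
    return np.argmax(upsampled, axis=0)
```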

Patent
08 Aug 2014
TL;DR: A display medium is described that includes a pixel layer containing subpixels for different optical bands composed of nano-scale structures and an intensity control layer that can pattern the luminance of the subpixels.
Abstract: A display media including a pixel layer containing subpixels for different optical bands composed of nano-scale structures and an intensity control layer that can pattern the luminance of the subpixels. The display media includes a substrate layer, a sub-wavelength substrate supported by the substrate layer and including subpixels, each subpixel defined by at least one sub-wavelength structure having at least one specific optical property including a specific optical band, at least two of the subpixels having a different specific optical property, and an intensity control layer to individually control an amount of luminance of each individual subpixel in a pattern. Some of the subpixels may have colors that define a color space, while some other subpixels may have an invisible radiation spectrum band. For example, the display media can allow both overt information (color images) and covert information to be embedded together with high density.

Patent
30 Jul 2014
TL;DR: This utility model proposes a naked-eye 3D display system based on the Unity 3D game engine; the system comprises a shader that is programmed and runs on the GPU according to the subpixel projection matrix of a calculated viewpoint.
Abstract: The utility model proposes a naked-eye 3D display system based on the Unity 3D game engine. The system comprises a shader that is programmed and runs on the GPU according to the subpixel projection matrix of a calculated viewpoint; the shader comprises a vertex shader and a fragment shader. Multiple stereo cameras are built in the Unity 3D software and arranged in a specific structural configuration. Finally, the render texture output by each stereo camera is sampled and fused by the shader, and the resulting composite image is output to a naked-eye 3D display screen, thereby achieving naked-eye 3D display.

Journal ArticleDOI
TL;DR: This article describes a method proposed by Kolmogorov and Zabih in 2001, which puts forward an energy-based formulation to minimize a four-term energy; one noteworthy feature of this method is that it handles occlusion.
Abstract: Binocular stereovision estimates the three-dimensional shape of a scene from two photographs taken from different points of view. In rectified epipolar geometry, this is equivalent to a matching problem. This article describes a method proposed by Kolmogorov and Zabih in 2001, which puts forward an energy-based formulation. The aim is to minimize a four-term energy. This energy is not convex and cannot be minimized except among a class of perturbations called expansion moves, in which case an exact minimization can be done with graph-cut techniques. One noteworthy feature of this method is that it handles occlusion: the algorithm detects points that cannot be matched with any point in the other image. In this method, displacements are pixel-accurate (no subpixel refinement).

Journal ArticleDOI
TL;DR: This paper proposes a novel method for localization optimization of control points for robust calibration of a pinhole model camera that focuses on estimating the optimal control points in regions of plausibility determined by distortion bias from perspective distortion, lens distortion, and localization bias from out-of-focus blurring.
Abstract: This paper proposes a novel method for localization optimization of control points for robust calibration of a pinhole model camera. Instead of performing accurate subpixel control point detection by specialized operators, which are normally adopted in conventional work, our proposed method concentrates on estimating the optimal control points in regions of plausibility determined by distortion bias from perspective distortion and lens distortion, and localization bias from out-of-focus blurring. With this method, the two main strict preconditions for camera calibration in conventional work are relaxed. The first is that the input images for calibration are assumed to be well focused, and the second is that each individual control point needs to be detected with high accuracy. In our work, we formulate the accurate determination of control point localization as an optimization process. This optimization process takes the determined uncertainty area of the control points as input. A global searching algorithm combined with the Levenberg-Marquardt optimization algorithm is introduced for searching for the optimal control points and refining the camera parameters. Experimental results show that the proposed method achieves higher accuracy than the conventional methods.

Proceedings ArticleDOI
12 May 2014
TL;DR: A method for subpixel straight-line detection is presented and tested on images taken from the reticle of a dark-field autocollimator; for this type of image content, Gaussian fitting is reported to have smaller uncertainty when cameras with two different sensors are used.
Abstract: External visual interfaces for high-precision measuring devices are based on the segmentation of images of their measuring reticle. In this paper, a method for subpixel straight-line detection is presented and tested on images taken from the reticle of a dark-field autocollimator. The method has three steps: the sharpening of the image using a version of the Savitzky-Golay filter for smoothing and differentiation, the construction of a coarse edge image using Sobel filters, and finally, the subpixel edge location determination, by fitting a Gaussian function to orthogonal sections of the coarse edge image. We discuss results of applying the proposed method to images of the reticle of a Nikon 6D autocollimator, using the scale of the device as a benchmark for testing the error in the location of the lines, and compare them with Sobel/Hough and Sobel/polynomial fitting. We report that for this type of image content, Gaussian fitting has smaller uncertainty when cameras with two different sensors are used.
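
The final step, fitting a Gaussian to each orthogonal section of the coarse edge image and taking the fitted centre as the subpixel location, reduces to a 1D curve fit. The sketch below shows that step in isolation; the profile extraction and parameterization are simplified assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, sigma, offset):
    return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2) + offset

def subpixel_peak(profile):
    """Locate the peak of a 1D edge-response profile with subpixel precision
    by fitting a Gaussian (e.g. one orthogonal section of a gradient image)."""
    profile = np.asarray(profile, dtype=float)
    x = np.arange(len(profile))
    i0 = int(np.argmax(profile))
    p0 = [profile[i0] - profile.min(), float(i0), 1.0, float(profile.min())]
    try:
        popt, _ = curve_fit(gaussian, x, profile, p0=p0, maxfev=2000)
    except RuntimeError:
        return float(i0)        # fall back to the integer-pixel location
    return float(popt[1])       # fitted Gaussian centre = subpixel location
```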