
Showing papers on "Subpixel rendering" published in 2008


Journal ArticleDOI
TL;DR: This paper describes the Semi-Global Matching (SGM) stereo method, which uses a pixelwise, Mutual Information based matching cost for compensating radiometric differences of input images and demonstrates a tolerance against a wide range of radiometric transformations.
Abstract: This paper describes the semiglobal matching (SGM) stereo method. It uses a pixelwise, mutual information (MI)-based matching cost for compensating radiometric differences of input images. Pixelwise matching is supported by a smoothness constraint that is usually expressed as a global cost function. SGM performs a fast approximation by pathwise optimizations from all directions. The discussion also addresses occlusion detection, subpixel refinement, and multibaseline matching. Additionally, postprocessing steps for removing outliers, recovering from specific problems of structured environments, and the interpolation of gaps are presented. Finally, strategies for processing almost arbitrarily large images and fusion of disparity images using orthographic projection are proposed. A comparison on standard stereo images shows that SGM is among the currently top-ranked algorithms and performs best when subpixel accuracy is considered. The complexity is linear in the number of pixels and the disparity range, which results in a runtime of just 1-2 seconds on typical test images. An in-depth evaluation of the MI-based matching cost demonstrates a tolerance against a wide range of radiometric transformations. Finally, examples of reconstructions from huge aerial frame and pushbroom images demonstrate that the presented ideas work well on practical problems.
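The core of SGM is the pathwise cost aggregation: along each direction, a recurrence adds a small penalty P1 for disparity changes of one level and a larger penalty P2 for arbitrary jumps. The sketch below is an illustration under assumed penalty values, aggregating a pixelwise cost volume along a single left-to-right path; it is not the paper's full multi-direction implementation with MI costs.

```python
import numpy as np

def aggregate_path_lr(cost, P1=10.0, P2=120.0):
    """Aggregate a pixelwise matching cost volume along one path (left to right),
    following the SGM recurrence: penalty P1 for +/-1 disparity changes, a larger
    penalty P2 for arbitrary jumps, minus the previous pixel's best cost.
    cost: array of shape (H, W, D) with D disparity hypotheses."""
    H, W, D = cost.shape
    L = np.empty((H, W, D), dtype=np.float64)
    L[:, 0, :] = cost[:, 0, :]
    for x in range(1, W):
        prev = L[:, x - 1, :]                        # (H, D) aggregated costs so far
        prev_min = prev.min(axis=1, keepdims=True)   # best cost at the previous pixel
        plus = np.full_like(prev, np.inf)
        minus = np.full_like(prev, np.inf)
        plus[:, 1:] = prev[:, :-1] + P1              # disparity increases by one
        minus[:, :-1] = prev[:, 1:] + P1             # disparity decreases by one
        best = np.minimum(np.minimum(prev, plus),
                          np.minimum(minus, prev_min + P2))
        L[:, x, :] = cost[:, x, :] + best - prev_min
    return L
```

A full SGM implementation would run this recurrence along 8 or 16 directions and sum the aggregated volumes before winner-take-all disparity selection and subpixel refinement.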

3,302 citations


Journal ArticleDOI
TL;DR: Three new algorithms for 2D translation image registration to within a small fraction of a pixel that use nonlinear optimization and matrix-multiply discrete Fourier transforms are compared to evaluate a translation-invariant error metric.
Abstract: Three new algorithms for 2D translation image registration to within a small fraction of a pixel that use nonlinear optimization and matrix-multiply discrete Fourier transforms are compared. These algorithms can achieve registration with an accuracy equivalent to that of the conventional fast Fourier transform upsampling approach in a small fraction of the computation time and with greatly reduced memory requirements. Their accuracy and computation time are compared for the purpose of evaluating a translation-invariant error metric.
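An open-source implementation of this kind of upsampled cross-correlation registration is available in scikit-image as phase_cross_correlation, whose upsample_factor argument controls the fraction of a pixel resolved. A minimal usage sketch, assuming scikit-image ≥ 0.19 and an arbitrarily chosen test shift:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
reference = rng.random((256, 256))
# simulate a known subpixel translation (circular boundary keeps the FFT model exact)
moving = nd_shift(reference, shift=(3.37, -1.82), order=3, mode="wrap")

# upsample_factor=100 resolves the shift to roughly 1/100 of a pixel
shift_est, error, diffphase = phase_cross_correlation(reference, moving,
                                                      upsample_factor=100)
print(shift_est)  # ~ [-3.37, 1.82]: the shift that registers `moving` onto `reference`
```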

1,715 citations


Proceedings ArticleDOI
23 Jun 2008
TL;DR: An algorithm that estimates non-parametric, spatially-varying blur functions at subpixel resolution from a single image by predicting a "sharp" version of a blurry input image and using the two images to solve for a PSF.
Abstract: Image blur is caused by a number of factors such as motion, defocus, capturing light over the non-zero area of the aperture and pixel, the presence of anti-aliasing filters on a camera sensor, and limited sensor resolution. We present an algorithm that estimates non-parametric, spatially-varying blur functions (i.e., point-spread functions or PSFs) at subpixel resolution from a single image. Our method handles blur due to defocus, slight camera motion, and inherent aspects of the imaging system. Our algorithm can be used to measure blur due to limited sensor resolution by estimating a sub-pixel, super-resolved PSF even for in-focus images. It operates by predicting a "sharp" version of a blurry input image and uses the two images to solve for a PSF. We handle the cases where the scene content is unknown and also where a known printed calibration target is placed in the scene. Our method is completely automatic, fast, and produces accurate results.
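The "solve for a PSF from a sharp/blurry pair" step can be illustrated with a much simpler stand-in than the paper's estimator: a single global kernel obtained by regularized least squares in the Fourier domain. The eps regularization weight and kernel size below are assumptions, and there is no spatial variation or super-resolution here.

```python
import numpy as np

def estimate_global_psf(blurry, sharp, psf_size=15, eps=1e-3):
    """Least-squares estimate of a kernel k such that blurry ~= sharp (*) k,
    solved in the Fourier domain with Tikhonov regularization."""
    B = np.fft.fft2(blurry)
    S = np.fft.fft2(sharp)
    K = (np.conj(S) * B) / (np.abs(S) ** 2 + eps)   # regularized deconvolution
    k = np.fft.fftshift(np.real(np.fft.ifft2(K)))   # center the kernel
    cy, cx = k.shape[0] // 2, k.shape[1] // 2
    h = psf_size // 2
    k = k[cy - h:cy + h + 1, cx - h:cx + h + 1]     # crop to psf_size x psf_size
    k = np.clip(k, 0.0, None)
    return k / k.sum()                              # normalize to unit sum
```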

617 citations


Patent
Martin Ünsal, Aram Lindahl
05 Sep 2008
TL;DR: A technique is presented for displaying pixels of an image at arbitrary subpixel positions, in which interpolated intensity values for the pixels of the image are derived based on the arbitrary subpixel location and an intensity distribution or profile.
Abstract: A technique is provided for displaying pixels of an image at arbitrary subpixel positions. In accordance with aspects of this technique, interpolated intensity values for the pixels of the image are derived based on the arbitrary subpixel location and an intensity distribution or profile. Reference to the intensity distribution provides appropriate multipliers for the source image. Based on these multipliers, the image may be rendered at respective physical pixel locations such that the pixel intensities are summed with each rendering, resulting in a destination image having suitable interpolated pixel intensities for the arbitrary subpixel position.
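A minimal sketch of the general mechanism, assuming a simple bilinear profile plays the role of the patent's intensity distribution: each source pixel's intensity is split among the surrounding physical pixels according to the fractional offset, and the per-rendering contributions are summed.

```python
import numpy as np

def render_at_subpixel(src, dy, dx):
    """Render a source image at a fractional offset (dy, dx), 0 <= dy, dx < 1,
    by splatting each source pixel onto the four surrounding physical pixels
    with bilinear weights and summing the contributions."""
    h, w = src.shape
    dst = np.zeros((h + 1, w + 1), dtype=np.float64)
    for ry, wy in ((0, 1.0 - dy), (1, dy)):      # row neighbour and its weight
        for rx, wx in ((0, 1.0 - dx), (1, dx)):  # column neighbour and its weight
            dst[ry:ry + h, rx:rx + w] += (wy * wx) * src
    return dst
```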

146 citations


Journal ArticleDOI
TL;DR: The main aim of this paper is to show the implementation and application of downscaling cokriging for super-resolution image mapping; the performance of the method is demonstrated using Landsat Enhanced Thematic Mapper Plus images.
Abstract: The main aim of this paper is to show the implementation and application of downscaling cokriging for super-resolution image mapping. By super-resolution, we mean increasing the spatial resolution of satellite sensor images where the pixel size to be predicted is smaller than the pixel size of the empirical image with the finest spatial resolution. It is assumed that coregistered images with different spatial and spectral resolutions of the same scene are available. The main advantages of cokriging are that it takes into account the correlation and cross correlation of images, it accounts for the different supports (i.e., pixel sizes), it can explicitly take into account the point spread function of the sensor, and it has the property of prediction coherence. In addition, ancillary images (topographic maps, thematic maps, etc.) as well as sparse experimental data could be included in the process. The main problem is that super-resolution cokriging requires several covariances and cross covariances, some of which are not empirically accessible (i.e., from the pixel values of the images). In the adopted solution, the fundamental concept is that of covariances and cross-covariance models with point support. Once the set of point-support models is estimated using linear systems theory, any pixel-support covariance and cross covariance can be easily obtained by regularization. We show the performance of the method using Landsat Enhanced Thematic Mapper Plus images.

113 citations


Patent
08 Feb 2008
TL;DR: In this paper, the authors describe the use of three primary color or multi-primary color subpixel repeating groups that are particularly suitable for directional display devices which produce at least two images simultaneously, such as autostereoscopic three-dimensional display devices or multiview devices.
Abstract: Display devices and systems are configured with display panels substantially comprising one of several embodiments of three primary color or multi-primary color subpixel repeating groups that are particularly suitable for directional display devices which produce at least two images simultaneously, such as autostereoscopic three-dimensional display devices or multi-view devices. Input image data indicating an image is rendered to a device configured with one of the illustrated subpixel repeating groups using a subpixel rendering operation.

110 citations


Patent
07 Aug 2008
TL;DR: In this article, a speckle pattern is projected onto an object and images of the resulting pattern are acquired from multiple angles, and the images are locally cross-correlated using a sparse array image correlation technique and the surface is resolved by using relative camera position information to calculate the three-dimensional coordinates of each locally correlated region.
Abstract: A high-speed three-dimensional imaging system includes a single lens camera subsystem with an active imaging element and CCD element, and a correlation processing subsystem. The active imaging element can be a rotating aperture which allows adjustable non-equilateral spacing between defocused images to achieve greater depth of field and higher sub-pixel displacement accuracy. A speckle pattern is projected onto an object and images of the resulting pattern are acquired from multiple angles. The images are locally cross-correlated using a sparse array image correlation technique and the surface is resolved by using relative camera position information to calculate the three-dimensional coordinates of each locally correlated region. Increased resolution and accuracy are provided by recursively correlating the images down to the level of individual points of light and using the Gaussian nature of the projected speckle pattern to determine subpixel displacement between images. Processing is done at very high speeds by compressing the images before they are correlated. Correlation errors are eliminated during processing by a technique based on the multiplication of correlation table elements from one or more adjacent regions.
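The subpixel displacement step described here is commonly implemented as a three-point Gaussian fit around the correlation peak, exploiting the near-Gaussian shape of a speckle correlation peak. The sketch below shows that standard estimator (a generic technique, not necessarily the patent's exact formulation).

```python
import numpy as np

def gaussian_subpixel_offset(c_minus, c_peak, c_plus):
    """Three-point Gaussian fit around a correlation peak.
    c_minus, c_peak, c_plus: correlation values at peak-1, peak, peak+1 (all > 0).
    Returns the fractional offset of the true peak, in (-0.5, 0.5)."""
    lm, l0, lp = np.log(c_minus), np.log(c_peak), np.log(c_plus)
    return (lm - lp) / (2.0 * (lm - 2.0 * l0 + lp))

# example: the continuous peak lies slightly to the right of the integer maximum
print(gaussian_subpixel_offset(0.60, 0.95, 0.70))  # ~ +0.10
```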

107 citations


Journal ArticleDOI
06 Apr 2008-Sensors
TL;DR: Two methods for downscaling coarse-resolution thermal infrared (TIR) radiance for the purpose of subpixel temperature retrieval were developed: one based on a scale-invariant physical model of TIR radiance, the other on a statistical relationship between TIR radiance and land cover fraction at high spatial resolution.
Abstract: Land surface temperature (LST) retrieved from satellite thermal sensors often consists of mixed temperature components. Retrieving subpixel LST is therefore needed in various environmental and ecological studies. In this paper, we developed two methods for downscaling coarse resolution thermal infrared (TIR) radiance for the purpose of subpixel temperature retrieval. The first method was developed on the basis of a scale-invariant physical model on TIR radiance. The second method was based on a statistical relationship between TIR radiance and land cover fraction at high spatial resolution. The two methods were applied to downscale simulated 990-m ASTER TIR data to 90-m resolution. When validated against the original 90-m ASTER TIR data, the results revealed that both downscaling methods were successful in capturing the general patterns of the original data and resolving considerable spatial details. Further quantitative assessments indicated a strong agreement between the true values and the estimated values by both methods.
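A minimal sketch of the second, statistical route under simplifying assumptions: a single global linear regression of coarse radiance on land cover fractions aggregated to the coarse grid, plus a per-pixel residual correction so the downscaled field reproduces the observed coarse radiance. The paper's actual regression form may differ.

```python
import numpy as np

def statistical_downscale(tir_coarse, frac_fine, scale):
    """tir_coarse: (Hc, Wc) coarse TIR radiance.
    frac_fine: (Hc*scale, Wc*scale, K) land cover fractions at fine resolution.
    Returns a (Hc*scale, Wc*scale) downscaled radiance field."""
    Hc, Wc = tir_coarse.shape
    K = frac_fine.shape[2]
    # aggregate the fine-resolution fractions to the coarse grid
    frac_coarse = frac_fine.reshape(Hc, scale, Wc, scale, K).mean(axis=(1, 3))
    X = np.column_stack([frac_coarse.reshape(-1, K), np.ones(Hc * Wc)])
    beta, *_ = np.linalg.lstsq(X, tir_coarse.ravel(), rcond=None)
    # apply the fitted relationship at the fine resolution
    n_fine = frac_fine.shape[0] * frac_fine.shape[1]
    Xf = np.column_stack([frac_fine.reshape(-1, K), np.ones(n_fine)])
    tir_fine = (Xf @ beta).reshape(Hc * scale, Wc * scale)
    # residual correction: restore each coarse pixel's observed mean radiance
    pred_coarse = tir_fine.reshape(Hc, scale, Wc, scale).mean(axis=(1, 3))
    tir_fine += np.kron(tir_coarse - pred_coarse, np.ones((scale, scale)))
    return tir_fine
```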

99 citations


Journal ArticleDOI
T. J. Bin, Ao Lei, Cui Jiwen, Kang Wen-jing, Liu Dandan
TL;DR: Experimental results show that using the lower radial orders and the rotation invariance of these moments to describe small objects in images is an efficient approach that satisfies the stringent requirements for higher edge-location accuracy in fields such as medical image analysis and satellite remote sensing.

79 citations


Journal ArticleDOI
TL;DR: This work fully characterizes and models the Satellite Pour l'Observation de la Terre (SPOT) 4-HRV1 sensor, and conjectures that distortions mostly result from the mechanical strain produced when the satellite was launched rather than from effects of on-orbit thermal variations or aging.
Abstract: We describe a method that allows for accurate in-flight calibration of the interior orientation of any pushbroom camera and that in particular solves the problem of modeling the distortions induced by charge coupled device (CCD) misalignments. The distortion induced on the ground by each CCD is measured using subpixel correlation between the orthorectified image to be calibrated and an orthorectified reference image that is assumed distortion free. Distortions are modeled as camera defects, which are assumed constant over time. Our results show that in-flight interior orientation calibration reduces internal camera biases by one order of magnitude. In particular, we fully characterize and model the Satellite Pour l'Observation de la Terre (SPOT) 4-HRV1 sensor, and we conjecture that distortions mostly result from the mechanical strain produced when the satellite was launched rather than from effects of on-orbit thermal variations or aging. The derived calibration models have been integrated into the software package Coregistration of Optically Sensed Images and Correlation (COSI-Corr), freely available from the Caltech Tectonics Observatory website. Such calibration models are particularly useful in reducing biases in digital elevation models (DEMs) generated from stereo matching and in improving the accuracy of change detection algorithms.

79 citations


Journal ArticleDOI
TL;DR: A curvilinear detector, which combines the zero-crossing detection algorithm with Steger's detector, is employed to detect the subpixel locations of the light stripes, thus preventing Steger's detector from failing to detect them at the endpoints of the stripes.
Abstract: This paper proposes a robust and accurate method for measuring 3-D surfaces using a binocular system. To eliminate the effect caused by the distortion of the projector lens, each structured light sheet is fitted to a conicoid. A curvilinear detector, which combines the zero-crossing detection algorithm with Steger's detector, is employed to detect the subpixel locations of the light stripes, thus preventing Steger's curvilinear detector from failing to detect them at the endpoints of the stripes. The proposed coding method combines the information of the linked line with the gray code to avoid producing outliers caused by erroneous decoding and to make the coding procedure more robust. Experiments showed that fitting each structured light sheet to a conicoid can effectively improve the measurement accuracy. The subpixel detection method can detect the exact subpixel locations of the stripes. Likewise, the encoding strategy results in the production of fewer outliers, while the reconstruction result is improved.

Journal ArticleDOI
Feng Ling, Fei Xiao, Y. Du, Huaiping Xue, Xianyou Ren
TL;DR: In this paper, a novel algorithm based on a high spatial resolution digital elevation model (DEM) was proposed to address the subpixel waterline mapping problem, where the waterline was mapped at the sub-pixel scale with a proposed rule according to the physical features of the water flow and additional information provided by the DEM.
Abstract: Subpixel mapping technology is a promising method of increasing the spatial resolution of the classification results derived from remote sensing imagery. However, for waterline mapping problems, the traditional spatial dependence principle of subpixel mapping is not suitable as the water flow is always controlled by the topography. This letter presents a novel algorithm based on a high spatial resolution digital elevation model (DEM) to address the subpixel waterline mapping problem. The waterline was mapped at the subpixel scale with a proposed rule according to the physical features of the water flow and additional information provided by the DEM. The method was evaluated with degraded real remotely sensed imagery at different spatial resolutions. The results show that the proposed method can provide more accurate classifications than the traditional subpixel mapping method. Moreover, the fine spatial resolution DEM can be used as feasible supplementary data for subpixel waterline mapping from coarser spatial resolution imagery.
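A minimal sketch of the DEM-driven allocation idea, under a deliberately simplified rule (flood the lowest-elevation subpixels of each coarse pixel until its water fraction is met); the letter's actual rule additionally uses the physical features of the water flow.

```python
import numpy as np

def dem_subpixel_water(water_frac, dem_fine, scale):
    """water_frac: (Hc, Wc) water fraction per coarse pixel, in [0, 1].
    dem_fine: (Hc*scale, Wc*scale) fine-resolution elevation.
    Returns a fine-resolution boolean water map."""
    Hc, Wc = water_frac.shape
    water = np.zeros_like(dem_fine, dtype=bool)
    n_sub = scale * scale
    for i in range(Hc):
        for j in range(Wc):
            n_water = int(round(water_frac[i, j] * n_sub))
            if n_water == 0:
                continue
            block = dem_fine[i*scale:(i+1)*scale, j*scale:(j+1)*scale]
            # elevation of the n_water-th lowest subpixel in this coarse pixel
            thresh = np.partition(block.ravel(), n_water - 1)[n_water - 1]
            water[i*scale:(i+1)*scale, j*scale:(j+1)*scale] = block <= thresh
    return water
```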

Journal ArticleDOI
TL;DR: An improved measurement technique is presented that enables subpixel estimation of 2D functions; the generalized Gaussian was shown to be an 8 times better fit to the estimated PSF than the Gaussian and a 14 times better fit than the pillbox model.
Abstract: The averaged point-spread function (PSF) estimation of an image acquisition system is important for many computer vision applications, including edge detection and depth from defocus. The paper compares several mathematical models of the PSF and presents an improved measurement technique that enables subpixel estimation of 2D functions. New methods for noise suppression and uneven illumination modeling were incorporated. The PSF was computed from an ensemble of edge-spread function measurements. The generalized Gaussian was shown to be an 8 times better fit to the estimated PSF than the Gaussian and a 14 times better fit than the pillbox model.
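The generalized Gaussian referred to here can be fitted to measured PSF samples with an ordinary nonlinear least-squares routine; the sketch below fits a 1D cross-section with SciPy's curve_fit. The exponential-power parametrization and the toy data are assumptions, not the paper's exact measurement pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def generalized_gaussian(x, amplitude, sigma, p):
    """Exponential-power profile: amplitude * exp(-(|x|/sigma)**p).
    p = 2 recovers the ordinary Gaussian; smaller p gives heavier tails."""
    return amplitude * np.exp(-(np.abs(x) / sigma) ** p)

# toy data: noisy samples of a heavier-tailed PSF cross-section
x = np.linspace(-5.0, 5.0, 101)
rng = np.random.default_rng(1)
samples = generalized_gaussian(x, 1.0, 1.2, 1.5) + 0.01 * rng.standard_normal(x.size)

params, _ = curve_fit(generalized_gaussian, x, samples, p0=(1.0, 1.0, 2.0))
print(params)  # recovered (amplitude, sigma, p), close to (1.0, 1.2, 1.5)
```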

Patent
26 Jun 2008
TL;DR: In this article, the backplane layout and addressing for non-standard subpixel arrangements are disclosed, where the thin film transistors are formed in a backplane structure adjacent to intersections of the row and column lines.
Abstract: Liquid crystal display backplane layouts and addressing for non-standard subpixel arrangements are disclosed. A liquid crystal display comprises a panel and a plurality of transistors. The panel substantially comprises a subpixel repeating group having an even number of subpixels in a first direction. Each thin film transistor connects one subpixel to a row and a column line at an intersection in one of a group of quadrants. The group comprises a first quadrant, a second quadrant, a third quadrant and a fourth quadrant, wherein the thin film transistors are formed in a backplane structure adjacent to intersections of the row and column lines. The thin film transistors are also substantially formed in more than one quadrant in the backplane structure.

Patent
29 May 2008
TL;DR: A method is proposed for compensating for changes in the characteristics of transistors and electroluminescent devices in an electroluminescent display, using a two-dimensional array of subpixels arranged so that each pixel has at least three subpixels of different colors.
Abstract: A method of compensating for changes in the characteristics of transistors and electroluminescent devices in an electroluminescent display, includes: providing an electroluminescent display having a two-dimensional array of subpixels arranged forming each pixel having at least three subpixels of different colors, with each having an electroluminescent device and a drive transistor, wherein each electroluminescent device is driven by the corresponding drive transistor; providing in each pixel a readout circuit for one of the subpixels of a specific color having a first readout transistor and a second readout transistor connected in series; using the readout circuit to derive a correction signal based on the characteristics of at least one of the transistors in the specific color subpixel, or the electroluminescent device in the specific color subpixel, or both; and using the correction signal to adjust the drive signals.

Patent
02 Oct 2008
TL;DR: In this article, the authors describe a selective compression unit that compresses image data to produce intermediate image data based upon a segmentation of the input image according to a parameter, such as spatial segments, chromatic segments or temporal segments.
Abstract: Display systems and methods for selectively reducing or compressing image data values within an image are recited. Display systems transform input image data from one input gamut hull or space to another target gamut hull or space that is substantially defined by different subpixel repeating groups comprising the display. Display systems described herein comprise a selective compression unit, said unit surveying said input image data to produce intermediate image data based upon a segmentation of the input image according to a parameter. Suitable parameters for segmenting the image include one or more of the following: spatial segments, chromatic segments or temporal segments. A selective compression amount may be determined so as to substantially maintain local contrast of the image data within a given segment.

Patent
Hidekazu Kobayashi
23 Jan 2008
TL;DR: A light-emitting device includes a plurality of pixels constituting a screen, each of the pixels including four subpixels: a red subpixel, a green subpixel, a blue subpixel, and a remaining subpixel.
Abstract: A light-emitting device includes a plurality of pixels constituting a screen, each of the pixels including four subpixels, which are a red subpixel, a green subpixel, a blue subpixel, and a remaining subpixel. The red subpixel has a light-emitting layer made of a white-light-emitting material and extending along the screen, and a color filter provided above the light-emitting layer to transmit red light, the white-light-emitting material emitting two-peak white light having an emission spectrum including a valley between a red peak residing in a wavelength range of red light and a blue peak residing in a wavelength range of blue light. The blue subpixel has a light-emitting layer made of the white-light-emitting material and extending along the screen, and a color filter provided above the light-emitting layer to transmit blue light. The remaining subpixel has a light-emitting layer made of the white-light-emitting material and extending along the screen. The green subpixel has a light-emitting layer made of a green-light-emitting material that emits green light and extending along the screen, and a color filter provided above the light-emitting layer to transmit green light.

Journal ArticleDOI
TL;DR: A simple linear mixing of pixel elements (subpixels) is used to examine the impacts of pixel mixtures on temperature retrieval and ground-leaving radiance; the results show that for a single material with one temperature distribution and a subpixel temperature standard deviation of 6 K (daytime images), the effects of subpixel temperature variability are small but can exceed 0.5 K in the 3-5 µm band, and are about a third of that in the 8-12 µm band.
Abstract: Virtually all remotely sensed thermal infrared (IR) pixels are, to some degree, mixtures of different materials or temperatures: real pixels are rarely thermally homogeneous. As sensors improve and spectral thermal IR remote sensing becomes more quantitative, the concept of homogeneous pixels becomes inadequate. Quantitative thermal IR remote sensors measure radiance. Planck's Law defines a relationship between temperature and radiance that is more complex than linear proportionality and is strongly wavelength-dependent. As a result, the area-averaged temperature of a pixel is not the same as the temperature derived from the radiance averaged over the pixel footprint, even for blackbodies. This paper uses simple linear mixing of pixel elements (subpixels) to examine the impacts of pixel mixtures on temperature retrieval and ground leaving radiance. The results show that for a single material with one temperature distribution and with a subpixel temperature standard deviation of 6 K (daytime images), the effects of subpixel temperature variability are small but can exceed 0.5 K in the 3-5 µm band and about a third of that in the 8-12 µm band. For pixels with a 50:50 mixture of materials (two temperature distributions with different means) the impact of subpixel radiance variability on temperature retrieval can exceed 6 K in the 3-5 µm band and 2 K in the 8-12 µm band. Subpixel temperatures determined from Gaussian distributions and also from high-resolution thermal images are used as inputs to our linear mixing model. Model results are compared directly to these broadband thermal images of plowed soil and senesced barley. Finally, a theoretical framework for quantifying the effect of non-homogeneous temperature distributions for the case of a binary combination of mixed pixels is derived, with results shown to be valid for the range of standard deviations and temperature differences examined herein.
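The nonlinearity at the heart of this argument is easy to reproduce numerically: band-average the Planck radiance of a 50:50 two-temperature mixture and invert it back to a temperature, and the result exceeds the area-averaged temperature, more strongly in the 3-5 µm band than in the 8-12 µm band. The sketch below does this for blackbody subpixels at assumed temperatures of 290 K and 310 K (emissivity and the sensor response are ignored).

```python
import numpy as np
from scipy.constants import h, c, k
from scipy.optimize import brentq

def planck(wl_m, T):
    """Blackbody spectral radiance at wavelength wl_m (metres), temperature T (K)."""
    return (2.0 * h * c**2 / wl_m**5) / np.expm1(h * c / (wl_m * k * T))

def band_radiance(T, lo_um, hi_um, n=200):
    """Band-averaged spectral radiance over [lo_um, hi_um] micrometres."""
    wl = np.linspace(lo_um, hi_um, n) * 1e-6
    return planck(wl, T).mean()

def brightness_temperature(L, lo_um, hi_um):
    """Invert a band-averaged radiance back to a temperature."""
    return brentq(lambda T: band_radiance(T, lo_um, hi_um) - L, 150.0, 400.0)

T1, T2 = 290.0, 310.0                      # 50:50 mixture, area-mean T = 300 K
for lo, hi in ((3.0, 5.0), (8.0, 12.0)):
    L_mixed = 0.5 * (band_radiance(T1, lo, hi) + band_radiance(T2, lo, hi))
    print(f"{lo}-{hi} um: T from mixed radiance = "
          f"{brightness_temperature(L_mixed, lo, hi):.2f} K (area mean = 300.00 K)")
```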

Journal ArticleDOI
TL;DR: A post-processing algorithm is proposed to enhance the quality of motion JPEG (MJPEG) by exploiting temporal redundancies and reconstructing the high-frequency coefficients lost during quantization, thereby reducing ringing artifacts.
Abstract: The paper proposes a pixel-based post-processing algorithm to enhance the quality of motion JPEG (MJPEG) by exploiting the temporal redundancies of the decoded frames. The technique permits reconstruction of the high frequency coefficients lost during quantization, thereby reducing ringing artifacts. Based on the linearization of the quantization function, the error between the estimated and original coefficients is analyzed for both cases of ideal and real video sequences. Blocking artifact reduction is verified by a reduction in the variance of this coefficient error. The condition of valid motion vectors to get quality improvement is considered based on these errors. The algorithm is also extended to find the optimal filter for a general estimation scheme based on an arbitrary number of frames. Results in visual and peak signal-to-noise ratio improvement using both integer and subpixel motion vectors are verified by simulations on video sequences.

Patent
10 Dec 2008
TL;DR: An organic light emitting device includes a first pixel displaying a first color, a second pixel adjacent to the first pixel and displaying a second color, and a third pixel adjacent to either the first or the second pixel and displaying a third color.
Abstract: An organic light emitting device includes a first pixel displaying a first color, a second pixel adjacent to the first pixel and displaying a second color, and a third pixel adjacent to the first pixel or the second pixel and displaying a third color, wherein the first pixel includes first and second subpixel units that output respective lights having different color characteristics.

Patent
Seok-Jin Han
29 Apr 2008
TL;DR: The subpixel rendering component of a display system provides the capability to substitute a second subpixel rendering filter for a first subpixel rendering filter for computing the values of certain subpixels on the display panel when the input image data being rendered indicates an image feature that may give rise to a color balance error at some portion of the displayed output image.
Abstract: The subpixel rendering component of a display system provides the capability to substitute a second subpixel rendering filter for a first subpixel rendering filter for computing the values of certain subpixels on the display panel when the input image data being rendered indicates an image feature that may give rise to a color balance error at some portion of the displayed output image. An image processing method of correcting for color balance errors detects the location of a subpixel being rendered and for certain subpixels, detects whether the input image data indicates the presence of a particular image feature. When the image feature is detected for particular subpixels being processed, a second subpixel rendering image filter is substituted for a first subpixel rendering image filter.

Journal ArticleDOI
01 Jul 2008
TL;DR: The new method utilizes the characteristics of the bispectrum to suppress Gaussian noise, develops a phase relationship between the image pair, and estimates the subpixel translation by solving a set of nonlinear equations.
Abstract: This paper proposes an effective higher order statistics method to address subpixel image registration. Conventional power spectrum-based techniques employ second-order statistics to estimate subpixel translation between two images. They are, however, susceptible to noise, thereby leading to significant performance deterioration in low signal-to-noise ratio environments or in the presence of cross-correlated channel noise. In view of this, we propose a bispectrum-based approach to alleviate this difficulty. The new method utilizes the characteristics of bispectrum to suppress Gaussian noise. It develops a phase relationship between the image pair and estimates the subpixel translation by solving a set of nonlinear equations. Experimental results show that the proposed technique provides performance improvement over conventional power-spectrum-based methods under different noise levels and conditions.

Proceedings ArticleDOI
28 Aug 2008
TL;DR: In this article, a variational method is proposed to estimate the blurs and the high-resolution image simultaneously, and an innovative learning-based algorithm using a neural architecture is described.
Abstract: Imaging plays a key role in many diverse areas of application, such as astronomy, remote sensing, microscopy, and tomography. Owing to imperfections of measuring devices (e.g., optical degradations, limited size of sensors) and instability of the observed scene (e.g., object motion, media turbulence), acquired images can be indistinct, noisy, and may exhibit insufficient spatial and temporal resolution. In particular, several external effects blur images. Techniques for recovering the original image include blind deconvolution (to remove blur) and superresolution (SR). The stability of these methods depends on having more than one image of the same frame. Differences between images are necessary to provide new information, but they can be almost unperceivable. State-of-the-art SR techniques achieve remarkable results in resolution enhancement by estimating the subpixel shifts between images, but they lack any apparatus for calculating the blurs. In this paper, after reviewing current SR techniques, we describe two SR methods recently developed by the authors. First, we introduce a variational method that minimizes a regularized energy function with respect to the high-resolution image and blurs. In this way we establish a unifying way to simultaneously estimate the blurs and the high-resolution image. By estimating blurs we automatically estimate shifts with subpixel accuracy, which is inherent for good SR performance. Second, an innovative learning-based algorithm using a neural architecture for SR is described. Comparative experiments on real data illustrate the robustness and utility of both methods.

Journal ArticleDOI
TL;DR: Computer algorithms for the OIS method were developed and written using the Interactive Data Language (IDL); applications demonstrating these algorithms are presented, along with a complete description of the Gaussian-component basis vectors used by Alard & Lupton to construct the convolution kernel.
Abstract: To detect objects that vary in brightness or spatial coordinates over time, C. Alard and R. H. Lupton in 1998 proposed an "optimal image subtraction" (OIS) method that constructs a convolution kernel from a set of matching stars distributed across the two images to be subtracted. Using multivariable least squares, the kernel is derived and can be designed to vary by pixel coordinates across the convolved image. Local effects in the optics, including aberrations or other spatially sensitive perturbations to a perfect image, can be mitigated. This paper presents the specific systems of equations that originate from the OIS method. Also included is a complete description of the Gaussian components basis vectors used by Alard & Lupton to construct the convolution kernel. An alternative set of basis vectors, called the delta function basis, is also described. Important issues are addressed, including the selection of the matching stars, differential background correction, constant photometric flux, contaminated pixel masking, and alignment at the subpixel level. Computer algorithms for the OIS method were developed, written using the Interactive Data Language (IDL), and applications demonstrating these algorithms are presented.
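The least-squares construction is easiest to see in the delta function basis mentioned above, where each basis vector is a one-pixel shift of the reference image; the sketch below solves that reduced problem (a single constant kernel with no background term and no spatial variation, unlike the full Alard & Lupton formulation).

```python
import numpy as np

def solve_ois_kernel(ref, sci, half=2):
    """Fit a (2*half+1)^2 convolution kernel K, in the delta function basis,
    such that ref convolved with K approximates sci, via linear least squares."""
    cols = []
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            # convolution with a delta at offset (dy, dx) is a shifted copy of ref
            cols.append(np.roll(np.roll(ref, dy, axis=0), dx, axis=1).ravel())
    A = np.stack(cols, axis=1)                     # one column per kernel element
    coeffs, *_ = np.linalg.lstsq(A, sci.ravel(), rcond=None)
    size = 2 * half + 1
    return coeffs.reshape(size, size)

# the difference image would then be sci minus ref convolved with the fitted kernel
```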

Journal ArticleDOI
22 Feb 2008
TL;DR: An algorithm which incorporates normalized correlation into a pyramid image representation structure to perform fast recognition and localization and employs an estimate of the gradient of the correlation surface to perform a steepest descent search.
Abstract: The ability to quickly locate one or more instances of a model in a grey scale image is of importance to industry. The recognition/localization must be fast and accurate. In this paper we present an algorithm which incorporates normalized correlation into a pyramid image representation structure to perform fast recognition and localization. The algorithm employs an estimate of the gradient of the correlation surface to perform a steepest descent search. Test results are given detailing search time by target size, the effect of rotation and scale changes on performance, and the accuracy of the subpixel localization method used in the algorithm. Finally, results are given for searches on real images with perspective distortion and the addition of Gaussian noise.

Proceedings ArticleDOI
03 Apr 2008
TL;DR: A set of standardized synthetic images is designed that simulates various scenarios so that different algorithms can be validated and evaluated on the same ground under completely controllable environments, and it is demonstrated how these six scenarios can be used to evaluate various algorithms in applications of subpixel detection, mixed pixel classification/quantification and endmember extraction.
Abstract: Many hyperspectral imaging algorithms are available for applications such as spectral unmixing, subpixel detection, quantification, endmember extraction, classification, compression, etc., and many more are yet to come. It is very difficult to evaluate and validate different algorithms developed and designed for the same application. This paper makes an attempt to design a set of standardized synthetic images which simulate various scenarios so that different algorithms can be validated and evaluated on the same ground with completely controllable environments. Two types of scenarios are developed to simulate how a target can be inserted into the image background. One is called Target Implantation (TI), which implants a target pixel by removing the background pixel it intends to replace. This type of scenario is of particular interest in endmember extraction, where pure signatures can be simulated and inserted into the background with guaranteed 100% purity. The other is called Target Embeddedness (TE), which embeds a target pixel by adding this target pixel to the background pixel it intends to insert. This type of scenario can be used to simulate signal detection models where the noise is additive. For each of the two types, three scenarios are designed to simulate different levels of target knowledge by adding Gaussian noise. In order to make these six scenarios a standardized data set for experiments, the data used to generate the synthetic images can be chosen from a database or spectral library available in the public domain or on websites, and no particular data are required to simulate these synthetic images. By virtue of the designed six scenarios, an algorithm can be assessed objectively and compared fairly to other algorithms in the same setting. This paper demonstrates how these six scenarios can be used to evaluate various algorithms in applications of subpixel detection, mixed pixel classification/quantification and endmember extraction.
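A minimal sketch of the two insertion models (function and parameter names are illustrative, not from the paper): TI replaces a background pixel with a target/background mixture, TE adds the target signature on top of the background, and optional Gaussian noise at a chosen SNR simulates the different levels of target knowledge.

```python
import numpy as np

def insert_targets(background, target_sig, locations, mode="TI",
                   abundance=1.0, snr_db=None, rng=None):
    """background: (H, W, B) hyperspectral cube; target_sig: (B,) target signature.
    mode 'TI' replaces the background pixel (guaranteed purity when abundance=1);
    mode 'TE' adds the target to the background pixel (additive model)."""
    rng = rng or np.random.default_rng()
    cube = background.astype(float).copy()
    for (i, j) in locations:
        if mode == "TI":
            cube[i, j] = abundance * target_sig + (1.0 - abundance) * background[i, j]
        else:  # "TE"
            cube[i, j] = background[i, j] + abundance * target_sig
    if snr_db is not None:  # Gaussian noise controls the level of target knowledge
        sigma = np.linalg.norm(target_sig) / np.sqrt(target_sig.size) / 10 ** (snr_db / 20)
        cube += rng.normal(0.0, sigma, cube.shape)
    return cube
```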

Patent
Junichi Ihata, Koichi Fukuda
04 Aug 2008
TL;DR: Sticking of a pixel is suppressed to improve the life of a display panel in an emission display apparatus as mentioned in this paper, in which a plurality of pixels each having at least one subpixel (11a, 11b, 11c) are disposed.
Abstract: Sticking of a pixel is suppressed to improve the life of a display panel in an emission display apparatus with a display panel in which a plurality of pixels, each having at least one subpixel (11a, 11b, 11c), are disposed. A first display method of emitting light with only a pixel P(i,j) serving as an emission center and a second display method of allocating the luminance of the pixel P(i,j) serving as an emission center to nearby pixels surrounding it are combined in a controllable manner. A high-resolution mode with a high ratio of the first display method and a long-life mode with a high ratio of the second display method are switched between depending on a spatial or temporal change of the image input data, an emission time, a degradation rate, a temperature, an emission luminance, and a display time.

Journal ArticleDOI
TL;DR: A generalized physically-based reflectance model is described that relates the distribution of surface normals inside each pixel area to its reflectance function and is used to infer subpixel geometric structures on a surface of homogeneous material by spatially arranging the normals among pixels at a higher resolution than that of the input image.
Abstract: Conventional photometric stereo recovers one normal direction per pixel of the input image. This fundamentally limits the scale of recovered geometry to the resolution of the input image, and cannot model surfaces with subpixel geometric structures. In this paper, we propose a method to recover subpixel surface geometry by studying the relationship between the subpixel geometry and the reflectance properties of a surface. We first describe a generalized physically-based reflectance model that relates the distribution of surface normals inside each pixel area to its reflectance function. The distribution of surface normals can be computed from the reflectance functions recorded in photometric stereo images. A convexity measure of subpixel geometry structure is also recovered at each pixel, through an analysis of the shadowing attenuation. Then, we use the recovered distribution of surface normals and the surface convexity to infer subpixel geometric structures on a surface of homogeneous material by spatially arranging the normals among pixels at a higher resolution than that of the input image. Finally, we optimize the arrangement of normals using a combination of belief propagation and MCMC based on a minimum description length criterion on 3D textons over the surface. The experiments demonstrate the validity of our approach and show superior geometric resolution for the recovered surfaces.

Journal ArticleDOI
TL;DR: A method for estimating the point spread function by spatial subpixel analysis is described and an optimal solution for fitting the model parameters to the actual scene is searched for.
Abstract: It has been shown that spatial subpixel analysis can be used to enhance images of fine structured landscapes. This method is based on the geometric description of object boundaries that intersect pixels and thus lead to mixed pixels. The parameters of the geometric model describing the underlying scene are estimated by means of an optimization algorithm. The applicability of the method depends on the relationship between the size of the remotely sensed objects and the pixel size. Possible applications include pre-processing for classification to reduce the percentage of mixed pixels, vector segmentation approaches and image fusion techniques. The distribution of grey values in the neighbourhood of any mixed pixel not only depends on the parameters of the geometric model of land cover boundaries, but also on the spatial response of the sensor. Therefore, knowledge about the sensor point spread function can be used to enhance the performance of spatial subpixel analysis. If the parameters of the point spread function are included as unknowns in the fitting problem, the optimization may give an estimate of the spatial response. For this purpose we assume a Gaussian-shaped point spread function with two parameters, namely the standard deviations along the two image axes, and search for an optimal solution for fitting the model parameters to the actual scene. In this contribution we describe a method for estimating the point spread function by spatial subpixel analysis. We apply the algorithm to different synthetic and real images. The sensitivity of the method to varying input patterns is discussed, and the improvement of the results of spatial subpixel analysis by taking into account the point spread function determined in the described way is illustrated.

Journal ArticleDOI
TL;DR: An efficient iterative scheme is proposed, which considerably reduces the overall computational cost of the image registration problem and, properly combined with the proposed similarity measure, results in a fast spatial-domain technique for subpixel image registration.
Abstract: In this paper a new technique for performing image registration with subpixel accuracy is presented. The proposed technique, which is based on the maximization of the correlation coefficient function, does not require the reconstruction of the intensity values and provides a closed-form solution to the subpixel translation estimation problem. Moreover, an efficient iterative scheme is proposed, which reduces considerably the overall computational cost of the image registration problem. This scheme properly combined with the proposed similarity measure results in a fast spatial domain technique for subpixel image registration.
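For comparison, the sketch below estimates a subpixel translation by directly maximizing the correlation coefficient with a generic numerical optimizer over interpolated shifts; this is a brute-force stand-in that, unlike the paper's closed-form scheme, does reconstruct intensity values.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def corrcoef_shift(reference, moving, x0=(0.0, 0.0)):
    """Estimate the (row, column) subpixel shift that registers `moving` onto
    `reference` by maximizing the correlation coefficient over interpolated shifts."""
    def neg_corr(t):
        warped = nd_shift(moving, shift=t, order=3, mode="nearest")
        return -np.corrcoef(reference.ravel(), warped.ravel())[0, 1]
    res = minimize(neg_corr, x0=np.asarray(x0, dtype=float), method="Nelder-Mead",
                   options={"xatol": 1e-3, "fatol": 1e-9})
    return res.x
```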