
Showing papers on "Pixel published in 2003"


Book
01 Jan 2003
TL;DR: In this article, a quantitative analysis of mixed-to-pure pixel conversion is presented, along with anomaly detection, unsupervised mixed pixel classification, and projection pursuit methods for automatic mixed pixel classification.
Abstract: 1. Introduction.
Part I: Hyperspectral Measures. 2. Hyperspectral measures for spectral characterization.
Part II: Subpixel Detection. 3. Target abundance-constrained subpixel detection. 4. Target signature-constrained subpixel detection: linearly constrained minimum variance (LCMV). 5. Automatic subpixel detection (unsupervised subpixel detection). 6. Anomaly detection. 7. Sensitivity of subpixel detection.
Part III: Unconstrained Mixed Pixel Classification. 8. Unconstrained mixed pixel classification: least squares subspace projection. 9. A quantitative analysis of mixed-to-pure pixel conversion.
Part IV: Constrained Mixed Pixel Classification. 10. Target abundance-constrained mixed pixel classification (TACMPC). 11. Target signature-constrained mixed pixel classification (TSCMPC): LCMV multiple target classifiers. 12. Target signature-constrained mixed pixel classification (TSCMPC): linearly constrained discriminant analysis (LCDA).
Part V: Automatic Mixed Pixel Classification (AMPC). 13. Automatic mixed pixel classification (AMPC): unsupervised mixed pixel classification. 14. Automatic mixed pixel classification (AMPC): anomaly classification. 15. Automatic mixed pixel classification (AMPC): linear spectral random mixture analysis (LSRMA). 16. Automatic mixed pixel classification (AMPC): projection pursuit. 17. Estimation of virtual dimensionality of hyperspectral imagery. 18. Conclusion and further techniques.
Glossary. References. Index.

1,228 citations


Journal ArticleDOI
TL;DR: In this paper, a deformation measurement system based on particle image velocimetry (PIV) and close-range photogrammetry is developed for use in geotechnical testing.
Abstract: A deformation measurement system based on particle image velocimetry (PIV) and close-range photogrammetry has been developed for use in geotechnical testing. In this paper, the theory underlying this system is described, and the performance is validated. Digital photography is used to capture images of planar soil deformation. Using PIV, the movement of a fine mesh of soil patches is measured to a high precision. Since PIV operates on the image texture, intrusive target markers need not be installed in the observed soil. The resulting displacement vectors are converted from image space to object space using a photogrammetric transformation. A series of validation experiments are reported. These demonstrate that the precision, accuracy and resolution of the system are an order of magnitude higher than previous image-based deformation methods, and are comparable to local instrumentation used in element testing. This performance is achieved concurrent with an order of magnitude increase in the number of measurement points.

1,180 citations


Journal ArticleDOI
TL;DR: A new and efficient steganographic method for embedding secret messages into a gray-valued cover image that provides an easy way to produce a more imperceptible result than those yielded by simple least-significant-bit replacement methods.

1,078 citations
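For context, the "simple least-significant-bit replacement" baseline that the paper improves upon can be sketched in a few lines. This is a generic illustration of LSB embedding for 8-bit grayscale pixels, not the paper's own algorithm:

```python
def lsb_embed(cover, bits):
    """Replace the least significant bit of each 8-bit cover pixel with a
    message bit; each pixel value changes by at most 1 gray level."""
    if len(bits) > len(cover):
        raise ValueError("message longer than cover capacity")
    stego = list(cover)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | (b & 1)  # clear the LSB, set the message bit
    return stego

def lsb_extract(stego, n_bits):
    """Read the first n_bits message bits back out of the stego pixels."""
    return [p & 1 for p in stego[:n_bits]]
```

Each pixel changes by at most one gray level, so the distortion is visually slight; the paper's method aims to be still less perceptible than this baseline.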


Patent
08 Jan 2003
TL;DR: In this article, the surface of a source signal line or a power supply line in a pixel portion is plated to reduce the wiring resistance, and a terminal is similarly plated to reduce its resistance.
Abstract: There is provided a light emitting device in which low power consumption can be realized even in the case of a large screen. The surface of a source signal line or a power supply line in a pixel portion is plated to reduce the resistance of the wiring. The source signal line in the pixel portion is manufactured by a step different from a source signal line in a driver circuit portion. The power supply line in the pixel portion is manufactured by a step different from a power supply line led on a substrate. A terminal is similarly plated to achieve the resistance reduction. It is desirable that a wiring before plating is made of the same material as a gate electrode and the surface of the wiring is plated to form the source signal line or the power supply line.

806 citations


Journal Article
TL;DR: In this article, the requirements for CMOS image sensors and their historical development, CMOS devices and circuits for pixels, analog signal chain, and on-chip analog-to-digital conversion are reviewed and discussed.
Abstract: CMOS active pixel sensors (APS) have performance competitive with charge-coupled device (CCD) technology, and offer advantages in on-chip functionality, system power reduction, cost, and miniaturization. This paper discusses the requirements for CMOS image sensors and their historical development; CMOS devices and circuits for pixels, the analog signal chain, and on-chip analog-to-digital conversion are also reviewed and discussed.

693 citations


Journal ArticleDOI
TL;DR: In this paper, knowledge of the camera distortions and the curvature of the spectral features is used to recover information regarding the background spectrum on wavelength scales much smaller than a pixel; this better-sampled background spectrum can then be propagated through inverses of the distortion and rectification transformations.
Abstract: In two-dimensional spectrographs, the optical distortions in the spatial and dispersion directions produce variations in the subpixel sampling of the background spectrum. Using knowledge of the camera distortions and the curvature of the spectral features, one can recover information regarding the background spectrum on wavelength scales much smaller than a pixel. As a result, one can propagate this better-sampled background spectrum through inverses of the distortion and rectification transformations and accurately model the background spectrum in two-dimensional spectra for which the distortions have not been removed (i.e., the data have not been rebinned/rectified). The procedure, as outlined in this paper, is extremely insensitive to cosmic rays, hot pixels, etc. Because of this insensitivity to discrepant pixels, sky modeling and subtraction need not be performed as one of the later steps in a reduction pipeline. Sky subtraction can now be performed as one of the earliest tasks, perhaps just after dividing by a flat-field.

686 citations


Proceedings ArticleDOI
01 Jan 2003
TL;DR: In this article, a pedestrian detection system that integrates image intensity information with motion information is presented, which uses a detection style algorithm that scans a detector over two consecutive frames of a video sequence.
Abstract: This paper describes a pedestrian detection system that integrates image intensity information with motion information. We use a detection-style algorithm that scans a detector over two consecutive frames of a video sequence. The detector is trained (using AdaBoost) to take advantage of both motion and appearance information to detect a walking person. Past approaches have built detectors based on appearance information, but ours is the first to combine both sources of information in a single detector. The implementation described runs at about 4 frames/second, detects pedestrians at very small scales (as small as 20 × 15 pixels), and has a very low false positive rate. Our approach builds on the detection work of Viola and Jones. Novel contributions of this paper include: i) development of a representation of image motion which is extremely efficient, and ii) implementation of a state-of-the-art pedestrian detection system which operates on low-resolution images under difficult conditions (such as rain and snow).

652 citations


Proceedings ArticleDOI
01 Jul 2003
TL;DR: This paper describes an approach for generating high dynamic range (HDR) video from an image sequence of a dynamic scene captured while rapidly varying the exposure of each frame, and shows how to compensate for scene and camera movement when creating an HDR still from a series of bracketed still photographs.
Abstract: Typical video footage captured using an off-the-shelf camcorder suffers from limited dynamic range. This paper describes our approach to generate high dynamic range (HDR) video from an image sequence of a dynamic scene captured while rapidly varying the exposure of each frame. Our approach consists of three parts: automatic exposure control during capture, HDR stitching across neighboring frames, and tonemapping for viewing. HDR stitching requires accurately registering neighboring frames and choosing appropriate pixels for computing the radiance map. We show examples for a variety of dynamic scenes. We also show how we can compensate for scene and camera movement when creating an HDR still from a series of bracketed still photographs.

641 citations
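The "computing the radiance map" step can be illustrated with the textbook weighted-average merge of registered exposures. This sketch assumes a linear camera response and already-aligned frames (the paper's HDR stitching additionally handles motion between frames), and the hat-shaped weight function is an illustrative choice:

```python
def merge_hdr(exposures, times):
    """Merge registered, linear-response exposures into a radiance map.
    `exposures` is a list of images (nested lists of 0-255 values);
    `times` holds the matching exposure times in seconds."""
    def weight(z):
        # Hat function: trust mid-range pixels, distrust near-black or
        # near-saturated ones.
        return min(z, 255 - z) / 127.5

    h, w = len(exposures[0]), len(exposures[0][0])
    radiance = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for img, t in zip(exposures, times):
                z = img[y][x]
                wz = weight(z)
                num += wz * z / t   # each sample estimates radiance as value/time
                den += wz
            radiance[y][x] = num / den if den > 0 else 0.0
    return radiance
```

A pixel that reads 100 at a 1 s exposure and 50 at 0.5 s is consistent with a radiance of 100 units, and the weighted average recovers exactly that.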


Proceedings ArticleDOI
18 Jun 2003
TL;DR: Two appearance-based methods for clustering a set of images of 3D (three-dimensional) objects into disjoint subsets corresponding to individual objects, based on the concept of illumination cones and another affinity measure based on image gradient comparisons are introduced.
Abstract: We introduce two appearance-based methods for clustering a set of images of 3D (three-dimensional) objects, acquired under varying illumination conditions, into disjoint subsets corresponding to individual objects. The first algorithm is based on the concept of illumination cones. According to the theory, the clustering problem is equivalent to finding convex polyhedral cones in the high-dimensional image space. To efficiently determine the conic structures hidden in the image data, we introduce the concept of conic affinity, which measures the likelihood of a pair of images belonging to the same underlying polyhedral cone. For the second method, we introduce another affinity measure based on image gradient comparisons. The algorithm operates directly on the image gradients by comparing the magnitudes and orientations of the image gradient at each pixel. Both methods have clear geometric motivations, and they operate directly on the images without the need for feature extraction or computation of pixel statistics. We demonstrate experimentally that both algorithms are surprisingly effective in clustering images acquired under varying illumination conditions with two large, well-known image data sets.

630 citations


Journal ArticleDOI
Yu Chen, J. Au, P. Kazlas, A. Ritenour, Holly G. Gates, M. McCreary
08 May 2003-Nature
TL;DR: The fabrication of such a display on a bendable active-matrix-array sheet is reported; the display has high pixel density and resolution, can be bent to a radius of curvature of 1.5 cm, and should greatly extend the range of display applications.
Abstract: Ultrathin, flexible electronic displays that look like print on paper are of great interest [1,2,3,4] for application in wearable computer screens, electronic newspapers and smart identity cards. Here we realize the fabrication of such a display on a bendable active-matrix-array sheet. The display is less than 0.3 mm thick, has high pixel density (160 pixels × 240 pixels) and resolution (96 pixels per inch), and can be bent to a radius of curvature of 1.5 cm without any degradation in contrast. This use of electronic ink technology on such an ultrathin, flexible substrate should greatly extend the range of display applications.

537 citations


Journal ArticleDOI
TL;DR: The combination of CAMSHIFT and SVMs produces both robust and efficient text detection, as time-consuming texture analyses for less relevant pixels are restricted, leaving only a small part of the input image to be texture-analyzed.
Abstract: The current paper presents a novel texture-based method for detecting texts in images. A support vector machine (SVM) is used to analyze the textural properties of texts. No external texture feature extraction module is used, but rather the intensities of the raw pixels that make up the textural pattern are fed directly to the SVM, which works well even in high-dimensional spaces. Next, text regions are identified by applying a continuously adaptive mean shift algorithm (CAMSHIFT) to the results of the texture analysis. The combination of CAMSHIFT and SVMs produces both robust and efficient text detection, as time-consuming texture analyses for less relevant pixels are restricted, leaving only a small part of the input image to be texture-analyzed.

Journal ArticleDOI
TL;DR: In this paper, a method of selecting endmembers from a spectral library for use in multiple endmember spectral mixture analysis (MESMA) was presented, which was used to map land cover in the Santa Ynez Mountains above Santa Barbara, CA, USA.

Patent
08 Jan 2003
TL;DR: In this paper, an active matrix display (AMD) with pixel electrodes, gate wirings and source wires is proposed, in which pixel electrodes are arranged in the pixel portions to realize a high numerical aperture without increasing the number of masks or the amount of steps.
Abstract: An active matrix display device having a pixel structure in which pixel electrodes, gate wirings and source wirings are suitably arranged in the pixel portions to realize a high numerical aperture without increasing the number of masks or the number of steps. The device comprises a gate electrode and a source wiring on an insulating surface, a first insulating layer on the gate electrode and on the source wiring, a semiconductor layer on the first insulating film, a second insulating layer on the semiconductor film, a gate wiring connected to the gate electrode on the second insulating layer, a connection electrode for connecting the source wiring and the semiconductor layer together, and a pixel electrode connected to the semiconductor layer.

Patent
29 May 2003
TL;DR: In this article, a real-time 3D interactive environment using a 3D camera is presented, where at least one computer-generated virtual object is inserted into the scene, and an interaction between a physical object in the scene and the virtual objects is detected based on coordinates of the virtual object and the obtained depth values.
Abstract: An invention is provided for affording a real-time three-dimensional interactive environment using a three-dimensional camera. The invention includes obtaining two-dimensional data values for a plurality of pixels representing a physical scene, and obtaining a depth value for each pixel of the plurality of pixels using a depth sensing device. Each depth value indicates a distance from a physical object in the physical scene to the depth sensing device. At least one computer-generated virtual object is inserted into the scene, and an interaction between a physical object in the scene and the virtual object is detected based on coordinates of the virtual object and the obtained depth values.

Journal ArticleDOI
TL;DR: An object-based approach for urban land cover classification from high-resolution multispectral image data that builds upon a pixel-based fuzzy classification approach is presented and is able to identify buildings, impervious surface, and roads in dense urban areas with 76%, 81%, and 99% classification accuracies, respectively.
Abstract: In this paper, we present an object-based approach for urban land cover classification from high-resolution multispectral image data that builds upon a pixel-based fuzzy classification approach. This combined pixel/object approach is demonstrated using pan-sharpened multispectral IKONOS imagery from dense urban areas. The fuzzy pixel-based classifier utilizes both spectral and spatial information to discriminate between spectrally similar road and building urban land cover classes. After the pixel-based classification, a technique that utilizes both spectral and spatial heterogeneity is used to segment the image to facilitate further object-based classification. An object-based fuzzy logic classifier is then implemented to improve upon the pixel-based classification by identifying one additional class in dense urban areas: nonroad, nonbuilding impervious surface. With the fuzzy pixel-based classification as input, the object-based classifier then uses shape, spectral, and neighborhood features to determine the final classification of the segmented image. Using these techniques, the object-based classifier is able to identify buildings, impervious surface, and roads in dense urban areas with 76%, 81%, and 99% classification accuracies, respectively.

Journal ArticleDOI
TL;DR: In this article, an arbitrated address-event imager is designed and fabricated in a 0.6-μm CMOS process; the imager is composed of 80 × 60 pixels of 32 × 30 μm. Tests conducted on the imager showed a large output dynamic range of 180 dB (under bright local illumination) for an individual pixel.
Abstract: An arbitrated address-event imager has been designed and fabricated in a 0.6-μm CMOS process. The imager is composed of 80 × 60 pixels of 32 × 30 μm. The value of the light intensity collected by each photosensitive element is inversely proportional to the pixel's interspike time interval. The readout of each spike is initiated by the individual pixel; therefore, the available output bandwidth is allocated according to pixel output demand. This encoding of light intensities favors brighter pixels, equalizes the number of integrated photons across light intensity, and minimizes power consumption. Tests conducted on the imager showed a large output dynamic range of 180 dB (under bright local illumination) for an individual pixel. The array, on the other hand, produced a dynamic range of 120 dB (under uniform bright illumination and when no lower bound was placed on the update rate per pixel). The dynamic range is 48.9 dB at 30 pixel updates/s. Power consumption is 3.4 mW in uniform indoor light at a mean event rate of 200 kHz, which updates each pixel 41.6 times per second. The imager is capable of updating each pixel 8300 times per second (under bright local illumination).

Journal ArticleDOI
TL;DR: The proposed demosaicking method consists of an interpolation step that estimates missing color values by exploiting spatial and spectral correlations among neighboring pixels, and a post-processing step that suppresses noticeable demosaicking artifacts by adaptive median filtering.
Abstract: Single-sensor digital cameras capture imagery by covering the sensor surface with a color filter array (CFA) such that each sensor pixel only samples one of three primary color values. To render a full-color image, an interpolation process, commonly referred to as CFA demosaicking, is required to estimate the other two missing color values at each pixel. In this paper, we present two contributions to CFA demosaicking: a new and improved CFA demosaicking method for producing high-quality color images and new image measures for quantifying the performance of demosaicking methods. The proposed demosaicking method consists of two successive steps: an interpolation step that estimates missing color values by exploiting spatial and spectral correlations among neighboring pixels, and a post-processing step that suppresses noticeable demosaicking artifacts by adaptive median filtering. Moreover, in recognition of the limitations of current image measures, we propose two types of image measures to quantify the performance of different demosaicking methods; the first type evaluates the fidelity of demosaicked images by computing the peak signal-to-noise ratio and CIELAB ΔE*ab for edge and smooth regions separately, and the second type accounts for one major demosaicking artifact: the zipper effect. We gauge the proposed demosaicking method and image measures using several existing methods as benchmarks, and demonstrate their efficacy using a variety of test images.
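As a point of reference for the interpolation step, the simplest CFA demosaicking baseline just averages the recorded neighbors of each missing channel. The sketch below is plain bilinear-style interpolation over a 3×3 window, not the paper's correlation-based method; `pattern` encodes a 2×2 CFA tile such as RGGB:

```python
def bilinear_demosaic(mosaic, pattern):
    """Fill missing color values in a CFA mosaic by averaging the recorded
    neighbors of the same channel inside a 3x3 window (bilinear baseline).
    `mosaic` is an HxW grid of samples; `pattern[y % 2][x % 2]` gives the
    channel index (0=R, 1=G, 2=B) recorded at pixel (y, x)."""
    h, w = len(mosaic), len(mosaic[0])
    out = [[[0.0, 0.0, 0.0] for _ in range(w)] for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for c in range(3):
                vals = [mosaic[j][i]
                        for j in range(max(0, y - 1), min(h, y + 2))
                        for i in range(max(0, x - 1), min(w, x + 2))
                        if pattern[j % 2][i % 2] == c]
                # Every window contains a full 2x2 tile, so each channel appears.
                out[y][x][c] = sum(vals) / len(vals)
    return out
```

On a uniform gray mosaic this reproduces the input exactly; on real edges it produces the zipper artifacts that the paper's adaptive post-processing step is designed to suppress.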

Patent
22 Aug 2003
TL;DR: In this article, a voltage-programmed current source circuit, a drive transistor and a light sensitive device for sensing the display element light output are used to control the voltage provided to the gate of the drive transistor.
Abstract: An active matrix display device comprises an array of display pixels provided over a common substrate. Each pixel has a voltage-programmed current source circuit, a drive transistor and a light sensitive device for sensing the display element light output. The light sensitive device provides a current dependent on the display element output, and the light sensitive device and the current source circuit define a feedback control loop which controls the voltage provided to the gate of the drive transistor. This pixel circuit uses a current source circuit to provide a gate voltage to a drive transistor. This enables the current source circuit to operate at low current levels, and therefore under low voltage stress.

Journal ArticleDOI
Greg Ward1
TL;DR: A three million pixel exposure can be aligned in a fraction of a second on a contemporary microprocessor using this technique, and the cost of the algorithm is linear with respect to the number of pixels and effectively independent of the maximum translation.
Abstract: In this paper, we present a fast, robust, and completely automatic method for translational alignment of hand-held photographs. The technique employs percentile threshold bitmaps to accelerate image operations and avoid problems with the varying exposure levels used in high dynamic range (HDR) photography. An image pyramid is constructed from grayscale versions of each exposure, and these are converted to bitmaps which are then aligned horizontally and vertically using inexpensive shift and difference operations over each image. The cost of the algorithm is linear with respect to the number of pixels and effectively independent of the maximum translation. A three million pixel exposure can be aligned in a fraction of a second on a contemporary microprocessor using this technique.
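The heart of the technique, thresholding each exposure at its median gray level so the bitmaps agree across exposure levels and then scoring candidate translations by counting differing bits, can be sketched in miniature. This version searches a small shift window exhaustively and omits the image pyramid of the full method:

```python
def median_bitmap(gray):
    """Threshold a grayscale image (nested lists) at its median value --
    the exposure-invariant representation the alignment operates on."""
    flat = sorted(v for row in gray for v in row)
    med = flat[len(flat) // 2]
    return [[1 if v > med else 0 for v in row] for row in gray]

def best_shift(bm_a, bm_b, max_shift=2):
    """Return the (dy, dx) offset into bm_b that minimizes the count of
    differing bits against bm_a over the overlapping region."""
    h, w = len(bm_a), len(bm_a[0])
    best, best_err = (0, 0), None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = sum(bm_a[y][x] ^ bm_b[y + dy][x + dx]
                      for y in range(h) for x in range(w)
                      if 0 <= y + dy < h and 0 <= x + dx < w)
            if best_err is None or err < best_err:
                best, best_err = (dy, dx), err
    return best
```

The full method makes this fast by recursing over a grayscale pyramid, so each level only needs a ±1 pixel search, which is why the cost stays linear in the number of pixels.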

Proceedings ArticleDOI
Jojic, Frey, Kannan
13 Oct 2003
TL;DR: The epitome of an image is its miniature, condensed version containing the essence of the textural and shape properties of the image, as opposed to previously used simple image models, such as templates or basis functions.
Abstract: We present novel simple appearance and shape models that we call epitomes. The epitome of an image is its miniature, condensed version containing the essence of the textural and shape properties of the image. As opposed to previously used simple image models, such as templates or basis functions, the size of the epitome is considerably smaller than the size of the image or object it represents, but the epitome still contains most constitutive elements needed to reconstruct the image. A collection of images often shares an epitome, e.g., when images are a few consecutive frames from a video sequence, or when they are photographs of similar objects. A particular image in a collection is defined by its epitome and a smooth mapping from the epitome to the image pixels. When the epitomic representation is used within a hierarchical generative model, appropriate inference algorithms can be derived to extract the epitome from a single image or a collection of images and at the same time perform various inference tasks, such as image segmentation, motion estimation, object removal and super-resolution.

Patent
13 Feb 2003
TL;DR: In this article, the image processing method and apparatus acquire input image data from the image recorded optically with a taking lens, acquire the information about the lens used to record the image, and perform image processing schemes on the input data using the acquired lens information.
Abstract: The image processing method and apparatus acquire input image data from an image recorded optically with a taking lens, acquire information about the lens used to record the image, and perform image processing on the input image data using the acquired lens information; the type of lens used is identified from the acquired lens information, and the intensity of sharpness enhancement of the corresponding image is altered in accordance with the identified lens type. The characteristics of the taking lens may also be acquired from the lens information; using the obtained lens characteristics together with position information for the recorded image, the input image data are subjected to aberration correction to correct the deterioration in image quality due to the lens characteristics. Information about the focal length of the taking lens effective at the time of image recording may additionally be used, image processing may be performed in two crossed directions of the recorded image, or parameters for correcting aberrations in the imaging plane of the taking lens may be scaled so that they relate to the output image data on a pixel basis. High-quality prints reproducing high-quality images can thus be obtained from original images that were recorded with compact cameras, digital cameras and other conventional inexpensive cameras using low-performance lenses.

Patent
20 Mar 2003
TL;DR: In this paper, an image conversion method obtains the luminance of each pixel of a converted image from the source image, a low-frequency image and luminance-dependent coefficients, and a companion method detects the noise level of an original signal in real time from luminance variation in unsaturated local regions.
Abstract: In an image conversion method and apparatus, luminance Y3 of each pixel of a converted image is obtained from the expression Y3 = C1·Y1 + C2·Y2, where Y1 is the luminance of the corresponding pixel of a source image, Y2 is the luminance of the corresponding pixel of a low-frequency image, and C1 and C2 are functions of the luminance Y2. Since C1 + C2 is constant when Y2 <= T2, contrast of a low-frequency component can be prevented from decreasing. In a portion in which the luminance Y2 of the low-frequency image is low, the low-frequency image is enhanced; in a portion in which it is high, the low-frequency image is suppressed. In a method and apparatus for detecting the noise level of an original signal in real time, a plurality of local regions are set on an input image such that the local regions are distributed uniformly over the entire area of the image. In each local region, it is determined whether or not the luminance is saturated. Variation in luminance is detected in each of a plurality of unsaturated local regions, and the noise level is determined on the basis of the detected variation.

Journal ArticleDOI
TL;DR: The results suggest that MIR is capable of operating at low photon count levels, and the method therefore shows promise for use with conventional x-ray sources; in addition to producing new types of object descriptions, MIR produces substantially more accurate images than its predecessor, DEI.
Abstract: Conventional radiography produces a single image of an object by measuring the attenuation of an x-ray beam passing through it. When imaging weakly absorbing tissues, x-ray attenuation may be a suboptimal signature of disease-related information. In this paper we describe a new phase-sensitive imaging method, called multiple-image radiography (MIR), which is an improvement on a prior technique called diffraction-enhanced imaging (DEI). This paper elaborates on our initial presentation of the idea in Wernick et al (2002 Proc. Int. Symp. Biomed. Imaging pp 129–32). MIR simultaneously produces several images from a set of measurements made with a single x-ray beam. Specifically, MIR yields three images depicting separately the effects of refraction, ultrasmall-angle scatter and attenuation by the object. All three images have good contrast, in part because they are virtually immune from degradation due to scatter at higher angles. MIR also yields a very comprehensive object description, consisting of the angular intensity spectrum of a transmitted x-ray beam at every image pixel, within a narrow angular range. Our experiments are based on data acquired using a synchrotron light source; however, in preparation for more practical implementations using conventional x-ray sources, we develop and evaluate algorithms designed for Poisson noise, which is characteristic of photon-limited imaging. The results suggest that MIR is capable of operating at low photon count levels, and therefore the method shows promise for use with conventional x-ray sources.

Journal ArticleDOI
TL;DR: The combination of noninteger pixel displacement identification without interpolation, robustness to noise, and limited computational complexity make this approach a very attractive extension of the PCM.
Abstract: The phase correlation method (PCM) is known to provide straightforward estimation of rigid translational motion between two images. It is often claimed that the original method is best suited to identify integer pixel displacements, which has prompted the development of numerous subpixel displacement identification methods. However, the fact that the phase correlation matrix is rank one for a noise-free rigid translation model is often overlooked. This property leads to the low complexity subspace identification technique presented here. The combination of noninteger pixel displacement identification without interpolation, robustness to noise, and limited computational complexity make this approach a very attractive extension of the PCM. In addition, this approach is shown to be complementary with other subpixel phase correlation based techniques.
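The classical PCM that the paper extends reduces to a few lines: form the normalized cross-power spectrum, inverse-transform it, and read off the peak, which for a pure integer translation is an impulse at the displacement. A sketch of the original method (not the paper's rank-one subspace extension):

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer translation (dy, dx) such that
    b ~ np.roll(a, (dy, dx), axis=(0, 1)), via classical phase correlation."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(A) * B
    cross /= np.abs(cross) + 1e-12       # keep phase only: normalized cross-power
    corr = np.real(np.fft.ifft2(cross))  # a rigid translation yields an impulse
    dy, dx = np.unravel_index(int(np.argmax(corr)), corr.shape)
    h, w = corr.shape
    # The correlation is circular, so wrap peaks past the midpoint to negative shifts.
    dy = dy - h if dy > h // 2 else dy
    dx = dx - w if dx > w // 2 else dx
    return int(dy), int(dx)
```

This integer estimate is exactly the starting point the paper discusses; its subspace technique then exploits the rank-one structure of the phase correlation matrix to recover non-integer displacements without interpolation.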

Proceedings ArticleDOI
18 Jun 2003
TL;DR: A robust image synthesis method to automatically infer missing information from a damaged 2D image by tensor voting, followed by a voting process that infers non-iteratively the optimal color values in the ND texture space for each defective pixel.
Abstract: We present a robust image synthesis method to automatically infer missing information from a damaged 2D image by tensor voting. Our method translates image color and texture information into an adaptive ND tensor, followed by a voting process that infers non-iteratively the optimal color values in the ND texture space for each defective pixel. ND tensor voting can be applied to images consisting of roughly homogeneous and periodic textures (e.g. a brick wall), as well as difficult images of natural scenes, which contain complex color and texture information. To effectively tackle the latter type of difficult images, a two-step method is proposed. First, we perform texture-based segmentation in the input image, and extrapolate partitioning curves to generate a complete segmentation for the image. Then, missing colors are synthesized using ND tensor voting. Automatic tensor scale analysis is used to adapt to different feature scales inherent in the input. We demonstrate the effectiveness of our approach using a difficult set of real images.

Journal ArticleDOI
TL;DR: A high-speed camera, dubbed Brandaris 128, was constructed that combines a customized rotating mirror camera frame with charge coupled device (CCD) image detectors and is operated almost entirely under computer control.
Abstract: A high-speed camera was constructed that combines a customized rotating mirror camera frame with charge coupled device (CCD) image detectors and is operated almost entirely under computer control. High-sensitivity CCDs are used so that image intensifiers, which would degrade image quality, are not necessary. Customized electronics and instruments improve flexibility and allow precise control of the image acquisition process. A full sequence of 128 consecutive image frames with 500×292 pixels each can be acquired at a maximum frame rate of 25 million frames/s. Full sequences can be repeated every 20 ms, and six full sequences can be stored in the in-camera memory buffer. A high-speed communication link to a computer allows each full sequence of about 20 Mbytes to be stored on a hard disk in less than 1 s. The sensitivity of the camera has an equivalent International Standards Organization (ISO) number of 2500. Resolution was measured to be 36 lp/mm on the detector plane of the camera, while under a microscope a bar pattern with 400 nm line-pair spacing could be resolved. Some high-speed events recorded with this camera, dubbed Brandaris 128, are presented.

Journal ArticleDOI
TL;DR: Sub-pixel mapping is a technique designed to use the information contained in these mixed pixels to obtain a sharpened image using genetic algorithms combined with the assumption of spatial dependence to assign a location to every sub-pixel.
Abstract: In remotely sensed images, mixed pixels will always be present. Soft classification defines the membership degree of these pixels for the different land cover classes. Sub-pixel mapping is a technique designed to use the information contained in these mixed pixels to obtain a sharpened image. Pixels are divided into sub-pixels, representing the land cover class fractions. Genetic algorithms combined with the assumption of spatial dependence assign a location to every sub-pixel. The algorithm was tested on synthetic and degraded real imagery. The accuracy measures obtained were higher than those of conventional hard classifications.
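The spatial-dependence assumption, that a sub-pixel is more likely to belong to a class the more of that class surrounds it, can be illustrated without a genetic algorithm. The sketch below (a hypothetical `subpixel_map` helper, two classes only) places each pixel's quota of class-1 sub-pixels greedily at the positions most attracted to neighbouring class-1 fractions; the paper instead searches placements with a genetic algorithm.

```python
import numpy as np

def subpixel_map(fractions, scale=2):
    """Greedy spatial-dependence sub-pixel mapping for a two-class fraction image.

    Each coarse pixel is split into scale x scale sub-pixels; the number of
    class-1 sub-pixels matches the pixel's class fraction, and they are placed
    at the positions most attracted to the neighbouring class-1 fractions.
    """
    h, w = fractions.shape
    out = np.zeros((h * scale, w * scale), dtype=int)
    padded = np.pad(fractions, 1, mode='edge')
    for i in range(h):
        for j in range(w):
            n_sub = int(round(fractions[i, j] * scale * scale))
            if n_sub == 0:
                continue
            attr = np.zeros((scale, scale))
            for si in range(scale):
                for sj in range(scale):
                    for di in (-1, 0, 1):
                        for dj in (-1, 0, 1):
                            if di == 0 and dj == 0:
                                continue
                            # inverse-distance attraction toward each neighbour
                            dy = di * scale - (si - (scale - 1) / 2)
                            dx = dj * scale - (sj - (scale - 1) / 2)
                            attr[si, sj] += padded[i + 1 + di, j + 1 + dj] / np.hypot(dy, dx)
            # fill the n_sub most attracted sub-pixel positions with class 1
            for k in np.argsort(attr, axis=None)[::-1][:n_sub]:
                out[i * scale + k // scale, j * scale + k % scale] = 1
    return out
```

A greedy pass like this can leave locally suboptimal placements, which is the motivation for a global search such as the genetic algorithm used in the paper.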

Proceedings ArticleDOI
18 Jun 2003
TL;DR: A novel background subtraction method for detecting foreground objects in dynamic scenes involving swaying trees and fluttering flags using the property that image variations at neighboring image blocks have strong correlation, also known as "cooccurrence".
Abstract: This paper presents a novel background subtraction method for detecting foreground objects in dynamic scenes involving swaying trees and fluttering flags. Most methods proposed so far adjust the permissible range of the background image variations according to the training samples of background images. Thus, the detection sensitivity decreases at those pixels having wide permissible ranges. If we can narrow the ranges by analyzing input images, the detection sensitivity can be improved. For this narrowing, we employ the property that image variations at neighboring image blocks have strong correlation, also known as "cooccurrence". This approach is essentially different from chronological background image updating or morphological postprocessing. Experimental results for real images demonstrate the effectiveness of our method.
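The conventional baseline the paper improves on, a per-pixel permissible range learned from training frames, can be sketched as follows. The function names and the tolerance parameters `k` and `floor` are our own illustrative choices; the paper's co-occurrence-based narrowing of the ranges is not shown.

```python
import numpy as np

def train_background(frames):
    """Per-pixel Gaussian background model (mean and std) from training frames."""
    stack = np.stack(frames).astype(float)
    return stack.mean(axis=0), stack.std(axis=0)

def subtract(frame, mean, std, k=3.0, floor=2.0):
    """Flag pixels outside mean +/- k*std as foreground.

    Dynamic backgrounds (swaying trees, fluttering flags) inflate std,
    widening the permissible range and lowering sensitivity at those
    pixels -- the weakness the paper's co-occurrence analysis addresses.
    """
    return np.abs(frame - mean) > k * np.maximum(std, floor)
```

Narrowing `std` at a pixel using the correlated variation of its neighbouring blocks, rather than its own history alone, is the key step this baseline omits.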

Journal ArticleDOI
TL;DR: A new approach for the segmentation of local textile defects using a feed-forward neural network and a new low-cost solution for fast web inspection using a linear neural network are presented.