
Showing papers on "Image sensor" published in 2009


Journal ArticleDOI
30 Apr 2009-Nature
TL;DR: This work maps a two-dimensional (2D) image into a serial time-domain data stream and simultaneously amplifies the image in the optical domain, overcoming the compromise between sensitivity and frame rate without resorting to cooling or high-intensity illumination.
Abstract: Ultrafast real-time optical imaging is used in many areas of science, from biological imaging to the study of shockwaves. But in systems that undergo changes on very fast timescales, conventional technologies such as CCD (charge-coupled-device) cameras are compromised: either imaging speed or sensitivity has to be sacrificed unless special cooling or extra-bright light is used, because it takes time to read out the data from sensor arrays, and at high frame rates only a few photons are collected. Now a UCLA team has developed an imaging method that overcomes these limitations and offers frame rates at least a thousand times faster than those of conventional CCDs, making this perhaps the world's fastest continuously running camera, with a shutter speed of 440 picoseconds. The technology, serial time-encoded amplified microscopy (STEAM), maps a two-dimensional image into a serial time-domain data stream and simultaneously amplifies the image in the optical domain; a single-pixel photodetector then captures the entire image.

Ultrafast real-time optical imaging is an indispensable tool for studying dynamical events such as shock waves [1,2], chemical dynamics in living cells [3,4], neural activity [5,6], laser surgery [7-9] and microfluidics [10,11]. However, conventional CCDs (charge-coupled devices) and their complementary metal–oxide–semiconductor (CMOS) counterparts are incapable of capturing fast dynamical processes with high sensitivity and resolution. This is due in part to a technological limitation: it takes time to read out the data from sensor arrays. Also, there is a fundamental compromise between sensitivity and frame rate; at high frame rates, fewer photons are collected during each frame, a problem that affects nearly all optical imaging systems. Here we report an imaging method that overcomes these limitations and offers frame rates that are at least 1,000 times faster than those of conventional CCDs. Our technique maps a two-dimensional (2D) image into a serial time-domain data stream and simultaneously amplifies the image in the optical domain. We capture an entire 2D image using a single-pixel photodetector and achieve a net image amplification of 25 dB (a factor of 316). This overcomes the compromise between sensitivity and frame rate without resorting to cooling and high-intensity illumination. As a proof of concept, we perform continuous real-time imaging at a frame period of 163 ns (a frame rate of 6.1 MHz) and a shutter speed of 440 ps. We also demonstrate real-time imaging of microfluidic flow and phase-explosion effects that occur during laser ablation.

699 citations


Journal ArticleDOI
TL;DR: Explains how photo-response nonuniformity (PRNU) of imaging sensors can be used for a variety of important digital forensic tasks, such as device identification, device linking, recovery of processing history, and detection of digital forgeries.
Abstract: The article explains how photo-response nonuniformity (PRNU) of imaging sensors can be used for a variety of important digital forensic tasks, such as device identification, device linking, recovery of processing history, and detection of digital forgeries. The PRNU is an intrinsic property of all digital imaging sensors due to slight variations among individual pixels in their ability to convert photons to electrons. Consequently, every sensor casts a weak noise-like pattern onto every image it takes. This pattern, which plays the role of a sensor fingerprint, is essentially an unintentional stochastic spread-spectrum watermark that survives processing, such as lossy compression or filtering. This tutorial explains how this fingerprint can be estimated from images taken by the camera and later detected in a given image to establish image origin and integrity. Various forensic tasks are formulated as a two-channel hypothesis testing problem approached using the generalized likelihood ratio test. The performance of the introduced forensic methods is briefly illustrated on examples to give the reader a sense of the performance.
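
The fingerprint workflow described above is compact enough to sketch. Below is a minimal, illustrative numpy version of the standard PRNU pipeline: the maximum-likelihood fingerprint estimate K = Σᵢ WᵢIᵢ / Σᵢ Iᵢ² built from denoising residuals, followed by a normalized-correlation detector. The Gaussian filter is only a stand-in denoiser (the original work uses a wavelet-based filter), and all names here are our own, not the article's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter  # stand-in denoiser

def noise_residual(img):
    """Residual W = I - F(I), where F is any content-suppressing denoiser."""
    return img - gaussian_filter(img, sigma=1.0)

def estimate_fingerprint(images):
    """ML-style estimate of the PRNU factor K from N images of one camera:
    K = sum_i(W_i * I_i) / sum_i(I_i ** 2)."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img in images:
        img = img.astype(np.float64)
        num += noise_residual(img) * img
        den += img ** 2
    return num / (den + 1e-8)

def detect(img, K):
    """Normalized correlation between the image residual and I*K."""
    img = img.astype(np.float64)
    w = noise_residual(img).ravel()
    s = (img * K).ravel()
    w -= w.mean(); s -= s.mean()
    return float(w @ s / (np.linalg.norm(w) * np.linalg.norm(s) + 1e-12))
```

In practice the correlation value would be compared against a threshold chosen for a desired false-alarm rate, in the spirit of the two-channel hypothesis test the article formulates.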

326 citations


Proceedings ArticleDOI
16 Apr 2009
TL;DR: This work formulates the method in a variational Bayesian framework and performs the reconstruction of both the surface of the scene and the (superresolved) light field.
Abstract: Light field cameras have been recently shown to be very effective in applications such as digital refocusing and 3D reconstruction. In a single snapshot these cameras provide a sample of the light field of a scene by trading off spatial resolution with angular resolution. Current methods produce images at a resolution that is much lower than that of traditional imaging devices. However, by explicitly modeling the image formation process and incorporating priors such as Lambertianity and texture statistics, these types of images can be reconstructed at a higher resolution. We formulate this method in a variational Bayesian framework and perform the reconstruction of both the surface of the scene and the (superresolved) light field. The method is demonstrated on both synthetic and real images captured with our light-field camera prototype.

279 citations


Patent
04 Mar 2009
TL;DR: In this article, a device for processing data includes a first input port for receiving color image data from a first image sensor and a second input port for receiving depth-related image data from a second image sensor.
Abstract: A device for processing data includes a first input port for receiving color image data from a first image sensor and a second input port for receiving depth-related image data from a second image sensor. Processing circuitry generates a depth map using the depth-related image data. At least one output port conveys the depth map and the color image data to a host computer.

227 citations


Patent
07 Aug 2009
TL;DR: In this article, an example apparatus to count the number of people in a monitored environment is described, which includes an image sensor having a plurality of pixels, a pseudorandom number generator to generate pseudorandom coordinates, and a reader to read first pixel data generated by a first pixel of the image sensor at a first time, the first pixel corresponding to the pseudorandom coordinates.
Abstract: Methods and apparatus to count persons in a monitored environment are disclosed. An example apparatus to count the number of people in a monitored environment is described, which includes an image sensor having a plurality of pixels, a pseudorandom number generator to generate pseudorandom coordinates, a reader to read first pixel data generated by a first pixel of the image sensor at a first time, the first pixel corresponding to the pseudorandom coordinates, a comparator to compare the first pixel data with second pixel data generated by the first pixel at a second time different from the first time to generate a change value, and a counter configured to generate a count of persons based at least on the change value.
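
As a rough illustration of the claimed mechanism (pseudorandom pixel selection, two-time comparison, change value), here is a hedged Python sketch; the sampling density, threshold, and the mapping from change values to a person count are hypothetical placeholders, not the patent's specification.

```python
import numpy as np

rng = np.random.default_rng(seed=1)  # the pseudorandom number generator

def count_changes(frame_t1, frame_t2, n_samples=500, threshold=25):
    """Read pixels at pseudorandom coordinates at two times, compare the
    readings, and return how many changed; counter logic would map this
    to a person count in an application-specific way."""
    h, w = frame_t1.shape
    ys = rng.integers(0, h, n_samples)
    xs = rng.integers(0, w, n_samples)
    first = frame_t1[ys, xs].astype(np.int32)    # reader: first pixel data
    second = frame_t2[ys, xs].astype(np.int32)   # same pixels, second time
    changed = np.abs(second - first) > threshold  # comparator: change value
    return int(changed.sum())                     # input to the counter
```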

213 citations


Journal ArticleDOI
TL;DR: A practical, iterative algorithm for image reconstruction in undersampled tomographic systems, such as digital breast tomosynthesis (DBT), is developed; results indicate that there may be a substantial advantage in using this image-reconstruction algorithm for microcalcification imaging.
Abstract: Purpose: The authors develop a practical, iterative algorithm for image reconstruction in undersampled tomographic systems, such as digital breast tomosynthesis (DBT). Methods: The algorithm controls image regularity by minimizing the image total p-variation (TpV), a function that reduces to the total variation when p=1.0 or the image roughness when p=2.0. Constraints on the image, such as image positivity and estimated projection-data tolerance, are enforced by projection onto convex sets. The fact that the tomographic system is undersampled translates to the mathematical property that many widely varied resultant volumes may correspond to a given data tolerance. Thus the application of image regularity serves two purposes: (1) reduction in the number of resultant volumes out of those allowed by fixing the data tolerance, finding the minimum image TpV for fixed data tolerance, and (2) traditional regularization, sacrificing data fidelity for higher image regularity. The present algorithm allows for this dual role of image regularity in undersampled tomography. Results: The proposed image-reconstruction algorithm is applied to three clinical DBT data sets. The DBT cases include one with microcalcifications and two with masses. Conclusions: Results indicate that there may be a substantial advantage in using the present image-reconstruction algorithm for microcalcification imaging.
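
For concreteness, the total p-variation referred to here is conventionally written as a p-norm over the discrete gradient magnitudes (our notation; the paper's exact discretization may differ):

$$\mathrm{T}p\mathrm{V}(u) \;=\; \Bigl(\sum_{j}\bigl|(\nabla u)_j\bigr|^{p}\Bigr)^{1/p},$$

which reduces to the total variation for p = 1.0 and to the image roughness for p = 2.0, matching the two limiting cases named in the abstract.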

208 citations


Journal ArticleDOI
TL;DR: A novel fluorescence imaging system developed for real-time interventional imaging applications; it implements a correction scheme that corrects epi-illumination fluorescence images for light-intensity variations in tissue.
Abstract: We present a novel fluorescence imaging system developed for real-time interventional imaging applications. The system implements a correction scheme that corrects epi-illumination fluorescence images for light-intensity variations in tissue. The implementation is based on three cameras operating in parallel behind a common lens, which allows the concurrent collection of color, fluorescence, and light-attenuation images at the excitation wavelength from the same field of view. The correction is based on the ratio of the fluorescence image over the light-attenuation image. Color images and video are used for surgical guidance and for registration with the corrected fluorescence images. We showcase the performance metrics of this system on phantoms and animals, and discuss the advantages over conventional epi-illumination systems developed for real-time applications, as well as the limits of validity of corrected epi-illumination fluorescence imaging.
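
The correction scheme amounts to a per-pixel ratio of the concurrently acquired images. A minimal sketch, assuming co-registered floating-point images from the common-lens, three-camera setup (the epsilon guard is our addition):

```python
import numpy as np

def correct_fluorescence(fluo, attenuation, eps=1e-6):
    """Ratio correction: divide the epi-illumination fluorescence image by
    the co-registered light-attenuation image acquired at the excitation
    wavelength (same lens, same field of view). eps avoids division by
    zero in dark regions."""
    return fluo.astype(np.float64) / (attenuation.astype(np.float64) + eps)
```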

180 citations


Patent
Ken Utagawa1, Yosuke Kusaka1
25 Nov 2009
TL;DR: In this patent, a reduction unit adjusts the signal level of the focus detection signals output from the plurality of focus detection pixels to be equal to or less than the signal level of the image signals each output from one of the pixels under a given light-receiving condition.
Abstract: An image sensor includes: a plurality of image-capturing pixels that, upon each receiving a partial light flux within a predetermined wavelength range, which is part of a photographic light flux used to form an optical image, output image signals corresponding to the optical image; a plurality of focus detection pixels that receive a pair of focus detection light fluxes in a wider wavelength range than the predetermined wavelength range and output a pair of focus detection signals; and a reduction unit that adjusts a signal level of the focus detection signals output from the plurality of focus detection pixels to be equal to or less than a signal level of the image signals each output from one of the plurality of image-capturing pixels under a given light receiving condition.

165 citations


Journal ArticleDOI
TL;DR: A new circuit topology for potentiostats that interface with three-electrode amperometric electrochemical sensors that consumes very low power, occupies a very small die area, and has potentially very low noise is presented.
Abstract: We present a new circuit topology for potentiostats that interface with three-electrode amperometric electrochemical sensors. In this new topology, a current-copying circuit, e.g., a current mirror, is placed in the sensor current path to generate a mirrored image of the sensor current. The mirrored image is then measured and processed instead of the sensor current itself. The new potentiostat topology consumes very low power, occupies a very small die area, and has potentially very low noise. These characteristics make the new topology very suitable for portable or bioimplantable applications. In order to demonstrate the feasibility of the new topology, we present the results of a potentiostat circuit implemented in a 0.18-µm CMOS process. The circuit converts the sensor current to a frequency-modulated pulse waveform, for which the time difference between two consecutive pulses is inversely proportional to the sensor current. The potentiostat measures the sensor current from 1 nA to 1 µA with better than 0.1% accuracy. It consumes only 70 µW of power from a 1.8-V supply voltage and occupies an area of 0.02 mm².

165 citations


Journal ArticleDOI
13 Jan 2009-Sensors
TL;DR: A review of CMOS-based high-speed imager design is presented, and the various implementations that target ultrahigh-speed imaging are described, along with the design, layout and simulation results of an ultrahigh-acquisition-rate CMOS active-pixel sensor imager that can take 8 frames at a rate of more than a billion frames per second.
Abstract: Recent advances in deep submicron CMOS technologies and improved pixel designs have enabled CMOS-based imagers to surpass charge-coupled device (CCD) imaging technology for mainstream applications. The parallel outputs that CMOS imagers can offer, in addition to complete camera-on-a-chip solutions enabled by fabrication in standard CMOS technologies, result in compelling advantages in speed and system throughput. Since there is a practical limit on the minimum pixel size (4–5 μm) due to limitations in the optics, CMOS technology scaling can allow for an increased number of transistors to be integrated into the pixel to improve both detection and signal processing. Such smart pixels truly show the potential of CMOS technology for imaging applications, allowing CMOS imagers to achieve the image quality and global shuttering performance necessary to meet the demands of ultrahigh-speed applications. In this paper, a review of CMOS-based high-speed imager design is presented and the various implementations that target ultrahigh-speed imaging are described. This work also discusses the design, layout and simulation results of an ultrahigh acquisition rate CMOS active-pixel sensor imager that can take 8 frames at a rate of more than a billion frames per second (fps).

162 citations


Journal ArticleDOI
TL;DR: A principal component analysis (PCA) based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics, and which can effectively suppress noise while preserving color edges and details.
Abstract: Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by separate denoising processing. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on the CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA) based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations present in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
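
The heart of the method, shrinkage in a PCA basis trained on patches from a local supporting window, can be sketched briefly. A hedged numpy illustration (Wiener-style shrinkage with a known noise variance; the paper additionally organizes the CFA samples by their position in the mosaic to respect spectral correlations):

```python
import numpy as np

def pca_denoise_window(patches, noise_var):
    """Shrink patch coefficients in a locally trained PCA basis.
    patches: (n_patches, patch_dim) array gathered from one supporting
    window; returns denoised patches of the same shape. Illustrative."""
    mean = patches.mean(axis=0)
    X = patches - mean
    cov = X.T @ X / len(X)
    eigvals, eigvecs = np.linalg.eigh(cov)
    Y = X @ eigvecs                                   # PCA-domain coefficients
    signal_var = np.maximum(Y.var(axis=0) - noise_var, 0.0)
    shrink = signal_var / (signal_var + noise_var)    # Wiener-style gain
    return (Y * shrink) @ eigvecs.T + mean
```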

Journal ArticleDOI
27 Jul 2009
TL;DR: A new camera-based interaction solution where an ordinary camera can detect small optical tags from a relatively large distance, using intelligent binary coding to estimate the relative distance and angle to the camera, with potential applications in augmented reality and motion capture.
Abstract: We show a new camera-based interaction solution where an ordinary camera can detect small optical tags from a relatively large distance. Current optical tags, such as barcodes, must be read within a short range and the codes occupy valuable physical space on products. We present a new low-cost optical design so that the tags can be shrunk to 3 mm visible diameter, and unmodified ordinary cameras several meters away can be set up to decode the identity plus the relative distance and angle. The design exploits the bokeh effect of ordinary camera lenses, which maps rays exiting from an out-of-focus scene point into a disk-like blur on the camera sensor. This bokeh-code, or Bokode, is a barcode design with a simple lenslet over the pattern. We show that a code with 15 μm features can be read using an off-the-shelf camera from distances of up to 2 meters. We use intelligent binary coding to estimate the relative distance and angle to the camera, and show potential for applications in augmented reality and motion capture. We analyze the constraints and performance of the optical system, and discuss several plausible application scenarios.

Journal ArticleDOI
11 Dec 2009-Sensors
TL;DR: Experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000, were performed; two main aspects are treated: the calibration of the distance measurements of the SR-4000 camera, and the photogrammetric calibration of the amplitude images delivered by the camera using a purpose-built multi-resolution field made of high-contrast targets.
Abstract: 3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and are reported in this paper. Two main aspects are treated. The first is the calibration of the distance measurements of the SR-4000 camera, which covers the evaluation of the camera warm-up time, the evaluation of the distance measurement error, and a study of the influence of the camera orientation with respect to the observed object on the distance measurements. The second is the photogrammetric calibration of the amplitude images delivered by the camera, using a purpose-built multi-resolution field made of high-contrast targets.

Journal ArticleDOI
TL;DR: A standard indicator is recommended that reflects both the radiation exposure incident on the detector after every exposure event and the noise level present in the image data, in order to facilitate the production of consistent, high-quality digital radiographic images at acceptable patient doses.
Abstract: Digital radiographic imaging systems, such as those using photostimulable storage phosphor, amorphous selenium, amorphous silicon, CCD, and MOSFET technology, can produce adequate image quality over a much broader range of exposure levels than that of screen/film imaging systems. In screen/film imaging, the final image brightness and contrast are indicative of over- and underexposure. In digital imaging, brightness and contrast are often determined entirely by digital postprocessing of the acquired image data, so overexposure and underexposure are not readily recognizable. As a result, patient dose has a tendency to gradually increase over time after a department converts from screen/film-based imaging to digital radiographic imaging. The purpose of this report is to recommend a standard indicator which reflects the radiation exposure that is incident on a detector after every exposure event and that reflects the noise levels present in the image data. The intent is to facilitate the production of consistent, high-quality digital radiographic images at acceptable patient doses. This should be based not on image optical density or brightness but on feedback regarding the detector exposure, provided and actively monitored by the imaging system. A standard beam calibration condition is recommended that is based on RQA5 but uses filtration materials that are commonly available and simple to use. Recommendations on clinical implementation of the indices to control image quality and patient dose are derived from historical tolerance limits and presented as guidelines.

Journal ArticleDOI
01 Dec 2009
TL;DR: This work transforms an LCD into a display that supports both 2D multi-touch and unencumbered 3D gestures, and exploits the spatial light modulation capability of LCDs to allow lensless imaging without interfering with display functionality.
Abstract: We transform an LCD into a display that supports both 2D multi-touch and unencumbered 3D gestures. Our BiDirectional (BiDi) screen, capable of both image capture and display, is inspired by emerging LCDs that use embedded optical sensors to detect multiple points of contact. Our key contribution is to exploit the spatial light modulation capability of LCDs to allow lensless imaging without interfering with display functionality. We switch between a display mode showing traditional graphics and a capture mode in which the backlight is disabled and the LCD displays a pinhole array or an equivalent tiled-broadband code. A large-format image sensor is placed slightly behind the liquid crystal layer. Together, the image sensor and LCD form a mask-based light field camera, capturing an array of images equivalent to that produced by a camera array spanning the display surface. The recovered multi-view orthographic imagery is used to passively estimate the depth of scene points. Two motivating applications are described: a hybrid touch plus gesture interaction and a light-gun mode for interacting with external light-emitting widgets. We show a working prototype that simulates the image sensor with a camera and diffuser, allowing interaction up to 50 cm in front of a modified 20.1 inch LCD.

Patent
15 Jan 2009
TL;DR: In this article, an approximate imaging plane is calculated from the relative positions of plural evaluation points, which are defined by transforming the in-focus coordinate value of each imaging position into a three-dimensional coordinate system.
Abstract: A lens unit (15) and a sensor unit (16) are held by a lens holding mechanism (44) and a sensor shift mechanism (45). As the sensor unit (16) is moved in a Z axis direction on a second slide stage (76), a chart image is captured with an image sensor (12) through a taking lens (6) so as to obtain in-focus coordinate values in at least five imaging positions on an imaging surface (12a). An approximate imaging plane is calculated from the relative position of plural evaluation points which are defined by transforming the in-focus coordinate value of each imaging position in a three dimensional coordinate system. The second slide stage (76) and a biaxial rotation stage (74) adjust the position and tilt of the sensor unit (16) so that the imaging surface (12a) overlaps with the approximate imaging plane.
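
The "approximate imaging plane" step is, in essence, a plane fit to the in-focus evaluation points. A minimal numpy sketch of such a fit under a least-squares assumption (the patent does not specify the fitting method; all names here are illustrative):

```python
import numpy as np

def fit_imaging_plane(points):
    """Least-squares fit of the plane z = a*x + b*y + c to evaluation
    points (the patent uses at least five imaging positions).
    Returns (a, b, c): a and b give the tilt, c the Z offset."""
    pts = np.asarray(points, dtype=np.float64)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    (a, b, c), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return a, b, c

# Example: five in-focus evaluation points (x, y, z) in sensor coordinates
plane = fit_imaging_plane([(0, 0, 10.0), (4, 0, 10.2), (0, 3, 9.9),
                           (4, 3, 10.1), (2, 1.5, 10.05)])
```

The recovered tilt coefficients (a, b) would drive the biaxial rotation stage, and the offset c the slide stage, so that the imaging surface overlaps the fitted plane.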

Journal ArticleDOI
TL;DR: In this article, the authors proposed a set of criteria upon which an effective comparative analysis of the performance of wide-DR (WDR) sensors can be done, based upon the quantitative assessments of the following parameters: signal-to-noise ratio, DR extension, noise floor, minimal transistor count, and sensitivity.
Abstract: A large variety of solutions for widening the dynamic range (DR) of CMOS image sensors has been proposed throughout the years. We propose a set of criteria upon which an effective comparative analysis of the performance of wide-DR (WDR) sensors can be done. Sensors for WDR are divided into seven categories: 1) companding sensors; 2) multimode sensors; 3) clipping sensors; 4) frequency-based sensors; 5) time-to-saturation (time-to-first spike) sensors; 6) global-control-over-the-integration-time sensors; and 7) autonomous-control-over-the-integration-time sensors. The comparative analysis for each category is based upon the quantitative assessments of the following parameters: signal-to-noise ratio, DR extension, noise floor, minimal transistor count, and sensitivity. These parameters are assessed using consistent assumptions and definitions, which are common to all WDR sensor categories. The advantages and disadvantages of each category in the sense of power consumption and data rate are discussed qualitatively. The influence of technology advancements on the proposed set of criteria is discussed as well.
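
For reference, the dynamic range these criteria extend and compare is conventionally defined from the largest non-saturating signal i_max and the noise-floor-limited minimum detectable signal i_min:

$$\mathrm{DR} = 20\,\log_{10}\frac{i_{\max}}{i_{\min}}\ \ [\mathrm{dB}],$$

so a lower noise floor and a higher saturation level both widen the DR, which is why the survey assesses noise floor and DR extension as separate parameters.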

Journal ArticleDOI
TL;DR: Extensive testing has shown the suitability of the technique and confirmed phase accuracy predictions; experimental results showed that the proposed rangefinder method is effective.
Abstract: Phase and intensity of light are detected simultaneously using a fully digital imaging technique: single-photon synchronous detection. This approach has been theoretically and experimentally investigated in this paper. We designed a fully integrated camera implementing the new technique, fabricated in a 0.35 µm CMOS technology. The camera demonstrator features a modulated light source, so as to independently capture the time-of-flight of the photons reflected by a target, thereby reconstructing a depth map of the scene. The camera also enables image enhancement of 2D scenes when used in passive mode, where differential maps of the reflection patterns are the basis for advanced image processing algorithms. Extensive testing has shown the suitability of the technique and confirmed phase accuracy predictions, and the experimental results showed that the proposed rangefinder method is effective. Distance measurement performance was characterized with a maximum nonlinearity error lower than 12 cm within a range of a few meters. In the same range, the maximum repeatability error was 3.8 cm.
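
The depth reconstruction behind such a phase-measuring rangefinder is standard continuous-wave time-of-flight demodulation. A hedged sketch of the generic four-bucket reconstruction (not the paper's single-photon circuit):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(s0, s1, s2, s3, f_mod):
    """Four-bucket demodulation for a continuous-wave rangefinder:
    s0..s3 are correlation samples at 0/90/180/270 degrees. Single-photon
    synchronous detection accumulates photon counts into analogous bins;
    this is the generic reconstruction, not the paper's implementation."""
    phase = math.atan2(s3 - s1, s0 - s2)      # phase offset of the return
    if phase < 0:
        phase += 2 * math.pi
    return C * phase / (4 * math.pi * f_mod)  # unambiguous up to c/(2*f_mod)
```

With f_mod = 30 MHz, for example, the unambiguous range is c/(2·f_mod) ≈ 5 m, consistent with the few-meter range characterized above.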

Journal ArticleDOI
TL;DR: The feasibility of the proposed TV-semi-norm-based approach for pixel-level fusion of images acquired using multiple sensors is demonstrated on images from computed tomography and magnetic resonance imaging as well as from visible-band and infrared sensors.
Abstract: In this paper, a total variation (TV) based approach is proposed for pixel-level fusion to fuse images acquired using multiple sensors. In this approach, fusion is posed as an inverse problem and a locally affine model is used as the forward model. A TV semi-norm based approach in conjunction with principal component analysis is used iteratively to estimate the fused image. The feasibility of the proposed algorithm is demonstrated on images from computed tomography (CT) and magnetic resonance imaging (MRI) as well as visible-band and infrared sensors. The results clearly indicate the feasibility of the proposed approach.

Patent
20 Oct 2009
TL;DR: In this paper, a static defect table storing the locations of known static defects is provided, and the location of a current pixel is compared to the static defect tables, and a replacement value for correcting the dynamic defect is determined by interpolating the value of two neighboring pixels on opposite sides of the current pixel in a direction exhibiting the smallest gradient.
Abstract: Various techniques are provided for the detection and correction of defective pixels in an image sensor. In accordance with one embodiment, a static defect table storing the locations of known static defects is provided, and the location of a current pixel is compared to the static defect table. If the location of the current pixel is found in the static defect table, the current pixel is identified as a static defect and is corrected using the value of the previous pixel of the same color. If the current pixel is not identified as a static defect, a dynamic defect detection process compares pixel-to-pixel gradients between the current pixel and a set of neighboring pixels against a dynamic defect threshold. If a dynamic defect is detected, a replacement value for correcting the dynamic defect may be determined by interpolating the values of two neighboring pixels on opposite sides of the current pixel in the direction exhibiting the smallest gradient.
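
A compact sketch of the two-stage flow described above; the border handling, Bayer step size, and threshold are illustrative assumptions rather than the patent's exact logic:

```python
def correct_pixel(raw, y, x, static_defects, threshold, step=2):
    """Two-stage sketch: static-table lookup, then dynamic detection via
    pixel-to-pixel gradients and directional interpolation. step=2 reaches
    the nearest same-color neighbors in a Bayer mosaic; (y, x) is assumed
    to be an interior pixel. raw is a 2D array of pixel values."""
    if (y, x) in static_defects:
        return int(raw[y][x - step])           # previous same-color pixel
    center = int(raw[y][x])
    dirs = [(0, step), (step, 0), (step, step), (step, -step)]
    grads = {}
    for dy, dx in dirs:                        # horizontal, vertical, diagonals
        grads[(dy, dx)] = (abs(center - int(raw[y - dy][x - dx]))
                           + abs(center - int(raw[y + dy][x + dx])))
    if min(grads.values()) <= threshold:
        return center                          # consistent with neighbors
    dy, dx = min(grads, key=grads.get)         # smallest-gradient direction
    return (int(raw[y - dy][x - dx]) + int(raw[y + dy][x + dx])) // 2
```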

Proceedings ArticleDOI
20 Jun 2009
TL;DR: This paper proposes a special application of the distributed estimation algorithm known as the Kalman-Consensus filter, through which each camera comes to a consensus with its neighboring cameras about the actual state of the target, leading to a camera network topology that changes with time.
Abstract: This paper deals with the problem of tracking multiple targets in a distributed network of self-configuring pan-tilt-zoom cameras. We focus on applications where events unfold over a large geographic area and need to be analyzed by multiple overlapping and non-overlapping active cameras without a central unit accumulating and analyzing all the data. The overall goal is to keep track of all targets in the region of deployment of the cameras, while selectively focusing at a high resolution on some particular target features. To acquire all the targets at the desired resolutions while keeping the entire scene in view, we use cooperative network control ideas based on multi-player learning in games. For tracking the targets as they move through the area covered by the cameras, we propose a special application of the distributed estimation algorithm known as Kalman-Consensus filter through which each camera comes to a consensus with its neighboring cameras about the actual state of the target. This leads to a camera network topology that changes with time. Combining these ideas with single-view analysis, we have a completely distributed approach for multi-target tracking and camera network self-configuration. We show performance analysis results with real-life experiments on a network of 10 cameras.
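
The Kalman-Consensus idea can be sketched compactly: each camera runs a local Kalman filter on its own measurements and adds a consensus term that pulls its estimate toward those of its network neighbors. The version below is a heavily simplified illustration (scalar consensus gain, constant-velocity target model); the paper follows Olfati-Saber's information-form formulation:

```python
import numpy as np

class KCFNode:
    """One camera's simplified Kalman-Consensus filter.
    State: [x, y, vx, vy] with a constant-velocity model; each camera
    measures the target's 2D position when the target is in view."""
    def __init__(self, dt=1.0, q=0.01, r=1.0, eps=0.1):
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                  # measure position only
        self.Q, self.R, self.eps = q * np.eye(4), r * np.eye(2), eps
        self.x, self.P = np.zeros(4), 10.0 * np.eye(4)

    def step(self, z, neighbor_states):
        # Local Kalman measurement update (skipped if target not in view)
        if z is not None:
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (z - self.H @ self.x)
            self.P = (np.eye(4) - K @ self.H) @ self.P
        # Consensus: move toward neighboring cameras' current estimates
        for xj in neighbor_states:
            self.x = self.x + self.eps * (xj - self.x)
        # Time update (prediction)
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x
```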

Journal ArticleDOI
TL;DR: The proposed internal reference generation and return-to-zero digital signal feedback techniques enable the ADC to achieve low read noise, a high resolution of 13 b, and a resulting dynamic range of 71 dB.
Abstract: A high-performance CMOS image sensor (CIS) with 13-b column-parallel single-ended cyclic ADCs is presented. The simplified single-ended circuits for the cyclic ADC are squeezed into a 5.6-µm-pitch single-side column. The proposed internal reference generation and return-to-zero digital signal feedback techniques enable the ADC to achieve low read noise, a high resolution of 13 b, and a resulting dynamic range of 71 dB. An ultralow vertical fixed pattern noise of 0.1 e− rms is attained by a digital CDS technique, which performs A/D conversion twice in a horizontal scan period (6 µs). The CIS, implemented in 0.18-µm technology, operates at 390 frames/s and has 7.07-V/lx·s sensitivity, 61-µV/e− conversion gain, 4.9-e− rms read noise, and less than 0.4 LSB differential nonlinearity.

Proceedings ArticleDOI
28 Dec 2009
TL;DR: A new positioning method using an image sensor and visible-light LEDs (light-emitting diodes) is proposed; color LEDs are used to detect position, and a position accuracy of less than 5 cm was achieved.
Abstract: This paper proposes a new positioning method using an image sensor and visible-light LEDs (light-emitting diodes). The color LEDs are used to detect position. We achieved position accuracy of less than 5 cm using our method. We applied our method to a robot and demonstrated that accurate position control of a robot was feasible.

Patent
09 Apr 2009
TL;DR: In this article, an image sensor-based reading terminal was described for decoding of decodable indicia and for providing color frames of image data for storage or transmission, where the image sensor pixel array included a first subset of monochrome pixels and a second subset of color pixels.
Abstract: There is described in one embodiment an indicia reading terminal having an image sensor pixel array incorporated therein, wherein the terminal is operative for decoding of decodable indicia and for providing color frames of image data for storage or transmission. An image sensor based terminal in one embodiment can include an image sensor having a hybrid monochrome and color image sensor pixel array wherein the image sensor pixel array includes a first subset of monochrome pixels and a second subset of color pixels. In one embodiment, an output response curve for the image sensor pixel array can include a logarithmic pattern.

Journal ArticleDOI
TL;DR: A quantitative comparison between the energy costs associated with direct transmission of uncompressed images and sensor platform-based JPEG compression followed by transmission of the compressed image data is presented.
Abstract: One of the most important goals of current and future sensor networks is energy-efficient communication of images. This paper presents a quantitative comparison between the energy costs associated with 1) direct transmission of uncompressed images and 2) sensor platform-based JPEG compression followed by transmission of the compressed image data. JPEG compression computations are mapped onto various resource-constrained platforms using a design environment that allows computation using the minimum integer and fractional bit-widths needed in view of other approximations inherent in the compression process and choice of image quality parameters. Advanced applications of JPEG, such as region of interest coding and successive/progressive transmission, are also examined. Detailed experimental results examining the tradeoffs in processor resources, processing/transmission time, bandwidth utilization, image quality, and overall energy consumption are presented.
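
The core comparison reduces to a simple energy inequality: compress when E_cpu + E_tx(compressed bits) < E_tx(raw bits). A toy Python sketch with hypothetical per-bit radio energy and compression cost (the paper measures these quantities per platform; the numbers below are placeholders):

```python
def transmit_energy(n_bits, e_tx_per_bit=0.2e-6):
    """Radio transmission energy in joules; e_tx_per_bit is hypothetical."""
    return n_bits * e_tx_per_bit

def compare_strategies(raw_bits, compressed_bits, e_cpu_compress):
    """Energy for (1) direct transmission vs (2) JPEG-then-transmit."""
    direct = transmit_energy(raw_bits)
    jpeg = e_cpu_compress + transmit_energy(compressed_bits)
    return direct, jpeg

# Hypothetical 320x240 8-bit image, 10:1 JPEG ratio, 50 mJ compression cost
raw = 320 * 240 * 8
print(compare_strategies(raw, raw // 10, 0.05))
```

The interesting tradeoffs the paper quantifies live inside e_cpu_compress, which depends on the platform, the chosen integer/fractional bit-widths, and the image-quality parameters.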

Patent
09 Apr 2009
TL;DR: In this article, the authors proposed a nonlinear image sensor signal response based on severely limiting the number of pixel states, combined with clustering of pixels into what may be termed as super-pixels.
Abstract: In previously known imaging devices, such as still and motion cameras, image sensor signal response typically is linear as a function of intensity of incident light. Desirably, however, akin to the response of the human eye, response is sought to be nonlinear and, more particularly, essentially logarithmic. Preferred nonlinearity is realized in image sensor devices of the invention upon severely limiting the number of pixel states, combined with clustering of pixels into what may be termed super-pixels.
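
One way to see how few-state pixels plus clustering can yield a compressive response: under Poisson illumination, a binary pixel is struck with probability 1 − exp(−λ), so the expected super-pixel count N(1 − e^(−λ)) compresses high intensities (and, with multiple exposures or thresholds, approaches a logarithmic curve). The Monte-Carlo sketch below is our illustration of the principle, not the patent's circuit:

```python
import numpy as np

rng = np.random.default_rng(0)

def superpixel_response(exposure, n_pixels=256, max_states=1):
    """Each pixel in the cluster holds very few states (here binary:
    struck / not struck); the super-pixel output is the count of struck
    pixels, giving a compressive response to exposure."""
    photons = rng.poisson(exposure, size=n_pixels)  # photons per pixel
    struck = np.minimum(photons, max_states)        # severely limited states
    return int(struck.sum())

for lam in (0.1, 1.0, 10.0, 100.0):                 # 3 decades of intensity
    print(lam, superpixel_response(lam))
```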

Proceedings ArticleDOI
07 Nov 2009
TL;DR: This paper proposes a general design for color filter arrays that allow the joint capture of visible/NIR images using a single sensor and poses the CFA design as a novel spatial domain optimization problem, and provides an efficient iterative procedure that finds (locally) optimal solutions.
Abstract: Digital camera sensors are inherently sensitive to the near-infrared (NIR) part of the light spectrum. In this paper, we propose a general design for color filter arrays that allow the joint capture of visible/NIR images using a single sensor. We pose the CFA design as a novel spatial domain optimization problem, and provide an efficient iterative procedure that finds (locally) optimal solutions. Numerical experiments confirm the effectiveness of the proposed CFA design, which can simultaneously capture high quality visible and NIR image pairs.

Patent
02 Jan 2009
TL;DR: In this paper, an image-type intubation-aiding device comprises a small-size image sensor and a light source module both placed into an endotracheal tube to help doctors with quick intra-tubation.
Abstract: An image-type intubation-aiding device comprises a small-size image sensor and a light source module, both placed into an endotracheal tube to help doctors with quick intubation. Light from light-emitting devices in the light source module passes through a transparent housing, is reflected by a target, and is then focused. The optical signal is converted into a digital or analog electric signal by the image sensor for display on a display device after processing. Doctors can thus be helped to quickly find the position of the trachea, keep an appropriate distance from a patient to reduce the possibility of infection, and lower the medical treatment cost. Disposable products are available to avoid the problem of infection. The intubation-aiding device can be used as an electronic surgical image examination instrument for penetration into a body. Moreover, a light source with tunable wavelengths can be used to improve the visibility of lesions.

Proceedings ArticleDOI
Kostia Robert1
06 Nov 2009
TL;DR: This paper presents a new framework to detect vehicles based on a hierarchy of feature detection and fusion, which is road-illumination agnostic and allows vehicles to be detected day and night.
Abstract: Due to the recent progress in computer vision in interpreting images and sequences of images, the video camera is a promising sensor for traffic monitoring and traffic surveillance at low cost. This paper focuses on the detection and tracking of multiple vehicles present in the field of view of a camera. Until now, vehicle detection has mainly been performed by the widely used technique called background subtraction, which is based on detecting changes in an image sequence. While there has been long research on this technique, it still faces many challenges. We present in this paper a new framework to detect vehicles, based on a hierarchy of feature detection and fusion. The first layer of the hierarchy extracts image features. The next layer fuses image features to detect vehicle features such as headlights or windshields. A last layer fuses the vehicle features to detect a vehicle with more confidence. This approach is thus road-illumination agnostic and allows vehicles to be detected day and night. The vehicle features are tracked over frames. We use a constant-acceleration tracking model augmented with traffic-domain rules to handle occlusion challenges.

Patent
18 Mar 2009
TL;DR: In this paper, an imaging system for acquisition of NIR and full-color images includes a light source providing visible light and NIR light to an area under observation, such as living tissue.
Abstract: An imaging system for acquisition of NIR and full-color images includes a light source providing visible light and NIR light to an area under observation, such as living tissue, a camera having one or more image sensors configured to separately detect blue reflectance light, green reflectance light, and combined red reflectance light/detected NIR light returned from the area under observation. A controller in signal communication with the light source and the camera is configured to control the light source to continuously illuminate area under observation with temporally continuous blue/green illumination light and with red illumination light and NIR excitation light. At least one of the red illumination light and NIR excitation light are switched on and off periodically in synchronism with the acquisition of red and NIR light images in the camera.