
Showing papers on "Image sensor published in 2005"


Patent
07 Jul 2005
TL;DR: In this article, a surgical imaging device includes at least one light source for illuminating an object, at least two image sensors configured to generate image data corresponding to the object in the form of an image frame, and a video processor configured to receive from each image sensor the image data corresponding to the image frames and to process the data so as to generate a composite image.
Abstract: A surgical imaging device includes at least one light source for illuminating an object, at least two image sensors configured to generate image data corresponding to the object in the form of an image frame, and a video processor configured to receive from each image sensor the image data corresponding to the image frames and to process the image data so as to generate a composite image. The video processor may be configured to normalize, stabilize, orient and/or stitch the image data received from each image sensor so as to generate the composite image. Preferably, the video processor stitches the image data received from each image sensor by processing a portion of image data received from one image sensor that overlaps with a portion of image data received from another image sensor. Alternatively, the surgical device may be, e.g., a circular stapler, that includes a first part, e.g., a DLU portion, having an image sensor, and a second part, e.g., an anvil portion, that is moveable relative to the first part. The second part includes an arrangement, e.g., a bore extending therethrough, for conveying the image to the image sensor. The arrangement enables the image to be received by the image sensor without removing the surgical device from the surgical site.
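The overlap-based stitching the patent describes can be illustrated with a minimal sketch: blend only the columns that two sensors see in common and keep the rest of each frame untouched. The linear feathering and the `stitch_pair` helper are illustrative assumptions; the patent does not specify a blending scheme.

```python
import numpy as np

def stitch_pair(left, right, overlap):
    """Stitch two image strips by linearly blending the `overlap`
    columns they share (hypothetical helper; the patent only says the
    overlapping portions are processed, not how)."""
    h, w = left.shape
    # Weights ramp from 1 -> 0 across the shared columns.
    alpha = np.linspace(1.0, 0.0, overlap)
    blended = alpha * left[:, -overlap:] + (1 - alpha) * right[:, :overlap]
    return np.hstack([left[:, :-overlap], blended, right[:, overlap:]])

a = np.full((4, 6), 100.0)   # frame from sensor 1
b = np.full((4, 6), 200.0)   # frame from sensor 2
composite = stitch_pair(a, b, overlap=2)
print(composite.shape)       # (4, 10): 6 + 6 - 2 shared columns
```

In a real device the overlap width would come from the known sensor geometry, and normalization and orientation (which the patent also mentions) would run before this step.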

893 citations


Journal ArticleDOI
TL;DR: This article provides a basic introduction to CMOS image-sensor technology, design and performance limits and presents recent developments and future research directions enabled by pixel-level processing, which promise to further improve CMOS image sensor performance and broaden their applicability beyond current markets.
Abstract: In this article, we provide a basic introduction to CMOS image-sensor technology, design and performance limits and present recent developments and future directions in this area. We also discuss image-sensor operation and describe the most popular CMOS image-sensor architectures. We note the main non-idealities that limit CMOS image sensor performance, and specify several key performance measures. One of the most important advantages of CMOS image sensors over CCDs is the ability to integrate sensing with analog and digital processing down to the pixel level. Finally, we focus on recent developments and future research directions that are enabled by pixel-level processing, the applications of which promise to further improve CMOS image sensor performance and broaden their applicability beyond current markets.

748 citations


Journal ArticleDOI
TL;DR: The authors present a technique which takes into account the physical electromagnetic spectrum responses of sensors during the fusion process, which produces images closer to the image obtained by the ideal sensor than those obtained by usual wavelet-based image fusion methods.
Abstract: Usual image fusion methods inject features from a high spatial resolution panchromatic sensor into every low spatial resolution multispectral band trying to preserve spectral signatures and improve spatial resolution to that of the panchromatic sensor. The objective is to obtain the image that would be observed by a sensor with the same spectral response (i.e., spectral sensitivity and quantum efficiency) as the multispectral sensors and the spatial resolution of the panchromatic sensor. But in these methods, features from electromagnetic spectrum regions not covered by multispectral sensors are injected into them, and physical spectral responses of the sensors are not considered during this process. This produces some undesirable effects, such as resolution overinjection images and slightly modified spectral signatures in some features. The authors present a technique which takes into account the physical electromagnetic spectrum responses of sensors during the fusion process, which produces images closer to the image obtained by the ideal sensor than those obtained by usual wavelet-based image fusion methods. This technique is used to define a new wavelet-based image fusion method.
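The core idea, injecting only as much panchromatic detail into each multispectral band as the band's physical spectral response justifies, can be sketched as follows. A 3x3 box blur stands in for the wavelet approximation the paper actually uses, and `fuse_band` and its `weight` parameter (0 = no spectral overlap with the pan sensor, 1 = full overlap) are illustrative names, not the authors' notation.

```python
import numpy as np

def fuse_band(ms_band, pan, weight):
    """Inject the pan image's high-frequency detail into one
    multispectral band, gated by `weight`, a stand-in for the band's
    physical spectral overlap with the pan sensor (the factor the
    article says should control the injection)."""
    k = np.ones((3, 3)) / 9.0
    # Crude 'same' convolution via edge padding: the low-pass part.
    p = np.pad(pan, 1, mode='edge')
    low = sum(p[i:i + pan.shape[0], j:j + pan.shape[1]] * k[i, j]
              for i in range(3) for j in range(3))
    detail = pan - low            # high-frequency part of the pan image
    return ms_band + weight * detail

flat_pan = np.full((5, 5), 10.0)
ms = np.arange(25.0).reshape(5, 5)
# A flat pan image has no high-frequency detail, so nothing is injected:
print(np.allclose(fuse_band(ms, flat_pan, 0.8), ms))   # True
```

Setting `weight` to zero for spectral regions a band does not cover is exactly what prevents the "overinjection" effect the abstract describes.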

702 citations


Reference BookDOI
01 Aug 2005
TL;DR: This book covers the development of image sensors for digital still cameras, along with current and future designs of these devices and their applications.
Abstract: Contents:
Preface
DIGITAL STILL CAMERAS AT A GLANCE (Kenji Toyoda): What Is a Digital Still Camera?; History of Digital Still Cameras; Variations of Digital Still Cameras; Basic Structure of Digital Still Cameras; Applications of Digital Still Cameras
OPTICS IN DIGITAL STILL CAMERAS (Takeshi Koyama): Optical System Fundamentals and Standards for Evaluating Optical Performance; Characteristics of DSC Imaging Optics; Important Aspects of Imaging Optics Design for DSCs; DSC Imaging Lens Zoom Types and Their Applications; Conclusion; References
BASICS OF IMAGE SENSORS (Junichi Nakamura): Functions of an Image Sensor; Photodetector in a Pixel; Noise; Photoconversion Characteristics; Array Performance; Optical Format and Pixel Size; CCD Image Sensor vs. CMOS Image Sensor; References
CCD IMAGE SENSORS (Tetsuo Yamada): Basics of CCDs; Structures and Characteristics of CCD Image Sensors; DSC Applications; Future Prospects; References
CMOS IMAGE SENSORS (Isao Takayanagi): Introduction to CMOS Image Sensors; CMOS Active Pixel Technology; Signal Processing and Noise Behavior; CMOS Image Sensors for DSC Applications; Future Prospects of CMOS Image Sensors for DSC Applications; References
EVALUATION OF IMAGE SENSORS (Toyokazu Mizoguchi): What Is Evaluation of Image Sensors?; Evaluation Environment; Evaluation Methods
COLOR THEORY AND ITS APPLICATION TO DIGITAL STILL CAMERAS (Po-Chieh Hung): Color Theory; Camera Spectral Sensitivity; Characterization of a Camera; White Balance; Conversion for Display (Color Management); Summary; References
IMAGE-PROCESSING ALGORITHMS (Kazuhiro Sato): Basic Image-Processing Algorithms; Camera Control Algorithm; Advanced Image Processing: How to Obtain Improved Image Quality; References
IMAGE-PROCESSING ENGINES (Seiichiro Watanabe): Key Characteristics of an Image-Processing Engine; Imaging Engine Architecture Comparison; Analog Front End (AFE); Digital Back End (DBE); Future Design Engines; References
EVALUATION OF IMAGE QUALITY (Hideaki Yoshida): What Is Image Quality?; General Items or Parameters; Detailed Items or Factors; Standards Relating to Image Quality
SOME THOUGHTS ON FUTURE DIGITAL STILL CAMERAS (Eric R. Fossum): The Future of DSC Image Sensors; Some Future Digital Cameras; References

578 citations


Proceedings ArticleDOI
06 Nov 2005
TL;DR: It is argued that a camera sensor network containing heterogeneous elements provides numerous benefits over traditional homogeneous sensor networks and that a multi-tier sensor network can reconcile the traditionally conflicting systems goals of latency and energy-efficiency.
Abstract: This paper argues that a camera sensor network containing heterogeneous elements provides numerous benefits over traditional homogeneous sensor networks. We present the design and implementation of SensEye, a multi-tier network of heterogeneous wireless nodes and cameras. To demonstrate its benefits, we implement a surveillance application using SensEye comprising three tasks: object detection, recognition and tracking. We propose novel mechanisms for low-power low-latency detection, low-latency wakeups, efficient recognition and tracking. Our techniques show that a multi-tier sensor network can reconcile the traditionally conflicting systems goals of latency and energy-efficiency. An experimental evaluation of our prototype shows that, when compared to a single-tier prototype, our multi-tier SensEye can achieve an order of magnitude reduction in energy usage while providing comparable surveillance accuracy.

397 citations


Journal ArticleDOI
TL;DR: In this paper, an imaging system for depth information capture of arbitrary 3D objects is presented, based on an array of 32 × 32 rangefinding pixels that independently measure the time of flight of a ray of light as it is reflected back from the objects in a scene.
Abstract: The design and characterization of an imaging system is presented for depth information capture of arbitrary three-dimensional (3-D) objects. The core of the system is an array of 32 × 32 rangefinding pixels that independently measure the time-of-flight of a ray of light as it is reflected back from the objects in a scene. A single cone of pulsed laser light illuminates the scene, thus no complex mechanical scanning or expensive optical equipment is needed. Millimetric depth accuracies can be reached thanks to the rangefinder's optical detectors that enable picosecond time discrimination. The detectors, based on a single photon avalanche diode operating in Geiger mode, utilize avalanche multiplication to enhance light detection. On-pixel high-speed electrical amplification can therefore be eliminated, thus greatly simplifying the array and potentially reducing its power dissipation. Optical power requirements on the light source can also be significantly relaxed, due to the array's sensitivity to single photon events. A number of standard performance measurements, conducted on the imager, are discussed in the paper. The 3-D imaging system was also tested on real 3-D subjects, including human facial models, demonstrating the suitability of the approach.
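The link between picosecond time discrimination and millimetric depth accuracy is plain arithmetic: each pixel converts a round-trip time of flight into a one-way distance.

```python
C = 299_792_458.0  # speed of light, m/s

def depth_from_tof(t_seconds):
    """One-way distance from a round-trip time of flight: d = c*t/2."""
    return C * t_seconds / 2.0

# A 1 ps timing step corresponds to ~0.15 mm of depth, which is why
# picosecond discrimination yields millimetric accuracy.
print(depth_from_tof(1e-12) * 1000)   # ~0.1499 mm
# An object 1 m away returns the pulse after ~6.67 ns:
print(2 * 1.0 / C)                    # ~6.67e-9 s
```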

374 citations


Proceedings ArticleDOI
31 Jul 2005
TL;DR: The systems and processes described provide true multi-touch (multi-input) and high-spatial and temporal resolution capability due to the continuous imaging of the frustrated total internal reflection that escapes the entire optical waveguide.
Abstract: High-resolution, scalable multi-touch sensing display systems and processes based on frustrated total internal reflection employ an optical waveguide that receives light, such as infrared light, that undergoes total internal reflection, and an imaging sensor that detects light that escapes the optical waveguide when the total internal reflection is frustrated by a user's contact. The optical waveguide, when fitted with a compliant surface overlay, provides superior sensing performance, as well as other benefits and features. The systems and processes described provide true multi-touch (multi-input) capability and high spatial and temporal resolution due to the continuous imaging of the frustrated-total-internal-reflection light that escapes the entire optical waveguide. Among other features and benefits, the systems and processes are scalable to large installations.

313 citations


Patent
14 Nov 2005
TL;DR: In this paper, the authors present a fully digital camera system that provides high-resolution still image and streaming video signals via a network to a centralized, server supported security and surveillance system, where a plurality of image transducers or sensors are included in a single camera unit, providing array imaging such as full 360 degree panoramic imaging, universal or spherical imaging and field imaging by stacking or arranging the sensors in an array.
Abstract: A fully digital camera system provides high-resolution still image and streaming video signals via a network to a centralized, server supported security and surveillance system. The digital camera collects an image from one or more image transducers, compresses the image, and sends the compressed digital image signal to a receiving station over a digital network. A plurality of image transducers or sensors may be included in a single camera unit, providing array imaging such as full 360 degree panoramic imaging, universal or spherical imaging and field imaging by stacking or arranging the sensors in an array. The multiple images are then compressed and merged at the camera in the desired format to permit transmission of the least amount of data to accomplish the desired image transmission. The camera also employs, or connects to, a variety of sensors other than the traditional image sensor. Sensors for fire, smoke, sound, glass breakage, motion, panic buttons, and the like, may be embedded in or connected to the camera. Data captured by these sensors may be digitized, compressed, and networked to detect notable conditions. An internal microphone and associated signal processing system may be equipped with suitable signal processing algorithms for the purpose of detecting suitable acoustic events and their location. In addition, the camera is equipped with a pair of externally accessible terminals where an external sensor may be connected. In addition, the camera may be equipped with a short-range receiver that may detect the activation of a wireless ‘panic button’ carried by facility personnel. This ‘panic button’ may employ infrared, radio frequency (RF), ultrasonic, or other suitable methods to activate the camera's receiver.

303 citations


Journal ArticleDOI
TL;DR: It was demonstrated that high image quality in CT reconstructions is possible even in systems with large geometric nonidealities, and a general analytic algorithm and corresponding calibration phantom for estimating these geometric parameters in cone-beam computed tomography (CT) systems were developed.
Abstract: Cone-beam computed tomography systems have been developed to provide in situ imaging for the purpose of guiding radiation therapy. Clinical systems have been constructed using this approach, combining a clinical linear accelerator (Elekta Synergy RP) and an iso-centric C-arm. Geometric calibration involves the estimation of a set of parameters that describes the geometry of such systems, and is essential for accurate image reconstruction. We have developed a general analytic algorithm and corresponding calibration phantom for estimating these geometric parameters in cone-beam computed tomography (CT) systems. The performance of the calibration algorithm is evaluated and its application is discussed. The algorithm makes use of a calibration phantom to estimate the geometric parameters of the system. The phantom consists of 24 steel ball bearings (BBs) in a known geometry. Twelve BBs are spaced evenly at 30 deg in two plane-parallel circles separated by a given distance along the tube axis. The detector (e.g., a flat panel detector) is assumed to have no spatial distortion. The method estimates geometric parameters including the position of the x-ray source, the position and rotation of the detector, and the gantry angle, and can describe complex source-detector trajectories. The accuracy and sensitivity of the calibration algorithm was analyzed. The calibration algorithm estimates geometric parameters with a high level of accuracy, such that the quality of CT reconstruction is not degraded by the estimation error. Sensitivity analysis shows uncertainty of 0.01 degrees (around the beam direction) to 0.3 degrees (normal to the beam direction) in rotation, and 0.2 mm (orthogonal to the beam direction) to 4.9 mm (beam direction) in position for the medical linear accelerator geometry.
Experimental measurements using a laboratory bench cone-beam CT system of known geometry demonstrate the sensitivity of the method in detecting small changes in the imaging geometry, with an uncertainty of 0.1 mm in the transverse and vertical directions (perpendicular to the beam) and 1.0 mm in the longitudinal direction (along the beam axis). The calibration algorithm was compared to a previously reported method, which uses one ball bearing at the isocenter of the system, to investigate the impact of more precise calibration on the image quality of cone-beam CT reconstruction. A thin steel wire located inside the calibration phantom was imaged on the cone-beam CT lab bench with and without perturbations in source and detector position during the scan. The described calibration method improved the quality of the image and the geometric accuracy of the reconstructed object, improving the full width at half maximum of the wire by 27.5% and increasing contrast of the wire by 52.8%. The proposed method is not limited to the geometric calibration of cone-beam CT systems but can be used for many other systems consisting of one or more point sources and area detectors, such as calibration of a megavoltage (MV) treatment system (focal spot movement during beam delivery, MV source trajectory versus gantry angle, the axis of collimator rotation, and couch motion), cross-calibration between kilovoltage (kV) imaging and MV treatment systems, and cross-calibration between multiple imaging systems. Using the complete information of the system geometry, it was demonstrated that high image quality in CT reconstructions is possible even in systems with large geometric nonidealities.

287 citations


Patent
18 Feb 2005
TL;DR: In this article, a camera phone includes a phone stage for generating voice signals, a first image sensor for generating a first sensor output, a fixed focal length wide angle lens for forming the first image of the scene on the first sensor, and a second image sensor with a telephoto lens pointing in the same direction as the first lens and forming the second image on the second sensor.
Abstract: A camera phone includes a phone stage for generating voice signals, a first image sensor for generating a first sensor output, a first fixed focal length wide angle lens for forming a first image of the scene on the first image sensor, a second image sensor for generating a second sensor output, and a second fixed focal length telephoto lens pointing in the same direction as the first lens and forming a second image of the same scene on the second image sensor. A control element selects either the first sensor output from the first image sensor or the second sensor output from the second image sensor. A processing section produces the output image signals from the selected sensor output, and a cellular stage processes the image and voice signals for transmission over a cellular network.

259 citations


Journal ArticleDOI
TL;DR: An artificial compound-eye objective fabricated by micro-optics technology is adapted and attached to a CMOS sensor array and the lithographic generation of opaque walls between channels for optical isolation is experimentally demonstrated.
Abstract: An artificial compound-eye objective fabricated by micro-optics technology is adapted and attached to a CMOS sensor array. The novel optical sensor system with an optics thickness of only 0.2 mm is examined with respect to resolution and sensitivity. An optical resolution of 60 × 60 pixels is determined from captured images. The scaling behavior of artificial compound-eye imaging systems is analyzed. Cross talk between channels fabricated by different technologies is evaluated, and the influence on an extension of the field of view by addition of a (Fresnel) diverging lens is discussed. The lithographic generation of opaque walls between channels for optical isolation is experimentally demonstrated.

Journal ArticleDOI
TL;DR: What is believed to be the first optical synthetic-aperture image of a fixed, diffusely scattering target with a moving aperture is reported, and a general digital signal-processing solution to the laser waveform instability problem is described and demonstrated.
Abstract: The spatial resolution of a conventional imaging laser radar system is constrained by the diffraction limit of the telescope’s aperture. We investigate a technique known as synthetic-aperture imaging laser radar (SAIL), which employs aperture synthesis with coherent laser radar to overcome the diffraction limit and achieve fine-resolution, long-range, two-dimensional imaging with modest aperture diameters. We detail our laboratory-scale SAIL testbed, digital signal-processing techniques, and image results. In particular, we report what we believe to be the first optical synthetic-aperture image of a fixed, diffusely scattering target with a moving aperture. A number of fine-resolution, well-focused SAIL images are shown, including both retroreflecting and diffuse scattering targets, with a comparison of resolution between real-aperture imaging and synthetic-aperture imaging. A general digital signal-processing solution to the laser waveform instability problem is described and demonstrated, involving both new algorithms and hardware elements. These algorithms are primarily data driven, without a priori knowledge of waveform and sensor position, representing a crucial step in developing a robust imaging system.

Patent
12 Aug 2005
TL;DR: In this paper, a pixel for an image sensor comprises a plurality of small-sized radiation-sensitive elements (2.1-2.9) for converting incident radiation into electric signals.
Abstract: The pixel (1) for use in an image sensor comprises a plurality of small-sized radiation-sensitive elements (2.1-2.9) for converting incident radiation into electric signals, the radiation-sensitive elements (2.1-2.9) being properly interconnected to form a larger radiation-sensitive area. The pixel (1) further comprises a plurality of storage elements (3A-3D) for storing the electric signals. The pixel further comprises transfer means for transferring the electric signals from the radiation-sensitive elements (2.1-2.9) to any selected one of the storage elements (3A-3D). The pixel (1) exhibits a high optical sensitivity and a high demodulation speed, and is especially suited for distance-measuring sensors based on the time-of-flight (TOF) principle or interferometry.

Patent
03 Jan 2005
TL;DR: In this article, structured light is projected from a light source into an area of interest in a vehicle compartment to obtain information about a vehicle occupant; the reflected light is detected at an image sensor at a position different from the position from which the light is projected, and is analyzed relative to the projected structured light.
Abstract: Arrangement and method for obtaining information about a vehicle occupant in a compartment of the vehicle in which a light source is mounted in the vehicle, structured light is projected into an area of interest in the compartment, rays of light forming the structured light originating from the light source, reflected light is detected at an image sensor at a position different than the position from which the structured light is projected, and the reflected light is analyzed relative to the projected structured light to obtain information about the area of interest. The structured light is designed to appear as if it comes from a source of light (virtual or actual) which is at a position different than the position of the image sensor.

Patent
05 Jan 2005
TL;DR: In this paper, a video-scope with an image sensor and a light source unit that is capable of selectively emitting normal-light and excitation-light was used to produce a diagnosis color image.
Abstract: An electronic endoscope system according to the present invention has a video-scope that has an image sensor, and a light source unit that is capable of selectively emitting normal-light and excitation-light. The electronic endoscope system further has a signal processor and a display processor. The signal processor generates normal color image signals, which correspond to the normal color image, on the basis of the normal image-pixel signals. Similarly, the signal processor generates auto-fluorescent image signals corresponding to the auto-fluorescent image on the basis of the auto-fluorescent image-pixel signals, and generates diagnosis color image signals corresponding to the diagnosis color image on the basis of the normal color image signals and the auto-fluorescent image signals. The display processor processes the normal color image signals, the auto-fluorescent image signals, and the diagnosis color image signals so as to simultaneously display a normal color movie-image, an auto-fluorescent movie-image, and a diagnosis color movie-image.

Journal ArticleDOI
TL;DR: In this paper, a wide dynamic range CMOS image sensor with a burst readout multiple exposure method is proposed, in which up to four different exposure-time signals are read out in one frame.
Abstract: A wide dynamic range CMOS image sensor with a burst readout multiple exposure method is proposed. In this method, up to four different exposure-time signals are read out in one frame. To achieve the high-speed readout, a compact cyclic analog-to-digital converter (ADC) with a noise canceling function is proposed and arrays of the cyclic ADCs are integrated at the column. A prototype wide dynamic range CMOS image sensor has been developed with 0.25-µm 1-poly 4-metal CMOS image sensor technology. The dynamic range is expanded by a factor of up to 1791 compared to the case of single exposure. The dynamic range is measured to be 19.8 bit, or 119 dB. The 12-bit ADC integrated at the column of the CMOS image sensor has DNL of +0.2/-0.8 LSB.
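One way to see how multiple exposure times widen dynamic range: for each pixel, take the longest exposure that did not saturate and rescale it to a common exposure time. The merging rule and `combine_exposures` helper below are an illustrative sketch, not the paper's readout scheme; the 12-bit full scale matches the column ADC described.

```python
import math

FULL_SCALE = 4095            # 12-bit ADC full scale, as in the sensor

def combine_exposures(samples):
    """Merge several (exposure_time, adc_code) readouts of one pixel:
    pick the longest unsaturated exposure and rescale its code to the
    longest exposure time. Illustrative merging rule only."""
    t_max = max(t for t, _ in samples)
    for t, code in sorted(samples, key=lambda s: -s[0]):
        if code < FULL_SCALE:            # not saturated
            return code * (t_max / t)    # rescale to a common exposure
    return float(FULL_SCALE)             # all exposures saturated

# A bright pixel saturates the 16 ms exposure but not the 1 ms one;
# the short exposure, scaled up 16x, recovers the true intensity.
print(combine_exposures([(16e-3, 4095), (1e-3, 1000)]))   # 16000.0

# Sanity check on the abstract's figures: 119 dB is about 19.8 bits.
print(119 / (20 * math.log10(2)))        # ≈ 19.8
```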

Patent
09 Feb 2005
TL;DR: A digital image reading system including an image sensor and a computer that is programmed to adjust the frame rate of the image sensor, and to obtain the maximum frame rate of the image sensor at which an acceptable image can be obtained.
Abstract: A digital image reading system including an image sensor and a computer that is programmed to adjust the frame rate of the image sensor, and to obtain a maximum frame rate of the image sensor for obtaining an acceptable image. An algorithm for adjusting the frame rate evaluates image parameters and calculates new exposure times, gain values, and exposure settings to support a maximum frame rate of the image sensor. A process for obtaining an acceptable image with an image reader evaluates an image signal level and adjusts the frame rate if the signal level is outside of a predetermined range. The process adjusts the image sensor to run at a maximum operational frame rate. A digital image reading system including multiple separate digitizers for use in various read environments and under various read conditions.
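One iteration of the adjustment loop the patent describes might look like the sketch below: back the frame rate off when the signal level falls outside an acceptable window, otherwise push it toward the maximum operational rate. The thresholds, step size, and `adjust_frame_rate` helper are all illustrative assumptions, not values from the patent.

```python
def adjust_frame_rate(signal_level, frame_rate, lo=50, hi=200,
                      max_fps=60, step=5):
    """One step of a frame-rate control loop (all thresholds are
    made-up defaults). A low signal level means the image is too dark,
    so the frame rate is reduced to allow longer exposures; an
    acceptable level lets the rate creep back toward the maximum."""
    if signal_level < lo:
        return max(frame_rate - step, 1)      # dim: allow longer exposure
    if signal_level > hi:
        return frame_rate                     # bright: adjust gain/exposure instead
    return min(frame_rate + step, max_fps)    # acceptable: push toward max

print(adjust_frame_rate(30, 30))    # 25: too dark, slow down
print(adjust_frame_rate(100, 58))   # 60: acceptable, capped at max rate
```

In the patent's terms, the real algorithm also recalculates exposure times and gain values at each step; this sketch covers only the frame-rate branch.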

Proceedings ArticleDOI
14 Mar 2005
TL;DR: In this paper, a wavelet-based denoising filter was used to identify the camera from a given image, using the reference pattern noise as a high-frequency spread-spectrum watermark whose presence in the image was established using a correlation detector.
Abstract: In this paper, we demonstrate that it is possible to use the sensor’s pattern noise for digital camera identification from images. The pattern noise is extracted from the images using a wavelet-based denoising filter. For each camera under investigation, we first determine its reference noise, which serves as a unique identification fingerprint. This could be done using the process of flat-fielding, if we have the camera in possession, or by averaging the noise obtained from multiple images, which is the option taken in this paper. To identify the camera from a given image, we consider the reference pattern noise as a high-frequency spread spectrum watermark, whose presence in the image is established using a correlation detector. Using this approach, we were able to identify the correct camera out of 9 cameras without a single misclassification for several hundred images. Furthermore, it is possible to perform reliable identification even from images that underwent subsequent JPEG compression and/or resizing. These claims are supported by experiments on 9 different cameras including two cameras of exactly the same model (Olympus C765).
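The detector the paper describes reduces to: extract a noise residual (image minus its denoised version), then correlate it against each camera's reference pattern. The sketch below uses a crude mean-removing "denoiser" and synthetic fingerprints purely for illustration; the paper uses a wavelet-based denoising filter and references averaged over many real images.

```python
import numpy as np

def residual(img, denoise):
    """Noise residual: the image minus its denoised version."""
    return img - denoise(img)

def identify(img, references, denoise):
    """Correlate the image's residual against each camera's reference
    pattern (treated as a spread-spectrum watermark, as in the paper)
    and return the best-matching camera and its correlation."""
    r = residual(img, denoise)
    r = r - r.mean()
    best, best_c = None, -2.0
    for cam, ref in references.items():
        q = ref - ref.mean()
        c = float((r * q).sum() /
                  (np.linalg.norm(r) * np.linalg.norm(q) + 1e-12))
        if c > best_c:
            best, best_c = cam, c
    return best, best_c

# Toy demo: two synthetic pattern-noise "fingerprints" and a denoiser
# that just removes the mean (assumptions for the demo only).
rng = np.random.default_rng(0)
refs = {"cam_A": rng.normal(0, 1, (16, 16)),
        "cam_B": rng.normal(0, 1, (16, 16))}
denoise = lambda im: np.full_like(im, im.mean())
photo = 100.0 + refs["cam_A"]          # flat scene plus camera A's noise
cam, corr = identify(photo, refs, denoise)
print(cam)                              # cam_A
```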

Journal ArticleDOI
TL;DR: In this article, a real-time kinematic global positioning system (RTK-GPS) was adopted, and an inertial sensor (INS) that provides posture (roll and pitch angles) was installed in the helicopter.

Patent
23 Sep 2005
TL;DR: A parking-assist system for providing parking-assist information includes a front imaging camera (2-6), a left imaging camera (4-6), a right imaging camera (5-6), a rear imaging camera (3-6), and left (4-7), right (5-7), and rear (3-7) infrared laser cameras, each configured to obtain distance information on a pixel-by-pixel basis.
Abstract: A parking-assist system for providing parking-assist information, including: a front imaging camera ( 2 - 6 ); a left imaging camera ( 4 - 6 ); a right imaging camera ( 5 - 6 ); a rear imaging camera ( 3 - 6 ); a left infrared laser camera ( 4 - 7 ) configured to obtain information on a distance as to the left side on a pixel to pixel basis; a right infrared laser camera ( 5 - 7 ) configured to obtain information on a distance as to the right side on a pixel to pixel basis; a rear infrared laser camera ( 3 - 7 ) configured to obtain information on a distance as to the rear side on a pixel to pixel basis; and a signal processing portion ( 13 ), wherein the parking-assist system provides the parking-assist information according to the information on the images from the imaging cameras ( 6 ) and the information on the distances as to each pixel from the infrared laser cameras ( 7 ).

Patent
18 Feb 2005
TL;DR: In this paper, a curved microelectronic image sensor with a face with a convex and/or concave portion at one side of the substrate has been proposed, which can further include external contacts electrically coupled to the integrated circuitry and a cover over the curved image sensor.
Abstract: Microelectronic imagers with shaped image sensors and methods for manufacturing curved image sensors. In one embodiment, a microelectronic imager device comprises an imaging die having a substrate, a curved microelectronic image sensor having a face with a convex and/or concave portion at one side of the substrate, and integrated circuitry in the substrate operatively coupled to the image sensor. The imaging die can further include external contacts electrically coupled to the integrated circuitry and a cover over the curved image sensor.


Patent
18 Feb 2005
TL;DR: In this article, the first camera is also a zoom lens, where the maximum focal length of the first lens is less than or equal to the minimum focal lengths of the second zoom lens.
Abstract: A digital camera includes a first image sensor, a first wide angle lens for forming a first image of a scene on the first image sensor; a second image sensor, a zoom lens for forming a second image of the same scene on the second image sensor, a control element for selecting either a first sensor output from the first image sensor or a second sensor output from the second image sensor, and a processing section for producing the output image from the selected sensor output. In one variation of this embodiment, the first lens is also a zoom lens, where the maximum focal length of the first lens is less than or equal to the minimum focal length of the second zoom lens.
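The control element's job reduces to a range check over the requested focal length. The `<=` rule and the focal-length defaults below are an assumed reading of the patent (which leaves the selection logic unspecified); the constraint that the wide lens's maximum does not exceed the tele zoom's minimum is the patent's own, and guarantees the two ranges meet without a gap.

```python
def select_sensor(requested_f, wide_max_f=8.0, tele_min_f=8.0):
    """Route the requested focal length (mm, made-up defaults) to a
    sensor: the wide zoom covers focal lengths up to wide_max_f, the
    telephoto zoom covers tele_min_f and above. Illustrative rule only."""
    return "first" if requested_f <= wide_max_f else "second"

print(select_sensor(5.0))    # first  (wide zoom covers it)
print(select_sensor(20.0))   # second (needs the telephoto zoom)
```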

Proceedings ArticleDOI
29 Aug 2005
TL;DR: In this article, a wide DR CMOS image sensor incorporating a lateral overflow capacitor in each pixel to integrate the overflow charges from the photodiode when it saturates is presented.
Abstract: The wide DR CMOS image sensor incorporates a lateral overflow capacitor in each pixel to integrate the overflow charges from the photodiode when it saturates. The 7.5 × 7.5 µm² pixel, 1/3" VGA sensor fabricated in a 0.35 µm 3M2P CMOS process achieves a 100 dB dynamic range with no image lag, 0.15 mV (rms) random noise and 0.15 mV fixed pattern noise.

Proceedings ArticleDOI
20 Jun 2005
TL;DR: This paper provides a method for evaluating the performance of image fusion algorithms and defines a set of measures of effectiveness for comparative performance analysis and uses them on the output of a number of fusion algorithms that have been applied to a set of real passive infrared (IR) and visible band imagery.
Abstract: Image fusion is, and will remain, an integral part of many existing and future surveillance systems. However, little systematic effort has so far been devoted to studying the relative merits of various fusion techniques and their effectiveness on real multi-sensor imagery. In this paper we provide a method for evaluating the performance of image fusion algorithms. We define a set of measures of effectiveness for comparative performance analysis and then apply them to the output of a number of fusion algorithms run on a set of real passive infrared (IR) and visible-band imagery.
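One commonly used fusion measure of effectiveness is the mutual information between the fused image and each source band. The paper defines its own measures, so the metric below is an illustrative stand-in, and the random test images are placeholders for real IR/visible frames.

```python
import numpy as np

# Histogram-based mutual information between a fused image and its sources:
# a higher value means the fused image preserves more of a source's
# information. This is one example metric, not the paper's specific set.

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """Mutual information (bits) between two equal-size images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
ir = rng.random((64, 64))    # placeholder for an IR frame
vis = rng.random((64, 64))   # placeholder for a visible-band frame
fused = 0.5 * ir + 0.5 * vis  # naive average fusion as a baseline

# The fused image should share more information with its sources than the
# (statistically independent) sources share with each other:
print(mutual_information(fused, ir) > mutual_information(ir, vis))  # True
```

Comparing several fusion algorithms then reduces to computing such measures on each algorithm's output over the same source imagery.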

Journal ArticleDOI
TL;DR: This work incorporates anatomical noise in experimental and theoretical descriptions of the "generalized DQE" by including a spatial-frequency-dependent noise-power term, S(B), corresponding to background anatomical fluctuations in an anthropomorphic phantom.
Abstract: Analysis of detective quantum efficiency (DQE) is an important component of the investigation of imaging performance for flat-panel detectors (FPDs). Conventional descriptions of DQE are limited, however, in that they take no account of anatomical noise (i.e., image fluctuations caused by overlying anatomy), even though such noise can be the most significant limitation to detectability, often outweighing quantum or electronic noise. We incorporate anatomical noise in experimental and theoretical descriptions of the "generalized DQE" by including a spatial-frequency-dependent noise-power term, S(B), corresponding to background anatomical fluctuations. Cascaded systems analysis (CSA) of the generalized DQE reveals tradeoffs between anatomical noise and the factors that govern quantum noise. We extend such analysis to dual-energy (DE) imaging, in which the overlying anatomical structure is selectively removed in image reconstructions by combining projections acquired at low and high kVp. The effectiveness of DE imaging in removing anatomical noise is quantified by measurement of S(B) in an anthropomorphic phantom. Combining the generalized DQE with an idealized task function to yield the detectability index, we show that anatomical noise dramatically influences task-based performance, system design, and optimization. For the case of radiography, the analysis resolves a fundamental and illustrative quandary: the effect of kVp on imaging performance is poorly described by conventional DQE analysis but is clarified by consideration of the generalized DQE. For the case of DE imaging, extension of a generalized CSA methodology reveals a potentially powerful guide to system optimization through the optimal selection of the tissue cancellation parameter. Generalized task-based analysis for DE imaging shows an improvement in the detectability index by more than a factor of 2 compared to conventional radiography for idealized detection tasks.
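In the notation commonly used in the cascaded-systems literature (the symbols below follow that general convention rather than this paper's exact definitions), folding the anatomical background power term into the noise-equivalent quanta gives a generalized NEQ and task-based detectability index of roughly the following form:

```latex
% Generalized NEQ with an additive anatomical background power term S_B(u,v),
% alongside quantum (NPS_Q) and electronic (NPS_E) noise-power spectra;
% \bar{q} is the incident fluence and T(u,v) the system transfer function.
\mathrm{NEQ}_{\mathrm{gen}}(u,v) =
  \frac{\bar{q}^{\,2}\, T^{2}(u,v)}
       {\mathrm{NPS}_{Q}(u,v) + \mathrm{NPS}_{E}(u,v)
        + \bar{q}^{\,2}\, T^{2}(u,v)\, S_{B}(u,v)}

% Detectability index for an idealized task function W_{task}:
d'^{\,2} = \iint \left| W_{\mathrm{task}}(u,v) \right|^{2}
           \mathrm{NEQ}_{\mathrm{gen}}(u,v)\, \mathrm{d}u\, \mathrm{d}v
```

Because the anatomical term enters through the same transfer function as the signal, suppressing it (as DE imaging does by cancelling overlying tissue) raises NEQ at the low spatial frequencies where background fluctuations dominate, which is consistent with the greater-than-twofold detectability gain the abstract reports.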

Journal ArticleDOI
TL;DR: A novel modeling-through-registration approach that fuses 3D information from both the 3D sensor and the video is introduced, and initial experiments with real data illustrate the potential of the proposed approach.
Abstract: We propose a general framework for aligning continuous (oblique) video onto 3D sensor data. We align a point cloud computed from the video onto the point cloud directly obtained from a 3D sensor. This is in contrast to existing techniques where the 2D images are aligned to a 3D model derived from the 3D sensor data. Using point clouds enables the alignment for scenes full of objects that are difficult to model; for example, trees. To compute 3D point clouds from video, motion stereo is used along with a state-of-the-art algorithm for camera pose estimation. Our experiments with real data demonstrate the advantages of the proposed registration algorithm for texturing models in large-scale semi-urban environments. The capability to align video before a 3D model is built from the 3D sensor data offers new practical opportunities for 3D modeling. We introduce a novel modeling-through-registration approach that fuses 3D information from both the 3D sensor and the video. Initial experiments with real data illustrate the potential of the proposed approach.
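At its core, the video-to-3D-sensor registration above comes down to rigidly aligning two point clouds. Below is a minimal sketch of the classic SVD-based (Kabsch) solution for the best rotation and translation given known correspondences, i.e., one inner step of an ICP-style pipeline; it is not the paper's full algorithm, which also handles camera pose estimation and motion stereo.

```python
import numpy as np

# Rigid alignment of two corresponding point clouds (points as rows).
# Solves for R, t minimizing sum ||R @ src_i + t - dst_i||^2 via SVD.

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Return rotation R (3x3) and translation t (3,) mapping src onto dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

rng = np.random.default_rng(1)
cloud = rng.random((100, 3))                   # stand-in for the 3D-sensor cloud
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
# Stand-in for the video-derived cloud: the same points, rotated and shifted.
video_cloud = cloud @ R_true.T + np.array([0.5, -0.2, 1.0])

R, t = rigid_align(cloud, video_cloud)
print(np.allclose(R, R_true))  # True
```

In a full ICP loop, the correspondences themselves are unknown and are re-estimated (e.g., by nearest neighbor) between successive applications of this alignment step.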

Patent
Christopher Silsby1
14 Mar 2005
TL;DR: In this paper, a motion detection camera system includes a portable motion detection device having an image sensor for detecting motion within a field of view of the motion detector and automatically generating a digital image of a scene within the field-of-view upon detection of motion.
Abstract: A motion detecting camera system includes a portable motion detection device having an image sensor for detecting motion within a field of view of the motion detection device and automatically generating a digital image of a scene within the field of view upon detection of motion. The motion detection device includes a cellular telephone transmitter for transmitting cellular telephone communications. The camera system includes a base unit having a display screen. The motion detection device is configured to automatically transmit the digital image via the transmitter to the base unit for display on the display screen.
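The trigger behavior in this abstract — watch the field of view, capture when something moves — can be sketched with a simple frame-differencing test. The thresholding scheme below is an illustrative assumption, not the patent's detection method, and the cellular transmission step is omitted.

```python
import numpy as np

# Frame-differencing motion trigger: flag motion when more than a small
# fraction of pixels change by more than a per-pixel threshold. In the
# device, a True result would fire a capture and send the image over the
# cellular link to the base unit's display.

def motion_detected(prev: np.ndarray, curr: np.ndarray,
                    pixel_thresh: float = 0.1, area_frac: float = 0.01) -> bool:
    """True when >1% of pixels change by more than the threshold."""
    changed = np.abs(curr.astype(float) - prev.astype(float)) > pixel_thresh
    return bool(changed.mean() > area_frac)

still = np.zeros((120, 160))     # quiet scene, normalized intensities
moved = still.copy()
moved[40:80, 60:100] = 1.0       # a bright object enters the field of view

print(motion_detected(still, still))  # False
print(motion_detected(still, moved))  # True
```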

Patent
29 Sep 2005
TL;DR: In this paper, the received data can include video data defining pixel values and ancillary data relating to settings on the image sensor, which can be processed in accordance with the video data to adjust the visual characteristics, such as filtering the images, blending images, and other processing operations.
Abstract: Systems and techniques for processing sequences of video images involve receiving, on a computer, data corresponding to a sequence of video images detected by an image sensor. The received data is processed using a graphics processor to adjust one or more visual characteristics of the video images corresponding to the received data. The received data can include video data defining pixel values and ancillary data relating to settings on the image sensor. The video data can be processed in accordance with ancillary data to adjust the visual characteristics, which can include filtering the images, blending images, and/or other processing operations.
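The idea of processing video data "in accordance with ancillary data" can be illustrated with per-frame sensor settings driving a normalization step: frames shot with different gain or exposure are rescaled to consistent brightness. The field names and the normalization rule below are assumptions for illustration; the patent does not specify them.

```python
import numpy as np

# Sketch: each frame arrives with ancillary metadata (assumed fields:
# analog gain and exposure time), and the processor scales pixel values
# as if every frame had been captured at a common reference setting.

def normalize_frame(frame: np.ndarray, ancillary: dict,
                    ref_gain: float = 1.0,
                    ref_exposure_ms: float = 10.0) -> np.ndarray:
    """Rescale a [0, 1] frame to the reference gain/exposure settings."""
    scale = (ref_gain / ancillary["gain"]) * (ref_exposure_ms / ancillary["exposure_ms"])
    return np.clip(frame * scale, 0.0, 1.0)

frame = np.full((4, 4), 0.2)  # frame captured at 2x gain, so twice as bright
out = normalize_frame(frame, {"gain": 2.0, "exposure_ms": 10.0})
print(out[0, 0])  # 0.1
```

On the system described in the abstract, this kind of per-frame arithmetic is exactly the sort of uniform operation a graphics processor handles efficiently, alongside filtering and blending.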

Patent
30 Nov 2005
TL;DR: Backthinning in an area selective manner is applied to CMOS imaging sensors 12 for use in electron bombarded active pixel array devices as mentioned in this paper, which results in an array of collimators aligned with pixels 42 or groups of pixels of an active pixel arrays providing improved image contrast of such image sensor.
Abstract: Backthinning in an area-selective manner is applied to CMOS imaging sensors 12 for use in electron-bombarded active pixel array devices. A further arrangement results in an array of collimators 51 aligned with pixels 42, or groups of pixels, of an active pixel array, providing improved image contrast for such an image sensor. Provision of a thin P-doped layer 52 on the illuminated rear surface provides both a diffusion barrier, resulting in improved resolution, and a functional shield for reference pixels. A gradient in the concentration of the P-doped layer 52 optimizes electron collection at the pixel array.