
Showing papers on "Image sensor published in 2004"


Proceedings ArticleDOI
28 Sep 2004
TL;DR: A direct solution is given that minimizes an algebraic error from this constraint, and a subsequent nonlinear refinement minimizes a re-projection error; the resulting method is the first published calibration tool for this problem.
Abstract: We describe theoretical and experimental results for the extrinsic calibration of a sensor platform consisting of a camera and a 2D laser range finder. The calibration is based on observing a planar checkerboard calibration pattern and solving for the constraints between its "views" from the camera and the laser range finder. We give a direct solution that minimizes an algebraic error from this constraint, and a subsequent nonlinear refinement minimizes a re-projection error. To our knowledge, this is the first published calibration tool for this problem. Additionally, we show how this constraint can reduce the variance in estimating intrinsic camera parameters.
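To make the plane constraint concrete: if camera calibration gives the checkerboard plane as a unit normal n and offset d in the camera frame, every laser point p lying on that plane must satisfy n·(Rp + t) = d, where (R, t) is the sought laser-to-camera transform. The sketch below implements only the nonlinear refinement stage with SciPy; the function names, the rotation-vector parametrization, and the zero starting guess are illustrative assumptions, not the authors' released tool.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(x, planes, laser_pts):
    """x = [rx, ry, rz, tx, ty, tz] (rotation vector + translation).
    planes: list of (n, d) with unit normal n and offset d in the camera frame.
    laser_pts: list of (M_i, 3) arrays of laser points on each plane."""
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    t = x[3:]
    res = []
    for (n, d), P in zip(planes, laser_pts):
        res.append(P @ R.T @ n + n @ t - d)  # n . (R p + t) - d for each point
    return np.concatenate(res)

def calibrate(planes, laser_pts, x0=np.zeros(6)):
    # Nonlinear refinement of the plane constraint; the paper seeds this
    # with a direct linear solve, which we replace by x0 for brevity.
    sol = least_squares(residuals, x0, args=(planes, laser_pts))
    R = Rotation.from_rotvec(sol.x[:3]).as_matrix()
    return R, sol.x[3:]
```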

697 citations


Patent
02 Apr 2004
TL;DR: In this paper, the authors propose a sensing device that senses coded data disposed on a surface and generates interaction data based on the sensed coded data, the interaction data being indicative of the interaction of the sensing device with the surface; the device comprises an image sensor for capturing image information, at least one analog-to-digital converter for converting the captured image information into image data, and an image processor for processing the image data to generate processed image data.
Abstract: A sensing device for: sensing coded data disposed on a surface; and generating interaction data based on the sensed coded data, the interaction data being indicative of interaction of the sensing device with the surface; the sensing device comprising: (a) an image sensor for capturing image information; (b) at least one analog to digital converter for converting the captured image information into image data; (c) an image processor for processing the image data to generate processed image data; (d) a host processor for generating the interaction data based at least partially on the processed image data.

356 citations


Patent
28 Jan 2004
TL;DR: In this article, a distance image sensor is presented that removes background light and improves charge transfer efficiency in a device that measures the distance to an object via the time-of-flight of light.
Abstract: A distance image sensor is described for removing background light and improving charge transfer efficiency in a device for measuring the distance to an object by the time-of-flight of light. In a distance image sensor that determines the signals of two charge storage nodes, which depend on the delay time of the modulated light, a signal due to the background light is taken from a third charge storage node, or from the two charge storage nodes during a period when the modulated light is absent, and is subtracted from the delay-dependent signals of the two charge storage nodes, so as to remove the influence of the background. Charge transfer efficiency is improved by using a buried photodiode as the photodetector and an MOS gate as the gate means, and further by using a negative-feedback amplifier with a capacitor disposed between its input and output.
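As an illustration of the background-subtraction arithmetic, here is a toy pulsed two-tap model; the variable names and the demodulation ratio are assumptions for illustration, not the patent's circuit.

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance(q1, q2, qb, pulse_width):
    """Pulsed time-of-flight with two charge storage nodes.
    q1, q2: charges accumulated in the two nodes; qb: background charge
    measured in a period when the modulated light is absent.
    Subtracting qb from each node before forming the timing ratio
    removes the background's influence, as the patent describes."""
    s1, s2 = q1 - qb, q2 - qb
    ratio = s2 / (s1 + s2)        # fraction of the echo landing in node 2
    delay = ratio * pulse_width   # round-trip delay, seconds
    return 0.5 * C * delay        # distance, meters
```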

353 citations


Journal ArticleDOI
TL;DR: The results indicate that the beam stop array-based scatter correction algorithm is practical and effective to reduce and correct x-ray scatter for a CBCT imaging task.
Abstract: Developing and optimizing an x-ray scatter control and reduction technique is one of the major challenges for cone beam computed tomography (CBCT) because CBCT will be much less immune to scatter than fan-beam CT. X-ray scatter reduces image contrast, increases image noise and introduces reconstruction error into CBCT. To reduce scatter interference, a practical algorithm based upon the beam stop array technique and image sequence processing has been developed on a flat panel detector-based CBCT prototype scanner. This paper presents a beam stop array-based scatter correction algorithm and evaluation results from phantom studies. The results indicate that the beam stop array-based scatter correction algorithm is practical and effective in reducing and correcting x-ray scatter for a CBCT imaging task.
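The beam-stop principle lends itself to a compact sketch: detector pixels in a stop's shadow receive scatter only, so sampling them and interpolating across the detector yields a scatter map to subtract. This is a minimal single-projection illustration (in practice the scatter map is estimated from a separate beam-stop acquisition and applied to stop-free projections); the interpolation choice and names are assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.interpolate import griddata

def scatter_correct(projection, stop_centers):
    """projection: 2-D detector image acquired with the beam-stop array.
    stop_centers: (K, 2) array of (row, col) positions of the stops.
    Intensity in a stop's shadow is approximately scatter only."""
    rows, cols = projection.shape
    samples = np.array([projection[int(r), int(c)] for r, c in stop_centers])
    grid_r, grid_c = np.mgrid[0:rows, 0:cols]
    # Interpolate the sparse scatter samples over the whole detector.
    scatter = griddata(stop_centers, samples, (grid_r, grid_c),
                       method="cubic", fill_value=samples.mean())
    return projection - scatter  # scatter-corrected projection
```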

346 citations


Book ChapterDOI
11 May 2004
TL;DR: This paper shows not only that a correct shadow-free image emerges, but also that the angle found agrees with that recovered from a calibration; the method can therefore be applied successfully to remove shadows from unsourced imagery.
Abstract: A method was recently devised for the recovery of an invariant image from a 3-band colour image. The invariant image, originally 1D greyscale but here derived as a 2D chromaticity, is independent of lighting, and also has shading removed: it forms an intrinsic image that may be used as a guide in recovering colour images that are independent of illumination conditions. Invariance to illuminant colour and intensity means that such images are free of shadows, as well, to a good degree. The method devised finds an intrinsic reflectivity image based on assumptions of Lambertian reflectance, approximately Planckian lighting, and fairly narrowband camera sensors. Nevertheless, the method works well when these assumptions do not hold. A crucial piece of information is the angle for an “invariant direction” in a log-chromaticity space. To date, we have gleaned this information via a preliminary calibration routine, using the camera involved to capture images of a colour target under different lights. In this paper, we show that we can in fact dispense with the calibration step, by recognizing a simple but important fact: the correct projection is that which minimizes entropy in the resulting invariant image. To show that this must be the case we first consider synthetic images, and then apply the method to real images. We show that not only does a correct shadow-free image emerge, but also that the angle found agrees with that recovered from a calibration. As a result, we can find shadow-free images for images with unknown camera, and the method is applied successfully to remove shadows from unsourced imagery.
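The entropy-minimization step is easy to sketch: form 2-D log-chromaticities, project them onto candidate directions, and keep the angle whose 1-D projection has minimum histogram entropy. A minimal version follows; the paper additionally handles outliers and bin-width selection, which this sketch omits.

```python
import numpy as np

def invariant_direction(rgb, n_angles=180, eps=1e-6):
    """Find the projection angle that minimizes the entropy of the 1-D
    invariant image formed in log-chromaticity space.
    rgb: (N, 3) array of linear RGB pixel values."""
    # Log-chromaticity relative to the blue channel: (log R/B, log G/B).
    log_chrom = np.log(rgb[:, :2] + eps) - np.log(rgb[:, 2:3] + eps)
    best_angle, best_entropy = 0.0, np.inf
    for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
        proj = log_chrom @ np.array([np.cos(theta), np.sin(theta)])
        hist, _ = np.histogram(proj, bins=64)
        p = hist / hist.sum()
        p = p[p > 0]
        entropy = -(p * np.log2(p)).sum()
        if entropy < best_entropy:
            best_angle, best_entropy = theta, entropy
    return best_angle  # the "invariant direction" angle
```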

307 citations


Proceedings ArticleDOI
18 Feb 2004
TL;DR: The SwissRanger 2 as mentioned in this paper is a 3D camera system based on the time-of-flight (TOF) principle, which can achieve sub-centimeter depth resolution for a wide range of operating conditions.
Abstract: A new miniaturized camera system that is capable of 3-dimensional imaging in real-time is presented. The compact imaging device is able to entirely capture its environment in all three spatial dimensions. It reliably and simultaneously delivers intensity data as well as range information on the objects and persons in the scene. The depth measurement is based on the time-of-flight (TOF) principle. A custom solid-state image sensor allows the parallel measurement of the phase, offset and amplitude of a radio frequency (RF) modulated light field that is emitted by the system and reflected back by the camera surroundings without requiring any mechanical scanning parts. In this paper, the theoretical background of the implemented TOF principle is presented, together with the technological requirements and detailed practical implementation issues of such a distance measuring system. Furthermore, the schematic overview of the complete 3D-camera system is provided. The experimental test results are presented and discussed. The present camera system can achieve sub-centimeter depth resolution for a wide range of operating conditions. A miniaturized version of such a 3D-solid-state camera, the SwissRanger 2, is presented as an example, illustrating the possibility of manufacturing compact, robust and cost effective ranging camera products for 3D imaging in real-time.
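For continuous-wave TOF of this kind, phase, amplitude and offset are conventionally recovered from four samples of the correlation taken 90 degrees apart. A sketch of that standard four-bucket demodulation; these are the textbook formulas, not necessarily the exact pipeline of the SwissRanger 2.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def demodulate(a0, a1, a2, a3, f_mod):
    """Recover phase, amplitude and offset from four samples of the
    RF-modulated light taken 90 degrees apart, then convert phase to
    distance. Inputs may be whole sensor frames (numpy arrays)."""
    phase = np.arctan2(a3 - a1, a0 - a2)
    phase = np.mod(phase, 2 * np.pi)                # wrap into [0, 2*pi)
    amplitude = 0.5 * np.hypot(a3 - a1, a0 - a2)
    offset = 0.25 * (a0 + a1 + a2 + a3)
    distance = C * phase / (4 * np.pi * f_mod)      # up to range ambiguity
    return distance, amplitude, offset
```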

301 citations


Proceedings ArticleDOI
Bennett Wilburn, Neel Joshi, Vaibhav Vaish, Marc Levoy, Mark Horowitz
27 Jun 2004
TL;DR: A system for capturing multi-thousand frame-per-second video using a dense array of cheap 30 fps CMOS image sensors is demonstrated, along with methods to compensate for the spatial and temporal distortions caused by the electronic rolling shutter, a common feature of low-end CMOS sensors.
Abstract: We demonstrate a system for capturing multi-thousand frame-per-second (fps) video using a dense array of cheap 30 fps CMOS image sensors. A benefit of using a camera array to capture high-speed video is that we can scale to higher speeds by simply adding more cameras. Even at extremely high frame rates, our array architecture supports continuous streaming to disk from all of the cameras. This allows us to record unpredictable events, in which nothing occurs before the event of interest that could be used to trigger the beginning of recording. Synthesizing one high-speed video sequence using images from an array of cameras requires methods to calibrate and correct those cameras' varying radiometric and geometric properties. We assume that our scene is either relatively planar or is very far away from the camera and that the images can therefore be aligned using projective transforms. We analyze the errors from this assumption and present methods to make them less visually objectionable. We also present a new method to automatically color match our sensors. Finally, we demonstrate how to compensate for spatial and temporal distortions caused by the electronic rolling shutter, a common feature of low-end CMOS sensors.
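A toy model of the rolling-shutter timing that the compensation builds on; the names and the linear row-delay assumption are mine, not the paper's exact model.

```python
def row_capture_time(trigger_time, row, n_rows, frame_period):
    """With an electronic rolling shutter the readout sweeps down the
    sensor, so row r of a frame triggered at trigger_time is captured
    roughly (r / n_rows) * frame_period later."""
    return trigger_time + (row / n_rows) * frame_period

# Example: for a 30 fps camera (frame_period ~ 1/30 s), row 240 of a
# 480-row sensor lags the trigger by about 16.7 ms. A virtual high-speed
# frame at time t should therefore take each row from whichever camera's
# row_capture_time is closest to t.
```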

245 citations


Patent
07 Jul 2004
TL;DR: In this article, three non-complex imaging arrangements are provided: in two of the imaging arrangements a moveable carrier housing at least one objective lens is provided, and in the other imaging arrangement at least one stationary objective lens and additional optical elements are provided.
Abstract: Three non-complex imaging arrangements are provided where in two of the imaging arrangements a moveable carrier housing at least one objective lens is provided and, in the other imaging arrangement, at least one stationary objective lens and additional optical elements are provided. Each embodiment includes at least one fixed image sensor array for imaging thereon an optical code or target, such as a one-dimensional barcode symbol, or label, marking, picture, etc. Each imaging arrangement provides an extended working range of approximately 5-102 cm. The imaging arrangements are capable of being incorporated within a barcode imager to provide a non-complex barcode imager having an extended working range which is comparable to or greater than the working ranges of conventional image-based barcode imagers.

242 citations


Proceedings ArticleDOI
19 Jul 2004
TL;DR: A method for simultaneously recovering the trajectory of a target and the external calibration parameters of non-overlapping cameras in a multi-camera system with a network of indoor wireless cameras is described.
Abstract: We describe a method for simultaneously recovering the trajectory of a target and the external calibration parameters of non-overlapping cameras in a multi-camera system. Each camera is assumed to measure the location of a moving target within its field of view with respect to the camera's ground-plane coordinate system. Calibrating the network of cameras requires aligning each camera's ground-plane coordinate system with a global ground-plane coordinate system. Prior knowledge about the target's dynamics can compensate for the lack of overlap between the camera fields of view. The target is allowed to move freely with varying speed and direction. We demonstrate the idea with a network of indoor wireless cameras.
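Once corresponding target positions are available in a camera's ground plane and in the global frame (for example, predicted by the target dynamics while the target is out of view), aligning the two coordinate systems is a 2-D rigid registration. Below is a least-squares sketch using the standard SVD (Kabsch) construction; it illustrates the alignment step only and is not the authors' estimator.

```python
import numpy as np

def align_ground_planes(track_local, track_global):
    """Least-squares 2-D rigid transform (rotation + translation) taking
    a camera's ground-plane track onto the global track. Both inputs are
    (N, 2) arrays of corresponding target positions."""
    mu_l, mu_g = track_local.mean(axis=0), track_global.mean(axis=0)
    H = (track_local - mu_l).T @ (track_global - mu_g)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = mu_g - R @ mu_l
    return R, t  # global = R @ local + t
```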

231 citations


Journal ArticleDOI
TL;DR: A new method for high-resolution image reconstruction, called a pixel rearrange method, is proposed, where the relation between the target object and the captured signals is estimated and utilized to rearrange the original pixel information.
Abstract: The authors have proposed an architecture for a compact image-capturing system called TOMBO (thin observation module by bound optics), which uses compound-eye imaging for a compact hardware configuration [Appl. Opt. 40, 1806 (2001)]. The captured compound image is decomposed into a set of unit images, then the pixels in the unit images are processed with digital processing to retrieve the target image. A new method for high-resolution image reconstruction, called a pixel rearrange method, is proposed. The relation between the target object and the captured signals is estimated and utilized to rearrange the original pixel information. Experimental results show the effectiveness of the proposed method. In the experimental TOMBO system, the resolution obtained is four times higher than that of the unit image that did not undergo reconstruction processing.
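In spirit, pixel rearrangement scatters each unit image's pixels onto an upsampled grid according to that unit's estimated shift and averages overlapping contributions. Below is a much-simplified sketch assuming known integer shifts on the high-resolution grid; the paper estimates the object-to-signal relation rather than assuming such shifts.

```python
import numpy as np

def pixel_rearrange(units, shifts, scale):
    """units: list of (h, w) unit (sub-aperture) images.
    shifts: list of (dy, dx) per-unit shifts, in high-resolution pixels.
    scale: integer upsampling factor of the virtual image plane."""
    h, w = units[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(units, shifts):
        ys = (np.arange(h) * scale + int(round(dy))) % (h * scale)
        xs = (np.arange(w) * scale + int(round(dx))) % (w * scale)
        acc[np.ix_(ys, xs)] += img   # scatter this unit's pixels
        cnt[np.ix_(ys, xs)] += 1
    return acc / np.maximum(cnt, 1)  # average overlapping contributions
```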

226 citations


Patent
20 Aug 2004
TL;DR: In this paper, an apparatus and technique for compensating the display of an image obtained from a video camera system associated with an endoscope as it is moved through various orientations are described.
Abstract: An apparatus and technique for compensating the display of an image obtained from a video camera system associated with an endoscope as it is moved through various orientations are described. The received optical image is converted to an electrical signal with an image sensor that can be a CCD or a CMOS detector. The endoscope video camera system has an inertial sensor to sense rotations of the received image about the optical axis of the endoscope, and the sensor's output signals are used to rotate either the image or the image sensor. When the image sensor itself is rotated, the rotation sensor can be a gyroscope or a pair of accelerometers. When instead the image obtained with the image sensor is rotated, the inertial sensor, which can be an accelerometer or a gyroscope, drives a microprocessor that rotates the image for subsequent viewing on a video display.

Journal ArticleDOI
01 Feb 2004
TL;DR: A Viewpoint Planner is developed to generate the sensor placement plan; it includes many functions, such as 3-D animation of the object geometry, sensor specification, initialization of the viewpoint number and their distribution, viewpoint evolution, shortest path computation, scene simulation of a specific viewpoint, and parameter amendment.
Abstract: This paper presents a method for automatic sensor placement for model-based robot vision. In such a vision system, the sensor often needs to be moved from one pose to another around the object to observe all features of interest. This allows multiple three-dimensional (3-D) images to be taken from different vantage points. The task involves determination of the optimal sensor placements and a shortest path through these viewpoints. During the sensor planning, object features are resampled as individual points attached with surface normals. The optimal sensor placement graph is achieved by a genetic algorithm in which a min-max criterion is used for the evaluation. A shortest path is determined by the Christofides algorithm. A Viewpoint Planner is developed to generate the sensor placement plan. It includes many functions, such as 3-D animation of the object geometry, sensor specification, initialization of the viewpoint number and their distribution, viewpoint evolution, shortest path computation, scene simulation of a specific viewpoint, and parameter amendment. Experiments are also carried out on a real robot vision system to demonstrate the effectiveness of the proposed method.
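The path-planning step orders the chosen viewpoints into a short tour. The paper uses the Christofides algorithm (a 1.5-approximation for metric TSP); the greedy nearest-neighbour stand-in below is shown only because it fits in a few lines, and is plainly a weaker substitute.

```python
import numpy as np

def nearest_neighbour_tour(viewpoints):
    """Order viewpoints into a short visiting sequence.
    viewpoints: (N, 3) array of sensor positions."""
    n = len(viewpoints)
    unvisited = set(range(1, n))
    tour = [0]
    while unvisited:
        last = viewpoints[tour[-1]]
        nxt = min(unvisited,
                  key=lambda j: np.linalg.norm(viewpoints[j] - last))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour  # visit order; return to tour[0] to close the tour
```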

Patent
16 Aug 2004
TL;DR: In this article, a semiconductor package is configured to be aligned with and joined to a wafer bearing a plurality of image sensor dice, wherein optional, downwardly protruding skirts along peripheries of the frames may be received into kerfs cut along streets between die locations on the wafer, followed by installation of other package components.
Abstract: A semiconductor package such as an image sensor package, and methods for fabrication. A frame structure includes an array of frames, each having an aperture therethrough, into which an image sensor die in combination with a cover glass, filter, lens or other components may be installed in precise mutual alignment. Singulated image sensor dice and other components may be picked and placed into each frame of the frame structure. Alternatively, the frame structure may be configured to be aligned with and joined to a wafer bearing a plurality of image sensor dice, wherein optional, downwardly protruding skirts along peripheries of the frames may be received into kerfs cut along streets between die locations on the wafer, followed by installation of other package components. In either instance, the frame structure in combination with singulated image sensor dice or a joined wafer is singulated into individual image sensor packages. Various external connection approaches may be used for the packages.

Patent
12 Jan 2004
TL;DR: In this article, a technique for measuring, inspecting, characterizing and/or evaluating optical lithographic equipment, methods, and materials used therewith, for example, photomasks is presented.
Abstract: In one aspect, the present invention is a technique of, and a system and sensor for measuring, inspecting, characterizing and/or evaluating optical lithographic equipment, methods, and/or materials used therewith, for example, photomasks. In one embodiment, the system, sensor and technique measures, collects and/or detects an aerial image produced or generated by the interaction between the photomask and lithographic equipment. An image sensor unit may measure, collect, sense and/or detect the aerial image in situ—that is, the aerial image at the wafer plane produced, in part, by a product-type photomask (i.e., a wafer having integrated circuits formed during the integrated circuit fabrication process) and/or by associated lithographic equipment used, or to be used, to manufacture integrated circuits. In this way, the aerial image used, generated or produced to measure, inspect, characterize and/or evaluate the photomask is the same aerial image used, generated or produced during wafer exposure in integrated circuit manufacturing. In another embodiment, the system, sensor and technique characterizes and/or evaluates the performance of the optical lithographic equipment, for example, the optical sub-system of such equipment. In this regard, in one embodiment, an image sensor unit measures, collects, senses and/or detects the aerial image produced or generated by the interaction between lithographic equipment and a photomask having a known, predetermined or fixed pattern (i.e., test mask). In this way, the system, sensor and technique collects, senses and/or detects the aerial image produced or generated by the combination of the test mask and the lithographic equipment in order to inspect, evaluate and/or characterize the performance of the lithographic equipment.

Patent
03 May 2004
TL;DR: In this article, a color-based imaging system and method for the detection and classification of insects and other arthropods are described, including devices for counting arthropod and providing taxonomic capabilities useful for pest-management.
Abstract: A color-based imaging system and method for the detection and classification of insects and other arthropods are described, including devices for counting arthropods and providing taxonomic capabilities useful for pest-management. Some embodiments include an image sensor (for example, a digital color camera, scanner or a video camera) with optional illumination that communicates with a computer system. Some embodiments include a color scanner connected to a computer. Sampled arthropods are put on a scanner to be counted and identified. The computer captures images from the scanner, adjusts scanner settings, and processes the acquired images to detect and identify the arthropods. Other embodiments include a trapping device and a digital camera connected by cable or wireless communications to the computer. Some devices include a processor to do the detection and identification in the field, or the field system can send the images to a centralized host computer for detection and identification.

Patent
15 Nov 2004
TL;DR: In this paper, the authors present methods of manufacturing a digital apparatus having compensation for defective pixels, where the compensation circuitry compensates the digital data representative of the image using the information indicative of the locations of the defective pixels.
Abstract: Disclosed is a digital apparatus, such as a digital camera, which generates a digital representation of an image. The digital apparatus includes an image sensor having an array of pixels. An analog-to-digital converter converts electrical signals from the array of pixels into digital data representative of the image. Information indicative of locations of defective pixels in the pixel array is stored in a pixel defect memory. Compensation circuitry compensates the digital data representative of the image using the information indicative of the locations of the defective pixels. Also disclosed are methods of manufacturing a digital apparatus having compensation for defective pixels.
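A minimal software analogue of the compensation circuitry: replace each pixel listed in the defect memory with the median of its valid neighbours. On a real Bayer sensor one would use same-color neighbours; the names here are illustrative, not the patent's circuit.

```python
import numpy as np

def compensate_defects(raw, defect_coords):
    """Replace each defective pixel listed in the pixel-defect memory
    with the median of its non-defective 8-neighbours.
    raw: 2-D sensor frame; defect_coords: iterable of (row, col)."""
    out = raw.astype(float)
    bad = set(map(tuple, defect_coords))
    h, w = raw.shape
    for r, c in bad:
        neigh = [out[rr, cc]
                 for rr in range(max(r - 1, 0), min(r + 2, h))
                 for cc in range(max(c - 1, 0), min(c + 2, w))
                 if (rr, cc) != (r, c) and (rr, cc) not in bad]
        if neigh:
            out[r, c] = np.median(neigh)
    return out
```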

Proceedings ArticleDOI
TL;DR: Two different approaches for ultra flat image acquisition sensors on the basis of artificial compound eyes are examined and measurements of the angular sensitivity function are compared to calculations using commercial raytracing software.
Abstract: Two different approaches for ultra flat image acquisition sensors on the basis of artificial compound eyes are examined. In apposition optics the image reconstruction is based on moiré- or static sampling while the superposition eye approach produces an overall image. Both types of sensors are compared with respect to theoretical limitations of resolution, sensitivity and system thickness. Explicit design rules are given. A paraxial 3×3 matrix formalism is used to describe the arrangement of three microlens arrays with different pitches to find first order parameters of artificial superposition eyes. The model is validated by analysis of the system with raytracing software. Measurements of focal length of anamorphic reflow lenses, which are key components of the superposition approach, under oblique incidence are performed. For the second approach, the artificial apposition eye, a first demonstrator system is presented. The monolithic device consists of a UV-replicated reflow microlens array on a thin silica substrate with a pinhole array in a metal layer on the backside. The pitch of the pinholes differs from the lens array pitch to enable an individual viewing angle for each channel. Imaged test patterns are presented and measurements of the angular sensitivity function are compared to calculations using commercial raytracing software.

Journal ArticleDOI
TL;DR: Experimental results of the image reconstruction show the effectiveness of the proposed multispectral imaging system, in which pixels in the captured image are geometrically rearranged onto a multi-channel virtual image plane.
Abstract: A very thin image capturing system called TOMBO (thin observation module by bound optics) is developed with compound-eye imaging and digital post-processing. As an application of TOMBO, a multispectral imaging system is proposed. With a specific arrangement of the optical system, spatial points can be observed by multiple photodetectors simultaneously. A filter array inserted in front of the image sensor enables observation of the spectrum of the target. The captured image is reconstructed by a modified pixel rearranging method extended to treat multi-channel spectral data, in which pixels in the captured image are geometrically rearranged onto a multi-channel virtual image plane. Experimental results of the image reconstruction show the effectiveness of the proposed system.

Patent
Minefuji Nobutaka, Masahiro Oono
01 Jun 2004
TL;DR: In this paper, a multiple-focal-length imaging device is described that includes at least one image sensor positioned in one plane, and a plurality of image-forming optical systems through which images at different magnifications are formed on a plurality of different image-forming areas on the image sensor.
Abstract: A multiple-focal-length imaging device includes at least one image sensor positioned in one plane; and a plurality of image-forming optical systems through which a plurality of images at different magnifications are formed on a plurality of different image-forming areas on the image sensor.

Patent
19 Jul 2004
TL;DR: In this article, a microelectronic imager assembly comprising a workpiece including a substrate and a plurality of imaging dies on and/or in the substrate is described, which can further include optical devices mounted or otherwise carried by the optics supports.
Abstract: Microelectronic imager assemblies comprising a workpiece including a substrate and a plurality of imaging dies on and/or in the substrate. The substrate includes a front side and a back side, and the imaging dies comprise image sensors at the front side of the substrate and external contacts operatively coupled to the image sensors. The microelectronic imager assembly further comprises optics supports superimposed relative to the imaging dies. The optics supports can be directly on the substrate or on a cover over the substrate. Individual optics supports can have (a) an opening aligned with one of the image sensors, and (b) a bearing element at a reference distance from the image sensor. The microelectronic imager assembly can further include optical devices mounted on or otherwise carried by the optics supports.

Patent
Teruyuki Higuchi
02 Sep 2004
TL;DR: A fingerprint input apparatus, as described in this paper, includes an image sensor that responds to scattered light emanating from a finger; the scattered light is generated by external light inside the finger bearing the fingerprint pattern.
Abstract: A fingerprint input apparatus includes an image sensor which responds to scattered light emanating from a finger. The scattered light is generated inside a finger having a fingerprint pattern in accordance with external light. The sensor may be a two-dimensional image sensor made of a large number of light-receiving elements arranged in a two-dimensional array or a one-dimensional sensor made of a large number of light-receiving elements arranged in a line-type array. In the latter case, the fingerprint is input by swiping the finger across the image sensor and reconstructing the fingerprint image. The fingerprint input apparatus is used to control use of a variety of devices, including electronic devices such as cellular telephones and personal computers, and access to buildings, rooms, safes and the like. The fingerprint input apparatus makes possible the elimination of personal identification numbers (PINs) and signatures in a variety of transactions.

Journal ArticleDOI
TL;DR: This paper proposes effective schemes to enhance two existing state-of-the-art demosaicking methods and shows that the enhanced methods achieve notable improvements over the existing methods, in terms of both subjective and objective evaluations.
Abstract: To minimize cost and size, most commercial digital cameras acquire imagery using a single electronic sensor (CCD or CMOS) overlaid with a color filter array (CFA) such that each sensor pixel only samples one of the three primary color values. To restore a full-color image from CFA samples, the two missing color values at each pixel need to be estimated from the neighboring samples, a process that is commonly known as CFA demosaicking or interpolation. In this paper we present two contributions to CFA demosaicking. First, we stress the importance of well exploiting both image spatial and spectral correlations, and characterize the demosaicking artifacts due to inadequate use of either correlation. Second, based on the insights gained from our empirical study, we propose effective schemes to enhance two existing state-of-the-art demosaicking methods. Experimental results show that our enhanced methods achieve notable improvements over the existing methods, in terms of both subjective and objective evaluations, on a large variety of test images. In addition, the computational complexities of the enhanced methods are comparable to the originals.
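The spectral-correlation principle the paper stresses is commonly realized by interpolating color differences rather than the raw channels, since R-G and B-G vary far more smoothly than R and B. Below is a deliberately simple sketch for an RGGB Bayer pattern (bilinear throughout, so far cruder than the enhanced methods evaluated in the paper).

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_rggb(cfa):
    """cfa: 2-D float array sampled with an RGGB Bayer pattern."""
    h, w = cfa.shape
    r_m = np.zeros((h, w), bool); r_m[0::2, 0::2] = True
    b_m = np.zeros((h, w), bool); b_m[1::2, 1::2] = True
    g_m = ~(r_m | b_m)

    # Green: bilinear fill at R/B sites from the 4 green cross-neighbours;
    # at green sites the kernel reproduces the measured value.
    k_cross = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    G = convolve(np.where(g_m, cfa, 0.0), k_cross)

    # Spectral correlation: interpolate the R-G and B-G color differences
    # (smooth across the image) instead of R and B themselves.
    k_full = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    R = G + convolve(np.where(r_m, cfa - G, 0.0), k_full)
    B = G + convolve(np.where(b_m, cfa - G, 0.0), k_full)
    return np.dstack([R, G, B])
```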

Book
01 Jan 2004
TL;DR: A scalable architecture that continuously streams color video from over 100 inexpensive cameras to disk using four PCs, creating a one-gigasample-per-second photometer, is presented, along with a novel multiple-camera optical flow variant for spatiotemporal view interpolation.
Abstract: Digital cameras are becoming increasingly cheap and ubiquitous, leading researchers to exploit multiple cameras and plentiful processing to create richer and more accurate representations of real settings. This thesis addresses issues of scale in large camera arrays. I present a scalable architecture that continuously streams color video from over 100 inexpensive cameras to disk using four PCs, creating a one gigasample-per-second photometer. It extends prior work in camera arrays by providing as much control over those samples as possible. For example, this system not only ensures that the cameras are frequency-locked, but also allows arbitrary, constant temporal phase shifts between cameras, allowing the application to control the temporal sampling. The flexible mounting system also supports many different configurations, from tightly packed to widely spaced cameras, so applications can specify camera placement. Even greater flexibility is provided by processing power at each camera, including an MPEG2 encoder for video compression, and FPGAs and embedded microcontrollers to perform low-level image processing for real-time applications. I present three novel applications for the camera array that highlight strengths of the architecture and the advantages and feasibility of working with many inexpensive cameras: synthetic aperture videography, high speed videography, and spatiotemporal view interpolation. Synthetic aperture videography uses numerous moderately spaced cameras to emulate a single large-aperture one. Such a camera can see through partially occluding objects like foliage or crowds. I show the first synthetic aperture images and videos of dynamic events, including live video accelerated by image warps performed at each camera. High-speed videography uses densely packed cameras with staggered trigger times to increase the effective frame rate of the system. I show how to compensate for artifacts induced by the electronic rolling shutter commonly used in inexpensive CMOS image sensors and present results streaming 1560 fps video using 52 cameras. Spatiotemporal view interpolation processes images from multiple video cameras to synthesize new views from times and positions not in the captured data. We simultaneously extend imaging performance along two axes by properly staggering the trigger times of many moderately spaced cameras, enabling a novel multiple-camera optical flow variant for spatiotemporal view interpolation.
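The high-speed videography result rests on simple timing arithmetic: staggering the trigger of each camera by an equal fraction of the frame period makes the combined array sample time uniformly. A sketch, assuming ideal jitter-free triggers:

```python
def trigger_offsets(n_cameras, base_fps):
    """Evenly stagger n_cameras running at base_fps so their combined
    frames sample time uniformly at n_cameras * base_fps."""
    frame_period = 1.0 / base_fps
    return [k * frame_period / n_cameras for k in range(n_cameras)]

# Example: 52 cameras at 30 fps -> offsets of 0, 0.64, 1.28, ... ms and
# an effective rate of 52 * 30 = 1560 fps, matching the streaming result
# reported above.
```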

Patent
14 Dec 2004
TL;DR: In this article, a color image sensor has a light sensor and imaging elements arranged to form images of the subject in light of different colors on respective regions (131, 132, 133) of the light sensor.
Abstract: The color image sensor (100) generates an image signal (114) representing a subject. The color image sensor has a light sensor (112) and imaging elements (101, 102, 103) arranged to form images of the subject in light of different colors on respective regions (131, 132, 133) of the light sensor. The light sensor includes sensor elements (e.g., 121) and is operable to generate the image signal in response to light incident on it.

Proceedings ArticleDOI
TL;DR: An analysis of dynamic range and signal-to-noise ratio (SNR) for high-fidelity, high-dynamic-range (HDR) image sensor architectures is presented, together with examples of SNR behavior in the extended DR and implementation and power consumption issues for each scheme.
Abstract: Analysis of dynamic-range (DR) and signal-to-noise-ratio (SNR) for high fidelity, high-dynamic-range (HDR) image sensor architectures is presented. Four architectures are considered: (i) time-to-saturation, (ii) multiple-capture, (iii) asynchronous self-reset with multiple capture, and (iv) synchronous self-reset with residue readout. The analysis takes into account circuit nonidealities such as quantization noise and the effects of limited pixel area on building block and reference signal performance and accuracy. Examples that demonstrate the behavior of SNR in the extended DR and implementation and power consumption issues for each scheme are presented.
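A toy numerical model of the multiple-capture scheme makes the DR/SNR trade-off tangible. The shot-plus-read-noise model and the parameter names below are simplifying assumptions, not the paper's full analysis (which also accounts for quantization noise and circuit nonidealities).

```python
import numpy as np

def snr_multiple_capture(photocurrent, t_exposures, q_sat, read_noise_e):
    """Toy SNR for a multiple-capture HDR pixel: use the longest exposure
    that does not saturate; shot noise plus read noise only.
    photocurrent in electrons/second, charges in electrons."""
    best = 0.0
    for t in t_exposures:
        signal = photocurrent * t
        if signal <= q_sat:
            best = max(best, signal / np.sqrt(signal + read_noise_e**2))
    return 20 * np.log10(best) if best > 0 else -np.inf  # dB

def dynamic_range_db(q_sat, read_noise_e, t_max, t_min):
    # Single-capture DR (q_sat / read noise) extended by the ratio of the
    # longest to shortest exposure in the capture schedule.
    return 20 * np.log10((q_sat / read_noise_e) * (t_max / t_min))
```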

Patent
30 Apr 2004
TL;DR: In this paper, a wide-angle motion video camera for use in surveillance and security applications is disclosed in which multiple objects can be processed by the camera and the results of the processing transmitted to a base station.
Abstract: A multi-object processing video camera is disclosed. The video camera comprises an image scan and capture circuit capable of capturing a wide-angle field of view in high resolution and a processing circuit capable of executing a plurality of software tasks on a plurality of regions from the image scan and capture circuit based upon the wide-angle field of view. The video camera includes memory for storing programs for the processing circuit based upon the captured wide-angle field of view and a network connection coupled to the processing circuit. A wide-angle motion video camera for use in surveillance and security applications is disclosed in which multiple objects can be processed by the camera and the results of the processing transmitted to a base station.

Journal ArticleDOI
TL;DR: A simulation model for CCD and CMOS imager-based luminescence detection systems is developed and signal processing algorithms are applied to the image to enhance detection reliability and hence increase the overall system throughput.

Journal ArticleDOI
TL;DR: This work describes a method to measure effective interexposure time using subtracted image data of a uniformly moving object, and measures the sensitivity to patient motion artifacts.
Abstract: Dual energy detector systems are combinations of x-ray detectors, x-ray source spectrum switching, and x-ray filter attenuation that provide two measurements of transmitted flux through the object with different effective spectra. We describe technology independent methods to measure and compare the quantum noise and sensitivity to motion artifacts of these systems. The experimental methods use relatively simple phantoms to measure the parameters in the general mathematical expressions for the noise in the subtracted image. The parameters are used to compute an x-ray energy spectrum quality factor and a subtracted image noise per unit patient dose quality factor. Patient motion causes artifacts in switched spectrum systems, particularly with the heart in chest radiography. We describe a method to measure effective interexposure time using subtracted image data of a uniformly moving object. This parameter measures the sensitivity to patient motion artifacts. We use these methods to compare three examples of systems with different dual energy detector technologies: a passive, "sandwich" detector with two computed radiography plates separated by a copper filter, an "active" detector that uses voltage switching with an electro-optical system and computed radiography plates, and a flat-panel, solid state detector with voltage switching.
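The subtraction underlying such systems is typically a weighted log subtraction of the two acquisitions; the quantum noise of both exposures propagates into the result, which is what the paper's noise-per-dose quality factor quantifies. A minimal sketch (the weight w and the epsilon guard are illustrative choices):

```python
import numpy as np

def dual_energy_subtract(i_low, i_high, w):
    """Weighted log subtraction of low-kVp and high-kVp images.
    The weight w is chosen to cancel one material (e.g., soft tissue),
    leaving the other (e.g., bone) in the subtracted image."""
    eps = 1e-9  # guard against log(0) in unexposed pixels
    return np.log(i_high + eps) - w * np.log(i_low + eps)
```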


Patent
27 Sep 2004
TL;DR: In this article, the authors provide light sources and endoscopy systems that can improve the quality of images and the ability of users to distinguish desired features when viewing tissues by providing methods and apparatus that improve the dynamic range of images from endoscopes.
Abstract: The apparatus and methods herein provide light sources and endoscopy systems that improve image quality and the user's ability to distinguish desired features when viewing tissue, by improving the dynamic range of images from endoscopes and other optical systems, in particular endoscopes whose dynamic range is limited by small image sensors and small pixel electron-well capacity.