
Showing papers on "Image sensor published in 2006"


Journal ArticleDOI
TL;DR: A new method is proposed for identifying a digital camera from its images using the sensor's pattern noise: for each camera under investigation, a reference pattern noise is obtained by averaging the noise extracted from multiple images with a denoising filter, and this pattern serves as a unique identification fingerprint.
Abstract: In this paper, we propose a new method for the problem of digital camera identification from its images based on the sensor's pattern noise. For each camera under investigation, we first determine its reference pattern noise, which serves as a unique identification fingerprint. This is achieved by averaging the noise obtained from multiple images using a denoising filter. To identify the camera from a given image, we consider the reference pattern noise as a spread-spectrum watermark, whose presence in the image is established by using a correlation detector. Experiments on approximately 320 images taken with nine consumer digital cameras are used to estimate false alarm rates and false rejection rates. Additionally, we study how the error rates change with common image processing, such as JPEG compression or gamma correction.
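A minimal sketch of this pipeline, assuming grayscale float images and substituting a Gaussian filter for the paper's wavelet-based denoiser (all names here are illustrative, not the authors' code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """Residual = image minus its denoised version; this retains the
    sensor's pattern noise (plus scene leakage, averaged out later)."""
    return img - gaussian_filter(img, sigma)

def reference_pattern(images):
    """Average residuals over many images from one camera so that scene
    content cancels and the fixed pattern noise remains."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(a, b):
    """Normalized correlation, used as the spread-spectrum-style detector."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

# Identification: compute correlation(noise_residual(query), ref) against each
# camera's reference pattern and compare with a threshold chosen to meet a
# target false alarm rate.
```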

1,195 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: A novel technique for calibrating central omnidirectional cameras by assuming that the imaging function can be described by a Taylor series expansion whose coefficients are estimated by solving a four-step least-squares linear minimization problem, followed by a non-linear refinement based on the maximum likelihood criterion.
Abstract: In this paper, we present a novel technique for calibrating central omnidirectional cameras. The proposed procedure is very fast and completely automatic, as the user is only asked to collect a few images of a checkerboard, and click on its corner points. In contrast with previous approaches, this technique does not use any specific model of the omnidirectional sensor. It only assumes that the imaging function can be described by a Taylor series expansion whose coefficients are estimated by solving a four-step least-squares linear minimization problem, followed by a non-linear refinement based on the maximum likelihood criterion. To validate the proposed technique, and evaluate its performance, we apply the calibration on both simulated and real data. Moreover, we show the calibration accuracy by projecting the color information of a calibrated camera on real 3D points extracted by a SICK 3D laser range finder. Finally, we provide a Toolbox which implements the proposed calibration procedure.
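As a sketch of that imaging model: a (centered, ideal) pixel back-projects to a ray whose axial component is the Taylor polynomial evaluated at the pixel's radial distance. Assuming coefficients [a0, a1, ...] come from the calibration (names illustrative):

```python
import numpy as np

def pixel_to_ray(u, v, coeffs):
    """Back-project centered pixel coordinates (u, v) to a viewing ray using
    the Taylor-series model f(rho) = a0 + a1*rho + a2*rho**2 + ...
    `coeffs` holds [a0, a1, a2, ...] as estimated by the linear fit."""
    rho = np.hypot(u, v)                 # radial distance from image center
    z = np.polyval(coeffs[::-1], rho)    # np.polyval wants highest power first
    ray = np.array([u, v, z])
    return ray / np.linalg.norm(ray)     # unit direction in the camera frame
```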

653 citations


Journal ArticleDOI
TL;DR: In this paper, CMOS image sensors are reviewed, covering the latest advances, their applications, new challenges, and their limitations, and summarizing the state of the art of CMOS image sensors.

546 citations


Journal ArticleDOI
13 Mar 2006
TL;DR: This paper provides an overview of the approaches and techniques developed during the last decade to overcome the limitations of conventional integral imaging (II) systems and expects that II-based 3-D imaging systems will reach practical applicability in various fields.
Abstract: Three-dimensional (3-D) imaging and display have been subjects of much research due to their diverse benefits and applications. However, because an enormous amount of optical data must be captured, recorded, processed, and displayed to produce high-quality 3-D images, the 3-D imaging techniques developed so far were forced to compromise their performance (e.g., giving up continuous parallax or restricting viewing to a fixed point) or to use special devices and technology (such as coherent illumination or special spectacles) that are inconvenient for most practical implementations. Today's rapid progress in digital capture and display technology has opened the possibility of moving toward noncompromising, easy-to-use 3-D imaging techniques. This progress prompted the revival of the integral imaging (II) technique, based on a technique proposed almost one century ago. II is a type of multiview 3-D imaging system that uses an array of diffractive or refractive elements to capture the 3-D optical data. It has attracted great attention recently, since it produces autostereoscopic images without special illumination requirements. However, a conventional II system cannot produce 3-D images that simultaneously have high resolution, large depth of field, and a large viewing angle. This paper provides an overview of the approaches and techniques developed during the last decade to overcome these limitations. By combining these techniques with upcoming technology, II-based 3-D imaging systems can be expected to reach practical applicability in various fields.

405 citations



Proceedings ArticleDOI
01 Dec 2006
TL;DR: This paper proposes algorithms and hardware to support a new theory of compressive imaging, built around a new digital image/video camera that directly acquires random projections of the signal without first collecting the pixels/voxels.
Abstract: Compressive Sensing is an emerging field based on the revelation that a small group of non-adaptive linear projections of a compressible signal contains enough information for reconstruction and processing. In this paper, we propose algorithms and hardware to support a new theory of Compressive Imaging. Our approach is based on a new digital image/video camera that directly acquires random projections of the signal without first collecting the pixels/voxels. Our camera architecture employs a digital micromirror array to perform optical calculations of linear projections of an image onto pseudo-random binary patterns. Its hallmarks include the ability to obtain an image with a single detection element while measuring the image/video fewer times than the number of pixels, which can significantly reduce the computation required for video acquisition/encoding. Because our system relies on a single photon detector, it can also be adapted to image at wavelengths that are currently impossible with conventional CCD and CMOS imagers. We are currently testing a prototype design for the camera and include experimental results.
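A toy numerical sketch of the measurement model follows; a least-squares solve stands in for the sparsity-exploiting (l1/basis-pursuit) reconstruction the theory actually prescribes, and all sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 32 * 32, 256   # 1024-pixel scene, but only 256 measurements

# Synthetic sparse scene (flattened); the real camera measures a physical image.
x = np.zeros(n)
x[rng.choice(n, 20, replace=False)] = rng.random(20)

# Each row is one pseudo-random binary micromirror pattern; the single
# photodetector optically reads the inner product <pattern, image>.
Phi = rng.integers(0, 2, size=(m, n)).astype(float)
y = Phi @ x           # m sequential single-pixel measurements

# Real reconstruction solves min ||x||_1 subject to Phi @ x = y; the
# least-squares stand-in below only illustrates the dimensions involved.
x_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```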

374 citations


Proceedings ArticleDOI
02 Feb 2006
TL;DR: Forgeries are detected by testing individual image regions for the presence of the camera pattern noise, under the assumption that either the camera that took the image is available or other images taken by that camera are available.
Abstract: We present a new approach to detection of forgeries in digital images under the assumption that either the camera that took the image is available or other images taken by that camera are available. Our method is based on detecting the presence of the camera pattern noise, which is a unique stochastic characteristic of imaging sensors, in individual regions in the image. The forged region is determined as the one that lacks the pattern noise. The presence of the noise is established using correlation, as in detection of spread spectrum watermarks. We propose two approaches. In the first one, the user selects an area for integrity verification. The second method attempts to automatically determine the forged area without assuming any a priori knowledge. The methods are tested both on examples of real forgeries and on non-forged images. We also investigate how further image processing applied to the forged image, such as lossy compression or filtering, influences our ability to verify image integrity.
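A sketch of the automatic variant, assuming a noise residual and camera reference pattern computed as in the camera-identification work above; the block size and per-block thresholding are illustrative choices:

```python
import numpy as np

def region_scores(residual, ref_pattern, block=64):
    """Correlate the image's noise residual with the camera's reference
    pattern block by block; blocks lacking the pattern noise (near-zero
    correlation) are candidate forged regions."""
    H, W = residual.shape
    scores = {}
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            a = residual[i:i + block, j:j + block]
            b = ref_pattern[i:i + block, j:j + block]
            a = a - a.mean()
            b = b - b.mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            scores[(i, j)] = float((a * b).sum() / denom) if denom else 0.0
    return scores
```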

339 citations


Proceedings ArticleDOI
18 Sep 2006
TL;DR: A 128×128-pixel vision sensor responds to temporal contrast with asynchronous output, each pixel independently and continuously quantizing changes in log intensity.
Abstract: A vision sensor responds to temporal contrast with asynchronous output. Each pixel independently and continuously quantizes changes in log intensity. The 128×128-pixel chip has a 120 dB illumination operating range and consumes 30 mW. Pixels respond in <100 µs at 1 klux scene illumination with <10% contrast-threshold FPN.
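A frame-based sketch of the pixel behavior (the actual chip is asynchronous and continuous-time; the contrast threshold below is illustrative):

```python
import numpy as np

def events_from_frames(frames, theta=0.15):
    """Emit (t, x, y, polarity) events whenever a pixel's log intensity
    changes by more than the contrast threshold theta, mimicking the
    temporal-contrast pixel with a discretized frame sequence."""
    log_ref = np.log(frames[0] + 1e-6)   # each pixel's last event level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_now = np.log(frame + 1e-6)
        d = log_now - log_ref
        for y, x in zip(*np.nonzero(np.abs(d) >= theta)):
            events.append((t, x, y, 1 if d[y, x] > 0 else -1))
            log_ref[y, x] = log_now[y, x]   # pixel resets after an event
    return events
```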

321 citations


Patent
01 Jun 2006
TL;DR: In this paper, a CMOS type semiconductor image sensor module is provided by stacking a first semiconductor chip, which has an image sensor wherein a plurality of pixels composed of a photoelectric conversion element and a transistor are arranged, and a second semiconductor chip, which has an A/D converter array.
Abstract: A CMOS type semiconductor image sensor module wherein a pixel aperture ratio is improved, chip use efficiency is improved and furthermore, simultaneous shutter operation by all the pixels is made possible, and a method for manufacturing such semiconductor image sensor module are provided. The semiconductor image sensor module is provided by stacking a first semiconductor chip, which has an image sensor wherein a plurality of pixels composed of a photoelectric conversion element and a transistor are arranged, and a second semiconductor chip, which has an A/D converter array. Preferably, the semiconductor image sensor module is provided by stacking a third semiconductor chip having a memory element array. Furthermore, the semiconductor image sensor module is provided by stacking the first semiconductor chip having the image sensor and a fourth semiconductor chip having an analog nonvolatile memory array.

311 citations


Journal ArticleDOI
TL;DR: In this paper, a pH image sensor is presented that is capable of measuring two-dimensional (2-D) distributions and dynamic images of various chemical reactions in real time.
Abstract: In this paper, a pH image sensor, which is capable of measuring two-dimensional (2-D) distributions and dynamic images of various chemical reactions in real time, is presented. The novel pH imaging sensor developed in this work uses a charge transfer technique, which makes it possible to form images of variations in chemical reactions. A prototype pH image sensor was successfully fabricated using complementary metal-oxide semiconductor (CMOS) circuit process technology and was used to measure two-dimensional distributions of various chemical reactions in moving images at 30 frames/s. It is expected that this sensor can be used for novel applications in fields such as medicine and biochemistry.

207 citations


Journal ArticleDOI
TL;DR: A new high-precision method for cone beam CT system calibration that uses multiple projection images acquired from rotating point-like objects and the angle information generated from the rotating gantry system is presented.
Abstract: Cone beam CT systems are being deployed in large numbers for small animal imaging, dental imaging, and other specialty applications. A new high-precision method for cone beam CT system calibration is presented in this paper. It uses multiple projection images acquired from rotating point-like objects (metal ball bearings, BBs), together with the angle information generated by the rotating gantry system. It is assumed that the whole system has a mechanically stable rotation center and that the detector does not have severe out-of-plane rotation (<2 degrees). Simple geometrical relationships between the orbital paths of individual BBs and five system parameters were derived. Computer simulations were employed to validate the accuracy of this method in the presence of noise. Equal or higher accuracy was achieved compared with previous methods. This method was implemented for the geometrical calibration of both a micro CT scanner and a breast CT scanner. The reconstructed tomographic images demonstrated that the proposed method is robust and easy to implement with high precision.
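As a hedged sketch of the first step: with the gantry angle known, each BB's projected detector coordinate traces an approximately sinusoidal orbit in the rotation angle, which can be fitted by linear least squares (variable names illustrative):

```python
import numpy as np

def fit_orbit(theta, u):
    """Fit a BB's projected coordinate u(theta) ~ c0 + c1*cos(theta) +
    c2*sin(theta) by linear least squares; the fitted center c0 and radius
    hypot(c1, c2) feed the geometric relations for the system parameters."""
    A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
    c, *_ = np.linalg.lstsq(A, u, rcond=None)
    return c
```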

Patent
07 Mar 2006
TL;DR: In this article, the authors present an image reader and a corresponding method for capturing a sharp distortion free image of a target, such as a one or two-dimensional bar code.
Abstract: The invention features an image reader and a corresponding method for capturing a sharp distortion free image of a target, such as a one or two-dimensional bar code. In one embodiment, the image reader comprises a two-dimensional CMOS based image sensor array, a timing module, an illumination module, and a control module. The time during which the target is illuminated is referred to as the illumination period. The capture of the image by the image sensor array is driven by the timing module that, in one embodiment, is able to simultaneously expose substantially all of the pixels in the array. The time during which the pixels are collectively activated to photo-convert incident light into charge defines the exposure period for the sensor array. In one embodiment, at least a portion of the exposure period occurs during the illumination period.

Patent
27 Nov 2006
TL;DR: In this paper, light from a fixed-focal-length lens is split into two beams by a beam splitter, to form respective images on a first image sensor and a second image sensor.
Abstract: A digital camera enables high-speed zooming operation without use of a zoom lens. Light originating from a fixed-focal-length lens is split into two beams by a beam splitter, to thus form respective images on a first image sensor and a second image sensor. The first image sensor and the second image sensor are equal to each other in terms of the number of pixels, but differ from each other in terms of pixel size. The first image sensor acquires a wide image, and the second image sensor acquires a telephoto image. An output is produced by means of switching between the first image sensor and the second image sensor, in response to zooming operation. When the image from the first image sensor is recorded, focus detection is performed by use of an image signal from the second image sensor, to thus effect automatic focusing.

Patent
07 Apr 2006
TL;DR: A stereoscopic imaging system incorporates a plurality of imaging devices or cameras to generate a high resolution, wide field of view image database from which images can be combined in real time to provide wide field-of-view or panoramic or omni-directional still or video images.
Abstract: A stereoscopic imaging system incorporates a plurality of imaging devices or cameras to generate a high resolution, wide field of view image database from which images can be combined in real time to provide wide field of view or panoramic or omni-directional still or video images.

Patent
25 Jul 2006
TL;DR: A digital imaging system and method using multiple cameras arranged and aligned to create a much larger virtual image sensor array, in which the non-contiguous sensor arrays are spatially arranged relative to their respective optical axes so that each sensor images a portion of a target region that is substantially different from the portions of the target region imaged by the other sensors.
Abstract: A digital imaging system and method using multiple cameras arranged and aligned to create a much larger virtual image sensor array. Each camera has a lens with an optical axis aligned parallel to the optical axes of the other camera lenses, and a digital image sensor array with one or more non-contiguous pixelated sensors. The non-contiguous sensor arrays are spatially arranged relative to their respective optical axes so that each sensor images a portion of a target region that is substantially different from other portions of the target region imaged by other sensors, and preferably overlaps adjacent portions imaged by the other sensors. In this manner, the portions imaged by one set of sensors completely fill the image gaps found between other portions imaged by other sets of sensors, so that a seamless mosaic image of the target region may be produced.

Patent
24 Feb 2006
TL;DR: A microcrystalline germanium image sensor array in which each pixel circuit comprises a charge collecting electrode for collecting electrical charges and readout means for reading out the collected charges.
Abstract: A microcrystalline germanium image sensor array. The array includes a number of pixel circuits fabricated in or on a substrate. Each pixel circuit comprises a charge collecting electrode for collecting electrical charges and a readout means for reading out the charges collected by the charge collecting electrode. A photodiode layer of charge generating material located above the pixel circuits converts electromagnetic radiation into electrical charges. This photodiode layer includes microcrystalline germanium and defines at least an n-layer, an i-layer, and a p-layer. The sensor array also includes a surface electrode in the form of a grid or thin transparent layer located above the layer of charge generating material. The sensor is especially useful for imaging in visible and near infrared spectral regions of the electromagnetic spectrum and provides imaging with starlight illumination.

Journal ArticleDOI
TL;DR: A 1/1.8-inch 6.4 MPixel 60 frames/s CMOS image sensor fabricated in a 0.18 µm single-poly triple-metal (1P3M) process is described, with a 38% fill factor and 12 ke-/lux·s sensitivity.
Abstract: A 1/1.8-inch 6.4 MPixel 60 frames/s CMOS image sensor fabricated in a 0.18 µm single-poly triple-metal (1P3M) process is described. A zigzag-shaped 1.75 T/pixel architecture and a 10-bit counter-type column-parallel ADC enable 2.5 × 2.5 µm² pixels. The resulting pixel has a 38% fill factor and 12 ke-/lux·s sensitivity. In addition, full-frame and 2×2 binning modes are interchangeable without an extra invalid frame.
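For illustration, 2×2 binning combines each 2×2 pixel neighborhood into one output sample; on-chip binning typically sums charge or voltage, while this digital sketch simply averages:

```python
import numpy as np

def bin2x2(pixels):
    """Average each 2x2 neighborhood into one value, trading spatial
    resolution for signal per output pixel."""
    H, W = pixels.shape
    trimmed = pixels[:H // 2 * 2, :W // 2 * 2]   # drop odd edge row/column
    return trimmed.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))
```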

Proceedings ArticleDOI
TL;DR: In this paper, the authors describe system simulations that predict the output of imaging sensors with the same die size but different pixel sizes, and present metrics that quantify the spatial resolution and light sensitivity of these different imaging sensors.
Abstract: When the size of a CMOS imaging sensor array is fixed, the only way to increase sampling density and spatial resolution is to reduce pixel size. But reducing pixel size reduces the light sensitivity. Hence, under these constraints, there is a tradeoff between spatial resolution and light sensitivity. Because this tradeoff involves the interaction of many different system components, we used a full system simulation to characterize performance. This paper describes system simulations that predict the output of imaging sensors with the same die size but different pixel sizes and presents metrics that quantify the spatial resolution and light sensitivity for these different imaging sensors.
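The core tradeoff can be seen in a shot-noise-limited back-of-the-envelope calculation (numbers arbitrary; real sensors add read noise, dark current, fill factor, and optics effects, which is why the paper relies on a full system simulation):

```python
import numpy as np

pitch_um = np.array([1.4, 2.0, 2.8, 4.0])    # candidate pixel pitches
photons = 1000.0 * (pitch_um / 2.8) ** 2     # signal scales with pixel area
snr_db = 20 * np.log10(np.sqrt(photons))     # shot-noise SNR = sqrt(N)
for p, s in zip(pitch_um, snr_db):
    print(f"{p:.1f} um pixel: ~{s:.1f} dB shot-noise SNR")
```

Halving the pitch quadruples the pixel count on a fixed die but costs about 6 dB of shot-noise SNR per pixel.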

BookDOI
01 Oct 2006
TL;DR: This handbook covers processing of information in the human visual system and the building blocks of machine vision: system design, lighting, optics, camera calibration, camera technology and computer interfaces, algorithms, and manufacturing applications.
Abstract: Contents:
1. Processing of Information in the Human Visual System (Prof. Dr. F. Schaeffel, University of Tübingen): Preface; Design and Structure of the Eye; Optical Aberrations and Consequences for Visual Performance; Chromatic Aberration; Neural Adaptation to Monochromatic Aberrations; Optimizing Retinal Processing with Limited Cell Numbers, Space and Energy; Adaptation to Different Light Levels; Rod and Cone Responses; Spiking and Coding; Temporal and Spatial Performance; ON/OFF Structure, Division of the Whole Illuminance Amplitude in Two Segments; Consequences of the Rod and Cone Diversity on Retinal Wiring; Motion Sensitivity in the Retina; Visual Information Processing in Higher Centers; Effects of Attention; Color Vision, Color Constancy, and Color Contrast; Depth Perception; Adaptation in the Visual System to Color, Spatial, and Temporal Contrast; Conclusions; References.
2. Introduction to Building a Machine Vision Inspection (Axel Telljohann, Consulting Team Machine Vision (CTMV)): Preface; Specifying a Machine Vision System; Designing a Machine Vision System; Costs; Words on Project Realization; Examples.
3. Lighting in Machine Vision (I. Jahr, Vision & Control GmbH): Introduction; Demands on Machine Vision Lighting; Light Used in Machine Vision; Interaction of Test Object and Light; Basic Rules and Laws of Light Distribution; Light Filters; Lighting Techniques and Their Use; Lighting Control; Lighting Perspectives for the Future; References.
4. Optical Systems in Machine Vision (Dr. Karl Lenhardt, Jos. Schneider Optische Werke GmbH): A Look on the Foundations of Geometrical Optics; Gaussian Optics; The Wave Nature of Light; Information Theoretical Treatment of Image Transfer and Storage; Criteria for Image Quality; Practical Aspects; References.
5. Camera Calibration (R. Godding, AICON 3D Systems GmbH): Introduction; Terminology; Physical Effects; Mathematical Calibration Model; Calibration and Orientation Techniques; Verification of Calibration Results; Applications; References.
6. Camera Systems in Machine Vision (Horst Mattfeldt, Allied Vision Technologies GmbH): Camera Technology; Sensor Technologies; CCD Image Artifacts; CMOS Image Sensor; Block Diagrams and Their Description; Digital Cameras; Controlling Image Capture; Configuration of the Camera; Camera Noise; Digital Interfaces; References.
7. Camera Computer Interfaces (Tony Iglesias, Anita Salmon, Johann Scholtz, Robert Hedegore, Julianna Borgendale, Brent Runnels, Nathan McKimpson, National Instruments): Overview; Analog Camera Buses; Parallel Digital Camera Buses; Standard PC Buses; Choosing a Camera Bus; Computer Buses; Choosing a Computer Bus; Driver Software; Features of a Machine Vision System.
8. Machine Vision Algorithms (Dr. Carsten Steger, MVTec Software GmbH): Fundamental Data Structures; Image Enhancement; Geometric Transformations; Image Segmentation; Feature Extraction; Morphology; Edge Extraction; Segmentation and Fitting of Geometric Primitives; Template Matching; Stereo Reconstruction; Optical Character Recognition; References.
9. Machine Vision in Manufacturing (Dr.-Ing. Peter Waszkewitz, Robert Bosch GmbH): Introduction; Application Categories; System Categories; Integration and Interfaces; Mechanical Interfaces; Electrical Interfaces; Information Interfaces; Temporal Interfaces; Human-Machine Interfaces; Industrial Case Studies; Constraints and Conditions; References.
Index.

Journal ArticleDOI
TL;DR: The experimental results confirm that the proposed method suppresses noise (CMOS/CCD image sensor noise model) while effectively interpolating the missing pixel components, demonstrating a significant improvement in image quality when compared to treating demosaicing and denoising problems independently.
Abstract: The output image of a digital camera is subject to a severe degradation due to noise in the image sensor. This paper proposes a novel technique to combine demosaicing and denoising procedures systematically into a single operation by exploiting their obvious similarities. We first design a filter as if we are optimally estimating a pixel value from a noisy single-color (sensor) image. With additional constraints, we show that the same filter coefficients are appropriate for color filter array interpolation (demosaicing) given noisy sensor data. The proposed technique can combine many existing denoising algorithms with the demosaicing operation. In this paper, a total least squares denoising method is used to demonstrate the concept. The algorithm is tested on color images with pseudorandom noise and on raw sensor data from a real CMOS digital camera that we calibrated. The experimental results confirm that the proposed method suppresses noise (CMOS/CCD image sensor noise model) while effectively interpolating the missing pixel components, demonstrating a significant improvement in image quality when compared to treating demosaicing and denoising problems independently.
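For context, a plain bilinear interpolation of the green channel from an RGGB Bayer mosaic is sketched below; this is the kind of independent demosaicing baseline the paper improves on by folding denoising into the interpolation filter (it is not the authors' method):

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_green(bayer):
    """Interpolate the green channel of an RGGB mosaic: keep measured G
    values and average the four neighbors at R/B sites."""
    H, W = bayer.shape
    gmask = np.zeros((H, W), dtype=bool)
    gmask[0::2, 1::2] = True   # G sites on R rows
    gmask[1::2, 0::2] = True   # G sites on B rows
    g = np.where(gmask, bayer, 0.0)
    k = np.array([[0.0, 0.25, 0.0], [0.25, 1.0, 0.25], [0.0, 0.25, 0.0]])
    weight = convolve(gmask.astype(float), k)   # normalizes border effects
    return convolve(g, k) / weight
```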

Proceedings ArticleDOI
18 Sep 2006
TL;DR: A progressive 1/1.8-inch 1920×1440 CMOS image sensor with a column-inline dual CDS architecture, fabricated in a 0.18 µm CMOS process, implements digital double sampling with analog CDS on a column-parallel ADC.
Abstract: A progressive 1/1.8-inch 1920×1440 CMOS image sensor with a column-inline dual CDS architecture uses a 0.18 µm CMOS process. This sensor implements digital double sampling with analog CDS on a column-parallel ADC. Random noise is 5.2 e- rms and the DR is 68 dB at 180 frames/s (6.0 Gb/s). FPN is <0.5 e- rms without the correction circuit.
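A numerical sketch of why double sampling helps: the reset-level sample and the signal sample share the same offset noise, so subtracting them cancels it while the signal survives (numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
signal_e = 200.0                              # photo-generated electrons
offset = rng.normal(0.0, 30.0, 100_000)       # offset noise shared by both samples
read = rng.normal(0.0, 2.0, (2, 100_000))     # uncorrelated read noise per sample

sample_reset = offset + read[0]               # first sample: reset level
sample_signal = offset + signal_e + read[1]   # second sample: after charge transfer
cds = sample_signal - sample_reset            # offset cancels, signal remains

print(cds.mean())   # ~200 e-
print(cds.std())    # ~2.8 e- (sqrt(2) * read noise); the 30 e- offset is gone
```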

Proceedings ArticleDOI
01 Sep 2006
TL;DR: An image sensor comprising an array of apertures, each with its own local integrated optics and pixel array, is presented; depth resolution is shown to continue improving as pixels scale below the diffraction limit.
Abstract: An image sensor comprising an array of apertures, each with its own local integrated optics and pixel array, is presented. A lens focuses the image above the sensor, creating overlapping fields of view between apertures. Multiple perspectives of the image in the focal plane facilitate the synthesis of a 3D image at a higher spatial resolution than the aperture count. Depth resolution is shown to continue to improve with pixel scaling below the diffraction limit. A preliminary circuit implementation is described.

Patent
07 Jun 2006
TL;DR: In this paper, the authors present a small solid-state image sensor with a high refractive index layer formed in the apertures, where each aperture is narrower than the maximum wavelength of the light passing through it.
Abstract: An object of the present invention is to provide a small solid-state image sensor which realizes a significant improvement in sensitivity. The solid-state image sensor of the present invention includes a semiconductor substrate in which photoelectric conversion units are formed, a light-blocking film which is formed above the semiconductor substrate and has apertures positioned above the respective photoelectric conversion units, and a high refractive index layer formed in the apertures. Here, each aperture has a smaller width than the maximum vacuum-equivalent wavelength of the light entering the photoelectric conversion unit through the apertures, and the high refractive index layer is made of a material whose refractive index allows light of that maximum wavelength to be transmitted through the aperture.

Patent
02 Oct 2006
TL;DR: In this paper, the authors propose an image restoration procedure comprising determining sample point pixels from a pixel array based upon the distance of an object being imaged to the pixel array, and reading the intensities of the sample point pixels into a memory.
Abstract: Various exemplary embodiments of the invention provide an extended depth of field. One embodiment provides an image restoration procedure, comprising determining sample point pixels from a pixel array based upon a distance of an object being imaged to the pixel array, and reading intensities of the sample point pixels into a memory. Another embodiment provides an image capture procedure comprising capturing light rays on a pixel array of an imaging sensor, wherein specific sampling point pixels are selected to be evaluated based on the spread of an image spot across the plurality of pixels of the pixel array.

Journal ArticleDOI
TL;DR: It is demonstrated that a high rate of accuracy can be achieved in source camera identification by noting the intrinsic lens radial distortion of each camera.
Abstract: Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of source camera identification with five cameras. The results show that this is a viable approach with high accuracy. Additionally, we present results on how the error rates may change with images captured using various optical zoom levels, as zooming is commonly available in digital cameras.
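A toy sketch of the classification step, assuming per-image coefficients (k1, k2) of the radial distortion model r_d = r(1 + k1 r^2 + k2 r^4) have already been estimated; the values and the RBF kernel choice below are made up for illustration:

```python
import numpy as np
from sklearn.svm import SVC

# One (k1, k2) feature row per training image; labels identify the camera.
X = np.array([[ 0.12, -0.03], [ 0.11, -0.02],    # images from camera 0
              [-0.05,  0.01], [-0.06,  0.02]])   # images from camera 1
y = np.array([0, 0, 1, 1])

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.115, -0.025]]))   # -> [0], i.e., camera 0
```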

Patent
06 Mar 2006
TL;DR: In this paper, an image sensor comprising a plurality of arrays of pixel cells at a surface of a substrate, wherein each pixel cell comprises a photo-conversion device, is configured to commonly capture an image.
Abstract: The invention, in various exemplary embodiments, incorporates multiple image sensor arrays, with separate respective color filters, on the same imager die. One exemplary embodiment is an image sensor comprising a plurality of arrays of pixel cells at a surface of a substrate, wherein each pixel cell comprises a photo-conversion device. The arrays are configured to commonly capture an image. An image processor circuit is connected to said plurality of arrays and configured to combine the captured images, captured by the plurality of arrays, and output a color image.

Journal ArticleDOI
TL;DR: A method is described for accurately calibrating cameras, including radial lens distortion, using known points such as those measured from a calibration fixture; use of the resulting camera model, including partial derivatives for propagating both from object space to image space and vice versa, is also described.
Abstract: A method is described for accurately calibrating cameras including radial lens distortion, by using known points such as those measured from a calibration fixture. Both the intrinsic and extrinsic parameters are calibrated in a single least-squares adjustment, but provision is made for including old values of the intrinsic parameters in the adjustment. The distortion terms are relative to the optical axis, which is included in the model so that it does not have to be orthogonal to the image sensor plane. These distortion terms represent corrections to the basic lens model, which is a generalization that includes the perspective projection and the ideal fish-eye lens as special cases. The position of the entrance pupil point as a function of off-axis angle also is included in the model. (The complete camera model including all of these effects often is called CAHVORE.) A way of adding decentering distortion also is described. A priori standard deviations can be used to apply weight to given initial approximations (which can be zero) for the distortion terms, for the difference between the optical axis and the perpendicular to the sensor plane, and for the terms representing movement of the entrance pupil, so that the solution for these is well determined when there is insufficient information in the calibration data. For the other parameters, initial approximations needed for the nonlinear least-squares adjustment are obtained in a simple manner from the calibration data and other known information. (Weight can be given to these also, if desired.) Outliers among the calibration points that disagree excessively with the other data are removed by means of automatic editing based on analysis of the residuals. The use of the camera model also is described, including partial derivatives for propagating both from object space to image space and vice versa. These methods were used to calibrate the cameras on the Mars Exploration Rovers.
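For reference, the undistorted core of this family of models (the CAHV projection) maps a world point P to image coordinates via dot products with the calibrated vectors; the O, R, and E components then add the axis-relative distortion and entrance-pupil terms described above (a sketch, not the paper's full model):

```latex
% CAHV projection: C = camera center, A = optical-axis unit vector,
% H and V encode focal length, pixel scale, and principal point.
\[
  x = \frac{(\mathbf{P}-\mathbf{C})\cdot\mathbf{H}}{(\mathbf{P}-\mathbf{C})\cdot\mathbf{A}},
  \qquad
  y = \frac{(\mathbf{P}-\mathbf{C})\cdot\mathbf{V}}{(\mathbf{P}-\mathbf{C})\cdot\mathbf{A}}
\]
```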


Patent
03 Jan 2006
TL;DR: An imaging apparatus using a solid-state image sensor that reads out the signal of each pixel by an XY address method includes a mechanical shutter configured to block light incident on the light receiving surface of the image sensor.
Abstract: An imaging apparatus using a solid-state image sensor that reads out a signal of each pixel by an XY address method to capture an image includes a mechanical shutter configured to block light incident on a light receiving surface of the solid-state image sensor; and control means for simultaneously resetting the pixel signals for all rows in the solid-state image sensor to start exposure to the solid-state image sensor, closing the mechanical shutter after a predetermined exposure period is elapsed, and sequentially reading out the pixel signals for every row of the solid-state image sensor with the mechanical shutter being closed.

Proceedings ArticleDOI
19 Apr 2006
TL;DR: A fully distributed approach for camera network calibration that scales easily to very large camera networks, requires minimal overlap of the cameras' fields of view, and makes very few assumptions about the motion of the object.
Abstract: Camera networks are perhaps the most common type of sensor network and are deployed in a variety of real-world applications including surveillance, intelligent environments and scientific remote monitoring. A key problem in deploying a network of cameras is calibration, i.e., determining the location and orientation of each sensor so that observations in an image can be mapped to locations in the real world. This paper proposes a fully distributed approach for camera network calibration. The cameras collaborate to track an object that moves through the environment and reason probabilistically about which camera poses are consistent with the observed images. This reasoning employs sophisticated techniques for handling the difficult nonlinearities imposed by projective transformations, as well as the dense correlations that arise between distant cameras. Our method requires minimal overlap of the cameras' fields of view and makes very few assumptions about the motion of the object. In contrast to existing approaches, which are centralized, our distributed algorithm scales easily to very large camera networks. We evaluate the system on a real camera network with 25 nodes as well as simulated camera networks of up to 50 cameras and demonstrate that our approach performs well even when communication is lossy.