
Showing papers on "Three-CCD camera" published in 2006


Journal ArticleDOI
TL;DR: It is demonstrated that a high rate of accuracy in source camera identification can be achieved by exploiting the intrinsic lens radial distortion of each camera.
Abstract: Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of source camera identification with five cameras. The results show that this is a viable approach with high accuracy. We also present results on how the error rates may change with images captured using various optical zoom levels, as zooming is commonly available in digital cameras.
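The pipeline described in this abstract (fit a distortion model per image, then classify) can be sketched as follows. This is a simplified illustration, not the paper's method: a single quadratic distortion coefficient k1 and a nearest-fingerprint rule stand in for the paper's aberration measurements and SVM classifier.

```python
# Assumed one-parameter radial distortion model: r_d = r * (1 + k1*r^2),
# with r the ideal (undistorted) radius and r_d the observed radius.

def fit_k1(pairs):
    """Least-squares estimate of k1 from (ideal_radius, distorted_radius) pairs.

    From r_d - r = k1 * r^3, the closed-form solution is
    k1 = sum((r_d - r) * r^3) / sum(r^6).
    """
    num = sum((rd - r) * r**3 for r, rd in pairs)
    den = sum(r**6 for r, _ in pairs)
    return num / den

def identify(test_pairs, fingerprints):
    """Return the camera whose stored k1 is closest to the test image's k1."""
    k1 = fit_k1(test_pairs)
    return min(fingerprints, key=lambda cam: abs(fingerprints[cam] - k1))

# Toy fingerprints (hypothetical k1 values for three cameras):
fingerprints = {"camA": 0.05, "camB": -0.02, "camC": 0.11}

# Synthetic measurements from a camera with k1 = 0.049:
pairs = [(r, r * (1 + 0.049 * r**2)) for r in (0.2, 0.4, 0.6, 0.8, 1.0)]
print(identify(pairs, fingerprints))  # camA
```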

134 citations


Patent
09 Jan 2006
TL;DR: In this article, multi-spectral, high-resolution images are generated using a single camera, which significantly improves image discrimination as compared to, for example, the Bayer color filter array pattern.
Abstract: An imaging system has a single focal plane array that does not require the precise alignment of multiple cameras relative to one another. It incorporates a multi-band, band pass filter that includes filter elements corresponding to pixel regions of a detector within a camera. The imaging system may further incorporate a detector that vertically discriminates among radiation in different spectral bands incident on an image plane of the detector. In this manner, spectral content may be determined in each spatial region without the need for beam splitting or multiple cameras. The filter itself may further comprise different filter elements, for example, filter elements A and B arranged in a checkerboard pattern, where filter element A passes different spectral bands than filter element B. In this manner, multi-spectral, high resolution images may be generated using a single camera that significantly improves upon image discrimination as compared to, for example, the Bayer color filter array pattern. The single camera implementation is well suited for incorporation into marine, land and air vehicles.

108 citations


Patent
30 Jun 2006
TL;DR: In this article, a mobile imaging device having memory, a video camera system, and a still camera system is shown to process the motion detection output of the video camera to form motion correction input for the still camera.
Abstract: Disclosed are devices including a mobile imaging device having memory, a video camera system and a still camera system. The video camera system can be configured for video imaging, and configured to generate motion detection output. An application can be stored in memory of the device and configured to process the motion detection output of the video camera system to form motion correction input for the still camera system. The still camera system is configured for still photography imaging and configured to process the motion correction input. Also disclosed are methods of a mobile imaging device including a still camera system and a video camera system. A method includes processing the sequential image data of the video camera system to generate motion detection output, processing the motion detection output to form motion correction input and processing still image correction by the still camera system based on the motion correction input.

81 citations


Patent
01 Aug 2006
TL;DR: In this paper, a dual-sensor video camera with a color filter array (CFA) sensor and a beam splitter is described, and an output image is produced based on image information from the two sensors.
Abstract: Various embodiments of a dual-sensor video camera are disclosed. The dual-sensor video camera includes a color filter array (CFA) sensor, which has a low-pass filter. The dual-sensor video camera also includes a panchromatic sensor. A beam splitter directs an incoming light beam to both sensors. An output image is produced based on image information from the two sensors. The output image includes luminance information based on the image information from the panchromatic sensor and chrominance information based on the image information from the CFA sensor.

61 citations


Journal ArticleDOI
14 Feb 2006
TL;DR: The problem of establishing a computational model for visual attention using cooperation between two cameras is addressed through the understanding and modeling of the geometric and kinematic coupling between a static camera and an active camera.
Abstract: In this paper we address the problem of establishing a computational model for visual attention using cooperation between two cameras. More specifically we wish to maintain a visual event within the field of view of a rotating and zooming camera through the understanding and modeling of the geometric and kinematic coupling between a static camera and an active camera. The static camera has a wide field of view thus allowing panoramic surveillance at low resolution. High-resolution details may be captured by a second camera, provided that it looks in the right direction. We derive an algebraic formulation for the coupling between the two cameras and we specify the practical conditions yielding a unique solution. We describe a method for separating a foreground event (such as a moving object) from its background while the camera rotates. A set of outdoor experiments shows the two-camera system in operation.
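A minimal sketch of the aiming step under assumed conventions (z forward, x right, y up in the active camera's frame): converting a target viewing direction into pan/tilt angles. The paper's full algebraic coupling between the static and active cameras is not reproduced here.

```python
import math

def pan_tilt(direction):
    """Pan/tilt angles (radians) that aim the active camera along `direction`.

    `direction` is (x, y, z) in the active camera's frame: z forward,
    x right, y up (assumed convention).
    """
    x, y, z = direction
    pan = math.atan2(x, z)                  # rotation about the vertical axis
    tilt = math.atan2(y, math.hypot(x, z))  # elevation above the horizontal
    return pan, tilt

# A target straight ahead needs no rotation:
print(pan_tilt((0.0, 0.0, 1.0)))  # (0.0, 0.0)
# A target 45 degrees to the right:
pan, tilt = pan_tilt((1.0, 0.0, 1.0))
print(round(math.degrees(pan), 1))  # 45.0
```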

59 citations


Patent
03 Feb 2006
TL;DR: In this paper, a method of analyzing a digital video clip captured by a camera to determine candidate frames for subsequent key frame selection is proposed, which includes providing a camera motion sensor in the camera so that information is provided during image capture regarding camera motion.
Abstract: A method of analyzing a digital video clip captured by a camera to determine candidate frames for subsequent key frame selection, including: providing a camera motion sensor in the camera so that information regarding camera motion, including translation of the scene or camera, or scaling of the scene, is provided during image capture; forming a plurality of video segments based on a global motion estimate and labeling each segment in accordance with a predetermined series of camera motion classes; and extracting key frame candidates from the labeled segments and computing a confidence score for each candidate by using rules corresponding to each camera motion class and a rule corresponding to object motion.

43 citations


Proceedings ArticleDOI
09 Jul 2006
TL;DR: A fundamental element of stereoscopic image production is to geometrically analyze the conversion from real space to stereoscopic images by binocular parallax under various shooting and viewing conditions, particularly on the setting of the optical axes of 3D cameras.
Abstract: A fundamental element of stereoscopic image production is to geometrically analyze the conversion from real space to stereoscopic images by binocular parallax under various shooting and viewing conditions. This paper reports on this analysis, particularly on the setting of the optical axes of 3D cameras, which has received little attention in the past. The parallel camera configuration maintains linearity during the conversion from real space to stereoscopic images, whereas the toed-in camera configuration often cannot.
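The parallel configuration's geometry can be illustrated with the textbook parallax relation d = f·b/Z. The symbols (f focal length, b baseline, Z depth) and the numeric values below are illustrative assumptions, not taken from the paper, and this does not reproduce the paper's full analysis.

```python
def screen_parallax(f, b, Z):
    """Screen parallax for a point at depth Z, parallel-axis stereo rig.

    f: focal length, b: baseline (inter-camera distance), Z: point depth,
    all in the same length unit. Standard relation for parallel optical axes.
    """
    return f * b / Z

# Parallax falls off with depth; doubling Z halves the parallax:
for Z in (1.0, 2.0, 4.0):
    print(screen_parallax(f=0.05, b=0.065, Z=Z))
```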

41 citations


Patent
Lonnel J. Swarr1, Sezai Sablak1
19 Oct 2006
TL;DR: In this paper, the authors propose a dome camera assembly that includes a nonvolatile storage mechanism holding a stored video image and a corresponding stored camera position; an adjustment mechanism adjusts the current camera position in accordance with a determined offset amount to facilitate repeatability.
Abstract: Embodiments of the invention relate to a dome camera assembly that is able to ensure repeatability. The dome camera assembly includes a nonvolatile storage mechanism including a stored video image and a corresponding stored camera position. The assembly may further include an image capturing device for capturing a current video image after the camera moves to the stored camera position. An image processing component compares the current video image with the stored video image and determines an offset amount between the current video image and the stored video image. An adjustment mechanism adjusts a current camera position in accordance with the determined offset amount to facilitate repeatability.

32 citations


Book ChapterDOI
01 Jan 2006
TL;DR: This work considers calibration and structure-from-motion tasks for a previously introduced, highly general imaging model, where cameras are modeled as possibly unconstrained sets of projection rays, and introduces a natural hierarchy of camera models.
Abstract: We consider calibration and structure-from-motion tasks for a previously introduced, highly general imaging model, where cameras are modeled as possibly unconstrained sets of projection rays. This makes it possible to describe most existing camera types (at least those operating in the visible domain), including pinhole cameras, sensors with radial or more general distortions, and especially panoramic cameras (central or non-central). Generic algorithms for calibration and structure-from-motion tasks (absolute and relative orientation, 3D point triangulation) are outlined. The foundation for a multi-view geometry of non-central cameras is given, leading to the formulation of multi-view matching tensors, analogous to the essential matrix, trifocal and quadrifocal tensors of perspective cameras. Besides this, we also introduce a natural hierarchy of camera models: the most general model has unconstrained projection rays whereas the most constrained model dealt with here is the central one, where all rays pass through a single point.

31 citations


Patent
14 Apr 2006
TL;DR: In this article, a camera may be controlled by one or more motors in a base of the camera and the motors in the base may reduce the size of the outer case and add stability.
Abstract: In various embodiments, a camera may be controlled by one or more motors in a base of the camera. Cables and other components may be used to manipulate the camera lens through the side arms of the camera. Putting the motors in the base may reduce the size of the outer case of the camera and add stability. A pan motor may pan the camera while a tilt motor may move a tilt pulley relative to a lens portion of the camera (which may or may not tilt the camera depending on the panning motion of the camera). In some embodiments, images from the camera may be converted into a serialized stream and transported over a cable from the lens through a center shaft of the camera.

26 citations


Patent
28 Dec 2006
TL;DR: In this article, a system for capturing, encoding and transmitting continuous video from a camera to a display monitor via a network includes a user friendly interface wherein a map is provided at a display screen for illustrating the location of the cameras and indicating the direction of the camera angle.
Abstract: A system for capturing, encoding and transmitting continuous video from a camera to a display monitor via a network includes a user friendly interface wherein a map is provided at a display screen for illustrating the location of the cameras and indicating the direction of the camera angle. The monitor includes a display area for selectively displaying selected cameras and for controlling the selection, display and direction of the cameras from a remote location. The display screen can be configured to display one or any combination of cameras. The cameras can be selected by manual selection, pre-programmed sequencing or by event detection with the selected camera automatically displayed on the display area. Secondary monitors may be incorporated to enhance the display features. The secondary monitors are controlled by the control panel provided on the primary monitor.

Patent
13 Nov 2006
TL;DR: In this article, a discover-locate process is used to discover, from the estimated relative distances, unknown cameras in the vicinity of at least three cameras at known locations. Absolute locations of the discovered unknown cameras can then be calculated using a geometric calculation.
Abstract: A method for automatically estimating the spatial positions between cameras in a camera network utilizes unique identifying signals, such as RFID signals, transmitting between nearby cameras to estimate the relative distances or positions between cameras from received signal strength (RSS), time of arrival (TOA), or time difference of arrival (TDOA) measurements to thereby determine the neighboring relationship among the cameras. A discover-locate process can be used to discover, from the estimated relative distances, unknown cameras in the vicinity of at least three cameras at known locations. Absolute locations of the discovered unknown cameras can then be calculated using a geometric calculation. The discover-locate process can be cascaded throughout the network to discover and locate all unknown cameras automatically using previously discovered and located cameras. Such methods can be implemented in systems having cameras with transceivers integrated therein and a controller operably linked to the cameras.
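The geometric calculation for locating an unknown camera from three cameras at known positions can be sketched as 2D trilateration. The RSS/TOA/TDOA distance estimation itself is not modeled here, and exact (noise-free) distances are assumed.

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) with |p - pi| = ri, i = 1..3 (2D, exact distances).

    Subtracting the three circle equations pairwise eliminates the quadratic
    terms, leaving two linear equations a*x + b*y = c solved by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1  # zero iff the three cameras are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Unknown camera at (3, 4); known cameras at (0,0), (10,0), (0,10):
x, y = trilaterate((0, 0), 5.0,
                   (10, 0), math.hypot(7, 4),
                   (0, 10), math.hypot(3, 6))
print(round(x, 6), round(y, 6))  # 3.0 4.0
```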

Journal ArticleDOI
TL;DR: The moment camera gathers significantly more data than is needed for a single image, and this data, coupled with computational photography and user-assisted algorithms, provides powerful new paradigms for image making.
Abstract: Future cameras will be used to "capture the moment," not just the instant when the shutter opens. The moment camera gathers significantly more data than is needed for a single image. This data, coupled with computational photography and user-assisted algorithms, provides powerful new paradigms for image making. The camera constantly records time slices of imagery. Although the input to the moment camera creates a spacetime slab, the camera's output typically consists of a single image. Thus, the processing primarily selects the color for each output pixel given the set of input images in the spacetime slab.

Patent
06 Dec 2006
TL;DR: In this paper, a system and method for automatic camera health monitoring, such as for a camera in a video surveillance system, is provided, which provides substantially continuous monitoring and detection of camera malfunction, due to either external or internal conditions.
Abstract: A system and method are provided for automatic camera health monitoring, such as for a camera in a video surveillance system. The system preferably provides substantially continuous monitoring and detection of camera malfunction, due to either external or internal conditions. Camera malfunction is detected when a computed camera health measurement exceeds a malfunction threshold. The camera health measurement is computed based on a comparison of a current camera health record to a plurality of stored camera health records obtained in a learning mode, which characterize known states of normal camera operation. Aside from monitoring camera health, methods of the present invention can also be used to detect an object being added to or removed from a scene, or to detect a change in lighting in a scene, possibly caused by a defective light fixture.
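The detection rule described above can be sketched as a nearest-record comparison. The feature vectors, the Euclidean distance, and the threshold value below are illustrative assumptions, not the patent's actual health records.

```python
# Toy health records: feature vectors (e.g. brightness, contrast, edge energy).

def health_measurement(current, learned):
    """Distance from `current` to the nearest record learned in normal operation."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(dist(current, rec) for rec in learned)

def is_malfunctioning(current, learned, threshold):
    """Flag a malfunction when the health measurement exceeds the threshold."""
    return health_measurement(current, learned) > threshold

learned = [(0.5, 0.3, 0.8), (0.6, 0.25, 0.75), (0.55, 0.35, 0.85)]
print(is_malfunctioning((0.55, 0.3, 0.8), learned, threshold=0.2))  # False
print(is_malfunctioning((0.05, 0.9, 0.1), learned, threshold=0.2))  # True
```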

Patent
Naoto Yumiki1
05 Dec 2006
TL;DR: In this article, an imaging device that is able to display an image with an aperture value at the time when actually photographing on a liquid crystal display monitor for displaying an image, in a single-lens reflex digital camera is provided.
Abstract: An imaging device is provided that is able to display an image on a liquid crystal display monitor with the aperture value that will be used when actually photographing, in a single-lens reflex digital camera. In a depth-of-field preview mode, a quick return mirror (4) is retracted from the optical path and light is incident on an imaging sensor (11). The imaging optical system is then controlled to be in the actual aperture state, and by displaying the image data obtained at the imaging sensor (11) on the liquid crystal display monitor (16), a plurality of images with the aperture values to be used when actually photographing, in other words, actual-aperture live view images, can be displayed simultaneously on the liquid crystal display monitor (16). By doing so, it is possible to easily compare images with different depths of field, and it is possible to simultaneously photograph images with different depths of field.

Proceedings ArticleDOI
TL;DR: The MMT all-sky camera is a low-cost, wide-angle camera system that takes images of the sky every 10 seconds, day and night and utilizes an auto-iris fish-eye lens to allow safe operation under all lighting conditions, even direct sunlight.
Abstract: The MMT all-sky camera is a low-cost, wide-angle camera system that takes images of the sky every 10 seconds, day and night. It is based on an Adirondack Video Astronomy StellaCam II video camera and utilizes an auto-iris fish-eye lens to allow safe operation under all lighting conditions, even direct sunlight. This, combined with the anti-blooming characteristics of the StellaCam's detector, allows useful images to be obtained during sunny days as well as brightly moonlit nights. Under dark skies the system can detect stars as faint as 6th magnitude as well as very thin cirrus and low surface brightness zodiacal features such as gegenschein. The total hardware cost of the system was less than $3500 including computer and framegrabber card, a fraction of the cost of comparable systems utilizing traditional CCD cameras.

Patent
21 Jun 2006
TL;DR: In this article, a video projection calibration technique includes a first camera to view the video display screen from the projector side, and a second camera to view the screen from the viewing side.
Abstract: A video projection calibration technique includes a first camera to view the video display screen from the projector side, and a second camera to view the video display screen from the viewing side. The first camera may have a lower resolution than the second camera. The combination of the two cameras permits the transfer functions of the projection system and the first camera to be characterized and reduced to a warping transform that may be stored by the control system. The presence of the warping transform permits the lower-resolution first camera to perform image realignment after the video projection system is in use by an end user.

Patent
03 May 2006
TL;DR: In this paper, a video camera system may be configured to enter and exit an adaptive exposure modification mode upon detection of the presence of a bright object in the field of view of the video camera.
Abstract: Methods and systems for automatically detecting the presence or absence of a bright object in the field of view of a video camera and/or for adaptively modifying video camera exposure level. A video camera system may be configured to enter and exit an adaptive exposure modification mode upon detection of the presence of a bright object in the field of view of a video camera.

Patent
07 Jul 2006
TL;DR: In this article, a vehicle circumference video image including a bird's-eye video image showing an entire circumference of a vehicle by combining viewpoint conversion images of a plurality of camera image photographed by the plurality of cameras photographing the circumference of the vehicle from different directions is presented.
Abstract: PROBLEM TO BE SOLVED: To intuitively understand which video image is being displayed among a plurality of video images photographed by a plurality of cameras. SOLUTION: When displaying a vehicle circumference video image including a bird's-eye video image 11 showing an entire circumference of a vehicle by combining viewpoint conversion images of a plurality of camera image photographed by the plurality of cameras photographing the circumference of the vehicle from different directions respectively and a normal view video image 12 of any camera video image selected from video images in which a plurality of viewpoint conversion are not processed, in order to correlate the normal view video image 12 of selected camera image and the viewpoint conversion image corresponding to the selected camera video image in the bird's-eye video image 11, an enhanced display 26 for correlation is displayed within the bird's-eye video image 11 and an enhanced display 27 for correlation is displayed within the normal video image 12. COPYRIGHT: (C)2008,JPO&INPIT

Proceedings ArticleDOI
14 Jun 2006
TL;DR: A real-time omni-directional camera calibration method, and a shape-from-silhouette technique for 3D modeling and a micro-facet billboarding technique for rendering are proposed.
Abstract: We propose a new free-view video system that generates immersive 3D video from arbitrary points of view, using outer cameras and an inner omni-directional camera. The system reconstructs 3D models from the captured video streams and generates realistic free-view video of those objects from a virtual camera. In this paper, we propose a real-time omni-directional camera calibration method, and describe a shape-from-silhouette technique for 3D modeling and a micro-facet billboarding technique for rendering. Owing to the movability and high resolution of the inner omni-directional camera, the proposed system reconstructs more elaborate 3D models and generates natural and vivid video with an immersive sensation.

Proceedings ArticleDOI
14 Jun 2006
TL;DR: An approximate model for coherent general cameras is described, which projects efficiently with user-chosen accuracy; it is efficient because the number of simple cameras is orders of magnitude lower than the camera's original number of rays, and because each simple camera offers closed-form projection.
Abstract: Camera models are essential infrastructure in computer vision, computer graphics, and visualization. The most frequently used camera models are based on the single-viewpoint constraint. Removing this constraint brings the advantage of improved flexibility in camera design. However, prior camera models that eliminate the single-viewpoint constraint are inefficient. We describe an approximate model for coherent general cameras, which projects efficiently with user-chosen accuracy. The rays of the general camera are partitioned into simple cameras that approximate the camera locally. The simple cameras are modeled with k-ray cameras, a novel class of non-pinhole cameras. The rays of a k-ray camera interpolate between k construction rays. We analyze several variants of k-ray cameras. The resulting compound camera model is efficient because the number of simple cameras is orders of magnitude lower than the camera's original number of rays, and because each simple camera offers closed-form projection.

Proceedings ArticleDOI
20 Aug 2006
TL;DR: This work uses a camera that has two imaging sensors with different spatio-temporal resolutions and generates high-resolution video sequences from the two captured image sequences.
Abstract: In imaging devices such as CCDs, there is a trade-off between the image resolution and frame rate because of the limitation of data transfer speed. This creates difficulties in producing a high-quality imaging system using only one sensor. Therefore, we use a camera that has two imaging sensors with different spatio-temporal resolutions. The camera captures a high-resolution image sequence at a low frame rate and a low-resolution image sequence at a video rate. We propose using an image-morphing method to generate high-resolution video sequences from these two image sequences.

Patent
29 Nov 2006
TL;DR: In this paper, a system and method for automatically determining the camera field of view in a camera network is presented, where a plurality of spatially separated cameras and direction sensors, carried on respective cameras, are configured to measure the angle directions of the field of views of the cameras.
Abstract: A system and method for automatically determining the camera field of view in a camera network The system has a plurality of spatially separated cameras and direction sensors, carried on respective cameras, configured to measure the angle directions of the field of views of the cameras Elevation sensors are operably coupled to respective cameras to measure the elevation angles of thereof A controller is configured to process direction and elevation measurement signals transmitted from the cameras to automatically determine the cameras' fields of views One or more cameras having a field of view containing or nearby an event of interest can be selected from the determined field of views and indicated to a user via a graphical user interface Selected cameras which are rotatably mounted can be rotated if need be to automatically bring the event of interest into the field of views of the selected cameras

Patent
18 Oct 2006
TL;DR: An omni-directional stereo camera and a control method thereof are described in this paper; a supporting member is installed within the shooting range between the omni-directional cameras and includes compensation patterns formed on its surfaces.
Abstract: An omni-directional stereo camera and a control method thereof. The omni-directional stereo camera includes two or more omni-directional cameras, and a supporting member installed within a shooting range between the omni-directional cameras to interconnect them, with compensation patterns formed on its surfaces.

Proceedings ArticleDOI
04 Jan 2006
TL;DR: A multiple active (pan-tilt) camera assignment scheme is presented that assigns each camera to a specific part of the moving object so as to allow the best visibility of the whole object.
Abstract: We are designing a self-controlling active camera system for 3D video of a moving object (mainly the human body). We built our system from cameras with long-focal-length lenses to obtain high-resolution input images. However, such cameras can capture only partial views of the object. We present, in this paper, a multiple active (pan-tilt) camera assignment scheme. The goal is to assign each camera to a specific part of the moving object so as to allow the best visibility of the whole object. For each camera, we evaluate the visibility of the different regions of the object, corresponding to different camera orientations, with respect to the field of view of the camera in question. Thereafter, we assign each camera to one orientation in such a way as to maximize the visibility of the whole object.

Proceedings ArticleDOI
Jun-Sik Kim1, In So Kweon1
20 Aug 2006
TL;DR: Two corresponding imaged rectangles whose aspect ratios are unknown are sufficient to calibrate a camera; by warping the images properly, it is shown that the information from the imaged rectangles can be transformed into the form of camera constraints.
Abstract: In this paper, we propose new camera calibration methods assuming a static camera. Two corresponding imaged rectangles whose aspect ratios are unknown are sufficient to calibrate a camera. By warping the images properly, we show that the information from the imaged rectangles can be transformed into the form of camera constraints. Based on these results, we propose two methods, one for three or more images and the other for only two images. The proposed methods are verified with synthetic and real images, and the results are comparable with fewer assumptions about cameras and scenes.

Proceedings Article
01 Sep 2006
TL;DR: A high-speed camera based on a CMOS sensor with embedded processing is proposed; the processing is implemented in an FPGA inside the camera and is dedicated to feature extraction such as edge detection, marker extraction, image analysis, wavelet analysis, and object tracking.
Abstract: High-speed video cameras are powerful tools for investigating, for instance, biomechanics or the movements of mechanical parts in manufacturing processes. In the past years, the use of CMOS sensors instead of CCDs has made possible the development of high-speed video cameras offering digital outputs, readout flexibility, and lower manufacturing costs. In this paper, we propose a high-speed camera based on a CMOS sensor with embedded processing. Two types of algorithms have been implemented. Compression algorithms represent the first class and allow images to be transferred over a serial output link. The second type is dedicated to feature extraction such as edge detection, marker extraction, image analysis, wavelet analysis, and object tracking. These image processing algorithms have been implemented in an FPGA embedded inside the camera. This FPGA technology allows us to process 500 images per second in real time at a resolution of 1280 (H) × 1024 (V).

Book ChapterDOI
12 Sep 2006
TL;DR: A novel camera model is introduced, which encompasses the standard pinhole camera model, an extension of the division model for lens distortion, and the model for catadioptric cameras with parabolic mirror, all of which are modeled by essentially varying two parameters.
Abstract: In this paper a novel camera model, the inversion camera model, is introduced, which encompasses the standard pinhole camera model, an extension of the division model for lens distortion, and the model for catadioptric cameras with parabolic mirror. All these different camera types can be modeled by essentially varying two parameters. The feasibility of this camera model is presented in experiments where object pose, camera focal length and lens distortion are estimated simultaneously.

Patent
27 Sep 2006
TL;DR: In this paper, an imaging system is proposed that uses a black-and-white camera to photograph color images: the monochrome camera shoots the target under red, green, and blue light respectively, and the resulting gray-level images are composited to obtain a color image with three times the resolution of a Bayer filter pattern color camera, without photoelectron overflow.
Abstract: This invention relates to an imaging system that employs a black-and-white camera to photograph color images. With a monochrome camera, the target is shot under red light, green light, and blue light respectively; the gray-level images captured under the respective lights are composited to obtain a color image with three times the resolution of a Bayer filter pattern color camera, without photoelectron overflow. The invention obtains vivid images, improves image saturation and sensitivity to light, and costs only about one sixth as much as a 3CCD camera.

Book ChapterDOI
14 Nov 2006
TL;DR: The prototype orientation sensor is described and methods for calibrating the whole camera are proposed; the camera is then used to create oriented spherical panoramas and for image-based localization.
Abstract: We introduce a new type of smart camera. These cameras have an embedded orientation sensor which provides an estimate of the orientation of the camera. In this paper, we describe our prototype orientation sensor and propose some methods for the calibration of the whole camera. We then show two applications. First, the camera is used to create oriented spherical panoramas. Second, it is used for image-based localization, in which only the position of the camera has to be retrieved.