scispace - formally typeset

Showing papers on "Three-CCD camera published in 2012"


Proceedings ArticleDOI
01 Dec 2012
TL;DR: A novel scheme for data reception in a mobile phone using visible light communications (VLC) is proposed, exploiting the rolling shutter effect of CMOS sensors, and a data rate much higher than the camera frame rate is achieved.
Abstract: In this paper, a novel scheme for data reception in a mobile phone using visible light communications (VLC) is proposed. The camera of the smartphone is used as a receiver in order to capture the continuous changes in state (on-off) of the light, which are invisible to the human eye. The information is captured in the camera in the form of light and dark bands which are then decoded by the smartphone and the received message is displayed. By exploiting the rolling shutter effect of CMOS sensors, a data rate much higher than the camera frame rate is achieved.
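The band-decoding step described above can be sketched roughly as follows. This is a hypothetical illustration only; the function name, the thresholding scheme, and the run-length collapsing are our own assumptions, not the paper's implementation:

```python
import numpy as np

def decode_rolling_shutter_frame(frame, threshold=None):
    """Decode on-off keyed light bands from one rolling-shutter frame.

    Each sensor row is exposed at a slightly different time, so a fast
    blinking LED appears as horizontal light/dark bands; thresholding
    the per-row mean intensity recovers one symbol per band.
    """
    row_means = frame.mean(axis=1)          # average intensity of each row
    if threshold is None:
        threshold = row_means.mean()        # simple adaptive threshold
    bits = (row_means > threshold).astype(int)
    # Collapse runs of identical rows into one symbol per band.
    symbols = [bits[0]]
    for b in bits[1:]:
        if b != symbols[-1]:
            symbols.append(b)
    return symbols
```

Because every row carries information, a single frame yields many symbols, which is why the achievable data rate exceeds the frame rate.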

446 citations


Proceedings ArticleDOI
TL;DR: A new type of plenoptic camera with an extended depth of field and a maximal effective resolution of up to a quarter of the sensor resolution is introduced.
Abstract: Placing a micro lens array in front of an image sensor transforms a normal camera into a single lens 3D camera, which also allows the user to change the focus and the point of view after a picture has been taken. While the concept of such plenoptic cameras has been known since 1908, only recently have the increased computing power of low-cost hardware and advances in micro lens array production made the application of plenoptic cameras feasible. This text presents a detailed analysis of plenoptic cameras and introduces a new type of plenoptic camera with an extended depth of field and a maximal effective resolution of up to a quarter of the sensor resolution.

412 citations


Patent
30 Mar 2012
TL;DR: A camera with multiple lenses and multiple sensors, wherein each lens/sensor pair generates a sub-image of a final photograph or video, is described in this article. Different embodiments include: manufacturing all lenses as a single component; manufacturing all sensors as one piece of silicon; different lenses incorporating filters for different wavelengths, including IR and UV; non-circular lenses; different lenses having different focal lengths; different lenses focusing at different distances; selection of the sharpest sub-image; and blurring of selected sub-images.
Abstract: A camera with multiple lenses and multiple sensors wherein each lens/sensor pair generates a sub-image of a final photograph or video. Different embodiments include: manufacturing all lenses as a single component; manufacturing all sensors as one piece of silicon; different lenses incorporate filters for different wavelengths, including IR and UV; non-circular lenses; different lenses have different focal lengths; different lenses focus at different distances; selection of sharpest sub-image; blurring of selected sub-images; different lens/sensor pairs have different exposures; selection of optimum exposure sub-images; identification of distinct objects based on distance; stereo imaging in more than one axis; and dynamic optical center-line calibration.

136 citations


Patent
20 Jan 2012
TL;DR: In this paper, the authors present a method for transmitting a plurality of views from a video camera, where each view can correspond to a separately controlled virtual camera view of a region of interest.
Abstract: Methods and systems of transmitting a plurality of views from a video camera are disclosed. The camera captures a plurality of views from the lens and scales the views to a specified size. Each view can correspond to a separate virtual camera view of a region of interest and can be separately controlled. The camera composites at least a portion of the plurality of views into one or more views within the camera, and then transmits one or more of the views from the video camera to a base station for compositing views from the camera into a single view within the base station. Cameras can be grouped into a network, with network addresses assigned to the camera and any virtual cameras generated therein.

117 citations


Patent
05 Dec 2012
TL;DR: Spatio-temporal light field cameras, as described in this paper, capture the light field within a spatio-temporally extended angular extent, recording the intensity and color from multiple directional views within a wide angle.
Abstract: Spatio-temporal light field cameras can be used to capture the light field within a spatio-temporally extended angular extent. Such cameras can be used to record 3D images, 2D images that can be computationally focused, or wide angle panoramic 2D images with relatively high spatial and directional resolutions. The light field cameras can also be used as 2D/3D switchable cameras with extended angular extent. The spatio-temporal aspects of the novel light field cameras allow them to capture and digitally record the intensity and color from multiple directional views within a wide angle. The inherent volumetric compactness of the light field cameras makes it possible to embed them in small mobile devices to capture either 3D images or computationally focusable 2D images. The inherent versatility of these light field cameras makes them suitable for multiple-perspective light field capture for 3D movies and video recording applications.

112 citations


Proceedings ArticleDOI
02 Jul 2012
TL;DR: Results show that as few as three low-cost depth cameras can recover a more accurate 3D body shape than twenty regular cameras.
Abstract: In the last decade, gait analysis has become one of the most active research topics in biomedical research engineering, partly due to the recent development of sensors and signal processing devices and, more recently, depth cameras. The latter can provide real-time distance measurements of moving objects. In this context, we present a new way to reconstruct body volume in motion using multiple active cameras from the depth maps they provide. A first contribution of this paper is a new and simple external camera calibration method based on several plane intersections observed with a low-cost depth camera, which is experimentally validated. A second contribution is a body volume reconstruction method based on the visual hull that is adapted and enhanced with the use of depth information. Preliminary results based on simulations are presented and compared with classical visual hull reconstruction. These results show that as few as three low-cost depth cameras can recover a more accurate 3D body shape than twenty regular cameras.

54 citations


Patent
30 Nov 2012
TL;DR: In this article, a video camera band is described that includes left and right side portions, a bridge, at least one video camera, a processor, local storage and a network interface; recorded video is transmitted via the network interface to a remote computing device.
Abstract: A video camera band includes left and right side portions, a bridge, at least one video camera, a processor, local storage and a network interface. Video recorded by the video camera band is transmitted via the network interface to a remote computing device.

48 citations


Thomas Läbe
01 Jan 2012
TL;DR: Investigating the use of consumer cameras for photogrammetric measurements and vision systems, laboratory calibrations of different cameras were carried out, and the resulting calibration parameters and their accuracies are given.
Abstract: During the last years the number of available low-cost digital consumer cameras has significantly increased while their prices have decreased. Therefore, for many applications with no high-end accuracy requirements, it is an important consideration whether to use low-cost cameras. This paper investigates the use of consumer cameras for photogrammetric measurements and vision systems. An important aspect of the suitability of these cameras is their geometric stability. Two aspects should be considered: the change of calibration parameters when using the camera’s features such as zoom or auto focus, and the time invariance of the calibration parameters. Therefore laboratory calibrations of different cameras have been carried out at different times. The resulting calibration parameters, especially the principal distance and the principal point, and their accuracies are given. The usefulness of the information given in the image header, especially the focal length, is compared to the results of the calibration.

47 citations


Patent
30 Apr 2012
TL;DR: In this article, a camera that is mounted on a vehicle is used to monitor the vehicle, the driver and the contents therein, and the camera includes an external housing, a camera module having a camera lens, a lighting component, a dimmer switch and a transmission medium.
Abstract: A camera that is mounted on a vehicle is used to monitor the vehicle, the driver and the contents therein. The camera includes an external housing, a camera module having a camera lens, a lighting component, a dimmer switch and a transmission medium. The external housing may be made of alloy. The camera module may capture video data and the transmission medium may be coupled to the camera module to transmit the captured video data as a live video stream to an external device. The lighting component included in the fleet camera apparatus may include LEDs and have infra-red capabilities to provide a night vision mode. The dimmer switch is included to control the LEDs' brightness. Other embodiments are disclosed.

45 citations


Patent
24 Oct 2012
TL;DR: In this paper, a camera is described that includes a lens for capturing at least one image, and a connector for mounting the camera onto a phone and for enabling communication with the phone.
Abstract: Embodiments generally relate to a camera. In one embodiment, the camera includes a lens for enabling the camera to capture at least one image. The camera also includes a connector for mounting the camera onto a phone and for enabling the camera to communicate with the phone. The camera also includes a shutter button for triggering the camera to capture the at least one image. The camera also activates the phone and puts the phone into a camera mode when the shutter button is pressed.

41 citations


Proceedings ArticleDOI
01 Nov 2012
TL;DR: This paper proposes a new distinction method for road surface conditions at night-time, such as dry, wet and snow that uses only video information acquired by an inexpensive car-mounted video camera and uses the difference in road surface features for each condition.
Abstract: Many conventional methods for distinguishing road surface conditions using car-mounted cameras have already been proposed. However, most of these methods are only effective in daytime and bright conditions. Therefore, we need to extend these methods to night-time, which is a much more dangerous environment. In this paper, we propose a new distinction method for road surface conditions at night-time, such as dry, wet and snow. This method uses only video information acquired by an inexpensive car-mounted video camera and uses the difference in road surface features for each condition. The image features of the road surface depend on the illumination conditions, such as street lamps, signal lights, reflections and other lighting sources. Therefore, we analyze the image features based on color information and the presence of other light sources. As a result, the distinction of road surface conditions was achieved with high accuracy, including in areas illuminated by street lamps and other light sources.

Journal ArticleDOI
TL;DR: This letter shows that current sensors are already surprisingly close to a hard theoretical limit for classical video camera systems: during a certain exposure time only a certain number of photons will reach the sensor.
Abstract: Image sensors for digital cameras are built with ever decreasing pixel sizes. The size of the pixels seems to be limited by technology only. However, there is also a hard theoretical limit for classical video camera systems: During a certain exposure time only a certain number of photons will reach the sensor. The resulting shot noise thus limits the signal-to-noise ratio. In this letter we show that current sensors are already surprisingly close to this limit.
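The limit described above follows from the Poisson statistics of photon arrival: a pixel collecting N photoelectrons has noise standard deviation sqrt(N), so the signal-to-noise ratio can never exceed sqrt(N). A minimal sketch of this bound (the full-well figure in the comment is an illustrative assumption, not a value from the letter):

```python
import math

def shot_noise_limited_snr_db(photons):
    """Best-case SNR for a pixel collecting `photons` photoelectrons.

    Photon arrivals are Poisson distributed, so the noise standard
    deviation is sqrt(N) and the SNR is N / sqrt(N) = sqrt(N).
    """
    return 20 * math.log10(math.sqrt(photons))

# E.g. a small pixel with an assumed full-well capacity of ~10,000 e-
# cannot exceed sqrt(10000) = 100, i.e. 40 dB, no matter how good the
# readout electronics are.
```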

Patent
11 Jan 2012
TL;DR: In this article, an image delivery camera selection unit selects some cameras as image delivery cameras from among the cameras based on the information on the position of the moving object transmitted from each of the cameras.
Abstract: Each camera comprises an image generation unit for generating an image by capturing the image, a moving object detection unit for detecting a moving object from the image and transmitting information on the position of the moving object to the server, and an image transmission unit for, when the camera is selected by the server, transmitting an image. The server comprises an image delivery camera selection unit for selecting some cameras as image delivery cameras from among the cameras based on the information on the position of the moving object transmitted from each of the cameras, an image delivery camera notification unit for notifying the selection result to the cameras, and an image input unit for inputting the images transmitted from the image delivery cameras.

Patent
30 Apr 2012
TL;DR: In this article, a camera system for recording images consisting of at least one single camera is presented, where each camera is arranged in different directions such that they record a continuous overall image, with the overall image comprising the frames from the single cameras.
Abstract: In the case of a camera system for recording images consisting of at least one single camera, a solution is intended to be provided which allows a good sharp image to be recorded. This is achieved by virtue of the single cameras (1) each being arranged in different directions such that they record a continuous overall image, with the overall image comprising the frames from the single cameras (1), and there being a central control unit (3) which can be used to capture the motion profile of the camera system by means of at least one sensor (2) and to ascertain the trigger times of the single cameras (1) on the basis of a prescribed target function, said camera system moving autonomously over the entire time progression.

Patent
13 Mar 2012
TL;DR: In this article, a system and camera with a diffuser in the light path are described, where the camera comprises a means (6) to modulate the diffusing properties of the diffuser during exposure of the image projected by the lens onto the sensor.
Abstract: A system and camera wherein the camera comprises in the light path a diffuser (4). The system or camera comprises a means (6) to modulate the diffusing properties of the diffuser (4) on an image projected by the lens on the sensor during exposure of the image. To the captured blurred image (10) an inverse point spread function is applied to deconvolve (24) the blurred image into a sharper image. A motion-invariant image can thus be achieved.

Patent
19 Mar 2012
TL;DR: In this paper, the authors propose a video surveillance apparatus and method using a dual camera, capable of applying to either camera's video the result of video analysis performed on one video selected from those provided by a special-purpose camera, such as a thermal image camera, and a visible light camera.
Abstract: The present invention relates to a video surveillance apparatus and method using a dual camera, which is capable of applying any camera videos with a result of video analysis made based on one selected from videos provided from a special purpose camera, such as a thermal image camera, and a visible light camera. Since the video surveillance apparatus and method is capable of arranging a visible light camera and a special-purpose camera in the same surveillance area, securing FOV differences between these cameras as pixel matching parameters between images by the medium of the same space corresponding to an image of each camera, performing selective video analysis for one of these parameters, and checking a result of the analysis in any camera images through the matching information in the same way, it is possible to guarantee autonomy of video switching, continuity of object tracking in video switching, and high reliability.

Proceedings ArticleDOI
03 Jun 2012
TL;DR: A general method for scene understanding using 3D reconstruction of the environment around the vehicle based on pixel-wise image labeling using a conditional random field (CRF) that is able to create a simple 3D model of the scene and also to provide semantic labels of the different objects and areas in the image.
Abstract: Modern vehicles are equipped with multiple cameras which are already used in various practical applications. Advanced driver assistance systems (ADAS) are of particular interest because of the safety and comfort features they offer to the driver. Camera based scene understanding is an important scientific problem that has to be addressed in order to provide the information needed for camera based driver assistance systems. While frontal cameras are widely used, there are applications where cameras observing lateral space can deliver better results. Fish eye cameras mounted in the side mirrors are particularly interesting, because they can observe a large area on the side of the vehicle and can be used for several applications for which the traditional front facing cameras are not suitable. We present a general method for scene understanding using 3D reconstruction of the environment around the vehicle. It is based on pixel-wise image labeling using a conditional random field (CRF). Our method is able to create a simple 3D model of the scene and also to provide semantic labels of the different objects and areas in the image, such as cars, sidewalks, and buildings. We demonstrate how our method can be used for two applications that are of high importance for various driver assistance systems — car detection and free space estimation. We show that our system is able to perform in real time for speeds of up to 63 km/h.

Patent
04 Apr 2012
TL;DR: In this article, the authors proposed a light emitting diode (LED) video camera system comprising a movable video camera apparatus including a video camera having a lens and a plurality of LED lights arranged around at least a portion of a periphery of the lens.
Abstract: Embodiments of the present invention provide a light emitting diode (LED) video camera system comprising a movable video camera apparatus including a video camera having a lens. The movable video camera apparatus further includes a plurality of LED lights arranged around at least a portion of a periphery of the lens of the video camera. The LED video camera system further comprises an actuator for panning and tilting the video camera, and a control unit for controlling the actuator, the lens of the video camera, and the LED lights. The control unit comprises a plurality of drivers, including an LED driver for controlling lighting effects of the LED lights, an actuator driver for controlling movement of the actuator, and an optics driver for controlling zooming and focusing of the lens of the video camera.

Patent
09 Oct 2012
TL;DR: In this article, a stereographic camera system and method of operating a stereo camera system is presented, where a camera platform may include a first camera head including first left and right cameras separated by a first interocular distance.
Abstract: A stereographic camera system and method of operating a stereographic camera system. A camera platform may include a first camera head including first left and right cameras separated by a first interocular distance, the first camera head providing first left and right video streams, and a second camera head aligned with the first camera head, the second camera head including second left and right cameras separated by a second interocular distance, the second camera head providing second left and right video streams. An output selector may select either the first left and right video streams or the second left and right video streams to output as a 3D video output. The first interocular distance may be settable over a first range, and the second interocular distance may be settable over a second range, at least a portion of the second range smaller than the first range.

Patent
20 Mar 2012
TL;DR: In this article, a camera unit and method of control is described, where the camera unit has a camera flash sub-unit configurable to emit flash light having an adjustable characteristic.
Abstract: A camera unit and method of control are described. The camera unit has a camera flash sub-unit configurable to emit flash light having an adjustable characteristic. A camera sensor sub-unit generates raw color data when exposed to light for processing into a digital image. The camera unit also includes a camera controller for coordinating operation of the camera flash sub-unit and the camera sensor sub-unit. The camera controller monitors one or more ambient light characteristics in a vicinity of the camera unit. Prior to receiving a command instructing the camera unit to generate the digital image, the camera controller repeatedly configures the camera flash sub-unit based on the monitored ambient light characteristics to adjust the characteristics of the emitted flash light. Once the camera controller receives the command, the camera sensor sub-unit is instructed to expose an image sensor using the pre-adjusted camera flash light to increase illumination.

Patent
30 Nov 2012
TL;DR: In this article, the first image and the second image have an overlap region and the overlap region is evaluated to generate calibration parameters that accommodate for any vertical, horizontal or rotational misalignment between the first and second images.
Abstract: A method includes collecting a first image from a first video camera of a test pattern and a second image from a second video camera of the test pattern. The first image and the second image have an overlap region. The overlap region is evaluated to generate calibration parameters that accommodate for any vertical, horizontal or rotational misalignment between the first image and the second image. Calibration parameters are applied to video streams from the first video camera and the second video camera.
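One simple way to estimate such a misalignment from the overlap region is a brute-force shift search. The sketch below is a hedged stand-in, not the patent's actual procedure; the vertical-only search, the mean-squared-error criterion, and the function name are our own assumptions:

```python
import numpy as np

def estimate_vertical_offset(img_a, img_b, max_shift=5):
    """Estimate vertical misalignment (in rows) between two overlapping
    camera views of the same test pattern.

    Tries every shift in [-max_shift, max_shift] and returns the one
    that minimises the mean squared difference over the shared rows.
    """
    h = img_a.shape[0]
    best_shift, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = img_a[:h - s], img_b[s:]
        else:
            a, b = img_a[-s:], img_b[:h + s]
        err = np.mean((a - b) ** 2)   # alignment error at this shift
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift
```

The recovered offset would then be applied as a calibration parameter to the live video streams; horizontal and rotational misalignment could be handled analogously with a 2D or rotational search.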

Patent
02 Apr 2012
TL;DR: In this article, a system and a process configuration generates a unitary rendered image for a video from at least two cameras and determines a master camera and a slave camera, respectively.
Abstract: A system and a process configuration generate a unitary rendered image for a video from at least two cameras. The configuration detects a communication coupling of at least two cameras and determines a master camera and a slave camera. The configuration determines the orientation of the camera sensors of the master camera and the slave camera and determines a first frame of a video as a synchronization point for the start of a video capture. The configuration captures and reads images from the master camera sensor and the slave camera sensor in response to the start of the video capture and the orientation of the camera sensors.

Journal ArticleDOI
TL;DR: In this article, the development of a multi-camera system with a total weight of 1 kg under photogrammetric aspects is presented, and the question of whether the cameras need individual calibration or a single set of calibration parameters suffices for all of them is investigated.
Abstract: Due to regulations, micro-UAVs with a maximum take-off weight of <5 kg are commonly bound to applications within the line of sight. An extension of the ground coverage is possible by using a set of oblique cameras. The development of such a multi-camera system with a total weight of 1 kg under photogrammetric aspects is quite challenging. The introduced four vision camera system consists of four industrial-grade oblique 1.3 megapixel cameras (four vision) with 9 mm lenses and one nadir-looking camera with a 6 mm lens. Unlike with common consumer-grade cameras, triggering is handled and image data are stored externally on a small PC with a 64 GB hard disk and a weight of only 250 g. The key question to be answered in this paper is how good, in a photogrammetric and radiometric sense, the small cameras are, and whether they need individual calibration treatment or a single set of calibration parameters is sufficient for all cameras.

Proceedings ArticleDOI
10 Dec 2012
TL;DR: An automatic solution for user tracking and camera control is presented that uses a depth camera for user tracking, and a scalable networking architecture based on publish/subscribe messaging for controlling multiple video cameras.
Abstract: Today, talks, presentations, and lectures are often captured on video to give a broad audience the possibility to (re-)access the content. As presenters are often moving around during a talk it is necessary to guide recording cameras. We present an automatic solution for user tracking and camera control. It uses a depth camera for user tracking, and a scalable networking architecture based on publish/subscribe messaging for controlling multiple video cameras. Furthermore, we present our experiences with the system during actual lectures at a university.

Proceedings ArticleDOI
01 Nov 2012
TL;DR: This paper uses high-resolution still images to form a regularization function and uses it in the de-blurring stage of the super-resolution process, and shows that the performance of the proposed algorithm is superior in terms of recovering high-frequency details of the original video and edge reconstruction.
Abstract: In this paper, video super-resolution using sequences generated by dual-mode cameras is studied. Dual-mode cameras are capable of shooting high-resolution still images at a low rate while taking low-resolution video of the scene. An algorithm for video super-resolution is proposed which uses the high-resolution still images generated by the dual-mode cameras for the super-resolution process. We use the high-resolution still images to form a regularization function and use it in the de-blurring stage of the super-resolution process. The simulation results show that the performance of the proposed algorithm is superior in terms of recovering high-frequency details of the original video and edge reconstruction.

Proceedings ArticleDOI
20 May 2012
TL;DR: The results and images after thresholding show that depending on the application even a mid-performance CMOS camera can provide enough image quality when the target is localization of fluorescent stained biological details.
Abstract: In biological applications and systems where even the smallest details have a meaning, CCD cameras are mostly preferred and they hold most of the market share despite their high costs. In this paper, we propose a custom-designed CMOS camera to compete with the default CCD camera of an inverted microscope for fluorescence imaging. The custom-designed camera includes a commercially available mid-performance CMOS image sensor and a Field-Programmable Gate Array (FPGA) based hardware platform (FPGA4U). The high cost CCD camera of the microscope is replaced by the custom-designed CMOS camera and the two are quantitatively compared for a specific application in which Estrogen Receptor (ER) expression in breast cancer diagnostic samples, which emits light at 665 nm, has been imaged by both cameras. The gray-scale images collected by both cameras show a very similar intensity distribution. In addition, the normalized white pixel fraction after thresholding was 4.96% for the CCD and 3.38% for the CMOS. The results and images after thresholding show that, depending on the application, even a mid-performance CMOS camera can provide enough image quality when the target is localization of fluorescent stained biological details. Therefore, the cost of the cameras can be drastically reduced while benefiting from the inherent advantages of CMOS devices plus adding more features and flexibility to the camera systems with FPGAs.
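The "normalized white pixels after thresholding" metric used in the comparison above can be computed in a few lines; a sketch assuming a grayscale image, with a hypothetical threshold value of our own choosing:

```python
import numpy as np

def white_pixel_fraction(gray, threshold=128):
    """Fraction of pixels whose intensity exceeds `threshold` in a
    grayscale image — the normalised white-pixel metric reported in
    the comparison (the threshold value here is an assumption).
    """
    return float((gray > threshold).mean())
```

Applying this to the thresholded fluorescence images from each camera yields directly comparable percentages (e.g. the paper's 4.96% vs 3.38%).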

Patent
31 Jan 2012
TL;DR: In this article, a method and system for automatically binding a video camera to the absolute coordinate system and determining changes in the video camera binding are presented, in the context of forest video monitoring.
Abstract: The invention relates to forest video monitoring. A method and system are provided for automatically binding a video camera to the absolute coordinate system and determining changes in the video camera binding. In one aspect, the method comprises the steps of: at each of at least two predetermined moments in time, aiming the video camera at an object whose position in the absolute coordinate system, centered at the point where the video camera resides, is known at that moment, and determining an orientation of the video camera in the camera's native coordinate system; and, based on the determined orientations of the video camera and positions of the object, calculating a rotation of the native coordinate system of the video camera in the absolute coordinate system. The calculated rotation of the video camera's native coordinate system is used to recalculate coordinates of an observed object from the video camera's native coordinate system into the absolute coordinate system. The technical result is improved accuracy in locating the observed object.
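Computing a rotation from two or more matched object directions is a classic Wahba's-problem fit. The sketch below uses the SVD (Kabsch) solution, assuming unit direction vectors; the patent does not specify this method, so the function name, array shapes, and frame conventions are all our own assumptions:

```python
import numpy as np

def rotation_from_directions(cam_dirs, world_dirs):
    """Rotation R (absolute frame <- camera's native frame) from matched
    unit direction vectors, via the SVD (Kabsch) solution to Wahba's
    problem.

    cam_dirs, world_dirs: (N, 3) arrays of unit vectors toward the same
    objects, expressed in the camera's native frame and the absolute
    frame respectively. Returns R such that world ≈ R @ cam.
    """
    H = cam_dirs.T @ world_dirs              # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```

Once R is known, any observed direction in the camera's native frame can be mapped into the absolute coordinate system by a single matrix multiply, which is the recalculation step the abstract describes.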

Patent
11 Apr 2012
TL;DR: In this paper, an integrated 2D/3D camera system is proposed, which includes a 2D camera and a 3D camera affixed to the 2D cameras. But the 3D cameras are not remotely controllable.
Abstract: There is disclosed an integrated 2D/3D camera system which may include a 2D camera and a 3D camera affixed to the 2D camera. An inter-camera convergence angle between the 2D camera and the 3D camera may be preset. At least some imaging parameters of one of the 2D camera and the 3D camera may be preset. Imaging parameters of the other of the 2D camera and the 3D camera may be remotely controllable.

Patent
29 Nov 2012
TL;DR: In this article, a system and method for calibrating a camera includes an energy source and a camera to be calibrated, with at least one of the energy sources and the camera being mounted on a mechanical actuator so that it is movable relative to the other.
Abstract: A system and method for calibrating a camera includes an energy source and a camera to be calibrated, with at least one of the energy source and the camera being mounted on a mechanical actuator so that it is movable relative to the other. A processor is connected to the energy source, the mechanical actuator and the camera and is programmed to control the mechanical actuator to move at least one of the energy source and the camera relative to the other through a plurality of discrete points on a calibration target pattern. The processor further, at each of the discrete points, controls the camera to take a digital image and performs a lens distortion characterisation on each image. A focal length of the camera, including any lens connected to it, is determined, and an extrinsic camera position for each image is then determined.

Patent
19 Jun 2012
TL;DR: In this paper, a camera platform may include a primary scene camera having a first field of view, a context camera optically aligned with the primary camera, and a pointing mechanism.
Abstract: Remotely operated camera systems and methods of operating a remote camera. A camera platform may include a primary scene camera having a first field of view, a context camera optically aligned with the primary scene camera and having a second field of view larger than the first field of view, and a pointing mechanism. A control station remote from the camera platform may include a display system to display images captured by the primary scene camera and the context camera, and an operator interface configured to accept operator inputs to control the primary scene camera, the context camera, and the pointing mechanism.