
Showing papers on "Three-CCD camera published in 2011"


Proceedings ArticleDOI
17 May 2011
TL;DR: A novel method to select camera sensors from an arbitrary deployment to form a camera barrier is proposed, and redundancy reduction techniques to effectively reduce the number of cameras used are presented.
Abstract: Barrier coverage has attracted much attention in the past few years. However, most previous works focused on traditional scalar sensors. We propose to study barrier coverage in camera sensor networks. One fundamental difference between camera and scalar sensors is that cameras from different positions can form quite different views of the object. As a result, simply combining the sensing ranges of the cameras across the field does not necessarily form an effective camera barrier, since the face image (or the aspect of interest) of the object may be missed. To address this problem, we use the angle between the object's facing direction and the camera's viewing direction to measure the quality of sensing. An object is full-view covered if there is always a camera to cover it no matter which direction it faces, and that camera's viewing direction is sufficiently close to the object's facing direction. We study the problem of constructing a camera barrier, which is essentially a connected zone across the monitored field such that every point within this zone is full-view covered. We propose a novel method to select camera sensors from an arbitrary deployment to form a camera barrier, and present redundancy reduction techniques to effectively reduce the number of cameras used. We also present techniques to deploy cameras for barrier coverage in a deterministic environment, and analyze and optimize the number of cameras required for this specific deployment under various parameters.
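
A minimal sketch of the full-view coverage test described above, assuming omnidirectional cameras with a common sensing range r (the paper additionally accounts for each camera's own field of view); all names are illustrative. A point is full-view covered exactly when the sorted bearings from the point to its in-range cameras leave no angular gap larger than twice the effective angle theta:

```python
import math

def full_view_covered(point, cameras, r, theta):
    """Discrete full-view coverage test: the point is covered iff, for
    every possible facing direction, some in-range camera lies within
    theta of it -- equivalently, the sorted bearings from the point to
    its in-range cameras leave no angular gap larger than 2*theta."""
    px, py = point
    bearings = sorted(
        math.atan2(cy - py, cx - px)
        for cx, cy in cameras
        if math.hypot(cx - px, cy - py) <= r
    )
    if not bearings:
        return False
    gaps = [b2 - b1 for b1, b2 in zip(bearings, bearings[1:])]
    gaps.append(2 * math.pi - (bearings[-1] - bearings[0]))  # wrap-around gap
    return max(gaps) <= 2 * theta

# Four cameras around the origin, effective angle 50 degrees:
cams = [(1, 0), (0, 1), (-1, 0), (0, -1)]
print(full_view_covered((0, 0), cams, r=2.0, theta=math.radians(50)))  # True
```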

133 citations


Journal ArticleDOI
TL;DR: This paper uses the abstract notion of light field representation as a general tool to describe computational camera designs, where each camera can be formulated as a projection of a high-dimensional light field to a 2-D image sensor.
Abstract: A computational camera uses a combination of optics and processing to produce images that cannot be captured with traditional cameras. In the last decade, computational imaging has emerged as a vibrant field of research. A wide variety of computational cameras has been demonstrated to encode more useful visual information in the captured images, as compared with conventional cameras. In this paper, we survey computational cameras from two perspectives. First, we present a taxonomy of computational camera designs according to the coding approaches, including object side coding, pupil plane coding, sensor side coding, illumination coding, camera arrays and clusters, and unconventional imaging systems. Second, we use the abstract notion of light field representation as a general tool to describe computational camera designs, where each camera can be formulated as a projection of a high-dimensional light field to a 2-D image sensor. We show how individual optical devices transform light fields and use these transforms to illustrate how different computational camera designs (collections of optical devices) capture and encode useful visual information.
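
To make the light-field view concrete, here is a toy sketch (synthetic data, illustrative names) of how two familiar designs arise as different projections of a 4-D light field L(u, v, s, t) onto a 2-D sensor, plus synthetic refocusing as one classic computational-camera projection:

```python
import numpy as np

# A toy 4-D light field L(u, v, s, t): (u, v) indexes positions on the
# aperture plane, (s, t) positions on the sensor plane.  In the survey's
# formulation, each camera design corresponds to some projection of this
# high-dimensional function onto a 2-D sensor image.
rng = np.random.default_rng(0)
U, V, S, T = 5, 5, 64, 64
L = rng.random((U, V, S, T))

# A conventional (full-aperture) camera integrates over the aperture:
conventional = L.mean(axis=(0, 1))            # shape (S, T)

# A pinhole camera samples a single aperture point instead:
pinhole = L[U // 2, V // 2]                   # shape (S, T)

# Synthetic refocusing: shear the light field before integrating,
# shifting each sub-aperture view in proportion to its aperture offset.
def refocus(L, alpha):
    U, V, S, T = L.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du, dv = int(alpha * (u - U // 2)), int(alpha * (v - V // 2))
            out += np.roll(L[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

refocused = refocus(L, alpha=1.0)
print(conventional.shape, pinhole.shape, refocused.shape)
```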

127 citations


Patent
13 Sep 2011
TL;DR: In this article, a wearable digital video camera (10) is equipped with wireless connection protocol and global navigation and location positioning system technology to provide remote image acquisition control and viewing, and a rotating mount (300) with a locking member (330) on the camera housing (22) allows adjustment of the pointing angle of the wearable video camera when it is attached to a mounting surface.
Abstract: A wearable digital video camera (10) is equipped with wireless connection protocol and global navigation and location positioning system technology to provide remote image acquisition control and viewing. The Bluetooth® packet-based open wireless technology standard protocol (400) is preferred for use in providing control signals or streaming data to the digital video camera and for accessing image content stored on or streaming from the digital video camera. The GPS technology (402) is preferred for use in tracking of the location of the digital video camera as it records image information. A rotating mount (300) with a locking member (330) on the camera housing (22) allows adjustment of the pointing angle of the wearable digital video camera when it is attached to a mounting surface.

126 citations


Proceedings ArticleDOI
20 Jun 2011
TL;DR: This work presents a new approach to capture video at high spatial and spectral resolutions using a hybrid camera system that propagates the multispectral information into the RGB video to produce a video with both high spectral and spatial resolution.
Abstract: We present a new approach to capture video at high spatial and spectral resolutions using a hybrid camera system. Composed of an RGB video camera, a grayscale video camera and several optical elements, the hybrid camera system simultaneously records two video streams: an RGB video with high spatial resolution, and a multispectral video with low spatial resolution. After registration of the two video streams, our system propagates the multispectral information into the RGB video to produce a video with both high spectral and spatial resolution. This propagation between videos is guided by color similarity of pixels in the spectral domain, proximity in the spatial domain, and the consistent color of each scene point in the temporal domain. The propagation algorithm is designed for rapid computation to allow real-time video generation at the original frame rate, and can thus facilitate real-time video analysis tasks such as tracking and surveillance. Hardware implementation details and design tradeoffs are discussed. We evaluate the proposed system using both simulations with ground truth data and on real-world scenes. The utility of this high resolution multispectral video data is demonstrated in dynamic white balance adjustment and tracking.
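
The propagation step can be sketched as a joint-bilateral-style weighting using the paper's two spatial-frame guidance cues, color similarity and spatial proximity (the temporal term and the real-time implementation are omitted); function and parameter names are illustrative:

```python
import numpy as np

def propagate_spectra(rgb, ms_rgb, ms_spectra, ms_xy, sigma_c=0.1, sigma_s=8.0):
    """Propagate low-resolution multispectral samples into a high-resolution
    RGB frame.  rgb: (H, W, 3) in [0, 1]; ms_rgb: (N, 3) RGB color of each
    sample; ms_spectra: (N, B) B-band spectra; ms_xy: (N, 2) pixel coords.
    Returns (H, W, B) estimated per-pixel spectra."""
    H, W, _ = rgb.shape
    ys, xs = np.mgrid[0:H, 0:W]
    out = np.zeros((H, W, ms_spectra.shape[1]))
    wsum = np.zeros((H, W))
    for color, spec, (sx, sy) in zip(ms_rgb, ms_spectra, ms_xy):
        # spectral-domain color similarity ...
        w_c = np.exp(-((rgb - color) ** 2).sum(-1) / (2 * sigma_c ** 2))
        # ... times spatial-domain proximity
        w_s = np.exp(-((xs - sx) ** 2 + (ys - sy) ** 2) / (2 * sigma_s ** 2))
        w = w_c * w_s
        out += w[..., None] * spec
        wsum += w
    return out / np.maximum(wsum, 1e-8)[..., None]

# Toy usage with random data:
rgb = np.random.rand(32, 32, 3)
spectra = propagate_spectra(rgb,
                            ms_rgb=np.random.rand(10, 3),
                            ms_spectra=np.random.rand(10, 8),
                            ms_xy=np.random.randint(0, 32, (10, 2)))
print(spectra.shape)   # (32, 32, 8)
```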

83 citations


Proceedings ArticleDOI
06 Nov 2011
TL;DR: Experiments show near-perfect accuracy in identifying cameras of different brands and models, and the proposed method also performs quite well in distinguishing among camera devices of the same model.
Abstract: Source camera identification finds many applications in the real world. Although many identification methods have been proposed, they work with only a small set of cameras and are weak at identifying cameras of the same model. Based on the observation that a digital image would not change if the same Auto-White Balance (AWB) algorithm were applied a second time, this paper proposes to identify the source camera by approximating the AWB algorithm used inside the camera. To the best of our knowledge, this is the first time a source camera identification method based on AWB has been reported. Experiments show near-perfect accuracy in identifying cameras of different brands and models. The proposed method also performs quite well in distinguishing among camera devices of the same model: since AWB is done at the end of the imaging pipeline, any small differences induced earlier lead to different AWB outputs. Furthermore, the performance remains stable as the number of cameras grows large.
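
The key observation is easy to reproduce with a simple AWB such as gray-world, used here only as a stand-in for the camera's proprietary algorithm (the paper approximates the real in-camera AWB): re-applying the algorithm to an already-balanced image changes almost nothing, so the residual can serve as an identification score. A minimal sketch:

```python
import numpy as np

def gray_world_awb(img):
    """Gray-world white balance: scale each channel so the channel means
    equalize.  A stand-in for a camera's proprietary AWB algorithm."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means
    return np.clip(img * gains, 0.0, 1.0)

def awb_residual(img, awb):
    """Idempotence test: if `img` already went through `awb` inside the
    camera, re-applying it should change (almost) nothing."""
    return np.abs(awb(img) - img).mean()

rng = np.random.default_rng(1)
raw = rng.random((64, 64, 3)) * np.array([0.9, 0.6, 0.8])  # color-cast scene
photo = gray_world_awb(raw)                 # what the "camera" outputs

print(awb_residual(photo, gray_world_awb))  # ~0: AWB is (near) idempotent
print(awb_residual(raw, gray_world_awb))    # larger: raw was never balanced
```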

56 citations


Patent
Masato Uchihara1
22 Apr 2011
TL;DR: In this article, a camera control system having a terminal device connected to a plurality of cameras via a network is provided, where the terminal device displays peripheral camera information, which corresponds to a shooting direction in which the operation has been requested, together with video shot by the camera being controlled.
Abstract: A camera control system having a terminal device connected to a plurality of cameras via a network is provided. In response to an operation that exceeds a maximum control value (maximum control angle) of the PTZ of a camera that is the target of control, the terminal device displays peripheral camera information, which corresponds to a shooting direction in which the operation has been requested, together with video shot by the camera being controlled. The peripheral camera information includes installation camera position information, viewable angle information, control status information and peripheral map information as well as at least one item of captured video from a camera other than the camera being controlled.

53 citations


Patent
07 Jun 2011
TL;DR: In this paper, a network device combines the first video feed and the second video feed to generate a synchronized combined video feed that overlays the images of the subject of the second video feed onto images of the first site.
Abstract: A network device receives, from a first video camera system, position information for a first video camera at a first site and sends, to a second video camera system, position instructions for a second video camera at a second site. The position instructions are configured to locate the second video camera within the second site to correspond to a relative position of the first camera in the first site. The network device receives, from the first video camera system, a first video feed including images of the first site and receives, from the second video camera system, a second video feed including images of a subject of the second site. The network device combines the first video feed and the second video feed to generate a synchronized combined video feed that overlays the images of the subject of the second video feed in images of the first site.
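
The final combination step reduces to mask-based compositing once the subject has been segmented from the second feed. A minimal sketch under that assumption (the position-mirroring and synchronization logic of the patent is omitted; all names are illustrative):

```python
import numpy as np

def composite(site_frame, subject_frame, subject_mask):
    """Per-pixel alpha compositing: keep the first site's pixels except
    where the mask marks the subject from the second feed."""
    m = subject_mask[..., None].astype(float)       # (H, W, 1)
    return (1 - m) * site_frame + m * subject_frame

# Toy usage with random frames and a box-shaped subject mask:
site = np.random.rand(120, 160, 3)
subject = np.random.rand(120, 160, 3)
mask = np.zeros((120, 160))
mask[40:90, 60:110] = 1
combined = composite(site, subject, mask)
print(combined.shape)  # (120, 160, 3)
```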

53 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed hybrid camera system produces multi-view video sequences with more accurate depth maps, especially along object boundaries, and is therefore better suited to generating natural 3-D views for 3-D TV than previous works.

48 citations


Patent
07 Apr 2011
TL;DR: In this article, a 3D rendering method is proposed to increase the performance when projecting and compositing multiple images or video sequences from real-world cameras on top of a precise 3D model of the real world.
Abstract: A 3D rendering method is proposed to increase performance when projecting and compositing multiple images or video sequences from real-world cameras on top of a precise 3D model of the real world. Unlike previous methods that relied on shadow-mapping and were limited in performance by the need to re-render the complex scene multiple times per frame, the proposed method uses one Camera Projection Mesh ("CPM") of fixed and limited complexity per camera. The CPM that surrounds each camera is effectively molded over the surrounding 3D world surfaces or areas visible from the video camera. Rendering and compositing of the CPMs may be performed entirely on the Graphics Processing Unit ("GPU") using custom shaders for optimal performance. The method also enables improved view-shed analysis and fast visualization of the coverage of multiple cameras.

39 citations


Patent
24 Oct 2011
TL;DR: In this paper, the authors present a method to directly capture data-volume-reduced or compressed color images or video image frames with a camera system, compared to those from conventional color camera systems with the same image resolution.
Abstract: System and method to (1) directly capture data-volume-reduced or compressed color images or video image frames by a camera system compared to those from conventional color camera systems with the same image resolution, and (2) retrieve uncompressed color images or video image frames for display, visualization, or other image/video processing at the end user side.

36 citations


Patent
31 May 2011
TL;DR: In this article, a system for displaying a 3D thermal image by using two thermal imaging cameras and extracting distance/depth data in the thermal images is presented, where one thermal imaging camera is used as a master camera serving as a reference and the other as a slave camera to correct gain and offset of the thermal image.
Abstract: A system for displaying a 3D thermal image by using two thermal imaging cameras and extracting distance/depth data from the thermal images. The system includes two thermal imaging cameras, where one is used as a master camera serving as a reference and the other as a slave camera whose gain and offset are corrected to ensure uniformity. In addition, an apparatus and method are provided for correcting the gain and offset of the thermal images to be identical to each other and ensuring uniformity, using a processing module separate from the two thermal imaging cameras.
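
The master/slave correction amounts to fitting a per-camera gain and offset so the slave's counts match the master reference over registered pixels. A least-squares sketch under that assumption (illustrative names and synthetic numbers):

```python
import numpy as np

def fit_gain_offset(master, slave):
    """Least-squares gain/offset so that gain*slave + offset ~ master.
    Assumes the two thermal images are already registered to each other."""
    A = np.stack([slave.ravel(), np.ones(slave.size)], axis=1)
    (gain, offset), *_ = np.linalg.lstsq(A, master.ravel(), rcond=None)
    return gain, offset

# Synthetic check: a slave camera with gain 1.07 and offset -3.2 counts.
rng = np.random.default_rng(2)
master = rng.normal(300.0, 5.0, (120, 160))              # radiometric counts
slave = (master + 3.2) / 1.07 + rng.normal(0, 0.1, master.shape)

g, o = fit_gain_offset(master, slave)
print(g, o)                        # ~1.07, ~-3.2
corrected = g * slave + o          # slave now matches the master reference
```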

Patent
Seger Ulrich1, Bauer Nikolai1
04 Apr 2011
TL;DR: In this paper, a camera for a vehicle is described, in which the camera has at least one optoelectronic image converter, a camera housing in whose interior the image converter is accommodated, and a camera optical element, which is accommodated in an optical mounting reference surface of the camera housing and is provided for imaging a primary detection area on the image converter.
Abstract: A camera for a vehicle is described. The camera has at least one optoelectronic image converter, a camera housing in whose interior the image converter is accommodated, a camera optical element, which is accommodated in an optical mounting reference surface of the camera housing and is provided for imaging a primary detection area on the image converter, and a camera position reference surface for positioning the camera in relation to a vehicle window. At least one receiving surface for at least one light-guiding device, e.g., one or a plurality of mirrors, is formed on the camera housing for deflecting light from at least one additional detection area to the camera optical element. The optical mounting reference surface, camera position reference surface, and receiving surface are preferably formed on a single housing component, e.g., an upper shell.

Journal ArticleDOI
TL;DR: In this paper, three different sensors have been tested: a CCD sensor equipped with a Bayer filter, a Foveon sensor and a 3CCD sensor; the best results have been obtained with the 3CCD sensor.
Abstract: In digital holographic interferometry, the resolution of the reconstructed hologram depends on the pixel size and pixel number of the sensor used for recording. When different wavelengths are simultaneously used as a luminous source for the interferometer, the shape and overlap of the three filters of a color sensor strongly influence the three reconstructed images. This problem can be directly visualized in the 2D Fourier planes of the red, green and blue channels. To better understand this problem and to avoid parasitic images generated at reconstruction, three different sensors have been tested: a CCD sensor equipped with a Bayer filter, a Foveon sensor and a 3CCD sensor. The first is a Bayer mosaic where one half of the pixels detect green and only one quarter detect red or blue. As the missing data are interpolated among the color detection positions, offsets and artifacts are generated. The second is a specific sensor composed of three stacked photodiode layers. Its technology differs from that of the classical color mosaic sensor because each pixel location detects the three colors simultaneously. So the three colors are recorded simultaneously with identical spatial resolution, which corresponds to the spatial resolution of the sensor. However, the spectral response of the sensor is broad around each wavelength, since the color segmentation is based on the penetration depth of photons in silicon. Finally, with a 3CCD sensor, each color image is recorded on a separate sensor with the same resolution. In order to test the sensor influence, we have developed a specific optical bench which allows the near-wake flow around a circular cylinder at Mach 0.45 to be characterized. The best results have been obtained with the 3CCD sensor.
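
The Bayer-versus-3CCD comparison comes down to sampling: a single-chip Bayer sensor records only one channel per pixel, so each color channel occupies a reduced Nyquist band in the 2D Fourier plane, whereas a 3CCD sensor records all three channels at full resolution. A small sketch simulating the mosaic (illustrative code, not from the paper):

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate single-sensor capture through an RGGB Bayer filter: each
    pixel keeps only one of the three channels (1/4 R, 1/2 G, 1/4 B).
    A 3CCD (or Foveon) sensor would instead keep all three channels at
    every pixel, which is the comparison made in the paper."""
    H, W, _ = rgb.shape
    mosaic = np.zeros((H, W))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B
    return mosaic

# Red and blue are sampled at half the spatial rate in each axis, so their
# usable bandwidth in the Fourier plane is halved -- the source of the
# overlapping/parasitic orders the paper visualizes in the 2D FFT.
rgb = np.random.rand(8, 8, 3)
print(bayer_mosaic(rgb).shape)
```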

Patent
Ziji Huang1
14 Nov 2011
TL;DR: In this article, the authors utilize multiple built-in cameras on a mobile device, e.g., a phone, to capture images, each of which includes a portion from each of the cameras.
Abstract: Various embodiments utilize multiple built-in cameras on a mobile device, e.g., a phone, to capture images, each of which includes a portion from each of the cameras. The ratio and layout of the portions from different cameras can be adjusted before the image is captured and stored. In at least some embodiments, individual cameras face different directions, and images captured by one of the cameras can be incorporated into images captured by another of the cameras. For example, in at least some embodiments, particularly those in which a user's image is captured, the user's image can be extracted from the view of a first camera, such as a front-facing camera on a mobile device, and displayed to the user in the foreground of the image captured by a second camera, such as a landscape image captured by a back-facing camera on the mobile device.

Proceedings ArticleDOI
01 Jun 2011
TL;DR: A new camera model and its calibration are defined, and experimental results indicate that, using this calibration, cameras under the Scheimpflug condition can be accurately calibrated.
Abstract: An easy way to achieve a higher depth of field in a camera-laser configuration is to tilt the image sensor with respect to the lens plane, such that the image plane, laser plane and lens plane intersect in a unique line (the Scheimpflug condition [1]). If something has to be measured with this kind of camera, a proper camera calibration must be done. The usual calibration methods are not valid in this case because they are based on the pin-hole camera model, which is valid only for normal cameras, i.e. cameras whose image plane and lens plane are parallel. Thus, a new camera model and its respective calibration must be developed, which include the Scheimpflug angle among the intrinsic camera parameters. In this article, the new camera model and its calibration are defined. Experimental results indicate that, using this calibration, cameras under the Scheimpflug condition can be accurately calibrated.
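
A geometric sketch of the tilted-sensor projection, assuming the image plane is rotated by angles tau_x and tau_y about the sensor axes (the paper's full model also includes lens distortion and pixel-grid intrinsics; names are illustrative). With zero tilt it reduces to the ordinary pinhole projection:

```python
import numpy as np

def project_scheimpflug(X, f, tau_x, tau_y):
    """Project a 3-D point X (camera coordinates) onto an image plane at
    distance f, tilted by tau_x / tau_y about the sensor axes.  Returns
    2-D coordinates in the tilted sensor frame."""
    cx, sx = np.cos(tau_x), np.sin(tau_x)
    cy, sy = np.cos(tau_y), np.sin(tau_y)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    R = Ry @ Rx                        # sensor-tilt rotation
    c = np.array([0.0, 0.0, f])        # plane point on the optical axis
    n = R @ np.array([0.0, 0.0, 1.0])  # tilted plane normal
    t = n.dot(c) / n.dot(X)            # ray/plane intersection parameter
    p = t * np.asarray(X, float)       # intersection point in 3-D
    # coordinates of p along the rotated sensor axes:
    return np.array([R[:, 0].dot(p - c), R[:, 1].dot(p - c)])

print(project_scheimpflug([0.1, 0.05, 1.0], f=0.05, tau_x=0.0, tau_y=0.0))
# -> ordinary pinhole result f*[X/Z, Y/Z] = [0.005, 0.0025]
print(project_scheimpflug([0.1, 0.05, 1.0], f=0.05, tau_x=0.1, tau_y=0.05))
```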

Patent
Sen Wang1
17 Nov 2011
TL;DR: In this article, a smoothing operation is applied to the input camera path to determine a smoothed camera path, and a stabilized video frame is determined corresponding to each of the smoothed camera positions.
Abstract: A method for stabilizing an input digital video. Input camera positions are determined for each of the input video frames, and an input camera path is determined representing input camera position as a function of time. A smoothing operation is applied to the input camera path to determine a smoothed camera path, and a corresponding sequence of smoothed camera positions. A stabilized video frame is determined corresponding to each of the smoothed camera positions by: selecting an input video frame having a camera position near to the smoothed camera position; warping the selected input video frame responsive to the input camera position; warping a set of complementary video frames captured from different camera positions than the selected input video frame; and combining the warped input video frame and the warped complementary video frames to form the stabilized video frame.
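
The path-smoothing step can be sketched with a simple Gaussian filter over per-frame camera positions; the per-frame warp is then driven by the difference between input and smoothed positions. A pure-translation toy version (the patent additionally warps and blends complementary frames to fill revealed borders):

```python
import numpy as np

def smooth_path(path, sigma=5.0):
    """Gaussian smoothing of a camera path: `path` is (T, D) camera
    positions over time (D = 2 here for x/y translation)."""
    radius = int(3 * sigma)
    k = np.exp(-np.arange(-radius, radius + 1) ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    padded = np.pad(path, ((radius, radius), (0, 0)), mode='edge')
    return np.stack([np.convolve(padded[:, d], k, mode='valid')
                     for d in range(path.shape[1])], axis=1)

# Each stabilized frame is the input frame warped by the offset between
# its input position and the smoothed position.
path = np.cumsum(np.random.default_rng(3).normal(0, 1, (100, 2)), axis=0)
smoothed = smooth_path(path)
offsets = smoothed - path          # per-frame warp to apply
print(offsets.shape)               # (100, 2)
```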

Patent
27 Sep 2011
TL;DR: In this article, a camera image which has not been converted into one of the individual overview images is displayed in an enlarged state, together with an image obtained by the front camera 13F, in an image display section 20a of a monitor.
Abstract: A back camera 13B, left-side and right-side cameras 13L and 13R, and a front camera 13F are provided on an upper swiveling body 3. By converting the viewpoints of the camera images obtained by the cameras 13B, 13L, and 13R, individual overview images are generated and combined with one another to form a surveillance panorama image disposed around a display character 21. The surveillance panorama image is displayed, together with a camera image obtained by the front camera 13F, in an image display section 20a of a monitor 20. By appropriately operating switches SW1 through SW5 provided in an operation panel section 20b, a camera image which has not been converted into one of the individual overview images is displayed in an enlarged state.
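
Each individual overview image is a viewpoint conversion of a camera image onto the ground plane, which for a calibrated camera is a homography warp. A minimal inverse-warping sketch (H would come from each camera's calibration against the ground plane; names are illustrative):

```python
import numpy as np

def warp_homography(img, H, out_shape):
    """Inverse-warp `img` with a 3x3 homography H that maps output
    (top-down overview) pixels to input camera pixels; the individual
    overview images would then be blended into the panorama."""
    Ho, Wo = out_shape
    ys, xs = np.mgrid[0:Ho, 0:Wo]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = H @ pts
    sx = (src[0] / src[2]).round().astype(int)
    sy = (src[1] / src[2]).round().astype(int)
    ok = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out = np.zeros((Ho, Wo) + img.shape[2:], img.dtype)
    out.reshape(Ho * Wo, -1)[ok] = img[sy[ok], sx[ok]].reshape(ok.sum(), -1)
    return out

# Sanity check: the identity homography leaves the image unchanged.
img = np.random.rand(40, 60, 3)
print(np.allclose(warp_homography(img, np.eye(3), (40, 60)), img))  # True
```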

Patent
Kevin Reece1
31 Aug 2011
TL;DR: In this paper, an aerial camera system consisting of a camera cluster, including a plurality of cameras, each camera orientated in a direction selected from a variety of different camera directions having a downward component, is described.
Abstract: An aerial camera system is disclosed comprising: a camera cluster, including a plurality of cameras, each camera orientated in a direction selected from a plurality of different camera directions having a downward component; one or more rotators that rotate the camera cluster about respective one or more axes in response to one or more signals, and a control module that successively provides one or more signals to the one or more rotators to rotate the camera cluster and cause the cameras in the camera cluster to acquire respective aerial images.

Patent
22 Dec 2011
TL;DR: In this paper, an approach for enabling users to generate an interpolated view of content based on footage captured by multiple cameras is described, where an interpolation platform receives a plurality of images from a pluralityof video cameras providing overlapping fields of view.
Abstract: An approach for enabling users to generate an interpolated view of content based on footage captured by multiple cameras is described. An interpolation platform receives a plurality of images from a plurality of video cameras providing overlapping fields of view. A camera angle that is different from angles provided by the plurality of cameras is selected. The interpolation platform then generates an interpolated image corresponding to the selected camera angle using a portion or all of the plurality of images from the plurality of cameras.

Proceedings ArticleDOI
10 May 2011
TL;DR: An efficient, range-free and anchor-free method for self-calibrating the extrinsic parameters of the cameras in non-overlapping camera sensor networks is presented; it can be applied even when the target takes sharp turns out of any camera's Field of View within a few steps.
Abstract: Accurate extrinsic camera self-calibration, namely determining the positions and orientations of the networked cameras by themselves, is essential for many applications such as surveillance, intelligent environments and traffic monitoring. This paper describes an efficient, range-free and anchor-free method for self-calibrating the extrinsic parameters of the cameras in non-overlapping camera sensor networks. The proposed method builds on the method proposed by Ali at CVPR 2004. Knowledge of the locations or angles, obtained from an assisting sensor (an accelerometer or angular accelerometer) installed on the moving object, provides additional effective constraints on the optimization problem used to compute the cameras' poses. Simulation results show that the number of iterations, the calibration error, and the volume of data needed by the improved method are far smaller than for the original method. The advantage of the method is that it can be applied even when the target takes sharp turns out of any camera's Field of View (FoV) within a few steps.

Patent
01 Jun 2011
TL;DR: In this article, a system for automatically determining the placement of cameras receives data relating to a plurality of polygons, where each polygon represents one or more of a surveillance area, non-surveillance area, a blank area, and an obstacle.
Abstract: A system for automatically determining the placement of cameras receives data relating to a plurality of polygons. The polygons represent one or more of a surveillance area, a non-surveillance area, a blank area, and an obstacle. The system selects one or more initial cameras, including one or more initial camera positions, initial camera orientations, and initial camera features, wherein the initial camera positions and the initial camera orientations cause one or more fields of view of the initial cameras to cover at least a part of the surveillance area. The system alters one or more of a number of cameras, an orientation of the cameras, a location of the cameras, a type of the cameras, and a crossover of two or more cameras. The system uses a fitness function to evaluate all of the cameras and all of the camera positions, and selects one or more cameras and the locations and orientations of the one or more cameras as a function of the fitness function.
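
The alter-and-evaluate cycle can be sketched as randomized improvement over camera poses with a coverage-fraction fitness function (obstacles and the different polygon types are omitted; all numbers and names are illustrative):

```python
import math, random

def covered(pt, cam):
    """True if `pt` falls inside cam's field of view, given as
    (position, heading, half-angle, range)."""
    (cx, cy), heading, half_fov, reach = cam
    dx, dy = pt[0] - cx, pt[1] - cy
    if math.hypot(dx, dy) > reach:
        return False
    diff = (math.atan2(dy, dx) - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= half_fov

def fitness(cams, surveillance_pts):
    """Fraction of surveillance points seen by at least one camera."""
    return sum(any(covered(p, c) for c in cams)
               for p in surveillance_pts) / len(surveillance_pts)

# Randomized improvement loop in the spirit of the patent's
# alter-and-evaluate cycle (mutating positions and orientations):
random.seed(4)
pts = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200)]
cams = [((5, 5), 0.0, math.radians(30), 4.0) for _ in range(3)]
best = fitness(cams, pts)
for _ in range(500):
    i = random.randrange(len(cams))
    (x, y), h, fov, reach = cams[i]
    trial = list(cams)
    trial[i] = ((x + random.gauss(0, 1), y + random.gauss(0, 1)),
                h + random.gauss(0, 0.5), fov, reach)
    f = fitness(trial, pts)
    if f > best:
        cams, best = trial, f
print(best)   # covered fraction after optimization
```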

Patent
25 Feb 2011
TL;DR: In this article, a technique for easily detecting a person who attempts illegal camcording, by using device(s) that removes non-visible light as an anti-camcording interference image from a video image containing the nonvisible light.
Abstract: Provided is a technique for easily detecting a person who attempts illegal camcording, by using device(s) that removes non-visible light as an anti-camcording interference image from a video image containing the non-visible light. A video image display system includes: a screen that displays a video image; an infrared ray emitting unit that projects infrared rays together with the video image toward an observer; and an infrared ray camera that detects infrared rays reflected by a video camera owned by the observer. The infrared ray camera particularly detects reflection light from an infrared ray cut filter attached to the video camera of the observer.

Patent
Billy Chen1, Eyal Ofek1
23 May 2011
TL;DR: Simulated high resolution, multi-view video based on video input from low resolution, single-direction cameras is provided in this article, where video received from traffic cameras, security cameras, monitoring cameras, and comparable ones is fused with patches from a database of pre-captured images and/or temporally shifted video to create higher quality video, as well as multiple viewpoints for the same camera.
Abstract: Simulated high resolution, multi-view video based on video input from low resolution, single-direction cameras is provided. Video received from traffic cameras, security cameras, monitoring cameras, and comparable ones is fused with patches from a database of pre-captured images and/or temporally shifted video to create higher quality video, as well as multiple viewpoints for the same camera.

Patent
30 Jun 2011
TL;DR: In this article, a controller controls a camera that produces a sequence of images and that has output coupled to a video encoder, such that the camera produces additional images of the sequences of images for the video decoder using the adjusted one or more imaging parameters.
Abstract: A controller controls a camera that produces a sequence of images and that has output coupled to a video encoder. The camera has an operating condition including a field of view and lighting, and one or more imaging parameters. The video encoder encodes images from the camera into codewords. The controller receives one or more encoding properties from the video encoder, and causes adjusting one or more of the imaging parameters based on at least one of the received encoding properties, such that the camera produces additional images of the sequence of images for the video encoder using the adjusted one or more imaging parameters.
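
One step of such a feedback loop might look like the following sketch, where the encoder property and the adjusted parameter (`bits_last_frame`, `gain`) are illustrative stand-ins, not terms from the patent:

```python
def adjust_imaging(params, encoding_props, target_bits=200_000):
    """One step of the encoder-to-camera feedback loop: read a property
    back from the video encoder and nudge an imaging parameter."""
    bits = encoding_props["bits_last_frame"]
    if bits > 1.2 * target_bits:
        # Noisy/grainy frames are expensive to encode; lowering sensor
        # gain trades a darker image for a cheaper bitstream.
        params["gain"] = max(1.0, params["gain"] * 0.9)
    elif bits < 0.8 * target_bits:
        params["gain"] = min(16.0, params["gain"] * 1.1)
    return params

print(adjust_imaging({"gain": 8.0}, {"bits_last_frame": 400_000}))
# -> {'gain': 7.2}: the next images are captured with the adjusted gain
```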

Patent
Takayuki Kimura1, Takekazu Terui1
27 Apr 2011
TL;DR: In this paper, the color of the light source is detected based on a plurality of pixel sensors and erroneous detection due to light falling on only a single R, G or B pixel sensor is prevented.
Abstract: A camera apparatus installed on a vehicle includes an image sensor having an RGB Bayer array of pixel sensors and a beam-splitting optical filter disposed between the camera lens assembly and the Bayer array. An incident light beam from a source such as a distant vehicle tail lamp becomes split into a plurality of light beams which become focused on respectively separate pixel sensors. Since the color of the light source is detected based on a plurality of pixel sensors, erroneous detection due to light falling on only a single R, G or B pixel sensor is prevented.

Proceedings ArticleDOI
26 Jul 2011
TL;DR: A novel technique to calibrate a network of cameras by fusing inertial and visual data that is notably fast and applicable to dynamic moving cameras (robots), consequently localizing the robots, as long as the two marked points are visible to them.
Abstract: This paper proposes a novel technique to calibrate a network of cameras by fusion of inertial-visual data. The network contains a set of still cameras (the structure) and one or more mobile agent cameras. Each camera within the network is assumed to be rigidly coupled with an Inertial Sensor (IS). By fusing inertial and visual data, it becomes possible to consider a virtual camera alongside each camera within the network, using the concept of the infinite homography. The virtual camera is downward-looking, with its optical axis parallel to gravity and a horizontal image plane. Taking advantage of these virtual cameras, the transformations between cameras are estimated knowing just the heights of two arbitrary points with respect to one camera within the structure network. The proposed approach is notably fast and requires minimal human interaction. Another novelty of this method is its applicability to dynamic moving cameras (robots), in order to calibrate the cameras and consequently localize the robots, as long as the two marked points are visible to them.
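
The virtual-camera construction rests on rotating each camera so its optical axis aligns with the gravity direction reported by its inertial sensor; the induced image warp is the infinite homography K R K^{-1}, with K the intrinsic matrix. A sketch of that rotation (all values illustrative):

```python
import numpy as np

def rotation_aligning(a, b):
    """Rotation matrix sending unit vector a to unit vector b (Rodrigues).
    Undefined only in the degenerate case a == -b."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    v, c = np.cross(a, b), a.dot(b)
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K / (1 + c)

# The inertial sensor gives the gravity direction g in the camera frame;
# rotating the camera by R makes its optical axis (0, 0, 1) parallel to
# gravity -- the "virtual downward-looking camera" with a horizontal
# image plane described in the abstract.
g = np.array([0.1, -0.2, 0.97])          # accelerometer reading, camera frame
R = rotation_aligning(np.array([0.0, 0.0, 1.0]), g)
print(np.allclose(R @ np.array([0.0, 0.0, 1.0]),
                  g / np.linalg.norm(g)))  # True
```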

Patent
15 Sep 2011
TL;DR: In this paper, a rigid model of the relationships between the mini-frames of the plural cameras is proposed to accelerate real-time video stitching by reusing the movement relationship of a first mini-frame of a first camera on corresponding mini-frames of the other cameras in the system.
Abstract: A video imaging system for use with or in a mobile video capturing system (e.g., an airplane or UAV). A multi-camera rig containing a number of cameras (e.g., 4) receives a series of mini-frames (e.g., from respective field steerable mirrors (FSMs)). The mini-frames received by the cameras are supplied to (1) an image registration system that calibrates the system by registering relationships corresponding to the cameras and/or (2) an image processor that processes the mini-frames in real-time to produce a video signal. The cameras can be infra-red (IR) cameras or other electro-optical cameras. By creating a rigid model of the relationships between the mini-frames of the plural cameras, real-time video stitching can be accelerated by reusing the movement relationship of a first mini-frame of a first camera on corresponding mini-frames of the other cameras in the system.

01 Jan 2011
TL;DR: The methodology demonstrates why smaller pixel cameras provide better sampling at low magnifications and why these cameras are less efficient at collecting light than medium and larger pixel cameras.
Abstract: This article develops a methodology for understanding the balance between imaging and radiometric properties in microscopy systems where digital sensors (CCD, EMCCD, and scientific-grade CMOS) are utilized. The methodology demonstrates why smaller pixel cameras provide better sampling at low magnifications and why these cameras are less efficient at collecting light than medium and larger pixel cameras. The article also explores how different optical configurations can improve light throughput while maintaining adequate sampling with smaller pixel cameras.
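
A worked example of the balance the article analyzes, combining the Rayleigh resolution criterion with Nyquist sampling at the sensor plane (illustrative numbers, not taken from the article):

```python
# Rayleigh resolution referred to the sensor plane, versus the pixel size
# needed for Nyquist sampling.
wavelength = 0.55        # um, green light

def projected_resolution(mag, na):
    """Rayleigh resolution limit projected onto the sensor, in um."""
    return 0.61 * wavelength / na * mag

for mag, na in [(10, 0.45), (40, 0.95), (100, 1.40)]:
    r = projected_resolution(mag, na)
    print(f"{mag:>3}x / NA {na}: need pixel <= {r / 2:.2f} um (Nyquist)")

# -> at 10x the limit is ~3.7 um: only small-pixel cameras sample it
#    properly, but a small pixel also collects proportionally less light
#    (area ~ pixel^2), which is the trade-off described above.
```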

Proceedings ArticleDOI
07 Aug 2011
TL;DR: The introduction of multiple depth sensors into the system allows us to obtain approximate depth information for many pixels, thereby providing a valuable hint for estimating pixel correspondences between cameras.
Abstract: In this ongoing work, we present our efforts to incorporate depth sensors [Microsoft Corp 2010] into a multi-camera system for free-viewpoint video [Lipski et al. 2010]. Both the video cameras and the depth sensors are consumer grade. Our free-viewpoint system, the Virtual Video Camera, uses image-based rendering to create novel views between widely spaced (up to 15 degrees) cameras, using dense image correspondences. The introduction of multiple depth sensors into the system allows us to obtain approximate depth information for many pixels, thereby providing a valuable hint for estimating pixel correspondences between cameras.

Patent
06 Jun 2011
TL;DR: The Accident Prevention Camera as discussed by the authors is a video recording system for automotive vehicles that comprises a closed circuit recording system comprised of four integrated video cameras, a digital recording unit, a video monitor, GPS tracker, and interconnecting wiring.
Abstract: The Accident Prevention Camera records digitized video images by means of a video recording system for automotive vehicles that comprises a closed-circuit recording system consisting of four integrated video cameras, a digital recording unit, a video monitor, a GPS tracker, and interconnecting wiring. The cameras are strategically mounted at selected positions on the vehicle that afford optimal coverage of the areas around the vehicle's front, rear, and sides. The video images captured by the cameras and recorded on the digital recording unit provide comprehensive video proof which is used to determine whether or not a driver was at fault in the event of an accident. The Accident Prevention Camera employs highly sophisticated equipment integrated with software that enables the system to record everything that happens from four different digital cameras, one situated on each side of the vehicle.