
Showing papers on "Smart camera published in 2011"


Proceedings ArticleDOI
20 Jun 2011
TL;DR: A novel algorithm is presented for automatically applying constrainable, L1-optimal camera paths to generate stabilized videos by removing undesired motions, without the need for user interaction or costly 3D reconstruction of the scene.
Abstract: We present a novel algorithm for automatically applying constrainable, L1-optimal camera paths to generate stabilized videos by removing undesired motions. Our goal is to compute camera paths that are composed of constant, linear and parabolic segments mimicking the camera motions employed by professional cinematographers. To this end, our algorithm is based on a linear programming framework to minimize the first, second, and third derivatives of the resulting camera path. Our method allows for video stabilization beyond the conventional filtering of camera paths that only suppresses high frequency jitter. We incorporate additional constraints on the path of the camera directly in our algorithm, allowing for stabilized and retargeted videos. Our approach accomplishes this without the need for user interaction or costly 3D reconstruction of the scene, and works as a post-process for videos from any camera or from an online source.
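As a rough illustration of the optimization at the heart of this approach (not the authors' implementation), the sketch below smooths a toy 1-D camera path by minimizing the L1 norms of its first three derivatives subject to a crop-window proximity constraint; the solver choice (cvxpy), the weights, and the window size are all invented, and the actual paper works with 2-D motion models and richer constraints.

```python
# Toy L1 path optimization: constant / linear / parabolic segments emerge
# from penalizing velocity, acceleration, and jerk in the L1 norm.
import numpy as np
import cvxpy as cp

c = np.cumsum(np.random.randn(200))          # noisy 1-D camera trajectory
p = cp.Variable(len(c))                      # stabilized path to solve for

objective = cp.Minimize(
    10.0 * cp.sum(cp.abs(cp.diff(p, 1)))     # velocity -> constant segments
    + 1.0 * cp.sum(cp.abs(cp.diff(p, 2)))    # acceleration -> linear segments
    + 100.0 * cp.sum(cp.abs(cp.diff(p, 3)))  # jerk -> parabolic segments
)
constraints = [cp.abs(p - c) <= 20.0]        # keep the crop window on-frame

cp.Problem(objective, constraints).solve()
smoothed_path = p.value
```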

370 citations


Patent
08 Jul 2011
TL;DR: In this paper, apparatus and methods are presented for allowing smart phone users to "capture the moment" by allowing easy access to a camera application when a mobile device is in an above-lock (or locked) mode, while also preventing unauthorized access to other smart phone functionality.
Abstract: Apparatus and methods are disclosed for allowing smart phone users to “capture the moment” by allowing easy access to a camera application when a mobile device is in an above-lock (or locked) mode, while also preventing unauthorized access to other smart phone functionality. According to one embodiment of the disclosed technology, a method of operating a mobile device having an above-lock state and a below-lock state comprises receiving input data requesting invocation of a camera application when the mobile device is in the above-lock state and invoking the requested camera application on the device, where one or more functions of the requested application are unavailable as a result of the mobile device being in the above-lock state.

202 citations


Proceedings ArticleDOI
17 May 2011
TL;DR: A novel method to select camera sensors from an arbitrary deployment to form a camera barrier is proposed, and redundancy reduction techniques to effectively reduce the number of cameras used are presented.
Abstract: Barrier coverage has attracted much attention in the past few years. However, most of the previous works focused on traditional scalar sensors. We propose to study barrier coverage in camera sensor networks. One fundamental difference between camera and scalar sensors is that cameras from different positions can form quite different views of the object. As a result, simply combining the sensing range of the cameras across the field does not necessarily form an effective camera barrier since the face image (or the interested aspect) of the object may be missed. To address this problem, we use the angle between the object's facing direction and the camera's viewing direction to measure the quality of sensing. An object is full-view covered if there is always a camera to cover it no matter which direction it faces and the camera's viewing direction is sufficiently close to the object's facing direction. We study the problem of constructing a camera barrier, which is essentially a connected zone across the monitored field such that every point within this zone is full-view covered. We propose a novel method to select camera sensors from an arbitrary deployment to form a camera barrier, and present redundancy reduction techniques to effectively reduce the number of cameras used. We also present techniques to deploy cameras for barrier coverage in a deterministic environment, and analyze and optimize the number of cameras required for this specific deployment under various parameters.
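The full-view condition at a single point can be checked directly. The sketch below is a 2-D toy, not the paper's barrier construction: it ignores the cameras' field-of-view limits, samples facing directions, and all names and parameters are invented.

```python
# Toy 2-D check of full-view coverage at one point: every possible facing
# direction must have some in-range camera within the effective angle theta.
import numpy as np

def full_view_covered(point, cam_positions, sensing_range, theta, n_dirs=360):
    dirs = []
    for cam in cam_positions:
        v = np.asarray(cam) - np.asarray(point)
        if np.linalg.norm(v) <= sensing_range:
            dirs.append(np.arctan2(v[1], v[0]))   # direction point -> camera
    if not dirs:
        return False
    dirs = np.array(dirs)
    for f in np.linspace(-np.pi, np.pi, n_dirs, endpoint=False):
        gap = np.abs((dirs - f + np.pi) % (2 * np.pi) - np.pi)
        if gap.min() > theta:                     # this facing direction is missed
            return False
    return True
```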

133 citations


Journal ArticleDOI
TL;DR: This paper uses the abstract notion of light field representation as a general tool to describe computational camera designs, where each camera can be formulated as a projection of a high-dimensional light field to a 2-D image sensor.
Abstract: A computational camera uses a combination of optics and processing to produce images that cannot be captured with traditional cameras. In the last decade, computational imaging has emerged as a vibrant field of research. A wide variety of computational cameras has been demonstrated to encode more useful visual information in the captured images, as compared with conventional cameras. In this paper, we survey computational cameras from two perspectives. First, we present a taxonomy of computational camera designs according to the coding approaches, including object side coding, pupil plane coding, sensor side coding, illumination coding, camera arrays and clusters, and unconventional imaging systems. Second, we use the abstract notion of light field representation as a general tool to describe computational camera designs, where each camera can be formulated as a projection of a high-dimensional light field to a 2-D image sensor. We show how individual optical devices transform light fields and use these transforms to illustrate how different computational camera designs (collections of optical devices) capture and encode useful visual information.
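The projection view is easy to make concrete. In the toy sketch below (array shapes and the binary mask are invented for illustration), a conventional camera integrates a 4-D light field L(u, v, s, t) over the aperture plane (u, v), while a coded-aperture design modulates the light field before the same projection to the 2-D sensor.

```python
# Toy model: a camera as a projection of a 4-D light field onto a 2-D sensor.
import numpy as np

rng = np.random.default_rng(0)
L = rng.random((8, 8, 64, 64))             # light field indexed as (u, v, s, t)

conventional = L.sum(axis=(0, 1))          # integrate over the aperture plane

mask = rng.integers(0, 2, size=(8, 8)).astype(float)   # coded aperture
coded = np.einsum('uv,uvst->st', mask, L)  # modulate, then project to 2-D
```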

127 citations


Patent
13 Sep 2011
TL;DR: In this article, a wearable digital video camera (10) is equipped with a wireless connection protocol and global navigation and location positioning system technology to provide remote image acquisition control and viewing, and a rotating mount (300) with a locking member (330) on the camera housing (22) allows adjustment of the pointing angle of the wearable digital video camera when it is attached to a mounting surface.
Abstract: A wearable digital video camera (10) is equipped with wireless connection protocol and global navigation and location positioning system technology to provide remote image acquisition control and viewing. The Bluetooth® packet-based open wireless technology standard protocol (400) is preferred for use in providing control signals or streaming data to the digital video camera and for accessing image content stored on or streaming from the digital video camera. The GPS technology (402) is preferred for use in tracking of the location of the digital video camera as it records image information. A rotating mount (300) with a locking member (330) on the camera housing (22) allows adjustment of the pointing angle of the wearable digital video camera when it is attached to a mounting surface.

126 citations


Patent
09 Sep 2011
TL;DR: In this paper, while a mobile electronic device is in a first operation state, it receives sensor data from one or more sensors of the mobile electronic device, analyzes the sensor data to estimate whether an unlock operation is imminent, and, in response to a positive determination, initializes the camera subsystem so that the camera is ready to capture a face as soon as the user directs the camera lens to his or her face.
Abstract: In one embodiment, while a mobile electronic device is in a first operation state, it receives sensor data from one or more sensors of the mobile electronic device. The mobile electronic device in a locked state analyzes the sensor data to estimate whether an unlock operation is imminent, and in response to a positive determination, initializes the camera subsystem so that the camera is ready to capture a face as soon as the user directs the camera lens to his or her face. In particular embodiments, the captured image is utilized by a facial recognition algorithm to determine whether the user is authorized to use the mobile device. In particular embodiments, the captured facial recognition image may be leveraged for use on a social network.

117 citations


Patent
02 Jun 2011
TL;DR: In this paper, pre-image-acquisition information is obtained by a digital camera and transmitted to a system external to the digital camera, where the system is configured to provide image-acquisition settings to the camera.
Abstract: Pre-image-acquisition information is obtained by a digital camera and transmitted to a system external to the digital camera. The system is configured to provide image-acquisition settings to the digital camera. In this regard, the digital camera receives the image-acquisition settings from the external system and performs an image-acquisition sequence based at least upon the received image-acquisition settings. Accordingly, the determination of image-acquisition settings can be performed remotely from the digital camera, where data-processing resources can greatly exceed those within the digital camera.

106 citations


Journal ArticleDOI
TL;DR: This article shows how to develop an end-to-end framework for integrated sensing and analysis in a distributed camera network so as to maximize various scene-understanding performance criteria (e.g., tracking accuracy, best shot, and image resolution).
Abstract: Over the past decade, large-scale camera networks have become increasingly prevalent in a wide range of applications, such as security and surveillance, disaster response, and environmental modeling. In many applications, bandwidth constraints, security concerns, and difficulty in storing and analyzing large amounts of data centrally at a single location necessitate the development of distributed camera network architectures. Thus, the development of distributed scene-analysis algorithms has received much attention lately. However, the performance of these algorithms often suffers because of the inability to effectively acquire the desired images, especially when the targets are dispersed over a wide field of view (FOV). In this article, we show how to develop an end-to-end framework for integrated sensing and analysis in a distributed camera network so as to maximize various scene-understanding performance criteria (e.g., tracking accuracy, best shot, and image resolution).

84 citations


Patent
16 Aug 2011
TL;DR: In this paper, an improved automatic image capture system for an intelligent mobile device having a camera guides a user to position the camera so only a single image need to be automatically captured.
Abstract: An improved automatic image capture system for an intelligent mobile device having a camera guides a user to position the camera so only a single image needs to be automatically captured. Syntactic features, using a view finder on a display of the intelligent mobile device, guide the user to maximize the occupancy of the view finder with the document, based upon detected corners of the document. When occupancy is maximized, the camera automatically captures the image of the document for post-processing using semantic knowledge of the document. A confidence level is computed based on the semantic knowledge to qualify an image with greater accuracy, and without user intervention, prior to transmission to a remote site.
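One plausible occupancy measure from the four detected corners is the fraction of the view finder's area that the document quadrilateral covers. The sketch below uses the shoelace formula; the function name and threshold-triggered capture are assumptions, and corner detection itself is out of scope here.

```python
import numpy as np

def occupancy(corners, frame_w, frame_h):
    """corners: (4, 2) array of detected document corners in order.
    Returns the fraction of the view finder the document occupies."""
    x, y = corners[:, 0], corners[:, 1]
    # shoelace formula for the area of the corner quadrilateral
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return area / (frame_w * frame_h)

# e.g., trigger automatic capture once occupancy(...) exceeds a threshold
```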

79 citations


Journal ArticleDOI
TL;DR: An advanced geometric camera calibration technique which employs a frontal image concept and a hyper-precise control point detection scheme with digital image correlation is presented, and simulation and real experimental results have successfully demonstrated the superiority of the proposed technique.
Abstract: In many machine vision applications, a crucial step is to accurately determine the relation between the image of the object and its physical dimension by performing a calibration process. Over time, various calibration techniques have been developed. Nevertheless, the existing methods cannot satisfy the ever-increasing demands for higher accuracy performance. In this letter, an advanced geometric camera calibration technique which employs a frontal image concept and a hyper-precise control point detection scheme with digital image correlation is presented. Simulation and real experimental results have successfully demonstrated the superiority of the proposed technique. © 2011 Society of Photo-Optical Instrumentation Engineers.

74 citations


Patent
12 Jan 2011
TL;DR: In this paper, a method of automatically capturing images with precision using an intelligent mobile device having a camera loaded with an appropriate image capture application is presented, where each image is qualified to determine whether it is in focus and entirely within the field of view of the camera.
Abstract: A method of automatically capturing images with precision uses an intelligent mobile device having a camera loaded with an appropriate image capture application. When a user initializes the application, the camera starts taking images of the object. Each image is qualified to determine whether it is in focus and entirely within the field of view of the camera. Two or more qualified images are captured and stored for subsequent processing. The qualified images are aligned with each other by an appropriate perspective transformation so they each fill a common frame. Averaging of the aligned images reduces noise and a sharpening filter enhances edges, which produces a sharper image. The processed image is then converted into a two-level, black and white image, which may be presented to the user for approval prior to submission via wireless or WiFi to a remote location.
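The processing chain reads like a standard multi-frame document pipeline. Below is a rough OpenCV sketch under assumptions that are not the patent's specified parameters: grayscale uint8 inputs, ORB features for the perspective alignment, an unsharp mask for edge enhancement, and Otsu thresholding for the two-level conversion.

```python
import cv2
import numpy as np

def fuse_and_binarize(images):
    """images: list of aligned-enough grayscale uint8 captures of a document."""
    base = images[0]
    acc = base.astype(np.float32)
    orb = cv2.ORB_create()
    kp0, des0 = orb.detectAndCompute(base, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    for img in images[1:]:
        kp, des = orb.detectAndCompute(img, None)
        matches = sorted(matcher.match(des, des0), key=lambda m: m.distance)
        src = np.float32([kp[m.queryIdx].pt for m in matches[:50]])
        dst = np.float32([kp0[m.trainIdx].pt for m in matches[:50]])
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC)      # align to base
        warped = cv2.warpPerspective(img, H, base.shape[::-1])
        acc += warped.astype(np.float32)
    avg = (acc / len(images)).astype(np.uint8)               # averaging cuts noise
    blur = cv2.GaussianBlur(avg, (0, 0), 3)
    sharp = cv2.addWeighted(avg, 1.5, blur, -0.5, 0)         # unsharp mask
    _, bw = cv2.threshold(sharp, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return bw
```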

Patent
26 Apr 2011
TL;DR: In this article, an imaging system of a mobile communications device includes a first camera having a control interface and a data interface to a controller, a second camera having a data interface to the first camera, and a processor to combine an image from the second camera, received through the second camera's data interface, with an image from the first camera and to send the combined image to the controller through the data interface.
Abstract: Image overlay in a mobile device is described. In one embodiment an imaging system of a mobile communications device includes a first camera having a control interface to a controller and a data interface to the controller, a second camera having a data interface to the first camera, and a processor to combine an image from the second camera received through the second camera data interface with an image from the first camera and to send the combined image to the controller through the data interface.

Patent
13 Jan 2011
TL;DR: In this paper, a camera system generates video data for an object from a viewpoint of the camera system at a location of the object, and the information is displayed on images in the video data on display systems at a number of locations.
Abstract: A method and apparatus for displaying information. A camera system generates video data for an object from a viewpoint of the camera system at a location of the object. Information is identified about the object. The information is displayed on images in the video data on a display system at a number of locations. The display of the images with the information on the images in the video data at the number of locations is from the viewpoint of the camera system.

Proceedings ArticleDOI
01 Dec 2011
TL;DR: An artificially intelligent camera-based system is presented that automatically detects if a person within the field of view has fallen; the paper outlines the algorithms used and presents empirical validation of their effectiveness.
Abstract: Falling in the home is one of the major challenges to independent living among older adults. The associated costs, coupled with a rapidly growing elderly population, are placing a burden on healthcare systems worldwide that will swiftly become unbearable. To facilitate expeditious emergency care, we have developed an artificially intelligent camera-based system that automatically detects if a person within the field-of-view has fallen. The system addresses concerns raised in earlier work and the requirements of a widely deployable in-home solution. The presented prototype utilizes a consumer-grade camera modified with a wide-angle lens. Machine learning techniques applied to carefully engineered features allow the system to classify falls at high accuracy while maintaining invariance to lighting, environment and the presence of multiple moving objects. This paper describes the system, outlines the algorithms used and presents empirical validation of its effectiveness.

Patent
07 Dec 2011
TL;DR: In this paper, examples are disclosed for determining a suggested camera pose or suggested camera settings for a user to capture one or more images, based on an indication of the user's interest and gathered information associated with the user's interests.
Abstract: Examples are disclosed for determining a suggested camera pose or suggested camera settings for a user to capture one or more images. In some examples, the suggested camera pose or suggested camera settings may be based on an indication of the user's interest and gathered information associated with the user's interests. The user may be guided to adjust an actual camera pose or actual camera settings to match the suggested camera pose or suggested camera settings.

Patent
13 Jan 2011
TL;DR: In this article, a method is presented for calibrating a vehicular camera after the camera has been installed in a vehicle, wherein the camera is of a type that applies an overlay to an image and outputs the image with the overlay to an in-vehicle display.
Abstract: In a first aspect, the invention is directed to a vehicular camera and a method for calibrating the camera after it has been installed in a vehicle. In particular, the invention is directed to calibrating a vehicular camera after the camera has been installed in a vehicle, wherein the camera is of a type that applies an overlay to an image and outputs the image with the overlay to an in-vehicle display.

Proceedings ArticleDOI
06 Nov 2011
TL;DR: Experiments show near-perfect accuracy in identifying cameras of different brands and models, and the proposed method performs quite well in distinguishing among camera devices of the same model.
Abstract: Source camera identification finds many applications in the real world. Although many identification methods have been proposed, they work with only a small set of cameras, and are weak at identifying cameras of the same model. Based on the observation that a digital image would not change if the same Auto-White Balance (AWB) algorithm is applied for the second time, this paper proposes to identify the source camera by approximating the AWB algorithm used inside the camera. To the best of our knowledge, this is the first time that a source camera identification method based on AWB has been reported. Experiments show near perfect accuracy in identifying cameras of different brands and models. Moreover, the proposed method performs quite well in distinguishing among camera devices of the same model: as AWB is done at the end of the imaging pipeline, any small differences induced earlier will lead to different types of AWB output. Furthermore, the performance remains stable as the number of cameras grows large.
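The core observation can be illustrated with a stand-in gray-world AWB (the paper approximates each camera's actual AWB algorithm; gray-world is merely an invented substitute): re-applying an algorithm to an image it already balanced should change almost nothing, so the candidate camera whose approximated AWB yields the smallest residual is the likely source.

```python
# Stand-in illustration: gray-world AWB re-applied to an already balanced
# image should leave it nearly unchanged.
import numpy as np

def gray_world(img):
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means                 # equalize channel means
    return np.clip(img * gains, 0, 255)

def awb_residual(img):
    """Small residual suggests img was already balanced by this AWB."""
    img = img.astype(float)
    return np.abs(gray_world(img) - img).mean()
```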

Patent
09 Dec 2011
TL;DR: In this paper, a system for autonomous camera control is presented, where a first robot has a surgical tool mounted as an end effector and a second robot has an end-effector camera.
Abstract: A system for autonomous camera control is provided. The system may include a first robot having a surgical tool mounted as an end effector and a second robot having a camera mounted as an end effector. A controller may be provided for manipulating the second robot, where the controller stores a first kinematic model for the first robot and a second kinematic model for the second robot. The controller may be configured to automatically manipulate the second robot to position the camera based on the second kinematic model and an expected position of the surgical tool according to the first kinematic model of the first robot.

Patent
01 Mar 2011
TL;DR: In this paper, a telepresence system that includes a cart with a robot face and an overhead camera is described. The system also includes a remote station that is coupled to the robot face, which can display video images captured by the robot camera and/or overhead camera.
Abstract: A tele-presence system that includes a cart. The cart includes a robot face that has a robot monitor, a robot camera, a robot speaker, a robot microphone, and an overhead camera. The system also includes a remote station that is coupled to the robot face and the overhead camera. The remote station includes a station monitor, a station camera, a station speaker and a station microphone. The remote station can display video images captured by the robot camera and/or overhead camera. By way of example, the cart can be used in an operating room, wherein the overhead camera can be placed in a sterile field and the robot face can be used in a non-sterile field. The user at the remote station can conduct a teleconference through the robot face and also obtain a view of a medical procedure through the overhead camera.

Patent
13 Jan 2011
TL;DR: In this paper, a detection system for determining data embedded into the light output of a light source in a form of a repeating sequence of N symbols is presented, where a camera is configured to acquire a series of images of the scene via specific open/closure patterns of the shutter.
Abstract: The invention relates to a detection system for determining data embedded into the light output of a light source in a form of a repeating sequence of N symbols. The detection system includes a camera and a processing unit. The camera is configured to acquire a series of images of the scene via specific open/closure patterns of the shutter. The processing unit is configured to process the acquired series of images to determine the repeating sequence of N symbols. By carefully triggering when a shutter of the camera is open to capture the different symbols of the encoded light within each frame time of a camera, a conventional camera with a relatively long frame time may be employed. Therefore, the techniques presented herein are suitable for detecting the invisible “high frequency” coded light while using less expensive cameras than those used in the prior art.
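A toy linear-algebra reading of the shutter trick (all numbers invented): if each long frame integrates the N repeating symbols through a different pattern of shutter-open slots, the per-frame brightness measurements form a linear system that recovers the symbols.

```python
import numpy as np

N = 4                                      # repeating sequence of N symbols
symbols = np.array([0.2, 0.9, 0.5, 0.7])   # unknown light levels to recover
patterns = np.eye(N) + np.eye(N, k=1)      # shutter-open slots for each frame
frames = patterns @ symbols                # brightness integrated per frame
recovered = np.linalg.solve(patterns, frames)   # equals `symbols`
```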

Journal ArticleDOI
TL;DR: A novel visual servo controller is presented that is designed to control the pose of a mobile camera to keep multiple objects in its field of view (FOV), together with a proof of stability for tracking three or fewer targets.
Abstract: This study introduces a novel visual servo controller that is designed to control the pose of the camera to keep multiple objects in the field of view (FOV) of a mobile camera. In contrast with other visual servo methods, the control objective is not formulated in terms of a goal pose or a goal image. Rather, a set of underdetermined task functions are developed to regulate the mean and variance of a set of image features. Regulating these task functions inhibits feature points from leaving the camera FOV. An additional task function is used to maintain a high level of motion perceptibility, which ensures that desired feature point velocities can be achieved. These task functions are mapped to camera velocity, which serves as the system input. A proof of stability is presented for tracking three or fewer targets. Experiments of tracking eight or more targets have verified the performance of the proposed method.
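A sketch of the task-function idea follows, under stated assumptions: 2-D point features, an invented gain, and a simple pseudoinverse law that outputs desired feature velocities (the paper instead maps the task error to camera velocity, which serves as the system input).

```python
# Regulate the mean and variance of a set of image feature coordinates.
import numpy as np

def task_velocity(features, mean_ref, var_ref, gain=0.5):
    """features: (N, 2) image points; returns (N, 2) desired velocities."""
    n = len(features)
    m = features.mean(axis=0)
    v = features.var(axis=0)
    e = np.concatenate([m - mean_ref, v - var_ref])   # task-function error

    # Jacobian of [mean; var] w.r.t. features stacked as [x1, y1, x2, y2, ...]
    J_mean = np.tile(np.eye(2) / n, (1, n))
    d = 2.0 * (features - m) / n                      # d(var) / d(coordinate)
    J_var = np.zeros((2, 2 * n))
    J_var[0, 0::2] = d[:, 0]
    J_var[1, 1::2] = d[:, 1]
    J = np.vstack([J_mean, J_var])

    return (-gain * np.linalg.pinv(J) @ e).reshape(n, 2)
```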

Journal ArticleDOI
TL;DR: A multi-aperture system called "Optical Cluster Eye", based on conventional micro-optical fabrication techniques, can be fabricated on wafer-level with high yield due to small aperture diameters and low sags, and captures images at VGA video resolution.
Abstract: Wafer-level optics is considered as a cost-effective approach to miniaturized cameras, because fabrication and assembly are carried out for thousands of lenses in parallel. However, in most cases the micro-optical fabrication process is not mature enough to reach the required accuracy of the optical elements, which may have complex profiles and sags in the mm-scale. In contrast, the creation of microlens arrays is well controllable, so we propose a multi-aperture system called "Optical Cluster Eye" which is based on conventional micro-optical fabrication techniques. The proposed multi-aperture camera consists of many optical channels, each transmitting a segment of the whole field of view. The design of the system provides the stitching of the partial images, so that a seamless image is formed and a commercially available image sensor can be used. The system can be fabricated on wafer-level with high yield due to small aperture diameters and low sags. The realized optics has a lateral size of 2.2 × 2.9 mm², a total track length of 1.86 mm, and captures images at VGA video resolution.

Journal ArticleDOI
TL;DR: This work presents an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos and incorporates image timestamping, detection of platform reboots, and reporting of the system status.
Abstract: Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.
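As a purely software stand-in for the hardware-backed guarantees (the prototype relies on Trusted Computing hardware; this HMAC hash chain is an invented illustration), each frame can be bound to a timestamp and to all previous frames, so that a holder of the key can detect tampering, reordering, or dropped frames.

```python
import hashlib
import hmac
import time

def sign_stream(frames, key):
    """Yield (frame, timestamp, tag) with tags chained across frames."""
    prev_tag = b"\x00" * 32
    for frame in frames:                  # each frame is a bytes object
        ts = str(time.time()).encode()
        tag = hmac.new(key, prev_tag + ts + frame, hashlib.sha256).digest()
        yield frame, ts, tag
        prev_tag = tag                    # chaining detects reordering/drops
```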

Proceedings ArticleDOI
26 Oct 2011
TL;DR: This work presents a real-time tracking method that performs motion estimation of a consumer RGB-D camera with respect to an unknown environment while at the same time reconstructing this environment as a dense textured mesh and shows the superiority of the proposed tracking in terms of accuracy, robustness and usability.
Abstract: Compared to standard color cameras, RGB-D cameras are designed to additionally provide the depth of imaged pixels which in turn results in a dense colored 3D point cloud representing the environment from a certain viewpoint. We present a real-time tracking method that performs motion estimation of a consumer RGB-D camera with respect to an unknown environment while at the same time reconstructing this environment as a dense textured mesh. Unlike parallel tracking and mapping performed with a standard color or grey scale camera, tracking with an RGB-D camera allows a correctly scaled camera motion estimation. Therefore, there is no need for measuring the environment by any additional tool or equipping the environment by placing objects in it with known sizes. The tracking can be directly started and does not require any preliminary known and/or constrained camera motion. The colored point clouds obtained from every RGB-D image are used to create textured meshes representing the environment from a certain camera view and the real-time estimated camera motion is used to correctly align these meshes over time in order to combine them into a dense reconstruction of the environment. We quantitatively evaluated the proposed method using real image sequences of a challenging scenario and their corresponding ground truth motion obtained with a mechanical measurement arm. We also compared it to a commonly used state-of-the-art method where only the color information is used. We show the superiority of the proposed tracking in terms of accuracy, robustness and usability. We also demonstrate its usage in several Augmented Reality scenarios where the tracking allows a reliable camera motion estimation and the meshing increases the realism of the augmentations by correctly handling their occlusions.
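Depth is what makes correctly scaled motion estimation possible. As a much-simplified stand-in for the paper's tracker, frame-to-frame ICP alignment of the RGB-D point clouds (Open3D sketch below; the voxel size and correspondence distance are invented) already returns a metric-scale rigid transform with no calibration objects in the scene.

```python
import numpy as np
import open3d as o3d

def estimate_motion(src_pcd, dst_pcd, voxel=0.02):
    """Align two RGB-D point clouds; returns a 4x4 metric-scale transform."""
    src = src_pcd.voxel_down_sample(voxel)
    dst = dst_pcd.voxel_down_sample(voxel)
    result = o3d.pipelines.registration.registration_icp(
        src, dst, voxel * 2, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```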

Patent
22 Jul 2011
TL;DR: In this paper, a computer-implemented image-processing system and method for the automatic generation of video sequences that can be associated with a televised event is presented.
Abstract: What is disclosed is a computer-implemented image-processing system and method for the automatic generation of video sequences that can be associated with a televised event. The methods can include the steps of: Defining a reference keyframe from a reference view from a source image sequence; From one or more keyframes, automatically computing one or more sets of virtual camera parameters; Generating a virtual camera flight path, which is described by a change of virtual camera parameters over time, and which defines a movement of a virtual camera and a corresponding change of a virtual view; and Rendering and storing a virtual video stream defined by the virtual camera flight path.
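The flight path amounts to a time-parameterized interpolation of virtual camera parameters between keyframes; a minimal sketch follows (the flat parameter-vector layout is an assumption, and real systems would interpolate rotations more carefully).

```python
import numpy as np

def flight_path(params_a, params_b, n_frames):
    """Linearly interpolate virtual camera parameters (e.g., position,
    pan, tilt, focal length) between two keyframe parameter vectors."""
    t = np.linspace(0.0, 1.0, n_frames)[:, None]
    return (1.0 - t) * np.asarray(params_a) + t * np.asarray(params_b)
```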

Proceedings ArticleDOI
12 Apr 2011
TL;DR: This work presents a centralized control architecture for assigning PTZ cameras to targets so that the specification is met for any admissible behavior of the targets and proposes a distributed synthesis methodology to decompose the global specification into local specifications for each PTZ camera.
Abstract: We considered the problem of designing control protocols for pan-tilt-zoom (PTZ) cameras within a smart camera network where the goal is to guarantee certain temporal logic specifications related to a given surveillance task. We first present a centralized control architecture for assigning PTZ cameras to targets so that the specification is met for any admissible behavior of the targets. Then, in order to alleviate the computational complexity associated with LTL synthesis and to enable implementation of local control protocols on individual PTZ cameras, we propose a distributed synthesis methodology. The main idea is to decompose the global specification into local specifications for each PTZ camera. These decompositions allow the protocols for each camera to be separately synthesized and locally implemented while guaranteeing the global specifications to hold. A thorough design example is presented to illustrate the steps of the proposed procedure.

Journal ArticleDOI
01 Feb 2011
TL;DR: A tracking-based surveillance system that is capable of tracking multiple moving objects, with almost real-time response, through the effective cooperation of multiple pan-tilt cameras, and a hierarchical camera selection and task assignment strategy, known as the online position strategy, to integrate all of the distributed camera agents are presented.
Abstract: This paper presents a tracking-based surveillance system that is capable of tracking multiple moving objects, with almost real-time response, through the effective cooperation of multiple pan-tilt cameras. To construct this surveillance system, the distributed camera agent, which tracks multiple moving objects independently, is first developed. The particle filter is extended with target depth estimate to track multiple targets that may overlap with one another. A strategy to select the suboptimal camera action is then proposed for a camera mounted on a pan-tilt platform that has been assigned to track multiple targets within its limited field of view simultaneously. This strategy is based on the mutual information and the Monte Carlo method to maintain coverage of the tracked targets. Finally, for a surveillance system with a small number of active cameras to effectively monitor a wide space, this system is aimed to maximize the number of targets to be tracked. We further propose a hierarchical camera selection and task assignment strategy, known as the online position strategy, to integrate all of the distributed camera agents. The overall performance of the multicamera surveillance system has been verified with computer simulations and extensive experiments.

Proceedings ArticleDOI
11 Oct 2011
TL;DR: A smart camera, LabVIEW, and vision software tools are utilized to generate eye detection and tracking algorithms; the implemented algorithms and their performance results on the smart camera are presented.
Abstract: Real-time eye and iris tracking is important for hands-off gaze-based password entry, instrument control by paraplegic patients, Internet user studies, as well as homeland security applications. In this project, a smart camera, LabVIEW and vision software tools are utilized to generate eye detection and tracking algorithms. The algorithms are uploaded to the smart camera for on-board image processing. Eye detection refers to finding eye features in a single frame. Eye tracking is achieved by detecting the same eye features across multiple image frames and correlating them to a particular eye. The algorithms are tested for eye detection and tracking under different conditions including different angles of the face, head motion speed, and eye occlusions to determine their usability for the proposed applications. This paper presents the implemented algorithms and performance results of these algorithms on the smart camera.

Patent
27 Jun 2011
TL;DR: In this paper, a secondary sensor senses the object first, and wakes up the processor in the camera so that when the object is sensed by a main sensor, the processor is ready to take the picture immediately.
Abstract: A surveillance camera has plural triggering sensors that sense moving objects. A secondary sensor senses the object first, and wakes up the processor in the camera so that when the object is sensed by a main sensor, the processor is ready to take the picture immediately.

Patent
07 Apr 2011
TL;DR: In this article, a 3D rendering method is proposed to increase the performance when projecting and compositing multiple images or video sequences from real-world cameras on top of a precise 3D model of the real world.
Abstract: A 3D rendering method is proposed to increase the performance when projecting and compositing multiple images or video sequences from real-world cameras on top of a precise 3D model of the real world. Unlike previous methods that relied on shadow-mapping and that were limited in performance due to the need to re-render the complex scene multiple times per frame, the proposed method uses, for each camera, one Camera Projection Mesh ("CPM") of fixed and limited complexity. The CPM that surrounds each camera is effectively molded over the surrounding 3D world surfaces or areas visible from the video camera. Rendering and compositing of the CPMs may be entirely performed on the Graphics Processing Unit ("GPU") using custom shaders for optimal performance. The method also enables improved viewshed analysis and fast visualization of the coverage of multiple cameras.