Showing papers on "Smart camera published in 2012"


Proceedings ArticleDOI
01 Dec 2012
TL;DR: A novel scheme for data reception in a mobile phone using visible light communications (VLC) is proposed, exploiting the rolling shutter effect of CMOS sensors, and a data rate much higher than the camera frame rate is achieved.
Abstract: In this paper, a novel scheme for data reception in a mobile phone using visible light communications (VLC) is proposed. The camera of the smartphone is used as a receiver in order to capture the continuous changes in state (on-off) of the light, which are invisible to the human eye. The information is captured in the camera in the form of light and dark bands which are then decoded by the smartphone and the received message is displayed. By exploiting the rolling shutter effect of CMOS sensors, a data rate much higher than the camera frame rate is achieved.

446 citations


Journal ArticleDOI
TL;DR: This paper proposes and validates a method that overcomes the 2D limitation of vision trackers by employing stereo cameras and converting 2D pixel coordinates to three-dimensional (3D) metric coordinates, and demonstrates the suitability of the method for on-site tracking purposes.
Abstract: Vision trackers have been proposed as a promising alternative for tracking at large-scale, congested construction sites. They provide the location of a large number of entities in a camera view across frames. However, vision trackers provide only two-dimensional (2D) pixel coordinates, which are not adequate for construction applications. This paper proposes and validates a method that overcomes this limitation by employing stereo cameras and converting 2D pixel coordinates to three-dimensional (3D) metric coordinates. The proposed method consists of four steps: camera calibration, camera pose estimation, 2D tracking, and triangulation. Given that the method employs fixed, calibrated stereo cameras with a long baseline, appropriate algorithms are selected for each step. Once the first two steps reveal camera system parameters, the third step determines 2D pixel coordinates of entities in subsequent frames. The 2D coordinates are triangulated on the basis of the camera system parameters to obtain 3D coordinates. The methodology presented in this paper has been implemented and tested with data collected from a construction site. The results demonstrate the suitability of this method for on-site tracking purposes.

99 citations


Patent
01 Aug 2012
TL;DR: This patent presents methods, apparatuses, systems, and computer-readable media for taking and sharing pictures of objects of interest at an event or an occasion.
Abstract: Embodiments of the invention disclose methods, apparatuses, systems, and computer-readable media for taking and sharing pictures of objects of interest at an event or an occasion. A device implementing embodiments of the invention may enable a user to select objects of interest from a view displayed by a display unit coupled to the device. The device may also have pre-programmed objects including objects that the device detects. In addition, the device may detect people using the users' social networks by retrieving images from social networks like Facebook® and LinkedIn®.

91 citations


Patent
23 Dec 2012
TL;DR: In this patent, a depth map of the scene is produced using an output of the 3D camera and coordinated with a 2D image captured by the 2D camera to identify a 3D object in the scene that meets predetermined criteria for projection of images onto it.
Abstract: Embodiments of the invention provide apparatus and methods for interactive reality augmentation, including a 2-dimensional camera (36) and a 3-dimensional camera (38), associated depth projector and content projector (48), and a processor (40) linked to the 3-dimensional camera and the 2-dimensional camera. A depth map of the scene is produced using an output of the 3-dimensional camera, and coordinated with a 2-dimensional image captured by the 2-dimensional camera to identify a 3-dimensional object in the scene that meets predetermined criteria for projection of images thereon. The content projector projects a content image onto the 3-dimensional object responsively to instructions of the processor, which can be mediated by automatic recognition of user gestures.

90 citations


Proceedings ArticleDOI
Huan Ma1, Meng Yang1, Deying Li1, Yi Hong1, Wenping Chen1 
25 Mar 2012
TL;DR: This paper proposes an algorithm that finds a feasible solution to the Minimum Camera Barrier Coverage Problem (MCBCP) in wireless camera sensor networks, in which the camera sensors are deployed randomly in a target field, as well as an optimal algorithm for the problem.
Abstract: Barrier coverage is an important issue in wireless sensor networks. In wireless camera sensor networks, the cameras capture images or videos of target objects, and the position and angle of a camera sensor affect its sensing range; the barrier coverage problem in camera sensor networks therefore differs from that in scalar sensor networks. In this paper, based on the definition of full-view coverage, we focus on the Minimum Camera Barrier Coverage Problem (MCBCP) in wireless camera sensor networks, in which the camera sensors are deployed randomly in a target field. First, we partition the target field into disjoint subregions that are either full-view-covered or not full-view-covered. Then we model the full-view-covered regions and their relationships as a weighted directed graph. Based on this graph, we propose an algorithm that finds a feasible solution to the MCBCP problem, and we prove the correctness of the solution. Furthermore, we propose an optimal algorithm for the MCBCP problem. Finally, simulation results demonstrate that our algorithm outperforms the existing algorithm.

87 citations


Journal ArticleDOI
TL;DR: Through the use of simulation, the effects of individual digital camera components on system performance and image quality can be quantified, which can be helpful for both camera design and image quality assessment.
Abstract: We describe a simulation of the complete image processing pipeline of a digital camera, beginning with a radiometric description of the scene captured by the camera and ending with a radiometric description of the image rendered on a display. We show that there is a good correspondence between measured and simulated sensor performance. Through the use of simulation, we can quantify the effects of individual digital camera components on system performance and image quality. This computational approach can be helpful for both camera design and image quality assessment.

87 citations


Journal ArticleDOI
TL;DR: A high-performance stereo camera system to capture image sequences with high temporal and spatial resolution for the evaluation of various image processing tasks, primarily designed for complex outdoor and traffic scenes that frequently occur in the automotive industry.
Abstract: We describe a high-performance stereo camera system to capture image sequences with high temporal and spatial resolution for the evaluation of various image processing tasks. The system was primarily designed for complex outdoor and traffic scenes that frequently occur in the automotive industry, but is also suited for other applications. For this task the system is equipped with a very accurate inertial measurement unit and global positioning system, which provides exact camera movement and position data. The system is already in active use and has produced several terabytes of challenging image sequences which are partly available for download.

78 citations


Patent
Jeremy Burr1
13 Dec 2012
TL;DR: Techniques are disclosed for hand-based navigational gesture processing of a video stream that reduce platform power by employing a stepped and distributed pipeline process, wherein CPU-intensive processing is selectively performed.
Abstract: Techniques are disclosed for processing a video stream to reduce platform power by employing a stepped and distributed pipeline process, wherein CPU-intensive processing is selectively performed. The techniques are particularly well-suited for efficient hand-based navigational gesture processing of a video stream, in accordance with some embodiments. The stepped and distributed nature of the process allows for a reduction in power needed to transfer image data from a given camera to memory prior to image processing. In one example case, for instance, the techniques are implemented in a user's computer system wherein initial threshold detection (image disturbance) and optionally user presence (hand image) processing components are proximate to or within the system's camera, and the camera is located in or proximate to the system's primary display. The computer system may be any mobile or stationary computing system having a display and camera that are internal and/or external to the system.

78 citations


Proceedings ArticleDOI
05 Nov 2012
TL;DR: A system for easily preparing arbitrary wide-area environments for subsequent real-time tracking with a handheld device; an evaluation shows that minimal user effort is required to initialize a camera tracking session in an unprepared environment.
Abstract: We propose a system for easily preparing arbitrary wide-area environments for subsequent real-time tracking with a handheld device. Our system evaluation shows that minimal user effort is required to initialize a camera tracking session in an unprepared environment. We combine panoramas captured using a handheld omnidirectional camera from several viewpoints to create a point cloud model. After the offline modeling step, live camera pose tracking is initialized by feature point matching, and continuously updated by aligning the point cloud model to the camera image. Given a reconstruction made with less than five minutes of video, we achieve below 25 cm translational error and 0.5 degrees rotational error for over 80% of images tested. In contrast to camera-based simultaneous localization and mapping (SLAM) systems, our methods are suitable for handheld use in large outdoor spaces.

77 citations


Journal ArticleDOI
02 Aug 2012-Sensors
TL;DR: An intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement, designed to minimize video processing and transmission so that a large number of cameras can be deployed, making it suitable for use as an integrated safety and security solution in Smart Cities.
Abstract: This paper proposes an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for use as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language that is easy for human operators to understand, is capable of raising enriched alarms with descriptions of what is happening in the image, and can automate reactions to them, such as alerting the appropriate emergency services through the Smart City safety network.

64 citations


Proceedings Article
03 Oct 2012
TL;DR: A kernel-based object tracking algorithm using the mean shift method is described, which generates the trajectory of an object over time by locating its position in every frame of the video.
Abstract: In this age of dramatic technology shift, one of the most significant developments has been the emergence of digital video as an important aspect of daily life. While the Internet has significantly changed the way in which we obtain information, it has become much more attractive because of the powerful medium of video. In this paper we describe a kernel-based object tracking algorithm using the mean shift method. The goal of an object tracking algorithm is to generate the trajectory of an object over time by locating its position in every frame of the video. Object tracking has various applications in the field of computer vision, and a smart camera is a very important component for many of them, such as video surveillance, traffic monitoring systems, and mobile robots.

Patent
09 May 2012
TL;DR: In this patent, a hand-held wireless communications device includes a camera integrated within a housing of the device, a front-facing lens assembly to focus light onto the camera, a display, and a controller connected to the camera and the display.
Abstract: A hand-held wireless communications device includes a camera integrated within a housing of the wireless communications device, a front-facing lens assembly to focus light onto the camera, a display, and a controller connected to the camera and the display. The controller controls the camera to capture an image of another device that is in front of its display. The device can then analyze the image to extract information specific to the other device, and utilize that information to identify and authenticate the other device. Once identified and authenticated, the devices are able to share data and information with each other.

Patent
10 Apr 2012
TL;DR: This patent presents infrared cameras and target position acquisition techniques for applications such as search and rescue: a portable imaging/viewing subsystem with a target position finder directs a fixed-mount camera toward a target, which may, for example, be a man overboard.
Abstract: Systems and methods disclosed herein provide, for some embodiments, infrared cameras and target position acquisition techniques for various applications. For example, in one embodiment, a system may include a portable imaging/viewing subsystem having a target position finder and may also include a fixed mount camera subsystem having a camera and a camera positioner. A communications link may be configured to communicate a signal from the target position finder to the camera positioner. The signal may be representative of a position of a target being imaged/viewed with the portable imaging/viewing subsystem. The camera positioner may aim the camera toward the target in response to the signal. The target may, for example, be a man overboard. Thus, the system may be useful in search and rescue operations.

Patent
30 Apr 2012
TL;DR: In this patent, a camera mounted on a vehicle is used to monitor the vehicle, the driver, and the contents therein; the camera includes an external housing, a camera module having a camera lens, a lighting component, a dimmer switch, and a transmission medium.
Abstract: A camera that is mounted on a vehicle is used to monitor the vehicle, the driver and the contents therein. The camera includes an external housing, a camera module having a camera lens, a lighting component, a dimmer switch and a transmission medium. The external housing may be made of alloy. The camera module may capture video data and the transmission medium may be coupled to the camera module to transmit the captured video data as a live video stream to an external device. The lighting component included in the fleet camera apparatus may include LEDs and have infra-red capabilities to provide a night vision mode. The dimmer switch is included to control the LEDs' brightness. Other embodiments are disclosed.

Book
13 Feb 2012
TL;DR: This book focuses on the basic research problems in camera networks, reviews the current state of the art, presents a detailed description of some recently developed methodologies, and highlights the major directions for future research.
Abstract: As networks of video cameras are installed in many applications like security and surveillance, environmental monitoring, disaster response, and assisted living facilities, among others, image understanding in camera networks is becoming an important area of research and technology development. There are many challenges that need to be addressed in the process. Some of them are listed below:

- Traditional computer vision challenges in tracking and recognition: robustness to pose, illumination, occlusion, and clutter, and recognition of objects and activities;
- Aggregating local information for wide-area scene understanding, like obtaining stable, long-term tracks of objects;
- Positioning of the cameras and dynamic control of pan-tilt-zoom (PTZ) cameras for optimal sensing;
- Distributed processing and scene analysis algorithms;
- Resource constraints imposed by different applications like security and surveillance, environmental monitoring, disaster response, and assisted living facilities.

In this book, we focus on the basic research problems in camera networks, review the current state of the art, and present a detailed description of some of the recently developed methodologies. The major underlying theme in all the work presented is to take a network-centric view whereby the overall decisions are made at the network level. This is sometimes achieved by accumulating all the data at a central server, and at other times by exchanging decisions made by individual cameras based on their locally sensed data. Chapter One starts with an overview of the problems in camera networks and the major research directions; some of the currently available experimental testbeds are also discussed here. One of the fundamental tasks in the analysis of dynamic scenes is to track objects. Since camera networks cover a large area, the systems need to be able to track over such wide areas, where there can be both overlapping and non-overlapping fields of view of the cameras, as addressed in Chapter Two. Distributed processing is another challenge in camera networks, and recent methods have shown how to do tracking, pose estimation, and calibration in a distributed environment; the consensus algorithms that enable these tasks are described in Chapter Three. Chapter Four summarizes a few approaches to object and activity recognition in both distributed and centralized camera network environments. All these methods focus primarily on the analysis side, given that images are being obtained by the cameras. Efficient utilization of such networks often calls for active sensing, whereby the acquisition and analysis phases are closely linked; we discuss this issue in detail in Chapter Five and show how collaborative and opportunistic sensing in a camera network can be achieved. Finally, Chapter Six concludes the book by highlighting the major directions for future research.

Table of Contents: An Introduction to Camera Networks / Wide-Area Tracking / Distributed Processing in Camera Networks / Object and Activity Recognition / Active Sensing / Future Research Directions

Patent
25 Jul 2012
TL;DR: A camera detects devices, such as other cameras, smart devices, and access points, with which it may communicate, and alternates between operating as a wireless station and a wireless access point.
Abstract: A camera detects devices, such as other cameras, smart devices, and access points, with which the camera may communicate. The camera may alternate between operating as a wireless station and a wireless access point. The camera may connect to and receive credentials from a device for another device to which it is not connected. In one embodiment, the camera is configured to operate as a wireless access point, and is configured to receive credentials from a smart device operating as a wireless station. The camera may then transfer the credentials to additional cameras, each configured to operate as wireless stations. The camera and additional cameras may connect to a smart device directly or indirectly (for instance, through an access point), and the smart device may change the camera mode of the cameras. The initial modes of the cameras may be preserved and restored by the smart device upon disconnection.

Patent
24 Oct 2012
TL;DR: In this patent, the camera includes a lens for enabling the camera to capture at least one image, and a connector for mounting the camera onto a phone and for enabling communication with the phone.
Abstract: Embodiments generally relate to a camera. In one embodiment, the camera includes a lens for enabling the camera to capture at least one image. The camera also includes a connector for mounting the camera onto a phone and for enabling the camera to communicate with the phone. The camera also includes a shutter button for triggering the camera to capture the at least one image. The camera also activates the phone and puts the phone into a camera mode when the shutter button is pressed.

Proceedings ArticleDOI
18 Jun 2012
TL;DR: This paper builds a prototype using a state-of-the-art mobile phone, which has to be manually displaced in order to record images from different lines of sight, and investigates the effect of different camera displacements on the accuracy of distance measurements.
Abstract: Computer stereo vision is an important technique for robotic navigation and other mobile scenarios where depth perception is needed, but it usually requires two cameras with a known horizontal displacement. In this paper, we present a solution for mobile devices with just one camera, which is a first step towards making computer stereo vision available to a wide range of devices that are not equipped with stereo cameras. We have built a prototype using a state-of-the-art mobile phone, which has to be manually displaced in order to record images from different lines of sight. Since the displacement between the two images is not known in advance, it is measured using the phone's inertial sensors. We evaluated the accuracy of our single-camera approach by performing distance calculations to everyday objects in different indoor and outdoor scenarios, and compared the results with that of a stereo camera phone. As a main advantage of a single moving camera is the possibility to vary its relative position between taking the two pictures, we investigated the effect of different camera displacements on the accuracy of distance measurements.

Patent
16 Mar 2012
TL;DR: In this patent, methods and devices for producing an enhanced image are described: two-dimensional images, captured substantially simultaneously by a first camera and a second camera, are merged to produce an enhanced 2D image.
Abstract: Methods and devices for producing an enhanced image are described. In one example aspect, a method includes: providing a three-dimensional operating mode in which stereoscopic images are obtained using a first camera and a second camera; and providing a two-dimensional operating mode and while operating within the two-dimensional operating mode: receiving substantially simultaneously captured two-dimensional images from the first camera and the second camera; and merging the two-dimensional images to produce an enhanced two-dimensional image.

Journal ArticleDOI
Guojian Wang1, Linmi Tao1, Huijun Di1, Xiyong Ye1, Yuanchun Shi1 
TL;DR: An application-oriented service share model for the generalization of vision processing is presented, along with a vision system architecture that can readily integrate computer vision processing and let application modules share services and exchange messages transparently.
Abstract: The complexity of intelligent computer vision systems demands novel system architectures that are capable of integrating various computer vision algorithms into a working system with high scalability. The real-time applications of human-centered computing are based on multiple cameras in current systems, which require a transparent distributed architecture. This paper presents an application-oriented service share model for the generalization of vision processing. Based on the model, a vision system architecture is presented that can readily integrate computer vision processing and make application modules share services and exchange messages transparently. The architecture provides a standard interface for loading various modules and a mechanism for modules to acquire inputs and publish processing results that can be used as inputs by others. Using this architecture, a system can load specific applications without considering the common low-layer data processing. We have implemented a prototype vision system based on the proposed architecture. The latency performance and 3-D track function were tested with the prototype system. The architecture is scalable and open, so it will be useful for supporting the development of an intelligent vision system, as well as a distributed sensor system.

Patent
24 Feb 2012
TL;DR: A user interface for a digital camera, such as one built into a smartphone, is presented that simultaneously displays an electronic viewfinder image and at least one other image, such as a previously captured image.
Abstract: The present disclosure provides a user interface for a digital camera such as a digital camera built into a smartphone or other multipurpose portable electronic device. The user interface simultaneously displays an electronic viewfinder image and at least one other image such as a previously captured image. The previously captured image is located within the electronic viewfinder image. Designated input causes the previously captured image to be enlarged from an initial size to an enlarged size.

Patent
23 May 2012
TL;DR: A perimeter monitoring device for a work vehicle monitors the surroundings of the vehicle and displays the monitored result on a display device; the device includes cameras, a bird's-eye image display unit, obstacle detecting sensors, a camera image specifying unit, and a camera image displaying unit.
Abstract: A perimeter monitoring device for a work vehicle is configured to monitor a surrounding of the work vehicle and display a monitored result on a display device. The perimeter monitoring device includes cameras, a bird's-eye image display unit, obstacle detecting sensors, a camera image specifying unit, and a camera image displaying unit. The camera image specifying unit is configured to specify one or more camera images in which one or more of obstacles are captured when the one or more of obstacles are detected by the obstacle detecting sensors. The camera image displaying unit is configured to display a relevant camera image in alignment with the bird's-eye image on the display device when a plurality of camera images are specified by the camera image specifying unit, the relevant camera image being ranked in a high priority ranking based on a priority order set in accordance with travelling states.

Journal ArticleDOI
TL;DR: This paper introduces an approach that extracts rails by matching edge features to candidate rail patterns modeled as sequences of parabola segments, designed to address the challenges posed by the open environment without requiring explicit knowledge about train speed or camera parameters/position and running fast enough for practical use without specialized hardware.
Abstract: Rail extraction, i.e., determining the position of the rails ahead of a train, is one of the basic tasks of vision-based driver support in railways. This paper introduces an approach that extracts rails by matching edge features to candidate rail patterns modeled as sequences of parabola segments. Patterns are precomputed in a semiautomatic offline stage for areas near the camera and generated on the fly for more distant regions. Our approach was designed to address the challenges posed by the open environment without requiring explicit knowledge about train speed or camera parameters/position and running fast enough for practical use without specialized hardware. Evaluation was performed on hours of video captured under real operation conditions, considering the requirements of a system in which a camera with zoom lens mounted on a pan-tilt unit captures images from the area ahead with increased resolution.

Patent
16 Nov 2012
TL;DR: This patent sets forth a system including a camera device that performs image processing to detect an event involving a human subject; in one embodiment, the camera-equipped system is employed for fall detection.
Abstract: There is set forth herein a system including a camera device. In one embodiment the system is operative to perform image processing for detection of an event involving a human subject. There is set forth herein in one embodiment, a camera equipped system employed for fall detection.

Patent
07 Mar 2012
TL;DR: In this patent, a subject feedback mechanism generates framing feedback containing information about the current framing of an image, and video capture logic obtains motion information from motion sensors integrated into the camera device.
Abstract: Disclosed are various systems and methods implemented in a camera device. An image sensor is configured to capture image data of a target scene from a lens system, and a subject feedback mechanism generates framing feedback associated with the image data that includes information about a current framing of an image. Video capture logic also obtains motion information from motion sensors integrated into the camera device and skips frame capture when motion levels exceed a threshold.

Patent
21 Sep 2012
TL;DR: In this patent, methods for controlling the number of consecutive images captured in a burst operating mode are described: motion data from a motion sensor on the electronic device determines how many images the camera module temporarily captures to an image buffer when its operation is triggered.
Abstract: Methods and devices for controlling the number of consecutive images captured in a burst operating mode are described. In one example embodiment, the present disclosure describes a method implemented by a processor of an electronic device. The electronic device has a camera module. The camera module is configured to temporarily capture a number of consecutive images to an image buffer when operation of the camera module is triggered. The method includes: obtaining motion data from a motion sensor on the electronic device; and based on the motion data, controlling the number of consecutive images captured by the camera module when operation of the camera module is triggered.

Journal ArticleDOI
TL;DR: An efficient solution to extract, in real-time, high-level information from an observed scene, and generate the most appropriate commands for a set of pan-tilt-zoom (PTZ) cameras in a surveillance scenario is presented.

Patent
28 Aug 2012
TL;DR: In this paper, the authors present a system and method for controlling an infrared camera by using a mobile phone, which includes a client and an IR camera used as a server and connected to the client through a communication network.
Abstract: The present application provides a system and method for controlling an infrared camera by using a mobile phone. The system includes a client and an infrared camera used as a server and connected to the client through a communication network. The infrared camera is mounted in a location to perform infrared measuring and/or monitoring, so as to provide infrared image videos of a monitored object and temperature data of the points contained in an infrared image. The client is mounted in a position far away from the location of the infrared camera to provide a remote control for the infrared camera. The present application allows monitoring personnel or a user to remotely monitor and control an infrared camera by using a mobile phone.

Journal ArticleDOI
TL;DR: The development of an automatic method to design the optimum camera network for a given object of interest, currently focusing on buildings and statues, is reported, using a mathematical penalty method of optimization with constraints.
Abstract: Digital cultural heritage documentation in 3D is the subject of research and practical applications nowadays. Image-based modeling is a technique to create 3D models, which starts with the basic task of designing the camera network. This task is, however, quite crucial in practical applications because it needs thorough planning and a certain level of expertise and experience. Bearing in mind today's computational (mobile) power, we think that the optimal camera network should be designed in the field, thereby making preprocessing and planning dispensable. The camera network is optimally designed when certain accuracy demands are fulfilled with reasonable effort, namely keeping the number of camera shots at a minimum. In this study, we report on the development of an automatic method to design the optimum camera network for a given object of interest, focusing currently on buildings and statues. Starting from a rough point cloud derived from a video stream of object images, the initial configuration of the camera network is designed, assuming a high-resolution state-of-the-art non-metric camera. To improve image coverage and accuracy, we use a mathematical penalty method of optimization with constraints. From the experimental tests, we found that, after optimization, maximum coverage is attained alongside a significant improvement in positional accuracy. Currently, we are working on a guiding system to ensure that the operator actually takes the desired images. Further steps will include reliable and detailed modeling of the object using sophisticated dense matching techniques.