Open Access

Sensor planning and modeling for machine vision tasks

TLDR
Techniques to automatically synthesize desirable camera views of a known scene by posing the problem in a constrained optimization setting and obtaining viewpoints that are globally admissible and locally optimal are presented.
Abstract
In this thesis, we present techniques to automatically synthesize desirable camera views of a known scene. Desirability of a camera view of a scene is represented in terms of a set of constraints. These constraints express whether certain scene features of interest are detectable or not in the resulting image. The feature detectability constraints that are chosen are fairly generic to vision tasks and require that the features are: (1) not occluded--the visibility constraint; (2) resolvable to a given specification--the resolution constraint; (3) in-focus--the focus constraint; (4) within the field-of-view of the camera--the field-of-view constraint. An in-depth analysis of each of the above constraints results in the locus of viewpoints that satisfy each constraint separately. In this work, a viewpoint is an eight-dimensional quantity that consists of the three positional and two orientational degrees of freedom of camera placement, and three optical parameters of camera and lens setting. In order to determine globally admissible viewpoints, the loci of the individual constraints are combined by posing the problem in a constrained optimization setting. Using existing optimization schemes, viewpoints that are globally admissible and locally optimal are obtained. In order to realize such a computed viewpoint in an actual sensor setup, the relationships mapping the planned parameters to the parameters that can be controlled are obtained. This mapping is determined in the case of a sensor setup that consists of a camera in a hand-eye arrangement equipped with a lens that has zoom, focus and aperture control. The lens is modeled by a general thick lens model with non-coinciding pupils and principal points. The sensor planning and sensor modeling techniques that have been developed compose the MVP system. MVP is used in a robotic vision system that consists of a camera with a controllable lens mounted on a robot manipulator.
The camera is positioned and its lens is set according to the results generated by MVP. Camera views taken from the computed viewpoints verify that the feature detectability constraints are indeed satisfied.
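To make the constraint-based formulation concrete, the sketch below checks three of the four feature detectability constraints (focus, resolution, and field-of-view) for a simplified viewpoint, using a thin-lens blur-circle model rather than the thesis's full thick-lens model with non-coinciding pupils. All numbers, names, and tolerances here are illustrative assumptions, not values from the MVP system; the visibility (occlusion) constraint is omitted since it requires scene geometry.

```python
import math
from dataclasses import dataclass

@dataclass
class Viewpoint:
    """Simplified viewpoint: distances in mm, aperture as an f-number."""
    distance: float      # camera-to-feature distance z
    focal_length: float  # lens focal length f
    f_number: float      # aperture setting N
    focus_dist: float    # distance d at which the lens is focused

def satisfies_focus(vp, blur_limit=0.025):
    # Thin-lens blur-circle diameter at depth z when focused at d:
    #   c = a * f * |d - z| / (z * (d - f)),  a = f / N (aperture diameter).
    # The feature counts as in-focus if c is below the sensor's blur tolerance.
    a = vp.focal_length / vp.f_number
    c = a * vp.focal_length * abs(vp.focus_dist - vp.distance) / (
        vp.distance * (vp.focus_dist - vp.focal_length))
    return c <= blur_limit

def satisfies_resolution(vp, feature_len=10.0, min_pixels=2, pixel_pitch=0.01):
    # A frontal feature of length L images to m * L, with magnification
    # m = f / (z - f); require at least min_pixels pixels across it.
    m = vp.focal_length / (vp.distance - vp.focal_length)
    return feature_len * m >= min_pixels * pixel_pitch

def satisfies_fov(vp, lateral_offset=5.0, sensor_half_width=3.0):
    # The feature's lateral offset must project inside the sensor.
    m = vp.focal_length / (vp.distance - vp.focal_length)
    return lateral_offset * m <= sensor_half_width

# A viewpoint focused exactly at the feature passes all three checks;
# a globally admissible viewpoint would satisfy every constraint at once.
vp = Viewpoint(distance=1000.0, focal_length=50.0, f_number=8.0, focus_dist=1000.0)
print(satisfies_focus(vp), satisfies_resolution(vp), satisfies_fov(vp))
```

In the thesis, each such predicate corresponds to a locus of admissible viewpoints in the eight-dimensional parameter space, and their intersection is searched with a constrained optimizer rather than by point-wise testing.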


Citations
Journal ArticleDOI

View planning for automated three-dimensional object reconstruction and inspection

TL;DR: This paper surveys and compares view planning techniques for automated 3D object reconstruction and inspection by means of active, triangulation-based range sensors and suggests adequate solutions to semiautomate the scan-register-integrate tasks.
Proceedings ArticleDOI

Modeling and calibration of automated zoom lenses

TL;DR: This paper presents a methodology for producing accurate camera models for systems with automated, variable-parameter lenses and applies it to produce an "adjustable," perspective-projection camera model based on Tsai's fixed camera model.
Proceedings ArticleDOI

Dynamic sensor planning

TL;DR: A method of extending the sensor planning abilities of the MVP (machine vision planning) system to plan viewpoints for monitoring a pre-planned robot task is described and experimental results monitoring a simulated robot operation are presented.
Journal ArticleDOI

Computing Camera Viewpoints in an Active Robot Work Cell

TL;DR: A dynamic sensor-planning system that is capable of planning the locations and settings of vision sensors for use in an environment containing objects moving in known ways is presented.
Proceedings ArticleDOI

User-centric viewpoint computation for haptic exploration and manipulation

TL;DR: This work presents several techniques for user-centric viewing of the virtual objects or datasets under haptic exploration and manipulation, which compute automatic placement of the user viewpoint to navigate through the scene, to display the near-optimal views, and to reposition the viewpoint for haptic visualization.