Proceedings Article

Camera evidence: visibility analysis through a multicamera viewpoint

TLDR
The next phase of this project will bring this type of analysis into a machine environment more conducive to interactivity: a backhoe simulator with levers to control the vehicle and bucket positions, viewed through a virtual reality environment.
Abstract
A major criterion in the design of backhoes (and other heavy machinery) is the ability of the operator to see all critical portions of the vehicle and the surrounding environment. Computer graphics provides a method for analyzing this ability prior to the building of full-scale wooden models. By placing the computer graphic camera at the operator's eyepoint, designers can detect poor placement of supports, blind spots, etc. In this type of analysis, the camera becomes an active, yet somewhat imperfect, participant in our understanding of what an operator of the backhoe 'sees'. In order to simulate a backhoe operator's vision from within a cab, one needs to expand the angle of view of the camera to mimic unfocused, peripheral vision. A traditional wide-angle lens creates extreme distortions that are not present in 'natural' vision, and is therefore hardly an adequate representation. The solution we arrived at uses seven cameras fanned out horizontally in order to capture a relatively undistorted 155 degree angle of view. In addition, another camera displays and numerically analyzes the percentage of the loader bucket visible and blocked. These two views are presented simultaneously in order to address both the 'naturalistic' and quantitative needs of the designers, as well as to point to the incompleteness of any one representation of a scene. In the next phase of this project we will bring this type of analysis into a machine environment more conducive to interactivity: a backhoe simulator with levers to control the vehicle and bucket positions, viewed through a virtual reality environment.
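As an illustration of the geometry the abstract describes, the sketch below is a minimal, assumed reconstruction rather than the authors' implementation: it computes the yaw offsets needed for seven cameras to tile a 155-degree horizontal field of view from a shared eyepoint, and includes a generic helper for reporting what fraction of sampled bucket points have an unobstructed line of sight. All function names, the equal per-camera split, and the sampling approach are assumptions introduced here for illustration.

```python
# Illustrative sketch (not the paper's implementation): seven cameras fanned out
# horizontally so their frusta tile a 155-degree field of view edge to edge.
import math

TOTAL_FOV_DEG = 155.0   # horizontal coverage described in the abstract
NUM_CAMERAS = 7

def fan_camera_yaws(total_fov_deg: float, num_cameras: int) -> list:
    """Yaw angle (degrees) of each camera's optical axis, measured from the
    central viewing direction, assuming each camera covers an equal slice."""
    per_camera_fov = total_fov_deg / num_cameras          # ~22.1 degrees each
    # Each camera's axis points at the middle of its slice of the fan.
    return [
        -total_fov_deg / 2 + per_camera_fov * (i + 0.5)
        for i in range(num_cameras)
    ]

def visible_fraction(sample_points, is_occluded) -> float:
    """Fraction of sampled bucket points whose line of sight to the eyepoint
    is not blocked; `is_occluded` is a caller-supplied occlusion test."""
    if not sample_points:
        return 0.0
    visible = sum(1 for p in sample_points if not is_occluded(p))
    return visible / len(sample_points)

if __name__ == "__main__":
    for i, yaw in enumerate(fan_camera_yaws(TOTAL_FOV_DEG, NUM_CAMERAS)):
        print(f"camera {i}: yaw {yaw:+.1f} deg, "
              f"horizontal FOV {TOTAL_FOV_DEG / NUM_CAMERAS:.1f} deg")
```

Splitting the view this way keeps each camera's individual field of view narrow enough to avoid the extreme distortion of a single wide-angle lens, while the helper mirrors the kind of percentage-visible figure the quantitative view reports.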


Citations
Patent

Method of determining relative z-ordering in an image and method of using same

TL;DR: The method comprises forward and backward motion matching of image regions to determine overlap, followed by the creation of relationships (e.g., pairwise relationships) between the regions and comparison of the result with the original image to determine the relative z-ordering.
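For context, the sketch below covers only the final ordering step that TL;DR mentions, under the assumption that pairwise "region A occludes region B" relationships have already been recovered (e.g., by the forward/backward motion matching). The data layout and function name are illustrative assumptions, not the patent's actual procedure.

```python
# Minimal sketch: turn pairwise occlusion relationships into a relative
# z-ordering via a topological sort (Kahn's algorithm).
from collections import defaultdict, deque

def relative_z_order(regions, occludes_pairs):
    """Order regions so every occluding (nearer) region precedes the
    regions it occludes, given (front, back) pairs."""
    in_degree = {r: 0 for r in regions}
    edges = defaultdict(list)
    for front, back in occludes_pairs:
        edges[front].append(back)
        in_degree[back] += 1

    queue = deque(r for r in regions if in_degree[r] == 0)
    order = []
    while queue:
        r = queue.popleft()
        order.append(r)
        for nxt in edges[r]:
            in_degree[nxt] -= 1
            if in_degree[nxt] == 0:
                queue.append(nxt)
    return order  # nearest-to-farthest if the pairwise relations are consistent

# Example: "a" occludes "b", and "b" occludes "c" -> ['a', 'b', 'c']
print(relative_z_order(["a", "b", "c"], [("a", "b"), ("b", "c")]))
```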