
Showing papers by "Charles R. Dyer" published in 2001


Book ChapterDOI
01 Jan 2001
TL;DR: A review of methods for volumetric scene reconstruction from multiple views is presented in this paper, where occupancy descriptions of the voxels in a scene volume are constructed using shape-from-silhouette techniques for binary images, and shape-from-photo-consistency combined with visibility testing for color images.
Abstract: A review of methods for volumetric scene reconstruction from multiple views is presented. Occupancy descriptions of the voxels in a scene volume are constructed using shape-from-silhouette techniques for binary images, and shape-from-photo-consistency combined with visibility testing for color images.
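
As background, a minimal sketch of the shape-from-silhouette idea the review covers: a voxel stays occupied only if it projects inside every silhouette. All names (carve_visual_hull, silhouettes, proj_mats) are illustrative, the inputs (binary silhouette images plus a 3x4 projection matrix per view) are assumed given, and this is not code from the paper.

    import numpy as np

    def carve_visual_hull(silhouettes, proj_mats, grid_min, grid_max, res=64):
        """Keep a voxel occupied iff its center projects inside every silhouette."""
        axes = [np.linspace(grid_min[i], grid_max[i], res) for i in range(3)]
        X, Y, Z = np.meshgrid(*axes, indexing="ij")
        pts = np.stack([X, Y, Z, np.ones_like(X)], -1).reshape(-1, 4)  # homogeneous
        occupied = np.ones(len(pts), dtype=bool)
        for sil, P in zip(silhouettes, proj_mats):            # P is a 3x4 camera matrix
            uvw = pts @ P.T
            u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)   # pixel column
            v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)   # pixel row
            h, w = sil.shape
            inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
            hit = np.zeros(len(pts), dtype=bool)
            hit[inside] = sil[v[inside], u[inside]] > 0       # lands on foreground?
            occupied &= hit        # carve: outside any one silhouette means empty
        return occupied.reshape(res, res, res)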

191 citations


Book ChapterDOI
TL;DR: 2D curve representations usually take algebraic forms in ways not related to visual perception, but this paper shows that 2D curves can be represented compactly by imposing shaping constraints in curvature space, which can be readily computed directly from input images.
Abstract: 2D curve representations usually take algebraic forms in ways not related to visual perception. This poses great difficulties in connecting curve representation with object recognition where information computed from raw images must be manipulated in a perceptually meaningful way and compared to the representation. In this paper we show that 2D curves can be represented compactly by imposing shaping constraints in curvature space, which can be readily computed directly from input images. The inverse problem of reconstructing a 2D curve from the shaping constraints is solved by a method using curvature shaping, in which the 2D image space is used in conjunction with its curvature space to generate the curve dynamically. The solution allows curve length to be determined and used subsequently for curve modeling using polynomial basis functions. Polynomial basis functions of high orders are shown to be necessary to incorporate perceptual information commonly available at the biological visual front-end.
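
As background for the curvature-space representation (not the paper's algorithm), a minimal sketch of the classical relationship it builds on: a planar curve is recovered, up to a rigid motion, by integrating its curvature kappa(s) over arc length.

    import numpy as np

    def curve_from_curvature(kappa, ds=0.01, theta0=0.0):
        """theta'(s) = kappa(s); then (x', y') = (cos theta, sin theta)."""
        theta = theta0 + np.cumsum(kappa) * ds   # integrate tangent angle
        x = np.cumsum(np.cos(theta)) * ds        # integrate unit tangents
        y = np.cumsum(np.sin(theta)) * ds
        return x, y

    # Constant curvature 1.0 over arc length ~2*pi traces out a unit circle.
    xs, ys = curve_from_curvature(np.full(628, 1.0))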

9 citations


Proceedings ArticleDOI
01 Dec 2001
TL;DR: This paper introduces a method for metric self-calibration that is based on a novel decomposition of the fundamental matrix between two views taken by a camera with fixed internal parameters that works directly from fundamental matrices and uses a reduced-parameter representation for stability.
Abstract: This paper introduces a method for metric self-calibration that is based on a novel decomposition of the fundamental matrix between two views taken by a camera with fixed internal parameters. The method blends important advantages of the Kruppa constraints and the modulus constraint: it works directly from fundamental matrices and uses a reduced-parameter representation for stability. General properties of the new decomposition are also developed, including an intuitive interpretation of the three free parameters of internal calibration. The approach is demonstrated on both real and synthetic data.
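
The method consumes fundamental matrices directly; as a hedged illustration of how such an input is typically obtained, here is the standard normalized 8-point estimate of F from point correspondences (this is textbook background, not the paper's decomposition).

    import numpy as np

    def normalize(pts):
        """Hartley normalization: center points and scale to mean length sqrt(2)."""
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
        return ph, T

    def fundamental_8point(x1, x2):
        """x1, x2: (N,2) corresponding points, N >= 8; returns rank-2 F with x2' F x1 = 0."""
        p1, T1 = normalize(x1)
        p2, T2 = normalize(x2)
        # Each row is kron(x2, x1), the linear constraint on F's 9 entries.
        A = np.column_stack([p2[:, 0:1] * p1, p2[:, 1:2] * p1, p1])
        _, _, Vt = np.linalg.svd(A)
        F = Vt[-1].reshape(3, 3)
        U, S, Vt = np.linalg.svd(F)
        F = U @ np.diag([S[0], S[1], 0.0]) @ Vt   # enforce the rank-2 constraint
        return T2.T @ F @ T1                      # undo the normalization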

8 citations


Proceedings ArticleDOI
07 Jul 2001
TL;DR: A novel linear algorithm is introduced for determining the affine calibration between two camera views of a dynamic scene, computed directly from the fundamental matrices associated with various moving objects in the scene.
Abstract: This paper introduces a novel linear algorithm for determining the affine calibration between two camera views of a dynamic scene. The affine calibration is computed directly from the fundamental matrices associated with various moving objects in the scene, as well as from the fundamental matrix for the static background if the cameras are at different locations. A minimum of two fundamental matrices are required, but any number of additional fundamental matrices can be incorporated into the linear system to improve the stability of the computation. The technique is demonstrated on both real and synthetic data.
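
The paper's specific constraint equations are not reproduced here; the sketch below only illustrates the generic pattern the abstract describes: each fundamental matrix contributes a block of rows to one homogeneous linear system, and stacking additional blocks stabilizes the SVD solution. The function name and interface are hypothetical.

    import numpy as np

    def solve_stacked_constraints(constraint_blocks):
        """constraint_blocks: list of (m_i, n) arrays A_i, one per fundamental
        matrix, with the unknown calibration vector x satisfying A_i x = 0.
        Returns the unit-norm least-squares solution of the stacked system."""
        A = np.vstack(constraint_blocks)
        _, _, Vt = np.linalg.svd(A)
        return Vt[-1]   # right singular vector of the smallest singular value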

6 citations


Proceedings ArticleDOI
07 Oct 2001
TL;DR: A new method for segmenting the optical flow field is proposed that combines both the magnitude and phase parts of the flow, and proves effective in making various surface geometry boundaries explicit.
Abstract: The information conveyed by optical flow is analytically linked to observer motion by decomposing the optical flow field into its vector field components. It is shown that the observer may recover his ego-motion by interpreting the decomposed optical flow field, and he may further utilize his mobility to actively control the shape of the optical flow field, which directly reflects the surface shape of the object. Information about surface geometry discontinuities can be derived more directly from the optical flow field by segmenting the whole field. A new method for the segmentation is proposed here, which combines both the magnitude and phase parts of the optical flow. The integration of these two different kinds of information proves to be effective in making various surface geometry boundaries explicit.
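
A minimal sketch, not the paper's decomposition, of combining the magnitude and phase of a dense flow field to expose motion and geometry boundaries; OpenCV's Farneback estimator is used as a stand-in flow source, and the combination rule is an assumption for illustration.

    import cv2
    import numpy as np

    def flow_boundary_map(prev_gray, next_gray):
        """Boundary strength from gradients of flow magnitude and flow phase."""
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        gm = np.hypot(*np.gradient(mag))
        # Phase gradients via sin/cos to avoid the 2*pi wrap-around at angle 0.
        gp = np.hypot(np.hypot(*np.gradient(np.cos(ang))),
                      np.hypot(*np.gradient(np.sin(ang))))
        # Boundaries appear where either normalized cue is strong.
        return gm / (gm.max() + 1e-9) + gp / (gp.max() + 1e-9)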

1 citation


01 Jan 2001
TL;DR: An algorithm for animating the rotation of a polyhedral scene that displays frames at a rate roughly equivalent to the rate at which the same hardware could display line-drawings of the scene without hidden-line removal is presented.
Abstract: The three-dimensional structure of an object is easier to perceive when the object is rotating or when the viewer moves to see it from a range of viewpoints. Thus, CAD systems are usually able to display an object as it rotates. Topologists may more easily perceive the structure of a polyhedral model of a knot, and biologists the structure of a molecule, using such rotations. In problems such as these, real-time motion and hidden-line or hidden-surface removal for polyhedral scenes are both desirable goals, but achieving both simultaneously for large models on inexpensive hardware is difficult. In this paper, we discuss the problem of displaying from a moving viewpoint a series of line-drawing images of a polyhedral object or scene with hidden lines removed. We will refer to this as the problem of animating rotation. We present an algorithm for animating the rotation of a polyhedral scene that displays frames at a rate roughly equivalent to the rate at which the same hardware could display line-drawings of the scene without hidden-line removal. We consider here only the case of a rotation about one axis of the coordinate system under orthographic projection.

The naive approach to animating rotation is to treat frames independently: for each frame, hidden lines are removed and the frame is displayed. The real-time goal is to do this at video rates, such as 15 or 30 frames per second. However, since the viewpoints are closely spaced, there will be little difference between two successive frames, and repeated hidden-line removal may perform much redundant work. This frame coherence makes it possible to devise a more efficient algorithm.

The algorithm presented here takes advantage of frame coherence by computing the initial appearance of the scene in the first frame and the viewpoints at which the topological structure of the image changes, which are called events. The event viewpoints are computed through the construction of the aspect representation for the scene, a representation that makes explicit the set of vertices, edges, and faces visible from every viewpoint. The algorithm has two phases: a preprocessing phase, in which the initial appearance of the polyhedron and the events are computed and a list of visible edges is constructed; and an on-line phase, in which a sequence of frames is displayed in real time. The approach presented here is an object-space, vector-based approach and does not use raster-based algorithms such as …
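
A minimal sketch of the two-phase, event-driven structure described above. The event list and per-event edge changes would come from the preprocessing phase's aspect representation, which is not computed here; RotationAnimator and its inputs are hypothetical placeholders, not the paper's implementation.

    class RotationAnimator:
        def __init__(self, initial_visible_edges, events):
            # events: list of (angle, edges_to_add, edges_to_remove) produced by
            # the (hypothetical) preprocessing phase, sorted by rotation angle.
            self.visible = set(initial_visible_edges)
            self.events = sorted(events, key=lambda e: e[0])
            self.next_event = 0

        def frame(self, angle):
            """Apply all events up to `angle`, then return the edges to draw."""
            while (self.next_event < len(self.events)
                   and self.events[self.next_event][0] <= angle):
                _, add, remove = self.events[self.next_event]
                self.visible |= set(add)
                self.visible -= set(remove)
                self.next_event += 1
            return self.visible   # per-frame cost is O(visibility changes), not O(scene)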