Author

Charles R. Dyer

Bio: Charles R. Dyer is an academic researcher from University of Wisconsin-Madison. The author has contributed to research in topics: Motion estimation & Motion field. The author has an h-index of 43, co-authored 141 publications receiving 9919 citations. Previous affiliations of Charles R. Dyer include University of Wisconsin System & University of Maryland, College Park.


Papers
Proceedings ArticleDOI
02 Jun 1991
TL;DR: The authors provide a critique of the aspect graph approach, in which a graph structure assigns a node to each general view of the object as seen from some maximal, connected cell of viewpoint space.
Abstract: The aspect graph of an object is a graph structure in which each node represents a general view of the object as seen from some maximal, connected cell of viewpoint space; each arc represents an accidental view (or visual event) which occurs on the boundary between two cells of general viewpoint; there is a node for each possible general view of the object, and there is an arc for each possible visual event. The authors provide a critique of the aspect graph approach.
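
For readers who want the definition above in concrete terms, the following is a minimal Python sketch of an aspect graph as a data structure, with nodes tied to cells of viewpoint space and arcs recording visual events on cell boundaries; the class and field names are hypothetical and not taken from the paper.

```python
# Illustrative skeleton only; names and fields are assumptions, not the paper's.
from dataclasses import dataclass, field

@dataclass
class AspectNode:
    cell_id: int                 # maximal, connected cell of viewpoint space
    general_view: object         # representative view seen from that cell

@dataclass
class VisualEvent:
    cell_a: int                  # one cell of general viewpoint
    cell_b: int                  # the adjacent cell
    accidental_view: object      # view occurring on the shared boundary

@dataclass
class AspectGraph:
    nodes: dict = field(default_factory=dict)   # cell_id -> AspectNode
    arcs: list = field(default_factory=list)    # VisualEvent instances

    def add_view(self, node: AspectNode):
        self.nodes[node.cell_id] = node

    def add_event(self, event: VisualEvent):
        self.arcs.append(event)
```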

37 citations

Journal ArticleDOI
TL;DR: An automated segmentation approach for thermal coagulations on 3-D elastographic data obtains both area and volume information rapidly and is shown to be comparable to manual delineation of coagulations on elastograms by medical physicists.
Abstract: Delineation of radiofrequency-ablation-induced coagulation (thermal lesion) boundaries is an important clinical problem that is not well addressed by conventional imaging modalities. Elastography, which produces images of the local strain after small, externally applied compressions, can be used for visualization of thermal coagulations. This paper presents an automated segmentation approach for thermal coagulations on 3-D elastographic data to obtain both area and volume information rapidly. The approach consists of a coarse-to-fine method for active contour initialization and a gradient vector flow, active contour model for deformable contour optimization with the help of prior knowledge of the geometry of general thermal coagulations. The performance of the algorithm has been shown to be comparable to manual delineation of coagulations on elastograms by medical physicists (r = 0.99 for volumes of 36 radiofrequency-induced coagulations). Furthermore, the automatic algorithm applied to elastograms yielded results that agreed with manual delineation of coagulations on pathology images (r = 0.96 for the same 36 lesions). This algorithm has also been successfully applied on in vivo elastograms.
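
A rough sense of the segmentation pipeline can be conveyed with a short sketch. The version below initializes a closed contour from a coarse center-and-radius estimate (standing in for the paper's coarse-to-fine initialization) and refines it with scikit-image's basic snake rather than the gradient vector flow model the authors use; the function name and parameter values are illustrative only.

```python
# Hedged sketch: ellipse/circle-initialized active contour on one elastogram slice.
# Uses scikit-image's ordinary snake, not the paper's gradient-vector-flow model.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def segment_coagulation_slice(strain_slice, center, radius, n_points=200):
    """Fit a closed contour to a roughly blob-like low-strain region.

    strain_slice : 2-D strain image (one slice of the 3-D elastogram)
    center, radius : coarse initialization, e.g. from a lower-resolution pass
    """
    # Circular initialization encodes the prior that coagulations are blob-like.
    theta = np.linspace(0, 2 * np.pi, n_points)
    init = np.column_stack([center[0] + radius * np.sin(theta),
                            center[1] + radius * np.cos(theta)])

    smoothed = gaussian(strain_slice, sigma=3, preserve_range=True)
    snake = active_contour(smoothed, init, alpha=0.015, beta=10.0, gamma=0.001)
    return snake  # (n_points, 2) array of contour coordinates
```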

37 citations

Proceedings ArticleDOI
19 Oct 1992
TL;DR: A technique for defining graphical depictions for all the data types defined in an algorithm is presented; mappings from the scalar types into the display model type provide a simple user interface for controlling how all data types are depicted, without the need for type-specific graphics logic.
Abstract: A technique for defining graphical depictions for all the data types defined in an algorithm is presented. The ability to display arbitrary combinations of an algorithm's data objects in a common frame of reference, coupled with interactive control of algorithm execution, provides a powerful way to understand algorithm behavior. Type definitions are constrained so that all primitive values occurring in data objects are assigned scalar types. A graphical display, including user interaction with the display, is modeled by a special data type. Mappings from the scalar types into the display model type provide a simple user interface for controlling how all data types are depicted, without the need for type-specific graphics logic.
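
The core idea, mapping scalar types into a display model so that compound objects need no type-specific graphics code, can be sketched in a few lines. The example below is a schematic analogue, not the paper's system: the "display model" is just nested (label, glyph) tuples, and the type registry and recursion over fields are assumptions for illustration.

```python
# Schematic analogue: per-scalar-type display mappings are registered once, and
# compound objects are depicted by recursing over their fields, so no
# type-specific graphics code is needed. Not the paper's system.
from dataclasses import dataclass, fields, is_dataclass

# Hypothetical "display model": nested (label, glyph) tuples.
SCALAR_DEPICTIONS = {
    int: lambda v: ("int", f"bar:{v}"),
    float: lambda v: ("float", f"dot:{v:.2f}"),
    str: lambda v: ("str", f"text:{v}"),
    bool: lambda v: ("bool", "filled" if v else "hollow"),
}

def depict(obj):
    """Map any value built from registered scalar types into the display model."""
    if type(obj) in SCALAR_DEPICTIONS:
        return SCALAR_DEPICTIONS[type(obj)](obj)
    if is_dataclass(obj):
        return [(f.name, depict(getattr(obj, f.name))) for f in fields(obj)]
    if isinstance(obj, (list, tuple)):
        return [depict(v) for v in obj]
    raise TypeError(f"no depiction registered for {type(obj).__name__}")

@dataclass
class Node:
    key: int
    visited: bool

print(depict([Node(3, True), Node(7, False)]))
```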

35 citations

Proceedings ArticleDOI
15 Jun 1992
TL;DR: The scale space aspect graph is introduced, and an interpretation of the scale dimension in terms of the spatial extent of image features is considered.
Abstract: Currently the aspect graph is computed under the assumption of perfect resolution in the viewpoint, the projected image, and the object shape. Visual detail is represented that an observer might never see in practice. By introducing scale into this framework, a mechanism is provided for selecting levels of detail that are large enough to merit explicit representation, effectively allowing control over the size of the aspect graph. To this end the scale space aspect graph is introduced, and an interpretation of the scale dimension in terms of the spatial extent of image features is considered. A brief example is given for polygons in a plane.
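
As a toy illustration of interpreting scale as the spatial extent of features, the sketch below prunes polygon vertices whose adjacent edges are shorter than a chosen scale, so that larger scale values yield coarser levels of detail; this is only a hedged analogue, not the paper's construction.

```python
# Toy illustration only: features smaller than the scale parameter are dropped,
# mimicking the idea that detail below a chosen spatial extent does not merit
# explicit representation. Not the paper's scale space aspect graph construction.
import numpy as np

def prune_small_features(vertices, scale):
    """Drop vertices whose adjacent edges are both shorter than `scale`.

    vertices : (N, 2) array of polygon vertices in order.
    Repeated calls with larger `scale` give nested, coarser levels of detail.
    """
    pts = [np.asarray(v, dtype=float) for v in vertices]
    n = len(pts)
    kept = []
    for i, p in enumerate(pts):
        prev_len = np.linalg.norm(p - pts[i - 1])
        next_len = np.linalg.norm(pts[(i + 1) % n] - p)
        if max(prev_len, next_len) >= scale:
            kept.append(p)
    return np.array(kept)
```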

33 citations

Journal ArticleDOI
TL;DR: Advantages of this approach include the use of multiresolution descriptions to model different parts of an object at different scales, the ability to detect partially occluded objects, the ability to dynamically control the coarse-to-fine matching process, and the increase in recognition speed over conventional single resolution recognition algorithms.
Abstract: A multiresolution, model-based matching technique is described for coarse-to-fine object recognition. Each two-dimensional object is modeled as a directed acyclic graph. Each node in the graph stores a boundary segment of the object model at a selected level of spatial resolution. The root node of the graph contains the coarsest resolution representation of the boundary of the object, leaf nodes contain sections of the boundary at the highest resolution, and intermediate nodes contain features at intermediate levels of resolution. Arcs are directed from boundary segments at one level of resolution to spatially related boundary segments at finer levels of resolution. A generalized Hough transform is used to match the model nodes with regions in the corresponding level of resolution in a given input image pyramid. First, the root node of the model graph is matched with the coarsest level of the input image pyramid and an ordered list of hypothesized positions and orientations for the object is generated. These hypotheses limit the area in which the search for subobjects (children nodes) must be conducted. If the subobjects of a hypothesis are not found, the next best hypothesis for the position and orientation of the object at the coarsest level is tried. Advantages of this approach include the use of multiresolution descriptions to model different parts of an object at different scales, the ability to detect partially occluded objects, the ability to dynamically control the coarse-to-fine matching process, and the increase in recognition speed over conventional single resolution recognition algorithms.
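
The hypothesize-and-verify flow described above can be sketched with an ordinary image pyramid. The code below substitutes normalized cross-correlation for the paper's generalized Hough transform and uses illustrative window sizes and thresholds; it assumes grayscale images and a model image small enough to match at every pyramid level.

```python
# Hedged sketch: coarse-to-fine hypothesize-and-verify matching over an image
# pyramid, using plain template correlation in place of the paper's generalized
# Hough transform. Names, thresholds, and window sizes are illustrative.
import cv2
import numpy as np

def coarse_to_fine_match(image, model, levels=3, top_k=5, accept=0.8):
    """Return (x, y) of the best match found by refining coarse hypotheses."""
    # Build pyramids for the input image and the object model (level 0 = finest).
    img_pyr, mdl_pyr = [image], [model]
    for _ in range(levels - 1):
        img_pyr.append(cv2.pyrDown(img_pyr[-1]))
        mdl_pyr.append(cv2.pyrDown(mdl_pyr[-1]))

    # Hypothesize at the coarsest level: keep the top_k correlation peaks, best first.
    scores = cv2.matchTemplate(img_pyr[-1], mdl_pyr[-1], cv2.TM_CCOEFF_NORMED)
    order = np.argsort(scores, axis=None)[::-1][:top_k]
    hypotheses = [np.unravel_index(i, scores.shape) for i in order]

    # Verify each hypothesis at successively finer levels inside a small window;
    # if a sub-match falls below `accept`, try the next best hypothesis.
    for (y, x) in hypotheses:
        pos, ok = (y, x), True
        for lvl in range(levels - 2, -1, -1):
            y0, x0 = pos[0] * 2, pos[1] * 2
            h, w = mdl_pyr[lvl].shape[:2]
            win = img_pyr[lvl][max(0, y0 - 4):y0 + h + 4, max(0, x0 - 4):x0 + w + 4]
            if win.shape[0] < h or win.shape[1] < w:
                ok = False
                break
            s = cv2.matchTemplate(win, mdl_pyr[lvl], cv2.TM_CCOEFF_NORMED)
            dy, dx = np.unravel_index(np.argmax(s), s.shape)
            if s[dy, dx] < accept:
                ok = False
                break
            pos = (max(0, y0 - 4) + dy, max(0, x0 - 4) + dx)
        if ok:
            return pos[1], pos[0]   # (x, y) at full resolution
    return None
```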

33 citations


Cited by
Journal ArticleDOI
Paul J. Besl1, H.D. McKay1
TL;DR: In this paper, the authors describe a general-purpose representation-independent method for the accurate and computationally efficient registration of 3D shapes including free-form curves and surfaces, based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point.
Abstract: The authors describe a general-purpose, representation-independent method for the accurate and computationally efficient registration of 3-D shapes including free-form curves and surfaces. The method handles the full six degrees of freedom and is based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point. The ICP algorithm always converges monotonically to the nearest local minimum of a mean-square distance metric, and the rate of convergence is rapid during the first few iterations. Therefore, given an adequate set of initial rotations and translations for a particular class of objects with a certain level of 'shape complexity', one can globally minimize the mean-square distance metric over all six degrees of freedom by testing each initial registration. One important application of this method is to register sensed data from unfixtured rigid objects with an ideal geometric model, prior to shape inspection. Experimental results show the capabilities of the registration algorithm on point sets, curves, and surfaces.
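
A minimal sketch of the closest-point iteration for the point-set case may help; it uses a SciPy k-d tree for correspondences and the SVD (Kabsch) solution for the rigid update, and is an illustration of the general idea rather than the authors' implementation.

```python
# Minimal ICP sketch for point sets (the paper also covers curves and surfaces).
# Correspondences come from a k-d tree; the rigid update uses the SVD solution
# to the least-squares rotation problem. Illustrative, not the authors' code.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=50, tol=1e-6):
    """Register `source` (N,3) onto `target` (M,3); returns R, t, RMS error."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(iters):
        moved = source @ R.T + t
        dists, idx = tree.query(moved)           # closest-point correspondences
        matched = target[idx]

        # Best rigid transform for the current correspondences (Kabsch/SVD).
        mu_s, mu_m = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_m - R_step @ mu_s

        R, t = R_step @ R, R_step @ t + t_step   # compose with accumulated pose
        err = np.sqrt(np.mean(dists ** 2))       # RMS residual for these correspondences
        if prev_err - err < tol:                 # monotone decrease has levelled off
            break
        prev_err = err
    return R, t, err
```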

17,598 citations

Journal ArticleDOI
TL;DR: An object detection system based on mixtures of multiscale deformable part models that is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges is described.
Abstract: We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI-SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.
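
The alternation described in the last sentence can be sketched schematically: fix the latent value for each positive example under the current weights, then solve the resulting convex problem with an ordinary linear SVM. The feature representation, data layout, and hyperparameters below are placeholders, not the deformable-part-model system itself.

```python
# Schematic sketch of the latent-SVM alternation: relabel the latent value
# (here, which candidate feature vector represents each positive example) using
# the current weights, then retrain a standard linear SVM. Placeholder data
# layout and hyperparameters; not the DPM training code.
import numpy as np
from sklearn.svm import LinearSVC

def train_latent_svm(pos_candidates, neg_features, rounds=5, C=0.01):
    """pos_candidates: list where entry i is a (k_i, d) array of candidate
    feature vectors (one per latent placement) for positive example i.
    neg_features: (M, d) array of negative feature vectors."""
    d = neg_features.shape[1]
    w = np.zeros(d)
    clf = None
    for _ in range(rounds):
        # Step 1: fix latent values -- pick the best-scoring candidate for
        # every positive example under the current model.
        pos = np.stack([c[np.argmax(c @ w)] for c in pos_candidates])

        # Step 2: with latent values fixed, training is an ordinary convex SVM.
        X = np.vstack([pos, neg_features])
        y = np.concatenate([np.ones(len(pos)), -np.ones(len(neg_features))])
        clf = LinearSVC(C=C, max_iter=10000).fit(X, y)
        w = clf.coef_.ravel()
    return clf
```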

10,501 citations

Journal ArticleDOI
TL;DR: The authors present a stand-alone, flexible C++ implementation that enables the evaluation of individual stereo components and can easily be extended to include new algorithms.
Abstract: Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can be easily extended to include new algorithms. We have also produced several new multiframe stereo data sets with ground truth, and are making both the code and data sets available on the Web.
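
As a minimal baseline in the spirit of the taxonomy's components (matching cost, aggregation, disparity computation), the sketch below runs OpenCV's block matcher on a rectified pair; parameter values are illustrative, and this is separate from the paper's own C++ testbed.

```python
# Minimal local-method baseline: OpenCV block matching on a rectified stereo pair.
# Parameter values are illustrative; unrelated to the paper's C++ framework.
import cv2

def dense_disparity(left_gray, right_gray, num_disp=64, block=15):
    """left_gray, right_gray: rectified 8-bit grayscale images."""
    # SAD cost, square-window aggregation, winner-take-all disparity selection.
    matcher = cv2.StereoBM_create(numDisparities=num_disp, blockSize=block)
    disp = matcher.compute(left_gray, right_gray)      # fixed-point, scaled by 16
    return disp.astype("float32") / 16.0
```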

7,458 citations

Journal ArticleDOI
TL;DR: This paper evaluates the performance both of some texture measures which have been successfully used in various applications and of some new promising approaches proposed recently.

6,650 citations

MonographDOI
01 Jan 2006
TL;DR: This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms, and covers planning under differential constraints that arise when automating the motions of virtually any mechanical system.
Abstract: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.

6,340 citations