Author

Charles R. Dyer

Bio: Charles R. Dyer is an academic researcher from the University of Wisconsin-Madison. The author has contributed to research in topics: Motion estimation & Motion field. The author has an h-index of 43 and has co-authored 141 publications receiving 9,919 citations. Previous affiliations of Charles R. Dyer include the University of Wisconsin System and the University of Maryland, College Park.


Papers
Journal ArticleDOI
TL;DR: This paper introduces a generalization of cellular automata in which each cell is a tape-bounded Turing machine rather than a finite-state machine, and suggests that this model of parallel computation is a very suitable one for studying the advantages of parallelism in this domain.
Abstract: This paper introduces a generalization of cellular automata in which each cell is a tape-bounded Turing machine rather than a finite-state machine. Fast algorithms are given for performing various basic image processing tasks by such automata. It is suggested that this model of parallel computation is a very suitable one for studying the advantages of parallelism in this domain.
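The local, synchronous flavor of such automata can be illustrated with a toy finite-state analogue (a hypothetical example of ours, not the paper's tape-bounded model): every cell updates in parallel from its 4-neighborhood, here performing one step of binary erosion, a basic image-processing task of the kind the paper considers.

```python
def erode_step(img):
    """One synchronous update: a cell stays 1 only if it and all
    four of its neighbors (with wraparound) are 1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nbrs = [img[(y - 1) % h][x], img[(y + 1) % h][x],
                    img[y][(x - 1) % w], img[y][(x + 1) % w]]
            out[y][x] = 1 if img[y][x] == 1 and all(n == 1 for n in nbrs) else 0
    return out

image = [[0, 0, 0, 0, 0],
         [0, 1, 1, 1, 0],
         [0, 1, 1, 1, 0],
         [0, 1, 1, 1, 0],
         [0, 0, 0, 0, 0]]
eroded = erode_step(image)  # only the center cell survives
```

Because every cell reads only its neighbors' previous states, all cells can update simultaneously, which is the source of the parallel speedups the paper studies.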

76 citations

Proceedings ArticleDOI
27 Oct 1986
TL;DR: This paper gives tight upper and lower bounds on the maximum size of aspect graphs and worst-case optimal algorithms for their construction, first in the convex case and then in the general case, and shows a different way to label the aspect graph.
Abstract: In this paper we present tight bounds on the maximum size of aspect graphs and give worst-case optimal algorithms for their construction, first in the convex case and then in the general case. In particular, we give upper and lower bounds on the maximum size (including vertex labels) of Θ(n³) and Θ(n⁵) and algorithms for constructing the aspect graph which run in time O(n³) and O(n⁵) for the convex and general cases respectively. The algorithm for the general case makes use of a new 3D object representation called the aspect representation or asp. We also show a different way to label the aspect graph in order to save a factor of n in the asymptotic size (at the expense of label retrieval time) in both the convex and general cases, and we suggest alternatives to the aspect graph which require less space and store more information.

75 citations

Proceedings ArticleDOI
23 Jun 2008
TL;DR: A probabilistic fusion approach (PFA) that produces a high-performance estimator for human age prediction, derived from Bayes' rule without the mutual independence assumption that is very common in traditional classifier combination methods.
Abstract: Human age prediction is useful for many applications. The age information could be used as a kind of semantic knowledge for multimedia content analysis and understanding. In this paper we propose a probabilistic fusion approach (PFA) that produces a high performance estimator for human age prediction. The PFA framework fuses a regressor and a classifier. We derive the predictor based on Bayes' rule without the mutual independence assumption that is very common for traditional classifier combination methods. Using a sequential fusion strategy, the predictor reduces age estimation errors significantly. Experiments on the large UIUC-IFP-Y aging database and the FG-NET aging database show the merit of the proposed approach to human age prediction.
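The general idea of fusing a regressor with a classifier can be sketched as follows (a simplified, hypothetical illustration of ours: this naive product fusion does assume independence, which is exactly the assumption the paper's PFA derivation avoids). The classifier's per-age posterior is reweighted by the regressor's Gaussian likelihood, and the fused posterior's mean gives the age estimate.

```python
import math

def fuse(ages, clf_posterior, reg_mean, reg_sigma):
    """Combine a classifier's posterior over discrete ages with a
    regressor's Gaussian likelihood, then return the posterior mean."""
    lik = [math.exp(-0.5 * ((a - reg_mean) / reg_sigma) ** 2) for a in ages]
    post = [p * l for p, l in zip(clf_posterior, lik)]
    z = sum(post)
    post = [p / z for p in post]
    # Final estimate: posterior-weighted mean age.
    return sum(a * p for a, p in zip(ages, post))

ages = [10, 20, 30, 40]
est = fuse(ages, [0.1, 0.4, 0.4, 0.1], reg_mean=25.0, reg_sigma=10.0)
```

Here the two sources agree symmetrically around 25, so the fused estimate is 25; in general, the regressor's likelihood pulls mass away from ages the classifier alone might over-weight.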

75 citations

Proceedings ArticleDOI
18 Jun 2003
TL;DR: The linear programming technique used in this paper, which is called feature selection via linear programming (FSLP), can determine the number of features and which features to use in the resulting classification function based on recent results in optimization.
Abstract: A linear programming technique is introduced that jointly performs feature selection and classifier training so that a subset of features is optimally selected together with the classifier. Traditional classification methods in computer vision have used a two-step approach, feature selection followed by classifier training, so feature selection has often been ad hoc, relying on heuristics or on a time-consuming forward and backward search process. Moreover, it is difficult to determine which features to use and how many features to use when these two steps are separated. The linear programming technique used in this paper, which we call feature selection via linear programming (FSLP), can determine the number of features and which features to use in the resulting classification function based on recent results in optimization. We analyze why FSLP can avoid the curse of dimensionality problem based on margin analysis. As one demonstration of the performance of this FSLP technique for computer vision tasks, we apply it to the problem of face expression recognition. Recognition accuracy is compared with results using support vector machines, the AdaBoost algorithm, and a Bayes classifier.
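The flavor of an L1 linear-programming formulation for joint feature selection and training can be sketched as follows (our own construction with our own variable names, not necessarily the paper's exact program; a real run would hand these arrays to an LP solver such as scipy.optimize.linprog). The program minimizes lam·Σ(u+v) + Σξ subject to yᵢ((u−v)·xᵢ + b₁−b₂) ≥ 1−ξᵢ with all variables nonnegative; the L1 term drives many weights wⱼ = uⱼ−vⱼ exactly to zero, which is what performs the feature selection.

```python
def build_fslp_lp(X, y, lam=0.1):
    """Build (c, A_ub, b_ub) for a standard-form LP with all variables
    nonnegative, ordered as: u (d), v (d), b1, b2, xi (n)."""
    n, d = len(X), len(X[0])
    c = [lam] * (2 * d) + [0.0, 0.0] + [1.0] * n
    A, b = [], []
    for i in range(n):
        # Margin constraint rewritten as:
        #   -y_i*(u - v).x_i - y_i*(b1 - b2) - xi_i <= -1
        row = [-y[i] * X[i][j] for j in range(d)]
        row += [y[i] * X[i][j] for j in range(d)]
        row += [-y[i], y[i]]
        row += [-1.0 if k == i else 0.0 for k in range(n)]
        A.append(row)
        b.append(-1.0)
    return c, A, b

X = [[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0]]
y = [1, 1, -1]
c, A, b = build_fslp_lp(X, y)
```

Because the whole problem is a single LP, the trained classifier and its (sparse) feature subset come out of one solve instead of a separate search.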

73 citations

Proceedings ArticleDOI
23 Jun 1999
TL;DR: The problem of view interpolation for dynamic scenes is introduced, specifically concerned with interpolating between two reference views captured at different times, so that there is a missing interval of time between when the views were taken.
Abstract: We introduce the problem of view interpolation for dynamic scenes. Our solution to this problem extends the concept of view morphing and retains the practical advantages of that method. We are specifically concerned with interpolating between two reference views captured at different times, so that there is a missing interval of time between when the views were taken. The synthetic interpolations produced by our algorithm portray one possible physically valid version of what transpired in the scene during the missing time. It is assumed that each object in the original scene underwent a series of rigid translations. Dynamic view morphing can work with widely-spaced reference views, sparse point correspondences, and uncalibrated cameras. When the camera-to-camera transformation can be determined, the synthetic interpolation will portray scene objects moving along straight-line, constant-velocity trajectories in world space.
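The constant-velocity assumption can be sketched in isolation (a hypothetical toy of ours, not the paper's full morphing pipeline, which also handles camera geometry): each tracked correspondence moves on a straight line at constant velocity between the two reference views, so its in-between position is a linear blend.

```python
def interpolate_points(pts0, pts1, t):
    """Position of each point correspondence at fraction t (0..1)
    of the missing time interval, under straight-line,
    constant-velocity motion."""
    return [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
            for (x0, y0), (x1, y1) in zip(pts0, pts1)]

view0 = [(0.0, 0.0), (2.0, 2.0)]  # correspondences in the first view
view1 = [(4.0, 0.0), (2.0, 6.0)]  # the same points in the later view
mid = interpolate_points(view0, view1, 0.5)
```

Sweeping t from 0 to 1 yields one physically plausible reconstruction of the missing interval; the paper's contribution is making such interpolation valid across two real cameras, not just in a single image plane.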

72 citations


Cited by
Journal ArticleDOI
Paul J. Besl, H.D. McKay
TL;DR: In this paper, the authors describe a general-purpose representation-independent method for the accurate and computationally efficient registration of 3D shapes including free-form curves and surfaces, based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point.
Abstract: The authors describe a general-purpose, representation-independent method for the accurate and computationally efficient registration of 3-D shapes including free-form curves and surfaces. The method handles the full six degrees of freedom and is based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point. The ICP algorithm always converges monotonically to the nearest local minimum of a mean-square distance metric, and the rate of convergence is rapid during the first few iterations. Therefore, given an adequate set of initial rotations and translations for a particular class of objects with a certain level of 'shape complexity', one can globally minimize the mean-square distance metric over all six degrees of freedom by testing each initial registration. One important application of this method is to register sensed data from unfixtured rigid objects with an ideal geometric model, prior to shape inspection. Experimental results show the capabilities of the registration algorithm on point sets, curves, and surfaces.
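The core ICP loop can be sketched in a deliberately reduced form (a 2-D, translation-only toy of ours; the actual algorithm also estimates rotation, typically via an SVD or quaternion step): repeatedly match each data point to its closest model point, then move the data by the mean residual.

```python
def closest(p, pts):
    """Nearest model point to p under squared Euclidean distance."""
    return min(pts, key=lambda m: (m[0] - p[0]) ** 2 + (m[1] - p[1]) ** 2)

def icp_translation(data, model, iters=20):
    """Estimate the translation (dx, dy) aligning data to model by
    alternating closest-point matching and mean-residual updates."""
    dx = dy = 0.0
    for _ in range(iters):
        moved = [(x + dx, y + dy) for x, y in data]
        pairs = [closest(p, model) for p in moved]
        rx = sum(m[0] - p[0] for m, p in zip(pairs, moved)) / len(data)
        ry = sum(m[1] - p[1] for m, p in zip(pairs, moved)) / len(data)
        dx, dy = dx + rx, dy + ry
    return dx, dy

model = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
data = [(0.3, 0.2), (1.3, 0.2), (0.3, 1.2)]  # model shifted by (0.3, 0.2)
dx, dy = icp_translation(data, model)
```

As the abstract notes, each such update can only decrease the mean-square distance, so the loop converges monotonically, but only to the nearest local minimum, which is why a good initial registration matters.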

17,598 citations

Journal ArticleDOI
TL;DR: An object detection system based on mixtures of multiscale deformable part models that is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges is described.
Abstract: We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI-SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.
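The hard-negative-mining component can be sketched on its own (a hypothetical simplification of ours: a perceptron stands in for the latent SVM, and the threshold stands in for the margin condition): alternate between training on a cache of examples and adding negatives that the current model still scores too high.

```python
def train_perceptron(cache, dim, epochs=50):
    """Train a linear classifier (through the origin) on (x, y) pairs."""
    w = [0.0] * dim
    for _ in range(epochs):
        for x, y in cache:
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
    return w

def mine_hard_negatives(positives, negatives, dim, rounds=3):
    """Grow the training cache with negatives the model misranks."""
    cache = [(x, 1) for x in positives]
    w = train_perceptron(cache, dim)
    for _ in range(rounds):
        hard = [x for x in negatives
                if sum(wi * xi for wi, xi in zip(w, x)) >= 0
                and (x, -1) not in cache]
        if not hard:
            break
        cache += [(x, -1) for x in hard]
        w = train_perceptron(cache, dim)
    return w

pos = [(1.0, 2.0), (2.0, 3.0)]
neg = [(-1.0, -1.0), (3.0, -1.0)]  # the second is a "hard" negative
w = mine_hard_negatives(pos, neg, dim=2)
```

The point of mining is that the full negative set (every window of every image, in the detection setting) is far too large to train on directly, so the cache holds only the negatives the current model actually gets wrong.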

10,501 citations

Journal ArticleDOI
TL;DR: This paper has designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can easily be extended to include new algorithms.
Abstract: Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can be easily extended to include new algorithms. We have also produced several new multiframe stereo data sets with ground truth, and are making both the code and data sets available on the Web.
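Two of the taxonomy's components, matching cost and winner-take-all disparity selection, can be sketched together (a hypothetical 1-D toy of ours using a sum-of-squared-differences cost; real dense stereo operates on 2-D windows with many more refinements): for each pixel in the left scanline, choose the disparity whose shifted right-scanline window gives the smallest SSD.

```python
def disparity_1d(left, right, max_disp, win=1):
    """Winner-take-all disparity per pixel using an SSD matching cost
    over a (2*win+1)-pixel window, clipped at the image borders."""
    n = len(left)
    disp = []
    for x in range(n):
        best_d, best_cost = 0, float("inf")
        for d in range(min(max_disp, x) + 1):
            cost = 0.0
            for k in range(-win, win + 1):
                xl, xr = x + k, x - d + k
                if 0 <= xl < n and 0 <= xr < n:
                    cost += (left[xl] - right[xr]) ** 2
            if cost < best_cost:
                best_d, best_cost = d, cost
        disp.append(best_d)
    return disp

right_row = [0, 0, 9, 1, 0, 0, 0, 0]
left_row  = [0, 0, 0, 0, 9, 1, 0, 0]  # same pattern shifted right by 2
d = disparity_1d(left_row, right_row, max_disp=3)
```

Swapping in a different cost (absolute differences, normalized correlation) or a different selection step (global optimization instead of winner-take-all) corresponds exactly to moving along the axes of the paper's taxonomy.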

7,458 citations

Journal ArticleDOI
TL;DR: This paper evaluates the performance both of some texture measures which have been successfully used in various applications and of some new promising approaches proposed recently.

6,650 citations

MonographDOI
01 Jan 2006
TL;DR: This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms, into planning under differential constraints that arise when automating the motions of virtually any mechanical system.
Abstract: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.
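The discrete planning formulation treated early in the book can be illustrated with about the simplest possible instance (our own hypothetical example, not taken from the book): breadth-first search over a 4-connected grid, returning a shortest obstacle-free path.

```python
from collections import deque

def bfs_plan(grid, start, goal):
    """Shortest path on a 4-connected grid; cells with 1 are obstacles.
    Returns the path as a list of (row, col), or None if unreachable."""
    h, w = len(grid), len(grid[0])
    parent = {start: None}
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:
            path = []
            while cell is not None:   # walk parents back to start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        y, x = cell
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == 0 \
                    and (ny, nx) not in parent:
                parent[(ny, nx)] = cell
                q.append((ny, nx))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = bfs_plan(grid, (0, 0), (2, 0))
```

The book's later chapters generalize exactly this picture: continuous configuration spaces replace the grid, uncertainty replaces deterministic transitions, and information spaces replace fully observed states.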

6,340 citations