Author

Ruzena Bajcsy

Bio: Ruzena Bajcsy is an academic researcher from the University of California, Berkeley. The author has contributed to research in topics: Image segmentation & Segmentation. The author has an h-index of 68, co-authored 500 publications receiving 18,552 citations. Previous affiliations of Ruzena Bajcsy include the French Institute for Research in Computer Science and Automation & the Center for Information Technology.


Papers
01 Aug 1988
TL;DR: Control strategies for active perception are formulated as a search for the sequence of steps that minimizes a loss function while seeking the most information.
Abstract: Active perception (active vision specifically) is defined as a study of modeling and control strategies for perception. Local models are distinguished from global models by their extent of application in space and time. The local models represent procedures and parameters such as optical distortions of the lens, focal length, spatial resolution, bandpass filter, etc. The global models, on the other hand, characterize the overall performance and make predictions on how the individual modules interact. The control strategies are formulated as a search for the sequence of steps that would minimize a loss function while still seeking the most information. Examples are shown as an existence proof of the proposed theory: obtaining range from focus and stereo/vergence, 2-D segmentation of an image, and 3-D shape parametrization.
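The control formulation above lends itself to a simple illustration. The following is a minimal sketch, not from the paper: a greedy search over candidate sensing actions that trades off step cost against expected information gain, where the discrete belief over scene hypotheses, the action set, and the reliability/cost values are all illustrative assumptions.

```python
# Minimal sketch (assumptions throughout): greedy selection of the sensing
# action minimizing loss = step cost - expected information gain, in the
# spirit of the loss-minimizing control strategy described above.
import math

def entropy(p):
    """Shannon entropy of a discrete belief over hypotheses."""
    return -sum(q * math.log(q) for q in p if q > 0.0)

def expected_posterior_entropy(belief, action):
    """Assumed sensor model: each action sharpens the belief by a
    per-action reliability factor; purely illustrative."""
    sharpened = [q ** (1.0 + action["reliability"]) for q in belief]
    z = sum(sharpened)
    return entropy([q / z for q in sharpened])

def next_best_action(belief, actions, cost_weight=0.1):
    """Pick the action minimizing loss = cost - information gained."""
    h0 = entropy(belief)
    def loss(a):
        info_gain = h0 - expected_posterior_entropy(belief, a)
        return cost_weight * a["cost"] - info_gain
    return min(actions, key=loss)

belief = [0.5, 0.3, 0.2]                      # belief over 3 scene hypotheses
actions = [{"name": "refocus",  "cost": 1.0, "reliability": 0.8},
           {"name": "verge",    "cost": 2.0, "reliability": 1.5},
           {"name": "pan_left", "cost": 0.5, "reliability": 0.2}]
print(next_best_action(belief, actions)["name"])
```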

1,247 citations

Journal ArticleDOI
TL;DR: On the assumption that all normal brains, at least at a certain level of representation, have the same topological structure but may differ in shape details, results show that the matching process can account for these differences.
Abstract: Matching of locally variant data to an explicit 3-dimensional pictorial model is developed for X-ray computed tomography scans of the human brain, where the model is a voxel representation of an anatomical human brain atlas. The matching process is 3-dimensional without any preference given to the slicing plane. After global alignment the brain atlas is deformed like a piece of rubber, without tearing or folding. Deformation proceeds step-by-step in a coarse-to-fine strategy, increasing the local similarity and global coherence. The assumption underlying this approach is that all normal brains, at least at a certain level of representation, have the same topological structure, but may differ in shape details. Results show that we can account for these differences.
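As one concrete reading of the coarse-to-fine strategy, the sketch below uses a demons-style update as a stand-in for the paper's elastic matching: at each pyramid level the displacement field is nudged to increase local similarity and Gaussian-smoothed to keep the deformation globally coherent (no tearing or folding). The pyramid depth, smoothing width, and update rule are assumptions, not the paper's implementation.

```python
# Minimal sketch (assumptions throughout): demons-style coarse-to-fine
# registration of an atlas volume to a scan volume, both numpy arrays.
import numpy as np
from scipy import ndimage

def register(atlas, scan, levels=3, iters=20, sigma=2.0):
    disp = None
    for level in reversed(range(levels)):          # coarse -> fine
        scale = 2 ** level
        a = ndimage.zoom(atlas, 1.0 / scale, order=1)
        s = ndimage.zoom(scan, 1.0 / scale, order=1)
        # Initialize at the coarsest level; otherwise upsample and rescale
        # the displacement field recovered at the previous (coarser) level.
        disp = (np.zeros((a.ndim, *a.shape)) if disp is None
                else np.stack([ndimage.zoom(d, np.array(a.shape) / d.shape,
                                            order=1)
                               for d in disp]) * 2.0)
        coords = np.indices(a.shape, dtype=float)
        for _ in range(iters):
            warped = ndimage.map_coordinates(a, coords + disp, order=1)
            diff = s - warped                      # local similarity residual
            grads = np.gradient(warped)
            norm = sum(g * g for g in grads) + diff * diff + 1e-8
            for k in range(a.ndim):                # demons force step
                disp[k] += diff * grads[k] / norm
                disp[k] = ndimage.gaussian_filter(disp[k], sigma)  # coherence
    return disp  # dense displacement field at the finest level
```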

1,218 citations

Journal ArticleDOI
TL;DR: A method for accurately acquiring and reconstructing the geometry of the human face and for display of this reconstruction in a 3-dimensional format is described and applied in a sample of 70 actors and 69 actresses expressing happiness, sadness, anger, fear and disgust.

607 citations

Journal ArticleDOI
TL;DR: In this paper, a method for recovery of compact volumetric models for shape representation of single-part objects in computer vision is introduced, where the model recovery is formulated as a least-squares minimization of a cost function for all range points belonging to a single part.
Abstract: A method for recovery of compact volumetric models for shape representation of single-part objects in computer vision is introduced. The models are superquadrics with parametric deformations (bending, tapering, and cavity deformation). The input for the model recovery is three-dimensional range points. Model recovery is formulated as a least-squares minimization of a cost function for all range points belonging to a single part. During an iterative gradient descent minimization process, all model parameters are adjusted simultaneously, recovering the position, orientation, size, and shape of the model, such that most of the given range points lie close to the model's surface. A specific solution among several acceptable solutions, which are all minima in the parameter space, can be reached by constraining the search to a part of the parameter space. The many shallow local minima in the parameter space are avoided as solutions by using a stochastic technique during minimization. Results using real range data show that the recovered models are stable and that the recovery procedure is fast.
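A minimal sketch of the least-squares formulation under simplifying assumptions: the superquadric is axis-aligned, pose and the paper's bending/tapering/cavity deformations are omitted, and the initialization, bounds, and test data are illustrative rather than taken from the paper.

```python
# Minimal sketch (not the paper's full implementation): least-squares
# recovery of an undeformed, axis-aligned superquadric (sizes a1..a3,
# exponents e1, e2) from 3-D range points via its inside-outside function.
import numpy as np
from scipy.optimize import least_squares

def inside_outside(params, pts):
    a1, a2, a3, e1, e2 = params
    x, y, z = np.abs(pts.T) + 1e-9        # abs() keeps fractional powers real
    return ((x / a1) ** (2 / e2) + (y / a2) ** (2 / e2)) ** (e2 / e1) \
        + (z / a3) ** (2 / e1)

def residuals(params, pts):
    a1, a2, a3, e1, e2 = params
    # Cost is zero for points on the model surface; the sqrt(a1*a2*a3)
    # weight favors the smallest model enclosing the data.
    return np.sqrt(a1 * a2 * a3) * (inside_outside(params, pts) ** e1 - 1.0)

def fit_superquadric(pts):
    sizes0 = (pts.max(axis=0) - pts.min(axis=0)) / 2   # crude initial sizes
    x0 = np.concatenate([sizes0, [1.0, 1.0]])
    bounds = ([1e-3] * 3 + [0.1, 0.1], [np.inf] * 3 + [2.0, 2.0])
    return least_squares(residuals, x0, args=(pts,), bounds=bounds).x

# Usage: sample an ellipsoid-like point cloud and recover its parameters.
rng = np.random.default_rng(0)
u, v = rng.uniform(-np.pi/2, np.pi/2, 2000), rng.uniform(-np.pi, np.pi, 2000)
pts = np.stack([2 * np.cos(u) * np.cos(v),
                1 * np.cos(u) * np.sin(v),
                0.5 * np.sin(u)], axis=1)
print(fit_superquadric(pts))   # approx sizes (2, 1, 0.5), exponents near 1
```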

596 citations

Proceedings ArticleDOI
03 Nov 2004
TL;DR: This paper proposes a distributed and scalable algorithm that eliminates congestion within a sensor network and ensures the fair delivery of packets to a central node, or base station, where fairness means that an equal number of packets is received from each node.
Abstract: In this paper we propose a distributed and scalable algorithm that eliminates congestion within a sensor network and ensures the fair delivery of packets to a central node, or base station. We say that fairness is achieved when an equal number of packets is received from each node. Since in general we have many sensors transmitting data to the base station, we consider the scenario of many-to-one multihop routing, noting that it can easily be extended to unicast or many-to-many routing. Such routing structures often result in the sensors closer to the base station experiencing congestion, which inevitably causes packets originating from sensors further away from the base station to have a higher probability of being dropped. Our algorithm exists in the transport layer of the traditional network stack model and is designed to work with any MAC protocol in the data-link layer with minor modifications. Our solution is scalable: each sensor mote requires state proportional to the number of its neighbors. Finally, we demonstrate the effectiveness of our solution with both simulations and an actual implementation on UC Berkeley's sensor motes.
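For intuition about the fairness goal, the sketch below computes fair per-source rates in a many-to-one routing tree: every sensor gets an equal share of the base station's inbound capacity, so the sink hears the same number of packets from each node, and an interior node forwards its subtree's traffic on top of its own share. This is an illustrative computation, not the paper's protocol; the node names and capacity figure are assumptions.

```python
# Minimal sketch (illustrative, not the paper's protocol): fair source-rate
# assignment in a many-to-one routing tree rooted at the base station.
def subtree_sizes(tree, root):
    """tree maps node -> list of children; returns node -> #nodes in subtree."""
    sizes = {}
    def walk(n):
        sizes[n] = 1 + sum(walk(c) for c in tree.get(n, []))
        return sizes[n]
    walk(root)
    return sizes

def assign_rates(tree, root, sink_capacity_pps):
    sizes = subtree_sizes(tree, root)
    per_source = sink_capacity_pps / (sizes[root] - 1)   # exclude the sink
    # Each node transmits its own share plus everything it forwards.
    return {n: sizes[n] * per_source for n in sizes if n != root}

# Usage: base station B with two branches of unequal depth.
tree = {"B": ["a", "b"], "a": ["c", "d"], "c": ["e"]}
print(assign_rates(tree, "B", sink_capacity_pps=50.0))
# every node sources 10 pps; "a" transmits 40 pps (its 10 + 30 forwarded)
```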

541 citations


Cited by
Book
01 Jan 1998
TL;DR: A book-length tour of wavelet signal processing, covering Fourier analysis, time-frequency representations, frames, wavelet bases, wavelet packet and local cosine bases, approximation, estimation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.

17,693 citations

Journal ArticleDOI
Paul J. Besl, Neil D. McKay
TL;DR: In this paper, the authors describe a general-purpose representation-independent method for the accurate and computationally efficient registration of 3D shapes including free-form curves and surfaces, based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point.
Abstract: The authors describe a general-purpose, representation-independent method for the accurate and computationally efficient registration of 3-D shapes including free-form curves and surfaces. The method handles the full six degrees of freedom and is based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point. The ICP algorithm always converges monotonically to the nearest local minimum of a mean-square distance metric, and the rate of convergence is rapid during the first few iterations. Therefore, given an adequate set of initial rotations and translations for a particular class of objects with a certain level of 'shape complexity', one can globally minimize the mean-square distance metric over all six degrees of freedom by testing each initial registration. One important application of this method is to register sensed data from unfixtured rigid objects with an ideal geometric model, prior to shape inspection. Experimental results show the capabilities of the registration algorithm on point sets, curves, and surfaces.
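A minimal sketch of the point-to-point ICP variant with an SVD-based rigid update, assuming both shapes are already given as 3-D point arrays (the paper's method is representation-independent and also covers curves and surfaces).

```python
# Minimal sketch (assumptions noted above): point-to-point ICP aligning a
# source point set P to a target point set Q, both numpy arrays of shape (N, 3).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(P, Q, iters=50, tol=1e-8):
    tree = cKDTree(Q)
    prev_err = np.inf
    for _ in range(iters):
        dists, idx = tree.query(P)           # closest-point correspondences
        R, t = best_rigid_transform(P, Q[idx])
        P = P @ R.T + t                      # apply the rigid update
        err = np.mean(dists ** 2)            # mean-square distance metric
        if prev_err - err < tol:             # monotone convergence in practice
            break
        prev_err = err
    return P, err
```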

17,598 citations

Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Journal ArticleDOI
TL;DR: The convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function is proved, establishing its utility in detecting the modes of the density.
Abstract: A general non-parametric technique is proposed for the analysis of a complex multimodal feature space and for delineating arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure: the mean shift. For discrete data, we prove the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and, thus, its utility in detecting the modes of the density. The relation of the mean shift procedure to the Nadaraya-Watson estimator from kernel regression and to the robust M-estimators of location is also established. Algorithms for two low-level vision tasks, discontinuity-preserving smoothing and image segmentation, are described as applications. In these algorithms, the only user-set parameter is the resolution of the analysis, and either gray-level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.
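A minimal sketch of the mean shift procedure itself, using a flat kernel: each point repeatedly moves to the mean of its neighbors within the bandwidth until it settles at a mode of the underlying density. The bandwidth and toy data are assumptions; the paper's vision algorithms run this in joint spatial-range feature spaces.

```python
# Minimal sketch (flat kernel; bandwidth and data are illustrative):
# recursive mean shift moving every point toward a density mode.
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=100, tol=1e-5):
    modes = points.copy()
    for _ in range(iters):
        shifted = np.empty_like(modes)
        for i, m in enumerate(modes):
            near = points[np.linalg.norm(points - m, axis=1) < bandwidth]
            shifted[i] = near.mean(axis=0)   # the mean shift step
        if np.linalg.norm(shifted - modes) < tol:
            break                            # converged to stationary points
        modes = shifted
    return modes   # points that converged together share a density mode

# Usage: two Gaussian blobs collapse onto two modes.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
print(np.unique(np.round(mean_shift(pts), 1), axis=0))
```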

11,727 citations