Author

Kenneth Steiglitz

Bio: Kenneth Steiglitz is an academic researcher from Princeton University. He has contributed to research in topics including signal processing and very-large-scale integration. He has an h-index of 46 and has co-authored 202 publications receiving 14,495 citations. Previous affiliations of Kenneth Steiglitz include Telcordia Technologies and Northwestern University.


Papers
Journal ArticleDOI
TL;DR: A method is presented for determining an nth-order rational transform approximation for a time function, given at least n + 1 of its Laguerre coefficients, based on approximating the discrete set of Laguerre coefficients with a rational generating function.
Abstract: A method is presented for determination of an nth-order rational transform approximation for a time function, given at least n + 1 of its Laguerre coefficients. The method is based on approximating the discrete set of Laguerre coefficients with a rational generating function. The method does not require predetermination of the poles and allows the use of as many Laguerre coefficients as are available without increasing the complexity of the model. Applications to time domain synthesis and transfer function identification are discussed.
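One way to picture "approximating the discrete set of Laguerre coefficients with a rational generating function" is a Padé-style fit: choose a rational function P(z)/Q(z) whose power-series coefficients match the given sequence. The sketch below is only illustrative, with made-up function names and a toy coefficient sequence; it is not the paper's exact construction.

```python
import numpy as np

def rational_fit(c, m, n):
    """Fit a rational generating function P(z)/Q(z) (deg P = m, deg Q = n,
    Q(0) = 1) whose power-series expansion matches the sequence c[0..m+n].
    A plain Pade-style fit, shown only to illustrate approximating a
    coefficient sequence by a rational function; not the paper's exact
    construction."""
    c = np.asarray(c, dtype=float)
    assert len(c) >= m + n + 1, "need at least m + n + 1 coefficients"
    assert m + 1 >= n, "m + 1 >= n keeps the indexing simple in this sketch"

    # Denominator: sum_{j=0}^{n} q_j c_{k-j} = 0 for k = m+1, ..., m+n (q_0 = 1).
    A = np.array([[c[k - j] for j in range(1, n + 1)]
                  for k in range(m + 1, m + n + 1)])
    b = -c[m + 1:m + n + 1]
    q = np.concatenate(([1.0], np.linalg.solve(A, b)))

    # Numerator: p_k = sum_{j=0}^{min(k, n)} q_j c_{k-j} for k = 0, ..., m.
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, n) + 1))
                  for k in range(m + 1)])
    return p, q

# Toy check: the sequence 0.5**k is generated by 1 / (1 - 0.5 z).
p, q = rational_fit([0.5 ** k for k in range(6)], m=0, n=1)
print(p, q)   # ~[1.], ~[1., -0.5]
```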

14 citations

01 Jan 2008
TL;DR: In this paper, an agent-based model of a minimal economy containing households, retail banks, and producers of consumer and capital goods is presented, where household behavior is based on the buffer-stock savings model by Deaton (1991), while the profit-maximizing firms employ reinforcement learning to determine pricing and production.
Abstract: We present an agent-based model of a minimal economy containing households, retail banks, and producers of consumer and capital goods. Household behavior is based on the buffer-stock savings model by Deaton (1991), while the profit-maximizing firms employ reinforcement learning to determine pricing and production. Competitive retail banks facilitate the flow of funds between households and producers through a fractional-reserve system. Stability of the simulated markets depends only on the self-adjusting, boundedly rational behavior of the agents in the completely closed economic system.
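The buffer-stock idea the abstract leans on is simple: a household targets a buffer of wealth relative to its income and adjusts consumption to close the gap gradually. Below is a minimal, illustrative sketch of such a rule; the target ratio, adjustment speed, and income process are arbitrary placeholders, not the model's calibration.

```python
import random

def buffer_stock_consumption(cash_on_hand, income, target_ratio=1.5, speed=0.25):
    """Toy buffer-stock rule: the household aims to hold `target_ratio` times
    its income as a wealth buffer and closes a fraction `speed` of the gap
    each period.  The numbers are placeholders, not the model's calibration."""
    target = target_ratio * income
    return max(0.0, income + speed * (cash_on_hand - target))

# One household facing noisy income for a few periods.
wealth = 0.0
for t in range(5):
    income = 1.0 + random.uniform(-0.2, 0.2)
    cash = wealth + income
    consumption = buffer_stock_consumption(cash, income)
    wealth = cash - consumption
    print(f"t={t}  income={income:.2f}  consumption={consumption:.2f}  wealth={wealth:.2f}")
```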

14 citations

Journal ArticleDOI
14 Apr 1991
TL;DR: This work introduces probabilistic models for two alternative clock distribution schemes, tree and straight-line clocking, and presents analytic bounds for the probability of failure and the mean time to failure.
Abstract: Achieving efficient and reliable synchronization is a critical problem in building long systolic arrays. This problem is addressed in the context of synchronous systems by introducing probabilistic models for two alternative clock distribution schemes: tree and straight-line clocking. Analytic bounds are presented for the probability of failure, and an examination is made of the tradeoffs between reliability and throughput in both schemes. The basic conclusion is that as the one-dimensional systolic array gets very long, tree clocking becomes preferable to straight-line clocking.
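To make the "probability of failure" and "mean time to failure" notions concrete, the sketch below applies a union bound over communicating cell pairs, assuming each pair's clock skew is zero-mean Gaussian with a known standard deviation. The per-pair standard deviations and the tolerance are placeholders; the paper derives the corresponding quantities from its tree and straight-line distribution models.

```python
import math

def failure_probability_bound(pair_sigmas, tolerance):
    """Union bound on the probability that at least one communicating pair of
    cells sees clock skew beyond `tolerance` in a given cycle, assuming each
    pair's skew is zero-mean Gaussian with the listed standard deviation.
    The per-pair sigmas are placeholders, not the paper's derived values."""
    def two_sided_tail(sigma):
        # P(|X| > tolerance) for X ~ N(0, sigma^2)
        return math.erfc(tolerance / (sigma * math.sqrt(2.0)))
    return min(1.0, sum(two_sided_tail(s) for s in pair_sigmas))

def mean_cycles_to_failure(p_fail_per_cycle):
    """If each cycle fails independently with probability p, the number of
    cycles until the first failure is geometric with mean 1/p."""
    return math.inf if p_fail_per_cycle == 0.0 else 1.0 / p_fail_per_cycle

# Toy usage: a 100-cell array, 99 adjacent pairs, all with the same skew sigma.
p = failure_probability_bound([0.1] * 99, tolerance=0.5)
print(f"P(failure per cycle) <= {p:.2e},  MTTF >= {mean_cycles_to_failure(p):.2e} cycles")
```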

13 citations

Proceedings ArticleDOI
26 Oct 1994
TL;DR: Modifications are presented to existing clustering and mapping algorithms that improve their efficiency and running time under the practical cost models adopted, and it is argued that new heuristics are needed that take more realistic models of communication costs into account.
Abstract: This paper presents a comparison study of popular clustering and mapping heuristics which are used to map task-flow graphs to message-passing multiprocessors. To this end, we use task-graphs which are representative of important scientific algorithms running on data-sets of practical interest. The annotation which assigns weights to nodes and edges of the task-graphs is realistic. It reflects current trends in processor, communication channel, and message-passing interface technology and takes into consideration hardware characteristics of state-of-the-art multiprocessors. Our experiments show that applying realistic models for task-graph annotation affects the effectiveness and functionality of clustering and mapping techniques. Therefore, new heuristics are necessary that will take into account more practical models of communication costs. We present modifications to existing clustering and mapping algorithms which improve their efficiency and running-time for the practical models adopted.
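As a point of reference for what mapping a weighted task graph while charging communication costs involves, here is a minimal greedy list-scheduling baseline: each task is placed on the processor that lets it start earliest, and an edge weight is charged only when producer and consumer land on different processors. It is an illustrative baseline with made-up weights, not one of the specific heuristics the paper compares.

```python
def greedy_map(tasks, deps, comp, comm, n_procs):
    """Greedy list-scheduling sketch: visit tasks in a valid precedence order
    and place each one on the processor that lets it start earliest, charging
    comm[(u, v)] only when u and v end up on different processors."""
    proc_free = [0.0] * n_procs   # time at which each processor becomes idle
    placed = {}                   # task -> (processor, finish time)
    for t in tasks:
        best = None
        for p in range(n_procs):
            # earliest time all predecessor data is available on processor p
            ready = max([placed[u][1] + (0 if placed[u][0] == p else comm[(u, t)])
                         for u in deps.get(t, [])] + [0.0])
            start = max(ready, proc_free[p])
            if best is None or start < best[1]:
                best = (p, start)
        p, start = best
        placed[t] = (p, start + comp[t])
        proc_free[p] = start + comp[t]
    return placed

# Tiny fork-join task graph: a -> {b, c} -> d, with made-up weights.
tasks = ["a", "b", "c", "d"]
deps = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
comp = {"a": 2, "b": 4, "c": 4, "d": 1}
comm = {("a", "b"): 3, ("a", "c"): 3, ("b", "d"): 3, ("c", "d"): 3}
print(greedy_map(tasks, deps, comp, comm, n_procs=2))
```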

13 citations


Cited by
Book
01 Nov 2008
TL;DR: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization, responding to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems.
Abstract: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization. It responds to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems. For this new edition the book has been thoroughly updated throughout. There are new chapters on nonlinear interior methods and derivative-free methods for optimization, both of which are used widely in practice and the focus of much current research. Because of the emphasis on practical methods, as well as the extensive illustrations and exercises, the book is accessible to a wide audience. It can be used as a graduate text in engineering, operations research, mathematics, computer science, and business. It also serves as a handbook for researchers and practitioners in the field. The authors have strived to produce a text that is pleasant to read, informative, and rigorous - one that reveals both the beautiful nature of the discipline and its practical side.

17,420 citations

Book
24 Aug 2012
TL;DR: This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
Abstract: Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package--PMTK (probabilistic modeling toolkit)--that is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.

8,059 citations

Journal ArticleDOI
TL;DR: In this paper, the authors considered factoring integers and finding discrete logarithms on a quantum computer and gave efficient randomized algorithms for these two problems, taking a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.
Abstract: A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time by at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems which are generally thought to be hard on a classical computer and which have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.
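The factoring algorithm rests on a classical reduction from factoring to order finding: given the multiplicative order r of a random a modulo n, gcd(a^(r/2) - 1, n) is a nontrivial factor with good probability. The quantum computer's job is the order finding; in the sketch below that step is replaced by an exponential-time brute-force search, so it only illustrates the reduction on tiny inputs.

```python
import math
import random

def multiplicative_order(a, n):
    """Smallest r > 0 with a**r == 1 (mod n), found by brute force.  This is
    the step a quantum computer performs efficiently; the exponential search
    here is only a stand-in for it on tiny inputs."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order_finding(n, max_tries=20):
    """Classical reduction behind Shor's algorithm: for random a coprime to n,
    an even order r with a**(r//2) != -1 (mod n) yields a nontrivial factor
    gcd(a**(r//2) - 1, n) with good probability."""
    for _ in range(max_tries):
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                  # lucky draw: a already shares a factor
        r = multiplicative_order(a, n)
        if r % 2 == 0:
            y = pow(a, r // 2, n)
            if y != n - 1:
                f = math.gcd(y - 1, n)
                if 1 < f < n:
                    return f
    return None

print(factor_via_order_finding(15))   # 3 or 5
print(factor_via_order_finding(91))   # 7 or 13
```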

7,427 citations

Journal ArticleDOI
TL;DR: This paper presents work on computing shape models that are computationally fast and invariant to basic transformations like translation, scaling, and rotation, and proposes shape matching using a feature called the shape context, which is descriptive of the shape of the object.
Abstract: We present a novel approach to measuring similarity between shapes and exploit it for object recognition. In our framework, the measurement of similarity is preceded by: (1) solving for correspondences between points on the two shapes; (2) using the correspondences to estimate an aligning transform. In order to solve the correspondence problem, we attach a descriptor, the shape context, to each point. The shape context at a reference point captures the distribution of the remaining points relative to it, thus offering a globally discriminative characterization. Corresponding points on two similar shapes will have similar shape contexts, enabling us to solve for correspondences as an optimal assignment problem. Given the point correspondences, we estimate the transformation that best aligns the two shapes; regularized thin-plate splines provide a flexible class of transformation maps for this purpose. The dissimilarity between the two shapes is computed as a sum of matching errors between corresponding points, together with a term measuring the magnitude of the aligning transform. We treat recognition in a nearest-neighbor classification framework as the problem of finding the stored prototype shape that is maximally similar to that in the image. Results are presented for silhouettes, trademarks, handwritten digits, and the COIL data set.
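A compact way to see the pipeline in the abstract: compute a log-polar histogram of the other points' relative positions at each point (the shape context), score point pairs with a chi-square histogram distance, and solve the resulting optimal assignment problem. The sketch below does exactly that and omits the thin-plate-spline alignment term; bin counts, radii, and the toy point sets are arbitrary choices, not the paper's parameters.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def shape_contexts(pts, n_r=5, n_theta=12):
    """Log-polar histogram, at each point, of where the remaining points lie
    relative to it -- the shape context descriptor.  Bin edges and counts are
    simple choices for this sketch, not the paper's exact parameters."""
    pts = np.asarray(pts, dtype=float)
    n = len(pts)
    diff = pts[None, :, :] - pts[:, None, :]          # diff[i, j] = pts[j] - pts[i]
    dist = np.linalg.norm(diff, axis=2)
    dist = dist / dist[dist > 0].mean()               # scale invariance
    angle = np.arctan2(diff[..., 1], diff[..., 0])
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1)
    hists = np.zeros((n, n_r * n_theta))
    for i in range(n):
        mask = np.arange(n) != i
        r_bin = np.searchsorted(r_edges, dist[i, mask]) - 1
        t_bin = ((angle[i, mask] + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
        ok = (r_bin >= 0) & (r_bin < n_r)
        np.add.at(hists[i], r_bin[ok] * n_theta + t_bin[ok], 1.0)
        hists[i] /= hists[i].sum() + 1e-12
    return hists

def match_shapes(pts_a, pts_b):
    """Chi-square cost between descriptors, then an optimal assignment of
    points.  The summed matching cost serves as a crude dissimilarity score;
    the paper's thin-plate-spline alignment term is omitted here."""
    ha, hb = shape_contexts(pts_a), shape_contexts(pts_b)
    cost = 0.5 * np.sum((ha[:, None] - hb[None, :]) ** 2 /
                        (ha[:, None] + hb[None, :] + 1e-12), axis=2)
    rows, cols = linear_sum_assignment(cost)
    return cols, cost[rows, cols].sum()

# Toy check: a point set against a translated and scaled copy of itself.
pts = np.array([[0, 0], [1, 0], [2, 0], [2, 1], [0, 2]], dtype=float)
moved = 3.0 * pts + np.array([5.0, -2.0])
matching, score = match_shapes(pts, moved)
print(matching, round(score, 6))          # identity matching, score ~ 0
```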

6,693 citations

MonographDOI
01 Jan 2006
TL;DR: This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms, into planning under differential constraints that arise when automating the motions of virtually any mechanical system.
Abstract: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.

6,340 citations