Author

Kenneth Steiglitz

Bio: Kenneth Steiglitz is an academic researcher from Princeton University. The author has contributed to research in topics such as signal processing and very-large-scale integration. The author has an h-index of 46, has co-authored 202 publications, and has received 14,495 citations. Previous affiliations of Kenneth Steiglitz include Telcordia Technologies and Northwestern University.


Papers
Journal ArticleDOI
TL;DR: This note discusses alternate implementations for digital filter sections substituting memory for logic and points out the possible advantage of doing so.
Abstract: A recently proposed method to implement digital filters using ROM can also be employed to implement multiplication by a constant. This note discusses alternate implementations for digital filter sections substituting memory for logic and points out the possible advantage of doing so.
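The memory-for-logic trade the note describes can be sketched in a few lines: multiplication by a fixed coefficient is replaced by table lookups on the nibbles of the input, so the only arithmetic left is an addition. The coefficient value and the 8-bit, two-nibble split below are illustrative assumptions, not parameters taken from the paper.

```python
# Constant multiplication via ROM lookup instead of a multiplier.
# Split an 8-bit input into two 4-bit nibbles and precompute the
# partial products; at "run time" only one addition remains.
C = 57  # hypothetical fixed filter coefficient

LOW_ROM = [C * n for n in range(16)]          # C * (low nibble)
HIGH_ROM = [(C * n) << 4 for n in range(16)]  # C * (high nibble), pre-shifted

def rom_multiply(x):
    """Multiply the 8-bit unsigned value x by the constant C
    using two table lookups and one add."""
    return HIGH_ROM[x >> 4] + LOW_ROM[x & 0xF]
```

In hardware, the two 16-entry tables play the role of the ROM, and the adder is the only logic left.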

5 citations

Journal ArticleDOI
TL;DR: A conjectured upper bound on the minimum distance is given that is easily computed given the impulse response of the channel, the number of inputs, and the input length, and is shown to be valid for a limited class of impulse response functions.
Abstract: Given a discrete-time, linear, shift-invariant channel with finite impulse response, the problem of designing finite-length input signals with bounded amplitude (l∞ norm) such that the corresponding output signals are maximally separated in amplitude (in the l∞ sense) is considered. In general, this is a nonconvex optimization problem, and appears to be computationally difficult. An optimization algorithm that seems to perform well is described. Optimized signal sets and associated minimum distances (minimum l∞ separation between two distinct channel outputs) are presented for some example impulse responses. A conjectured upper bound on the minimum distance is given that is easily computed from the impulse response of the channel, the number of inputs, and the input length. This upper bound is shown to be valid for a limited class of impulse response functions.
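The objective in the abstract, the minimum l∞ separation between distinct channel outputs, is straightforward to evaluate for a candidate signal set. A minimal sketch, using a made-up impulse response and ±1 inputs rather than the paper's optimized signal sets:

```python
import numpy as np

h = np.array([1.0, 0.5, -0.25])          # hypothetical FIR impulse response
U = np.array([[ 1,  1, -1,  1],          # candidate inputs with |u[n]| <= 1
              [ 1, -1,  1, -1],
              [-1,  1,  1,  1]], dtype=float)

# Channel outputs: each input convolved with the impulse response.
Y = np.array([np.convolve(u, h) for u in U])

# Minimum l_inf separation over all pairs of distinct outputs.
d_min = min(np.max(np.abs(Y[i] - Y[j]))
            for i in range(len(Y)) for j in range(i + 1, len(Y)))
```

An optimizer would then adjust the entries of U, subject to the amplitude bound, to push d_min up.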

4 citations

Journal ArticleDOI
TL;DR: The proposed estimate, while apparently not optimal, may still be useful if the unconstrained minimum is close to the constrained minimum; one way of checking this possibility for a particular set of observations would be to compute Koopmans' optimal state vector after finding the estimate.
Abstract: [There is a] constraint on the estimated state vector p which has been overlooked. Because the first N elements of p^(a) are the output, at N successive sampling instants, of a dynamic system described by a Kth-order difference equation [(1) of the original paper], each is completely determined by the values of the K preceding outputs, the K preceding inputs, and the current input. These 2K+1 values for each of the output variables in p^(a) are contained in p^(a-1) and the last K elements of p^(a). Therefore, while p^(1) can be selected arbitrarily to minimize the criterion D, only the last K elements of p^(a) (a > 1) can be so chosen. But the procedure described by Koopmans requires that p^(a) be constrained only by that equation. Hence (1) of the paper, Koopmans' result of minimization with respect to p^(a), is inapplicable; the vectors and the corresponding D are not correct unless the unconstrained minimum of (4) happens to coincide with the constrained minimum. The additional constraint can, of course, be ignored if the nonoverlapping observation sets are chosen to be so widely separated that each is independent of the preceding one. However, this choice means a much longer period of observation to obtain an estimate with a given variance, and implies an a priori assumption about the effective settling time of the system. The proposed estimate, while apparently not optimal, may still be useful if the unconstrained minimum is close to the constrained (true) minimum. One way of checking this possibility for a particular set of observations would be to compute Koopmans' optimal state vector after finding the estimate. These vectors specify an estimated set of inputs and outputs which can be substituted into (1) along with the coefficients specified by the estimate. It is correct, as L. E. McBride, Jr.
observes, that in Section IV of the paper referred to, the maximum likelihood estimates were determined without taking into account the linear constraints between elements of adjacent vectors. This was not overlooked by the author, but apparently the discussions of this point in Sections IV and VII require amplification. The estimates of Section IV utilize only a part of the information available from the u(n) and y(n) sequences to estimate the pulse transfer function coefficient vector, …

4 citations

Journal ArticleDOI
TL;DR: Computational results are presented which show that N. Zadeh's pathological examples for the simplex algorithm apparently take a number of pivots approximately proportional to the number of columns in the tableau when its column order is randomized.
Abstract: Computational results are presented which show that N. Zadeh's pathological examples for the simplex algorithm apparently take a number of pivots approximately proportional to the number of columns in the tableau when its column order is randomized.
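The experiment is easy to reproduce in spirit with a textbook tableau simplex and a pathological LP. The sketch below uses the 3-D Klee–Minty cube as a stand-in for Zadeh's examples (an assumption; these are not the paper's instances): with Dantzig's largest-coefficient rule, the natural column order forces all 2^3 − 1 = 7 pivots, while a permuted column order changes the pivot path without changing the optimum.

```python
import numpy as np

def simplex_max(c, A, b):
    """Tableau simplex for max c.x s.t. A x <= b, x >= 0 (with b >= 0),
    using Dantzig's most-negative-reduced-cost entering rule.
    Returns (optimal value, number of pivots)."""
    m, n = A.shape
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)           # slack variables form the initial basis
    T[:m, -1] = b
    T[-1, :n] = -np.asarray(c, dtype=float)
    pivots = 0
    while True:
        j = int(np.argmin(T[-1, :-1]))
        if T[-1, j] >= -1e-9:            # no negative reduced cost: optimal
            return T[-1, -1], pivots
        col = T[:m, j]
        ratios = np.where(col > 1e-9,
                          T[:m, -1] / np.where(col > 1e-9, col, 1.0),
                          np.inf)
        i = int(np.argmin(ratios))       # ratio test picks the leaving row
        if not np.isfinite(ratios[i]):
            raise RuntimeError("LP is unbounded")
        T[i] /= T[i, j]
        for r in range(m + 1):
            if r != i:
                T[r] = T[r] - T[r, j] * T[i]
        pivots += 1

# 3-D Klee-Minty cube: worst case for the largest-coefficient rule.
c = np.array([100.0, 10.0, 1.0])
A = np.array([[  1.0,  0.0, 0.0],
              [ 20.0,  1.0, 0.0],
              [200.0, 20.0, 1.0]])
b = np.array([1.0, 100.0, 10000.0])

val, piv = simplex_max(c, A, b)                     # natural column order
perm = [2, 0, 1]                                    # one fixed permutation
val_p, piv_p = simplex_max(c[perm], A[:, perm], b)  # permuted column order
```

The paper's randomized experiment repeats this over many random permutations and averages the pivot counts.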

4 citations

Journal ArticleDOI
TL;DR: It is shown that DR provides a natural lower bound on the time complexity of any distributed reconfiguration algorithm and that there is no difference between being FR and LR on dynamic graphs.
Abstract: The authors study fault-tolerant redundant structures for maintaining reliable arrays. In particular, they assume that the desired array (application graph) is embedded in a certain class of regular, bounded-degree graphs called dynamic graphs. The degree of reconfigurability (DR) and DR with distance (DR/sup d/) of a redundant graph are defined. When DR and DR/sup d/ are independent of the size of the application graph, the graph is finitely reconfigurable (FR) and locally reconfigurable (LR), respectively. It is shown that DR provides a natural lower bound on the time complexity of any distributed reconfiguration algorithm and that there is no difference between being FR and LR on dynamic graphs. It is also shown that if both local reconfigurability and a fixed level of reliability are to be maintained, a dynamic graph must be of a dimension at least one greater than the application graph. Thus, for example, a one-dimensional systolic array cannot be embedded in a one-dimensional dynamic graph without sacrificing either reliability or locality of reconfiguration.

4 citations


Cited by
Book
01 Nov 2008
TL;DR: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization, responding to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems.
Abstract: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization. It responds to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems. For this new edition the book has been thoroughly updated throughout. There are new chapters on nonlinear interior methods and derivative-free methods for optimization, both of which are used widely in practice and the focus of much current research. Because of the emphasis on practical methods, as well as the extensive illustrations and exercises, the book is accessible to a wide audience. It can be used as a graduate text in engineering, operations research, mathematics, computer science, and business. It also serves as a handbook for researchers and practitioners in the field. The authors have strived to produce a text that is pleasant to read, informative, and rigorous - one that reveals both the beautiful nature of the discipline and its practical side.

17,420 citations

Book
24 Aug 2012
TL;DR: This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
Abstract: Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package--PMTK (probabilistic modeling toolkit)--that is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.

8,059 citations

Journal ArticleDOI
TL;DR: This paper considers factoring integers and finding discrete logarithms on a quantum computer and gives efficient randomized algorithms for these two problems, each taking a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.
Abstract: A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time by at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems which are generally thought to be hard on a classical computer and which have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.
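The quantum part of the algorithm speeds up one subroutine, order-finding; the surrounding reduction from factoring to order-finding is entirely classical and small enough to sketch, with brute-force order-finding standing in for the quantum step:

```python
from math import gcd

def order(a, N):
    """Multiplicative order of a modulo N (brute force; this is the
    step a quantum computer would replace with period-finding)."""
    r, t = 1, a % N
    while t != 1:
        t = (t * a) % N
        r += 1
    return r

def factor_from_order(N, a):
    """Classical reduction used in Shor's algorithm: if r = order(a, N)
    is even and a**(r//2) != -1 (mod N), then gcd(a**(r//2) - 1, N)
    is a nontrivial factor of N."""
    g = gcd(a, N)
    if g != 1:
        return g                  # lucky: a already shares a factor with N
    r = order(a, N)
    if r % 2 == 1:
        return None               # bad base, retry with another a
    x = pow(a, r // 2, N)
    if x == N - 1:
        return None               # bad base, retry with another a
    g = gcd(x - 1, N)
    return g if 1 < g < N else gcd(x + 1, N)
```

For example, with N = 15 and base a = 7 the order is 4, and the reduction yields the factor 3.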

7,427 citations

Journal ArticleDOI
TL;DR: This paper presents work on computing shape models that are computationally fast and invariant to basic transformations like translation, scaling, and rotation, and proposes shape detection using a feature called the shape context, which is descriptive of the shape of the object.
Abstract: We present a novel approach to measuring similarity between shapes and exploit it for object recognition. In our framework, the measurement of similarity is preceded by: (1) solving for correspondences between points on the two shapes; (2) using the correspondences to estimate an aligning transform. In order to solve the correspondence problem, we attach a descriptor, the shape context, to each point. The shape context at a reference point captures the distribution of the remaining points relative to it, thus offering a globally discriminative characterization. Corresponding points on two similar shapes will have similar shape contexts, enabling us to solve for correspondences as an optimal assignment problem. Given the point correspondences, we estimate the transformation that best aligns the two shapes; regularized thin-plate splines provide a flexible class of transformation maps for this purpose. The dissimilarity between the two shapes is computed as a sum of matching errors between corresponding points, together with a term measuring the magnitude of the aligning transform. We treat recognition in a nearest-neighbor classification framework as the problem of finding the stored prototype shape that is maximally similar to that in the image. Results are presented for silhouettes, trademarks, handwritten digits, and the COIL data set.
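The descriptor itself is compact: for a reference point, histogram the positions of all other points in log-polar bins around it. A minimal numpy sketch (the bin counts and radial edges below are arbitrary choices, not the paper's exact parameters):

```python
import numpy as np

def shape_context(points, idx, n_r=5, n_theta=12):
    """Log-polar histogram of all other points relative to points[idx]."""
    p = points[idx]
    rest = np.delete(points, idx, axis=0)
    d = rest - p
    r = np.hypot(d[:, 0], d[:, 1])
    theta = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
    r = r / r.mean()                       # normalize for scale invariance
    # Log-spaced radial edges; points outside fall into the end bins.
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1)
    ri = np.clip(np.searchsorted(r_edges, r) - 1, 0, n_r - 1)
    ti = np.minimum((theta / (2 * np.pi / n_theta)).astype(int), n_theta - 1)
    H = np.zeros((n_r, n_theta))
    np.add.at(H, (ri, ti), 1)              # one count per neighboring point
    return H
```

Matching then compares such histograms pairwise (e.g. with a chi-squared distance) and solves the resulting optimal assignment problem.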

6,693 citations

MonographDOI
01 Jan 2006
TL;DR: This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms, and extends to planning under differential constraints that arise when automating the motions of virtually any mechanical system.
Abstract: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.

6,340 citations