Author

Michael I. Jordan

Other affiliations: Stanford University, Princeton University, Broad Institute
Bio: Michael I. Jordan is an academic researcher from the University of California, Berkeley. The author has contributed to research in topics including computer science and inference. The author has an h-index of 176 and has co-authored 1,016 publications receiving 216,204 citations. Previous affiliations of Michael I. Jordan include Stanford University and Princeton University.


Papers
Journal ArticleDOI
TL;DR: The data support the hypothesis that unconstrained motions, unlike compliant motions, are not programmed to follow a straight-line path in extrinsic space, and they provide a theoretical frame of reference within which some apparently contradictory results in the movement-generation literature may be explained.
Abstract: Two main questions were addressed in the present study. First, does the existence of kinematic regularities in the extrinsic space represent a general rule? Second, can the existence of extrinsic regularities be related to specific experimental situations implying, for instance, the generation of compliant motion (i.e. a motion constrained by external contact)? To address these two questions we studied the spatio-temporal characteristics of unconstrained and compliant movements. Five major differences were observed between these two types of movement: (1) the movement latency and movement duration were significantly longer in the compliant than in the unconstrained condition; (2) whereas the hand path was curved and variable according to movement direction for the unconstrained movements, it was straight and invariant for the compliant movements; (3) whereas the movement end-point distribution was roughly circular for the unconstrained movements, it was consistently elongated and typically oriented in the...

42 citations

Proceedings Article
02 Jun 2010
TL;DR: A type-based sampler is introduced that updates a block of variables identified by a type and spanning multiple sentences, yielding improvements on part-of-speech induction, word segmentation, and learning tree-substitution grammars.
Abstract: Most existing algorithms for learning latent-variable models---such as EM and existing Gibbs samplers---are token-based, meaning that they update the variables associated with one sentence at a time. The incremental nature of these methods makes them susceptible to local optima/slow mixing. In this paper, we introduce a type-based sampler, which updates a block of variables, identified by a type, which spans multiple sentences. We show improvements on part-of-speech induction, word segmentation, and learning tree-substitution grammars.
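
The exchangeability idea behind a type-based update can be illustrated on a toy model. The sketch below is hypothetical and much simpler than the paper's actual models: it assumes a made-up likelihood that depends only on m, the number of tokens of one word type assigned latent state 1 (via an illustrative `weight**m` factor). Instead of resampling tokens one at a time, it samples the count m for the whole type in one blocked move and then places the labels uniformly at random.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)

def type_based_update(n_tokens, weight):
    """Jointly resample binary latent states for all n_tokens occurrences
    of a single word type. In this toy model the unnormalized likelihood
    of an assignment depends only on m, the number of tokens in state 1,
    so we can enumerate m = 0..n_tokens, sample it exactly, and assign the
    m state-1 labels uniformly at random -- one blocked move instead of
    n_tokens token-level Gibbs steps."""
    # p(m) proportional to (number of assignments with count m) * weight**m
    logp = np.array([np.log(comb(n_tokens, m)) + m * np.log(weight)
                     for m in range(n_tokens + 1)])
    p = np.exp(logp - logp.max())
    m = rng.choice(n_tokens + 1, p=p / p.sum())
    z = np.zeros(n_tokens, dtype=int)
    z[rng.choice(n_tokens, size=m, replace=False)] = 1
    return z

print(type_based_update(10, weight=2.0))
```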

41 citations

Journal ArticleDOI
TL;DR: The authors characterize responses of Escherichia coli to slow growth per se that are not nutrient-specific, responses that presumably help to coordinate the slowing of growth and, in the case of down-regulated genes, to conserve scarce N or S for other purposes.
Abstract: We previously characterized nutrient-specific transcriptional changes in Escherichia coli upon limitation of nitrogen (N) or sulfur (S). These global homeostatic responses presumably minimize the slowing of growth under a particular condition. Here, we characterize responses to slow growth per se that are not nutrient-specific. The latter help to coordinate the slowing of growth, and in the case of down-regulated genes, to conserve scarce N or S for other purposes. Three effects were particularly striking. First, although many genes under control of the stationary phase sigma factor RpoS were induced and were apparently required under S-limiting conditions, one or more was inhibitory under N-limiting conditions, or RpoS itself was inhibitory. RpoS was, however, universally required during nutrient downshifts. Second, limitation for N and S greatly decreased expression of genes required for synthesis of flagella and chemotaxis, and the motility of E. coli was decreased. Finally, unlike the response of all other met genes, transcription of metE was decreased under S- and N-limiting conditions. The metE product, a methionine synthase, is one of the most abundant proteins in E. coli grown aerobically in minimal medium. Responses of metE to S and N limitation pointed to an interesting physiological rationale for the regulatory subcircuit controlled by the methionine activator MetR.

41 citations

Journal ArticleDOI
TL;DR: This work makes use of a careful form of localization in the associated empirical process, and develops a recursive argument to progressively sharpen the statistical rate of the EM algorithm in over-specified settings.
Abstract: A line of recent work has analyzed the behavior of the Expectation-Maximization (EM) algorithm in the well-specified setting, in which the population likelihood is locally strongly concave around its maximizing argument. Examples include suitably separated Gaussian mixture models and mixtures of linear regressions. We consider over-specified settings in which the number of fitted components is larger than the number of components in the true distribution. Such mis-specified settings can lead to singularity in the Fisher information matrix, and moreover, the maximum likelihood estimator based on $n$ i.i.d. samples in $d$ dimensions can have a nonstandard $\mathcal{O}((d/n)^{\frac{1}{4}})$ rate of convergence. Focusing on the simple setting of two-component mixtures fit to a $d$-dimensional Gaussian distribution, we study the behavior of the EM algorithm both when the mixture weights are different (unbalanced case), and are equal (balanced case). Our analysis reveals a sharp distinction between these two cases: in the former, the EM algorithm converges geometrically to a point at Euclidean distance of $\mathcal{O}((d/n)^{\frac{1}{2}})$ from the true parameter, whereas in the latter case, the convergence rate is exponentially slower, and the fixed point has a much lower $\mathcal{O}((d/n)^{\frac{1}{4}})$ accuracy. Analysis of this singular case requires the introduction of some novel techniques: in particular, we make use of a careful form of localization in the associated empirical process, and develop a recursive argument to progressively sharpen the statistical rate.
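
A minimal sketch of the balanced symmetric case the paper analyzes: fit the two-component location mixture 0.5 N(θ, I) + 0.5 N(−θ, I) to data drawn from a single standard Gaussian, so the true θ is 0 and the fit is over-specified. The sample sizes, initialization, and iteration count here are illustrative; the E- and M-steps follow from the unit-covariance symmetric mixture.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 2, 5000
X = rng.standard_normal((n, d))  # data from one standard Gaussian: the fit below is over-specified

# balanced symmetric fit 0.5*N(theta, I) + 0.5*N(-theta, I); the true theta is 0
theta = 0.5 * rng.standard_normal(d)  # random initialization
for _ in range(200):
    # E-step: responsibility of the +theta component;
    # log N(x|theta,I) - log N(x|-theta,I) = 2 x.theta for unit covariance
    w = 1.0 / (1.0 + np.exp(-2.0 * (X @ theta)))
    # M-step: theta = (1/n) * sum_i (2 w_i - 1) x_i, the weighted mean for the symmetric mixture
    theta = ((2.0 * w - 1.0)[:, None] * X).mean(axis=0)

# shrinks slowly toward 0: the balanced case converges at the slower rate,
# and the fixed point has accuracy only of order (d/n)^(1/4)
print(np.linalg.norm(theta))
```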

41 citations

Journal ArticleDOI
TL;DR: A hybrid neural network model of aimed arm movements, consisting of a feedforward controller and a postural controller, is proposed; it provides a candidate neural mechanism for the stochastic variability in the time course of the feedforward motor command.
Abstract: We propose a hybrid neural network model of aimed arm movements that consists of a feedforward controller and a postural controller. The cascade neural network of Kawato, Maeda, Uno, and Suzuki (1990) was employed as a computational implementation of the feedforward controller. This network computes feedforward motor commands based on a minimum torque-change criterion. If the weighting parameter of the smoothness criterion is fixed and the number of relaxation iterations is rather small, the cascade model cannot calculate the exact torque, and the hand does not reach the desired target by using the feedforward control alone. Thus, one observes an error between the final position and the desired target location. By using a fixed weighting parameter value and a limited iteration number to simulate target-directed arm movements, we found that the cascade model generated a planning time–accuracy trade-off, and a quasi–power-law type of speed–accuracy trade-off. The model provides a candidate neural m...

40 citations


Cited by
Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
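
A minimal PyTorch sketch of the kind of Inception block the paper describes: four parallel branches (1×1, 3×3, and 5×5 convolutions plus max-pooling) concatenated along the channel axis, with 1×1 convolutions reducing channel counts before the expensive filters to keep the computational budget constant. The channel counts below match the commonly cited figures for GoogLeNet's first Inception block, but treat them as illustrative.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Sketch of a GoogLeNet-style Inception block: parallel 1x1, 3x3, 5x5
    convolution branches and a pooling branch, with 1x1 'bottleneck'
    convolutions reducing dimensionality before the 3x3 and 5x5 filters."""
    def __init__(self, c_in, c1, c3r, c3, c5r, c5, cp):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(c_in, c1, 1), nn.ReLU(inplace=True))
        self.b3 = nn.Sequential(nn.Conv2d(c_in, c3r, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(c3r, c3, 3, padding=1), nn.ReLU(inplace=True))
        self.b5 = nn.Sequential(nn.Conv2d(c_in, c5r, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(c5r, c5, 5, padding=2), nn.ReLU(inplace=True))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(c_in, cp, 1), nn.ReLU(inplace=True))

    def forward(self, x):
        # concatenate the four branches along the channel dimension
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

block = InceptionModule(192, 64, 96, 128, 16, 32, 32)
out = block(torch.randn(1, 192, 28, 28))  # -> shape (1, 256, 28, 28)
```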

40,257 citations

Book
18 Nov 2016
TL;DR: Deep learning as mentioned in this paper is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Book
01 Jan 1988
TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the history of the field's intellectual foundations to the most recent developments and applications.
Abstract: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.
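
As a taste of the temporal-difference methods covered in Part II, here is a sketch of tabular Q-learning on a made-up five-state chain; the environment, reward, and constants are illustrative, not an example from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy chain MDP: states 0..4, actions 0 = left, 1 = right,
# reward +1 on reaching the terminal state 4
n_states, n_actions, gamma, alpha = 5, 2, 0.9, 0.1
Q = np.zeros((n_states, n_actions))

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s2 == n_states - 1 else 0.0
    return s2, reward, s2 == n_states - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        a = rng.integers(n_actions)            # uniform behavior policy: Q-learning is off-policy
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * Q[s2].max())
        Q[s, a] += alpha * (target - Q[s, a])  # one-step temporal-difference update
        s = s2

print(Q.argmax(axis=1))  # greedy policy: 1 ("right") in every non-terminal state
```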

37,989 citations

Journal ArticleDOI
TL;DR: This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Abstract: We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
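
The three-level generative process is compact enough to sketch directly. In the toy below every size and hyperparameter is illustrative: each document draws topic proportions from a Dirichlet, and each word draws a topic and then a word from that topic's distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative sizes and hyperparameters, not values from the paper
n_topics, vocab_size, n_docs, doc_len = 3, 20, 5, 50
alpha = np.full(n_topics, 0.5)  # Dirichlet prior over per-document topic mixtures
topics = rng.dirichlet(np.full(vocab_size, 0.1), size=n_topics)  # topic-word distributions

docs = []
for _ in range(n_docs):
    theta = rng.dirichlet(alpha)  # per-document topic proportions (the latent Dirichlet variable)
    words = []
    for _ in range(doc_len):
        z = rng.choice(n_topics, p=theta)        # draw a topic for this word position
        w = rng.choice(vocab_size, p=topics[z])  # draw the word from that topic
        words.append(w)
    docs.append(words)
```

Inference inverts this process: the variational EM the abstract mentions approximates the intractable posterior over the per-document proportions and per-word topic assignments.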

30,570 citations

Proceedings Article
03 Jan 2001
TL;DR: This paper proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model, also known as probabilistic latent semantic indexing (pLSI).
Abstract: We propose a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams [6], and Hofmann's aspect model, also known as probabilistic latent semantic indexing (pLSI) [3]. In the context of text modeling, our model posits that each document is generated as a mixture of topics, where the continuous-valued mixture proportions are distributed as a latent Dirichlet random variable. Inference and learning are carried out efficiently via variational algorithms. We present empirical results on applications of this model to problems in text modeling, collaborative filtering, and text classification.

25,546 citations