Author

Uri Nodelman

Bio: Uri Nodelman is an academic researcher from Stanford University. The author has contributed to research in topics: Bayesian network & Markov process. The author has an h-index of 12, co-authored 16 publications receiving 5096 citations.

Papers
01 Jan 2011
TL;DR: To understand the central claims of evolutionary psychology, the authors require an understanding of some key concepts in evolutionary biology, cognitive psychology, philosophy of science and philosophy of mind.
Abstract: Evolutionary psychology is one of many biologically informed approaches to the study of human behavior. Along with cognitive psychologists, evolutionary psychologists propose that much, if not all, of our behavior can be explained by appeal to internal psychological mechanisms. What distinguishes evolutionary psychologists from many cognitive psychologists is the proposal that the relevant internal mechanisms are adaptations (products of natural selection) that helped our ancestors get around the world, survive and reproduce. To understand the central claims of evolutionary psychology we require an understanding of some key concepts in evolutionary biology, cognitive psychology, philosophy of science and philosophy of mind. Philosophers are interested in evolutionary psychology for a number of reasons. For philosophers of science (mostly philosophers of biology), evolutionary psychology provides a critical target. There is a broad consensus among philosophers of science that evolutionary psychology is a deeply flawed enterprise. For philosophers of mind and cognitive science, evolutionary psychology has been a source of empirical hypotheses about cognitive architecture and specific components of that architecture. Philosophers of mind are also critical of evolutionary psychology, but their criticisms are not as all-encompassing as those presented by philosophers of biology. Evolutionary psychology is also invoked by philosophers interested in moral psychology, both as a source of empirical hypotheses and as a critical target.

4,670 citations

Posted Content
TL;DR: A probabilistic semantics for the language in terms of the generative model a CTBN defines over sequences of events is presented, and an algorithm for approximate inference which takes advantage of the structure within the process is provided.
Abstract: In this paper we present a language for finite state continuous time Bayesian networks (CTBNs), which describe structured stochastic processes that evolve over continuous time. The state of the system is decomposed into a set of local variables whose values change over time. The dynamics of the system are described by specifying the behavior of each local variable as a function of its parents in a directed (possibly cyclic) graph. The model specifies, at any given point in time, the distribution over two aspects: when a local variable changes its value and the next value it takes. These distributions are determined by the variable's current value and the current values of its parents in the graph. More formally, each variable is modelled as a finite state continuous time Markov process whose transition intensities are functions of its parents. We present a probabilistic semantics for the language in terms of the generative model a CTBN defines over sequences of events. We list types of queries one might ask of a CTBN, discuss the conceptual and computational difficulties associated with exact inference, and provide an algorithm for approximate inference which takes advantage of the structure within the process.

296 citations
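The abstract above is concrete enough to sketch in code. Below is a minimal illustration of conditional intensity matrices and forward sampling for a two-variable CTBN; the variables, rates, and function names are invented for this sketch and do not come from the paper.

```python
import numpy as np

# Hypothetical two-node CTBN: X (binary, no parents) and Y (binary,
# parent X).  Each variable has a conditional intensity matrix (CIM) per
# parent instantiation: off-diagonal entries are transition intensities,
# and each diagonal entry makes its row sum to zero.
Q_X = np.array([[-1.0,  1.0],
                [ 2.0, -2.0]])
Q_Y = {0: np.array([[-0.1,  0.1],
                    [ 5.0, -5.0]]),
       1: np.array([[-4.0,  4.0],
                    [ 0.5, -0.5]])}   # Y's rates depend on X's value

def sample_trajectory(t_end, seed=0):
    """Forward-sample a trajectory of (time, x, y) events.

    Each variable waits an exponential time with rate -Q[state, state]
    given the current parent value; the earliest transition fires.
    Re-sampling both clocks every step is valid because exponential
    waiting times are memoryless.
    """
    rng = np.random.default_rng(seed)
    t, x, y = 0.0, 0, 0
    events = [(t, x, y)]
    while t < t_end:
        dt_x = rng.exponential(1.0 / -Q_X[x, x])
        dt_y = rng.exponential(1.0 / -Q_Y[x][y, y])
        if dt_x < dt_y:
            t, x = t + dt_x, 1 - x    # X flips (binary, so 1 - x)
        else:
            t, y = t + dt_y, 1 - y    # Y flips
        events.append((t, x, y))
    return events

print(sample_trajectory(5.0))
```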

Book
01 Jan 2007
TL;DR: In this article, a language for finite state continuous time Bayesian networks (CTBNs) is presented, which describes structured stochastic processes that evolve over continuous time, where the dynamics of the system are described by specifying the behavior of each local variable as a function of its parents in a directed (possibly cyclic) graph.
Abstract: In this paper we present a language for finite state continuous time Bayesian networks (CTBNs), which describe structured stochastic processes that evolve over continuous time. The state of the system is decomposed into a set of local variables whose values change over time. The dynamics of the system are described by specifying the behavior of each local variable as a function of its parents in a directed (possibly cyclic) graph. The model specifies, at any given point in time, the distribution over two aspects: when a local variable changes its value and the next value it takes. These distributions are determined by the variable's current value and the current values of its parents in the graph. More formally, each variable is modelled as a finite state continuous time Markov process whose transition intensities are functions of its parents. We present a probabilistic semantics for the language in terms of the generative model a CTBN defines over sequences of events. We list types of queries one might ask of a CTBN, discuss the conceptual and computational difficulties associated with exact inference, and provide an algorithm for approximate inference which takes advantage of the structure within the process.

178 citations

Proceedings Article
07 Aug 2002
TL;DR: In this article, the authors define a conjugate prior for continuous-time Bayesian networks and show how it can be used both for Bayesian parameter estimation and as the basis of a Bayesian score for structure learning.
Abstract: Continuous time Bayesian networks (CTBNs) describe structured stochastic processes with finitely many states that evolve over continuous time. A CTBN is a directed (possibly cyclic) dependency graph over a set of variables, each of which represents a finite state continuous time Markov process whose transition model is a function of its parents. We address the problem of learning parameters and structure of a CTBN from fully observed data. We define a conjugate prior for CTBNs and show how it can be used both for Bayesian parameter estimation and as the basis of a Bayesian score for structure learning. Because acyclicity is not a constraint in CTBNs, we can show that the structure learning problem is significantly easier, both in theory and in practice, than structure learning for dynamic Bayesian networks (DBNs). Furthermore, as CTBNs can tailor the parameters and dependency structure to the different time granularities of the evolution of different variables, they can provide a better fit to continuous-time processes than DBNs with a fixed time granularity.

156 citations
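As a rough illustration of the Bayesian parameter estimation this abstract describes: with fully observed data, the sufficient statistics for each variable are the time spent in each state and the counts of transitions between states, and a Gamma prior over each intensity gives a closed-form posterior mean. The hyperparameter names and the toy counts below are assumptions for the sketch, not values from the paper.

```python
import numpy as np

def posterior_intensity_matrix(M, T, alpha=1.0, tau=1.0, alpha_theta=1.0):
    """Posterior-mean conditional intensity matrix from fully observed data.

    M[x, x'] counts x -> x' transitions and T[x] is total time spent in
    state x (both for one fixed parent instantiation).  The leaving rate
    q_x gets a Gamma(alpha, tau) prior, so its posterior mean is
    (alpha + sum_x' M[x, x']) / (tau + T[x]); the next-state distribution
    gets a Dirichlet prior with pseudo-count alpha_theta per state.
    """
    n = len(T)
    Q = np.zeros((n, n))
    for x in range(n):
        m_x = M[x].sum()                      # transitions out of x
        q_x = (alpha + m_x) / (tau + T[x])    # posterior mean leaving rate
        theta = M[x] + alpha_theta            # Dirichlet posterior mean
        theta[x] = 0.0                        # no self-transitions
        theta = theta / theta.sum()
        Q[x] = q_x * theta                    # off-diagonal intensities
        Q[x, x] = -q_x                        # rows sum to zero
    return Q

# Toy sufficient statistics for a 3-state variable (invented numbers).
M = np.array([[0., 4., 1.], [2., 0., 6.], [3., 3., 0.]])
T = np.array([10.0, 3.0, 7.0])
print(posterior_intensity_matrix(M, T))
```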

Proceedings Article
26 Jul 2005
TL;DR: In this paper, the authors address the problem of approximate inference in continuous time Bayesian networks (CTBNs), allowing for general queries conditioned on evidence over continuous time intervals and at discrete time points.
Abstract: Continuous time Bayesian networks (CTBNs) describe structured stochastic processes with finitely many states that evolve over continuous time. A CTBN is a directed (possibly cyclic) dependency graph over a set of variables, each of which represents a finite state continuous time Markov process whose transition model is a function of its parents. As shown previously, exact inference in CTBNs is intractable. We address the problem of approximate inference, allowing for general queries conditioned on evidence over continuous time intervals and at discrete time points. We show how CTBNs can be parameterized within the exponential family, and use that insight to develop a message passing scheme over cluster graphs that allows us to apply expectation propagation to CTBNs. The clusters in our cluster graph do not contain distributions over the cluster variables at individual time points, but distributions over trajectories of the variables throughout a duration. Thus, unlike discrete time temporal models such as dynamic Bayesian networks, we can adapt the time granularity at which we reason for different variables and in different conditions.

82 citations
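To make the intractability claim concrete: a CTBN's joint behavior is itself a single homogeneous Markov process over the product of the local state spaces, so exact inference scales with the size of that product. The sketch below, reusing the invented two-node example from earlier, amalgamates the local intensity matrices into a joint one and computes an exact marginal with a matrix exponential; with N binary variables the joint matrix would be 2**N by 2**N.

```python
import numpy as np
from scipy.linalg import expm

# Invented two-node example: X with intensity matrix Q_X, Y whose
# intensity matrix depends on X's current value.
Q_X = np.array([[-1.0, 1.0], [2.0, -2.0]])
Q_Y = {0: np.array([[-0.1, 0.1], [5.0, -5.0]]),
       1: np.array([[-4.0, 4.0], [0.5, -0.5]])}

def amalgamate(Q_X, Q_Y):
    """Joint 4x4 intensity over states (x, y), indexed as i = 2*x + y.

    Only one variable transitions at a time, so entries that would
    change both x and y stay zero.
    """
    Q = np.zeros((4, 4))
    for x in range(2):
        for y in range(2):
            i = 2 * x + y
            Q[i, 2 * (1 - x) + y] = Q_X[x, 1 - x]     # X flips
            Q[i, 2 * x + (1 - y)] = Q_Y[x][y, 1 - y]  # Y flips
            Q[i, i] = -Q[i].sum()                     # row sums to zero
    return Q

Q = amalgamate(Q_X, Q_Y)
p0 = np.array([1.0, 0.0, 0.0, 0.0])   # start at (x=0, y=0)
print(p0 @ expm(Q * 2.0))             # exact joint marginal at t = 2.0
```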


Cited by
01 Jan 1964
TL;DR: The authors develop a theory of remembering, examine the notion of a collective unconscious, and treat remembering as a study in social psychology.
Abstract: Part I. Experimental Studies: 1. Experiment in psychology; 2. Experiments on perceiving; 3. Experiments on imaging; 4-8. Experiments on remembering: (a) the method of description, (b) the method of repeated reproduction, (c) the method of picture writing, (d) the method of serial reproduction, (e) the method of serial reproduction: picture material; 9. Perceiving, recognizing, remembering; 10. A theory of remembering; 11. Images and their functions; 12. Meaning. Part II. Remembering as a Study in Social Psychology: 13. Social psychology; 14. Social psychology and the matter of recall; 15. Social psychology and the manner of recall; 16. Conventionalism; 17. The notion of a collective unconscious; 18. The basis of social recall; 19. A summary and some conclusions.

5,690 citations

Journal ArticleDOI
TL;DR: Polanyi is at pains to expunge what he believes to be the false notion contained in the contemporary view of science, which treats it as an objective and basically impersonal discipline.
Abstract: The Study of Man. By Michael Polanyi. Price, $1.75. Pp. 102. University of Chicago Press, 5750 Ellis Ave., Chicago 37, 1959. One subtitle to Polanyi's challenging and fascinating book might be The Evolution and Natural History of Error, for Polanyi is at pains to expunge what he believes to be the false notion contained in the contemporary view of science which treats it as an objective and basically impersonal discipline. According to Polanyi, not only is this a radical and important error, but it is harmful to the objectives of science itself. Another subtitle could be Farewell to Detachment, for in place of cold objectivity he develops the idea that science is necessarily intensely personal. It is a human endeavor and human point of view which cannot be divorced from nor uprooted out of the human matrix from which it arises and in which it works. For a good while…

2,248 citations

Journal ArticleDOI
TL;DR: It is argued that while this law will pose large challenges for industry, it highlights opportunities for computer scientists to take the lead in designing algorithms and evaluation frameworks which avoid discrimination and enable explanation.
Abstract: We summarize the potential impact that the European Union’s new General Data Protection Regulation will have on the routine use of machine learning algorithms. Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which “significantly affect” users. The law will also effectively create a “right to explanation,” whereby a user can ask for an explanation of an algorithmic decision that was made about them. We argue that while this law will pose large challenges for industry, it highlights opportunities for computer scientists to take the lead in designing algorithms and evaluation frameworks which avoid discrimination and enable explanation.

1,500 citations

Proceedings ArticleDOI
20 Aug 2006
TL;DR: An LDA-style topic model is presented that captures not only the low-dimensional structure of data, but also how the structure changes over time, showing improved topics, better timestamp prediction, and interpretable trends.
Abstract: This paper presents an LDA-style topic model that captures not only the low-dimensional structure of data, but also how the structure changes over time. Unlike other recent work that relies on Markov assumptions or discretization of time, here each topic is associated with a continuous distribution over timestamps, and for each generated document, the mixture distribution over topics is influenced by both word co-occurrences and the document's timestamp. Thus, the meaning of a particular topic can be relied upon as constant, but the topics' occurrence and correlations change significantly over time. We present results on nine months of personal email, 17 years of NIPS research papers and over 200 years of presidential state-of-the-union addresses, showing improved topics, better timestamp prediction, and interpretable trends.

1,327 citations
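A minimal sketch of the generative process this abstract describes: each topic pairs a word distribution with a continuous distribution over normalized timestamps, here a per-topic Beta distribution (the choice used in the published model, to my knowledge). The topic count, vocabulary size, and hyperparameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
K, V = 3, 1000                    # topics and vocabulary size (invented)
alpha = 0.5                       # Dirichlet prior on topic mixtures
phi = rng.dirichlet(np.ones(V) * 0.01, size=K)  # per-topic word dists
psi = np.array([[2.0, 8.0],       # per-topic Beta(a, b) over time in
                [5.0, 5.0],       # [0, 1]: an early, a middle, and a
                [8.0, 2.0]])      # late topic

def generate_document(n_words=50):
    """Generate one document: words and per-token timestamps."""
    theta = rng.dirichlet(np.full(K, alpha))   # document's topic mixture
    words, times = [], []
    for _ in range(n_words):
        z = rng.choice(K, p=theta)             # topic assignment
        words.append(rng.choice(V, p=phi[z]))  # word drawn from topic
        times.append(rng.beta(*psi[z]))        # timestamp drawn from topic
    return words, times

words, times = generate_document()
print(len(words), round(float(np.mean(times)), 3))
```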