Author

Ugo Montanari

Bio: Ugo Montanari is an academic researcher from the University of Pisa. The author has contributed to research in topics including Petri nets and process calculus. The author has an h-index of 57 and has co-authored 410 publications receiving 14,473 citations. Previous affiliations of Ugo Montanari include the Polytechnic University of Milan and the University of Maryland, College Park.


Papers
Journal ArticleDOI
TL;DR: A new unification algorithm, characterized by having the acyclicity test efficiently embedded into it, is derived from the nondeterministic one, and a PASCAL implementation is given.
Abstract: The unification problem in first-order predicate calculus is described in general terms as the solution of a system of equations, and a nondeterministic algorithm is given. A new unification algorithm, characterized by having the acyclicity test efficiently embedded into it, is derived from the nondeterministic one, and a PASCAL implementation is given. A comparison with other well-known unification algorithms shows that the algorithm described here performs well in all cases.
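To make the equational view concrete, here is a minimal Python sketch of syntactic unification over a set of term equations with an explicit occurs (acyclicity) check. It is an illustrative, textbook-style version, not the efficient algorithm or the Pascal implementation from the paper; the term representation and all function names are our own.

```python
# Terms: variables are lowercase strings ('x'), constants uppercase strings ('A'),
# compound terms are tuples ('f', t1, ..., tn).

def is_var(t):
    return isinstance(t, str) and t.islower()

def walk(t, subst):
    # Follow variable bindings until reaching an unbound variable or a non-variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # Acyclicity / occurs check: does variable v occur in term t?
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, subst) for arg in t[1:])
    return False

def unify(equations):
    """Solve a list of term equations; return a substitution dict or None."""
    subst = {}
    todo = list(equations)
    while todo:
        s, t = todo.pop()
        s, t = walk(s, subst), walk(t, subst)
        if s == t:
            continue
        if is_var(s):
            if occurs(s, t, subst):        # cyclic solution -> no unifier
                return None
            subst[s] = t
        elif is_var(t):
            todo.append((t, s))
        elif isinstance(s, tuple) and isinstance(t, tuple) \
                and s[0] == t[0] and len(s) == len(t):
            todo.extend(zip(s[1:], t[1:]))  # decompose f(s1..sn) = f(t1..tn)
        else:
            return None                     # clash of function symbols
    return subst

# Example: unify f(x, g(A)) with f(g(y), g(y))
# -> {'y': 'A', 'x': ('g', 'y')}, i.e. x = g(A) once y's binding is applied.
print(unify([(('f', 'x', ('g', 'A')), ('f', ('g', 'y'), ('g', 'y')))]))
```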

875 citations

Journal ArticleDOI
TL;DR: It is shown how this framework can be used to model both old and new constraint solving and optimization schemes, thus allowing one to both formally justify many informally taken choices in existing schemes, and to prove that local consistency techniques can be used also in newly defined schemes.
Abstract: We introduce a general framework for constraint satisfaction and optimization where classical CSPs, fuzzy CSPs, weighted CSPs, partial constraint satisfaction, and others can be easily cast. The framework is based on a semiring structure, where the set of the semiring specifies the values to be associated with each tuple of values of the variable domain, and the two semiring operations (+ and ×) model constraint projection and combination respectively. Local consistency algorithms, as usually used for classical CSPs, can be exploited in this general framework as well, provided that certain conditions on the semiring operations are satisfied. We then show how this framework can be used to model both old and new constraint solving and optimization schemes, thus allowing one to both formally justify many informally taken choices in existing schemes, and to prove that local consistency techniques can be used also in newly defined schemes.
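As a concrete illustration of the framework (not code from the paper), the sketch below fixes a c-semiring, combines constraints with its × operation, and projects with its + operation; instantiating the semiring with booleans, [0,1] preference levels, or costs recovers classical, fuzzy, and weighted CSPs. All names and the example constraints are ours.

```python
from itertools import product

class Semiring:
    def __init__(self, plus, times, zero, one):
        self.plus, self.times, self.zero, self.one = plus, times, zero, one

# Classical CSPs: booleans with OR / AND.
BOOL = Semiring(lambda a, b: a or b, lambda a, b: a and b, False, True)
# Fuzzy CSPs: preference levels in [0, 1] with max / min.
FUZZY = Semiring(max, min, 0.0, 1.0)
# Weighted CSPs: costs with min / +.
WEIGHT = Semiring(min, lambda a, b: a + b, float('inf'), 0.0)

def best_level(sr, domains, constraints):
    """Combine all constraints with 'x' and project onto the empty scope with '+':
    the result is the best consistency level of the whole problem."""
    best = sr.zero
    variables = sorted(domains)
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        level = sr.one
        for c in constraints:
            level = sr.times(level, c(assignment))   # combination
        best = sr.plus(best, level)                  # projection
    return best

# Example (fuzzy CSP): x, y in {0, 1}; prefer x != y, mildly prefer y == 1.
domains = {'x': [0, 1], 'y': [0, 1]}
constraints = [
    lambda a: 1.0 if a['x'] != a['y'] else 0.2,
    lambda a: 1.0 if a['y'] == 1 else 0.7,
]
print(best_level(FUZZY, domains, constraints))   # -> 1.0 (achieved by x=0, y=1)
```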

709 citations

Book
01 Feb 1997
TL;DR: The algebraic approaches to graph transformation are based on the concept of gluing of graphs, modelled by pushouts in suitable categories of graphs and graph morphisms. As discussed by the authors, this allows one not only to give an explicit algebraic or set-theoretical description of the constructions, but also to use concepts and results from category theory to build up a rich theory and to give elegant proofs even in complex situations.
Abstract: The algebraic approaches to graph transformation are based on the concept of gluing of graphs, modelled by pushouts in suitable categories of graphs and graph morphisms. This allows one not only to give an explicit algebraic or set theoretical description of the constructions, but also to use concepts and results from category theory in order to build up a rich theory and to give elegant proofs even in complex situations. In this chapter we start with an overview of the basic notions common to the two algebraic approaches, the "double-pushout (DPO) approach" and the "single-pushout (SPO) approach"; next we present the classical theory and some recent developments of the double-pushout approach. The next chapter is devoted instead to the single-pushout approach, and it is closed by a comparison between the two approaches. -- This document will appear as a chapter of "The Handbook of Graph Grammars, Volume I: Foundations", G. Rozenberg (Ed.), World Scientific.
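For readers unfamiliar with the gluing construction, here is a small, set-theoretic Python sketch of one double-pushout rewrite step under simplifying assumptions (injective match, nodes as concrete identifiers, so the identification condition holds trivially). It only illustrates the dangling-edge condition and the delete-then-glue pattern, not the categorical treatment of the chapter; all names are ours.

```python
def dpo_step(G, rule, match):
    """G = (nodes, edges) with edges a set of (src, dst) pairs.
    rule = (L, K, R): left-hand side, interface, right-hand side, each a
    (nodes, edges) pair with K a common subgraph of L and R.
    match maps L-nodes injectively to G-nodes. Returns the rewritten graph,
    or None if the gluing (dangling-edge) condition fails."""
    (Ln, Le), (Kn, Ke), (Rn, Re) = rule
    Gn, Ge = set(G[0]), set(G[1])

    # Images in G of the items the rule deletes (in L but not in K).
    del_nodes = {match[v] for v in Ln - Kn}
    del_edges = {(match[u], match[v]) for (u, v) in Le - Ke}

    # Dangling condition: a deleted node must not leave behind edges
    # that the rule itself does not delete.
    for (u, v) in Ge - del_edges:
        if u in del_nodes or v in del_nodes:
            return None

    # Pushout complement: the context graph D obtained by deleting L \ K.
    Dn, De = Gn - del_nodes, Ge - del_edges

    # Second pushout: glue in the items created by the rule (R \ K),
    # using fresh identifiers for created nodes.
    created = {v: ('new', v) for v in Rn - Kn}
    embed = {**{v: match[v] for v in Kn}, **created}
    Hn = Dn | set(created.values())
    He = De | {(embed[u], embed[v]) for (u, v) in Re - Ke}
    return Hn, He

# Example rule: replace an edge a->b by a path a->m->b.
L = ({'a', 'b'}, {('a', 'b')})
K = ({'a', 'b'}, set())
R = ({'a', 'b', 'm'}, {('a', 'm'), ('m', 'b')})
G = ({1, 2, 3}, {(1, 2), (2, 3)})
print(dpo_step(G, (L, K, R), {'a': 1, 'b': 2}))
```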

555 citations

Journal ArticleDOI
TL;DR: A formal basis for expressing the semantics of concurrent languages in terms of Petri nets is provided, along with a new understanding of concurrency in terms of algebraic structures over graphs and categories.
Abstract: Petri nets are widely used to model concurrent systems. However, their composition and abstraction mechanisms are inadequate: we solve this problem in a satisfactory way. We start by remarking that place/transition Petri nets can be viewed as ordinary, directed graphs equipped with two algebraic operations corresponding to parallel and sequential composition of transitions. A distributive law between the two operations captures a basic fact about concurrency. New morphisms are defined, mapping single, atomic transitions into whole computations, thus relating system descriptions at different levels of abstraction. Categories equipped with products and coproducts (corresponding to parallel and nondeterministic compositions) are introduced for Petri nets with and without initial markings. Petri net duality is expressed as a duality functor, and several new invariants are introduced. A tensor product is defined on nets, and their category is proved to be symmetric monoidal closed. This construction is generalized to a large class of algebraic theories on graphs. These results provide a formal basis for expressing the semantics of concurrent languages in terms of Petri nets. They also provide a new understanding of concurrency in terms of algebraic structures over graphs and categories that should apply to other models besides Petri nets and thus contribute to the conceptual unification of concurrency.
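A toy rendering of the algebraic view sketched in the abstract: markings are multisets of places (a free commutative monoid) and computations are arrows between markings that compose in parallel (a tensor) and in sequence. This is our own illustrative sketch under those assumptions, not the categorical construction of the paper; all class and method names are hypothetical.

```python
from collections import Counter

def show(marking):
    """Pretty-print a multiset of places, e.g. '2item + ready'."""
    return ' + '.join(f'{n}{p}' if n > 1 else p
                      for p, n in sorted(marking.items())) or '0'

class Arrow:
    """A computation from a source marking to a target marking."""
    def __init__(self, name, src, dst):
        self.name, self.src, self.dst = name, Counter(src), Counter(dst)

    def tensor(self, other):
        # Parallel composition: pointwise sum of markings (monoidal product).
        return Arrow(f'({self.name} | {other.name})',
                     self.src + other.src, self.dst + other.dst)

    def then(self, other):
        # Sequential composition, defined when the intermediate markings match.
        if self.dst != other.src:
            raise ValueError('markings do not match')
        return Arrow(f'({self.name} ; {other.name})', self.src, other.dst)

    def __repr__(self):
        return f'{self.name}: {show(self.src)} -> {show(self.dst)}'

# Two transitions of a small producer/consumer net, composed in parallel and in sequence.
produce = Arrow('produce', ['ready'], ['ready', 'item'])
consume = Arrow('consume', ['item', 'waiting'], ['waiting'])
idle_w = Arrow('idle', ['waiting'], ['waiting'])
idle_r = Arrow('idle', ['ready'], ['ready'])
step1 = produce.tensor(idle_w)   # produce in parallel with an idle consumer
step2 = consume.tensor(idle_r)   # consume in parallel with an idle producer
print(step1.then(step2))         # ready + waiting -> ready + waiting
```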

413 citations

Journal ArticleDOI
TL;DR: A technique for recognizing systems of lines is presented, in which the heuristic of the problem is not embedded in the recognition algorithm but is expressed in a figure of merit, which allows for greater flexibility and adequacy in the particular problem.
Abstract: A technique for recognizing systems of lines is presented, in which the heuristic of the problem is not embedded in the recognition algorithm but is expressed in a figure of merit. A multistage decision process is then able to recognize in the input picture the optimal system of lines according to the given figure of merit. Due to the global approach, greater flexibility and adequacy in the particular problem are achieved. The relation between the structure of the figure of merit and the complexity of the optimization process is then discussed. (Modified author abstract)
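The separation between the figure of merit and the search can be illustrated with a toy dynamic-programming sketch: the merit function below (brightness minus a penalty for bending) is an illustrative stand-in for the problem heuristic, and the multistage decision process finds the line that maximizes it. This is not the algorithm of the paper, only a small example of the idea.

```python
def best_line(image, bend_penalty=1.0):
    """image: 2D list of brightness values. Returns the row chosen in each
    column for the line with maximal figure of merit, scanning columns left
    to right and letting the row move by at most one per step."""
    rows, cols = len(image), len(image[0])
    score = [image[r][0] for r in range(rows)]   # best merit ending at row r
    back = []                                    # back-pointers per column
    for c in range(1, cols):
        new_score, choices = [], []
        for r in range(rows):
            # Figure of merit: previous score minus a penalty for changing row.
            candidates = [(score[p] - bend_penalty * abs(p - r), p)
                          for p in (r - 1, r, r + 1) if 0 <= p < rows]
            best, prev = max(candidates)
            new_score.append(best + image[r][c])
            choices.append(prev)
        score = new_score
        back.append(choices)
    # Recover the optimal line by backtracking from the best final row.
    r = max(range(rows), key=lambda i: score[i])
    line = [r]
    for choices in reversed(back):
        r = choices[r]
        line.append(r)
    return list(reversed(line))

# Example: a faint diagonal line in a 4x5 brightness grid.
img = [[9, 1, 0, 0, 0],
       [0, 8, 7, 1, 0],
       [0, 0, 1, 9, 2],
       [0, 0, 0, 1, 8]]
print(best_line(img))   # -> [0, 1, 1, 2, 3]
```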

324 citations


Cited by
Journal ArticleDOI
08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one, which seemed an odd beast at the time: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Book
01 Jan 1983
TL;DR: The methodology used to construct tree-structured rules is the focus of this monograph, which covers the use of trees as a data analysis method and, in a more mathematical framework, proves some of their fundamental properties.
Abstract: The methodology used to construct tree structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and Regression Trees reflects these two sides, covering the use of trees as a data analysis method, and in a more mathematical framework, proving some of their fundamental properties.
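As a toy illustration of the kind of rule construction the monograph studies, the sketch below scores candidate splits of a node by the weighted Gini impurity of the two children and picks the best threshold on one numeric feature. It is only an example of the splitting criterion, not the CART implementation itself.

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(xs, ys):
    """Best threshold on a single numeric feature: minimize the weighted
    impurity of the two children. Returns (impurity, threshold)."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if best is None or score < best[0]:
            best = (score, t)
    return best

# Example: the split at 3 separates the two classes perfectly.
xs = [1, 2, 3, 6, 7, 8]
ys = ['a', 'a', 'a', 'b', 'b', 'b']
print(best_split(xs, ys))   # -> (0.0, 3)
```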

14,825 citations

Book
24 Aug 2012
TL;DR: This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
Abstract: Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package--PMTK (probabilistic modeling toolkit)--that is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.

8,059 citations

Journal ArticleDOI
TL;DR: The Voronoi diagram, as discussed by the authors, divides the plane according to the nearest-neighbor rule: each site is associated with the region of the plane closest to it.
Abstract: Computational geometry is concerned with the design and analysis of algorithms for geometrical problems. In addition, other, more practically oriented areas of computer science—such as computer graphics, computer-aided design, robotics, pattern recognition, and operations research—give rise to problems that inherently are geometrical. This is one reason computational geometry has attracted enormous research interest in the past decade and is a well-established area today. (For standard sources, we refer to the survey article by Lee and Preparata [1984] and to the textbooks by Preparata and Shamos [1985] and Edelsbrunner [1987b].) Readers familiar with the literature of computational geometry will have noticed, especially in the last few years, an increasing interest in a geometrical construct called the Voronoi diagram. This trend can also be observed in combinatorial geometry and in a considerable number of articles in natural science journals that address the Voronoi diagram under different names specific to the respective area. Given some number of points in the plane, their Voronoi diagram divides the plane according to the nearest-neighbor
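The nearest-neighbor rule can be made concrete with a brute-force sketch: label every point of a grid sample of the plane with its closest site, and the resulting cells form a raster approximation of the Voronoi regions. This is purely illustrative and not one of the efficient algorithms surveyed in the article.

```python
def voronoi_cells(sites, width, height):
    """Label each integer grid point with the index of its nearest site."""
    def nearest(p):
        return min(range(len(sites)),
                   key=lambda i: (p[0] - sites[i][0]) ** 2 + (p[1] - sites[i][1]) ** 2)
    return [[nearest((x, y)) for x in range(width)] for y in range(height)]

# Three sites; printing the labels shows the three Voronoi cells.
sites = [(1, 1), (6, 2), (3, 6)]
for row in voronoi_cells(sites, 8, 8):
    print(''.join(str(c) for c in row))
```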

4,236 citations

01 Jan 2002
TL;DR: This thesis will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data.
Abstract: Dynamic Bayesian Networks: Representation, Inference and Learning. PhD thesis by Kevin Patrick Murphy, University of California, Berkeley (advisor: Professor Stuart Russell). Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and bio-sequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data. In particular, the main novel technical contributions of this thesis are as follows: a way of representing Hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T^3), where T is the length of the sequence; an exact smoothing algorithm that takes O(log T) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy belief propagation; a way of applying Rao-Blackwellised particle filtering to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular; a way of extending the structural EM algorithm to DBNs; and a variety of different applications of DBNs. However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.
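Since the thesis treats HMMs as the simplest DBNs (one discrete state variable per time slice), a minimal forward-filtering recursion shows the kind of inference being generalized; in a full DBN the single state variable would be replaced by a factored set of variables. The model and matrices below are purely illustrative.

```python
def forward(init, trans, emit, observations):
    """HMM filtering. init[i]: P(X0 = i); trans[i][j]: P(X_{t+1} = j | X_t = i);
    emit[i][o]: P(obs = o | X = i). Returns the filtered distribution over the
    final state given all observations."""
    belief = [init[i] * emit[i][observations[0]] for i in range(len(init))]
    for o in observations[1:]:
        # Predict with the transition model, then condition on the observation.
        predicted = [sum(belief[i] * trans[i][j] for i in range(len(belief)))
                     for j in range(len(belief))]
        belief = [predicted[j] * emit[j][o] for j in range(len(predicted))]
    z = sum(belief)
    return [b / z for b in belief]

# Two-state toy weather model; observations 0 = 'dry', 1 = 'wet'.
init = [0.5, 0.5]
trans = [[0.8, 0.2], [0.3, 0.7]]
emit = [[0.9, 0.1], [0.2, 0.8]]
print(forward(init, trans, emit, [0, 0, 1]))
```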

2,757 citations