scispace - formally typeset
Author

Hilary A. Priestley

Bio: Hilary A. Priestley is an academic researcher from the University of Oxford. The author has contributed to research in the topics Distributive lattice and Duality (mathematics), has an h-index of 18, and has co-authored 75 publications receiving 7,172 citations. Previous affiliations of Hilary A. Priestley include Tonbridge School and La Trobe University.


Papers
Book
01 Jan 1990
TL;DR: The Stone Representation Theorem for Boolean algebras and its application to lattices in algebra are presented, and the structure of finite distributive lattices and finite Boolean algebras is discussed.
Abstract: 1. Ordered sets. 2. Special types of ordered set. 3. Lattices as algebraic structures. 4. Boolean algebras. 5. The structure of finite distributive lattices and finite Boolean algebras. 6. Ideals, filters, and congruences. 7. The Stone Representation Theorem for Boolean algebras. 8. Lattices in algebra. Appendix: outline of relevant basic topology.
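The structure theory of Chapter 5 rests on Birkhoff's representation theorem: a finite distributive lattice is determined by the down-sets of its poset of join-irreducible elements. The sketch below is a hypothetical worked example (not taken from the book) that checks this count on the divisor lattice of 12, using only the standard library:

```python
from itertools import combinations

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def leq(a, b):
    # Order relation: a <= b iff a divides b.
    return b % a == 0

def join_irreducibles(elems):
    # A join-irreducible element is not the bottom and covers exactly one element.
    ji = []
    for x in elems:
        below = [y for y in elems if y != x and leq(y, x)]
        covers = [y for y in below
                  if not any(leq(y, z) and z != y for z in below if z != y and leq(y, z))]
        if len(covers) == 1:
            ji.append(x)
    return ji

def down_sets(poset):
    # All subsets of the poset that are closed downwards under divisibility.
    result = []
    for r in range(len(poset) + 1):
        for s in combinations(poset, r):
            s = set(s)
            if all(y in s for x in s for y in poset if leq(y, x)):
                result.append(s)
    return result

L = divisors(12)          # the divisor lattice of 12: six elements
J = join_irreducibles(L)  # the prime powers dividing 12: 2, 3, 4
D = down_sets(J)
print(len(L), len(D))     # Birkhoff: |L| equals the number of down-sets of J(L)
```

Running this prints equal counts (6 and 6), matching the theorem for this small example.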

4,715 citations

Book
05 Dec 1985
TL;DR: The authors develop Cauchy's theorem and its consequences, and show how the Inversion and Convolution theorems for the Laplace transform can be proved by contour integration in the complex plane.
Abstract: Part 1, The complex plane: complex numbers; open and closed sets in the complex plane; limits and continuity. Part 2, Holomorphic functions and power series: complex power series; elementary functions. Part 3, Prelude to Cauchy's theorem: paths; integration along paths; connectedness and simple connectedness; properties of paths and contours. Part 4, Cauchy's theorem: Cauchy's theorem, levels I and II; logarithms, argument, and index; Cauchy's theorem revisited. Part 5, Consequences of Cauchy's theorem: Cauchy's formulae; power series representation; zeros of holomorphic functions; the maximum-modulus theorem. Part 6, Singularities and multifunctions: Laurent's theorem; singularities; meromorphic functions; multifunctions. Part 7, Cauchy's residue theorem: counting zeros and poles; calculation of residues; estimation of integrals. Part 8, Applications of contour integration: improper and principal-value integrals; integrals involving functions with a finite number of poles and with infinitely many poles; deductions from known integrals; integrals involving multifunctions; evaluation of definite integrals. Part 9, Fourier and Laplace transforms: the Laplace transform, basic properties and evaluation; the inversion of Laplace transforms; the Fourier transform; applications to differential equations; proofs of the Inversion and Convolution theorems. Part 10, Conformal mapping and harmonic functions: circles and lines revisited; conformal mapping; Möbius transformations; other mappings: powers, exponentials, and the Joukowski transformation; examples of building conformal mappings; holomorphic mappings, some theory; harmonic functions.
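The residue theorem of Part 7 is the engine behind the definite integrals of Part 8. As a hedged illustration (a standard textbook integral, not an example drawn from this book), the integral of 1/(1+x^2) over the real line equals 2*pi*i times the residue at the pole z = i in the upper half-plane, and the prediction can be checked numerically:

```python
import cmath

# f(z) = 1/(1+z^2) has simple poles at +i and -i; only z = i lies in the
# upper half-plane, and its residue there is 1/(2i).
residue_at_i = 1 / (2j)

# Residue theorem: the real-line integral equals 2*pi*i times the residue.
predicted = 2 * cmath.pi * 1j * residue_at_i   # equals pi exactly

# Crude numerical check: trapezoidal rule on a large symmetric interval.
def f(x):
    return 1 / (1 + x * x)

a, b, n = -1000.0, 1000.0, 200_000
h = (b - a) / n
total = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
numeric = total * h

print(predicted.real, numeric)  # both close to pi
```

The small discrepancy comes from truncating the contour at |x| = 1000 (the tails contribute about 0.002), not from the residue calculation.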

656 citations

Journal ArticleDOI
TL;DR: In this article, it was shown that the topological spaces which arise as duals of Boolean algebras may be characterized as those which are compact and totally disconnected (i.e. the Stone spaces); the corresponding purely topological characterization of the duals of distributive lattices obtained by Stone is less satisfactory.
Abstract: 1. Introduction. Stone, in [8], developed for distributive lattices a representation theory generalizing that for Boolean algebras. This he achieved by topologizing the set X of prime ideals of a distributive lattice A (with a zero element) by taking as a base {P_a : a ∈ A} (where P_a denotes the set of prime ideals of A not containing a), and by showing that the map a ↦ P_a is an isomorphism representing A as the lattice of all open compact subsets of its dual space X. The topological spaces which arise as duals of Boolean algebras may be characterized as those which are compact and totally disconnected (i.e. the Stone spaces); the corresponding purely topological characterization of the duals of distributive lattices obtained by Stone is less satisfactory. In the present paper we show that a much simpler characterization in terms of ordered topological spaces is possible. The representation theorem itself, and much of the duality theory consequent on it [8, 6], becomes more natural in this new setting, and certain results not previously known can be obtained. It is hoped to give in a later paper a more detailed exposition of those aspects of the theory barely mentioned here. I should like to thank my supervisor, Dr. D. A. Edwards, for some helpful suggestions and also Dr. M. J. Canfell for permission to quote from his unpublished thesis.
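The construction described in the abstract can be summarized in symbols. The following sketch restates only what the abstract says, with the order-theoretic observation made explicit; conventions for down-sets versus up-sets vary across the literature.

```latex
% X = set of prime ideals of the distributive lattice A, with base sets
\[
  P_a \;=\; \{\, I \in X : a \notin I \,\}, \qquad a \in A .
\]
% The representation map is the lattice embedding
\[
  a \;\longmapsto\; P_a ,
\]
% identifying A with the lattice of compact open subsets of X.
% In the ordered refinement, X carries the inclusion order on prime
% ideals together with a compact topology, and each P_a is a down-set:
% if J \in P_a (so a \notin J) and I \subseteq J, then a \notin I.
```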

604 citations

Book ChapterDOI
TL;DR: The categorical duality which exists between bounded distributive lattices and compact totally order disconnected spaces is discussed in this article, where the authors present a dictionary of mutually dual properties.
Abstract: An account is given of the categorical duality which exists between bounded distributive lattices and compact totally order disconnected spaces. During the past decade, a wide range of structural problems concerning distributive lattices have been solved by the topological and order theoretic techniques provided by duality, and a representative selection of these is presented. In addition, certain related dualities are briefly considered, as are compact totally order disconnected spaces in their own right. The paper ends with a 'dictionary' of mutually dual properties.

155 citations


Cited by
Book
05 Jun 2007
TL;DR: The second edition of Ontology Matching has been thoroughly revised and updated to reflect the most recent advances in this quickly developing area, which resulted in more than 150 pages of new content.
Abstract: Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, and artificial intelligence. The second edition of Ontology Matching has been thoroughly revised and updated to reflect the most recent advances in this quickly developing area, which resulted in more than 150 pages of new content. In particular, the book includes a new chapter dedicated to the methodology for performing ontology matching. It also covers emerging topics, such as data interlinking, ontology partitioning and pruning, context-based matching, matcher tuning, alignment debugging, and user involvement in matching, to mention a few. More than 100 state-of-the-art matching systems and frameworks were reviewed. With Ontology Matching, researchers and practitioners will find a reference book that presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can be equally applied to database schema matching, catalog integration, XML schema matching and other related problems.
The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a systematic and detailed account of matching techniques and matching systems from theoretical, practical and application perspectives.
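To make the notion of a correspondence concrete: a minimal sketch of a name-based matcher is given below. This is a hypothetical toy example, not one of the systems surveyed in the book; the ontology names, threshold, and output format are all assumptions. Real matchers combine many techniques and also emit subsumption and disjointness relations.

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    # Normalized edit-based similarity between two entity labels.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match(onto1, onto2, threshold=0.8):
    # Return correspondences (e1, e2, relation, confidence) whose label
    # similarity clears the threshold; the relation is fixed to '=' here.
    alignment = []
    for e1 in onto1:
        for e2 in onto2:
            sim = name_similarity(e1, e2)
            if sim >= threshold:
                alignment.append((e1, e2, "=", round(sim, 2)))
    return alignment

# Hypothetical class names from two toy ontologies.
o1 = ["Person", "Article", "Organization"]
o2 = ["person", "Articles", "Organisation"]
print(match(o1, o2))
```

Each matched pair carries a confidence score, which is how alignments are typically represented so that downstream tools can filter or debug them.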

2,579 citations

Book
01 Jan 2002
TL;DR: This text provides a comprehensive introduction both to type systems in computer science and to the basic theory of programming languages, including a variety of approaches to modeling the features of object-oriented languages.
Abstract: A type system is a syntactic method for automatically checking the absence of certain erroneous behaviors by classifying program phrases according to the kinds of values they compute. The study of type systems -- and of programming languages from a type-theoretic perspective -- has important applications in software engineering, language design, high-performance compilers, and security.This text provides a comprehensive introduction both to type systems in computer science and to the basic theory of programming languages. The approach is pragmatic and operational; each new concept is motivated by programming examples and the more theoretical sections are driven by the needs of implementations. Each chapter is accompanied by numerous exercises and solutions, as well as a running implementation, available via the Web. Dependencies between chapters are explicitly identified, allowing readers to choose a variety of paths through the material.The core topics include the untyped lambda-calculus, simple type systems, type reconstruction, universal and existential polymorphism, subtyping, bounded quantification, recursive types, kinds, and type operators. Extended case studies develop a variety of approaches to modeling the features of object-oriented languages.
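The abstract's definition, classifying program phrases by the kinds of values they compute, can be illustrated with a tiny checker for the simply typed lambda-calculus, one of the book's core topics. This is a minimal sketch in Python (the book's own implementations are separate and more complete); the term encoding is an assumption made for the example:

```python
# Terms: ("int", n), ("var", name), ("lam", name, type, body), ("app", f, arg).
# Types: the string "Int", or ("->", t1, t2) for function types.

def typecheck(term, env=None):
    env = env or {}
    tag = term[0]
    if tag == "int":
        return "Int"
    if tag == "var":
        return env[term[1]]           # unbound variables raise KeyError
    if tag == "lam":
        _, name, ty, body = term
        # The body is checked in an environment extended with the binder.
        return ("->", ty, typecheck(body, {**env, name: ty}))
    if tag == "app":
        fty = typecheck(term[1], env)
        aty = typecheck(term[2], env)
        if fty[0] != "->" or fty[1] != aty:
            raise TypeError(f"cannot apply {fty} to {aty}")
        return fty[2]
    raise ValueError(f"unknown term: {term}")

identity = ("lam", "x", "Int", ("var", "x"))
print(typecheck(identity))                       # ('->', 'Int', 'Int')
print(typecheck(("app", identity, ("int", 3))))  # Int
```

An ill-typed phrase such as applying an integer to an integer is rejected with a TypeError, which is exactly the "absence of certain erroneous behaviors" the abstract describes.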

2,391 citations

Journal ArticleDOI
TL;DR: This paper addresses the problem of releasing microdata while safeguarding the anonymity of respondents to which the data refer and introduces the concept of minimal generalization that captures the property of the release process not distorting the data more than needed to achieve k-anonymity.
Abstract: Today's globally networked society places great demands on the dissemination and sharing of information. While in the past released information was mostly in tabular and statistical form, many situations call for the release of specific data (microdata). In order to protect the anonymity of the entities (called respondents) to which information refers, data holders often remove or encrypt explicit identifiers such as names, addresses, and phone numbers. Deidentifying data, however, provides no guarantee of anonymity. Released information often contains other data, such as race, birth date, sex, and ZIP code, that can be linked to publicly available information to reidentify respondents and to infer information that was not intended for disclosure. In this paper we address the problem of releasing microdata while safeguarding the anonymity of respondents to which the data refer. The approach is based on the definition of k-anonymity. A table provides k-anonymity if attempts to link explicitly identifying information to its content map the information to at least k entities. We illustrate how k-anonymity can be provided without compromising the integrity (or truthfulness) of the information released by using generalization and suppression techniques. We introduce the concept of minimal generalization that captures the property of the release process not distorting the data more than needed to achieve k-anonymity, and present an algorithm for the computation of such a generalization. We also discuss possible preference policies to choose among different minimal generalizations.
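The k-anonymity property and a single generalization step can be sketched in a few lines. This is a toy illustration of the definition, not the paper's algorithm for computing minimal generalizations; the table contents and the ZIP-truncation rule are assumptions made for the example:

```python
from collections import Counter

def k_anonymous(rows, quasi_ids, k):
    # A table is k-anonymous w.r.t. the quasi-identifiers if every
    # combination of quasi-identifier values occurs at least k times.
    counts = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return min(counts.values()) >= k

def generalize_zip(rows, digits):
    # One generalization step: keep only the leading digits of 'zip'.
    return [{**r, "zip": r["zip"][:digits] + "*" * (5 - digits)} for r in rows]

table = [
    {"zip": "02138", "sex": "F"},
    {"zip": "02139", "sex": "F"},
    {"zip": "02141", "sex": "M"},
    {"zip": "02142", "sex": "M"},
]
qi = ["zip", "sex"]
print(k_anonymous(table, qi, 2))                     # False: every row is unique
print(k_anonymous(generalize_zip(table, 3), qi, 2))  # True: groups of size 2
```

Generalization keeps the released values truthful (a truncated ZIP is still correct, just less specific), which is the integrity property the abstract emphasizes; a minimal generalization would truncate no more digits than needed to reach k.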

2,291 citations

Journal ArticleDOI
TL;DR: SPADE is a new algorithm for fast discovery of Sequential Patterns that utilizes combinatorial properties to decompose the original problem into smaller sub-problems that can be solved independently in main memory using efficient lattice search techniques and simple join operations.
Abstract: In this paper we present SPADE, a new algorithm for fast discovery of Sequential Patterns. The existing solutions to this problem make repeated database scans and use complex hash structures which have poor locality. SPADE utilizes combinatorial properties to decompose the original problem into smaller sub-problems that can be solved independently in main memory using efficient lattice search techniques and simple join operations. All sequences are discovered in only three database scans. Experiments show that SPADE outperforms the best previous algorithm by a factor of two, and by an order of magnitude with some pre-processed data. It also has linear scalability with respect to the number of input sequences and a number of other database parameters. Finally, we discuss how the results of sequence mining can be applied in a real application domain.
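The join operations the abstract refers to work on a vertical database layout: each item maps to an id-list of (sequence id, event id) pairs, and the id-list of a longer pattern is obtained by a temporal join of shorter ones. The sketch below is a heavily simplified illustration of that idea (single-item events, no support pruning or lattice search), with a made-up three-sequence database:

```python
from collections import defaultdict

# Toy horizontal database: sequence id -> ordered list of single-item events.
sequences = {
    1: ["A", "B", "A", "C"],
    2: ["A", "C", "B"],
    3: ["B", "A", "C"],
}

# Convert to the vertical format: item -> id-list of (sid, eid) pairs.
idlists = defaultdict(list)
for sid, seq in sequences.items():
    for eid, item in enumerate(seq):
        idlists[item].append((sid, eid))

def temporal_join(idlist_x, idlist_y):
    # Id-list of the 2-sequence "x then y": keep occurrences of y that
    # follow some occurrence of x within the same sequence.
    return [(sid_y, eid_y)
            for sid_y, eid_y in idlist_y
            for sid_x, eid_x in idlist_x
            if sid_x == sid_y and eid_x < eid_y]

def support(idlist):
    # Support counts distinct sequences, not distinct occurrences.
    return len({sid for sid, _ in idlist})

ac = temporal_join(idlists["A"], idlists["C"])
print(support(ac))  # "A then C" occurs in all 3 sequences
```

Because every join reads only two id-lists, each sub-problem fits in main memory independently of the rest of the pattern lattice, which is the decomposition the abstract describes.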

2,063 citations

Book
10 Dec 1997

2,025 citations