Author

David Maier

Other affiliations: The Graduate Center, CUNY; Microsoft; Stony Brook University
Bio: David Maier is an academic researcher from Portland State University. The author has contributed to research in topics: Database design & Query optimization. The author has an h-index of 64 and has co-authored 317 publications receiving 20,353 citations. Previous affiliations of David Maier include The Graduate Center, CUNY & Microsoft.


Papers
Book
01 Mar 1983
TL;DR: The method of operating a water-cooled neutronic reactor having a graphite moderator which comprises flowing a gaseous mixture of carbon dioxide and helium, in which the helium comprises 40-60 volume percent of the mixture, in contact with the graphite moderator.

1,609 citations

Book
01 Jan 1983
TL;DR: In this article, a gaseous mixture of carbon dioxide and helium, in which the helium comprises 40-60 volume percent of the mixture, is flowed in contact with the graphite moderator.
Abstract: 1. The method of operating a water-cooled neutronic reactor having a graphite moderator which comprises flowing a gaseous mixture of carbon dioxide and helium, in which the helium comprises 40-60 volume percent of the mixture, in contact with the graphite moderator.

1,440 citations

Book Chapter
01 Jul 1994
TL;DR: In this article, the main features and characteristics that a system must have to qualify as an object-oriented database system are defined and separated into three groups: mandatory, optional, and open.
Abstract: This paper attempts to define an object-oriented database system. It describes the main features and characteristics that a system must have to qualify as an object-oriented database system. We have separated these characteristics into three groups: • Mandatory, the ones the system must satisfy in order to be termed an object-oriented database system. These are complex objects, object identity, encapsulation, types or classes, inheritance, overriding combined with late binding, extensibility, computational completeness, persistence, secondary storage management, concurrency, recovery and an ad hoc query facility. • Optional, the ones that can be added to make the system better, but which are not mandatory. These are multiple inheritance, type checking and inferencing, distribution, design transactions and versions. • Open, the points where the designer can make a number of choices. These are the programming paradigm, the representation system, the type system, and uniformity. We have taken a position, not so much expecting it to be the final word as to erect a provisional landmark to orient further debate.
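To make a few of the mandatory features above concrete, here is a minimal Python sketch; the class names, the toy in-memory store, and the data are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch only: a toy in-memory "object store" showing a few of the
# mandatory features named above (object identity, encapsulation, inheritance,
# overriding with late binding, ad hoc querying). Names are invented for the example.

import itertools

_next_oid = itertools.count(1)

class PersistentObject:
    """Every object gets an identity (oid) independent of its attribute values."""
    def __init__(self):
        self.oid = next(_next_oid)

class Employee(PersistentObject):
    def __init__(self, name, salary):
        super().__init__()
        self._name = name          # state is encapsulated behind methods
        self._salary = salary

    def pay(self):
        return self._salary

class Manager(Employee):           # inheritance
    def __init__(self, name, salary, bonus):
        super().__init__(name, salary)
        self._bonus = bonus

    def pay(self):                 # overriding; resolved by late binding below
        return self._salary + self._bonus

store = [Employee("Ann", 50000), Manager("Bob", 60000, 10000)]

# Two objects with identical state are still distinct by identity (oid).
a, b = Employee("Eve", 40000), Employee("Eve", 40000)
assert a.oid != b.oid

# An ad hoc query over the store; obj.pay() dispatches on the runtime class.
total_payroll = sum(obj.pay() for obj in store)
print(total_payroll)  # 120000
```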

976 citations

Journal Article
TL;DR: It is shown that this class of database schemes, called acyclic, has a number of desirable properties that have been studied by other researchers and are shown to be equivalent to acyclicity.
Abstract: A class of database schemes, called acyclic, was recently introduced. It is shown that this class has a number of desirable properties. In particular, several desirable properties that have been studied by other researchers in very different terms are all shown to be equivalent to acyclicity. In addition, several equivalent characterizations of the class in terms of graphs and hypergraphs are given, and a simple algorithm for determining acyclicity is presented. Also given are several equivalent characterizations of those sets M of multivalued dependencies such that M is the set of multivalued dependencies that are the consequences of a given join dependency. Several characterizations for a conflict-free (in the sense of Lien) set of multivalued dependencies are provided.
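The kind of simple acyclicity test the abstract mentions can be illustrated with a GYO-style reduction over the schema's hypergraph. The sketch below is an assumed, illustrative Python implementation (relation schemes modeled as sets of attribute names), not code from the paper.

```python
# Sketch of a GYO (Graham / Yu-Ozsoyoglu) reduction, a simple test for the
# acyclicity property discussed above. Each relation scheme is a set of
# attributes (a hyperedge); the schema is acyclic iff repeated reduction
# empties every edge. Illustrative only.

def is_acyclic(schemes):
    edges = [set(s) for s in schemes]
    changed = True
    while changed:
        changed = False
        # Rule 1: drop any attribute that occurs in exactly one scheme.
        for e in edges:
            for attr in list(e):
                if sum(attr in other for other in edges) == 1:
                    e.discard(attr)
                    changed = True
        # Rule 2: drop a scheme that is contained in another (distinct) scheme.
        for i, e in enumerate(edges):
            if any(i != j and e <= other for j, other in enumerate(edges)):
                edges.pop(i)
                changed = True
                break
    return not edges or all(not e for e in edges)

# A chain of schemes reduces away completely: acyclic.
print(is_acyclic([{"A", "B"}, {"B", "C"}, {"C", "D"}]))   # True
# A triangle of pairwise-shared attributes has no "ear" to remove: cyclic.
print(is_acyclic([{"A", "B"}, {"B", "C"}, {"C", "A"}]))   # False
```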

825 citations

Proceedings Article
01 Jan 1986
TL;DR: Several methods for implementing database queries expressed as logical rules are compared, including a general algorithm ("magic sets") for rewriting logical rules so that they may be implemented bottom-up (forward chaining) in a way that cuts down on the irrelevant facts that are generated.
Abstract: Several methods for implementing database queries expressed as logical rules are given and they are compared for efficiency. One method, called "magic sets," is a general algorithm for rewriting logical rules so that they may be implemented bottom-up (= forward chaining) in a way that cuts down on the irrelevant facts that are generated. The advantage of this scheme is that by working bottom-up, we can take advantage of efficient methods for doing massive joins. Two other methods are ad hoc ways of implementing "linear" rules, i.e., rules where at most one predicate in any body is recursive.
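As a rough illustration of the bottom-up idea, the Python sketch below evaluates the classic linear ancestor rules by forward chaining, once naively and once restricted to constants reachable from the query binding, which is the kind of pruning a magic-set rewriting aims at. The predicate names and data are invented, and this is not the paper's rewriting algorithm.

```python
# Illustrative sketch of bottom-up evaluation and magic-set-style filtering for
# the linear rules:  ancestor(X,Y) :- parent(X,Y).
#                    ancestor(X,Y) :- parent(X,Z), ancestor(Z,Y).
# Data is made up; this is not the paper's algorithm.

parent = {("ann", "bob"), ("bob", "carl"), ("dan", "erin"), ("erin", "fay")}

def ancestors_naive():
    """Plain bottom-up (forward chaining): derive every ancestor fact."""
    anc = set(parent)
    while True:
        new = {(x, y) for (x, z) in parent for (z2, y) in anc if z == z2} - anc
        if not new:
            return anc
        anc |= new

def ancestors_magic(start):
    """Only derive facts relevant to the query ancestor(start, Y).
    The 'magic' set here is the set of bound first arguments reachable from start."""
    magic = {start}
    while True:
        new = {z for (x, z) in parent if x in magic} - magic
        if not new:
            break
        magic |= new
    # Bottom-up evaluation, restricted to tuples whose first argument is "magic".
    anc = {(x, y) for (x, y) in parent if x in magic}
    while True:
        new = {(x, y) for (x, z) in parent if x in magic
               for (z2, y) in anc if z == z2} - anc
        if not new:
            return anc
        anc |= new

print(sorted(ancestors_naive()))        # every ancestor fact in the database
print(sorted(ancestors_magic("ann")))   # only the facts relevant to ancestor('ann', Y)
```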

811 citations


Cited by
Book
01 Jan 1988
TL;DR: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty, and provides a coherent explication of probability as a language for reasoning with partial belief.
Abstract: From the Publisher: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty. The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic. The author distinguishes syntactic and semantic approaches to uncertainty, and offers techniques, based on belief networks, that provide a mechanism for making semantics-based systems operational. Specifically, network-propagation techniques serve as a mechanism for combining the theoretical coherence of probability theory with modern demands of reasoning-systems technology: modular declarative inputs, conceptually meaningful inferences, and parallel distributed computation. Application areas include diagnosis, forecasting, image interpretation, multi-sensor fusion, decision support systems, plan recognition, planning, speech recognition, and in short, almost every task requiring that conclusions be drawn from uncertain clues and incomplete information. Probabilistic Reasoning in Intelligent Systems will be of special interest to scholars and researchers in AI, decision theory, statistics, logic, philosophy, cognitive psychology, and the management sciences. Professionals in the areas of knowledge-based systems, operations research, engineering, and statistics will find theoretical and computational tools of immediate practical use. The book can also be used as an excellent text for graduate-level courses in AI, operations research, or applied probability.
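As a tiny, illustrative example of reasoning with partial belief (not taken from the book), the sketch below queries a two-node belief network Disease -> Test by exact enumeration; the probabilities are made up.

```python
# Minimal sketch of belief updating in a two-node network Disease -> Test,
# computed by exact enumeration (Bayes' rule). Numbers are invented.

p_disease = 0.01                      # prior P(D = true)
p_positive_given = {True: 0.95,       # P(T = positive | D = true)   (sensitivity)
                    False: 0.05}      # P(T = positive | D = false)  (false-positive rate)

def posterior_disease_given_positive():
    # P(D | T=+) = P(T=+ | D) P(D) / P(T=+)
    joint_true  = p_positive_given[True]  * p_disease
    joint_false = p_positive_given[False] * (1 - p_disease)
    return joint_true / (joint_true + joint_false)

print(round(posterior_disease_given_positive(), 3))  # ~0.161: belief rises but stays modest
```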

15,671 citations

Journal Article
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
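The mail-filtering scenario in the fourth category can be sketched with a toy word-count classifier in the Naive Bayes style; the messages, labels, and smoothing choice below are invented for illustration and are not from the article.

```python
# Toy sketch of the mail-filtering example above: learn from messages a user
# rejected or kept, then score new messages. Word-count Naive Bayes with
# add-one smoothing; data is made up.

from collections import Counter
import math

rejected = ["win money now", "cheap money offer", "win a prize now"]
kept     = ["meeting at noon", "project status update", "lunch at noon"]

def train(messages):
    counts = Counter(w for m in messages for w in m.split())
    return counts, sum(counts.values())

rej_counts, rej_total = train(rejected)
kee_counts, kee_total = train(kept)
vocab = set(rej_counts) | set(kee_counts)

def log_likelihood(message, counts, total):
    # Laplace smoothing so unseen words do not zero out the score.
    return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in message.split())

def looks_like_spam(message):
    prior_rej = math.log(len(rejected) / (len(rejected) + len(kept)))
    prior_kee = math.log(len(kept) / (len(rejected) + len(kept)))
    spam_score = prior_rej + log_likelihood(message, rej_counts, rej_total)
    ham_score  = prior_kee + log_likelihood(message, kee_counts, kee_total)
    return spam_score > ham_score

print(looks_like_spam("win money"))        # True  (words seen in rejected mail)
print(looks_like_spam("status meeting"))   # False (words seen in kept mail)
```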

13,246 citations

Journal Article
TL;DR: The authors describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked data community as it moves forward.
Abstract: The term “Linked Data” refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the last three years, leading to the creation of a global data space containing billions of assertions— the Web of Data. In this article, the authors present the concept and technical principles of Linked Data, and situate these within the broader context of related technological developments. They describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked Data community as it moves forward.
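A minimal sketch of the "Web of Data" idea described above: things named by HTTP URIs, descriptions as subject-predicate-object triples, and discovery by following object URIs from one description to the next. The URIs and the in-memory stand-in for the Web below are invented for illustration.

```python
# Illustrative sketch of Linked Data as triples plus "follow your nose" traversal.
# The web_of_data dict stands in for dereferencing URIs over HTTP; all names invented.

web_of_data = {
    "http://example.org/person/alice": [
        ("http://example.org/person/alice", "name", "Alice"),
        ("http://example.org/person/alice", "knows", "http://example.org/person/bob"),
    ],
    "http://example.org/person/bob": [
        ("http://example.org/person/bob", "name", "Bob"),
        ("http://example.org/person/bob", "basedNear", "http://example.org/place/portland"),
    ],
    "http://example.org/place/portland": [
        ("http://example.org/place/portland", "name", "Portland"),
    ],
}

def crawl(start_uri):
    """Dereference a URI, collect its triples, and follow any object URIs it links to."""
    seen, frontier, triples = set(), [start_uri], []
    while frontier:
        uri = frontier.pop()
        if uri in seen or uri not in web_of_data:
            continue
        seen.add(uri)
        for s, p, o in web_of_data[uri]:
            triples.append((s, p, o))
            if o.startswith("http://"):
                frontier.append(o)
    return triples

for t in crawl("http://example.org/person/alice"):
    print(t)
```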

5,113 citations

Book
01 Jan 2001
TL;DR: The book introduces probabilistic graphical models and decision graphs, including Bayesian networks and influence diagrams, and presents a thorough introduction to state-of-the-art solution and analysis algorithms.
Abstract: Probabilistic graphical models and decision graphs are powerful modeling tools for reasoning and decision making under uncertainty. As modeling languages they allow a natural specification of problem domains with inherent uncertainty, and from a computational perspective they support efficient algorithms for automatic construction and query answering. This includes belief updating, finding the most probable explanation for the observed evidence, detecting conflicts in the evidence entered into the network, determining optimal strategies, analyzing for relevance, and performing sensitivity analysis. The book introduces probabilistic graphical models and decision graphs, including Bayesian networks and influence diagrams. The reader is introduced to the two types of frameworks through examples and exercises, which also instruct the reader on how to build these models. The book is a new edition of Bayesian Networks and Decision Graphs by Finn V. Jensen. The new edition is structured into two parts. The first part focuses on probabilistic graphical models. Compared with the previous book, the new edition also includes a thorough description of recent extensions to the Bayesian network modeling language, advances in exact and approximate belief updating algorithms, and methods for learning both the structure and the parameters of a Bayesian network. The second part deals with decision graphs, and in addition to the frameworks described in the previous edition, it also introduces Markov decision processes and partially ordered decision problems. The authors also provide a well-founded practical introduction to Bayesian networks, object-oriented Bayesian networks, decision trees, influence diagrams (and variants thereof), and Markov decision processes; give practical advice on the construction of Bayesian networks, decision trees, and influence diagrams from domain knowledge; give several examples and exercises exploiting computer systems for dealing with Bayesian networks and decision graphs; and present a thorough introduction to state-of-the-art solution and analysis algorithms. The book is intended as a textbook, but it can also be used for self-study and as a reference book.
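As a toy illustration of the "determining optimal strategies" task mentioned above (not an example from the book), the sketch below picks the action with the highest expected utility in a one-decision, one-chance-node problem; the probabilities and utilities are made up.

```python
# Minimal decision-under-uncertainty sketch: maximize expected utility over a
# single decision node given a belief about a single chance node. Numbers invented.

p_rain = 0.3
utility = {                            # utility[action][weather]
    "take_umbrella":  {"rain": 80, "dry": 60},
    "leave_umbrella": {"rain": 0,  "dry": 100},
}

def expected_utility(action):
    return p_rain * utility[action]["rain"] + (1 - p_rain) * utility[action]["dry"]

best = max(utility, key=expected_utility)
print({a: expected_utility(a) for a in utility})   # {'take_umbrella': 66.0, 'leave_umbrella': 70.0}
print("optimal strategy:", best)                   # leave_umbrella when P(rain) = 0.3
```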

4,566 citations

Journal Article
TL;DR: In this paper, the document "Cambiamenti climatici 2007: impatti, adattamento e vulnerabilita" (Climate Change 2007: Impacts, Adaptation and Vulnerability), approved in April 2007 by the second working group of the Intergovernmental Panel on Climate Change, is presented.
Abstract: Impacts, adaptation and vulnerability. The causes of and responsibilities for climate change were covered in the October issue of the journal Cda. We examine the topic further by presenting the document "Cambiamenti climatici 2007: impatti, adattamento e vulnerabilita" (Climate Change 2007: Impacts, Adaptation and Vulnerability), approved in April 2007 by the second working group of the Intergovernmental Panel on Climate Change. It is the second of the three documents that make up the fourth assessment report on climate change.

3,979 citations