Author

Nils J. Nilsson

Bio: Nils J. Nilsson is an academic researcher from Stanford University. The author has contributed to research in the topics of inference and first-order logic. The author has an h-index of 37 and has co-authored 90 publications receiving 28,751 citations. Previous affiliations of Nils J. Nilsson include SRI International and the Artificial Intelligence Center.


Papers
01 Jan 1981
TL;DR: This paper presents the view that artificial intelligence is primarily concerned with propositional languages for representing knowledge and with techniques for manipulating these representations, and argues against including the peripheral processes.
Abstract: This paper presents the view that artificial intelligence (AI) is primarily concerned with propositional languages for representing knowledge and with techniques for manipulating these representations. In this respect, AI is analogous to applied mathematics; its representations and techniques can be applied in a variety of other subject areas. Typically, AI research is (or should be) more concerned with the general form and properties of representational languages and methods than with the content being described by these languages. Notable exceptions involve “commonsense” knowledge about the everyday world (no other specialty claims this subject area as its own) and metaknowledge (knowledge about the properties and uses of knowledge itself). In these areas AI is concerned with content as well as form. We also observe that the technology that seems to underlie peripheral sensory and motor activities (analogous to low-level animal or human vision and muscle control) appears quite different from the technology underlying cognitive reasoning and problem solving. Some definitions of AI would include peripheral as well as cognitive processes; here we argue against including the peripheral processes.
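To make the abstract's central claim concrete, here is a minimal sketch (my illustration, not code from the paper) of a propositional representation and one technique for manipulating it: a Horn-clause knowledge base queried by forward chaining. The facts and rules are invented examples.

```python
def forward_chain(facts, rules):
    """Derive every proposition entailed by Horn rules of the form
    (premises, conclusion), starting from a set of known facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)  # modus ponens
                changed = True
    return derived

facts = {"rains", "outside"}
rules = [({"rains", "outside"}, "wet"),
         ({"wet"}, "cold")]
print(forward_chain(facts, rules))  # derives 'wet' and then 'cold'
```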

1 citation

Book ChapterDOI
01 Oct 2009
TL;DR: Some of the first real efforts to build intelligent machines were discussed or reported on at conferences and symposia – making these meetings important milestones in the birth of AI.
Abstract: If machines are to become intelligent, they must, at the very least, be able to do the thinking-related things that humans can do. The first steps in the quest for artificial intelligence thus involved identifying some specific tasks thought to require intelligence and figuring out how to get machines to do them. Solving puzzles, playing games such as chess and checkers, proving theorems, answering simple questions, and classifying visual images were among the problems tackled by the early pioneers during the 1950s and early 1960s. Although most of these were laboratory-style, sometimes called “toy,” problems, some real-world problems of commercial importance, such as automatic reading of highly stylized magnetic characters on bank checks and language translation, were also being attacked. (As far as I know, Seymour Papert was the first to use the phrase “toy problem.” At a 1967 AI workshop I attended in Athens, Georgia, he distinguished among tau or “toy” problems, rho or real-world problems, and theta or “theory” problems in artificial intelligence. This distinction still serves us well today.) In this part, I'll describe some of the first real efforts to build intelligent machines. Some of these were discussed or reported on at conferences and symposia, making these meetings important milestones in the birth of AI. I'll also do my best to explain the underlying workings of some of these early AI programs.

1 citation

Book
01 Apr 1989
TL;DR: In this article, the authors explore how AI is likely to affect employment and the distribution of income, arguing that AI will drastically reduce the need for human toil and that, since most of us would rather use our time for activities other than our present jobs, we ought to greet the work-eliminating consequences of AI enthusiastically.
Abstract: Artificial intelligence (AI) will have many profound societal effects. It promises potential benefits (and may also pose risks) in education, defense, business, law, and science. In this article we explore how AI is likely to affect employment and the distribution of income. We argue that AI will indeed reduce drastically the need for human toil. We also note that some people fear the automation of work by machines and the resulting unemployment. Yet, since the majority of us probably would rather use our time for activities other than our present jobs, we ought thus to greet the work-eliminating consequences of AI enthusiastically. The paper discusses two reasons, one economic and one psychological, for this paradoxical apprehension. We conclude with a discussion of problems of moving toward the kind of economy that will be enabled by developments in AI.

1 citation

Book ChapterDOI
01 Jan 2009

1 citation

Book
01 Oct 1985
TL;DR: This book collects foundational AI papers, including Some Philosophical Problems from the Standpoint of Artificial Intelligence, Circumscription: A Form of Non-Monotonic Reasoning, and Reasoning About Knowledge and Action, along with chapters on expert systems and AI applications.
Abstract: Preface. Acknowledgments.
Chapter 1: Search and Search Representations
- On Representations of Problems and Reasoning about Actions (Saul Amarel)
- A Problem Similarity Approach to Devising Heuristics: First Results (John Gaschnig)
- Optimal Search Strategies for Speech-Understanding Control (William Woods)
- Consistency in Networks of Relations (Alan Mackworth)
- The B* Tree Search Algorithm: A Best-First Proof Procedure (Hans Berliner)
Chapter 2: Deduction
- Non-Resolution Theorem Proving (W. W. Bledsoe)
- Using Rewriting Rules for Connection Graphs to Prove Theorems (C. Chang and James Slagle)
- On Closed World Data Bases (Ray Reiter)
- A Deductive Approach to Program Synthesis (Zohar Manna and Richard Waldinger)
- Prolegomena to a Theory of Mechanized Formal Reasoning (Richard Weyhrauch)
- Subjective Bayesian Methods for Rule-Based Inference Systems (Richard Duda, Peter Hart, and Nils Nilsson)
Chapter 3: Problem-Solving and Planning
- Application of Theorem Proving to Problem Solving (C. Cordell Green)
- The Frame Problem and Related Problems in Artificial Intelligence (Patrick Hayes)
- Learning and Executing Generalized Robot Plans (Richard Fikes, Peter Hart, and Nils Nilsson)
- Achieving Several Goals Simultaneously (Richard Waldinger)
- Planning and Meta-Planning (Mark Stefik)
Chapter 4: Expert Systems and AI Applications
- An Experiment in Knowledge-Based Automatic Programming (David Barstow)
- Dendral and Meta-Dendral: Their Applications Dimension (Bruce Buchanan and Edward Feigenbaum)
- Consultation Systems for Physicians (Edward Shortliffe)
- Model Design in the PROSPECTOR Consultant System for Mineral Exploration (Richard Duda, John Gaschnig, and Peter Hart)
- The Hearsay-II Speech-Understanding System: Integrating Knowledge to Resolve Uncertainty (Lee Erman, Frederick Hayes-Roth, Victor Lesser, and D. Raj Reddy)
- Using Patterns and Plans in Chess (David Wilkins)
- Interactive Transfer of Expertise: Acquisition of New Inference Rules (Randall Davis)
Chapter 5: Advanced Topics
- Some Philosophical Problems from the Standpoint of Artificial Intelligence (John McCarthy and Patrick Hayes)
- The Logic of Frames (Patrick Hayes)
- Epistemological Problems of Artificial Intelligence (John McCarthy)
- Circumscription: A Form of Non-Monotonic Reasoning (John McCarthy)
- Reasoning About Knowledge and Action (Robert Moore)
- Elements of a Plan-Based Theory of Speech Acts (Philip Cohen and C. Raymond Perrault)
- A Truth Maintenance System (Jon Doyle)
- Generalization as Search (Thomas Mitchell)
Index

1 citation


Cited by
Book
01 Jan 1988
TL;DR: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty, providing a coherent explication of probability as a language for reasoning with partial belief.
Abstract: From the Publisher: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty. The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic. The author distinguishes syntactic and semantic approaches to uncertainty and offers techniques, based on belief networks, that provide a mechanism for making semantics-based systems operational. Specifically, network-propagation techniques serve as a mechanism for combining the theoretical coherence of probability theory with modern demands of reasoning-systems technology: modular declarative inputs, conceptually meaningful inferences, and parallel distributed computation. Application areas include diagnosis, forecasting, image interpretation, multi-sensor fusion, decision support systems, plan recognition, planning, and speech recognition; in short, almost every task requiring that conclusions be drawn from uncertain clues and incomplete information. Probabilistic Reasoning in Intelligent Systems will be of special interest to scholars and researchers in AI, decision theory, statistics, logic, philosophy, cognitive psychology, and the management sciences. Professionals in the areas of knowledge-based systems, operations research, engineering, and statistics will find theoretical and computational tools of immediate practical use. The book can also be used as an excellent text for graduate-level courses in AI, operations research, or applied probability.
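As a concrete illustration of the quantity a belief network computes, here is a minimal sketch (my example, not code from the book): exact inference by enumeration in a two-node network Rain -> WetGrass. Naive enumeration only shows what is being computed; Pearl's propagation techniques obtain the same posterior efficiently by local message passing in larger networks. All probabilities below are invented.

```python
# Prior on the parent node and conditional table for the child node.
P_rain = {True: 0.2, False: 0.8}
P_wet_given_rain = {True: {True: 0.9, False: 0.1},    # P(wet | rain)
                    False: {True: 0.1, False: 0.9}}   # P(wet | no rain)

def posterior_rain(wet_observed: bool) -> float:
    """P(Rain = true | WetGrass = wet_observed) via Bayes' rule."""
    joint = {r: P_rain[r] * P_wet_given_rain[r][wet_observed]
             for r in (True, False)}
    return joint[True] / (joint[True] + joint[False])

print(posterior_rain(True))  # ~0.692: rain is far more likely given wet grass
```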

15,671 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations

Book
John R. Koza1
01 Jan 1992
TL;DR: This book discusses the evolution of architecture, primitive functions, terminals, sufficiency, and closure, and the role of representation and the lens effect in genetic programming.
Abstract: Background on genetic algorithms, LISP, and genetic programming; hierarchical problem-solving; introduction to automatically defined functions (the two-boxes problem); problems that straddle the breakeven point for computational effort; Boolean parity functions; determining the architecture of the program; the lawnmower problem; the bumblebee problem; the increasing benefits of ADFs as problems are scaled up; finding an impulse response function; artificial ant on the San Mateo trail; obstacle-avoiding robot; the minesweeper problem; automatic discovery of detectors for letter recognition; flushes and four-of-a-kinds in a pinochle deck; introduction to biochemistry and molecular biology; prediction of transmembrane domains in proteins; prediction of omega loops in proteins; lookahead version of the transmembrane problem; evolutionary selection of the architecture of the program; evolution of primitives and sufficiency; evolutionary selection of terminals; evolution of closure; simultaneous evolution of architecture, primitive functions, terminals, sufficiency, and closure; the role of representation and the lens effect. Appendices: list of special symbols; list of special functions; list of type fonts; default parameters; computer implementation; annotated bibliography of genetic programming; electronic mailing list and public repository.

13,487 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
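The fourth category lends itself to a small worked example. Below is a minimal sketch (my illustration, not code from the article) of a per-user mail filter learned from messages the user has labeled, using a tiny Naive Bayes text classifier with add-one smoothing; the training messages are invented.

```python
from collections import Counter
import math

def train(labeled):
    """labeled: list of (text, is_spam) pairs -> per-class word counts."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in labeled:
        for word in text.lower().split():
            counts[is_spam][word] += 1
        totals[is_spam] += 1
    return counts, totals

def is_spam(text, counts, totals):
    """Classify by comparing smoothed log-likelihoods of the two classes."""
    vocab = set(counts[True]) | set(counts[False])
    scores = {}
    for label in (True, False):
        score = math.log(totals[label] / sum(totals.values()))  # class prior
        n = sum(counts[label].values())
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        scores[label] = score
    return scores[True] > scores[False]

mail = [("win free money now", True), ("cheap pills free offer", True),
        ("meeting notes attached", False), ("lunch tomorrow with the team", False)]
counts, totals = train(mail)
print(is_spam("free money offer", counts, totals))        # True
print(is_spam("notes from the meeting", counts, totals))  # False
```

As the abstract notes, the point is that the filtering rules are maintained automatically: retraining on newly labeled mail updates the counts, with no programmer in the loop.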

13,246 citations

Journal ArticleDOI
TL;DR: This paper describes a mechanism for defining ontologies that are portable over representation systems, basing Ontolingua itself on an ontology of domain-independent representational idioms.

12,962 citations