Author

Richard Fikes

Other affiliations: IntelliCorp, SRI International, PricewaterhouseCoopers
Bio: Richard Fikes is an academic researcher from Stanford University. The author has contributed to research in the areas of knowledge representation and reasoning and knowledge bases. The author has an h-index of 39 and has co-authored 92 publications receiving 15,715 citations. Previous affiliations of Richard Fikes include IntelliCorp and SRI International.


Papers
Journal Article
TL;DR: In this paper, the authors describe a problem solver called STRIPS that attempts to find a sequence of operators in a space of world models to transform a given initial world model into a model in which a given goal formula can be proven to be true.

2,883 citations

Book
31 Oct 1995
TL;DR: In this article, the authors describe a problem solver called STRIPS that attempts to find a sequence of operators in a space of world models to transform a given initial world model into a model in which a given goal formula can be proven to be true.
Abstract: We describe a new problem solver called STRIPS that attempts to find a sequence of operators in a space of world models to transform a given initial world model into a model in which a given goal formula can be proven to be true. STRIPS represents a world model as an arbitrary collection of first-order predicate calculus formulas and is designed to work with models consisting of large numbers of formulas. It employs a resolution theorem prover to answer questions of particular models and uses means-ends analysis to guide it to the desired goal-satisfying model.
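
The abstract describes searching a space of world models by applying operators until the goal formula holds. Purely as an illustration, the sketch below implements a much-simplified STRIPS-style search in Python: operators carry preconditions, an add list, and a delete list over sets of ground facts, and a plain breadth-first search stands in for the paper's resolution theorem prover and means-ends analysis. The operator names and facts are invented for the example.

```python
from collections import namedtuple

# A hypothetical, much-simplified STRIPS-style operator: if the preconditions
# hold in the current world model (a set of ground facts), applying it removes
# the delete list and inserts the add list.
Operator = namedtuple("Operator", ["name", "preconditions", "add_list", "delete_list"])

def apply_operator(state, op):
    """Return the successor world model, or None if the preconditions fail."""
    if not op.preconditions <= state:
        return None
    return frozenset((state - op.delete_list) | op.add_list)

def plan(initial_state, goal, operators, max_depth=10):
    """Breadth-first search for an operator sequence whose final state
    satisfies the goal (a crude stand-in for means-ends analysis)."""
    frontier = [(frozenset(initial_state), [])]
    seen = {frozenset(initial_state)}
    for _ in range(max_depth):
        next_frontier = []
        for state, path in frontier:
            if goal <= state:
                return path
            for op in operators:
                succ = apply_operator(state, op)
                if succ is not None and succ not in seen:
                    seen.add(succ)
                    next_frontier.append((succ, path + [op.name]))
        frontier = next_frontier
    return None

# Invented toy operators in a blocks-world flavour.
ops = [
    Operator("pickup(A)", {"clear(A)", "handempty"},
             {"holding(A)"}, {"clear(A)", "handempty"}),
    Operator("stack(A,B)", {"holding(A)", "clear(B)"},
             {"on(A,B)", "clear(A)", "handempty"}, {"holding(A)", "clear(B)"}),
]
print(plan({"clear(A)", "clear(B)", "handempty"}, {"on(A,B)"}, ops))
# expected: ['pickup(A)', 'stack(A,B)']
```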

1,793 citations

Journal Article
TL;DR: This article presents a vision of the future in which knowledge-based system development and operation is facilitated by infrastructure and technology for knowledge sharing, and describes an initiative currently under way to develop these ideas.
Abstract: Building new knowledge-based systems today usually entails constructing new knowledge bases from scratch. It could instead be done by assembling reusable components. System developers would then only need to worry about creating the specialized knowledge and reasoners new to the specific task of their system. This new system would interoperate with existing systems, using them to perform some of its reasoning. In this way, declarative knowledge, problem-solving techniques, and reasoning services could all be shared among systems. This approach would facilitate building bigger and better systems cheaply. The infrastructure to support such sharing and reuse would lead to greater ubiquity of these systems, potentially transforming the knowledge industry. This article presents a vision of the future in which knowledge-based system development and operation is facilitated by infrastructure and technology for knowledge sharing. It describes an initiative currently under way to develop these ideas and suggests steps that must be taken in the future to try to realize this vision.

1,640 citations

Journal Article
TL;DR: Some major new additions to the STRIPS robot problem-solving system are described, including a process for generalizing a plan produced by STRIPS so that problem-specific constants appearing in the plan are replaced by problem-independent parameters.
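
The TL;DR describes replacing problem-specific constants in a finished plan with problem-independent parameters so the plan can be reused. The toy sketch below shows only that substitution idea; the paper's actual generalization procedure is more careful (it must keep the generalized plan's operator preconditions valid), and the operator and constant names here are invented.

```python
def generalize_plan(plan_steps, constants):
    """Toy illustration: replace each problem-specific constant appearing in a
    ground plan with a problem-independent parameter (?x1, ?x2, ...), reusing
    the same parameter wherever the same constant recurs."""
    params = {}
    generalized = []
    for step in plan_steps:
        name, args = step.rstrip(")").split("(")
        new_args = []
        for arg in (a.strip() for a in args.split(",")):
            if arg in constants:
                if arg not in params:
                    params[arg] = f"?x{len(params) + 1}"
                new_args.append(params[arg])
            else:
                new_args.append(arg)
        generalized.append(f"{name}({', '.join(new_args)})")
    return generalized, params

# Invented ground plan mentioning specific rooms and a specific box.
steps = ["goto(room1)", "pushto(box1, room2)", "goto(room2)"]
print(generalize_plan(steps, {"room1", "room2", "box1"}))
# expected: (['goto(?x1)', 'pushto(?x2, ?x3)', 'goto(?x3)'],
#            {'room1': '?x1', 'box1': '?x2', 'room2': '?x3'})
```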

1,115 citations

Journal Article
TL;DR: The Ontolingua Server as mentioned in this paper is a set of tools and services to support the process of achieving consensus on commonly shared ontologies by geographically distributed groups, and it allows users to publish, browse, create and edit ontologies stored on an ontology server.
Abstract: Reusable ontologies are becoming increasingly important for tasks such as information integration, knowledge-level interoperation and knowledge-base development. We have developed a set of tools and services to support the process of achieving consensus on commonly shared ontologies by geographically distributed groups. These tools make use of the World Wide Web to enable wide access and provide users with the ability to publish, browse, create and edit ontologies stored on an ontology server. Users can quickly assemble a new ontology from a library of modules. We discuss how our system was constructed, how it exploits existing protocols and browsing tools, and our experience supporting hundreds of users. We describe applications using our tools to achieve consensus on ontologies and to integrate information. The Ontolingua Server may be accessed through the URL http://ontolingua.stanford.edu
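
One capability described above is assembling a new ontology from a library of modules. Purely as a hypothetical illustration, and not Ontolingua's actual KIF-based representation or API, the sketch below models modules as dictionaries of class definitions and merges them, letting a later module refine a class introduced earlier.

```python
# A purely hypothetical sketch: an ontology module maps class names to parent
# classes and slots, and assembling an ontology merges a list of included modules.

def assemble(modules):
    """Merge modules in order; a later module may refine an existing class
    by contributing additional parents or slots."""
    ontology = {}
    for module in modules:
        for cls, defn in module.items():
            merged = ontology.setdefault(cls, {"parents": set(), "slots": set()})
            merged["parents"] |= defn.get("parents", set())
            merged["slots"] |= defn.get("slots", set())
    return ontology

# Invented library modules: generic documents plus a bibliography extension.
documents = {"Document": {"slots": {"title", "author"}}}
bibliography = {
    "Article": {"parents": {"Document"}, "slots": {"journal", "year"}},
    "Document": {"slots": {"publisher"}},
}
print(assemble([documents, bibliography]))
# Document ends up with slots {title, author, publisher}; Article subclasses it.
```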

893 citations


Cited by
Journal Article
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
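
The fourth category above, a mail filter customized per user, is a concrete case where rules can be learned from examples instead of being programmed by hand. The sketch below is a minimal illustration of that idea, written for this summary rather than taken from the article: a tiny add-one-smoothed word-count classifier trained on invented examples of kept and rejected messages.

```python
import math
from collections import Counter

def train(labeled_messages):
    """Count word occurrences per label and how many messages carry each label."""
    word_counts = {"keep": Counter(), "reject": Counter()}
    label_counts = Counter()
    for text, label in labeled_messages:
        word_counts[label].update(text.lower().split())
        label_counts[label] += 1
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label with the higher add-one-smoothed log score."""
    vocab = len(set().union(*word_counts.values()))
    best_label, best_score = None, float("-inf")
    for label, counts in word_counts.items():
        total = sum(counts.values())
        score = math.log(label_counts[label] + 1)
        for word in text.lower().split():
            score += math.log((counts[word] + 1) / (total + vocab))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented training examples of messages the user rejected or kept.
examples = [
    ("win money now", "reject"),
    ("free money offer", "reject"),
    ("meeting notes attached", "keep"),
    ("project schedule update", "keep"),
]
word_counts, label_counts = train(examples)
print(classify("free money now", word_counts, label_counts))      # reject
print(classify("schedule a meeting", word_counts, label_counts))  # keep
```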

13,246 citations

Journal Article
TL;DR: This paper describes a mechanism for defining ontologies that are portable over representation systems, basing Ontolingua itself on an ontology of domain-independent, representational idioms.

12,962 citations

Journal Article
TL;DR: In this paper, an interval-based temporal logic is introduced, together with a computationally effective reasoning algorithm based on constraint propagation, which is notable in offering a delicate balance between time and space.
Abstract: An interval-based temporal logic is introduced, together with a computationally effective reasoning algorithm based on constraint propagation. This system is notable in offering a delicate balance between …
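
The abstract describes reasoning over interval relations by constraint propagation. As a toy illustration only, the sketch below keeps just three basic relations (before, after, equal) and runs a naive path-consistency loop over a small composition table; the full interval algebra uses thirteen basic relations and a correspondingly larger table, and the interval names here are invented.

```python
from itertools import product

ALL = {"<", ">", "="}

# Composition table: if X r1 Y and Y r2 Z hold, which basic relations can
# hold between X and Z?
COMPOSE = {
    ("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): ALL,
    ("=", "<"): {"<"}, ("=", "="): {"="}, ("=", ">"): {">"},
    (">", "<"): ALL,   (">", "="): {">"}, (">", ">"): {">"},
}

def compose(rels1, rels2):
    """Compose two sets of possible relations via the table."""
    out = set()
    for r1, r2 in product(rels1, rels2):
        out |= COMPOSE[(r1, r2)]
    return out

def propagate(intervals, constraints):
    """Naive path consistency: keep tightening net[i, k] with the composition
    of net[i, j] and net[j, k] until nothing changes."""
    net = {(i, j): ({"="} if i == j else set(ALL))
           for i in intervals for j in intervals}
    for pair, rels in constraints.items():
        net[pair] = set(rels)
    changed = True
    while changed:
        changed = False
        for i, j, k in product(intervals, repeat=3):
            tightened = net[i, k] & compose(net[i, j], net[j, k])
            if tightened < net[i, k]:
                net[i, k] = tightened
                changed = True
    return net

# "breakfast is before lunch" and "lunch is before dinner" should entail
# that breakfast is before dinner.
net = propagate(["breakfast", "lunch", "dinner"],
                {("breakfast", "lunch"): {"<"}, ("lunch", "dinner"): {"<"}})
print(net["breakfast", "dinner"])  # expected: {'<'}
```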

7,362 citations

Journal Article
TL;DR: The role of ontologies in supporting knowledge-sharing activities is described, a set of criteria to guide the development of ontologies for these purposes is presented, and it is shown how these criteria are applied in case studies from the design of ontologies for engineering mathematics and bibliographic data.
Abstract: Recent work in Artificial Intelligence is exploring the use of formal ontologies as a way of specifying content-specific agreements for the sharing and reuse of knowledge among software entities. We take an engineering perspective on the development of such ontologies. Formal ontologies are viewed as designed artifacts, formulated for specific purposes and evaluated against objective design criteria. We describe the role of ontologies in supporting knowledge sharing activities, and then present a set of criteria to guide the development of ontologies for these purposes. We show how these criteria are applied in case studies from the design of ontologies for engineering mathematics and bibliographic data. Selected design decisions are discussed, and alternative representation choices are evaluated against the design criteria.

6,949 citations

Journal Article
TL;DR: Agent theory is concerned with the question of what an agent is and with the use of mathematical formalisms for representing and reasoning about the properties of agents, as discussed by the authors; agent architectures can be thought of as software engineering models of agents; and agent languages are software systems for programming and experimenting with agents.
Abstract: The concept of an agent has become important in both Artificial Intelligence (AI) and mainstream computer science. Our aim in this paper is to point the reader at what we perceive to be the most important theoretical and practical issues associated with the design and construction of intelligent agents. For convenience, we divide these issues into three areas (though as the reader will see, the divisions are at times somewhat arbitrary). Agent theory is concerned with the question of what an agent is, and the use of mathematical formalisms for representing and reasoning about the properties of agents. Agent architectures can be thought of as software engineering models of agents; researchers in this area are primarily concerned with the problem of designing software or hardware systems that will satisfy the properties specified by agent theorists. Finally, agent languages are software systems for programming and experimenting with agents; these languages may embody principles proposed by theorists. The paper is not intended to serve as a tutorial introduction to all the issues mentioned; we hope instead simply to identify the most important issues, and point to work that elaborates on them. The article includes a short review of current and potential applications of agent technology.

6,714 citations