
Showing papers in "AI Magazine in 1997"


Journal ArticleDOI
TL;DR: This article summarizes four directions of machine-learning research: the improvement of classification accuracy by learning ensembles of classifiers, methods for scaling up supervised learning algorithms, reinforcement learning, and the learning of complex stochastic models.
Abstract: Machine-learning research has been making great progress in many directions. This article summarizes four of these directions and discusses some current open problems. The four directions are (1) the improvement of classification accuracy by learning ensembles of classifiers, (2) methods for scaling up supervised learning algorithms, (3) reinforcement learning, and (4) the learning of complex stochastic models.

1,250 citations
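
For a sense of direction (1), here is a minimal sketch of an ensemble learned by bagging: train simple classifiers on bootstrap resamples of the data and combine them by majority vote. The stump learner, toy dataset, and parameters are invented for illustration, not taken from the article.

```python
# Minimal sketch of an ensemble by bagging with majority vote.
# The stump learner and toy dataset are illustrative only.
import random
from collections import Counter

def train_stump(sample):
    """Fit a one-feature threshold 'decision stump' on (x, label) pairs."""
    best = None
    for thresh in sorted({x for x, _ in sample}):
        for sign in (1, -1):
            acc = sum((sign if x >= thresh else -sign) == y for x, y in sample)
            if best is None or acc > best[0]:
                best = (acc, thresh, sign)
    _, thresh, sign = best
    return lambda x: sign if x >= thresh else -sign

def bagged_ensemble(data, n_members=25, seed=0):
    rng = random.Random(seed)
    # each member sees a bootstrap resample of the training data
    members = [train_stump([rng.choice(data) for _ in data])
               for _ in range(n_members)]
    def predict(x):
        votes = Counter(m(x) for m in members)   # majority vote over members
        return votes.most_common(1)[0][0]
    return predict

data = [(x, 1 if x > 5 else -1) for x in range(10)]
predict = bagged_ensemble(data)
print(predict(7), predict(2))   # -> 1 -1 (majority vote)
```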


Journal ArticleDOI
TL;DR: This technical report describes FAQ Finder, a natural language question answering system that uses files of frequently asked questions as its knowledge base, and describes the design and the current implementation of the system and its support components.
Abstract: This technical report describes FAQ Finder, a natural language question answering system that uses files of frequently asked questions as its knowledge base. Unlike AI question-answering systems that focus on the generation of new answers, FAQ Finder retrieves existing ones found in frequently asked question files. Unlike information retrieval approaches that rely on a purely lexical metric of similarity between query and document, FAQ Finder uses a semantic knowledge base (WordNet) to improve its ability to match question and answer. We describe the design and the current implementation of the system and its support components, including results from an evaluation of the system's performance against a corpus of user questions. An important finding was that a combination of semantic and statistical techniques works better than any single approach. We analyze failures of the system and discuss future research aimed at addressing them.

549 citations
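
The key finding, that semantic and statistical techniques work better in combination, can be sketched as a weighted sum of a term-overlap score and a synonym-aware score. In the sketch below, a tiny hand-rolled synonym table stands in for WordNet, and the weight w is arbitrary; none of this is FAQ Finder's actual scoring function.

```python
# Rough sketch of hybrid question matching in the spirit of FAQ Finder:
# score = w * lexical overlap + (1 - w) * synonym-aware overlap.
# SYNONYMS is a toy stand-in for WordNet; the weight w is arbitrary.
import re

SYNONYMS = {"car": {"automobile", "vehicle"}, "fix": {"repair", "mend"}}

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def lexical_sim(q, d):
    a, b = tokens(q), tokens(d)
    return len(a & b) / len(a | b) if a | b else 0.0   # Jaccard overlap

def semantic_sim(q, d):
    a, b = tokens(q), tokens(d)
    expanded = a.union(*(SYNONYMS.get(t, set()) for t in a))  # add synonyms
    return len(expanded & b) / len(b) if b else 0.0

def score(query, faq, w=0.5):
    return w * lexical_sim(query, faq) + (1 - w) * semantic_sim(query, faq)

faqs = ["How do I repair my automobile?", "How do I bake bread?"]
print(max(faqs, key=lambda f: score("fix my car", f)))  # the first FAQ wins
```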


Journal ArticleDOI
TL;DR: This article describes the technical challenges involved in RoboCup, its rules, and its simulation environment, spanning technologies such as design principles of autonomous agents, multiagent collaboration, strategy acquisition, real-time reasoning, robotics, and sensor fusion.
Abstract: The Robot World-Cup Soccer (RoboCup) is an attempt to foster AI and intelligent robotics research by providing a standard problem where a wide range of technologies can be integrated and examined. The first RoboCup competition will be held at the Fifteenth International Joint Conference on Artificial Intelligence in Nagoya, Japan. A robot team must actually perform a soccer game, incorporating various technologies, including design principles of autonomous agents, multiagent collaboration, strategy acquisition, real-time reasoning, robotics, and sensor fusion. RoboCup is a task for a team of multiple fast-moving robots under a dynamic environment. Although RoboCup's final target is a world cup with real robots, it also offers a software platform for research on the software aspects of the game. This article describes the technical challenges involved in RoboCup, its rules, and the simulation environment.

480 citations


Journal ArticleDOI
TL;DR: A framework for comparing ontologies is developed and a number of the more prominent ontologies are placed into it, clarifying the range of alternatives in creating a standard framework for ontology design.
Abstract: In this article, we develop a framework for comparing ontologies and place a number of the more prominent ontologies into it. We have selected 10 specific projects for this study, including general ontologies, domain-specific ones, and one knowledge representation system. The comparison framework includes general characteristics, such as the purpose of an ontology, its coverage (general or domain specific), its size, and the formalism used. It also includes the design process used in creating an ontology and the methods used to evaluate it. Characteristics that describe the content of an ontology include taxonomic organization, types of concept covered, top-level divisions, internal structure of concepts, representation of part-whole relations, and the presence and nature of additional axioms. Finally, we consider what experiments or applications have used the ontologies. Knowledge sharing and reuse will require a common framework to support interoperability of independently created ontologies. Our study shows there is great diversity in the way ontologies are designed and the way they represent the world. By identifying the similarities and differences among existing ontologies, we clarify the range of alternatives in creating a standard framework for ontology design.

423 citations


Journal ArticleDOI
TL;DR: This article presents a fundamentally new method for generating user profiles that takes advantage of a large-scale database of demographic data to generalize user-specified data along the patterns common across the population, including areas not represented in the user's original data.
Abstract: A number of approaches have been advanced for taking data about a user's likes and dislikes and generating a general profile of the user. These profiles can be used to retrieve documents matching user interests; recommend music, movies, or other similar products; or carry out other tasks in a specialized fashion. This article presents a fundamentally new method for generating user profiles that takes advantage of a large-scale database of demographic data. These data are used to generalize user-specified data along the patterns common across the population, including areas not represented in the user's original data. I describe the method in detail and present its implementation in the LIFESTYLE FINDER agent, an internet-based experiment testing our approach on more than 20,006 users worldwide.

367 citations
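
The generalization step can be sketched as: match the user's known attributes to the nearest demographic cluster, then fill in unknown attributes from that cluster's typical values. The clusters and attributes below are invented; the real LIFESTYLE FINDER profile machinery is far richer.

```python
# Hedged sketch of demographic generalization: complete a sparse user profile
# from the closest population cluster. Cluster data is invented.
CLUSTERS = {
    "urban_young":  {"age": 25, "likes_travel": 0.8, "likes_gardening": 0.2},
    "suburban_mid": {"age": 45, "likes_travel": 0.4, "likes_gardening": 0.7},
}

def distance(user, cluster):
    shared = [k for k in user if k in cluster]   # compare only known fields
    return sum((user[k] - cluster[k]) ** 2 for k in shared)

def generalize(user):
    best = min(CLUSTERS.values(), key=lambda c: distance(user, c))
    completed = dict(best)       # start from the population pattern
    completed.update(user)       # user-specified data always wins
    return completed

print(generalize({"age": 27}))
# -> fills likes_travel / likes_gardening from the 'urban_young' pattern
```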


Journal ArticleDOI
TL;DR: The goal of the REFERRAL WEB Project is to create models of social networks by data mining the web and to develop tools that use the models to assist in locating experts and in related information search and evaluation tasks.
Abstract: The difficulty of finding information on the World Wide Web by browsing hypertext documents has led to the development and deployment of various search engines and indexing techniques. However, many information-gathering tasks are better handled by finding a referral to a human expert rather than by simply interacting with online information sources. A personal referral allows a user to judge the quality of the information he or she is receiving as well as to potentially obtain information that is deliberately not made public. The process of finding an expert who is both reliable and likely to respond to the user can be viewed as a search through the network of social relationships between individuals as opposed to a search through the network of hypertext documents. The goal of the REFERRAL WEB Project is to create models of social networks by data mining the web and to develop tools that use the models to assist in locating experts and in related information search and evaluation tasks.

300 citations
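
At its core, finding a referral is a shortest-path search over a social graph rather than a document graph. A minimal breadth-first sketch over an invented graph follows; REFERRAL WEB built its actual graph by mining co-occurrences of names on the web.

```python
# Minimal sketch: find the shortest referral chain from a user to any expert
# on a topic, via breadth-first search over a social graph. Data is invented.
from collections import deque

GRAPH = {"you": ["ann", "bob"], "ann": ["you", "carol"],
         "bob": ["you"], "carol": ["ann"]}
EXPERTS = {"carol": {"bayesian networks"}}

def referral_chain(start, topic):
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        person = path[-1]
        if topic in EXPERTS.get(person, set()):
            return path                      # chain of referrals to the expert
        for friend in GRAPH.get(person, []):
            if friend not in seen:
                seen.add(friend)
                queue.append(path + [friend])
    return None

print(referral_chain("you", "bayesian networks"))   # ['you', 'ann', 'carol']
```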


Journal ArticleDOI
Claire Cardie
TL;DR: The author presents a generic architecture for information-extraction systems and then surveys the learning algorithms that have been developed to address the problems of accuracy, portability, and knowledge acquisition for each component of the architecture.
Abstract: This article surveys the use of empirical, machine-learning methods for a particular natural language-understanding task-information extraction. The author presents a generic architecture for information-extraction systems and then surveys the learning algorithms that have been developed to address the problems of accuracy, portability, and knowledge acquisition for each component of the architecture.

279 citations


Journal ArticleDOI
TL;DR: Part-of-speech tagging is considered, which was the first syntactic problem to be successfully attacked by statistical techniques and which serves as a good warm-up for the main topic: statistical parsing.
Abstract: I review current statistical work on syntactic parsing and then consider part-of-speech tagging, which was the first syntactic problem to be successfully attacked by statistical techniques and also serves as a good warm-up for the main topic: statistical parsing. Here, I consider both the simplified case in which the input string is viewed as a string of parts of speech and the more interesting case in which the parser is guided by statistical information about the particular words in the sentence. Finally, I anticipate future research directions.

250 citations
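
The tagging problem is standardly cast as finding the tag sequence that maximizes P(tags) x P(words | tags) under a hidden Markov model, decoded with the Viterbi algorithm. The toy tagger below uses invented probabilities; it illustrates the standard formulation rather than any specific system from the article.

```python
# Viterbi decoding for a toy HMM tagger: pick the tag sequence maximizing
# P(tags) * P(words | tags). All probabilities are invented for illustration.
TAGS = ["DET", "NOUN", "VERB"]
TRANS = {  # P(tag_i | tag_{i-1}); "<s>" is the start state
    ("<s>", "DET"): 0.6, ("<s>", "NOUN"): 0.3, ("<s>", "VERB"): 0.1,
    ("DET", "NOUN"): 0.9, ("DET", "VERB"): 0.05, ("DET", "DET"): 0.05,
    ("NOUN", "VERB"): 0.6, ("NOUN", "NOUN"): 0.3, ("NOUN", "DET"): 0.1,
    ("VERB", "DET"): 0.5, ("VERB", "NOUN"): 0.4, ("VERB", "VERB"): 0.1,
}
EMIT = {  # P(word | tag)
    ("DET", "the"): 0.7, ("NOUN", "dog"): 0.4, ("NOUN", "barks"): 0.1,
    ("VERB", "barks"): 0.5, ("VERB", "dog"): 0.05,
}

def viterbi(words):
    # best[t] = (probability, tag path) of the best path ending in tag t
    best = {t: (TRANS.get(("<s>", t), 0.0) * EMIT.get((t, words[0]), 0.0), [t])
            for t in TAGS}
    for w in words[1:]:
        best = {t: max(((p * TRANS.get((prev, t), 0.0) * EMIT.get((t, w), 0.0),
                         path + [t]) for prev, (p, path) in best.items()),
                       key=lambda x: x[0])
                for t in TAGS}
    return max(best.values(), key=lambda x: x[0])[1]

print(viterbi(["the", "dog", "barks"]))   # -> ['DET', 'NOUN', 'VERB']
```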


Journal ArticleDOI
TL;DR: The SAVVYSEARCH metasearch engine is designed to efficiently query other search engines by carefully selecting those search engines likely to return useful results and responding to fluctuating load demands on the web.
Abstract: Search engines are among the most successful applications on the web today. So many search engines have been created that it is difficult for users to know where they are, how to use them, and what topics they best address. Metasearch engines reduce the user burden by dispatching queries to multiple search engines in parallel. The SAVVYSEARCH metasearch engine is designed to efficiently query other search engines by carefully selecting those search engines likely to return useful results and responding to fluctuating load demands on the web. SAVVYSEARCH learns to identify which search engines are most appropriate for particular queries, reasons about resource demands, and represents an iterative parallel search strategy as a simple plan.

214 citations
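
The engine-selection idea can be caricatured as maintaining a learned usefulness score for each (query term, engine) pair, updated from user feedback, and dispatching each query to the top-scoring engines. This is a hedged sketch with invented engines and a simplistic update rule, not SAVVYSEARCH's actual ranking.

```python
# Hedged sketch of metasearch engine selection: keep a success score per
# (query term, engine), learned from feedback, and dispatch to the top-k.
from collections import defaultdict

scores = defaultdict(float)   # (term, engine) -> learned usefulness

def record_feedback(query, engine, clicked):
    """Reward engines whose results were followed, penalize dead ends."""
    for term in query.lower().split():
        scores[(term, engine)] += 1.0 if clicked else -0.2

def select_engines(query, engines, k=2):
    def engine_score(e):
        return sum(scores[(t, e)] for t in query.lower().split())
    return sorted(engines, key=engine_score, reverse=True)[:k]

ENGINES = ["lycos", "excite", "webcrawler"]
record_feedback("python tutorial", "excite", clicked=True)
record_feedback("python tutorial", "lycos", clicked=False)
print(select_engines("python class tutorial", ENGINES))  # excite ranks first
```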


Journal ArticleDOI
TL;DR: This article samples a number of recent accomplishments in machine learning and looks at where the field might be headed.
Abstract: Does machine learning really work? Yes. Over the past decade, machine learning has evolved from a field of laboratory demonstrations to a field of significant commercial value. Machine-learning algorithms have now learned to detect credit card fraud by mining data on past transactions, learned to steer vehicles driving autonomously on public highways at 70 miles an hour, and learned the reading interests of many individuals to assemble personally customized electronic newspapers. A new computational theory of learning is beginning to shed light on fundamental issues, such as the trade-off among the number of training examples available, the number of hypotheses considered, and the likely accuracy of the learned hypothesis. Newer research is beginning to explore issues such as long-term learning of new representations, the integration of Bayesian inference and induction, and life-long cumulative learning. This article, based on the keynote talk presented at the Thirteenth National Conference on Artificial Intelligence, samples a number of recent accomplishments in machine learning and looks at where the field might be headed. [Copyright restrictions preclude electronic publication of this article.]

177 citations
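
The trade-off mentioned here, among training examples, hypotheses considered, and likely accuracy, is captured by the textbook PAC sample-complexity bound for a finite hypothesis space (a standard result, not a formula quoted from the article):

```latex
% With probability at least 1 - \delta, any hypothesis from a finite space H
% that is consistent with m training examples has true error at most
% \epsilon, provided
m \;\ge\; \frac{1}{\epsilon}\left(\ln\lvert H\rvert + \ln\frac{1}{\delta}\right)
```

Read: considering more hypotheses or demanding tighter accuracy requires more training examples, which is exactly the trade-off the abstract describes.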


Journal ArticleDOI
TL;DR: I view the World Wide Web as an information food chain; the maze of pages and hyperlinks that comprise the Web is at the very bottom of the chain, and the time is ripe to move up.
Abstract: I view the World Wide Web as an information food chain. The maze of pages and hyperlinks that comprise the Web is at the very bottom of the chain. The WEBCRAWLERs and ALTAVISTAs of the world are information herbivores; they graze on Web pages and regurgitate them as searchable indices. Today, most Web users feed near the bottom of the information food chain, but the time is ripe to move up. Since 1991, we have been building information carnivores, which intelligently hunt and feast on herbivores in UNIX, on the Internet, and on the Web. Information carnivores will become increasingly critical as the Web continues to grow and as more naive users are exposed to its chaotic jumble.

Journal ArticleDOI
TL;DR: This article presents an introduction to the series of specialized articles on empirical methods in speech recognition, syntactic parsing, semantic processing, information extraction, and machine translation and attempts to explain the growing interest in using learning methods to aid the development of natural language processing systems.
Abstract: In recent years, there has been a resurgence in research on empirical methods in natural language processing. These methods employ learning techniques to automatically extract linguistic knowledge from natural language corpora rather than require the system developer to manually encode the requisite knowledge. The current special issue reviews recent research in empirical methods in speech recognition, syntactic parsing, semantic processing, information extraction, and machine translation. This article presents an introduction to the series of specialized articles on these topics and attempts to describe and explain the growing interest in using learning methods to aid the development of natural language processing systems.

Journal ArticleDOI
TL;DR: This article illustrates how recent approaches to machine translation frequently make use of text-based learning algorithms to fully or partially automate the acquisition of knowledge.
Abstract: Machine translation of human languages (for example, Japanese, English, Spanish) was one of the earliest goals of computer science research, and it remains an elusive one. Like many AI tasks, translation requires an immense amount of knowledge about language and the world. Recent approaches to machine translation frequently make use of text-based learning algorithms to fully or partially automate the acquisition of knowledge. This article illustrates these approaches.
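
One family of text-based learning methods of the kind surveyed induces a bilingual lexicon from sentence-aligned text by co-occurrence statistics. The sketch below scores candidate translations with the Dice coefficient over an invented three-sentence corpus; real systems of the period (for example, the IBM alignment models) are considerably more sophisticated.

```python
# Sketch of learning a translation lexicon from sentence-aligned text using
# the Dice coefficient: 2*cooc / (count_src + count_tgt). Corpus is invented.
from collections import Counter
from itertools import product

corpus = [("the house", "la casa"), ("the cat", "el gato"),
          ("a house", "una casa")]

src_counts, tgt_counts, cooc = Counter(), Counter(), Counter()
for src, tgt in corpus:
    s_words, t_words = set(src.split()), set(tgt.split())
    src_counts.update(s_words)
    tgt_counts.update(t_words)
    cooc.update(product(s_words, t_words))   # co-occurrence within a pair

def dice(s, t):
    return 2 * cooc[(s, t)] / (src_counts[s] + tgt_counts[t])

print(max(tgt_counts, key=lambda t: dice("house", t)))   # -> 'casa'
```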


Journal ArticleDOI
TL;DR: This article provides a tutorial introduction to the algorithms and approaches to the planning problem in AI through a generalized framework called refinement planning, showing that in its various guises, refinement planning subsumes most of the algorithms that have been, or are being, developed.
Abstract: Planning -- the ability to synthesize a course of action to achieve desired goals -- is an important part of intelligent agency and has thus received significant attention within AI for more than 30 years. Work on efficient planning algorithms still continues to be a hot topic for research in AI and has led to several exciting developments in the past few years. This article provides a tutorial introduction to all the algorithms and approaches to the planning problem in AI. To fulfill this ambitious objective, I introduce a generalized approach to plan synthesis called refinement planning and show that in its various guises, refinement planning subsumes most of the algorithms that have been, or are being, developed. It is hoped that this unifying overview provides the reader with a brand-name-free appreciation of the essential issues in planning.
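
In skeleton form, refinement planning maintains a set of partial plans, each denoting a set of candidate solutions, and repeatedly either extracts a finished solution from one of them or splits it with a refinement operation. The sketch below paraphrases that loop with placeholder functions; it is not the article's formal algorithm.

```python
# Skeleton of refinement planning: each node is a partial plan denoting a set
# of candidate solutions; refinements split that set without losing solutions.
# 'extract_solution' and 'refine' are placeholders for a concrete planner.

def refinement_planner(initial_partial_plan, extract_solution, refine):
    open_plans = [initial_partial_plan]       # current set of partial plans
    while open_plans:
        plan = open_plans.pop()               # pick one (pop() = depth-first;
        solution = extract_solution(plan)     #  a queue would be breadth-first)
        if solution is not None:
            return solution
        open_plans.extend(refine(plan))       # split into narrower partial plans
    return None                               # candidate space exhausted

# Toy instantiation: a "partial plan" is an action prefix; refining appends
# one action; a plan is a solution when it spells the goal sequence.
actions, goal = "ab", "ab"
solve = refinement_planner(
    "",
    lambda p: p if p == goal else None,
    lambda p: [p + a for a in actions if len(p) < len(goal)],
)
print(solve)   # -> 'ab'
```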

Journal ArticleDOI
TL;DR: Using SRI International's open-agent architecture and SAPHIRA robot-control system, three physical robots and a set of software agents on the internet were configured to plan and act in coordination, enabling the design and implementation of a winning team in the six weeks before the Fifth Annual AAAI Mobile Robot Competition and Exhibition.
Abstract: Indoor mobile robots are becoming reliable enough in navigation tasks to consider working with teams of robots. Using SRI International's open-agent architecture (OAA) and SAPHIRA robot-control system, we configured three physical robots and a set of software agents on the internet to plan and act in coordination. Users communicate with the robots using a variety of multimodal input: pen, voice, and keyboard. The robust capabilities of the OAA and SAPHIRA enabled us to design and implement a winning team in the six weeks before the Fifth Annual AAAI Mobile Robot Competition and Exhibition.

Journal ArticleDOI
TL;DR: This article discusses fast (60 frames per second) object tracking using the COGNACHROME VISION SYSTEM, produced by Newton Research Labs; the authors' entry was the only robot capable of using a gripper to capture and pick up the motorized, randomly moving squiggle ball.
Abstract: This article discusses the use of fast (60 frames per second) object tracking using the COGNACHROME VISION SYSTEM, produced by Newton Research Labs. The authors embedded the vision system in a small robot base to tie for first place in the Clean Up the Tennis Court event at the 1996 Annual AAAI Mobile Robot Competition and Exhibition, held as part of the Thirteenth National Conference on Artificial Intelligence. Of particular interest is that the authors' entry was the only robot capable of using a gripper to capture and pick up the motorized, randomly moving squiggle ball. Other examples of robotic systems using fast vision tracking are also presented, such as a robot arm capable of catching thrown objects and the soccer-playing robot team that won the 1996 Micro Robot World Cup Soccer Tournament in Taejon, Korea.
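
Color-blob tracking of the kind the COGNACHROME hardware performs can be approximated in a few lines: threshold each pixel against a target color range and report the centroid of the matching pixels. The frame below is a tiny synthetic image; the board does the equivalent at 60 Hz in dedicated hardware.

```python
# Sketch of color-blob tracking: threshold pixels against a target RGB range
# and return the blob centroid. The 'frame' is a tiny synthetic image.
def track_blob(frame, lo, hi):
    hits = [(x, y)
            for y, row in enumerate(frame)
            for x, (r, g, b) in enumerate(row)
            if all(l <= c <= h for c, l, h in zip((r, g, b), lo, hi))]
    if not hits:
        return None
    cx = sum(x for x, _ in hits) / len(hits)   # centroid of matching pixels
    cy = sum(y for _, y in hits) / len(hits)
    return cx, cy

BLACK, ORANGE = (0, 0, 0), (255, 128, 0)
frame = [[BLACK, ORANGE, ORANGE],
         [BLACK, ORANGE, ORANGE],
         [BLACK, BLACK, BLACK]]
print(track_blob(frame, lo=(200, 80, 0), hi=(255, 180, 60)))   # -> (1.5, 0.5)
```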

Journal ArticleDOI
TL;DR: An introduction to some of the emerging research in the application of corpus-based learning techniques to problems in semantic interpretation, namely, word-sense disambiguation and semantic parsing.
Abstract: In recent years, there has been a flurry of research into empirical, corpus-based learning approaches to natural language processing (NLP). Most empirical NLP work to date has focused on relatively low-level language processing such as part-of-speech tagging, text segmentation, and syntactic parsing. The success of these approaches has stimulated research in using empirical learning techniques in other facets of NLP, including semantic analysis -- uncovering the meaning of an utterance. This article is an introduction to some of the emerging research in the application of corpus-based learning techniques to problems in semantic interpretation. In particular, we focus on two important problems in semantic interpretation, namely, word-sense disambiguation and semantic parsing.
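
A standard corpus-based treatment of word-sense disambiguation, one of the two problems highlighted here, is a naive Bayes classifier over context words: pick the sense maximizing P(sense) times the product of P(word | sense). The sense-tagged training examples below are invented, and add-one smoothing is an arbitrary choice.

```python
# Toy naive Bayes word-sense disambiguation with add-one smoothing.
# The sense-tagged training sentences are invented for illustration.
from collections import Counter
import math

train = [("bank", "finance", "deposit money in the bank account"),
         ("bank", "finance", "the bank raised interest rates"),
         ("bank", "river",   "fishing on the bank of the river"),
         ("bank", "river",   "the river bank was muddy")]

sense_counts = Counter(s for _, s, _ in train)
word_counts = {s: Counter() for s in sense_counts}
for _, s, ctx in train:
    word_counts[s].update(ctx.split())

VOCAB = {w for _, _, ctx in train for w in ctx.split()}

def disambiguate(context):
    def log_score(s):
        total = sum(word_counts[s].values())
        prior = sense_counts[s] / len(train)
        return (math.log(prior) +
                sum(math.log((word_counts[s][w] + 1) / (total + len(VOCAB)))
                    for w in context.split()))
    return max(sense_counts, key=log_score)

print(disambiguate("interest rates at the bank"))   # -> 'finance'
```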

Journal Article
TL;DR: The "Naive Physics Manifesto" of Pat Hayes (1978) proposes a large-scale project to develop a formal theory encompassing the entire knowledge of physics of naive reasoners, expressed in a declarative symbolic form; this article compares the advantages and disadvantages of that approach and of more recent microworld-based work.
Abstract: The "Naive Physics Manifesto" of Pat Hayes (1978) proposes a large-scale project of developing a formal theory encompassing the entire knowledge of physics of naive reasoners, expressed in a declarative symbolic form. The theory is organized in clusters of closely interconnected concepts and axioms. More recent work in the representation of commonsense physical knowledge has followed a somewhat different methodology. The goal has been to develop a competence theory powerful enough to justify commonsense physical inferences, and the research is organized in microworlds, each microworld covering a small range of physical phenomena. In this paper we compare the advantages and disadvantages of the two approaches. We also discuss some difficult key issues in automating commonsense physical reasoning.

Journal ArticleDOI
TL;DR: The research agenda, strategy, and heuristics are reviewed, and a change of course is recommended to improve the field's ability to produce reusable and interoperable components.
Abstract: AI has been well supported by government research and development dollars for decades now, and people are beginning to ask hard questions: What really works? What are the limits? What doesn't work as advertised? What isn't likely to work? What isn't affordable? This article holds a mirror up to the community, both to provide feedback and stimulate more self-assessment. The significant accomplishments and strengths of the field are highlighted. The research agenda, strategy, and heuristics are reviewed, and a change of course is recommended to improve the field's ability to produce reusable and interoperable components.

Journal ArticleDOI
TL;DR: This article describes three agents that help a user locate useful or interesting information on the World Wide Web by learning a probabilistic profile to find, classify, or rank other sources of information that are likely to interest the user.
Abstract: This article describes three agents that help a user locate useful or interesting information on the World Wide Web. The agents learn a probabilistic profile to find, classify, or rank other sources of information that are likely to interest the user.
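
A probabilistic profile of the kind described can be sketched as two smoothed word models, one for pages the user liked and one for pages the user disliked, with new pages ranked by summed log-odds. The page data below is invented and the model is deliberately minimal; the article's agents use their own, richer profiles.

```python
# Toy probabilistic profile: learn word frequencies from pages the user liked
# and disliked, then rank new pages by summed log-odds. Data is invented.
from collections import Counter
import math

liked    = ["robot soccer team", "mobile robot navigation"]
disliked = ["stock market report", "market news summary"]

def model(pages):
    c = Counter(w for p in pages for w in p.split())
    total, vocab = sum(c.values()), len(c)
    return lambda w: (c[w] + 1) / (total + vocab + 1)   # smoothed P(word)

p_like, p_dislike = model(liked), model(disliked)

def score(page):
    return sum(math.log(p_like(w) / p_dislike(w)) for w in page.split())

pages = ["robot competition results", "market report today"]
print(sorted(pages, key=score, reverse=True))   # the robot page ranks first
```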

Journal ArticleDOI
TL;DR: The robot competition raised the standard for autonomous mobile robotics, demonstrating the intelligent integration of perception, deliberation, and action.
Abstract: The Fifth Annual AAAI Mobile Robot Competition and Exhibition was held in Portland, Oregon, in conjunction with the Thirteenth National Conference on Artificial Intelligence. The competition consisted of two events: (1) Office Navigation and (2) Clean Up the Tennis Court. The first event stressed navigation and planning. The second event stressed vision sensing and manipulation. In addition to the competition, there was a mobile robot exhibition in which teams demonstrated robot behaviors that did not fit into the competition tasks. The competition and exhibition were unqualified successes, with nearly 20 teams competing. The robot competition raised the standard for autonomous mobile robotics, demonstrating the intelligent integration of perception, deliberation, and action.

Journal ArticleDOI
TL;DR: Artificial intelligence is the key technology in many of today's novel applications, ranging from banking systems that detect attempted credit card fraud, to telephone systems that understand speech, to software systems that notice when you're having problems and offer appropriate advice.
Abstract: Artificial intelligence (AI) is the key technology in many of today's novel applications, ranging from banking systems that detect attempted credit card fraud, to telephone systems that understand speech, to software systems that notice when you're having problems and offer appropriate advice. These technologies would not exist today without the sustained federal support of fundamental AI research over the past three decades.

Journal ArticleDOI
TL;DR: This article provides an assessment of what has been achieved in the 20 years since logic and databases started as a distinct discipline.
Abstract: At a workshop held in Toulouse, France, in 1977, Gallaire, Minker, and Nicolas stated that logic and databases was a field in its own right. This was the first time that this designation was made. The impetus for it started approximately 20 years ago in 1976 when I visited Gallaire and Nicolas in Toulouse, France. In this article, I provide an assessment about what has been achieved in the 20 years since the field started as a distinct discipline. I review developments in the field, assess contributions, consider the status of implementations of deductive databases, and discuss future work needed in deductive databases.
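
Deductive databases evaluate logic rules over stored relations; the canonical example is transitive closure, computed bottom-up by applying a recursive rule until a fixpoint, which standard SQL of the period could not express. A naive-evaluation sketch (the rule and facts are illustrative):

```python
# Naive bottom-up evaluation of the classic deductive-database rules
#   reachable(X, Y) :- edge(X, Y).
#   reachable(X, Y) :- edge(X, Z), reachable(Z, Y).
# Facts are iterated to a fixpoint: stop when no new tuples are derived.
edge = {("a", "b"), ("b", "c"), ("c", "d")}

reachable = set(edge)
while True:
    derived = {(x, y) for (x, z) in edge for (z2, y) in reachable if z == z2}
    if derived <= reachable:          # fixpoint: nothing new was derived
        break
    reachable |= derived

print(sorted(reachable))
# [('a','b'), ('a','c'), ('a','d'), ('b','c'), ('b','d'), ('c','d')]
```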

Journal ArticleDOI
TL;DR: This article describes recent and ongoing work on spacecraft autonomy and ground systems that builds on a legacy of success at JPL in applying AI techniques to challenging computational problems in planning and scheduling, real-time monitoring and control, scientific data analysis, and design automation.
Abstract: The National Aeronautics and Space Administration (NASA) is being challenged to perform more frequent and intensive space-exploration missions at greatly reduced cost. Nowhere is this challenge more acute than among robotic planetary exploration missions that the Jet Propulsion Laboratory (JPL) conducts for NASA. This article describes recent and ongoing work on spacecraft autonomy and ground systems that builds on a legacy of existing success at JPL applying AI techniques to challenging computational problems in planning and scheduling, real-time monitoring and control, scientific data analysis, and design automation.

Journal ArticleDOI
TL;DR: How state-of-the-art speech-recognition systems combine statistical modeling, linguistic knowledge, and machine learning to achieve their performance is reviewed and some of the research issues in the field are pointed out.
Abstract: Automatic speech recognition is one of the fastest growing and commercially most promising applications of natural language technology. The technology has achieved a point where carefully designed systems for suitably constrained applications are a reality. Commercial systems are available today for such tasks as large-vocabulary dictation and voice control of medical equipment. This article reviews how state-of-the-art speech-recognition systems combine statistical modeling, linguistic knowledge, and machine learning to achieve their performance and points out some of the research issues in the field.
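
The statistical core of such systems is the noisy-channel formulation: choose the word sequence W that is most probable given the acoustics A, factored by Bayes' rule into an acoustic model and a language model. This is the textbook decomposition consistent with the systems surveyed, not a formula quoted from the article.

```latex
\hat{W} \;=\; \arg\max_{W} \, P(W \mid A)
        \;=\; \arg\max_{W} \, P(A \mid W)\, P(W)
% P(A | W): acoustic model (typically HMMs); P(W): language model (n-grams)
```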

Journal ArticleDOI
TL;DR: History and an understanding of human-machine interaction argue otherwise: any number of forces may work towards the stratification of society, but the computer is not one of them.
Abstract: When hands are wringing, they are often wrung over the information revolution: the computer, the web, robots, the automation of manufacturing will all conspire to separate the rich and quick from the poor and slow, hurrying the trend to an informed, skilled, and employed elite living among an uninformed, unskilled, and unemployed majority. But both history and an understanding of human-machine interaction argue otherwise. Any number of forces may work towards the stratification of society, but the computer is not one of them. Computers, especially intelligent ones, are the great equalizers. Humanity has always recognized that the powers of mind are limited, and has always made devices to compensate for those limitations. Our most obvious cognitive limitation is memory, and writing is a device for storing information outside the head so that it does not have to be remembered. Instead, the human brain need only store the code for reading. As soon as it could be economically reproduced and distributed, writing became in Europe an irresistible force for equality: within three centuries after Gutenberg, modern science had been created, ecclesiastical authority had been reduced, the divine right of kings had vanished, and democratic forms of government had emerged. Calculation shows the same history. Roman enumeration methods made addition, multiplication and division impossible except for the gifted. To compensate, the abacus was used in Europe as an arithmetical prosthesis. In the 13th century, Leonard Fibonacci introduced into Europe the Hindu-Arabic system of numbers and the arithmetic algorithms they made possible. One of the first books after the Bible printed with moveable type was an Arithmetic. Even so, the algorithms were not easy and not widely disseminated. Most 17th century tradesmen could not multiply. Today, every shop assistant … turning the screws, or someone driving a car not really moving along the highway? With a power screwdriver anyone can drive the hardest screw; with a calculator, anyone can get the numbers right; with an aircraft anyone can fly to Paris; and with Deep Blue, anyone can beat the world chess champion. Cognitive prostheses undermine the exclusiveness of expertise by giving non-experts equivalent capacities. As with any good tool, the effect is to make all of us more productive, more skillful, more equal. One day last year one of our fathers, Joe Glymour, told us with mixed pleasure and anxiety that he was beginning a game of postcard chess against …

Journal ArticleDOI
TL;DR: It is argued that Jeeves’s success depended crucially on the existence of the model, and that models are generally useful in mobile robotics—even in tasks as simple as the one faced in this competition.
Abstract: This article describes Jeeves, one of the winning entries in the 1996 AAAI mobile robot competition. Jeeves tied for first place in the finals of the competition, after it won both preliminary trials. A key aspect in Jeeves’s software design was the ability to acquire a model of the environment. The model, a geometric map constructed from sensory data while the robot performs its task, enabled Jeeves to sweep the arena efficiently. It facilitated the retrieval of balls and their delivery at the gate, and it helped to avoid unintended collisions with obstacles. This article argues that Jeeves’s success depended crucially on the existence of the model. It also argues that models are generally useful in mobile robotics, even in tasks as simple as the one faced in this competition.

Journal ArticleDOI
TL;DR: The piece begins with a short history of the competition, then discusses the technical challenges and the political and cultural issues associated with bringing it off every year, and discusses the community formed by the organizers, participants, and the conference attendees.
Abstract: This article is the content of an invited talk given by the authors at the Thirteenth National Conference on Artificial Intelligence (AAAI-96). The piece begins with a short history of the competition, then discusses the technical challenges and the political and cultural issues associated with bringing it off every year. We also cover the science and engineering involved with the robot tasks and the educational and commercial aspects of the competition. We finish with a discussion of the community formed by the organizers, participants, and the conference attendees. The original talk made liberal use of video clips and slide photographs, so we have expanded the text and added photographs to make up for the lack of such media.

Journal ArticleDOI
TL;DR: The Second International Conference on Multiagent Systems (ICMAS-96) Workshop on Norms, Obligations, and Conventions was held in Kyoto, Japan, from 10 to 13 December 1996 and included scientists from deontic logic, database framework, decision theory, agent architecture, cognitive modeling, and legal expert systems.
Abstract: The Second International Conference on Multiagent Systems (ICMAS-96) Workshop on Norms, Obligations, and Conventions was held in Kyoto, Japan, from 10 to 13 December 1996. Participants included scientists from deontic logic, database framework, decision theory, agent architecture, cognitive modeling, and legal expert systems. This article summarizes the contributions chosen for presentation and their links to these areas.