
Showing papers in "IEEE Intelligent Systems in 2002"


Journal ArticleDOI
TL;DR: An overview of the issues and available tools in three key areas of virtual human research: face-to-face conversation, emotions and personality, and human figure animation.
Abstract: Discusses some of the key issues that must be addressed in creating virtual humans, or androids. As a first step, we overview the issues and available tools in three key areas of virtual human research: face-to-face conversation, emotions and personality, and human figure animation. Assembling a virtual human is still a daunting task, but the building blocks are getting bigger and better every day.

406 citations


Journal ArticleDOI
TL;DR: The goal is to help developers find the most suitable language for representing the semantic information the Semantic Web requires and for solving heterogeneous data exchange in this heterogeneous environment.
Abstract: Ontologies have proven to be an essential element in many applications. They are used in agent systems, knowledge management systems, and e-commerce platforms. They are also used in natural language generation, intelligent information integration, semantic-based access to the Internet, and information extraction from texts, as well as in many other applications that explicitly declare the knowledge embedded in them. However, not only are ontologies useful for applications in which knowledge plays a key role, but they can also trigger a major change in current Web contents. This change is leading to the third generation of the Web, known as the Semantic Web, which has been defined as the conceptual structuring of the Web in an explicit machine-readable way. New ontology-based applications and knowledge architectures are being developed for this new Web. A claim common to all of these approaches is the need for languages to represent the semantic information that this Web requires and to solve heterogeneous data exchange in this heterogeneous environment. Our goal is to help developers find the most suitable language for their representation needs.

378 citations


Journal ArticleDOI
TL;DR: In this paper, the authors focus on DAML's current markup language, DAML+OIL, which is a proposed starting point for the W3C's Semantic Web Activity's Ontology Web Language (OWL).
Abstract: By all measures, the Web is enormous and growing at a staggering rate, which has made it increasingly difficult, and increasingly important, for both people and programs to have quick and accurate access to Web information and services. The Semantic Web offers a solution, capturing and exploiting the meaning of terms to transform the Web from a platform that focuses on presenting information to a platform that focuses on understanding and reasoning with information. To support Semantic Web development, the US Defense Advanced Research Projects Agency launched the DARPA Agent Markup Language (DAML) initiative to fund research in languages, tools, infrastructure, and applications that make Web content more accessible and understandable. Although the US government funds DAML, several organizations, including US and European businesses and universities and international consortia such as the World Wide Web Consortium, have contributed to work on issues related to DAML's development and deployment. We focus on DAML's current markup language, DAML+OIL, which is a proposed starting point for the W3C Semantic Web Activity's Ontology Web Language (OWL). We introduce DAML+OIL syntax and usage through a set of examples drawn from a wine knowledge base used to teach novices how to build ontologies.

356 citations
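The article teaches DAML+OIL through wine-ontology examples; as a loose illustration in the same spirit (not the paper's actual knowledge base), the Python sketch below uses the rdflib library to assert a tiny class hierarchy under the DAML+OIL namespace. The wine class names and the example.org namespace are invented for this example.

```python
# A minimal sketch of DAML+OIL-style class markup built with rdflib.
# The wine classes and the example.org namespace are illustrative only.
from rdflib import Graph, Namespace, RDF, RDFS

DAML = Namespace("http://www.daml.org/2001/03/daml+oil#")
WINE = Namespace("http://example.org/wine#")  # hypothetical namespace

g = Graph()
g.bind("daml", DAML)
g.bind("wine", WINE)

# Declare Wine and RedWine as classes, with RedWine a subclass of Wine.
g.add((WINE.Wine, RDF.type, DAML.Class))
g.add((WINE.RedWine, RDF.type, DAML.Class))
g.add((WINE.RedWine, RDFS.subClassOf, WINE.Wine))

print(g.serialize(format="xml"))  # emit the RDF/XML markup
```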


Journal ArticleDOI
TL;DR: The authors introduce their character-based interactive storytelling prototype that uses hierarchical task network planning techniques, which support story generation and any-time user intervention.
Abstract: Interactive storytelling is a privileged application of intelligent virtual actor technology. The authors introduce their character-based interactive storytelling prototype that uses hierarchical task network planning techniques, which support story generation and anytime user intervention.

341 citations
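For readers unfamiliar with HTN planning, here is a minimal, hedged sketch of the core decomposition step in Python; the story tasks and methods are invented, and the prototype's actual planner (which supports interleaved execution and anytime user intervention) is far richer.

```python
# Minimal HTN-style decomposition sketch (illustrative only).
# Methods map a compound task to ordered lists of subtasks;
# anything without a method is treated as a primitive action.

methods = {  # hypothetical story tasks
    "acquire_affection": [["offer_gift", "invite_out"]],
    "offer_gift": [["buy_gift", "give_gift"]],
}

def decompose(task):
    """Depth-first expansion of a task into primitive actions."""
    if task not in methods:
        return [task]  # primitive action
    plan = []
    for subtask in methods[task][0]:  # take the first applicable method
        plan.extend(decompose(subtask))
    return plan

print(decompose("acquire_affection"))
# ['buy_gift', 'give_gift', 'invite_out']
```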


Journal ArticleDOI
TL;DR: Agent-based manufacturing scheduling systems are a promising way to optimize the allocation of limited manufacturing resources over time among parallel and sequential manufacturing activities.
Abstract: Manufacturing scheduling is an optimization process that allocates limited manufacturing resources over time among parallel and sequential manufacturing activities. This allocation must obey a set of rules or constraints that reflect the temporal relationships between manufacturing activities and the capacity limitations of a set of shared resources. The allocation also affects a schedule's optimality with respect to criteria such as cost, lateness, or throughput. The globalization of manufacturing makes such optimization increasingly important. To survive in this competitive market, manufacturing enterprises must increase their productivity and profitability through greater shop floor agility. Agent-based manufacturing scheduling systems are a promising way to provide this optimization.

281 citations
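The abstract stays at the level of motivation; as one hedged example of how agents can negotiate such allocations (a contract-net-style auction, not necessarily the authors' mechanism), the sketch below has machine agents bid their earliest completion time for each job.

```python
# Contract-net-style job allocation sketch (illustrative only; the
# paper does not commit to this exact mechanism).

jobs = [("J1", 4), ("J2", 2), ("J3", 3)]  # (job id, processing time)
machines = {"M1": 0, "M2": 0}             # machine id -> time when free

schedule = []
for job_id, duration in jobs:
    # Each machine agent "bids" its completion time for the job.
    bids = {m: free_at + duration for m, free_at in machines.items()}
    winner = min(bids, key=bids.get)      # award the job to the best bid
    schedule.append((job_id, winner, machines[winner], bids[winner]))
    machines[winner] = bids[winner]       # the winner is busy until then

for job_id, machine, start, end in schedule:
    print(f"{job_id} on {machine}: {start} -> {end}")
```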


Journal ArticleDOI
TL;DR: A Behavior Language (ABL) is a reactive planning language, based on the Oz Project language Hap, designed specifically for authoring believable agents: characters that express rich personality and that, in this case, play roles in an interactive story called Facade.
Abstract: A Behavior Language (ABL) is a reactive planning language, based on the Oz Project language Hap, designed specifically for authoring believable agents: characters that express rich personality and that, in this case, play roles in an interactive story called Facade.

239 citations


Journal ArticleDOI
TL;DR: The Mission Rehearsal Exercise project involves an ambitious integration of core technologies centered on a common representation of task knowledge to enrich interactive virtual worlds.
Abstract: Virtual humans - autonomous agents that support face-to-face interaction in a variety of roles - can enrich interactive virtual worlds. Toward that end, the Mission Rehearsal Exercise project involves an ambitious integration of core technologies centered on a common representation of task knowledge.

235 citations


Journal ArticleDOI
TL;DR: This article proposes a simulation-based DCF scheme, well suited to applications of ubiquitous and mobile computing, that lets rational agents form coalitions in dynamic environments.
Abstract: Dynamic coalition formation (DCF) promises to be well suited for applications of ubiquitous and mobile computing. This article proposes a simulation-based DCF scheme designed to let rational agents form coalitions in dynamic environments.

218 citations
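As a hedged, toy-scale illustration of coalition formation (the paper's simulation-based scheme is more sophisticated and operates in dynamic environments), the sketch below enumerates small coalitions under an invented value function and picks the one with the best payoff per member.

```python
# Greedy coalition-formation sketch (illustrative; the value function
# and agent set are invented, and real DCF re-forms coalitions as the
# environment changes).
from itertools import combinations

agents = ["a1", "a2", "a3", "a4"]

def value(coalition):
    """Hypothetical superadditive coalition value."""
    return len(coalition) ** 2 - len(coalition)

def best_coalition(agents, max_size=3):
    """Enumerate candidate coalitions; pick the best value per member."""
    best, best_score = None, float("-inf")
    for size in range(2, max_size + 1):
        for c in combinations(agents, size):
            score = value(c) / len(c)  # payoff per member
            if score > best_score:
                best, best_score = c, score
    return best, best_score

print(best_coalition(agents))  # (('a1', 'a2', 'a3'), 2.0)
```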


Journal ArticleDOI
TL;DR: The tragedy of September 11, 2001, at the World Trade Center is likely to propel search-and-rescue robotics into its next stage, just as the Kobe earthquake and the Oklahoma City bombing were the catalysts for this research domain.
Abstract: The tragedy of September 11, 2001, at the World Trade Center is likely to propel search-and-rescue robotics into its next stage, just as the Kobe earthquake and the Oklahoma City bombing were the catalysts for this research domain. Tragedy hasn't been the only motivator for urban search-and-rescue advancements in the USA and Japan; international competition has motivated both countries, first with RoboCup Soccer and more recently with RoboCup Rescue. We may see inexpensive urban search-and-rescue robots mass-produced within five years if advances in hardware and software keep up.

196 citations


Journal ArticleDOI
TL;DR: The authors discuss travel recommender systems; adaptive, context-aware mobility support for tourists; tourism information systems; information delivery; and travel-planning information-gathering agents.
Abstract: The authors discuss travel recommender systems; adaptive, context-aware mobility support for tourists; tourism information systems; information delivery; and travel-planning information-gathering agents.

180 citations


Journal ArticleDOI
TL;DR: The MusicBrainz project is a large database of music metadata, and even though it's only in beta testing right now, it already contains over 300,000 tracks, providing what some have termed the "cornucopia of the commons."
Abstract: Music has always caught the public's imagination. From dreams of a giant "jukebox in the sky" over the Information Superhighway to the debate about Napster, music has always been the "killer app" used to describe new technologies. Of course, these dreams have never quite come about as planned. Instead of a smart machine seeking out music tuned to my tastes, I still have only a small number of choices on my radio dial. And ever since Napster started filtering, sharing music on the Internet has become increasingly difficult. One thing that underlies these ideas is their dependency on metadata, or data about data. Metadata provides information about artists, song titles, and so on. All that information is attached to the music, but isn't part of it. The music world suffers from a lack of standardization in terms of metadata formats, as well as a paucity of public metadata. The MusicBrainz project hopes to change this situation. It's a large database of music metadata, and even though it's only in beta testing right now, it already contains over 300,000 tracks. MusicBrainz information is all user-contributed, providing what some have termed the "cornucopia of the commons." Unlike many situations, where each user decreases the value of the shared space (the so-called "tragedy of the commons"), the easy duplication of electronic information creates a situation where each user makes the system more valuable.

Journal ArticleDOI
TL;DR: This article presents a methodology for designing human-centered computing systems, illustrated with electronic medical record (EMR) systems; the authors successfully applied the methodology in designing a prototype of a human-centered intelligent flight-surgeon console at NASA Johnson Space Center.
Abstract: Many computer systems are designed according to engineering and technology principles and are typically difficult to learn and use. The fields of human-computer interaction, interface design, and human factors have made significant contributions to ease of use and are primarily concerned with the interfaces between systems and users, not with the structures that are often more fundamental for designing truly human-centered systems. The emerging paradigm of human-centered computing (HCC), which has taken many forms, offers a new look at system design. HCC requires more than merely designing an artificial agent to supplement a human agent. The dynamic interactions in a distributed system composed of human and artificial agents, and the context in which the system is situated, are indispensable factors. This article presents a methodology for designing human-centered computing systems, illustrated with electronic medical record (EMR) systems; we have successfully applied the methodology in designing a prototype of a human-centered intelligent flight-surgeon console at NASA Johnson Space Center.

Journal ArticleDOI
TL;DR: This work focuses on blocking pornography because it is among the most prolific and harmful Web content, but the general framework is adaptable for filtering other objectionable Web material.
Abstract: With the proliferation of harmful Internet content such as pornography, violence, and hate messages, effective content-filtering systems are essential. Many Web-filtering systems are commercially available, and potential users can download trial versions from the Internet. However, the techniques these systems use are insufficiently accurate and do not adapt well to the ever-changing Web. To solve this problem, we propose using artificial neural networks to classify Web pages during content filtering. We focus on blocking pornography because it is among the most prolific and harmful Web content. However, our general framework is adaptable for filtering other objectionable Web material.
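The abstract doesn't specify the network architecture; as a hedged sketch of the general approach, the Python below trains a single-layer network (logistic regression) on bag-of-words features to separate pages to block from pages to allow. The vocabulary and training snippets are invented, and the paper's networks and features are certainly richer.

```python
# Minimal bag-of-words, single-layer network for two-class page
# filtering (illustrative only).
import numpy as np

vocab = ["casino", "explicit", "news", "weather", "research"]  # toy vocabulary

def featurize(text):
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

# Invented training snippets: label 1 = block, 0 = allow.
data = [
    ("explicit casino casino", 1),
    ("explicit explicit", 1),
    ("weather news research", 0),
    ("research news", 0),
]

X = np.array([featurize(t) for t, _ in data])
y = np.array([label for _, label in data], dtype=float)

w = np.zeros(len(vocab))
b = 0.0
for _ in range(100):                        # simple gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid output
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * float(np.mean(p - y))

print(featurize("explicit casino tonight") @ w + b > 0)  # True -> block
```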

Journal ArticleDOI
TL;DR: This article describes the system's rules, called frames, used to detect and analyze the interaction networks reported in the molecular biology literature.
Abstract: SUISEKI, an information extraction system, uses morphological, syntactical, and contextual information to detect gene and protein names and interactions in scientific texts. This article describes the system's rules (called frames) used to detect and analyze interaction networks described in the molecular biology literature.
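As a loose, hedged illustration of frame-style extraction (SUISEKI's real frames combine morphological, syntactic, and contextual evidence), the sketch below matches a single interaction pattern with a regular expression; the pattern and the example sentence are invented.

```python
# Hedged sketch of frame-style pattern matching for protein
# interactions; a toy stand-in for SUISEKI's far richer frames.
import re

# Toy frame: "<Protein> (interacts|binds|associates) with <Protein>"
FRAME = re.compile(
    r"\b([A-Z][A-Za-z0-9-]+)\s+(?:interacts|binds|associates)\s+with\s+"
    r"([A-Z][A-Za-z0-9-]+)"
)

sentence = "Our results suggest that Rad51 interacts with Brca2 in vivo."
for a, b in FRAME.findall(sentence):
    print(f"interaction: {a} <-> {b}")
# interaction: Rad51 <-> Brca2
```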

Journal ArticleDOI
TL;DR: The article discusses ways to let semantics emerge from simple observations from the bottom up, rather than imposing concepts on the observations top down, to provide precise query, retrieval, communication, or translation for a wide variety of applications.
Abstract: The article discusses ways to let semantics emerge from simple observations from the bottom up, rather than imposing concepts on the observations top down, to provide precise query, retrieval, communication, or translation for a wide variety of applications. The following areas are examined: image retrieval and databases; media information spaces, including the Semantic Web and MPEG frameworks; language games for emergent semantics; and emergent semantics for ontologies.

Journal ArticleDOI
Yilin Zhao
TL;DR: The article shows how the concept of personal vehicles has changed in recent decades: the vehicle is becoming not only a safe and comfortable means of transportation but also a digital platform for entertainment and access to vast quantities of information while traveling.
Abstract: The author describes current and future wireless applications that are likely to become our companions in future journeys. This article shows how the concept of personal vehicles has changed in recent decades. Our vehicles will become not only a safe and comfortable means of transportation but also a digital platform for entertainment and access to a vast quantity of information while traveling.

Journal ArticleDOI
TL;DR: The integration and use of technology, the distribution and collocation of people, organizational roles and procedures, and the facilities where the work occurs largely determine the evolution of work systems and work practice.
Abstract: Work systems involve people engaging in activities over time-not just with each other, but also with machines, tools, documents, and other artifacts. These activities often produce goods, services, or-as is the case in the work system described in this article-scientific data. Work systems and work practice evolve slowly over time. The integration and use of technology, the distribution and collocation of people, organizational roles and procedures, and the facilities where the work occurs largely determine this evolution.

Journal ArticleDOI
TL;DR: The described sensor validation algorithm monitors a process's active alarms, alerting operators and support personnel to potential false alarms.
Abstract: The described sensor validation algorithm monitors a process's active alarms, alerting operators and support personnel to potential false alarms.
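The summary gives no algorithmic detail, so the following is only a hedged sketch of one generic sensor-validation idea: cross-checking an alarming sensor against redundant readings and flagging disagreement as a potential false alarm. The thresholds, sensor names, and redundancy map are all invented.

```python
# Hedged sensor-validation sketch: cross-check an alarming sensor
# against redundant readings; disagreement suggests a false alarm.
# Thresholds and the redundancy map are invented for illustration.

ALARM_LIMIT = 100.0
DISAGREEMENT = 15.0  # max tolerated gap vs. redundant sensors

redundant = {"T1": ["T2", "T3"]}  # T1 is backed by T2 and T3
readings = {"T1": 130.0, "T2": 72.0, "T3": 75.0}

def validate_alarm(sensor):
    """Return 'confirmed' or 'suspect false alarm' for an active alarm."""
    peers = redundant.get(sensor, [])
    if not peers:
        return "confirmed"  # nothing to cross-check against
    peer_avg = sum(readings[p] for p in peers) / len(peers)
    if abs(readings[sensor] - peer_avg) > DISAGREEMENT:
        return "suspect false alarm"
    return "confirmed"

if readings["T1"] > ALARM_LIMIT:
    print("T1 alarm:", validate_alarm("T1"))  # suspect false alarm
```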

Journal ArticleDOI
TL;DR: The author discusses cognitive computers with the ability to reason, learn, and respond intelligently to things they have never encountered before.
Abstract: The author discusses cognitive computers with the ability to reason, learn, and respond intelligently to things they have never encountered before. A truly cognitive system would be able to learn from its experience, as well as by being instructed, and perform better on day two than it did on day one. It would be able to explain what it was doing and why it was doing it. The author also considers application foundations.

Journal ArticleDOI
TL;DR: In this article, the authors describe their ITtalks system and discuss how Semantic Web concepts and DAML+OIL extend its ability to provide an intelligent online service, which will improve the automated gathering and processing of information and help integrate multiagent systems with the existing information infrastructure.
Abstract: Semantic Web markup languages will improve the automated gathering and processing of information and help integrate multiagent systems with the existing information infrastructure. The authors describe their ITtalks system and discuss how Semantic Web concepts and DAML+OIL extend its ability to provide an intelligent online service.

Journal ArticleDOI
Kenji Yamanishi, Hang Li
TL;DR: This work has developed a survey analysis system that mines open answers through two statistical learning techniques: rule learning (which the authors call rule analysis) and correspondence analysis.
Abstract: Surveys are important tools for marketing and for managing customer relationships; the answers to open-ended questions, in particular, often contain valuable information and provide an important basis for business decisions. The summaries that human analysts make of these open answers, however, tend to rely too much on intuition and so aren't satisfactorily reliable. Moreover, because the Web makes it so easy to take surveys and solicit comments, companies are finding themselves inundated with data from questionnaires and other sources. Handling it all manually would be not only cumbersome but also costly. Thus, devising a computer system that can automatically mine useful information from open answers has become an important issue. We have developed a survey analysis system that works on these principles. The system mines open answers through two statistical learning techniques: rule learning (which we call rule analysis) and correspondence analysis.
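The abstract names the two techniques without detail; as a hedged illustration of the second, the sketch below runs textbook correspondence analysis (via SVD) on an invented term-by-segment contingency table. The paper's rule-analysis component is not shown.

```python
# Textbook correspondence analysis on a toy contingency table
# (rows: answer terms, columns: respondent segments). Data invented;
# the paper's system also applies rule learning, omitted here.
import numpy as np

N = np.array([[20, 5, 2],
              [3, 18, 4],
              [2, 6, 15]], dtype=float)

P = N / N.sum()                      # correspondence matrix
r = P.sum(axis=1)                    # row masses
c = P.sum(axis=0)                    # column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals

U, sv, Vt = np.linalg.svd(S, full_matrices=False)
row_coords = (U * sv) / np.sqrt(r)[:, None]  # principal row coordinates

print("inertia explained by axis 1: %.2f" % (sv[0] ** 2 / (sv ** 2).sum()))
print(np.round(row_coords[:, :2], 2))        # plot-ready 2-D coordinates
```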

Journal ArticleDOI
TL;DR: Qualitative spatial reasoning techniques can help overcome this challenge by providing more expressive spatial representations, better communication of intent, better pathfinding, and reusable strategy libraries.
Abstract: Spatial reasoning is a major challenge for strategy-game artificial intelligence systems. Qualitative spatial reasoning techniques can help overcome this challenge by providing more expressive spatial representations, better communication of intent, better pathfinding, and reusable strategy libraries.
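As a hedged, toy-scale illustration of qualitative spatial representation (real calculi such as RCC are richer, and the paper's representations are game-specific), the sketch below reduces metric bounding boxes to a handful of qualitative relations; the region names and coordinates are invented.

```python
# Hedged sketch of qualitative spatial relations between map regions,
# using axis-aligned boxes; real QSR calculi are far richer.

def relation(a, b):
    """Qualitative relation between boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    if ax2 < bx1 or bx2 < ax1 or ay2 < by1 or by2 < ay1:
        return "disconnected"
    if ax1 >= bx1 and ay1 >= by1 and ax2 <= bx2 and ay2 <= by2:
        return "inside"
    if bx1 >= ax1 and by1 >= ay1 and bx2 <= ax2 and by2 <= ay2:
        return "contains"
    return "overlaps"

base = (0, 0, 10, 10)       # hypothetical home-base region
choke = (8, 4, 14, 6)       # hypothetical choke-point region
print(relation(base, choke))          # overlaps
print(relation((2, 2, 4, 4), base))   # inside
```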

Journal ArticleDOI
TL;DR: The Web-based personalization system proposed here uses both collaborative filtering and Web usage mining to give online shoppers the personalized recommendations they need to purchase products more intelligently.
Abstract: The Web-based personalization system proposed here uses both collaborative filtering and Web usage mining to give online shoppers the personalized recommendations they need to purchase products more intelligently.
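The abstract doesn't give algorithmic detail; the sketch below is a hedged example of the collaborative-filtering half only: user-based filtering with cosine similarity over an invented ratings matrix. The Web-usage-mining half of the system is omitted.

```python
# User-based collaborative filtering sketch with cosine similarity.
# Ratings are invented; the paper's system also mines Web usage logs,
# which this sketch omits.
import numpy as np

# rows: users, cols: products; 0 = not rated
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

def recommend(user, k=2):
    norms = np.linalg.norm(R, axis=1)
    sims = R @ R[user] / (norms * norms[user])   # cosine similarity
    sims[user] = -1.0                            # exclude the user
    neighbors = np.argsort(sims)[-k:]            # k most similar users
    scores = sims[neighbors] @ R[neighbors]      # similarity-weighted ratings
    scores[R[user] > 0] = -np.inf                # skip already-rated items
    return int(np.argmax(scores))

print("recommend item", recommend(0))  # item 2 for user 0
```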

Journal ArticleDOI
TL;DR: The E-Cell project aims to develop the theories, techniques, and software platforms necessary for whole-cell-scale modeling, simulation, and analysis.
Abstract: Molecular biology's advent in the 20th century has exponentially increased our knowledge about the inner workings of life. We have dozens of completed genomes and an array of high-throughput methods to characterize gene encodings and gene product operation. The question now is how we will assemble the various pieces. In other words, given sufficient information about a living cell's molecular components, can we predict its behavior? We introduce the major classes of cellular processes relevant to modeling, discuss software engineering's role in cell simulation, and identify cell simulation requirements. Our E-Cell project aims to develop the theories, techniques, and software platforms necessary for whole-cell-scale modeling, simulation, and analysis. Since the project's launch in 1996, we have built a variety of cell models, and we are currently developing new models that vary with respect to species, target subsystem, and overall scale.
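As a hedged, single-reaction illustration of deterministic cell-process simulation (whole-cell models such as E-Cell couple many such processes and use more careful integration schemes), the sketch below Euler-integrates a Michaelis-Menten rate law with invented constants.

```python
# Hedged sketch of deterministic cell-process simulation: Euler
# integration of Michaelis-Menten kinetics for one toy reaction.
# Constants are invented; E-Cell models couple many such processes.

VMAX, KM = 1.0, 0.5        # hypothetical rate constants
dt, steps = 0.01, 500

s, p = 2.0, 0.0            # substrate and product concentrations
for _ in range(steps):
    rate = VMAX * s / (KM + s)   # Michaelis-Menten rate law
    s -= rate * dt
    p += rate * dt

print(f"after {steps * dt:.1f} s: substrate={s:.3f}, product={p:.3f}")
```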

Journal ArticleDOI
TL;DR: The emerging symbiosis of knowledge about users' desires, preferences, and habits with information garnered from the Semantic Web results in agent-based assistance superior to that provided by existing agents.
Abstract: The ability to author pages in HTML quickly captured public interest, and the Web evolved at a phenomenal rate into a vast knowledge base. During this growth, however, Web pages' information structure, which is crucial to making the information machine understandable, was sacrificed in favor of presentation and physical design. However, this lack of structure has not deterred agents from using information on the Web. Many agent systems track users' navigation habits as they click throughout the Web so they can suggest new, potentially interesting Web pages; others automate access to information sites, replicating human-oriented queries and parsing the resulting pages. Agents have also harvested taxonomies that guide human navigation toward clustered topics of interest. In each case, though, these agents have had to use bespoke parsers (code written specifically to parse a given Web page, such as Yahoo's restaurant information) and content scrapers (languages for defining rules used to extract information from and manipulate Web pages) to pull out the text before using keyword recognition or natural language recognition techniques to understand the content [1]. The emergence of the Semantic Web [2] has simplified and improved knowledge reuse on the Web. Agents can parse Semantic Web pages to elicit relevant information. They can now understand and reason about information and use it to meet users' needs and provide assistance through reference to ontologies, axioms, and languages such as the DARPA Agent Markup Language [3]. The emerging symbiosis of knowledge about users' desires, preferences, and habits with information garnered from the Semantic Web results in agent-based assistance superior to that provided by existing agents. In Technology Review's "A Smarter Web" [4], Tim Berners-Lee describes a travel scenario in which a user instructs an agent to organize a trip to a conference. To achieve this, the agent requires knowledge about the user's flight preferences, current schedule, and details such as credit card information. The agent obtains the conference's dates and locations from the semantic markup at the conference Web site. After booking a flight, the agent stores the resulting itinerary in the user's calendar and cancels or reschedules pending meetings while the user is away. It highlights conference presentations and sessions that might interest the user and sets reminders to notify him or her (through PDA or phone, for example) of an impending talk. RCAL: To address many of these tasks, we've developed the RETSINA (Reusable Environment for Task-Structured Intelligent …

Journal ArticleDOI
TL;DR: Bayesian network methods are useful for elucidating genetic regulatory networks because they can represent more than pairwise relationships between variables, are resistant to overfitting, and remain robust in the face of noisy data.
Abstract: Bayesian network methods are useful for elucidating genetic regulatory networks because they can represent more than pairwise relationships between variables, are resistant to overfitting, and remain robust in the face of noisy data.
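As a hedged, two-gene illustration of the structure-scoring idea behind such methods (not the specific techniques surveyed in the article), the sketch below compares the log-likelihood of an "independent" structure against an "A regulates B" structure on invented binary expression data.

```python
# Toy structure scoring for a two-gene Bayesian network on binary
# expression data (invented). Real gene-network learning handles many
# genes, priors, and overfitting control, which this sketch omits.
import numpy as np

# Columns: gene A, gene B; invented discretized expression samples.
data = np.array([[1, 1], [1, 1], [1, 0], [0, 0], [0, 0], [0, 0]])

def loglik_independent(d):
    """Log-likelihood with A and B modeled as independent."""
    ll = 0.0
    for col in range(2):
        p = d[:, col].mean()
        for v in d[:, col]:
            ll += np.log(p if v else 1 - p)
    return ll

def loglik_a_to_b(d):
    """Log-likelihood with edge A -> B (B conditioned on A)."""
    pa = d[:, 0].mean()
    ll = sum(np.log(pa if a else 1 - pa) for a in d[:, 0])
    for a_val in (0, 1):
        rows = d[d[:, 0] == a_val]
        p = (rows[:, 1].sum() + 1) / (len(rows) + 2)  # Laplace smoothing
        ll += sum(np.log(p if b else 1 - p) for b in rows[:, 1])
    return ll

print("independent:", round(loglik_independent(data), 2))
print("A -> B     :", round(loglik_a_to_b(data), 2))  # higher = better fit
```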


Journal ArticleDOI
TL;DR: In this paper, the authors use a biological classification scheme to organize the discussion of the new approaches to the design of complex sociotechnical systems, including cognitive systems engineering.
Abstract: In this article, we concern ourselves with characterizations of the "new" approaches to the design of complex sociotechnical systems, and we use a biological classification scheme to organize the discussion. Until fairly recently, the design of complex sociotechnical systems was primarily known as "cognitive engineering" or "cognitive systems engineering" (CSE), a term introduced to denote an emerging branch of applied cognitive psychology. A number of new terms have since emerged, all of which might be considered members of the genus "human-centered computing" (HCC). A number of varieties have entered the fray, resulting in an "acronym soup" of terms that have been offered to designate "the" new approach to cognitive engineering. Using the rose metaphor, and taking some liberties with Latin, this article is organized around a set of "genuses" into which the individual "varieties" seem to fall.

Journal ArticleDOI
TL;DR: The author considers McCarthy's conception of Lisp and discusses McCarthy's recent research that involves elaboration tolerance, creativity by machines, free will of machines, and some improved ways of doing situation calculus.
Abstract: If John McCarthy, the father of AI, were to coin a new phrase for "artificial intelligence" today, he would probably use "computational intelligence." McCarthy is not just the father of AI, he is also the inventor of the Lisp (list processing) language. The author considers McCarthy's conception of Lisp and discusses McCarthy's recent research that involves elaboration tolerance, creativity by machines, free will of machines, and some improved ways of doing situation calculus.

Journal ArticleDOI
TL;DR: The Coalition Agents Experiment aims to show that multi-agent systems offer effective tools for dealing with complex real-world problems by enabling agile and robust coalition operations and interoperability between heterogeneous military systems.
Abstract: The Coalition Agents Experiment aims to show that multi-agent systems offer effective tools for dealing with complex real-world problems by enabling agile and robust coalition operations and interoperability between heterogeneous military systems.