scispace - formally typeset
Author

Yun Peng

Bio: Yun Peng is an academic researcher from the University of Maryland, Baltimore County. The author has contributed to research on topics including Ontology (information science) and the Semantic Web. The author has an h-index of 35 and has co-authored 134 publications receiving 6194 citations. Previous affiliations of Yun Peng include Academia Sinica and the University of Maryland, Baltimore.


Papers
Proceedings ArticleDOI
13 Nov 2004
TL;DR: Swoogle is a crawler-based indexing and retrieval system for the Semantic Web that extracts metadata for each discovered document, and computes relations between documents.
Abstract: Swoogle is a crawler-based indexing and retrieval system for the Semantic Web. It extracts metadata for each discovered document, and computes relations between documents. Discovered documents are also indexed by an information retrieval system which can use either character N-Gram or URIrefs as keywords to find relevant documents and to compute the similarity among a set of documents. One of the interesting properties we compute is ontology rank, a measure of the importance of a Semantic Web document.
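The character n-gram indexing mentioned in the abstract can be illustrated with a small sketch. This is not Swoogle's actual code; it shows the general technique of comparing documents by cosine similarity over character n-gram frequency vectors, with all function names invented for illustration:

```python
from collections import Counter
from math import sqrt

def char_ngrams(text, n=3):
    """Split a document's text into overlapping character n-grams."""
    text = text.lower()
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def cosine_similarity(doc_a, doc_b, n=3):
    """Cosine similarity between two documents' n-gram frequency vectors."""
    a, b = Counter(char_ngrams(doc_a, n)), Counter(char_ngrams(doc_b, n))
    dot = sum(a[g] * b[g] for g in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine_similarity("semantic web ontology", "ontology for the semantic web"))
```

The same vectors could be built over URIrefs instead of character n-grams, which is the other keyword scheme the abstract describes.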

926 citations

Journal ArticleDOI
TL;DR: Some concepts useful in discussing agent communication languages are introduced and then the two major ACLs are compared and evaluated.
Abstract: Despite the substantial number of multiagent systems that use an agent communication language, the dust has not yet settled over the ACL landscape. Although semantic specification issues have monopolized the debate, other important pragmatic issues must be resolved quickly if ACLs are to support the development of robust agent systems. We introduce some concepts useful in discussing agent communication languages and then compare and evaluate the two major ACLs.

508 citations

Book
26 Jun 1990
TL;DR: This paper presents a meta-model for parallel processing for Diagnostic Problem-Solving using the probabilistic Causal Model and a parallel processing model based on the Parsimonious Covering Theory.
Abstract: Contents: Abduction and Diagnostic Inference.- Computational Models for Diagnostic Problem Solving.- Basics of Parsimonious Covering Theory.- Probabilistic Causal Model.- Diagnostic Strategies in the Probabilistic Causal Model.- Causal Chaining.- Parallel Processing for Diagnostic Problem-Solving.- Conclusion.- Bibliography.- Index.

461 citations

Proceedings ArticleDOI
05 Jan 2004
TL;DR: This work proposes to combine OWL, the de facto industry-standard ontology language recommended by the W3C, with Bayesian networks (BN), a widely used graphical model for knowledge representation under uncertainty, to support uncertain ontology representation, reasoning, and mapping.
Abstract: To support uncertain ontology representation and ontology reasoning and mapping, we propose to combine OWL, the de facto industry-standard ontology language recommended by the W3C, with Bayesian networks (BN), a widely used graphical model for knowledge representation under uncertainty. First, OWL is augmented to allow additional probabilistic markups, so probabilities can be attached to individual concepts and properties in an OWL ontology. Second, a set of translation rules is defined to convert this probabilistically annotated OWL ontology into the directed acyclic graph (DAG) of a BN. Finally, the BN is completed by constructing conditional probability tables (CPT) for each node in the DAG. Our probabilistic extension to OWL is consistent with OWL semantics, and the translated BN is associated with a joint probability distribution over the application domain. General Bayesian network inference procedures (e.g., belief propagation or junction tree) can be used to compute P(C|e): the degree of the overlap or inclusion between a concept C and a concept represented by a description e. We also provide a similarity measure that can be used to find the most similar concept that a given description belongs to.
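The translation pipeline described above can be sketched on a toy taxonomy. This is not the paper's BayesOWL implementation; it is a minimal, hand-built example in which each concept becomes a Boolean BN node, a subclass can only hold when its superclass holds, and P(C|e) is computed by brute-force enumeration. The concept names and all CPT numbers are invented:

```python
from itertools import product

# Hypothetical toy taxonomy: Mammal and Bird are subclasses of Animal.
# cpt[node] maps a tuple of parent truth values to P(node=True | parents).
parents = {"Animal": [], "Mammal": ["Animal"], "Bird": ["Animal"]}
cpt = {
    "Animal": {(): 0.5},
    "Mammal": {(True,): 0.4, (False,): 0.0},  # subclass implies superclass
    "Bird":   {(True,): 0.3, (False,): 0.0},
}

def joint(assign):
    """Probability of a full truth assignment via the chain rule over the DAG."""
    p = 1.0
    for node, pars in parents.items():
        pt = cpt[node][tuple(assign[q] for q in pars)]
        p *= pt if assign[node] else 1.0 - pt
    return p

def query(target, evidence):
    """P(target=True | evidence) by enumerating all consistent assignments."""
    nodes = list(parents)
    num = den = 0.0
    for vals in product([True, False], repeat=len(nodes)):
        assign = dict(zip(nodes, vals))
        if any(assign[k] != v for k, v in evidence.items()):
            continue
        p = joint(assign)
        den += p
        if assign[target]:
            num += p
    return num / den

print(query("Mammal", {"Animal": True}))  # → 0.4
```

Real BayesOWL-style systems replace the enumeration with standard BN inference (belief propagation or junction tree), which scales to larger ontologies.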

262 citations

Journal ArticleDOI
01 Mar 1987
TL;DR: It is shown that the causal relationships in a general diagnostic domain can be used to remove the barriers to applying Bayesian classification effectively and provides insight into which notions of "parsimony" may be relevant in a given application area.
Abstract: The issue of how to effectively integrate and use symbolic causal knowledge with numeric estimates of probabilities in abductive diagnostic expert systems is examined. In particular, a formal probabilistic causal model that integrates Bayesian classification with a domain-independent artificial intelligence model of diagnostic problem solving (parsimonious covering theory) is developed. Through a careful analysis, it is shown that the causal relationships in a general diagnostic domain can be used to remove the barriers to applying Bayesian classification effectively (large number of probabilities required as part of the knowledge base, certain unrealistic independence assumptions, the explosion of diagnostic hypotheses that occurs when multiple disorders can occur simultaneously, etc.). Further, this analysis provides insight into which notions of "parsimony" may be relevant in a given application area. In a companion paper, Part Two, a computationally efficient diagnostic strategy based on the probabilistic causal model discussed in this paper is developed.
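The combination of causal knowledge and probabilities described in the abstract can be sketched with a two-layer causal network scored by a noisy-OR likelihood. This is not the paper's algorithm; the disorders, manifestations, priors, and causal strengths below are all invented, and the exhaustive search over disorder sets stands in for the efficient strategy developed in Part Two:

```python
from itertools import combinations

# Hypothetical two-layer causal network: disorder priors and the causal
# strength causes[d][m] = P(d causes m | d present). Numbers are invented.
prior = {"flu": 0.10, "cold": 0.20}
causes = {"flu": {"fever": 0.9, "cough": 0.6},
          "cold": {"cough": 0.7, "sneeze": 0.8}}
observed = {"fever", "cough"}

def likelihood(hypothesis):
    """Relative posterior of a disorder set under a noisy-OR causal model."""
    score = 1.0
    for d in prior:  # prior term for each disorder's presence/absence
        score *= prior[d] if d in hypothesis else 1.0 - prior[d]
    all_manifest = {m for ms in causes.values() for m in ms}
    for m in all_manifest:  # noisy-OR: m absent only if no present disorder causes it
        p_absent = 1.0
        for d in hypothesis:
            p_absent *= 1.0 - causes[d].get(m, 0.0)
        score *= (1.0 - p_absent) if m in observed else p_absent
    return score

disorders = list(prior)
covers = [set(c) for r in range(len(disorders) + 1)
          for c in combinations(disorders, r)]
best = max(covers, key=likelihood)
print(best)  # → {'flu'}
```

Note how the empty hypothesis scores zero here because it fails to cover the observed fever — the "covering" constraint of parsimonious covering theory falls out of the noisy-OR term rather than being imposed separately.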

257 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up to date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
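The mail-filter example above can be made concrete with a tiny naive Bayes classifier, one common way to learn filtering rules from a user's accept/reject decisions. The training messages and word-level model below are invented for illustration, not taken from the article:

```python
from collections import Counter
from math import log

# Hypothetical training data: messages a user rejected vs. kept.
rejected = ["win money now", "cheap money offer", "win a prize now"]
kept = ["project meeting agenda", "lunch meeting today", "agenda for today"]

def train(docs):
    """Word counts and total token count for one class."""
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

rc, rt = train(rejected)
kc, kt = train(kept)
vocab = set(rc) | set(kc)

def score(msg, counts, total):
    """Log-probability of msg under a class, with add-one smoothing."""
    s = log(0.5)  # equal class priors
    for w in msg.split():
        s += log((counts[w] + 1) / (total + len(vocab)))
    return s

def is_spam(msg):
    return score(msg, rc, rt) > score(msg, kc, kt)

print(is_spam("win money"))        # → True
print(is_spam("meeting agenda"))   # → False
```

As the abstract notes, the point is that the filter maintains itself: retraining on each new accept/reject decision updates the counts, with no hand-written rules to keep current.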

13,246 citations

Journal ArticleDOI
TL;DR: The authors describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked data community as it moves forward.
Abstract: The term “Linked Data” refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the last three years, leading to the creation of a global data space containing billions of assertions— the Web of Data. In this article, the authors present the concept and technical principles of Linked Data, and situate these within the broader context of related technological developments. They describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked Data community as it moves forward.

5,113 citations

01 Jan 2000
TL;DR: This article briefly reviews the basic concepts of cognitive radio (CR), underlines the need for software-defined radios, and surveys the most important notions used for them.
Abstract: Joseph Mitola III, "Cognitive Radio: An Integrated Agent Architecture for Software Defined Radio," Ph.D. dissertation, Stockholm. The cognitive radio is built on a software-defined radio, which has been called the heart of a cognitive radio; a rapid-prototype cognitive radio, CR1, was developed to apply these ideas, and the work develops a set-theoretic ontology of radio knowledge.

3,814 citations

Book
01 Nov 2001
TL;DR: A multi-agent system (MAS) as discussed by the authors is a distributed computing system with autonomous interacting intelligent agents that coordinate their actions so as to achieve their goal(s) jointly or competitively.
Abstract: From the Publisher: An agent is an entity with domain knowledge, goals, and actions. A multi-agent system is a set of agents that interact in a common environment. Multi-agent systems deal with the construction of complex systems involving multiple agents and their coordination. A multi-agent system (MAS) is a distributed computing system with autonomous interacting intelligent agents that coordinate their actions so as to achieve their goal(s) jointly or competitively.

3,003 citations

Book
05 Jun 2007
TL;DR: The second edition of Ontology Matching has been thoroughly revised and updated to reflect the most recent advances in this quickly developing area, which resulted in more than 150 pages of new content.
Abstract: Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, and artificial intelligence. The second edition of Ontology Matching has been thoroughly revised and updated to reflect the most recent advances in this quickly developing area, which resulted in more than 150 pages of new content. In particular, the book includes a new chapter dedicated to the methodology for performing ontology matching. It also covers emerging topics, such as data interlinking, ontology partitioning and pruning, context-based matching, matcher tuning, alignment debugging, and user involvement in matching, to mention a few. More than 100 state-of-the-art matching systems and frameworks were reviewed. With Ontology Matching, researchers and practitioners will find a reference book that presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can be equally applied to database schema matching, catalog integration, XML schema matching, and other related problems.
The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a systematic and detailed account of matching techniques and matching systems from theoretical, practical and application perspectives.
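A minimal name-based matcher gives a feel for the simplest family of techniques the book surveys. This sketch proposes equivalence correspondences between two hypothetical ontologies' class names using string similarity; real matchers combine many such basic techniques with structural and semantic ones:

```python
from difflib import SequenceMatcher

# Two hypothetical ontologies' class names (invented for illustration).
onto_a = ["Person", "Publication", "Organisation"]
onto_b = ["Human", "Paper", "Organization", "Event"]

def match(a_entities, b_entities, threshold=0.8):
    """Propose equivalence correspondences whose name similarity clears a threshold."""
    alignment = []
    for a in a_entities:
        # Best candidate in the other ontology by edit-based string similarity.
        best = max(b_entities,
                   key=lambda b: SequenceMatcher(None, a.lower(), b.lower()).ratio())
        sim = SequenceMatcher(None, a.lower(), best.lower()).ratio()
        if sim >= threshold:
            alignment.append((a, best, round(sim, 2)))
    return alignment

print(match(onto_a, onto_b))
```

Note what the string matcher misses: `Person`/`Human` are semantically equivalent but lexically dissimilar, which is exactly why the book covers background-knowledge-based and structural techniques alongside terminological ones.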

2,579 citations