Author
Marco Ruzzi
Other affiliations: Marche Polytechnic University
Bio: Marco Ruzzi is an academic researcher from Sapienza University of Rome. The author has contributed to research on topics including ontology (information science) and description logic, has an h-index of 20, and has co-authored 55 publications receiving 1,550 citations. Previous affiliations of Marco Ruzzi include Marche Polytechnic University.
[Chart: papers published on a yearly basis]
Papers
TL;DR: MASTRO is a Java tool for ontology-based data access (OBDA) developed at Sapienza Università di Roma and at the Free University of Bozen-Bolzano that provides optimized algorithms for answering expressive queries, as well as features for intensional reasoning and consistency checking.
Abstract: In this paper we present MASTRO, a Java tool for ontology-based data access (OBDA) developed at Sapienza Università di Roma and at the Free University of Bozen-Bolzano. MASTRO manages OBDA systems in which the ontology is specified in DL-Lite_{A,id}, a logic of the DL-Lite family of tractable Description Logics specifically tailored to ontology-based data access, and is connected to external JDBC-enabled data management systems through semantic mappings that associate SQL queries over the external data to the elements of the ontology. Advanced forms of integrity constraints, which have turned out to be very useful in practical applications, are also enabled over the ontologies. Optimized algorithms for answering expressive queries are provided, as well as features for intensional reasoning and consistency checking. MASTRO provides a proprietary API, an OWL API-compatible interface, and a plugin for the Protégé 4 ontology editor. It has been successfully used in several projects carried out in collaboration with important organizations, on which we briefly comment in this paper.
282 citations
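The core OBDA idea in MASTRO, semantic mappings that pair ontology elements with SQL queries over the sources so that ontology-level queries can be unfolded into SQL, can be illustrated with a minimal Python sketch. The table, concept names, and mapping structure below are hypothetical; they are not MASTRO's actual mapping language or API.

    import sqlite3

    # Hypothetical source database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE emp (id INTEGER, dept TEXT)")
    conn.execute("INSERT INTO emp VALUES (1, 'R&D'), (2, 'Sales')")

    # Semantic mapping: ontology concept -> SQL query producing its instances.
    mappings = {
        "Employee":   "SELECT id FROM emp",
        "Researcher": "SELECT id FROM emp WHERE dept = 'R&D'",
    }

    def answer_concept_query(concept):
        """Unfold an atomic concept query into SQL and evaluate it at the source."""
        return [row[0] for row in conn.execute(mappings[concept])]

    print(answer_concept_query("Researcher"))  # -> [1]

In a real OBDA system the ontology's axioms are also compiled into the query (rewriting) before unfolding; the sketch skips that step.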
22 Sep 2010
TL;DR: It is shown that, if the notion of repair studied in databases is used, inconsistency-tolerant query answering is intractable, even for the simplest form of queries.
Abstract: We address the problem of dealing with inconsistencies in Description Logic (DL) knowledge bases. Our general goal is both to study DL semantic frameworks that are inconsistency-tolerant, and to devise techniques for answering unions of conjunctive queries posed to DL knowledge bases under such inconsistency-tolerant semantics. Our work is inspired by the approaches to consistent query answering in databases, which are based on the idea of living with inconsistencies in the database while trying to obtain only consistent information during query answering, by relying on the notion of database repair. We show that, if we use the notion of repair studied in databases, inconsistency-tolerant query answering is intractable, even for the simplest form of queries. Therefore, we study different variants of the repair-based semantics, with the goal of reaching a good compromise between the expressive power of the semantics and the computational complexity of inconsistency-tolerant query answering.
226 citations
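The repair notion the paper starts from can be made concrete with a brute-force sketch: repairs are the maximal subsets of the data that are consistent with the constraints, and a consistent answer is one that holds in every repair. The facts and the disjointness constraint below are invented for illustration; the exponential enumeration is precisely why the paper reports intractability and studies cheaper variants.

    from itertools import combinations

    # Facts: (predicate, individual). The constraint below says Student and
    # Professor are disjoint, so the data about 'bob' is inconsistent.
    facts = {("Student", "ann"), ("Student", "bob"), ("Professor", "bob")}

    def consistent(s):
        return not any(("Student", x) in s and ("Professor", x) in s
                       for (_, x) in s)

    def repairs(facts):
        """All maximal consistent subsets (exponentially many in the worst case)."""
        out = []
        for k in range(len(facts), -1, -1):
            for subset in map(set, combinations(facts, k)):
                if consistent(subset) and not any(subset < r for r in out):
                    out.append(subset)
        return out

    # Consistent answers to q(x) <- Student(x): those true in every repair.
    rs = repairs(facts)
    answers = set.intersection(*({x for (p, x) in r if p == "Student"} for r in rs))
    print(answers)  # -> {'ann'}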
TL;DR: This new, ASA status-based model is simple to use and can be performed routinely in the operating room to predict operative risk for both elective and emergency surgery.
Abstract: Background. Although the POSSUM (Physiological and Operative Severity Score for the enumeration of Mortality and Morbidity) score can be used to calculate operative risk, its complexity makes its use unfeasible in the immediate clinical setting. The aim of this study was to create a new model, based on ASA status, to predict mortality. Methods. Data were collected in two hospitals. All types of surgery were included except for cardiac surgery and Caesarean delivery. Age, sex and preoperative information, including the presence of cardiocirculatory and/or lung disease, renal failure, diabetes mellitus, hepatic disease, cancer, Glasgow Coma Score, ASA grade, surgical diagnosis, severity of the procedure and type of surgery (elective, urgent or emergency), were recorded for each patient. The model was developed using a data set incorporating data from 1936 surgical patients, and validated using data from a further 1849 patients. Forward stepwise logistic regression was used to build the model. Goodness of fit was examined using the Hosmer–Lemeshow test, and receiver operating characteristic (ROC) curve analyses were performed on both data sets to test calibration and discrimination. In the validation data set, the new model was compared with POSSUM and P-POSSUM for both calibration and discrimination, and with ASA alone to compare discrimination. Results. The following variables were included in the new model: ASA status, age, type of surgery (elective, urgent, emergency) and degree of surgery (minor, moderate or major). Calibration and discrimination of the new model were good in both the development and validation data sets. The new model was better calibrated in the validation data set (Hosmer–Lemeshow goodness-of-fit test: χ² = 6.8017, P = 0.7440) than either P-POSSUM (χ² = 14.4643, P = 0.1528) or POSSUM, which was not calibrated (χ² = 31.8147, P = 0.0004). POSSUM and P-POSSUM had better discrimination than the new model, although the difference was not statistically significant. Comparing the two ROC curves, the new model had better discrimination than ASA alone (difference between areas 0.077, SE 0.034, 95% confidence interval 0.012–0.143, P = 0.021). Conclusions. This new, ASA status-based model is simple to use and can be performed routinely in the operating room to predict operative risk for both elective and emergency surgery.
182 citations
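The modeling recipe the abstract describes, a logistic regression over ASA grade, age, urgency and severity evaluated by ROC analysis, can be sketched as follows. All data here is synthetic and the coefficients are invented; this illustrates only the method, not the paper's fitted model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 2000

    # Synthetic stand-ins for the paper's predictors (fabricated for
    # illustration): ASA grade 1-5, age, type of surgery (0=elective,
    # 1=urgent, 2=emergency), degree of surgery (0=minor, 1=moderate, 2=major).
    X = np.column_stack([
        rng.integers(1, 6, n),      # ASA grade
        rng.integers(18, 95, n),    # age
        rng.integers(0, 3, n),      # type of surgery
        rng.integers(0, 3, n),      # degree of surgery
    ])
    # Synthetic mortality outcome loosely driven by the predictors.
    logit = -9 + 1.2 * X[:, 0] + 0.03 * X[:, 1] + 0.8 * X[:, 2] + 0.5 * X[:, 3]
    y = rng.random(n) < 1 / (1 + np.exp(-logit))

    model = LogisticRegression(max_iter=1000).fit(X, y)
    print("ROC AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))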
14 Jun 2005
TL;DR: The task of an information integration system is to combine data residing at different sources and provide the user with a unified view of them, called the global schema; the system suitably queries the sources, providing an answer to the user, who is not obliged to have any information about the sources.
Abstract: The task of an information integration system is to combine data residing at different sources, providing the user with a unified view of them, called the global schema. Users formulate queries over the global schema, and the system suitably queries the sources, providing an answer to the user, who is not obliged to have any information about the sources. Recent developments in IT, such as the expansion of the Internet and the World Wide Web, have made available to users a huge number of information sources, generally autonomous, heterogeneous and widely distributed: as a consequence, information integration has emerged as a crucial issue in many application domains, e.g., distributed databases, cooperative information systems, data warehousing, or on-demand computing. Recent estimates put information integration at a $10 billion market by 2006 [14].
125 citations
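A global-as-view style of integration, one common way to realize the global schema the abstract describes, can be sketched in a few lines: each global relation is defined as a view over the sources, and user queries never mention the sources. The source names and layouts below are hypothetical.

    # Global relation person(name, city) defined as a union over two
    # autonomous sources with different layouts.
    source_a = [("ann", "Rome"), ("bob", "Ancona")]             # (name, city) tuples
    source_b = [{"fullName": "carla", "location": "Milan"}]     # JSON-like records

    def global_person():
        """Materialize the unified view; users only ever see this relation."""
        yield from source_a
        yield from ((r["fullName"], r["location"]) for r in source_b)

    # A user query over the global schema: people located in Rome.
    print([name for (name, city) in global_person() if city == "Rome"])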
TL;DR: This work proposes a different repair-based semantics, and shows that query answering under the new semantics is first-order rewritable in OBDA, even if the ontology is expressed in one of the most expressive members of the DL-Lite family.
98 citations
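The flavor of first-order rewritability can be shown schematically; the example below illustrates the general idea, not the paper's exact construction. With a disjointness axiom stating that Student and Professor are disjoint, the query q(x) <- Student(x) can be answered under an intersection-of-repairs style semantics by evaluating a plain first-order rewriting directly over the data, with no repair enumeration at all:

    % Schematic rewriting (illustrative only)
    \[
      q'(x) \;=\; \mathit{Student}(x) \,\wedge\, \neg\,\mathit{Professor}(x)
    \]

Because the rewriting is an ordinary first-order query, it can be translated to SQL and delegated to the source database, which is what makes the semantics practical for OBDA.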
Cited by
05 Jun 2007
TL;DR: The second edition of Ontology Matching has been thoroughly revised and updated to reflect the most recent advances in this quickly developing area, which resulted in more than 150 pages of new content.
Abstract: Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, and artificial intelligence. The second edition of Ontology Matching has been thoroughly revised and updated to reflect the most recent advances in this quickly developing area, which resulted in more than 150 pages of new content. In particular, the book includes a new chapter dedicated to the methodology for performing ontology matching. It also covers emerging topics, such as data interlinking, ontology partitioning and pruning, context-based matching, matcher tuning, alignment debugging, and user involvement in matching, to mention a few. More than 100 state-of-the-art matching systems and frameworks were reviewed. With Ontology Matching, researchers and practitioners will find a reference book that presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can be equally applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a systematic and detailed account of matching techniques and matching systems from theoretical, practical and application perspectives.
2,579 citations
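One of the simplest technique families the book surveys, terminological (label-based) matching, fits in a few lines of Python. The two entity lists and the 0.8 similarity threshold below are illustrative choices, not anything prescribed by the book.

    from difflib import SequenceMatcher

    # Toy terminological matcher: propose equivalence correspondences between
    # entities of two ontologies when their labels are sufficiently similar.
    onto1 = ["Person", "Article", "Organization"]
    onto2 = ["Human", "Paper", "Organisation", "Person"]

    def matches(o1, o2, threshold=0.8):
        for a in o1:
            for b in o2:
                sim = SequenceMatcher(None, a.lower(), b.lower()).ratio()
                if sim >= threshold:
                    yield (a, b, round(sim, 2))

    print(list(matches(onto1, onto2)))
    # -> [('Person', 'Person', 1.0), ('Organization', 'Organisation', 0.92)]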
TL;DR: This article places data fusion into the greater context of data integration, precisely defines the goals of data fusion, namely, complete, concise, and consistent data, and highlights the challenges of data fusion, namely, uncertain and conflicting data values.
Abstract: The development of the Internet in recent years has made it possible and useful to access many different information systems anywhere in the world to obtain information. While there is much research on the integration of heterogeneous information systems, most commercial systems stop short of the actual integration of available data. Data fusion is the process of fusing multiple records representing the same real-world object into a single, consistent, and clean representation. This article places data fusion into the greater context of data integration, precisely defines the goals of data fusion, namely, complete, concise, and consistent data, and highlights the challenges of data fusion, namely, uncertain and conflicting data values. We give an overview and classification of different ways of fusing data and present several techniques based on standard and advanced operators of the relational algebra and SQL. Finally, the article features a comprehensive survey of data integration systems from academia and industry, showing if and how data fusion is performed in each.
1,797 citations
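The per-attribute conflict resolution at the heart of data fusion can be sketched with a toy example: several records for the same real-world object are fused by taking, for each attribute, the most frequent non-null value. The records and the resolution policy below are illustrative; the article surveys many alternative policies.

    from collections import Counter

    # Three records describing the same person, with conflicts and nulls.
    records = [
        {"name": "J. Smith",   "city": "Rome", "phone": None},
        {"name": "J. Smith",   "city": "Rome", "phone": "555-0101"},
        {"name": "John Smith", "city": None,   "phone": "555-0101"},
    ]

    def fuse(records):
        """Fuse into one record: per attribute, most frequent non-null value."""
        fused = {}
        for attr in {k for r in records for k in r}:
            values = [r[attr] for r in records if r.get(attr) is not None]
            if values:
                fused[attr] = Counter(values).most_common(1)[0][0]
        return fused

    print(fuse(records))
    # -> name 'J. Smith', city 'Rome', phone '555-0101' (key order may vary)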
TL;DR: This article carries out a thorough and systematic investigation of inference in extensions of the original DL-Lite logics along five axes, by adding the Boolean connectives and number restrictions to concept constructs and adopting or dropping the unique name assumption.
Abstract: The recently introduced series of description logics under the common moniker 'DL-Lite' has attracted the attention of the description logic and semantic web communities due to the low computational complexity of inference, on the one hand, and the ability to represent conceptual modeling formalisms, on the other. The main aim of this article is to carry out a thorough and systematic investigation of inference in extensions of the original DL-Lite logics along five axes: by (i) adding the Boolean connectives and (ii) number restrictions to concept constructs, (iii) allowing role hierarchies, (iv) allowing role disjointness, symmetry, asymmetry, reflexivity, irreflexivity and transitivity constraints, and (v) adopting or dropping the unique name assumption. We analyze the combined complexity of satisfiability for the resulting logics, as well as the data complexity of instance checking and answering positive existential queries. Our approach is based on embedding DL-Lite logics in suitable fragments of the one-variable first-order logic, which provides useful insights into their properties and, in particular, computational behavior.
592 citations
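The embedding idea underlying the article's approach can be shown schematically; the actual translation, covering number restrictions, inverse roles, and role constraints, is developed in the article itself. A basic concept inclusion is turned into a one-variable sentence by replacing the existential with a fresh unary surrogate:

    % Schematic illustration of the one-variable embedding
    \[
      A \sqsubseteq \exists R
      \quad\rightsquigarrow\quad
      \forall x \,\bigl( A(x) \rightarrow E_R(x) \bigr)
    \]

where E_R is a fresh unary predicate standing for "x has an R-successor"; number restrictions get analogous unary surrogates, which is what keeps the whole TBox inside a one-variable fragment of first-order logic.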
19 Dec 2012
TL;DR: This book presents a practical introduction to ASP, aimed at using ASP languages and systems for solving application problems, and introduces ASP's solving technology, modeling language and methodology.
Abstract: Answer Set Programming (ASP) is a declarative problem solving approach, initially tailored to modeling problems in the area of Knowledge Representation and Reasoning (KRR). More recently, its attractive combination of a rich yet simple modeling language with high-performance solving capacities has sparked interest in many other areas even beyond KRR. This book presents a practical introduction to ASP, aiming at using ASP languages and systems for solving application problems. Starting from the essential formal foundations, it introduces ASP's solving technology, modeling language and methodology, while illustrating the overall solving process by practical examples. Table of Contents: List of Figures / List of Tables / Motivation / Introduction / Basic modeling / Grounding / Characterizations / Solving / Systems / Advanced modeling / Conclusions
503 citations
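A small taste of the modeling style the book teaches: the ASP program below 3-colors a toy graph with a choice rule and an integrity constraint, run through clingo's Python API. This assumes the clingo package is installed (pip install clingo); the graph and program are illustrative.

    import clingo

    program = """
    node(1..3). edge(1,2). edge(2,3).
    color(red; green; blue).
    { assign(N, C) : color(C) } = 1 :- node(N).   % guess exactly one color per node
    :- edge(X, Y), assign(X, C), assign(Y, C).    % adjacent nodes differ
    #show assign/2.
    """

    ctl = clingo.Control(["1"])          # ask for one answer set
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    ctl.solve(on_model=lambda m: print("Answer set:", m))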
01 Apr 2017
TL;DR: This introduction presents the main motivations for the development of Description Logics as a formalism for representing knowledge, as well as some important basic notions underlying all systems that have been created in the DL tradition.
Abstract: This introduction presents the main motivations for the development of Description Logics (DLs) as a formalism for representing knowledge, as well as some important basic notions underlying all systems that have been created in the DL tradition. In addition, we provide the reader with an overview of the entire book and some guidelines for reading it. We first address the relationship between Description Logics and earlier semantic network and frame systems, which represent the original heritage of the field. We delve into some of the key problems encountered with the older efforts. Subsequently, we introduce the basic features of DL languages and related reasoning techniques. DL languages are then viewed as the core of knowledge representation systems, considering both the structure of a DL knowledge base and its associated reasoning services. The development of some implemented knowledge representation systems based on Description Logics and the first applications built with such systems are then reviewed. Finally, we address the relationship of Description Logics to other fields of Computer Science. We also discuss some extensions of the basic representation language machinery; these include features proposed for incorporation in the formalism that originally arose in implemented systems, and features proposed to cope with the needs of certain application domains.
470 citations