Author

Bernhard Ganter

Bio: Bernhard Ganter is an academic researcher at Dresden University of Technology. The author has contributed to research in topics including formal concept analysis and description logic. The author has an h-index of 29 and has co-authored 88 publications receiving 10,071 citations. Previous affiliations of Bernhard Ganter include Darmstadt University of Applied Sciences.


Papers
Book
04 Dec 1998
TL;DR: This is the first textbook on formal concept analysis that gives a systematic presentation of the mathematical foundations and their relation to applications in computer science, especially in data analysis and knowledge processing.
Abstract: From the Publisher: This is the first textbook on formal concept analysis. It gives a systematic presentation of the mathematical foundations and their relation to applications in computer science, especially in data analysis and knowledge processing. Above all, it presents graphical methods for representing conceptual systems that have proved themselves in communicating knowledge. Theory and graphical representation are thus closely coupled together. The mathematical foundations are treated thoroughly and illuminated by means of numerous examples.
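The core machinery the book builds on can be shown in a few lines: a formal context is an incidence relation between objects and attributes, and a formal concept is a pair (extent, intent) that the two derivation operators map onto each other. A minimal Python sketch follows; the toy context and all names are invented for illustration and do not come from the book.

    from itertools import combinations

    # A toy formal context: objects, attributes, and an incidence relation.
    objects = ["duck", "dog", "carp"]
    attributes = ["can_fly", "can_swim", "has_fur"]
    incidence = {
        ("duck", "can_fly"), ("duck", "can_swim"),
        ("dog", "can_swim"), ("dog", "has_fur"),
        ("carp", "can_swim"),
    }

    def common_attributes(objs):
        """Derivation operator A -> A': attributes shared by every object in A."""
        return {m for m in attributes if all((g, m) in incidence for g in objs)}

    def common_objects(attrs):
        """Derivation operator B -> B': objects that have every attribute in B."""
        return {g for g in objects if all((g, m) in incidence for m in attrs)}

    def formal_concepts():
        """Enumerate all formal concepts (extent, intent) by brute force."""
        seen = set()
        for r in range(len(objects) + 1):
            for objs in combinations(objects, r):
                intent = common_attributes(objs)
                extent = common_objects(intent)
                if frozenset(extent) not in seen:
                    seen.add(frozenset(extent))
                    yield sorted(extent), sorted(intent)

    for extent, intent in formal_concepts():
        print(extent, intent)
    # Ordering these pairs by inclusion of their extents gives the concept
    # lattice that the book's line diagrams visualize.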

4,757 citations

Journal ArticleDOI
TL;DR: FCA explicitly formalises the extension and intension of a concept, their mutual relationship, and the fact that a larger intent corresponds to a smaller extent and vice versa; this makes it possible to derive a concept hierarchy from a given dataset.

2,029 citations

Book ChapterDOI
30 Jul 2001
TL;DR: It is shown how concepts, implications, hypotheses, and classifications in projected pattern structures are related to those in the original ones.
Abstract: Pattern structures consist of objects with descriptions (called patterns) that allow a semilattice operation on them. Pattern structures arise naturally from ordered data, e.g., from labeled graphs ordered by graph morphisms. It is shown that pattern structures can be reduced to formal contexts; however, processing the former is often more efficient and more natural than processing the latter. Concepts, implications, plausible hypotheses, and classifications are defined for data given by pattern structures. Since computation in pattern structures may be intractable, approximations of patterns by means of projections are introduced. It is shown how concepts, implications, hypotheses, and classifications in projected pattern structures are related to those in the original ones.
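A small sketch of what a pattern structure looks like in code, using interval descriptions as the patterns; the interval setting, the data, and all names below are illustrative assumptions, not taken from the paper.

    from functools import reduce

    # Each object is described by a tuple of numeric intervals; the semilattice
    # (similarity) operation is the componentwise convex hull, i.e. the smallest
    # interval covering both. (Hypothetical example data.)
    descriptions = {
        "g1": ((1, 2), (5, 6)),
        "g2": ((2, 4), (5, 5)),
        "g3": ((1, 1), (7, 9)),
    }

    def meet(d1, d2):
        """Similarity operation: componentwise convex hull of two interval tuples."""
        return tuple((min(a1, a2), max(b1, b2))
                     for (a1, b1), (a2, b2) in zip(d1, d2))

    def subsumes(general, specific):
        """general is the more general pattern iff meet(general, specific) == general."""
        return meet(general, specific) == general

    def common_pattern(objs):
        """The most specific pattern shared by a set of objects (their meet)."""
        return reduce(meet, (descriptions[g] for g in objs))

    def extent(pattern):
        """All objects whose description the pattern subsumes."""
        return {g for g, d in descriptions.items() if subsumes(pattern, d)}

    def project(pattern, step=5):
        """A simple projection: coarsen interval bounds to multiples of `step`,
        trading precision for cheaper computation, as the abstract describes."""
        return tuple((a - a % step, b + (-b) % step) for a, b in pattern)

    p = common_pattern({"g1", "g2"})   # ((1, 4), (5, 6))
    print(p, extent(p))                # the pair (extent(p), p) is a pattern concept
    print(project(p))                  # ((0, 5), (5, 10))

Replacing every description by its projection before running the same computation yields the projected pattern structures that the abstract relates to the original ones.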

412 citations

Book
01 Jan 2005
TL;DR: This book presents formal concept analysis as a mathematical theory of concepts and concept hierarchies, together with applications such as software analysis and modelling and the ToscanaJ suite for implementing conceptual information systems.
Abstract: Foundations.- Formal Concept Analysis as Mathematical Theory of Concepts and Concept Hierarchies.- Semiconcept and Protoconcept Algebras: The Basic Theorems.- Features of Interaction Between Formal Concept Analysis and Algebraic Geometry.- From Formal Concept Analysis to Contextual Logic.- Contextual Attribute Logic of Many-Valued Attributes.- Treating Incomplete Knowledge in Formal Concept Analysis.- States, Transitions, and Life Tracks in Temporal Concept Analysis.- Applications.- Linguistic Applications of Formal Concept Analysis.- Using Concept Lattices for Text Retrieval and Mining.- Efficient Mining of Association Rules Based on Formal Concept Analysis.- Galois Connections in Data Analysis: Contributions from the Soviet Era and Modern Russian Research.- Conceptual Knowledge Processing in the Field of Economics.- Software Engineering.- A Survey of Formal Concept Analysis Support for Software Engineering Activities.- Concept Lattices in Software Analysis.- Formal Concept Analysis Used for Software Analysis and Modelling.- Formal Concept Analysis-Based Class Hierarchy Design in Object-Oriented Software Development.- The ToscanaJ Suite for Implementing Conceptual Information Systems.

340 citations

Book ChapterDOI
15 Mar 2010
TL;DR: Two algorithms for closure systems are described: one produces all closed sets of a given closure operator, and the other constructs a minimal family of implications for the "logic" of a closure system.
Abstract: We describe two algorithms for closure systems. The purpose of the first is to produce all closed sets of a given closure operator. The second constructs a minimal family of implications for the "logic" of a closure system. These algorithms are then applied to problems in concept analysis: determining all concepts of a given context and describing the dependencies between attributes. The problem of finding all concepts is equivalent, e.g., to finding all maximal complete bipartite subgraphs of a bipartite graph.
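The first of these algorithms is widely known as Next Closure: it enumerates the closed sets of an arbitrary closure operator in lectic order, needing only the operator as a black box. A minimal Python sketch under that reading; the tiny example context and all names are illustrative, not taken from the paper.

    def next_closure(A, M, closure):
        """Return the lectically next closed set after A, or None if A is the last.

        M is the base set as a list in a fixed linear order; closure maps any
        subset of M to its closure. (A sketch of the Next Closure idea; the
        variable names are our own.)"""
        pos = {m: i for i, m in enumerate(M)}
        A = set(A)
        for i in reversed(range(len(M))):
            m = M[i]
            if m in A:
                A.discard(m)                         # keep only the prefix of A below m
            else:
                B = closure(A | {m})
                if all(pos[x] >= i for x in B - A):  # the lectic condition A <_m B
                    return B
        return None

    def all_closed_sets(M, closure):
        """Enumerate every closed set, starting from the closure of the empty set."""
        A = closure(set())
        while A is not None:
            yield A
            A = next_closure(A, M, closure)

    # Example closure operator: B |-> B'' in a tiny formal context, so the closed
    # sets produced below are exactly the intents, i.e. one per formal concept.
    # (The context is invented for illustration.)
    context = {1: {"a", "b"}, 2: {"a", "c"}, 3: {"a"}}
    M = ["a", "b", "c"]

    def intent_closure(B):
        ext = [g for g, attrs in context.items() if set(B) <= attrs]
        return set.intersection(*(context[g] for g in ext)) if ext else set(M)

    for closed in all_closed_sets(M, intent_closure):
        print(sorted(closed))   # ['a'], ['a', 'c'], ['a', 'b'], ['a', 'b', 'c']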

294 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories.

First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules.

Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs.

Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.

Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically.

Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Book
17 Aug 1999
TL;DR: This book discusses knowledge representation, meaning, purpose, context, and agents in the context of ontology, as well as examples of knowledge acquisition and sharing.
Abstract: 1. Logic. 2. Ontology. 3. Knowledge Representation. 4. Processes. 5. Purposes, Contexts, And Agents. 6. Knowledge Soup. 7. Knowledge Acquisition And Sharing. Appendixes: Appendix A: Summary Of Notations Appendix B: Sample Ontology. Appendix C: Extended Example. Answers To Selected Exercises. Bibliography. Name Index. Subject Index. Special Symbols.

2,725 citations

Book
05 Jun 2007
TL;DR: The second edition of Ontology Matching has been thoroughly revised and updated to reflect the most recent advances in this quickly developing area, which resulted in more than 150 pages of new content.
Abstract: Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, and artificial intelligence. The second edition of Ontology Matching has been thoroughly revised and updated to reflect the most recent advances in this quickly developing area, which resulted in more than 150 pages of new content. In particular, the book includes a new chapter dedicated to the methodology for performing ontology matching. It also covers emerging topics, such as data interlinking, ontology partitioning and pruning, context-based matching, matcher tuning, alignment debugging, and user involvement in matching, to mention a few. More than 100 state-of-the-art matching systems and frameworks were reviewed. With Ontology Matching, researchers and practitioners will find a reference book that presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can be equally applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a systematic and detailed account of matching techniques and matching systems from theoretical, practical and application perspectives.
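As a toy illustration of what a correspondence is, and not a method from the book, the sketch below compares entity labels from two ontologies and emits equivalence candidates above a similarity threshold; the class names and the threshold are invented for the example.

    from difflib import SequenceMatcher

    # Two tiny ontologies given only as lists of class labels (hypothetical names).
    onto1 = ["Person", "Article", "Organization"]
    onto2 = ["Human", "Paper", "Organisation"]

    def similarity(a, b):
        """Normalized edit-style similarity between two labels."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def match(o1, o2, threshold=0.8):
        """Return candidate equivalence correspondences (e1, '=', e2, score).
        Real matchers also exploit structure, instances, and background knowledge."""
        alignment = []
        for e1 in o1:
            for e2 in o2:
                score = similarity(e1, e2)
                if score >= threshold:
                    alignment.append((e1, "=", e2, round(score, 2)))
        return alignment

    print(match(onto1, onto2))
    # Only 'Organization' = 'Organisation' clears the threshold; 'Person'/'Human'
    # shows why purely lexical matching is not enough on its own.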

2,579 citations