scispace - formally typeset
Author

Joachim Baumeister

Bio: Joachim Baumeister is an academic researcher from the University of Würzburg. The author has contributed to research in the topics of knowledge-based systems and knowledge engineering. The author has an h-index of 18 and has co-authored 115 publications receiving 1,121 citations.


Papers
Journal ArticleDOI
TL;DR: Sebastian Schaffert and his colleagues describe semantic wikis and explain how to model wiki knowledge and content for improved usability.
Abstract: Lean knowledge management is today implemented mostly through wikis, which let users enter text and other data, such as files, and connect the content through hyperlinks. Easy setup and a huge variety of editing support are primary reasons for wiki use in all types of intranet- and Internet-based information sharing (see P. Louridas, "Using Wikis in Software Development," IEEE Software, Mar. 2006, pp. 88- 91). The drawbacks show up when you need to structure data as opposed to just edit text. Many wikis have tons of useful content, but the volume and lack of structure make it inaccessible over time. This is where semantic wikis enter the picture. Sebastian Schaffert and his colleagues describe them here and explain how to model wiki knowledge and content for improved usability. I look forward to hearing from both readers and prospective authors about this column and the technologies you want to know more about.

121 citations

Journal ArticleDOI
TL;DR: A novel approach is presented that interprets Semantic Wikis as a knowledge engineering environment for effectively building decision-support systems, and the Semantic Wiki KnowWE is introduced, which makes it possible to define and maintain ontologies together with strong problem-solving knowledge.
Abstract: Recently, Semantic Wikis have shown reasonable success as collaboration platforms in the context of social semantic applications. In this paper, we present a novel approach that interprets the concept of Semantic Wikis as a knowledge engineering environment that effectively helps to build decision-support systems. We introduce the Semantic Wiki KnowWE, which provides the possibility to define and maintain ontologies together with strong problem-solving knowledge. Thus, the wiki can be used to collaboratively build decision-support systems. These enhancements require extensions of the standard Semantic Wiki architecture by a task ontology for problem-solving and an adapted reasoning process. We discuss these extensions in detail, and we describe a case study in the field of medical emergency systems.
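The core idea above — combining an ontology of findings and solutions with problem-solving knowledge so the wiki can derive decisions — can be sketched as a minimal rule-based derivation step. This is an illustration only: the findings, rules, and solutions below are invented, and KnowWE's actual markup and reasoning engine differ.

```python
# Minimal sketch of rule-based decision support: each rule pairs a
# condition set (attribute -> value) with a solution to derive.
# All rules and findings here are hypothetical examples.

def derive(findings, rules):
    """Return solutions whose conditions are all satisfied by the findings."""
    solutions = []
    for conditions, solution in rules:
        if all(findings.get(attr) == value for attr, value in conditions.items()):
            solutions.append(solution)
    return solutions

# Hypothetical medical-emergency rules in the spirit of the case study.
rules = [
    ({"breathing": "absent"}, "start CPR"),
    ({"breathing": "present", "consciousness": "absent"}, "recovery position"),
]

print(derive({"breathing": "present", "consciousness": "absent"}, rules))
```

In a Semantic Wiki setting, such rules would be authored collaboratively as wiki markup and compiled into the reasoner, rather than hard-coded as above.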

92 citations

Journal ArticleDOI
TL;DR: It is concluded that the fluorescence method which allows rapid measurements on intact leaves can provide a quantitative estimate of epidermal transmittance for UV-B (280–320 nm) and UV-A (320–400 nm) radiation.
Abstract: Leaves of Vicia faba were collected from the field and the greenhouse and transmittance of epidermal peels from adaxial and abaxial sides was determined in the wavelength range from 250 to 800 nm using a spectrophotometer equipped for the measurement of turbid samples. From the same leaves, epidermal transmittance was estimated by a recently developed fluorometric method. Both methods gave highly correlated results with a slope of the regression line between both methods close to 1 and an intercept close to 0. Transmittances at around 310 nm as low as 3% were detected in the adaxial epidermis of field-grown leaves, while transmittance could be as high as 70% in the abaxial epidermis of greenhouse-grown leaves. There was a strong correlation between UV-A (ca. 366 nm) and UV-B (ca. 310 nm) transmittance detected by both methods which could be explained by the pigment composition in methanolic extracts where flavonols accounted for 90% of the absorption at 310 nm in the extract, while hydroxycinnamic acid derivatives which absorb only at the shorter wavelength constituted about 5%. It is concluded that the fluorescence method which allows rapid measurements on intact leaves can provide a quantitative estimate of epidermal transmittance for UV-B (280–320 nm) and UV-A (320–400 nm) radiation.
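The method comparison described above boils down to fitting a regression line between paired transmittance estimates from the two methods and checking that the slope is close to 1 and the intercept close to 0. The sketch below shows that computation on synthetic paired values; the numbers are invented and do not come from the study.

```python
# Compare two measurement methods via ordinary least-squares regression.
# Synthetic paired transmittances (%) stand in for the real measurements.

def linear_fit(x, y):
    """OLS slope and intercept for paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

spectro = [3.0, 12.0, 25.0, 40.0, 70.0]   # spectrophotometer estimates
fluoro = [3.5, 11.0, 26.0, 41.0, 68.0]    # fluorometric estimates

slope, intercept = linear_fit(spectro, fluoro)
print(round(slope, 2), round(intercept, 2))  # slope near 1, intercept near 0
```

A slope near 1 with an intercept near 0, as reported in the abstract, indicates the two methods agree across the measured range.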

65 citations

Journal ArticleDOI
TL;DR: Neumann et al. developed an expert system (LIMPACT) to estimate the pesticide contamination of small streams using benthic macroinvertebrates as bioindicators.

34 citations

Proceedings Article
01 Jan 2006
TL;DR: Subgroup mining is used to discover local patterns that describe factors potentially causing incorrect behavior of the knowledge system; the approach is supplemented by introspective subgroup analysis techniques that help the user interpret the refinement recommendations proposed by the system.
Abstract: When knowledge systems are deployed into a real-world application, then the maintenance and the refinement of the knowledge are essential tasks. Many existing automatic knowledge refinement methods only provide limited control and clarification capabilities during the refinement process. Furthermore, often assumptions about the correctness of the knowledge base and the cases are made. However, such assumptions do not necessarily hold for real-world applications. In this paper, we present a novel interactive approach for the refinement of knowledge bases: Subgroup mining is used to discover local patterns that describe factors potentially causing incorrect behavior of the knowledge system. The approach is supplemented by introspective subgroup analysis techniques in order to help the user with the interpretation of the refinement recommendations proposed by the system.
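The subgroup-mining step described above can be sketched as scanning simple attribute-value conditions and scoring each by how far the error rate of the covered cases deviates from the overall error rate, weighted by coverage (a WRAcc-style quality function). The case data and attributes below are invented for illustration and are not from the paper.

```python
# Hedged sketch of subgroup discovery for knowledge refinement:
# find the condition under which the system's error rate deviates
# most from the overall error rate.

def subgroup_quality(cases, attr, value):
    """WRAcc-style score: coverage * (subgroup error rate - overall error rate)."""
    total = len(cases)
    overall = sum(c["error"] for c in cases) / total
    covered = [c for c in cases if c.get(attr) == value]
    if not covered:
        return 0.0
    rate = sum(c["error"] for c in covered) / len(covered)
    return (len(covered) / total) * (rate - overall)

# Invented consultation cases: attributes plus whether the system erred.
cases = [
    {"age": "old", "dept": "ER", "error": 1},
    {"age": "old", "dept": "ward", "error": 1},
    {"age": "old", "dept": "ward", "error": 0},
    {"age": "young", "dept": "ER", "error": 0},
    {"age": "young", "dept": "ER", "error": 0},
    {"age": "young", "dept": "ward", "error": 0},
]

candidates = [("age", "old"), ("age", "young"), ("dept", "ER"), ("dept", "ward")]
best = max(candidates, key=lambda av: subgroup_quality(cases, *av))
print(best)  # condition most associated with faulty behavior
```

In the interactive approach above, such a discovered subgroup would not be applied automatically; it would be presented to the user as a refinement recommendation, supported by introspective analysis.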

33 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories.

First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules.

Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs.

Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
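The mail-filter example in the fourth category can be sketched as a tiny word-count classifier: it learns per-word spam/ham counts from messages the user has labeled and scores new mail by which class's vocabulary it shares more of. The messages below are made up, and a real filter would use a proper probabilistic model such as naive Bayes with smoothing.

```python
# Toy personalized mail filter: learn word counts from user-labeled
# messages, then classify new mail by add-one-smoothed word overlap.

from collections import Counter

def train(labeled):
    """Count word occurrences separately for spam and ham messages."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in labeled:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the class whose learned vocabulary best covers the message."""
    def score(label):
        return sum(counts[label][w] + 1 for w in text.lower().split())
    return "spam" if score("spam") > score("ham") else "ham"

# Invented messages the user has already labeled.
labeled = [
    ("win a free prize now", "spam"),
    ("free offer claim prize", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch meeting tomorrow", "ham"),
]

model = train(labeled)
print(classify(model, "claim your free prize"))
```

As the abstract notes, the point is personalization: each user's model is maintained automatically from that user's own accept/reject decisions, with no per-user programming.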

13,246 citations

01 Jan 2002

9,314 citations

Book
29 Nov 2005

2,161 citations

Journal ArticleDOI
TL;DR: Interactive machine learning (iML) is defined as “algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human.”
Abstract: Machine learning (ML) is the fastest growing field in computer science, and health informatics is among the greatest challenges. The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. Most ML researchers concentrate on automatic machine learning (aML), where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from big data with many training sets. However, in the health domain, sometimes we are confronted with a small number of data sets or rare events, where aML approaches suffer from insufficient training samples. Here interactive machine learning (iML) may be of help, having its roots in reinforcement learning, preference learning, and active learning. The term iML is not yet well used, so we define it as “algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human.” This “human-in-the-loop” can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem, reduces greatly in complexity through the input and the assistance of a human agent involved in the learning phase.
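One concrete form of the "human-in-the-loop" idea above is uncertainty sampling from active learning: the algorithm asks the human to label only the points it is least sure about, so few interactions suffice. The sketch below uses an invented 1-D task and an oracle function standing in for the human expert; it is an illustration of the principle, not the paper's method.

```python
# Uncertainty-sampling sketch: a threshold classifier on 1-D data asks
# an "oracle" (stand-in for a human expert) to label only the unlabeled
# point closest to its current decision boundary.

def fit_threshold(labeled):
    """Boundary: midpoint between the largest known-0 and smallest known-1 point."""
    zeros = [x for x, y in labeled if y == 0]
    ones = [x for x, y in labeled if y == 1]
    return (max(zeros) + min(ones)) / 2

def most_uncertain(pool, threshold):
    """The unlabeled point the model is least sure about."""
    return min(pool, key=lambda x: abs(x - threshold))

oracle = lambda x: 1 if x >= 0.55 else 0        # hidden ground truth
pool = [0.1, 0.3, 0.45, 0.5, 0.6, 0.7, 0.9]
labeled = [(0.1, 0), (0.9, 1)]                  # two seed labels

for _ in range(3):                              # three interaction rounds
    t = fit_threshold(labeled)
    unlabeled = [p for p in pool if p not in [a for a, _ in labeled]]
    x = most_uncertain(unlabeled, t)
    labeled.append((x, oracle(x)))              # the "human" answers one query

print(round(fit_threshold(labeled), 3))
```

After only three queries the learned boundary sits close to the true cutoff of 0.55, whereas labeling the whole pool would have cost seven queries; this economy of expert interaction is the motivation for iML in data-scarce health settings.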

651 citations

Book ChapterDOI
TL;DR: In this chapter, it is shown that the evidence behind the chemistry-based models and the photophysically oriented models can be brought together to build a mechanism that conforms to all types of experimental data.
Abstract: Photoinhibition of Photosystem II (PSII) is the light-induced loss of PSII electron-transfer activity. Although photoinhibition has been studied for a long time, there is no consensus about its mechanism. On one hand, production of singlet oxygen (¹O₂) by PSII has promoted models in which this reactive oxygen species (ROS) is considered to act as the agent of photoinhibitory damage. These chemistry-based models have often not taken into account the photophysical features of photoinhibition, like light response and action spectrum. On the other hand, models that reproduce these basic photophysical features of the reaction have not considered the importance of data about ROS. In this chapter, it is shown that the evidence behind the chemistry-based models and the photophysically oriented models can be brought together to build a mechanism that conforms to all types of experimental data. A working hypothesis is proposed, starting with inhibition of the manganese complex by light. Inability of the manganese complex to reduce the primary donor promotes recombination between the oxidized primary donor and Q_A, the first stable quinone acceptor of PSII. ¹O₂ production due to this recombination may inhibit protein synthesis or spread the photoinhibitory damage to another PSII center. The production of ¹O₂ is transient because loss of activity of the oxygen-evolving complex induces an increase in the redox potential of Q_A, which lowers ¹O₂ production.

429 citations