Journal ISSN: 0933-1875

Künstliche Intelligenz 

Springer Science+Business Media
About: Künstliche Intelligenz is an academic journal published by Springer Science+Business Media. The journal publishes mainly in the areas of Robot and Description logic. It has the ISSN identifier 0933-1875. Over its lifetime, 603 publications have been published, receiving 7817 citations. The journal is also known as: Künstliche Intelligenz (Berlin. Springer. Print) and Künstliche Intelligenz (München. Oldenbourg).


Papers
Journal ArticleDOI
TL;DR: The dissertation presented in this article proposes Semantic 3D Object Models as a novel representation of the robot’s operating environment that satisfies these requirements and shows how these models can be automatically acquired from dense 3D range data.
Abstract: Environment models serve as important resources for an autonomous robot by providing it with the necessary task-relevant information about its habitat. Their use enables robots to perform their tasks more reliably, flexibly, and efficiently. As autonomous robotic platforms get more sophisticated manipulation capabilities, they also need more expressive and comprehensive environment models: for manipulation purposes their models have to include the objects present in the world, together with their position, form, and other aspects, as well as an interpretation of these objects with respect to the robot tasks. The dissertation presented in this article (Rusu, PhD thesis, 2009) proposes Semantic 3D Object Models as a novel representation of the robot’s operating environment that satisfies these requirements and shows how these models can be automatically acquired from dense 3D range data.

908 citations

Journal ArticleDOI
TL;DR: A brief introduction to basic concepts, methods, insights, current developments, and some applications of RC is given.
Abstract: Reservoir Computing (RC) is a paradigm for understanding and training Recurrent Neural Networks (RNNs) based on treating the recurrent part (the reservoir) differently from the readouts from it. It started ten years ago and is currently a prolific research area, giving important insights into RNNs, providing practical machine learning tools, and enabling computation with non-conventional hardware. Here we give a brief introduction to basic concepts, methods, insights, and current developments, and highlight some applications of RC.

347 citations
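The RC recipe summarized in the abstract, keeping the recurrent reservoir fixed and training only a linear readout, can be sketched as a minimal echo state network. The network sizes, the spectral-radius scaling, and the toy one-step-ahead sine task below are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: its weights are generated once and never trained.
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x)
    return np.array(states)

# Toy task: predict a sine wave a small step ahead.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)[:, None]
y = np.sin(t + 0.1)
X = run_reservoir(u)

# Discard an initial washout so the transient from the zero state is ignored.
washout = 100
X, y = X[washout:], y[washout:]

# The only trained part: a ridge-regression readout on the reservoir states.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```

The design point the abstract makes is visible here: all learning is a single linear solve over recorded states, which is what makes RC cheap to train compared with backpropagation through time.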

Journal ArticleDOI
TL;DR: This work proposes to enhance an implemented affect simulator called ALMA (A Layered Model of Affect) by learning the parametrization of the underlying OCC model through user studies, and presents a tool called EMIMOTO (EMotion Intensity MOdeling TOol) in conjunction with the ALMA simulation tool.
Abstract: While current virtual characters may look photorealistic, they often lack behavioral complexity. Emotion may be the key ingredient for creating behavioral variety, social adaptivity, and thus believability. While various models of emotion have been suggested, the concrete parametrization must often be designed by the implementer. We propose to enhance an implemented affect simulator called ALMA (A Layered Model of Affect) by learning the parametrization of the underlying OCC model through user studies. Users are asked to rate emotional intensity in a variety of described situations. We then use regression analysis to recreate these reactions in the OCC model. We present a tool called EMIMOTO (EMotion Intensity MOdeling TOol) that works in conjunction with the ALMA simulation tool. Our approach is a first step toward empirically parametrized emotion models that try to reflect user expectations.

193 citations
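The regression step the abstract describes, fitting emotion-model parameters to user intensity ratings, can be sketched with an ordinary least-squares fit. The appraisal features and ratings below are synthetic stand-ins, not ALMA or EMIMOTO data, and the linear form is an assumption for illustration:

```python
import numpy as np

# Each row describes a rated situation by two hypothetical appraisal
# variables (e.g. desirability and likelihood of an event); the target
# is the mean emotional-intensity rating collected from users.
situations = np.array([
    [0.9, 0.8],
    [0.5, 0.4],
    [0.2, 0.9],
    [0.7, 0.1],
])
ratings = np.array([0.85, 0.45, 0.35, 0.40])

# Least-squares fit of linear weights mapping appraisals to intensity;
# the fitted weights play the role of learned model parameters.
weights, *_ = np.linalg.lstsq(situations, ratings, rcond=None)
predicted = situations @ weights
print("weights:", weights)
print("predicted intensities:", predicted)
```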

Journal ArticleDOI
TL;DR: In this article, the authors introduce a System Causability Scale to measure the quality of explanations, which is based on the notion of causability (Holzinger et al., 2019) combined with concepts adapted from a widely accepted usability scale.
Abstract: Recent successes in Artificial Intelligence (AI) and Machine Learning (ML) allow problems to be solved automatically, without any human intervention. Autonomous approaches can be very convenient. However, in certain domains, e.g., the medical domain, it is necessary to enable a domain expert to understand why an algorithm came up with a certain result. Consequently, the field of Explainable AI (xAI) has rapidly gained interest worldwide in various domains, particularly in medicine. Explainable AI studies the transparency and traceability of opaque AI/ML, and a huge variety of methods already exists. For example, with layer-wise relevance propagation, the parts of the inputs to, and representations in, a neural network that caused a result can be highlighted. This is an important first step toward ensuring that end users, e.g., medical professionals, assume responsibility for decision making with AI/ML, and it is of interest to professionals and regulators. Interactive ML adds the component of human expertise to AI/ML processes by enabling users to re-enact and retrace AI/ML results, e.g., to check them for plausibility. This requires new human-AI interfaces for explainable AI. In order to build effective and efficient interactive human-AI interfaces, we have to deal with the question of how to evaluate the quality of explanations given by an explainable AI system. In this paper we introduce our System Causability Scale to measure the quality of explanations. It is based on our notion of Causability (Holzinger et al. in Wiley Interdiscip Rev Data Min Knowl Discov 9(4), 2019) combined with concepts adapted from a widely accepted usability scale.

168 citations
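A questionnaire-based scale of this kind is typically scored by aggregating Likert-item ratings. The helper below is only a sketch in that spirit: the number of items, the 1-5 rating range, and the normalization to [0, 1] are illustrative assumptions, not the published System Causability Scale scoring scheme:

```python
def causability_score(ratings):
    """Mean of 1-5 Likert ratings, rescaled to the interval [0, 1].

    Assumed scoring scheme for illustration; not the published instrument.
    """
    if not ratings or any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("expected non-empty ratings in the range 1-5")
    mean = sum(ratings) / len(ratings)
    return (mean - 1) / 4

# Ten illustrative item ratings from one hypothetical participant.
ratings = [5, 4, 4, 3, 5, 4, 2, 4, 5, 4]
print(causability_score(ratings))  # → 0.75
```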

Journal ArticleDOI
TL;DR: This work discusses two strategies towards making machine learning algorithms more autonomous: automated optimization of hyperparameters (including mechanisms for feature selection, preprocessing, model selection, etc.) and the development of algorithms with reduced sets of hyperparameters.
Abstract: The success of hand-crafted machine learning systems in many applications raises the question of how to make machine learning algorithms more autonomous, i.e., how to reduce the required expert input to a minimum. We discuss two strategies towards this goal: (1) automated optimization of hyperparameters (including mechanisms for feature selection, preprocessing, model selection, etc.) and (2) the development of algorithms with reduced sets of hyperparameters. Since many research directions (e.g., deep learning) show a tendency towards increasingly complex algorithms with more and more hyperparameters, the demand for both of these strategies continuously increases. We review recent hyperparameter optimization methods and discuss data-driven approaches that avoid the introduction of hyperparameters using unsupervised learning. We end by discussing how these complementary strategies can work hand in hand, representing a very promising approach towards autonomous machine learning.

144 citations
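The first strategy above, automated hyperparameter optimization, can be sketched in its simplest form as random search over a configuration space. The search space and the objective below are synthetic assumptions; `train_and_score` stands in for training and validating any real learning algorithm:

```python
import random

def train_and_score(lr, reg):
    """Synthetic validation objective with a known optimum at lr=0.1, reg=0.01."""
    return -((lr - 0.1) ** 2 + (reg - 0.01) ** 2)

def random_search(n_trials, seed=0):
    """Sample configurations at random and keep the best-scoring one."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {"lr": rng.uniform(0.001, 1.0), "reg": rng.uniform(0.0, 0.1)}
        score = train_and_score(**cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

best_cfg, best_score = random_search(200)
print(best_cfg, best_score)
```

In practice the methods the abstract reviews (e.g. model-based Bayesian optimization) replace the blind sampling loop with an informed one, but the interface, propose a configuration, evaluate it, keep the best, is the same.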

Performance
Metrics
No. of papers from the Journal in previous years
Year    Papers
2023    11
2022    31
2021    39
2020    61
2019    41
2018    39