
Showing papers in "Künstliche Intelligenz in 2012"


Journal ArticleDOI
TL;DR: A brief introduction to the basic concepts, methods, insights, current developments, and some applications of RC is given.
Abstract: Reservoir Computing (RC) is a paradigm of understanding and training Recurrent Neural Networks (RNNs) based on treating the recurrent part (the reservoir) differently than the readouts from it. It started ten years ago and is currently a prolific research area, giving important insights into RNNs, yielding practical machine learning tools, and enabling computation with non-conventional hardware. Here we give a brief introduction to the basic concepts, methods, insights, and current developments, and highlight some applications of RC.
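The core RC idea, that only the readout is trained while the recurrent reservoir stays fixed, can be sketched in a few lines. The following is an illustrative echo state network, not code from the paper; the reservoir size, the spectral radius of 0.9, and the ridge parameter are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                        # reservoir size (arbitrary)
u = np.sin(np.arange(300) * 0.1)               # input: a sine wave
target = np.sin((np.arange(300) + 1) * 0.1)    # one-step-ahead prediction target

W_in = rng.uniform(-0.5, 0.5, N)               # fixed random input weights
W = rng.uniform(-0.5, 0.5, (N, N))             # fixed random recurrent weights
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # scale spectral radius to 0.9

x = np.zeros(N)
states = []
for t in range(len(u)):
    x = np.tanh(W @ x + W_in * u[t])           # reservoir update (never trained)
    states.append(x.copy())
S = np.array(states[50:])                      # discard the initial washout
y = target[50:]

lam = 1e-6                                     # ridge regression: the only training step
W_out = np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ y)
mse = float(np.mean((S @ W_out - y) ** 2))
```

Because only `W_out` is learned, training reduces to a single linear least-squares solve, which is what makes RC cheap compared to full RNN training.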

347 citations


Journal ArticleDOI
TL;DR: A general method to determine the influence of social and mobility behavior over a specific geographical area in order to evaluate to what extent the current administrative borders represent the real basin of human movement.
Abstract: The availability of massive network and mobility data from diverse domains has fostered the analysis of human behavior and interactions. Broad, extensive, and multidisciplinary research has been devoted to the extraction of non-trivial knowledge from this novel form of data. We propose a general method to determine the influence of social and mobility behavior over a specific geographical area in order to evaluate to what extent the current administrative borders represent the real basin of human movement. We build a network representation of human movement starting with vehicle GPS tracks and extract relevant clusters, which are then mapped back onto the territory, finding a good match with the existing administrative borders. The novelty of our approach is the focus on a detailed spatial resolution: we map emerging borders in terms of individual municipalities, rather than macro-regional or national areas. We present a series of experiments to illustrate and evaluate the effectiveness of our approach.
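A minimal sketch of the pipeline described above, with invented trip counts standing in for GPS tracks and a simple weighted label propagation standing in for the paper's clustering method; municipality names and weights are hypothetical:

```python
from collections import defaultdict

# invented trip counts between municipalities (in the paper: vehicle GPS tracks)
trips = {("A", "B"): 50, ("A", "C"): 45, ("B", "C"): 40,
         ("D", "E"): 55, ("D", "F"): 42, ("E", "F"): 35,
         ("C", "D"): 3}                      # weak cross-border flow

graph = defaultdict(dict)
for (a, b), w in trips.items():              # undirected weighted movement network
    graph[a][b] = w
    graph[b][a] = w

labels = {n: n for n in graph}               # every node starts as its own cluster
for _ in range(10):                          # weighted label propagation
    for node in sorted(graph):
        score = defaultdict(float)
        for neighbour, w in graph[node].items():
            score[labels[neighbour]] += w
        labels[node] = max(score, key=lambda l: (score[l], l))

clusters = defaultdict(set)                  # emerging "borders": the final clusters
for node, label in labels.items():
    clusters[label].add(node)
```

Mapping each cluster back onto the territory of its member municipalities yields the emergent borders that the paper compares against administrative ones.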

78 citations


Journal ArticleDOI
TL;DR: This work presents an approach utilising Visual Analytics methods to explore and understand the temporal variation of spatial situations derived from episodic movement data by means of spatio-temporal aggregation.
Abstract: Continuing advances in modern data acquisition techniques result in rapidly growing amounts of geo-referenced data about moving objects and in the emergence of new data types. We define episodic movement data as a new complex data type to be considered in the research fields relevant to data analysis. In episodic movement data, position measurements may be separated by large time gaps, in which the positions of the moving objects are unknown and cannot be reliably reconstructed. Many of the existing methods for movement analysis are designed for data with fine temporal resolution and cannot be applied to discontinuous trajectories. We present an approach utilising Visual Analytics methods to explore and understand the temporal variation of spatial situations derived from episodic movement data by means of spatio-temporal aggregation. The situations are defined in terms of the presence of moving objects in different places and in terms of flows (collective movements) between the places. The approach, which combines interactive visual displays with clustering of the spatial situations, is presented by the example of a real dataset collected by Bluetooth sensors.
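The spatio-temporal aggregation step can be illustrated as follows; the records, place names, and one-hour bins are invented for the example, and the paper's interactive visual displays and clustering are of course not reproduced:

```python
from collections import Counter

# invented episodic records: (object_id, minutes since start, detected place)
records = [("o1", 5, "hall"), ("o2", 7, "hall"), ("o2", 61, "hall"),
           ("o1", 62, "cafe"), ("o3", 130, "cafe")]

def aggregate(records, bin_minutes=60):
    """Presence counts per (time bin, place) and flows between places,
    derived only from consecutive fixes of the same object."""
    presence, flows, last = Counter(), Counter(), {}
    for obj, t, place in sorted(records, key=lambda r: r[1]):
        presence[(t // bin_minutes, place)] += 1
        if obj in last and last[obj] != place:
            flows[(last[obj], place)] += 1   # a move occurred between the two fixes
        last[obj] = place
    return presence, flows

presence, flows = aggregate(records)
```

Note that the time gap between consecutive fixes can be large; the aggregation only asserts that a flow happened somewhere in the gap, which is exactly the caution episodic data demands.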

77 citations


Journal ArticleDOI
TL;DR: The reader is introduced to the basic concepts of deep learning, selected methods for incrementally learning a hierarchy of features from unlabeled inputs are discussed, and application examples from computer vision and speech recognition are presented.
Abstract: Hierarchical neural networks for object recognition have a long history. In recent years, novel methods for incrementally learning a hierarchy of features from unlabeled inputs were proposed as good starting point for supervised training. These deep learning methods—together with the advances of parallel computers—made it possible to successfully attack problems that were not practical before, in terms of depth and input size. In this article, we introduce the reader to the basic concepts of deep learning, discuss selected methods in detail, and present application examples from computer vision and speech recognition.
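As a toy illustration of learning features from unlabeled inputs, the sketch below trains a single linear autoencoder layer by gradient descent; in layer-wise pretraining, its hidden code would become the input of the next layer. The data and hyperparameters are invented, and this is not a method from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-1, 1, 50)
X = np.outer(t, [1.0, 2.0, -1.0, 0.5])     # 4-D inputs lying on a 1-D manifold

We = rng.normal(scale=0.1, size=(4, 1))    # encoder weights
Wd = rng.normal(scale=0.1, size=(1, 4))    # decoder weights

def mse():
    return float(np.mean((X @ We @ Wd - X) ** 2))

before = mse()
lr = 0.01
for _ in range(2000):                      # plain gradient descent, no labels used
    H = X @ We                             # hidden code: the learned feature
    R = H @ Wd - X                         # reconstruction residual
    Wd -= lr * H.T @ R / len(X)
    We -= lr * X.T @ (R @ Wd.T) / len(X)
after = mse()
```

After unsupervised pretraining of such layers, a supervised classifier is typically stacked on top and the whole network fine-tuned, which is the "good starting point for supervised training" the abstract refers to.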

37 citations


Journal ArticleDOI
TL;DR: The primary implementation frameworks that provide the core capabilities of Sol are introduced: the Luna Software Agent Framework, the VIA Cross-Layer Communications Substrate, and the KAoS Policy Services Framework, and it is shown how policy-governed agents can perform many of the tedious, high-tempo tasks of analysts and facilitate collaboration.
Abstract: In this article, we describe how we augment human perception and cognition through Sol, an agent-based framework for distributed sensemaking. We describe how our visualization approach, based on IHMC’s OZ flight display, has been leveraged and extended in our development of the Flow Capacitor, an analyst display for maintaining cyber situation awareness, and in the Parallel Coordinates 3D Observatory (PC3O or Observatory), a generalization of the Flow Capacitor that provides capabilities for developing and exploring lines of inquiry. We then introduce the primary implementation frameworks that provide the core capabilities of Sol: the Luna Software Agent Framework, the VIA Cross-Layer Communications Substrate, and the KAoS Policy Services Framework. We show how policy-governed agents can perform many of the tedious, high-tempo tasks of analysts and facilitate collaboration. Much of the power of Sol lies in the concept of coactive emergence, whereby a comprehension of complex situations is achieved through the collaboration of analysts and agents working in tandem. Not only can the approach embodied in Sol lead to a qualitative improvement in cyber situation awareness, but it is equally relevant to applications of distributed sensemaking for other kinds of complex high-tempo tasks.

29 citations


Journal ArticleDOI
TL;DR: Computational techniques that decompose complex tasks into simpler, verifiable steps to improve quality, and optimize work to return results in seconds are introduced.
Abstract: Crowd-powered systems combine computation with human intelligence, drawn from large groups of people connecting and coordinating online. These hybrid systems enable applications and experiences that neither crowds nor computation could support alone. Unfortunately, crowd work is error-prone and slow, making it difficult to incorporate crowds as first-order building blocks in software systems. I introduce computational techniques that decompose complex tasks into simpler, verifiable steps to improve quality, and that optimize work to return results in seconds. These techniques develop crowdsourcing as a platform that is reliable and responsive enough to be used in interactive systems. This thesis develops these ideas through a series of crowd-powered systems. The first, Soylent, is a word processor that uses paid micro-contributions to aid writing tasks such as text shortening and proofreading. Using Soylent is like having access to an entire editorial staff as you write. The second system, Adrenaline, is a camera that uses crowds to help amateur photographers capture the exact right moment for a photo. It finds the best smile and catches subjects in mid-air jumps, all in real time. Moving beyond generic knowledge and paid crowds, I introduce techniques to motivate a social network that has specific expertise, and techniques to data-mine crowd activity traces in support of a large number of uncommon user goals. These systems point to a future where social and crowd intelligence are central elements of interaction, software, and computation.
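The decompose-and-verify idea behind Soylent is known as Find-Fix-Verify: one crowd flags problem spans, another proposes fixes, a third votes. A schematic sketch, with plain deterministic functions standing in for the paid crowd workers (all names and behaviours here are invented for illustration):

```python
def find_fix_verify(text, find, fix, verify, workers=3):
    """Find-Fix-Verify: splitting the task into flag / fix / vote stages keeps
    any single lazy or erroneous worker from corrupting the result."""
    result = text
    for span in find(text):
        candidates = {fix(span, i) for i in range(workers)}     # independent fixes
        best = max(candidates, key=lambda c: verify(span, c))   # verification vote
        result = result.replace(span, best)
    return result

# simulated crowd (a real system would post these steps as paid microtasks)
def find(text):
    return [s for s in ("very very",) if s in text]

def fix(span, worker):
    return "very" if worker < 2 else span   # worker 2 is lazy and changes nothing

def verify(span, candidate):
    return -len(candidate)                  # voters prefer the tightest rewrite

shortened = find_fix_verify("This is a very very long sentence.", find, fix, verify)
```

The lazy worker's no-op candidate is outvoted at the verify stage, which is the pattern's answer to error-prone crowd work.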

25 citations


Journal ArticleDOI
TL;DR: This study compares two interactive information visualisation systems, VisuExplore and Gravi++, by analysing logfiles and identifying sets of activities and interaction patterns users followed while working with these systems.
Abstract: Modern information visualisation systems support not only interactivity but also increasingly complex problem solving. In this study we compare two interactive information visualisation systems: VisuExplore and Gravi++. By analysing logfiles we were able to identify sets of activities and interaction patterns users followed while working with these systems. These patterns are an indication of the strategies users adopt to find solutions. Identifying such patterns may help in improving the design of future information visualisation systems.

22 citations


Journal ArticleDOI
TL;DR: This work proposes a way to define and automate user and usability evaluation from within the integrated development environment (IDE) and provides a framework for conducting evaluation experiments using TaMoGolog-based formal task models.
Abstract: We propose a way to define and automate user and usability evaluation from within the integrated development environment (IDE). Specifically, for the automatic analysis of usability issues and functionality problems, we provide a framework for conducting evaluation experiments using TaMoGolog-based formal task models. This approach enables the software development team to automatically collect and analyze user and system activities and behavior in order to recognize usability issues and functionality problems in an efficient and effective way. The developed tools, UEMan and TaMUlator, realize the proposed approach and framework at the IDE level.

21 citations


Journal ArticleDOI
TL;DR: The theoretical foundation is laid by a mapping between the planning language Pddl and the Situation Calculus, which underlies Golog, together with a study of how these formalisms relate in terms of expressivity.
Abstract: Action programming languages like Golog make it possible to define complex behaviors for agents on the basis of action representations in terms of expressive (first-order) logical formalisms, making them suitable for realistic scenarios of agents with only partial world knowledge. Often these scenarios include sub-tasks that require sequential planning. While in principle it is possible to express and execute such planning sub-tasks directly in Golog, performance-wise the system cannot compete with state-of-the-art planners. In this paper, we report on our efforts to integrate efficient planning and expressive action programming in the Platas project. The theoretical foundation is laid by a mapping between the planning language Pddl and the Situation Calculus, which underlies Golog, together with a study of how these formalisms relate in terms of expressivity. The practical benefit is demonstrated by an evaluation of embedding a Pddl planner into Golog, showing a drastic increase in performance while retaining the full expressiveness of Golog.

20 citations


Journal ArticleDOI
TL;DR: An overview of data characteristics and of state-of-the-art preprocessing and analysis methods for trajectory data is provided, together with a collection of challenges that arise due to the increasing variety of spatiotemporal data sources and that have to be solved for the application of spatiotemporal analysis methods in practice.
Abstract: Over the past five to seven years the analysis of trajectory data has established itself as an independent research discipline within the area of data mining. In this article we provide an overview of data characteristics and of state-of-the-art preprocessing and analysis methods for trajectory data. We conclude the article with a collection of challenges that arise due to the increasing variety of spatiotemporal data sources and which have to be solved for the application of spatiotemporal analysis methods in practice.

18 citations


Journal ArticleDOI
TL;DR: This paper presents real-time approaches to segment trajectories into meaningful parts which reflect the underlying typical behaviour or structure and shows how atypical behaviour can be identified.
Abstract: Spatio-temporal trajectories contain implicit knowledge about the movement of individuals, which is relevant for problems in various domains, e.g. animal migration, traffic analysis, security. In this paper we present real-time approaches to segment trajectories into meaningful parts which reflect the underlying typical behaviour or structure. Based on this information, atypical behaviour can be identified.
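A minimal example of such a segmentation, splitting a trajectory into "stop" and "move" parts by instantaneous speed. The threshold and the trajectory are invented, and the paper's approaches are considerably more elaborate than this sketch:

```python
def segment_by_speed(points, stop_speed=1.0):
    """Split (t, x, y) fixes into alternating 'stop'/'move' segments."""
    segments = []
    current, mode = [points[0]], None
    for (t0, x0, y0), (t1, x1, y1) in zip(points, points[1:]):
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / (t1 - t0)
        new_mode = "stop" if speed < stop_speed else "move"
        if mode is not None and new_mode != mode:
            segments.append((mode, current))      # close the finished segment
            current = [(t0, x0, y0)]              # boundary point opens the next one
        current.append((t1, x1, y1))
        mode = new_mode
    segments.append((mode, current))
    return segments

# invented trajectory: parked for two minutes, then driving off
track = [(0, 0.0, 0.0), (1, 0.1, 0.0), (2, 0.2, 0.0), (3, 5.0, 0.0), (4, 10.0, 0.0)]
segments = segment_by_speed(track)
```

Because each fix is processed once, in arrival order, the same loop can run online over a live position stream, which is the real-time property the paper targets; segments that fit no typical pattern are then candidates for atypical behaviour.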

Journal ArticleDOI
Ubbo Visser1
TL;DR: Watson is one of the most impressive systems that the authors saw during 2011 and it is able to answer a wide range of questions for which the answer is a phrase—at most.
Abstract: (Laughs) I think that this is still hard for Q&A systems. So, just to explain that a little bit more, Watson is one of the most impressive systems that we saw during 2011. This system is able to answer a wide range of questions for which the answer is a phrase—at most. It won’t answer a question for which a sentence or paragraph is necessary. It can answer only those kinds of questions for which the answer must have been mentioned somewhere in the text. Given a question, it goes and looks for evidence for its potential answers in the text, and if there is no evidence to be found in the text it will fail. It is very impressive for what it does.

Journal ArticleDOI
TL;DR: This work reviews several illustrative examples of possible applications of Slow Feature Analysis including the estimation of driving forces, nonlinear blind source separation, traffic sign recognition, and face processing.
Abstract: Slow Feature Analysis (SFA) is an unsupervised learning algorithm based on the slowness principle and has originally been developed to learn invariances in a model of the primate visual system. Although developed for computational neuroscience, SFA has turned out to be a versatile algorithm also for technical applications since it can be used for feature extraction, dimensionality reduction, and invariance learning. With minor adaptations SFA can also be applied to supervised learning problems such as classification and regression. In this work, we review several illustrative examples of possible applications including the estimation of driving forces, nonlinear blind source separation, traffic sign recognition, and face processing.
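The slowness principle underlying SFA can be made concrete with its Delta value, the mean squared temporal derivative of a signal: SFA seeks the unit-variance projection of the input that minimizes this quantity. A small stdlib-only illustration of the objective (not the full algorithm, which solves a generalized eigenvalue problem on whitened data); the signals are invented:

```python
import math

def slowness(signal):
    """The Delta value: mean squared temporal derivative (lower = slower)."""
    return sum((b - a) ** 2 for a, b in zip(signal, signal[1:])) / (len(signal) - 1)

t = [i * 0.01 for i in range(1000)]
slow = [math.sin(2 * math.pi * 0.5 * x) for x in t]    # 0.5 Hz source
fast = [math.sin(2 * math.pi * 20.0 * x) for x in t]   # 20 Hz source
mixed = [a + 0.5 * b for a, b in zip(slow, fast)]      # observed mixture
```

Applied to `mixed`, SFA would recover (a scaled version of) `slow` as its first output component, precisely because any admixture of the fast source raises the Delta value.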

Journal ArticleDOI
TL;DR: Advances on neural network themes from the last decade are summarized, with a focus on results obtained by members of the SAMM team of Université Paris 1.
Abstract: Artificial neural networks are simple and efficient machine learning tools. Defined originally in the traditional setting of simple vector data, neural network models have evolved to address more and more difficulties of complex real world problems, ranging from time evolving data to sophisticated data structures such as graphs and functions. This paper summarizes advances on those themes from the last decade, with a focus on results obtained by members of the SAMM team of Université Paris 1.

Journal ArticleDOI
TL;DR: This paper shows in two evaluation scenarios that the wireless EPOC headsets can be used efficiently for supporting subjectivity measurement, and highlights situations that may result in a lower accuracy, as well as explore possible reasons and propose solutions for improving the error rates of the device.
Abstract: Since the dawn of the industrial era, modern devices and interaction methods have undergone rigorous evaluations in order to ensure their functionality and quality, as well as usability. While there are many methods for measuring objective data, capturing and interpreting subjective factors—like the feelings or states of mind of the users—is still an imprecise and usually post-event process. In this paper we propose the utilization of the Emotiv EPOC commercial electroencephalographic (EEG) neuroheadset for real-time support during evaluations and user studies. We show in two evaluation scenarios that the wireless EPOC headsets can be used efficiently for supporting subjectivity measurement. Additionally, we highlight situations that may result in a lower accuracy, as well as explore possible reasons and propose solutions for improving the error rates of the device.

Journal ArticleDOI
TL;DR: This paper describes the design and implementation of HyperLMNtal, a hierarchical hypergraph rewriting language model that enabled efficient encoding of a constraint processing language CHR in terms of both performance and computational complexity.
Abstract: LMNtal (pronounced “elemental”) is a language model based on hierarchical graph rewriting that uses point-to-point links to represent connectivity and membranes to represent hierarchy. LMNtal was designed to be a substrate language of various computational models, especially those addressing concurrency, mobility and multiset rewriting. Although point-to-point links and membranes could be used together to represent multipoint connectivity, our experiences with LMNtal showed that hyperlinks would be an important and useful extension to the language. We have accordingly expanded LMNtal into a hierarchical hypergraph rewriting language model, HyperLMNtal. HyperLMNtal enabled concise description of computational models involving flexible and diverse forms of references between data; in particular, it enabled efficient encoding of a constraint processing language CHR in terms of both performance and computational complexity. This paper describes the design and implementation of HyperLMNtal as a case study of language evolution.
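To give a flavor of the multiset rewriting that CHR embodies (and that HyperLMNtal can encode), here is the classic CHR gcd program, gcd(0) <=> true and gcd(N) \ gcd(M) <=> M>=N | gcd(M-N), as a hypothetical Python sketch; it is an illustration of the computational model, not of the HyperLMNtal implementation:

```python
def chr_gcd(numbers):
    """Rewrite a multiset of gcd/1 constraints to a fixpoint.
    Rule 1: gcd(0) <=> true           (drop zeros, if another constraint remains)
    Rule 2: gcd(N) \\ gcd(M) <=> M>=N | gcd(M-N)   (replace M by M-N)"""
    store = list(numbers)                 # the constraint store as a multiset
    changed = True
    while changed:
        changed = False
        if 0 in store and len(store) > 1:
            store.remove(0)               # rule 1 fires
            changed = True
            continue
        store.sort()                      # adjacent pairs satisfy the M >= N guard
        for i in range(len(store) - 1):
            n, m = store[i], store[i + 1]
            if n > 0:
                store[i + 1] = m - n      # rule 2 fires
                changed = True
                break
    return store
```

Execution proceeds until no rule matches, at which point the store holds the gcd of all inputs; the hyperlinks of HyperLMNtal serve exactly to express such many-to-many references between constraints efficiently.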

Journal ArticleDOI
TL;DR: This work investigates the process of role adoption in the context of the well-known OperA organisational modelling language, extends OperA to allow for the specification of role capabilities, and illustrates the approach in the Blocks World for Teams domain.
Abstract: The organisational specification of a multi-agent system supports agents’ effectiveness in attaining their purpose, or prevents certain undesired behaviour from occurring. This requires that agents are able to find out about the organisational purpose and description and decide on its appropriateness for their own objectives. Organisational modelling languages are used to specify an agent system in terms of its roles, organisational structure, norms, etc. Agents take part in organisations by playing one or more of the specified roles for which they have the necessary capabilities. In this paper, we investigate the process of role adoption in the context of the well-known OperA organisational modelling language. In OperA, each organisation has a gatekeeper role responsible for admitting agents to the organisation. Agents playing the role of gatekeeper can interact with agents that want to enter the organisation in order to come to an agreement on role adoption, that is, to negotiate which roles they will play and under which conditions they will play them. This is possible by evaluating capability requirements for roles. We extend OperA to allow for the specification of role capabilities. This approach is illustrated using the Blocks World for Teams (BW4T) domain.

Journal ArticleDOI
TL;DR: The vision and scientific challenges presented in this paper are the objectives of the FET-FP7 project DATASIM, which aims at providing an entirely new and highly detailed spatio-temporal microsimulation methodology for human mobility, grounded on massive amounts of big data of various types and from various sources.
Abstract: The vision and scientific challenges presented in this paper are the objectives of the FET-FP7 project DATASIM. The project aims at providing an entirely new and highly detailed spatio-temporal microsimulation methodology for human mobility, grounded on massive amounts of big data of various types and from various sources, with the goal of forecasting the nation-wide consequences of a massive switch to electric vehicles, given the intertwined nature of mobility and power distribution networks.

Journal ArticleDOI
TL;DR: This paper focuses on improving the conversational abilities of existing interactive interfaces by enhancing their underlying QA systems in terms of response time and correctness, and introduces a method based on a tripartite contextualization.
Abstract: Research results in the field of Question Answering (QA) have shown that the classification of natural language questions significantly contributes to the accuracy of the generated answers. In this paper we present an approach which extends the prevalent question classification techniques by additionally considering further contextual information provided by the questions. In doing so, we focus on improving the conversational abilities of existing interactive interfaces by enhancing their underlying QA systems in terms of response time and correctness. As a result, we are able to introduce a method based on a tripartite contextualization. First, we present a comprehensive question classification experiment based on machine learning using two different datasets and various feature sets for the German language. Second, we propose a method for detecting the focus chunk of a given question, that is, for identifying which part of the question is fundamentally relevant to the answer and which part refers to a specification of it. Third, we investigate how to identify and label the topic of a given question by means of a human-judgment experiment. We show that the resulting contextualization method contributes to an improvement of existing question answering systems and enhances their application within interactive scenarios.
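Question classification of the kind used in the first experiment can be illustrated with a tiny multinomial naive Bayes classifier over bag-of-words features. The training questions and the coarse answer-type classes below are invented, and the paper's feature sets are far richer than plain word counts:

```python
import math
from collections import Counter, defaultdict

# invented training questions with coarse answer-type classes
train = [("who wrote faust", "PERSON"), ("who is the chancellor", "PERSON"),
         ("where is berlin", "LOCATION"), ("where was kant born", "LOCATION"),
         ("when did the wall fall", "DATE"), ("when was goethe born", "DATE")]

class_counts = Counter(c for _, c in train)
word_counts = defaultdict(Counter)
vocab = set()
for text, c in train:
    for w in text.split():
        word_counts[c][w] += 1
        vocab.add(w)

def classify(question):
    """Multinomial naive Bayes with Laplace smoothing over word counts."""
    best, best_lp = None, -math.inf
    for c in class_counts:
        lp = math.log(class_counts[c] / len(train))        # class prior
        total = sum(word_counts[c].values())
        for w in question.split():                          # word likelihoods
            lp += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best
```

Knowing that the expected answer type is, say, PERSON lets the downstream QA system restrict its answer candidates, which is the accuracy contribution the abstract refers to.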

Journal ArticleDOI
TL;DR: The extension of Learning Vector Quantization by Matrix Relevance Learning is presented and discussed and a particularly successful application in the context of tumor classification highlights the usefulness and interpretability of the method in practical contexts.
Abstract: The extension of Learning Vector Quantization by Matrix Relevance Learning is presented and discussed. The basic concept, essential properties, and several modifications of the scheme are outlined. A particularly successful application in the context of tumor classification highlights the usefulness and interpretability of the method in practical contexts. The development of Matrix Relevance Learning Vector Quantization was, to a large extent, pursued in the frame of the project Adaptive Distance Measures in Relevance Learning Vector Quantization—Admire LVQ, funded through the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO) under project code 612.066.620, from 2007 to 2011.
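At the heart of Matrix Relevance LVQ is the adaptive squared distance d(x, w) = (x-w)^T Lambda (x-w) with Lambda = Omega^T Omega, whose learned matrix weights and correlates feature dimensions; inspecting Lambda is what makes the method interpretable. A minimal sketch of the distance only (the Omega values are invented, and prototype and relevance updates are omitted):

```python
def gmlvq_distance(x, w, omega):
    """Adaptive squared distance d(x, w) = (x-w)^T Lambda (x-w), Lambda = Omega^T Omega."""
    diff = [xi - wi for xi, wi in zip(x, w)]
    proj = [sum(row[j] * diff[j] for j in range(len(diff))) for row in omega]
    return sum(p * p for p in proj)

# with Omega = identity the measure reduces to squared Euclidean distance;
# a "learned" Omega that zeroes out an irrelevant second dimension changes
# how far a sample is from the prototype
identity = [[1, 0], [0, 1]]
relevance = [[1, 0], [0, 0]]   # invented: dimension 2 deemed irrelevant
d_euclid = gmlvq_distance([3, 4], [0, 0], identity)
d_adapt = gmlvq_distance([3, 4], [0, 0], relevance)
```

During training, Omega is adapted alongside the prototypes so that same-class samples move closer and other-class samples move away under this distance.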

Journal ArticleDOI
TL;DR: This contribution contains a short history of neural computation and an overview of the major learning paradigms and neural architectures used today.
Abstract: This contribution contains a short history of neural computation and an overview of the major learning paradigms and neural architectures used today.

Journal ArticleDOI
TL;DR: A new theory of mental imagery is presented, which in contrast to other contemporary theories is formalized as a computational cognitive model and it is argued that formalized theories can advance the currently stagnant imagery debate.
Abstract: Mental imagery is the human ability to imagine and reason with visuo-spatial information. It is crucial for everyday tasks such as describing a route or remembering the form of objects. The so-called imagery debate has centered on the question of how mental imagery is realized, i.e., what structures and algorithms can plausibly explain and model mental imagery. There is, however, little progress on a coherent theory that can sufficiently cover the diversity of the empirical data. This article presents a new theory of mental imagery, which in contrast to other contemporary theories is formalized as a computational cognitive model. We compare this theory to the contemporary theories using two representative phenomena of mental imagery. We argue that formalized theories can advance the currently stagnant imagery debate.

Journal ArticleDOI
TL;DR: This special issue reflects some of the main roads of current research in HCI and represents a nice multifaceted mix of world-renowned experts from many different disciplines.
Abstract: Human-Computer Interaction (HCI; original name: man-machine interaction) is a term that has been known for more than 30 years. The aim of HCI research is to intelligently assist users in improving the interaction with computers (in the sense that this interaction should become more user-friendly and better tailored to the needs and abilities of the users and the capabilities of the involved devices). HCI has its roots in other disciplines, especially in ergonomics and human factors. It is a multi-disciplinary research field, populated by researchers and practitioners from areas like psychology, cognitive science, ergonomics, sociology, computer science, graphic design, business, and more. This special issue on HCI reflects some of the main roads of current research in HCI. It represents a nice multifaceted mix of world-renowned experts from many different disciplines. A good starting point for reading is the very first article of this special issue, “Human-Computer Interaction—Introduction and Overview”, which takes the reader from research to practice, from computer science to cognitive psychology, and from past to future. One of the highlights of this issue is the interview with Turing Award winner Alan Kay, who reflects on the past, present and future of HCI. I wish to thank the KI editors for inviting me as a guest editor for this special issue of their journal. Furthermore, I want to thank all authors who have invested so much energy in their research work and contributed to this special issue on HCI. I should also not forget to mention the many reviewers who really helped to strongly improve this issue with lots of valuable comments and suggestions. I wish all readers a pleasant reading.

Journal ArticleDOI
TL;DR: This thesis presents a dynamical system approach to learning forward and inverse models in associative recurrent neural networks that enable robust and efficient training of multi-stable dynamics with application to movement control in robotics.
Abstract: This thesis presents a dynamical system approach to learning forward and inverse models in associative recurrent neural networks. Ambiguous inverse models are represented by multi-stable dynamics. Random projection networks, i.e. reservoirs, together with a rigorous regularization methodology enable robust and efficient training of multi-stable dynamics with application to movement control in robotics.

Journal ArticleDOI
TL;DR: This paper presents parts of on-going work on task-based user-system interaction, which highlights the need for a shift from an information-centric to a task-centric environment.
Abstract: In current electronic environments, the ever-increasing amount of personal information means that users focus more on managing their information than on using it to accomplish their objectives. To overcome this problem, a user task-based interactive environment is needed to help users focus on the tasks they wish to perform rather than spending more time on managing their personal information. In this paper, we present parts of our on-going work on task-based user-system interaction, which highlights the need for a shift from an information-centric to a task-centric environment. More precisely, we look into issues relating to modeling the user tasks that arise when users interact with the environment to fulfill their goals through these sets of tasks.

Journal ArticleDOI
TL;DR: This presentation focuses on new aspects of the current version of firstCS, i.e. redundancy checking and the assembling of new search strategies from existing ones using the implementation language Java.
Abstract: The aim of the presented constraint programming library firstCS is the integration of the constraint programming paradigm into the object-oriented programming language Java. This open-box library provides its users with the necessary concepts to model and solve constraint satisfaction problems and even constraint optimization problems over finite integer domains. The application focus of firstCS is constraint-based scheduling and resource allocation (e.g. Sandow in INFORMATIK 2011, LNI, vol. P-192, p. 248, 2011); however, it offers all primitives needed to realize new constraints and corresponding propagation algorithms as well as problem-specific tree search heuristics to find good or even best solutions. Beyond related work and an overview of the general architecture of the system and the supported constraints, this presentation focuses on new aspects of the current version of firstCS, namely redundancy checking and the assembling of new search strategies from existing ones using the implementation language Java. The presentation is completed by code fragments showing interesting implementation details.

Journal ArticleDOI
TL;DR: Starting from the customer support system developed by OMQ, two major challenges were addressed: the classification of incoming customer requests into previously defined problem cases, and the identification of new problem cases in a set of unclassified customer requests.
Abstract: Customer support departments of large companies are often faced with large amounts of customer requests about the same issue. These requests are usually answered by using preformulated text blocks. However, choosing the right text from a large number of text blocks can be challenging for the customer support agent, especially when the text blocks are thematically related. Optimizing this process using the power of language and knowledge technologies can save resources and improve customer satisfaction. We present a joint project between OMQ GmbH ( www.omq.de ) and the Language Technology lab of the DFKI GmbH ( www.dfki.de ) (German Research Center for Artificial Intelligence), in which, starting from the customer support system developed by OMQ, we addressed two major challenges: First, the classification of incoming customer requests into previously defined problem cases; second, the identification of new problem cases in a set of unclassified customer requests. The two tasks were approached using linguistic and statistical methods combined with machine learning techniques.

Journal ArticleDOI
TL;DR: This article presents the ECo-CoPS approach that defines a structured process for the selection of coordination mechanisms for autonomous planning systems, where the local autonomy and existing planning systems can be preserved.
Abstract: The reuse of code and concepts is an important aspect when developing multiagent systems (MAS) and it is a driving force of agent-oriented software engineering (AOSE). In particular, the reuse of mechanisms like coordination is fundamental to support developers of MASs. In this article we address the selection of effective and efficient mechanisms to coordinate plans among autonomous agents. The selection of coordination mechanisms has, up to now, not been sufficiently covered. Therefore, we present the ECo-CoPS approach, which defines a structured process for the selection of coordination mechanisms for autonomous planning systems in which the local autonomy and existing planning systems can be preserved.

Journal ArticleDOI
Tobias Müller1
TL;DR: An off-board diagnostics method is developed that generates and trains neural networks from recorded repair cases in an automated process; the evaluation shows that such an approach delivers good results with the already available data.
Abstract: Increasingly networked vehicle functions, a growing number of variants, and shorter development cycles are pushing established expert-based methods to their limits. The consequences are higher costs due to longer troubleshooting times, unnecessarily replaced parts, and overburdened service staff. In this work, an off-board diagnostics system is developed that first learns diagnostic models from recorded repair cases in an automated process. In the service workshops, the models generate hypotheses about faulty components and sensible repair actions on the basis of vehicle information and fault symptoms. The outcome of the repair is in turn recorded, yielding an iterative feedback and solution process for the automatic, continuous improvement of the models. For model building, descriptive and inferential statistical diagnostic models are analyzed first, but these prove insufficient given the nature and quality of the data. As a solution, the inferential statistical model is first transformed into a single-layer neural network, which is then extended with hidden neurons and connections by a new construction and training method, such that only those dependencies that are actually required can be represented. The special structure of the network moreover allows an interpretation of the connection weights and neurons, counteracting the typical black-box character of neural networks. A standard MLP network serves as a comparison. The resulting diagnostic models undergo a thorough evaluation consisting of a concept assessment, an evaluation with real case data, and a prototype for practical testing on the vehicle. The results show that very good results are achieved despite the limited quality and quantity of the repair case data available in the project.

Journal ArticleDOI
TL;DR: The AMARSi challenge is to integrate novel biological notions, advanced learning algorithms and cutting-edge compliant mechanics in the design of fully-fledged humanoid and quadruped robots with an unprecedented aptitude for integrating into the authors' environments.
Abstract: Flexible, robust, precise, adaptive, compliant and safe: these are some of the qualities robots must have to interact safely and productively with humans. Yet robots are still perceived as too rigid, clumsy and not sufficiently adaptive to work efficiently in interaction with people. The AMARSi Project endeavors to design and implement rich motor skills, unique flexibility, compliance and state-of-the-art learning in robots. Inspired by human-recorded motion and learning behavior, similarly versatile and constantly adaptive movements and skills endow robots with singularly human-like motor dynamics and learning. The AMARSi challenge is to integrate novel biological notions, advanced learning algorithms and cutting-edge compliant mechanics in the design of fully-fledged humanoid and quadruped robots with an unprecedented aptitude for integrating into our environments.