scispace - formally typeset
Author

Andreas Bernauer

Bio: Andreas Bernauer is an academic researcher from the University of Tübingen. The author has contributed to research in the topics of learning classifier systems and systems-on-chip. The author has an h-index of 7 and has co-authored 19 publications receiving 154 citations.

Papers
Proceedings ArticleDOI
01 Oct 2006
TL;DR: This paper presents an organic computing inspired SoC architecture which applies self-organization and self-calibration concepts to build reliable SoCs with lower overheads and a broader fault coverage than classical fault-tolerance techniques.
Abstract: The evolution of CMOS technologies leads to integrated circuits with ever smaller device sizes, lower supply voltages, higher clock frequencies and more process variability. Intermittent faults affecting logic and timing are becoming a major challenge for future integrated circuit designs. This paper presents an Organic Computing inspired SoC architecture which applies self-organization and self-calibration concepts to build reliable SoCs with lower overheads and broader fault coverage than classical fault-tolerance techniques. We demonstrate the feasibility of this approach on the processing pipeline of a public-domain RISC CPU core.

33 citations

Proceedings ArticleDOI
14 Sep 2009
TL;DR: The proposed generic self-adaptation method helps to improve the design process by allowing design reuse, providing generic applicability, and offering a uniform design process for various self-adaptation tasks.
Abstract: We investigate a generic self-adaptation method to reduce the design effort for System-on-Chip (SoC). Previous self-adaptation solutions at chip-level use circuitries which have been specially designed for the current problem by hand, leading to an elaborate and inflexible design process, requiring specially trained engineers, and making design reuse difficult. On the other hand, a generic self-adaptation method that can be used for various self-adaptation problems promises to reduce the necessary design effort, but may come with reduced performance and other costs. In this paper, we analyze the performance, self-adaptation capabilities and costs of a generic self-adaptation method. The proposed method allows chip-level self-adaptation of a SoC, can tolerate unforeseen events, and can generalize from previous self-adaptation tasks. Furthermore, the method helps to improve the design process by allowing design reuse, providing generic applicability, and offering a uniform design process for various self-adaptation tasks. Simulation results show that the performance of our method lies only 10% below the performance of a perfect, non-adaptive system in the average case, and only 32% in the worst case. In case of unforeseen events, where the performance of a non-adaptive system decreases significantly, the method can keep its performance level by self-adaptation. We also compare other costs involved.

27 citations

01 Jan 2006
TL;DR: This paper presents an architecture to evaluate the reliability of a system-on-chip (SoC) during its runtime that also accounts for the system’s redundancy and proposes to integrate an autonomic layer into the SoC to detect the chip's current condition and instruct appropriate countermeasures.
Abstract: This paper presents an architecture to evaluate the reliability of a system-on-chip (SoC) during its runtime that also accounts for the system’s redundancy. We propose to integrate an autonomic layer into the SoC to detect the chip’s current condition and instruct appropriate countermeasures. In the autonomic layer, error counters are used to count the number of errors within a fixed time interval. The counters’ values accumulate into a global register representing the system’s reliability. The accumulation takes into account the series and parallel composition of the system.
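The accumulation described in the abstract can be sketched in software. This is a hypothetical illustration, not the paper's actual design: the error-to-reliability mapping, the `max_errors` parameter, and the example composition (two redundant cores in series with a bus) are invented assumptions; only the use of standard series/parallel reliability composition follows the abstract.

```python
# Hypothetical sketch: map per-interval error counts to component
# reliabilities, then combine them by series/parallel composition.
# The mapping and component names are illustrative assumptions.

def reliability_from_errors(errors, max_errors=100):
    """Map an error count in a fixed interval to a reliability in [0, 1]."""
    return max(0.0, 1.0 - errors / max_errors)

def series(*rs):
    """A series composition fails if any component fails."""
    p = 1.0
    for r in rs:
        p *= r
    return p

def parallel(*rs):
    """A parallel (redundant) composition fails only if all components fail."""
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

# Example: two redundant CPU cores in series with a shared bus.
cores = parallel(reliability_from_errors(5), reliability_from_errors(12))
system = series(cores, reliability_from_errors(2))
print(round(system, 4))  # 0.9741
```

The global reliability register in the autonomic layer would hold the result of evaluating such a composition tree over the current counter values.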

16 citations

Book ChapterDOI
20 Sep 2010
TL;DR: A novel two-stage method is presented to realise a lightweight but very capable hardware implementation of a Learning Classifier System for on-chip learning.
Abstract: In this article we present a novel two-stage method to realise a lightweight yet very capable hardware implementation of a Learning Classifier System for on-chip learning. Learning Classifier Systems (LCS) enable good run-time decisions, but current hardware implementations are either large or have limited learning capabilities.

12 citations

01 Jan 2012
TL;DR: It is shown that Learning Classifier Tables, a simplified XCS-based reinforcement learning technique optimised for a low-overhead hardware implementation and integration, achieve nearly optimal results for task-level dynamic workload balancing during run time for a standard networking application.
Abstract: This article presents the use of decentralised self-organisation concepts for the efficient dynamic parameterisation of hardware components and the autonomic distribution of tasks in a symmetrical multi-core processor system. Using results obtained with an autonomic system-on-chip hardware demonstrator, we show that Learning Classifier Tables, a simplified XCS-based reinforcement learning technique optimised for a low-overhead hardware implementation and integration, achieve nearly optimal results for task-level dynamic workload balancing during run time for a standard networking application. Further investigations show the quantitative differences in optimisation quality between scenarios when local and global system information is available to the classifier rules. Autonomic workload management or task repartitioning at run time relieves the software application developers from exploring this NP-hard problem during design time, and is able to react to dynamic and unforeseeable changes in the MPSoC operating environment.
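The core mechanism of a Learning Classifier Table can be sketched as follows. This is a minimal software illustration in the XCS spirit, assuming wildcard-matching rules that carry a reward prediction updated by a Widrow-Hoff rule; the states, actions, learning rate, and reward values are invented for illustration and will differ from the paper's hardware LCT.

```python
# Minimal sketch of a Learning Classifier Table: rules match a coarse
# system state, the best-predicting matching rule selects the action,
# and its reward prediction is updated online (Widrow-Hoff rule).
# States, actions, and rewards below are invented examples.

BETA = 0.2  # learning rate (assumed value)

class Rule:
    def __init__(self, condition, action, prediction=0.0):
        self.condition = condition    # e.g. ("high_load", "*"), '*' = wildcard
        self.action = action          # e.g. "migrate_task"
        self.prediction = prediction  # running estimate of the reward

    def matches(self, state):
        return all(c == "*" or c == s for c, s in zip(self.condition, state))

def decide(rules, state):
    """Pick the matching rule with the highest predicted reward."""
    matching = [r for r in rules if r.matches(state)]
    return max(matching, key=lambda r: r.prediction) if matching else None

def update(rule, reward):
    """Widrow-Hoff update of the reward prediction."""
    rule.prediction += BETA * (reward - rule.prediction)

rules = [
    Rule(("high_load", "*"), "migrate_task", prediction=0.5),
    Rule(("high_load", "hot"), "throttle", prediction=0.4),
    Rule(("low_load", "*"), "stay", prediction=0.9),
]
r = decide(rules, ("high_load", "hot"))
update(r, reward=1.0)  # the environment rewarded the chosen action
print(r.action, round(r.prediction, 2))  # migrate_task 0.6
```

In the hardware setting, such a table is what makes the approach low-overhead: matching and the prediction update are simple enough to implement directly in logic.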

10 citations


Cited by
Book
28 Dec 2017
TL;DR: Systems are defined by their system boundary, which makes a distinction between inside and outside possible (i.e. between self and non-self, see Sect. 4.1); choosing where to place this boundary involves a trade-off between initial development cost and later adaptation cost.
Abstract: Abstraction is the selection and (possibly) coarsening (i.e. quantisation) of certain system characteristics (attributes, performance indicators, parameters) from the total set of system characteristics. The abstraction process comprises:
• a simplification (example: the colour Red is an abstraction, which neglects the different possible shades of Red, the wavelengths, the intensity etc.),
• an aggregation (example: ‘Temperature’ condenses the myriad of individual molecule movements in a gas volume into a single number),
• and consequently: loss of information.
The opposite of abstraction is concretisation. It comprises:
• gain of information,
• detailing,
• refinement,
• disaggregation,
• in engineering: the design process.

3.2.2 System Boundary
Systems are defined by their system boundary, which makes a distinction between inside and outside possible (i.e. between self and non-self, see Sect. 4.1). When systems are designed, we must choose where to place the boundary. This involves a consideration of cost (Fig. 3.3): A narrow boundary will reduce the cost for planning and development in the first place but bears the risk of a high effort in case a subsequent adaptation and extension of the system should be necessary. A wide system boundary reverses the cost curves. Apparently, the total cost minimum lies somewhere in the middle.
[Fig. 3.3: Trade-off between initial development cost and later adaptation cost]

3.2.3 Some System Types and Properties
Most systems are transient, i.e. they change over time. They are developed, assembled, modified, destroyed. Reactive systems react on inputs (events, signals, sensor data) by applying outputs (events, control signals, commands) to the environment. All open systems are reactive! In a more focused definition: only systems reacting in real time (within a predefined time) are reactive systems. Such real-time systems are characterised by (1) time restrictions (deadlines), and (2) time determinism (i.e. guarantees).

Planned vs. unplanned systems: During the development process of technical systems, only predictable events are taken into account in the designed core. But there will always be an unpredictable rest. Therefore, the ‘spontaneous closure’ (Fig. 3.4) has to cover these events. In technical systems, the exception handler or the diagnosis system play the role of a simple spontaneous closure. Living systems are characterised by a powerful spontaneous closure. Recurring reactions of the spontaneous closure become part of the designed core. This amounts to learning.

97 citations

Journal ArticleDOI
01 Apr 2014
TL;DR: A definition of population diversity in the BSO algorithm is introduced in this paper to measure the change of the solutions’ distribution; experimental results show that the performance of BSO is improved by partial solution re-initialization strategies.
Abstract: Convergence and divergence are two common phenomena in swarm intelligence. To obtain good search results, an algorithm should balance convergence and divergence. Premature convergence happens partially due to the solutions getting clustered together and not diverging again. Brain storm optimization (BSO), which is a young and promising algorithm in swarm intelligence, is based on the collective behavior of human beings, namely the brainstorming process. The convergence strategy is utilized in the BSO algorithm to exploit search areas that may contain good solutions. New solutions are generated by the divergence strategy to explore new search areas. Premature convergence also happens in the BSO algorithm: the solutions get clustered after a few iterations, which indicates that the population diversity decreases quickly during the search. A definition of population diversity in the BSO algorithm is introduced in this paper to measure the change of the solutions’ distribution. The algorithm’s exploration and exploitation abilities can be measured based on the change of population diversity. Different kinds of partial re-initialization strategies are utilized to improve the population diversity in the BSO algorithm. The experimental results show that the performance of BSO is improved by these partial solution re-initialization strategies.
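The two ingredients of the abstract can be sketched concretely. This is an illustrative sketch only: the paper does not specify its diversity definition here, so the code assumes a common one (mean distance of solutions to the population centroid), and the re-initialization fraction, bounds, and test function are invented.

```python
import random

# Sketch: measure population diversity as the mean distance to the
# centroid, and restore diversity by re-initializing the worst
# fraction of the population. Parameters are illustrative assumptions.

def diversity(pop):
    """Mean Euclidean distance of solutions to the population centroid."""
    dim = len(pop[0])
    centroid = [sum(x[d] for x in pop) / len(pop) for d in range(dim)]
    return sum(
        sum((x[d] - centroid[d]) ** 2 for d in range(dim)) ** 0.5 for x in pop
    ) / len(pop)

def partial_reinit(pop, fitness, frac=0.3, lo=-5.0, hi=5.0):
    """Replace the worst `frac` of solutions with random ones (minimization)."""
    pop = sorted(pop, key=fitness)  # best (lowest fitness) first
    n_new = int(len(pop) * frac)
    dim = len(pop[0])
    for i in range(len(pop) - n_new, len(pop)):
        pop[i] = [random.uniform(lo, hi) for _ in range(dim)]
    return pop

random.seed(0)
sphere = lambda x: sum(v * v for v in x)
# A clustered (prematurely converged) population near the point (1, 1).
pop = [[1.0 + random.gauss(0, 0.01), 1.0 + random.gauss(0, 0.01)] for _ in range(10)]
before = diversity(pop)
pop = partial_reinit(pop, sphere)
after = diversity(pop)
print(after > before)  # re-initialization increased diversity
```

In a full BSO loop, the diversity measure would be checked each iteration and the partial re-initialization triggered when diversity falls below a threshold.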

83 citations

Proceedings ArticleDOI
20 Oct 2008
TL;DR: The core idea is that the behaviour of an organic computing system can be split into productive phases and self-x phases, which allows for a generic description of how "organic" aspects can be specified and implemented.
Abstract: Organic computing systems are systems which have the capability to autonomously (re-)organize and adapt themselves. The benefit of such systems with self-x properties is that they are more dependable, as they can compensate for some failures. They are easier to maintain, because they can automatically configure themselves, and they are more convenient to use because of their automatic adaptation to new situations. While organic computing systems have many desirable properties, there still exists only little knowledge on how they can be designed and built. In this paper an approach for the specification and construction of a class of organic computing systems, called the (RIA), is presented. The core idea is that the behaviour of an organic computing system can be split into productive phases and self-x phases. This allows for a generic description of how "organic" aspects can be specified and implemented. The approach is illustrated by applying it to a design methodology for organic computing systems and further refining it in an explicit case study in the domain of production automation.

53 citations