Author

Michael Engel

Other affiliations: University of Marburg
Bio: Michael Engel is an academic researcher from Technical University of Dortmund. He has contributed to research on topics including system-on-a-chip (SoC) and MPSoC. He has an h-index of 14 and has co-authored 47 publications receiving 689 citations. His previous affiliations include the University of Marburg.

Papers
Proceedings Article
09 Oct 2011
TL;DR: An overview of a major research project on dependable embedded systems that started in Fall 2010 and runs for a projected duration of six years is presented, including a new classification of faults, errors, and failures.
Abstract: The paper presents an overview of a major research project on dependable embedded systems that started in Fall 2010 and is running for a projected duration of six years. The aim is a 'dependability co-design' that spans various levels of abstraction in the design process of embedded systems, from the gate level through the operating system and application software to the system architecture. In addition, we present a new classification of faults, errors, and failures.
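
The abstract names the fault/error/failure classification without spelling it out. As a minimal sketch of the standard dependability chain this terminology follows (a fault may activate into an error, which may propagate into a failure), the hypothetical C model below illustrates the idea; all names are invented for illustration and are not the paper's actual taxonomy.

#include <stdio.h>

/* Hypothetical model of the dependability chain: a fault (e.g., a
 * bit flip) may be activated into an error (wrong internal state),
 * which may propagate into a failure (externally visible deviation
 * from the service specification). */
typedef enum { FAULT_TRANSIENT, FAULT_INTERMITTENT, FAULT_PERMANENT } fault_class;

typedef struct {
    fault_class cls;
    int activated;   /* fault became an error in some state element */
    int propagated;  /* error reached the service interface */
} fault_event;

static const char *outcome(const fault_event *e)
{
    if (!e->activated)  return "dormant fault (masked)";
    if (!e->propagated) return "error (latent or detected)";
    return "failure (visible at service interface)";
}

int main(void)
{
    fault_event bit_flip = { FAULT_TRANSIENT, 1, 0 };
    printf("%s\n", outcome(&bit_flip)); /* error (latent or detected) */
    return 0;
}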

99 citations

Proceedings Article
14 Mar 2005
TL;DR: The TOSKANA toolkit is introduced for deploying dynamic aspects into an operating system kernel, which, as the central part of a computer system, has an overview of current system operation and resource usage.
Abstract: To master the complexity of software systems in the presence of unexpected events potentially affecting system operation, the Autonomic Computing Initiative [16] aims to build systems that have the ability to control and organize themselves to meet unforeseen changes in the hardware and software environment. The basic principles employed by autonomic computing are self-configuration, self-optimization, self-healing and self-protection. Typically, these principles are cross-cutting concerns, so an obvious approach to their realization in software is to use aspect-oriented programming (AOP). Since autonomic systems have to adapt their behavior to changing runtime conditions, a dynamic AOP approach is required to implement autonomic computing functionality. This paper introduces the TOSKANA toolkit for deploying dynamic aspects into an operating system kernel, which, as the central part of a computer system, has an overview of current system operation and resource usage. TOSKANA provides before, after and around advice for in-kernel functions and supports the specification of pointcuts as well as the implementation of aspects themselves as dynamically exchangeable kernel modules. The use of TOSKANA is demonstrated by several examples indicating the cross-cutting nature of providing autonomic computing functionality in an operating system kernel. Performance results are presented to characterize the aspect deployment overhead incurred by using TOSKANA.
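
The abstract names before, after and around advice for in-kernel functions but does not show TOSKANA's interface. As a rough conceptual sketch of dynamic advice deployment, the hypothetical C fragment below routes calls to a kernel-style function through a dispatcher so advice can be attached at runtime; all names are invented for illustration and do not reflect TOSKANA's actual API, which patches real kernel code rather than using a dispatcher.

#include <stdio.h>

/* Hypothetical advice type: runs before or after the advised function.
 * A real dynamic-AOP kernel toolkit would patch the call site or the
 * function prologue at runtime; here calls simply go through a
 * dispatcher for illustration. */
typedef void (*advice_fn)(const char *joinpoint);

static advice_fn before_advice; /* NULL means: no aspect deployed */
static advice_fn after_advice;

/* The "join point": an in-kernel function we want to advise. */
static int do_open(const char *path)
{
    printf("kernel: opening %s\n", path);
    return 0;
}

/* Dispatcher that the (hypothetical) weaver installs in place of
 * direct calls to do_open(). */
static int do_open_advised(const char *path)
{
    if (before_advice) before_advice("do_open");
    int ret = do_open(path);
    if (after_advice)  after_advice("do_open");
    return ret;
}

/* An aspect, e.g. for self-monitoring: log every open. */
static void log_before(const char *jp) { printf("aspect: entering %s\n", jp); }
static void log_after(const char *jp)  { printf("aspect: leaving %s\n", jp); }

int main(void)
{
    do_open_advised("/etc/motd");   /* no aspect deployed yet */
    before_advice = log_before;     /* "dynamic deployment" */
    after_advice  = log_after;
    do_open_advised("/etc/motd");   /* now advised */
    return 0;
}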

84 citations

Proceedings Article
07 Oct 2012
TL;DR: Romain is presented, a framework that provides transparent redundant multithreading as an operating system service for hardware error detection and recovery, while minimizing the complexity added to the operating system for the sake of replication.
Abstract: In modern commodity operating systems, core functionality is usually designed assuming that the underlying processor hardware always functions correctly. Shrinking hardware feature sizes break this assumption. Existing approaches to cope with these issues either use hardware functionality that is not available in commercial-off-the-shelf (COTS) systems or pose additional requirements on the software development side, making reuse of existing software hard, if not impossible. In this paper we present Romain, a framework that provides transparent redundant multithreading as an operating system service for hardware error detection and recovery. When applied to a standard benchmark suite, Romain requires a maximum runtime overhead of 30% for triple-modular redundancy (while in many cases remaining below 5%). Furthermore, our approach minimizes the complexity added to the operating system for the sake of replication.
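
Romain's replication mechanism is not detailed in this abstract. As a minimal sketch of the triple-modular-redundancy idea it refers to, the following C fragment runs three replicas of a computation and majority-votes on their results; the function names are invented for illustration.

#include <stdio.h>

/* The replicated computation. A fault injector could corrupt one
 * replica's result to exercise the voter. */
static int compute(int input) { return input * input; }

/* Majority vote over three replica results: with at most one faulty
 * replica, at least two results always agree. */
static int vote(int a, int b, int c)
{
    if (a == b || a == c) return a;
    if (b == c)           return b;
    return -1; /* no majority: double fault, recovery impossible */
}

int main(void)
{
    int r1 = compute(7);
    int r2 = compute(7);
    int r3 = compute(7) ^ 4;  /* simulate a bit flip in replica 3 */
    printf("voted result: %d (replica 3 outvoted)\n", vote(r1, r2, r3));
    return 0;
}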

57 citations

Journal Article
TL;DR: The paper presents solutions for addressing the threats inherent to three increasingly demanding levels of on-demand Grid computing, applying sandbox-based approaches using virtual machine technology and jailing mechanisms to ensure trust, and Trusted Computing Platform Alliance technology for the third level.
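
The TL;DR mentions jailing as one sandboxing mechanism without giving details. A minimal sketch of classic UNIX jailing, confining a process to a filesystem subtree and dropping privileges, might look like the C fragment below; it must run as root, the jail path and job binary are hypothetical, and a real grid sandbox would layer further mechanisms (virtual machines, resource limits) on top.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Minimal UNIX "jail": confine the process to a subtree and drop
 * root privileges before running untrusted code. */
int main(void)
{
    const char *jail = "/var/jail";   /* hypothetical jail directory */

    if (chroot(jail) != 0) { perror("chroot"); return EXIT_FAILURE; }
    if (chdir("/") != 0)   { perror("chdir");  return EXIT_FAILURE; }

    /* Drop privileges: without root, escaping the chroot is much harder. */
    if (setgid(65534) != 0 || setuid(65534) != 0) {  /* nobody */
        perror("drop privileges");
        return EXIT_FAILURE;
    }

    /* Run the untrusted job inside the jail (path is jail-relative). */
    execl("/bin/job", "job", (char *)NULL);
    perror("execl");
    return EXIT_FAILURE;
}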

42 citations

Book Chapter
01 Jan 2016
TL;DR: The final main section of this chapter comprises solutions which demonstrate that it is feasible to address the challenges, even though a major amount of additional work is required.
Abstract: The notion of Cyber-Physical Systems (CPS) has recently been introduced. The term describes the integration of information and communication technologies (ICT) with real, physical objects. In this chapter, we motivate work in this new area by presenting the large set of opportunities resulting from this integration. However, exploiting these opportunities requires coping with a number of challenges, which we also include in this chapter. The final main section of this chapter comprises solutions which demonstrate that it is feasible to address the challenges, even though a major amount of additional work is required.

31 citations


Cited by
Journal Article
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories.

First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules.

Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs.

Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.

Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically.

Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
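
As a toy illustration of the mail-filtering example above (learning which messages a user rejects and updating the rules automatically), the hypothetical C sketch below applies a perceptron-style weight update over binary word features; it is a conceptual sketch, not a method from the cited text, and all names and features are invented.

#include <stdio.h>

#define NFEAT 4  /* toy feature set: presence of certain words */

/* Perceptron-style filter: score a message from binary word features
 * and nudge the weights whenever the user overrides the prediction. */
static double weights[NFEAT];

static int predict(const int feat[NFEAT])
{
    double score = 0.0;
    for (int i = 0; i < NFEAT; i++) score += weights[i] * feat[i];
    return score > 0.0;  /* 1 = reject (unwanted), 0 = keep */
}

static void learn(const int feat[NFEAT], int user_rejected)
{
    int err = user_rejected - predict(feat);  /* -1, 0, or +1 */
    for (int i = 0; i < NFEAT; i++) weights[i] += 0.5 * err * feat[i];
}

int main(void)
{
    /* features: {"free", "meeting", "winner", "project"} present? */
    int spam[NFEAT] = {1, 0, 1, 0};
    int work[NFEAT] = {0, 1, 0, 1};

    for (int round = 0; round < 3; round++) {
        learn(spam, 1);  /* user rejects the spam-like message */
        learn(work, 0);  /* user keeps the work message */
    }
    printf("spam-like -> %s\n", predict(spam) ? "reject" : "keep");
    printf("work-like -> %s\n", predict(work) ? "reject" : "keep");
    return 0;
}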

13,246 citations

Journal Article
TL;DR: AspectJ, as presented in this paper, is a simple and practical aspect-oriented extension to Java. With just a few new constructs, AspectJ provides support for modular implementation of a range of crosscutting concerns.
Abstract: AspectJ is a simple and practical aspect-oriented extension to Java. With just a few new constructs, AspectJ provides support for modular implementation of a range of crosscutting concerns. In AspectJ's dynamic join point model, join points are well-defined points in the execution of the program; pointcuts are collections of join points; advice are special method-like constructs that can be attached to pointcuts; and aspects are modular units of crosscutting implementation, comprising pointcuts, advice, and ordinary Java member declarations. AspectJ code is compiled into standard Java bytecode. Simple extensions to existing Java development environments make it possible to browse the crosscutting structure of aspects in the same kind of way as one browses the inheritance structure of classes. Several examples show that AspectJ is powerful, and that programs written using it are easy to understand.
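
AspectJ itself is a Java extension; keeping with the C sketches above, the hypothetical fragment below mimics only the concept of AspectJ's around advice, where the advice runs in place of the join point and may or may not proceed with the original call. None of this is AspectJ syntax, and all names are invented for illustration.

#include <stdio.h>

/* The advised operation (the "join point"). */
static int transfer(int amount)
{
    printf("transferring %d\n", amount);
    return 0;
}

/* Around advice: runs instead of the join point and may invoke the
 * original function (the proceed() analogy) zero or one times. */
typedef int (*proceed_fn)(int);

static int around_limit(proceed_fn proceed, int amount)
{
    if (amount > 1000) {
        printf("advice: rejecting transfer of %d\n", amount);
        return -1;             /* do not proceed */
    }
    return proceed(amount);    /* proceed with the original call */
}

int main(void)
{
    around_limit(transfer, 500);   /* proceeds */
    around_limit(transfer, 5000);  /* blocked by the advice */
    return 0;
}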

2,947 citations

Proceedings Article
01 Jan 2003

1,212 citations