
Showing papers by "Sebastian Elbaum published in 1999"


Patent
11 May 1999
TL;DR: A real-time approach is presented for detecting aberrant modes of system behavior induced by abnormal and unauthorized system activities that are indicative of an intrusive, undesired access of the system.
Abstract: A real-time approach for detecting aberrant modes of system behavior induced by abnormal and unauthorized system activities that are indicative of an intrusive, undesired access of the system. This detection methodology is based on behavioral information obtained from a suitably instrumented computer program as it is executing. The theoretical foundation for the present invention is founded on a study of the internal behavior of the software system. As a software system is executing, it expresses a set of its many functionalities as sequential events. Each of these functionalities has a characteristic set of modules that is executed to implement the functionality. These module sets execute with clearly defined and measurable execution profiles, which change as the executed functionalities change. Over time, the normal behavior of the system will be defined by the boundary of the profiles. An attempt to violate the security of the system will result in behavior that is outside the normal activity of the system and thus result in a perturbation of the system in a manner outside the scope of the normal profiles. Such violations are detected by an analysis and comparison of the profiles generated from an instrumented software system against a set of known intrusion profiles and a varying criterion level of potential new intrusion events.
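The profile-based detection the abstract describes can be sketched in code. The following is a minimal, hypothetical illustration (not the patented mechanism): an execution profile is built as the relative frequency with which each module runs, and an observation is flagged when it lies outside a distance threshold of every known-normal profile. The function names, distance metric, and threshold are all assumptions made for illustration.

```python
# Illustrative sketch of execution-profile anomaly detection.
# All names and the L1-distance criterion are assumptions, not the patent's design.
from collections import Counter

def execution_profile(module_events, n_modules):
    """Turn a sequence of executed module ids into a relative-frequency profile."""
    counts = Counter(module_events)
    total = len(module_events)
    return [counts.get(m, 0) / total for m in range(n_modules)]

def l1_distance(p, q):
    """Sum of absolute per-module frequency differences."""
    return sum(abs(a - b) for a, b in zip(p, q))

def is_anomalous(observed, nominal_profiles, threshold=0.5):
    """Anomalous if the observed profile lies outside every nominal neighborhood."""
    return all(l1_distance(observed, p) > threshold for p in nominal_profiles)
```

An execution that exercises the usual module mix stays inside the boundary of the nominal profiles, while one dominated by a rarely used module is flagged as a perturbation.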

131 citations


09 Apr 1999
TL;DR: It is shown, through the real-time analysis of the Linux kernel, that the authors can detect very subtle shifts in the behavior of a system, and that an attempt to violate the security of the system will result in behavior outside the normal activity of the system, and thus in a perturbation of the normal profiles.
Abstract: The thrust of this paper is to present a new real-time approach to detect aberrant modes of system behavior induced by abnormal and unauthorized system activities. The theoretical foundation for the research program is based on the study of the software internal behavior. As a software system is executing, it will express a set of its many functionalities as sequential events. Each of these functionalities has a characteristic set of modules that it will execute. In addition, these module sets will execute with clearly defined and measurable execution profiles. These profiles change as the executed functionalities change. Over time, the normal behavior of the system will be defined by profiles. An attempt to violate the security of the system will result in behavior that is outside the normal activity of the system and thus result in a perturbation in the normal profiles. We will show, through the real-time analysis of the Linux kernel, that we can detect very subtle shifts in the behavior of a system.
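The shift detection sketched in the abstract compares profiles over time. As a hedged illustration (not the paper's actual statistic or parameters), one can slice the module-event stream into fixed windows, compute a frequency profile per window, and score each window's divergence from a baseline window; a score above a threshold marks a behavioral shift. Window size, threshold, and the chi-square-style divergence below are assumptions.

```python
# Illustrative sketch of detecting behavioral shifts in a module-event stream.
# Window size, threshold, and the divergence statistic are assumed for illustration.
from collections import Counter

def window_profile(events, n_modules):
    """Relative-frequency profile of one window of module events."""
    counts = Counter(events)
    total = len(events)
    return [counts.get(m, 0) / total for m in range(n_modules)]

def shift_score(baseline, current, eps=1e-9):
    """Symmetric chi-square-style divergence between two frequency profiles."""
    return sum((b - c) ** 2 / (b + c + eps) for b, c in zip(baseline, current))

def detect_shifts(events, n_modules, window=100, threshold=0.1):
    """Return start indices of windows whose profile diverges from the baseline."""
    base = window_profile(events[:window], n_modules)
    flagged = []
    for start in range(window, len(events) - window + 1, window):
        cur = window_profile(events[start:start + window], n_modules)
        if shift_score(base, cur) > threshold:
            flagged.append(start)
    return flagged
```

A stream that keeps expressing the same functionalities produces no flags, while a stream whose module mix changes mid-run is flagged at the window where the shift occurs.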

28 citations


Proceedings ArticleDOI
05 Jan 1999
TL;DR: A model is investigated that represents the program's sequential execution of modules as a stochastic process, which may help to learn exactly where the system is fragile and under which execution patterns a certain level of reliability can be guaranteed.
Abstract: Assessing the reliability of a software system has always been an elusive target. A program may work very well for a number of years, and this same program may suddenly become quite unreliable if its mission is changed by the user. This has led to the conclusion that the failure of a software system is dependent only on what the software is currently doing. If a program is always executing a set of fault-free modules, it will certainly execute indefinitely without any likelihood of failure. A program may execute a sequence of fault-prone modules and still not fail. In this particular case, the faults may lie in a region of the code that is not likely to be expressed during the execution of those modules. A failure event can only occur when the software system executes a module that contains faults. If an execution pattern that drives the program into a module that contains faults is never selected, then the program will never fail. Alternatively, a program may successfully execute a module that contains faults, just as long as the faults are in code subsets that are not executed. The reliability of the system, then, can only be determined with respect to what the software is currently doing. Future reliability predictions will be bound in their precision by the degree of understanding of future execution patterns. We investigate a model that represents the program's sequential execution of modules as a stochastic process. By analyzing the transitions between modules and their failure counts, we may learn exactly where the system is fragile and under which execution patterns a certain level of reliability can be guaranteed.
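The stochastic model described above can be sketched as a simple Markov chain over modules. In this hedged illustration (the paper's actual estimator may differ), observed module-to-module transfer counts are normalized into transition probabilities, each module carries an estimated per-visit failure probability, and the chance of surviving a run of k transitions is computed by propagating the not-yet-failed probability mass. All numbers and names are invented for illustration.

```python
# Illustrative sketch: module transitions as a Markov chain combined with
# per-module failure probabilities. Estimator details are assumptions.

def transition_matrix(counts):
    """Row-normalize observed module-transfer counts into probabilities."""
    return [[c / sum(row) for c in row] for row in counts]

def survival_probability(P, fail_prob, start, steps):
    """Probability of completing `steps` module transitions from `start`
    without a failure (failure is possible on each module visit)."""
    n = len(P)
    # dist[m] = probability of currently being in module m without having failed
    dist = [0.0] * n
    dist[start] = 1.0 - fail_prob[start]
    for _ in range(steps):
        nxt = [0.0] * n
        for i in range(n):
            for j in range(n):
                nxt[j] += dist[i] * P[i][j] * (1.0 - fail_prob[j])
        dist = nxt
    return sum(dist)
```

This mirrors the abstract's argument: with all failure probabilities at zero (only fault-free modules executed), survival is certain for any run length, while execution patterns that visit fragile modules lower the reliability in proportion to how often they do so.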

21 citations


Journal ArticleDOI
TL;DR: The initial estimates of fault introduction rates can serve as a baseline against which future projects can be compared to determine whether progress is being made in reducing the fault introduction rate, and to identify those development techniques that seem to provide the greatest reduction.
Abstract: In any manufacturing environment, the fault introduction rate might be considered one of the most meaningful criteria for evaluating the goodness of the development process. In many investigations, the estimates of such a rate are oversimplified or misunderstood, generating unrealistic expectations about the predictive power of regression models with a fault criterion. The computation of fault introduction rates in software development requires accurate and consistent measurement, which translates into demanding parallel efforts for the development organization. This paper presents the techniques and mechanisms that can be implemented in a software development organization to provide a consistent method of anticipating fault content and structural evolution across multiple projects over time. The initial estimates of fault introduction rates can serve as a baseline against which future projects can be compared, to determine whether progress is being made in reducing the fault introduction rate, and to identify those development techniques that seem to provide the greatest reduction.
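The baseline comparison the abstract proposes can be illustrated with a minimal sketch, assuming the fault introduction rate is measured as faults found per unit of structural change (code churn); the measurement units, pooling scheme, and all data below are illustrative assumptions, not the paper's actual instrumentation.

```python
# Illustrative sketch: fault introduction rate as faults per unit of code churn,
# compared against a baseline pooled from earlier projects. Data is invented.

def fault_introduction_rate(faults, churn):
    """Faults introduced per unit of structural change."""
    return faults / churn

def baseline_rate(history):
    """Pooled rate across past (faults, churn) project observations."""
    total_faults = sum(f for f, _ in history)
    total_churn = sum(c for _, c in history)
    return total_faults / total_churn

def improvement(history, faults, churn):
    """Fractional reduction of a new project's rate relative to the baseline."""
    base = baseline_rate(history)
    return 1.0 - fault_introduction_rate(faults, churn) / base
```

A new project whose rate falls below the pooled historical rate shows a positive improvement, giving the organization the kind of cross-project yardstick the paper argues for.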

8 citations


01 Jan 1999
TL;DR: The Software Black Box (SBB) is presented, which constitutes a framework that facilitates the investigation and understanding of software failures, and specifies a mechanism to capture the essentials of an executing program, and provides a reconstruction technique which allows the generation of the scenarios that may have led to the failure.
Abstract: One of the greatest safety-improvement inventions for the airline industry has been the crash-protected Flight Data Recorder (FDR). Today, FDRs for accident investigation are mandatory pieces of equipment in most civil aircraft. With the data retrieved from the FDR, the last moments of the flight before the accident can be reconstructed. Constructing the analog of the FDR for software would be beneficial because it is often very difficult to determine the precise cause of a failure when dealing with complex software systems. This is largely because insufficient or inappropriate information has been retained to permit the reconstruction of the circumstances that led to the failure. This research effort presents the Software Black Box (SBB), which constitutes a framework that facilitates the investigation and understanding of software failures. The SBB specifies a mechanism to capture the essentials of an executing program, and provides a reconstruction technique that allows the generation of the scenarios that may have led to the failure. The SBB architecture, operation, advantages, limitations, and potential are shown in this document.
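The capture-and-reconstruct idea can be sketched, under assumptions, as a bounded ring buffer of recent execution events: old events are discarded as new ones arrive, and after a failure the surviving tail is replayed oldest-first. The class name, event shape, and interface below are hypothetical, not the SBB's actual design.

```python
# Illustrative sketch of a "software black box": a bounded buffer of recent
# module-call events for post-failure reconstruction. Interface is assumed.
from collections import deque

class SoftwareBlackBox:
    def __init__(self, capacity=1000):
        # Oldest events are dropped automatically once capacity is reached,
        # mirroring an FDR's fixed-length recording loop.
        self._events = deque(maxlen=capacity)

    def record(self, module, event):
        """Capture one execution event (e.g. a module entry or exit)."""
        self._events.append((module, event))

    def reconstruct(self, last=None):
        """Return the most recent events, oldest first, for failure analysis."""
        events = list(self._events)
        return events if last is None else events[-last:]
```

Keeping the buffer bounded is the design choice that makes continuous instrumentation affordable: only the window of events preceding the failure, the part an investigator actually needs, is retained.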

3 citations