
Showing papers by "Betty H. C. Cheng" published in 2021


Journal ArticleDOI
TL;DR: The models and data framework demystifies the different roles that models and data play in software development and operation and clarifies where machine learning and artificial intelligence techniques could be used.
Abstract: The models and data framework demystifies the different roles that models and data play in software development and operation and clarifies where machine learning and artificial intelligence techniques could be used.

19 citations


Proceedings ArticleDOI
01 May 2021
TL;DR: In this article, an approach is proposed to develop a more robust safety-critical learning-enabled system by predicting its learned behavior when exposed to uncertainty, thereby enabling mitigating countermeasures for predicted failures.
Abstract: Since deep learning systems do not generalize well when training data is incomplete and lacks coverage of corner cases, it is difficult to ensure the robustness of safety-critical self-adaptive systems with deep learning components. Stakeholders require a reasonable level of confidence that a safety-critical system will behave as expected in all contexts. However, uncertainty in the behavior of safety-critical Learning-Enabled Systems (LESs) arises when run-time contexts deviate from training and validation data. To this end, this paper proposes an approach to develop a more robust safety-critical LES by predicting its learned behavior when exposed to uncertainty and thereby enabling mitigating countermeasures for predicted failures. By combining evolutionary computation with machine learning, an automated method is introduced to assess and predict the behavior of an LES when faced with previously unseen environmental conditions. By experimenting with Deep Neural Networks (DNNs) under a variety of adverse environmental changes, the proposed method is compared to a Monte Carlo (i.e., random sampling) method. Results indicate that where Monte Carlo sampling fails to capture uncommon system behavior, the proposed method trains behavior models more effectively while requiring fewer training examples.
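The combination of evolutionary search with a learned behavior model described in this abstract might be sketched roughly as follows. This is only an illustration under assumptions: the environmental parameters, the `evaluate_system` stub, and the nearest-neighbor behavior model are placeholders invented for the example, not the authors' implementation.

```python
import random
from statistics import mean

# Hypothetical environmental parameters: each candidate encodes an adverse
# condition applied to the system's inputs (e.g., fog density, occlusion size).
PARAM_RANGES = {"fog": (0.0, 1.0), "occlusion": (0.0, 0.5), "brightness": (-0.4, 0.4)}

def evaluate_system(env):
    """Placeholder for running the learning-enabled system under `env` and
    measuring a failure score (e.g., 1 - accuracy). Toy surrogate so the
    sketch runs end to end."""
    return min(1.0, 0.8 * env["fog"] + 1.2 * env["occlusion"] + abs(env["brightness"]))

def random_env():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def mutate(env, rate=0.2):
    child = dict(env)
    for k, (lo, hi) in PARAM_RANGES.items():
        if random.random() < rate:
            child[k] = min(hi, max(lo, child[k] + random.gauss(0, 0.1 * (hi - lo))))
    return child

def evolve(pop_size=20, generations=15):
    """Evolve environments toward high failure scores, archiving every evaluation."""
    population = [random_env() for _ in range(pop_size)]
    archive = []  # (environment, failure score) pairs later used as the behavior model
    for _ in range(generations):
        scored = [(env, evaluate_system(env)) for env in population]
        archive.extend(scored)
        scored.sort(key=lambda pair: pair[1], reverse=True)
        parents = [env for env, _ in scored[: pop_size // 2]]
        population = parents + [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
    return archive

def predict_failure(archive, env, k=5):
    """k-nearest-neighbor behavior model: predict the failure score of a
    previously unseen environment from the archived evaluations."""
    def dist(a, b):
        return sum((a[key] - b[key]) ** 2 for key in PARAM_RANGES) ** 0.5
    nearest = sorted(archive, key=lambda pair: dist(pair[0], env))[:k]
    return mean(score for _, score in nearest)

if __name__ == "__main__":
    archive = evolve()
    unseen = {"fog": 0.9, "occlusion": 0.4, "brightness": 0.3}
    print(f"Predicted failure score for unseen environment: {predict_failure(archive, unseen):.2f}")
```

In the paper's setting, the evaluation step would exercise the DNN-based system under simulated adverse conditions rather than a toy surrogate, and the resulting behavior model would inform mitigating countermeasures.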

9 citations


Proceedings ArticleDOI
01 Oct 2021
TL;DR: In this paper, a model-driven approach is proposed to manage self-adaptation of a learning-enabled system (LES) to account for run-time contexts in which the learned behavior of its learning-enabled components (LECs) cannot be trusted.
Abstract: Increasingly, safety-critical systems include artificial intelligence and machine learning components (i.e., Learning-Enabled Components (LECs)). However, when behavior is learned in a training environment that fails to fully capture real-world phenomena, the response of an LEC to untrained phenomena is uncertain, and therefore cannot be assured as safe. Automated methods are needed for self-assessment and adaptation to decide when learned behavior can be trusted. This work introduces a model-driven approach to manage self-adaptation of a Learning-Enabled System (LES) to account for run-time contexts for which the learned behavior of LECs cannot be trusted. The resulting framework enables an LES to monitor and evaluate goal models at run time to determine whether or not LECs can be expected to meet functional objectives. Using this framework enables stakeholders to have more confidence that LECs are used only in contexts comparable to those validated at design time.
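The run-time trust decision described in this abstract can be pictured with a small sketch. It is a minimal illustration under assumptions: the goal names, the validated operating envelope, and the fallback behavior are invented for the example and do not reflect the paper's actual goal model or framework API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Goal:
    """A leaf goal with a predicate over the monitored run-time context
    (names and predicates here are illustrative assumptions)."""
    name: str
    is_satisfied: Callable[[dict], bool]

# Hypothetical envelope of conditions under which the LEC was validated at design time.
VALIDATED = {"max_fog": 0.3, "min_illumination": 0.5}

goals = [
    Goal("FogWithinValidatedRange", lambda ctx: ctx["fog"] <= VALIDATED["max_fog"]),
    Goal("IlluminationSufficient", lambda ctx: ctx["illumination"] >= VALIDATED["min_illumination"]),
]

def lec_can_be_trusted(context: dict) -> bool:
    """Evaluate the goal model against the monitored context; the LEC is used
    only if every goal tied to its validated operating envelope holds."""
    return all(goal.is_satisfied(context) for goal in goals)

def adapt(context: dict) -> str:
    if lec_can_be_trusted(context):
        return "use-LEC"                  # learned behavior runs in a validated context
    return "use-fallback-controller"      # conservative, non-learned behavior

if __name__ == "__main__":
    print(adapt({"fog": 0.1, "illumination": 0.9}))  # -> use-LEC
    print(adapt({"fog": 0.6, "illumination": 0.9}))  # -> use-fallback-controller
```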

8 citations


Proceedings ArticleDOI
01 May 2021
TL;DR: In this article, the authors model domain knowledge of the environment as a secondary system to enable design-time verification against documented environmental assumptions (i.e., those elements external to the system), and run-time monitors are used to detect scenarios in the actual environment not specified by the modeled environmental domain knowledge.
Abstract: While verifying adherence to a specification (i.e., specification-based testing) is important, the results are only as valid as the specification itself. Problematically, verifying a system specification must be done within the context of changing or even unknown environmental domain knowledge that could render the specification ineffective or incorrect. This issue is even more apparent in the context of self-adaptive systems, where uncertainty in both the system configuration and environment can impact the validity of the system. This paper introduces a method to explicitly model domain knowledge of the environment as a secondary system to enable design-time verification against documented environmental assumptions (i.e., those elements external to the system). In addition, run-time monitors are used to detect scenarios in the actual environment not specified by the modeled environmental domain knowledge. Rather than simply identifying unexpected inputs, our approach is able to identify run-time violations of the environmental domain knowledge, even when inputs appear valid based on the domain assumptions embedded in the system specification. These violations can then be used to correspondingly update the system and environmental specifications via automated run-time adaptation or subsequent design-time revisions. We illustrate our approach by applying our method to a running example of a goal-based model of a baby monitor.
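A toy version of the run-time monitoring idea, using the baby-monitor example, might look like the sketch below. The specific environmental assumptions (temperature range and rate of change) are invented for illustration; the point is that a reading can satisfy the system specification yet still violate the separately modeled environmental domain knowledge.

```python
from typing import Callable, List, Tuple

# Environmental domain knowledge modeled separately from the system specification.
# Each assumption is a named predicate over a short history of observations.
# The concrete assumptions below are invented for this sketch.
EnvAssumption = Tuple[str, Callable[[List[float]], bool]]

ASSUMPTIONS: List[EnvAssumption] = [
    # Suppose the system spec only requires readings in 10-40 C, so both checks
    # below can fail on inputs that the specification itself would accept.
    ("nursery temperature stays within 15-30 C",
     lambda temps: 15.0 <= temps[-1] <= 30.0),
    ("temperature changes by less than 2 C per sample",
     lambda temps: len(temps) < 2 or abs(temps[-1] - temps[-2]) < 2.0),
]

def monitor(temps: List[float]) -> List[str]:
    """Return the environmental assumptions violated by the latest observation."""
    return [name for name, holds in ASSUMPTIONS if not holds(temps)]

if __name__ == "__main__":
    history = [21.0, 21.5, 27.0]  # jump of 5.5 C: valid per spec, violates domain knowledge
    violations = monitor(history)
    if violations:
        print("Flag for run-time adaptation or specification revision:", violations)
```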

4 citations


Journal ArticleDOI
TL;DR: In this paper, an evolution-based technique is proposed to assist developers with uncovering limitations in existing data when previously unseen environmental phenomena are introduced; the technique explores diverse contexts for a given environmental condition, and the resulting synthetic data can be used both to assess and to improve the system's robustness to uncertain environmental factors.
Abstract: Data-driven Learning-enabled Systems are limited by the quality of available training data, particularly when trained offline. For systems that must operate in real-world environments, the space of possible conditions that can occur is vast and difficult to comprehensively predict at design time. Environmental uncertainty arises when run-time conditions diverge from design-time training conditions. To address this problem, automated methods can generate synthetic data to fill in gaps for training and test data coverage. We propose an evolution-based technique to assist developers with uncovering limitations in existing data when previously unseen environmental phenomena are introduced. This technique explores unique contexts for a given environmental condition, with an emphasis on diversity. Synthetic data generated by this technique may be used for two purposes: (1) to assess the robustness of a system to uncertain environmental factors and (2) to improve the system’s robustness. This technique is demonstrated to outperform random and greedy methods for multiple adverse environmental conditions applied to image-processing Deep Neural Networks.
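The diversity-driven exploration described here could be approximated by a novelty-search-style loop such as the following sketch. The perturbation parameterization, the novelty metric, and the loop structure are assumptions made for the example rather than the authors' specific algorithm.

```python
import random

# Hypothetical parameterization of one environmental condition (e.g., fog):
# each candidate is a vector of perturbation parameters applied to images.
BOUNDS = [(0.0, 1.0),   # intensity
          (0.0, 1.0),   # spatial extent
          (0.0, 1.0)]   # noise blended into the effect

def random_candidate():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(cand, sigma=0.1):
    return [min(hi, max(lo, x + random.gauss(0, sigma)))
            for x, (lo, hi) in zip(cand, BOUNDS)]

def novelty(cand, archive, k=3):
    """Diversity pressure: mean distance to the k nearest archived candidates."""
    if not archive:
        return float("inf")
    dists = sorted(sum((a - b) ** 2 for a, b in zip(cand, other)) ** 0.5 for other in archive)
    return sum(dists[:k]) / min(k, len(dists))

def generate_diverse_contexts(n_generations=30, pop_size=12):
    """Novelty-search-style loop: favor candidates unlike anything archived so far."""
    population = [random_candidate() for _ in range(pop_size)]
    archive = []
    for _ in range(n_generations):
        scored = sorted(population, key=lambda c: novelty(c, archive), reverse=True)
        archive.extend(scored[:2])            # keep the most novel contexts
        parents = scored[: pop_size // 2]
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return archive

if __name__ == "__main__":
    contexts = generate_diverse_contexts()
    # Each archived parameter vector would then drive an image transformation
    # (e.g., synthetic fog) to augment the DNN's training and test data.
    print(f"Generated {len(contexts)} diverse perturbation settings; sample: {contexts[0]}")
```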

3 citations