
Showing papers by "Charles W. Anderson published in 1999"


Proceedings ArticleDOI
17 Nov 1999
TL;DR: This paper presents and evaluates a stopping rule that can be used to determine when to stop the current testing phase using a given testing technique, and compares savings and quality of testing both with and without using the stopping rule.
Abstract: Testing behavioral models before they are released to the synthesis and logic design phase is a tedious process, to say the least. A common practice is the test-it-to-death approach, in which millions or even billions of vectors are applied and the results are checked for possible bugs. The vectors applied to behavioral models include functional vectors, but a significant portion of the vectors are random in nature, including random combinations of instructions. In this paper, we present and evaluate a stopping rule that can be used to determine when to stop the current testing phase using a given testing technique. We demonstrate the use of the stopping rule on two complex VHDL models that were tested for branch coverage with four different testing phases. We compare savings and quality of testing both with and without using the stopping rule.
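The abstract does not spell out the rule itself, so the following minimal Python sketch only illustrates the general idea of a coverage-saturation stopping rule: stop the current testing phase once a fixed window of consecutive vectors adds no new branch coverage. The run_vector stand-in, the window size, and the toy branch model are all hypothetical, not the authors' actual rule or benchmark.

```python
import random

def run_vector(vector, model_branches):
    """Stand-in for simulating one test vector; returns the branches it hits."""
    # Hypothetical: each vector covers each branch with a small probability.
    return {b for b in model_branches if random.random() < 0.02}

def test_until_saturation(model_branches, window=500, max_vectors=100_000):
    """Apply vectors until `window` consecutive vectors add no new coverage."""
    covered, no_gain = set(), 0
    for n in range(1, max_vectors + 1):
        new = run_vector(n, model_branches) - covered
        covered |= new
        no_gain = 0 if new else no_gain + 1
        if no_gain >= window:            # coverage has saturated: stop this phase
            return n, covered
    return max_vectors, covered

if __name__ == "__main__":
    random.seed(0)
    branches = set(range(200))           # toy model with 200 branches
    applied, covered = test_until_saturation(branches)
    print(f"stopped after {applied} vectors, "
          f"{len(covered)}/{len(branches)} branches covered")
```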

18 citations


Proceedings ArticleDOI
01 Jan 1999
TL;DR: A stopping rule is presented and evaluated that can be used to determine when it is time to switch to a different testing technique, because the current one is not likely to increase criteria fulfilment.
Abstract: When testing software, testers rarely use only one technique to generate tests that, they hope, will fulfill their testing criteria. Malaiya showed that testers switch strategies when testing yield saturates. We present and evaluate a stopping rule that can be used to determine when it is time to switch to a different testing technique, because the current one is not likely to increase criteria fulfilment. We demonstrate use of the stopping rule on a program that is being tested for branch coverage with five different testing techniques. We compare savings and accuracy of stopping both with and without using the stopping rule.
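As a companion to the sketch above, this hypothetical Python example applies the same saturation-style stopping rule to decide when to switch techniques: each technique runs until the rule fires, then the next one takes over. The technique names, hit rates, and reachable-branch sets are illustrative only and are not taken from the paper.

```python
import random

def make_technique(name, reachable, hit_rate):
    """Hypothetical testing technique: it can only ever cover `reachable` branches."""
    def run():
        return {b for b in reachable if random.random() < hit_rate}
    return name, run

def test_with_switching(techniques, n_branches, window=50):
    """Run each technique until the stopping rule fires, then switch to the next."""
    covered = set()
    for name, run in techniques:
        no_gain = 0
        while no_gain < window:          # stopping rule for the current technique
            new = run() - covered
            covered |= new
            no_gain = 0 if new else no_gain + 1
        print(f"{name}: saturated at {len(covered)}/{n_branches} branches")
    return covered

if __name__ == "__main__":
    random.seed(0)
    techniques = [
        make_technique("random", set(range(0, 150)), 0.05),
        make_technique("boundary", set(range(100, 200)), 0.05),
        make_technique("functional", set(range(150, 260)), 0.05),
        make_technique("state-based", set(range(200, 280)), 0.05),
        make_technique("mutation-guided", set(range(250, 300)), 0.05),
    ]
    test_with_switching(techniques, 300)
```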

16 citations


Book ChapterDOI
02 Jun 1999
TL;DR: In this paper, the authors introduce temporal neighborhoods as small groups of states that experience frequent intra-group transitions during on-line sampling, and form basis functions along these temporal neighborhoods.
Abstract: To avoid the curse of dimensionality, function approximators are used in reinforcement learning to learn value functions for individual states. In order to make better use of computational resources (basis functions) many researchers are investigating ways to adapt the basis functions during the learning process so that they better fit the value-function landscape. Here we introduce temporal neighborhoods as small groups of states that experience frequent intra-group transitions during on-line sampling. We then form basis functions along these temporal neighborhoods. Empirical evidence is provided which demonstrates the effectiveness of this scheme. We discuss a class of RL problems for which this method might be plausible.
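A minimal sketch of the idea in Python, assuming transitions are simply counted during on-line sampling and states are merged greedily whenever their mutual transition count reaches a threshold, with one binary basis function per resulting group; the chapter's actual construction of temporal neighborhoods and basis functions may differ.

```python
from collections import defaultdict

import numpy as np

def count_transitions(trajectories):
    """Count undirected state-to-state transitions observed during sampling."""
    counts = defaultdict(int)
    for traj in trajectories:
        for s, s_next in zip(traj, traj[1:]):
            counts[frozenset((s, s_next))] += 1
    return counts

def temporal_neighborhoods(states, counts, threshold=3):
    """Greedily merge states whose mutual transition count reaches `threshold`."""
    group_of = {s: {s} for s in states}
    for pair, c in sorted(counts.items(), key=lambda kv: -kv[1]):
        if c < threshold or len(pair) < 2:
            continue
        a, b = tuple(pair)
        if group_of[a] is not group_of[b]:
            merged = group_of[a] | group_of[b]
            for s in merged:
                group_of[s] = merged
    return sorted({frozenset(g) for g in group_of.values()}, key=sorted)

def basis_features(state, neighborhoods):
    """One binary basis function (indicator) per temporal neighborhood."""
    return np.array([1.0 if state in g else 0.0 for g in neighborhoods])

if __name__ == "__main__":
    trajectories = [[0, 1, 0, 1, 2, 3, 2, 3, 2], [1, 0, 1, 2, 3, 3, 2]]
    counts = count_transitions(trajectories)
    hoods = temporal_neighborhoods({0, 1, 2, 3}, counts, threshold=3)
    print(hoods)                     # two neighborhoods: {0, 1} and {2, 3}
    print(basis_features(0, hoods))  # feature vector for state 0
```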

9 citations


Book ChapterDOI
02 Jun 1999
TL;DR: It is shown how independent components analysis and its extension for sub-Gaussian sources, extended ICA (eICA), can be applied to accurately classify cognitive tasks with eye blink contaminated EEG recordings.
Abstract: Electroencephalography (EEG) has been used extensively for classifying cognitive tasks. Many investigators have demonstrated classification accuracies well over 90% for some combinations of cognitive tasks, signal transformations, and classification methods. Unfortunately, EEG data is prone to significant interference from a wide variety of artifacts, particularly eye blinks. Most methods for classifying cognitive tasks with EEG data simply discard time windows containing eye blink artifacts. However, future applications of EEG-based cognitive task classification should not be hindered by eye blinks. The value of an EEG-controlled human-computer interface, for instance, would be severely diluted if it did not work in the presence of eye blinks. Fortunately, recent advances in blind signal separation algorithms and their applications to EEG data mitigate the artifact contamination issue. In this paper, we show how independent components analysis (ICA) and its extension for sub-Gaussian sources, extended ICA (eICA), can be applied to accurately classify cognitive tasks with eye blink contaminated EEG recordings.
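A hypothetical sketch of this kind of artifact-removal step in Python, using scikit-learn's FastICA as a stand-in for the extended ICA (eICA) algorithm discussed in the chapter; the blink-detection heuristic (pick the highest-kurtosis source) and the synthetic data are illustrative only.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def remove_blink_component(eeg):
    """eeg: array of shape (n_samples, n_channels). Returns cleaned EEG."""
    ica = FastICA(random_state=0)
    sources = ica.fit_transform(eeg)              # (n_samples, n_components)
    # Eye blinks are spiky, so the blink source tends to have the highest kurtosis.
    blink = int(np.argmax(kurtosis(sources, axis=0)))
    sources[:, blink] = 0.0                       # zero out the blink source
    return ica.inverse_transform(sources)         # project back to channel space

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_samples, n_channels = 2000, 6               # 8 s of 6-channel EEG at 250 Hz
    brain = rng.standard_normal((n_samples, n_channels))
    blink = np.zeros(n_samples)
    blink[500:520] = 40.0                         # synthetic eye-blink spike
    mixed = brain + np.outer(blink, [1.0, 0.8, 0.3, 0.2, 0.1, 0.1])
    cleaned = remove_blink_component(mixed)
    print(np.ptp(mixed[:, 0]), np.ptp(cleaned[:, 0]))  # channel range before/after
```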

5 citations


Book ChapterDOI
02 Jun 1999
TL;DR: Feedforward neural networks are trained to classify half-second segments of six-channel EEG data into one of five classes corresponding to five mental tasks performed by one subject, and cluster analysis of the resulting hidden-unit weight vectors suggests which electrodes and representation components are most relevant to the classification problem.
Abstract: Feedforward neural networks are trained to classify half-second segments of six-channel EEG data into one of five classes corresponding to five mental tasks performed by one subject. Two- and three-layer neural networks are trained on a 128-processor SIMD computer using 10-fold cross-validation and early stopping to limit over-fitting. Four representations of the EEG signals, based on autoregressive (AR) models and Fourier transforms, are compared. Using the AR representation and averaging over consecutive segments, an average of 72% of the test segments are correctly classified; for some test sets 100% are correctly classified. Cluster analysis of the resulting hidden-unit weight vectors suggests which electrodes and representation components are the most relevant to the classification problem.
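A hypothetical sketch of the pipeline the abstract describes, assuming an order-6 AR fit per channel of each half-second window, with scikit-learn's MLPClassifier (early stopping, 10-fold cross-validation) standing in for the paper's SIMD-trained networks; the synthetic data is only a placeholder for real EEG recordings.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def ar_coefficients(x, order=6):
    """Least-squares AR fit: predict x[t] from the previous `order` samples."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    return np.linalg.lstsq(X, x[order:], rcond=None)[0]

def window_features(window, order=6):
    """window: (n_samples, n_channels) -> concatenated per-channel AR coefficients."""
    return np.concatenate([ar_coefficients(window[:, c], order)
                           for c in range(window.shape[1])])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_windows, n_samples, n_channels, n_tasks = 200, 125, 6, 5
    windows = rng.standard_normal((n_windows, n_samples, n_channels))  # placeholder EEG
    labels = rng.integers(0, n_tasks, n_windows)                       # placeholder task labels
    X = np.array([window_features(w) for w in windows])
    clf = MLPClassifier(hidden_layer_sizes=(20,), early_stopping=True,
                        max_iter=2000, random_state=0)
    print(cross_val_score(clf, X, labels, cv=10).mean())               # 10-fold CV accuracy
```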

1 citation