
Showing papers on "Decision tree model published in 1992"



Journal ArticleDOI
TL;DR: It is shown that the functional capability of distributed hierarchical multicomponent systems (networks) can be described by a directed rooted tree model based on a fuzzy graph formulation.

105 citations


Proceedings ArticleDOI
22 Jun 1992
TL;DR: The authors obtain a special case of Lovász's fractional cover measure and use it to completely characterize amortized nondeterministic communication complexity; for deterministic complexity they discuss two approaches and obtain partial results with each.
Abstract: It is possible to view communication complexity as the solution of an integer programming problem. The authors relax this integer programming problem to a linear programming problem and try to deduce from it information regarding the original communication complexity question. This approach works well for nondeterministic communication complexity: in this case the authors obtain a special case of Lovász's fractional cover measure and use it to completely characterize the amortized nondeterministic communication complexity. In the case of deterministic complexity the situation is more complicated. The authors discuss two attempts, and obtain some results using each of them.

66 citations
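
The LP relaxation described in this abstract is concrete enough to sketch. Below is a minimal illustration, not from the paper: it builds a toy 4x4 communication matrix for "not equal", enumerates all 1-monochromatic rectangles, and solves the fractional cover LP with scipy; the function choice and all names are illustrative assumptions.

    import itertools

    import numpy as np
    from scipy.optimize import linprog

    # Toy communication matrix: f(x, y) = 1 iff x != y, on 4 values each.
    n = 4
    M = np.array([[int(x != y) for y in range(n)] for x in range(n)])
    ones = [(i, j) for i in range(n) for j in range(n) if M[i, j] == 1]

    # Enumerate every 1-monochromatic rectangle S x T (all entries 1).
    subsets = [s for k in range(1, n + 1)
               for s in itertools.combinations(range(n), k)]
    rects = [(S, T) for S in subsets for T in subsets
             if all(M[i, j] == 1 for i in S for j in T)]

    # Fractional cover LP: minimize total rectangle weight subject to
    # every 1-entry receiving cover weight >= 1 (Lovász's relaxation).
    A_ub = [[-1.0 if (i in S and j in T) else 0.0 for S, T in rects]
            for i, j in ones]
    res = linprog(c=[1.0] * len(rects), A_ub=A_ub,
                  b_ub=[-1.0] * len(ones), bounds=(0, None))
    print("fractional cover number:", round(res.fun, 3))

Per the characterization in the abstract, the logarithm of this LP optimum governs the amortized nondeterministic communication complexity.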


Journal ArticleDOI
James Renegar1
TL;DR: This series of papers presents a complete development and complexity analysis of a decision method, and a quantifier elimination method, for the first order theory of the reals.

55 citations


Journal ArticleDOI
TL;DR: An axiomatic approach to defining program complexity using complexity rankings and measures, which has many applications, including evaluating and classifying existing complexity measures and serving as criteria for complexity measure selection.

54 citations


Journal ArticleDOI
TL;DR: It is shown that relative complexity gives feedback on the same complexity domains that many other metrics do, and developers can save time by choosing one metric to do the work of many.
Abstract: A relative complexity technique that combines the features of many complexity metrics to predict performance and reliability of a computer program is presented. Relative complexity aggregates many similar metrics into a linear compound metric that describes a program. Since relative complexity is a static measure, it is expanded by measuring relative complexity over time to find a program's functional complexity. It is shown that relative complexity gives feedback on the same complexity domains that many other metrics do. Thus, developers can save time by choosing one metric to do the work of many.

49 citations
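
A minimal sketch of a linear compound metric in the spirit of this abstract, assuming a toy module-by-metric table and equal weights (the actual technique derives its weights from an analysis over many metrics; everything below is illustrative):

    import numpy as np

    # Toy table: rows are modules, columns are raw complexity metrics
    # (say, lines of code, cyclomatic number, operand count).
    metrics = np.array([
        [120, 14, 300],
        [ 45,  3,  80],
        [310, 27, 690],
        [ 90,  8, 150],
    ], dtype=float)

    # Standardize each metric so no single scale dominates the compound.
    z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)

    # Aggregate into one linear compound score per module; equal weights
    # here are a placeholder for weights fitted from data.
    weights = np.ones(z.shape[1]) / z.shape[1]
    relative_complexity = z @ weights
    print(relative_complexity)  # one static score per module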


Proceedings ArticleDOI
07 Oct 1992
TL;DR: A previously developed axiomatic model of program complexity is merged with the previously developed decision tree process for an improvement in the ability to identify high cost modules.
Abstract: Identification of high cost modules has been viewed as one mechanism to improve overall system reliability, since such modules tend to produce more than their fair share of problems. A decision tree model has previously been used to identify such modules. In this paper, a previously developed axiomatic model of program complexity is merged with the previously developed decision tree process for an improvement in the ability to identify such modules. This improvement has been tested using data from the NASA Software Engineering Laboratory.

13 citations
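
A hedged sketch of the decision tree side of this approach, using scikit-learn on synthetic module data (the NASA Software Engineering Laboratory data is not reproduced here; the features, the latent cost signal, and the threshold are invented for illustration):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    # Synthetic stand-in for module measurements: columns might be size,
    # a compound complexity score, and change count.
    X = rng.normal(size=(200, 3))
    # Label a module "high cost" when an assumed latent cost signal,
    # driven mostly by the complexity column, crosses a threshold.
    cost = 2.0 * X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=200)
    y = (cost > 1.0).astype(int)

    # A shallow tree keeps the resulting rules readable, which is the
    # point of screening modules with a decision tree.
    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print("training accuracy:", tree.score(X, y))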


Book ChapterDOI
01 Jan 1992
TL;DR: Information-theoretic approaches for lower bound problems are discussed and two applications of Communication Complexity are presented.

12 citations


Book ChapterDOI
01 Jul 1992
TL;DR: Empirical tests show that NFDT performs better than a Classical Decision Tree method, especially when noise is present, and that it can be used to evaluate the quality of concept descriptions issued from a Decision Tree.
Abstract: Most research efforts in Empirical Concept Learning have been devoted to nominal attribute spaces. Using numerically ordered spaces raises specific problems, especially for Top-Down algorithms. A new system, called NFDT (Numerical Flexible Decision Tree), which aims to solve these problems, is presented in this paper. It builds a Flexible Matching function based on a Decision Tree produced by a classical Top-Down Induction algorithm. Empirical tests show that NFDT performs better than a classical Decision Tree method, especially when noise is present, and that it can be used to evaluate the quality of concept descriptions derived from a Decision Tree.

12 citations
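
A rough sketch of flexible matching over a hard decision tree, in the spirit of NFDT but simplified: each leaf's conjunction of threshold tests is scored softly by distance to the thresholds, so borderline examples are not cut off by a hard split. The hand-written tree, the sigmoid softening, and the test point are my own illustrations, not the paper's construction.

    import numpy as np

    # Each leaf: (class, [(feature, threshold, direction)]), where
    # direction = +1 means "feature > threshold" on the path to it.
    leaves = [
        ("pos", [(0, 2.0, +1), (1, 0.5, +1)]),
        ("neg", [(0, 2.0, +1), (1, 0.5, -1)]),
        ("neg", [(0, 2.0, -1)]),
    ]

    def match(x, tests, softness=1.0):
        # Product of sigmoid satisfactions: near 1 when a test is
        # clearly satisfied, about 0.5 at the threshold itself.
        s = 1.0
        for f, t, d in tests:
            s *= 1.0 / (1.0 + np.exp(-d * (x[f] - t) / softness))
        return s

    x = np.array([2.1, 0.45])   # deliberately near both thresholds
    scores = [(match(x, tests), cls) for cls, tests in leaves]
    print(max(scores))          # best flexible match wins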


01 Aug 1992
TL;DR: In this article, the problem of finding an alphabet indexing is shown to be NP-complete, and a local search algorithm is given together with a PLS-completeness result.
Abstract: For two finite disjoint sets P and Q of strings over an alphabet Σ, an alphabet indexing for P, Q by an indexing alphabet Γ with |Γ| < |Σ| is a mapping φ: Σ → Γ satisfying φ̃(P) ∩ φ̃(Q) = ∅, where φ̃: Σ* → Γ* is the homomorphism derived from φ. We defined this notion through experiments of knowledge acquisition from amino acid sequences of proteins by learning algorithms. This paper analyzes the complexity of finding an alphabet indexing. We first show that the problem is NP-complete. Then we give a local search algorithm for this problem and show a result on PLS-completeness.

12 citations
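
The definition above is directly checkable by machine. A brute-force sketch with toy alphabets and string sets of my own choosing; since the paper shows the general problem is NP-complete, exhaustive search like this is only viable at small scale.

    from itertools import product

    SIGMA = "abcd"   # source alphabet
    GAMMA = "01"     # indexing alphabet, smaller than SIGMA
    P = ["abc", "add"]
    Q = ["bca", "dba"]

    def image(phi, strings):
        # Apply the letter map to every string (the derived homomorphism).
        return {"".join(phi[ch] for ch in s) for s in strings}

    # Try every mapping SIGMA -> GAMMA until the images are disjoint.
    for values in product(GAMMA, repeat=len(SIGMA)):
        phi = dict(zip(SIGMA, values))
        if image(phi, P).isdisjoint(image(phi, Q)):
            print("found indexing:", phi)
            break
    else:
        print("no alphabet indexing exists for these sets")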


Journal ArticleDOI
TL;DR: Automatic generation of two knowledge representation models, the symptom tree and the fault-consequence digraph, is synthesized, giving generality, reliability, effectiveness and usefulness in building an expert system for process diagnosis.


Journal ArticleDOI
TL;DR: The notion of admissible models is defined as a function of problem complexity, the number of data points N, and prior belief, and is used to derive general bounds relating classifier complexity with data-dependent parameters such as sample size, class entropy and the optimal Bayes error rate.
Abstract: In this paper we investigate the application of stochastic complexity theory to classification problems. In particular, we define the notion of admissible models as a function of problem complexity, the number of data points N, and prior belief. This allows us to derive general bounds relating classifier complexity with data-dependent parameters such as sample size, class entropy and the optimal Bayes error rate. We discuss the application of these results to a variety of problems, including decision tree classifiers, Markov models for image segmentation, and feedforward multilayer neural network classifiers.
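
A loose illustration of the admissibility idea, assuming a BIC/MDL-style accounting of model cost; this is a simplification for intuition, not the paper's actual bound.

    import numpy as np

    def admissible(k_params, N, class_entropy_bits):
        # A model with k free parameters costs roughly (k/2) * log2(N)
        # bits to describe; it is only worth fitting when that cost is
        # small next to the bits needed to encode the labels themselves.
        model_bits = 0.5 * k_params * np.log2(N)
        data_bits = N * class_entropy_bits
        return model_bits <= data_bits

    N, H = 1000, 0.9   # sample size and assumed class entropy in bits
    for k in (10, 100, 1000, 10_000):
        print(f"k={k:>6}:", "admissible" if admissible(k, N, H) else "excluded")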


Proceedings ArticleDOI
30 Aug 1992
TL;DR: A systematic approach for the evaluation of quadtree complexity that is based on a flexible linkage paradigm is proposed and it is realized that a quadtree may undergo a complexity reduction through node condensation.
Abstract: Complexity of hierarchical representation of images is defined as the total number of nodes in the representation tree. An a priori knowledge of this quantity is of considerable interest in problems involving tree search, storage and transmission of imagery. This paper proposes a systematic approach for the evaluation of quadtree complexity that is based on a flexible linkage paradigm. It is further realized that a quadtree may undergo a complexity reduction through node condensation. This event is fully modeled and absorbed in the expected complexity expression through a multidimensional weighting function. Inspection of the weighting surface provides a clearer view of the interaction of quadtree complexity and the random image model.
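
A minimal empirical sketch of quadtree complexity with node condensation, using uniformly random binary images; the paper derives the expected node count analytically under a flexible linkage model, whereas this only estimates it by sampling.

    import numpy as np

    def quadtree_nodes(img):
        """Count quadtree nodes for a square binary image, condensing a
        node whenever its block is uniform (all 0 or all 1)."""
        if img.min() == img.max() or img.shape[0] == 1:
            return 1   # condensed leaf: uniform block needs no children
        h = img.shape[0] // 2
        return 1 + sum(quadtree_nodes(q) for q in
                       (img[:h, :h], img[:h, h:], img[h:, :h], img[h:, h:]))

    rng = np.random.default_rng(1)
    samples = [quadtree_nodes(rng.integers(0, 2, size=(8, 8)))
               for _ in range(1000)]
    print("empirical expected complexity:", np.mean(samples))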

Journal ArticleDOI
TL;DR: An optimal lower bound on the average time required by any algorithm that merges two sorted lists on Valiant’s parallel computation tree model is proven.

Book ChapterDOI
Hongzhong Wu1
01 Sep 1992
TL;DR: The structure of test complexity classes of uniform tree circuits is explored: the test complexity of a balanced uniform tree circuit is either O(1) or Ω(lg n), and balanced uniform tree circuits based on monotonic functions are all Θ(n^r) testable.
Abstract: This paper explores the structure of test complexity classes of uniform tree circuits. We prove that the test complexity of a balanced uniform tree circuit is either O(1) or Ω(lg n), the test complexity of balanced uniform tree circuits based on commutative functions can be divided into constant, logarithmic and polynomial classes, and balanced uniform tree circuits based on monotonic functions are all Θ(n^r) (r ∈ (0,1]) testable.




01 Jan 1992
TL;DR: This dissertation introduces a newly developed system of algorithms called the Data Analyzing Tree (DAT) which is designed to either reduce the time complexity or produce more accurate results.
Abstract: Data analysis (reconstructability analysis) is applied to a data set with several variables and a function value in order to find the most important factor causing the function values to fall within a desired range. The normal data analysis algorithm [1] finds the most important factor in O(2^n) time. This dissertation introduces a newly developed system of algorithms called the Data Analyzing Tree (DAT) which is designed to either reduce the time complexity or produce more accurate results. DAT-1 uses O(n^2) time to produce the same results as the normal data analysis method, and DAT-2 produces results with a higher fall-into-the-range rate while using the same time complexity as the normal data analysis. Therefore, DAT-1 is suitable for getting quick results, and DAT-2 or a higher-numbered DAT is suitable for getting more accurate results. DATs give more choices of algorithm, so users can choose the appropriate one depending on the circumstances.
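
A hedged sketch of the O(2^n) baseline the dissertation improves on, with invented data and an invented scoring rule: every subset of variables is scored by its best fall-into-the-range rate, and the number of subsets grows exponentially with the number of variables.

    from itertools import combinations

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.integers(0, 2, size=(64, 4))    # 4 binary variables
    f = 3 * X[:, 2] + rng.normal(size=64)   # variable 2 drives f
    in_range = f > 1.5                      # the desired range

    best = (0.0, None)
    for k in range(1, X.shape[1] + 1):
        for subset in combinations(range(X.shape[1]), k):
            cols = list(subset)
            # Score a factor by the best fall-into-the-range rate over
            # the observed settings of the chosen variables.
            for setting in {tuple(row) for row in X[:, cols]}:
                mask = (X[:, cols] == setting).all(axis=1)
                rate = in_range[mask].mean()
                if rate > best[0]:
                    best = (rate, subset)
    print("most important factor:", best[1], "rate:", round(best[0], 2))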