Topic

Decision tree model

About: Decision tree model is a research topic. Over the lifetime, 2256 publications have been published within this topic receiving 38142 citations.


Papers
Proceedings ArticleDOI
15 Jun 1998
TL;DR: It is observed that, in contrast with Turing complexity, a one-round Merlin-Arthur protocol in the boolean decision tree model is as powerful as a general interactive proof system and, in particular, can simulate a one-round Arthur-Merlin protocol.
Abstract: It is well known that probabilistic boolean decision trees cannot be much more powerful than deterministic ones. Motivated by the question of whether randomization can significantly speed up a nondeterministic computation via a boolean decision tree, we address structural properties of Arthur-Merlin games in this model and prove some lower bounds. We consider two cases of interest: the first when the length of communication between the players is limited, and the second when it is not. While in the first case we can carry over the relations between the corresponding Turing complexity classes, in the second case we observe, in contrast with Turing complexity, that a one-round Merlin-Arthur protocol is as powerful as a general interactive proof system and, in particular, can simulate a one-round Arthur-Merlin protocol. Moreover, we show that a Merlin-Arthur protocol can sometimes be more efficient than an Arthur-Merlin protocol, and than a Merlin-Arthur protocol with limited communication. This is the case for a boolean function whose set of zeroes is a code with high minimum distance and a natural uniformity condition. Such functions provide an example where the Merlin-Arthur complexity is 1 with one-sided error ε ∈ (2/3, 1), but at the same time the nondeterministic decision tree complexity is Ω(n). The latter should be contrasted with another fact we prove: if a function has Merlin-Arthur complexity 1 with one-sided error probability ε ∈ (0, 2/3], then its nondeterministic complexity is bounded by a constant.
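The abstract's contrast between Merlin-Arthur and nondeterministic complexity rests on the standard decision-tree query model. As a minimal illustration of that model only (not of the paper's protocols), the sketch below computes the deterministic decision-tree complexity D(f) of a small boolean function by brute-force recursion; all names are illustrative.

```python
from itertools import product

def tree_depth(f, n, free, assignment):
    """Deterministic decision-tree complexity D(f) by brute-force recursion.
    f: function on n-bit tuples; free: unqueried indices; assignment: dict
    of indices already queried to their answers."""
    # If f is constant on every completion of the current assignment,
    # no further queries are needed.
    vals = set()
    for bits in product((0, 1), repeat=len(free)):
        full = dict(assignment)
        full.update(zip(sorted(free), bits))
        vals.add(f(tuple(full[i] for i in range(n))))
    if len(vals) == 1:
        return 0
    # Otherwise, query the variable minimizing the worst-case remaining depth.
    best = float("inf")
    for i in free:
        worst = 0
        for b in (0, 1):
            worst = max(worst, tree_depth(f, n, free - {i},
                                          {**assignment, i: b}))
        best = min(best, 1 + worst)
    return best

# D(OR_n) = n: in the worst case every variable must be queried.
OR3 = lambda x: int(any(x))
print(tree_depth(OR3, 3, {0, 1, 2}, {}))  # 3
```

Exhaustive recursion like this is only feasible for very small n, but it makes the worst-case query count that the lower bounds in the paper refer to concrete.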

4 citations

Patent
07 Sep 2016
TL;DR: In this paper, a power distribution network situation sensing method based on complex event processing technology and a decision tree is proposed. The method comprises the steps of defining nodes and generating a decision tree model according to node rules and status values; calculating output data using the corresponding formula of each node; comparing the output data of each node with expected values to obtain the state changes of the nodes; and determining the node priorities of the decision tree to obtain an inference result.
Abstract: The invention relates to a power distribution network situation sensing method based on complex event processing technology and a decision tree. The method comprises the steps of: defining nodes and generating a decision tree model according to node rules and status values; obtaining the input data of the decision tree model; calculating output data using the corresponding formula of each node; comparing the output data of each node with expected values to obtain the state changes of the nodes; and determining the node priorities of the decision tree model to obtain an inference result. The method greatly improves the capability of a power distribution network to process mass information and grasp key situations under disaster conditions, and provides decision support for the power distribution network in handling natural disasters.
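The patent's inference loop (node formulas produce outputs, outputs are compared with expected values to flag state changes, and the highest-priority changed node yields the result) can be sketched roughly as follows. The `Node` class, the example node names, and the tolerance mechanism are all assumptions for illustration, not details from the patent.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Node:
    name: str
    priority: int                       # higher value = higher priority
    formula: Callable[[Dict], float]    # the node's output formula
    expected: float
    tolerance: float = 1e-6

    def changed(self, inputs: Dict) -> bool:
        # A node's state changes when its output deviates from expectation.
        return abs(self.formula(inputs) - self.expected) > self.tolerance

def infer(nodes: List[Node], inputs: Dict) -> str:
    """Return the name of the highest-priority node whose state changed."""
    changed = [n for n in nodes if n.changed(inputs)]
    if not changed:
        return "normal"
    return max(changed, key=lambda n: n.priority).name

# Hypothetical nodes for a feeder-load and a line-voltage check.
nodes = [
    Node("feeder_load", 1, lambda x: x["load"], expected=0.8, tolerance=0.1),
    Node("line_outage", 2, lambda x: x["voltage"], expected=1.0, tolerance=0.05),
]
print(infer(nodes, {"load": 0.85, "voltage": 0.6}))  # line_outage
```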

4 citations

Journal Article
TL;DR: A dynamic method based on tree growth is proposed which can generate three-dimensional virtual objects whose visual quality rivals real trees at acceptable algorithmic complexity.
Abstract: The basic goal of 3D modeling is to generate three-dimensional virtual objects whose visual quality rivals real ones at acceptable algorithmic complexity. Static approaches to modeling 3D trees generally exploit the fractal and stochastic features of trees. We propose a dynamic method based on tree growth. A tree's growing process is affected by inner genes and outer circumstances, so a tree model can be generated by dynamically controlling the tree's growth. The notion of force is introduced in the dynamic modeling system to control the growth process. There are two kinds of force, called inner force and outer force, which simulate life-force and environmental factors respectively. Combined with a fractal mechanism, the system can generate a variety of ordinary tree shapes. Another virtue of the dynamic system is that it can conveniently control the outcome: by adjusting the force arguments, the results can be smoothly changed and deliberately controlled.
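The abstract does not give the paper's formulas, but the inner-force/outer-force idea can be sketched in 2D under assumed formulations: an inner branching angle with random jitter drives fractal recursion, while an outer force bends every branch toward a common bias direction. All parameters here are illustrative.

```python
import math
import random

def grow(x, y, angle, length, depth, outer=0.15, segments=None):
    """Recursive fractal growth. 'Inner force': children deviate from the
    parent direction by a fixed branching angle plus noise. 'Outer force':
    every branch is pulled toward the bias direction (angle 0, i.e. +x),
    simulating an environmental factor such as wind or light."""
    if segments is None:
        segments = []
    if depth == 0:
        return segments
    angle = (1 - outer) * angle            # outer force bends the branch
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))
    for spread in (-0.5, 0.5):             # inner force: two children
        jitter = random.uniform(-0.1, 0.1)
        grow(x2, y2, angle + spread + jitter, length * 0.7, depth - 1,
             outer, segments)
    return segments

random.seed(0)
tree = grow(0.0, 0.0, math.pi / 2, 10.0, depth=5)
print(len(tree))  # full binary recursion of depth 5: 2^5 - 1 = 31 segments
```

Raising `outer` smoothly leans the whole tree toward +x, which matches the abstract's claim that adjusting force arguments changes the result smoothly.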

4 citations

Book ChapterDOI
13 Jun 2005
TL;DR: A contingency-based approach to ensemble classification is described. It finds that decision tree models can significantly improve the identification of highly-performing ensembles, and that the input parameters for a decision tree depend on the characteristics and demands of the decision problem, as well as the objectives of the decision maker.
Abstract: This paper describes a contingency-based approach to ensemble classification. Motivated by a business marketing problem, we explore the use of decision tree models, along with diversity measures and other elements of the task domain, to identify highly-performing ensemble classification models. Working from generated data sets, we found that 1) decision tree models can significantly improve the identification of highly-performing ensembles, and 2) the input parameters for a decision tree are dependent on the characteristics and demands of the decision problem, as well as the objectives of the decision maker.
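The abstract mentions diversity measures as inputs for identifying highly-performing ensembles without naming them. One standard such measure is pairwise disagreement, sketched below for 0/1 prediction vectors; whether the paper uses this exact measure is not stated in the abstract.

```python
from itertools import combinations

def disagreement(preds_a, preds_b):
    """Fraction of examples on which two classifiers disagree."""
    return sum(a != b for a, b in zip(preds_a, preds_b)) / len(preds_a)

def ensemble_diversity(all_preds):
    """Average pairwise disagreement over the ensemble's members."""
    pairs = list(combinations(all_preds, 2))
    return sum(disagreement(a, b) for a, b in pairs) / len(pairs)

# Toy predictions of three classifiers on five examples.
preds = [
    [1, 0, 1, 1, 0],   # classifier 1
    [1, 1, 1, 0, 0],   # classifier 2
    [0, 0, 1, 1, 1],   # classifier 3
]
print(ensemble_diversity(preds))
```

A feature vector of such diversity scores, together with task-domain attributes, is the kind of input the paper's decision tree could consume to predict ensemble performance.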

4 citations

01 Jan 2000
TL;DR: This thesis investigates variable complexity algorithms and proposes two fast algorithms based on fast distance metric computation or fast matching approaches that allow computational scalability in distance computation with graceful degradation in the overall image quality.
Abstract: In this thesis we investigate variable complexity algorithms. The complexities of these algorithms are input-dependent, i.e., the type of input determines the complexity required to complete the operation. The key idea is to enable the algorithm to classify the inputs so that unnecessary operations can be pruned. The goal of the design of a variable complexity algorithm is to minimize the average complexity over all possible input types, including the cost of classifying the inputs. We study two of the fundamental operations in standard image/video compression, namely, the discrete cosine transform (DCT) and motion estimation (ME). We first explore variable complexity in the inverse DCT by testing for zero inputs. The test structure can also be optimized for minimal total complexity for a given input statistics. In this case, the larger the number of zero coefficients, i.e., the coarser the quantization stepsize, the greater the complexity reduction. As a consequence, tradeoffs between complexity and distortion can be achieved. For the direct DCT we propose a variable complexity fast approximation algorithm. The variable complexity part computes only the DCT coefficients that will not be quantized to zero according to the classification results (in addition, the quantizer can benefit from this information by bypassing its operations for zero coefficients). The classification structure can also be optimized for a given input statistics. On the other hand, the fast approximation part approximates the DCT coefficients with much less complexity. The complexity can be scaled, i.e., it allows more complexity reduction at lower-quality coding, and can be made quantization-dependent to keep the distortion degradation at a certain level. In video coding, ME is the part of the encoder that requires the most complexity, and therefore achieving significant complexity reduction in ME has always been a goal in video coding research.
We propose two fast algorithms based on fast distance metric computation or fast matching approaches. Both of our algorithms allow computational scalability in distance computation with graceful degradation in the overall image quality. The first algorithm exploits hypothesis testing in fast metric computation whereas the second algorithm uses thresholds obtained from partial distances in hierarchical candidate elimination. (Abstract shortened by UMI.)
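The second algorithm's idea of eliminating candidates via partial distances is a standard early-termination pattern in block matching, which can be sketched as follows: while accumulating the distance for a candidate block, abort as soon as the running partial sum already meets the best distance found so far. The sum-of-absolute-differences metric and flat block layout are illustrative assumptions.

```python
def sad_with_early_exit(block, candidate, best_so_far):
    """Sum of absolute differences, aborting once it cannot win."""
    partial = 0
    for a, b in zip(block, candidate):
        partial += abs(a - b)
        if partial >= best_so_far:   # partial distance already too large:
            return None              # candidate eliminated early
    return partial

def best_match(block, candidates):
    """Full search over candidates with partial-distance elimination."""
    best, best_idx = float("inf"), -1
    for i, cand in enumerate(candidates):
        d = sad_with_early_exit(block, cand, best)
        if d is not None:
            best, best_idx = d, i
    return best_idx, best

block = [10, 20, 30, 40]
candidates = [[12, 25, 90, 0], [11, 21, 29, 41], [10, 20, 30, 40]]
print(best_match(block, candidates))  # (2, 0)
```

The earlier a good match is found, the more aggressively later candidates are pruned, which is exactly the source of the input-dependent complexity the thesis studies.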

4 citations


Network Information
Related Topics (5)
- Cluster analysis: 146.5K papers, 2.9M citations (80% related)
- Artificial neural network: 207K papers, 4.5M citations (78% related)
- Fuzzy logic: 151.2K papers, 2.3M citations (77% related)
- The Internet: 213.2K papers, 3.8M citations (77% related)
- Deep learning: 79.8K papers, 2.1M citations (77% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    10
2022    24
2021    101
2020    163
2019    158
2018    121