
Showing papers on "Decision tree model published in 2012"


Journal ArticleDOI
TL;DR: The lightweight IDS has been developed by using a wrapper based feature selection algorithm that maximizes the specificity and sensitivity of the IDS as well as by employing a neural ensemble decision tree iterative procedure to evolve optimal features.
Abstract: The objective of this paper is to construct a lightweight Intrusion Detection System (IDS) aimed at detecting anomalies in networks. The crucial part of building a lightweight IDS lies in the preprocessing of network data, the identification of important features, and the design of an efficient learning algorithm that classifies normal and anomalous patterns. Therefore, in this work the design of the IDS is investigated from these three perspectives. The goals of this paper are (i) removing redundant instances so that the learning algorithm is unbiased; (ii) identifying a suitable subset of features by employing a wrapper based feature selection algorithm; and (iii) realizing the proposed IDS with a neurotree to achieve better detection accuracy. The lightweight IDS has been developed by using a wrapper based feature selection algorithm that maximizes the specificity and sensitivity of the IDS, as well as by employing a neural ensemble decision tree iterative procedure to evolve optimal features. An extensive experimental evaluation of the proposed approach with a family of six decision tree classifiers, namely Decision Stump, C4.5, Naive Bayes Tree, Random Forest, Random Tree and Representative Tree, is presented for the detection of anomalous network patterns.
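
The wrapper idea above — scoring candidate feature subsets by the detection performance of the learner itself, here sensitivity plus specificity — can be illustrated with a minimal sketch. This is a generic greedy forward search with a scikit-learn decision tree standing in for the neurotree; the data arrays X and y, the depth limit and the search strategy are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: wrapper-style forward feature selection that scores each
# candidate subset by sensitivity + specificity of a decision tree classifier.
# X (features) and y (0 = normal, 1 = anomalous) are hypothetical numpy arrays.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

def sens_plus_spec(X, y, features):
    """Score a feature subset by cross-validated sensitivity + specificity."""
    clf = DecisionTreeClassifier(max_depth=8, random_state=0)
    pred = cross_val_predict(clf, X[:, features], y, cv=5)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    return tp / (tp + fn) + tn / (tn + fp)

def greedy_wrapper_selection(X, y, max_features=10):
    """Greedily add the feature that most improves the wrapper score."""
    selected, remaining, best_score = [], list(range(X.shape[1])), 0.0
    while remaining and len(selected) < max_features:
        score, f = max((sens_plus_spec(X, y, selected + [f]), f) for f in remaining)
        if score <= best_score:          # no further improvement
            break
        best_score = score
        selected.append(f)
        remaining.remove(f)
    return selected, best_score
```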

277 citations


Journal ArticleDOI
TL;DR: This study contributes to the spatially explicit assessment of an important structural aspect of dry heathland vegetation, namely the heather age structure, using Airborne Hyperspectral line-Scanner radiometer data of the Kalmthoutse Heide in northern Belgium.

89 citations


Journal ArticleDOI
TL;DR: In this paper, a formwork method selection model based on boosted decision trees is proposed to assist the practitioner's decision making in tall building construction with reinforced concrete structures, and the proposed model was compared with an artificial neural network model and a decision tree model.

56 citations


Journal ArticleDOI
30 Mar 2012
TL;DR: This paper discusses the decision tree, one of the most widely used supervised classification techniques, builds its own decision tree, and evaluates the strength of the resulting classification through performance and results analysis.
Abstract: Data classification means the categorization of data into different categories according to rules. The aim of the proposed research is to extract a kind of "structure" from a sample of objects or, put differently, to learn a concise representation of these data. The classification algorithm presented here learns from the training set and builds a model, and that model is used to classify new objects. This paper discusses the decision tree, one of the most widely used supervised classification techniques, builds its own decision tree, and evaluates the strength of the classification through performance and results analysis.
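
For readers who want a concrete starting point, the train/evaluate/analyze cycle described above looks roughly like the following scikit-learn sketch; the stand-in dataset and parameters are assumptions, not the paper's own experiment.

```python
# Generic sketch: train a decision tree and report standard performance figures.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, classification_report

X, y = load_iris(return_X_y=True)                   # stand-in labelled dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

tree = DecisionTreeClassifier(criterion="entropy", max_depth=4, random_state=42)
tree.fit(X_train, y_train)

y_pred = tree.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))        # per-class precision/recall
```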

53 citations


Proceedings ArticleDOI
12 Aug 2012
TL;DR: A new incremental decision tree algorithm, the incrementally optimized very fast decision tree (iOVFDT), is proposed and evaluated against existing methods in a noisy data stream environment, showing that iOVFDT achieves higher accuracy with a smaller model size.
Abstract: How to extract meaningful information from big data is a popular open problem. The decision tree, which offers a high degree of knowledge interpretability, has been favored in many real-world applications. However, noisy values commonly exist in high-speed data streams, e.g. real-time online data feeds that are prone to interference. When processing big data, it is hard to implement pre-processing and sampling in full batches. To address this tradeoff, this paper proposes a new incremental decision tree algorithm called the incrementally optimized very fast decision tree (iOVFDT). The experiments evaluate the proposed algorithm against existing methods in a noisy data stream environment. Results show that iOVFDT achieves higher accuracy and a smaller model size.
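
The split test at the heart of VFDT-style incremental trees, which iOVFDT builds on, is the Hoeffding bound ε = sqrt(R² ln(1/δ) / (2n)): a leaf splits only when the best attribute's gain exceeds the runner-up's by more than ε. The sketch below shows just that test; the metric range R, δ and the tie threshold are assumed values, and the iOVFDT-specific optimizations are not reproduced.

```python
# Sketch of the Hoeffding-bound split test used by VFDT-style incremental trees.
# R is the range of the split metric (e.g. log2(num_classes) for information gain),
# delta the allowed error probability, n the examples seen at the leaf so far.
import math

def hoeffding_bound(R, delta, n):
    return math.sqrt((R * R * math.log(1.0 / delta)) / (2.0 * n))

def should_split(best_gain, second_best_gain, R, delta, n, tie_threshold=0.05):
    """Split when the best attribute beats the runner-up by more than epsilon,
    or when epsilon is small enough that the two are effectively tied."""
    eps = hoeffding_bound(R, delta, n)
    return (best_gain - second_best_gain > eps) or (eps < tie_threshold)

# Example: 2-class problem, gains estimated from 2000 examples at a leaf.
print(should_split(best_gain=0.32, second_best_gain=0.21,
                   R=1.0, delta=1e-7, n=2000))
```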

44 citations


Journal ArticleDOI
01 Nov 2012
TL;DR: A method to compute developmental stages that approximate the tree's natural growth, enabling users to create rich natural scenes from a small number of static tree models.
Abstract: Given a static tree model, we present a method to compute developmental stages that approximate the tree's natural growth. The tree model is analyzed and a graph-based description of its skeleton is determined. Based on structural similarity, branches are added where pruning has been applied or branches have died off over time. Botanic growth models and allometric rules enable us to produce convincing animations from a young tree that converge to the given model. Furthermore, the user can explore all intermediate stages. By selectively applying the process to parts of the tree, even complex models can be edited easily. This form of reverse engineering enables users to create rich natural scenes from a small number of static tree models.

42 citations


Journal ArticleDOI
TL;DR: A method to evaluate the algorithmic complexity of landscapes is developed here, based on the notion of Kolmogorov complexity (or K-complexity), which can be a descriptor not only of the landscape's structural complexity, but also of its functional complexity.
Abstract: A method to evaluate the algorithmic complexity of landscapes is developed here, based on the notion of Kolmogorov complexity (or K-complexity). The K-complexity of a landscape is calculated from a string x of symbols representing the landscape's features (e.g. land use), whereby each symbol belongs to an alphabet L, and can be defined as the size of the shortest string y that fully describes x. K-complexity presents several useful aspects as a measure of landscape complexity: a) it is a direct measure of complexity and not a surrogate measure, well supported by the literature of informatics; b) it is easy to apply to landscapes of 'small' size; c) it can be used to compare the complexity of two or more landscapes; d) it allows calculation of a landscape's changes in complexity with time; e) it can be a descriptor not only of the landscape's structural complexity, but also of its functional complexity; and f) it makes it possible to distinguish two landscapes with the same diversity but with differ...
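
K-complexity itself is uncomputable, so studies of this kind typically approximate it from above with a lossless compressor. The sketch below uses compressed length as that proxy for landscape strings; this compression-based approximation is a standard stand-in, not necessarily the exact procedure of the paper.

```python
# Approximate the K-complexity of a landscape string by its compressed size.
# Each character is a symbol from the landscape alphabet L (e.g. a land-use code).
import random
import zlib

def k_complexity_proxy(landscape: str) -> int:
    """Upper-bound proxy for K-complexity: length of the zlib-compressed string."""
    return len(zlib.compress(landscape.encode("utf-8"), 9))

random.seed(0)
uniform   = "A" * 100                                            # homogeneous landscape
periodic  = "ABAB" * 25                                          # regular mosaic
scrambled = "".join(random.choice("ABCD") for _ in range(100))   # irregular mosaic

for name, s in [("uniform", uniform), ("periodic", periodic), ("random", scrambled)]:
    print(name, k_complexity_proxy(s))
```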

40 citations


Journal ArticleDOI
TL;DR: The 2N-ary choice tree model accounts for response times and choice probabilities in multi-alternative preferential choice, which implements pairwise comparison of alternatives on weighted attributes into an information sampling process which, in turn, results in a preference process.
Abstract: The 2N-ary choice tree model accounts for response times and choice probabilities in multi-alternative preferential choice. It implements pairwise comparison of alternatives on weighted attributes into an information sampling process which, in turn, results in a preference process. The model provides expected choice probabilities and response time distributions in closed form for optional and fixed stopping times. The theoretical background of the 2N-ary choice tree model is explained in detail with focus on the transition probabilities that take into account constituents of human preferences such as expectations, emotions, or socially influenced attention. Then it is shown how the model accounts for several context-effects observed in human preferential choice like similarity, attraction, and compromise effects and how long it takes, on average, for the decision. The model is extended to deal with more than three choice alternatives. A short discussion on how the 2N-ary choice tree model differs from the multi-alternative decision field theory and the leaky competing accumulator model is provided.

37 citations


Journal ArticleDOI
TL;DR: An advanced review of regression tree methods for mining data streams summarizes the performance results of the reviewed methods and crystallizes 10 requirements for successful implementation of a regression tree algorithm in the data stream mining area.
Abstract: This paper presents an advanced review of regression tree methods for mining data streams. Batch regression tree methods are known for their simplicity, interpretability, accuracy, and efficiency. They use fast divide-and-conquer greedy algorithms that recursively partition the given training data into smaller subsets. The result is a tree-shaped model with splitting rules in the internal nodes and predictions in the leaves. Most batch regression tree methods take a complete dataset and build a model using that data. Generally, this tree model cannot be modified if new data is acquired later. Their successors, the incremental model and interval tree algorithms, are able to build and retrain a model on a step-by-step basis by incorporating new numerical training instances into the model as they become available. Moreover, these algorithms produce even more compact and accurate models than batch regression tree algorithms because they use intervals or functional models with a change detection mechanism, which makes them a more suitable choice for regression analysis of data streams. Finally, this review summarizes the performance results of the reviewed methods and crystallizes 10 requirements for the successful implementation of a regression tree algorithm in the data stream mining area.
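
The step-by-step retraining described above starts from leaf statistics that can be updated one instance at a time. A minimal sketch of such an online leaf follows (Welford's running mean and variance, with the leaf mean as the prediction); the reviewed algorithms add split tests and change detection on top of this, which are not shown.

```python
# Sketch of an online regression-tree leaf: incremental mean/variance (Welford)
# so the model can absorb stream instances without retraining from scratch.
class OnlineLeaf:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # sum of squared deviations from the running mean

    def update(self, y: float) -> None:
        self.n += 1
        delta = y - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (y - self.mean)

    def predict(self) -> float:
        return self.mean       # leaf prediction: running target mean

    def variance(self) -> float:
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

leaf = OnlineLeaf()
for y in [3.1, 2.9, 3.4, 3.0, 2.8]:
    leaf.update(y)
print(leaf.predict(), leaf.variance())
```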

36 citations


Journal ArticleDOI
TL;DR: In this article, a chi-square automatic interaction detector-based algorithm is applied to derive a decision tree using a large activity diary dataset recently collected in the Netherlands, and the results show a satisfactory improvement in the goodness-of-fit of the decision tree model compared to the null model.
Abstract: This study examines the household interactions in the context of the car allocation choice decision in car-deficient households as part of an activity-scheduling process, focusing on non-work tours. A chi-square automatic interaction detector-based algorithm is applied to derive a decision tree using a large activity diary dataset recently collected in the Netherlands. The results show a satisfactory improvement in the goodness-of-fit of the decision tree model compared to the null model. Gender still plays a role. A descriptive analysis indicates that men, more often than women, get the car for non-work tours for which a car allocation decision needs to be made. Tour-level attributes also influence the household car allocation decision for non-work tours. Overall, men exert more influence on the car allocation decision for non-work tours, as indicated by the number of influential variables that relate to males. The developed models will be incorporated in a refinement of the ALBATROSS model - an existing computational process model of activity-travel choice.

31 citations


Journal ArticleDOI
TL;DR: A general framework based on copulas for modeling dependent multivariate uncertainties through the use of a decision tree and an efficient computational method for multivariate decision and risk analysis that can be standardized for convenient application is presented.
Abstract: This paper presents a general framework based on copulas for modeling dependent multivariate uncertainties through the use of a decision tree. The proposed dependent decision tree model allows multiple dependent uncertainties with arbitrary marginal distributions to be represented in a decision tree with a sequence of conditional probability distributions. This general framework could be naturally applied in decision analysis and real options valuations, as well as in more general applications of dependent probability trees. While this approach to modeling dependencies can be based on several popular copula families as we illustrate, we focus on the use of the normal copula and present an efficient computational method for multivariate decision and risk analysis that can be standardized for convenient application.
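
The normal-copula construction can be sketched in three steps: sample correlated standard normals, map them through the normal CDF to dependent uniforms, and invert the desired marginals; the joint samples can then be discretized into the conditional branches of a probability tree. The marginals and the 0.6 correlation below are assumed for illustration and do not come from the paper.

```python
# Sketch: generate dependent uncertainties with a normal (Gaussian) copula.
# Two uncertainties with arbitrary marginals (lognormal cost, triangular demand)
# and an assumed correlation of 0.6 between them.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])

# 1) correlated standard normals  2) normal CDF -> dependent uniforms
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=10_000)
u = stats.norm.cdf(z)

# 3) invert the desired marginals via their percent-point functions
cost   = stats.lognorm.ppf(u[:, 0], s=0.4, scale=100.0)
demand = stats.triang.ppf(u[:, 1], c=0.5, loc=50.0, scale=100.0)

# Discretize into low/medium/high branches to populate a probability tree
cost_branch = np.digitize(cost, np.quantile(cost, [1/3, 2/3]))
p_high_demand_given_high_cost = np.mean(demand[cost_branch == 2] > 110.0)
print(p_high_demand_given_high_cost)
```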

Journal ArticleDOI
TL;DR: This paper presents a method for secure and scalable many-to-one lossy transmission based on asymmetric homomorphisms which enables the root of the tree to compute any mathematical function on the data sent by the leaves.

Proceedings ArticleDOI
07 Jul 2012
TL;DR: The computational complexity analysis of genetic programming was started recently by analyzing simple (1+1) GP algorithms for the problems ORDER and MAJORITY; this paper studies multi-criteria fitness functions that treat the complexity of a syntax tree as a secondary measure, and the expected time until simple multi-objective genetic programming algorithms have computed the Pareto front when taking the complexity of a syntax tree as an equally important objective.
Abstract: The computational complexity analysis of genetic programming (GP) was started recently in [7] by analyzing simple (1+1) GP algorithms for the problems ORDER and MAJORITY. In this paper, we study how taking the complexity as an additional criterion influences the runtime behavior. We consider generalizations of ORDER and MAJORITY and present a computational complexity analysis of (1+1) GP using multi-criteria fitness functions that take into account the original objective and the complexity of a syntax tree as a secondary measure. Furthermore, we study the expected time until simple multi-objective genetic programming algorithms have computed the Pareto front when taking the complexity of a syntax tree as an equally important objective.

Patent
02 Nov 2012
TL;DR: A decision tree model generated from sample data is automatically pruned based on characteristics of nodes or branches in the decision tree or based on artifacts associated with model generation; the nodes may be displayed in different colors, and the colors may be associated with different node questions or answers.
Abstract: A decision tree model is generated from sample data. A visualization system may automatically prune the decision tree model based on characteristics of nodes or branches in the decision tree or based on artifacts associated with model generation. For example, only nodes or questions in the decision tree receiving a largest amount of the sample data may be displayed in the decision tree. The nodes also may be displayed in a manner to more readily identify associated fields or metrics. For example, the nodes may be displayed in different colors and the colors may be associated with different node questions or answers.
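
The display-pruning idea — keeping only nodes that receive a large share of the sample data — can be approximated with scikit-learn's tree internals as below; the 5% threshold and the use of sklearn are illustrative assumptions, not the patented system.

```python
# Sketch of pruning-for-display: keep only decision-tree nodes that receive at
# least a given share of the training samples.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)          # stand-in sample data
clf = DecisionTreeClassifier(random_state=0).fit(X, y)

t = clf.tree_
min_share = 0.05                                     # show nodes with >= 5% of samples
visible = [node for node in range(t.node_count)
           if t.n_node_samples[node] >= min_share * t.n_node_samples[0]]
print(f"{len(visible)} of {t.node_count} nodes kept for display")
```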

Journal ArticleDOI
TL;DR: This article argues that the decision tree has wide application prospects in the on-site sales field, constructs a decision tree based on information gain, and thus produces some useful purchasing behavior rules.

Journal ArticleDOI
TL;DR: It is argued that psychological complexity is easily defined and quantified in terms of change, and this argument is supported with a measure of complexity for binary patterns, which correlates well with a number of existing complexity and randomness measures, both subjective and objective.

Proceedings ArticleDOI
01 Dec 2012
TL;DR: This work presents a novel algorithm named Importance Aided Decision Tree that takes Feature Importance as an additional domain knowledge for decision tree algorithm and uses a novel approach to incorporate this feature importance score into decision tree learning.
Abstract: The decision tree is a widely used supervised learning algorithm due to its many advantages, such as fast non-parametric learning and comprehensibility. However, decision trees require large training sets to learn accurately, because decision tree algorithms recursively partition the data set, which leaves very few instances in the lower levels of the tree. To address this drawback, we present a novel algorithm named Importance Aided Decision Tree (IADT) that takes feature importance as additional domain knowledge. Additional domain knowledge has been shown to enhance the performance of learners. The decision tree algorithm always seeks the most important attributes in each node, so the importance of features is relevant domain knowledge for decision tree algorithms. Our algorithm uses a novel approach to incorporate this feature importance score into decision tree learning, which makes decision trees more accurate and robust. We present theoretical and empirical performance analyses to show that IADT is superior to standard decision tree learning algorithms.
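
The core mechanism — biasing attribute selection toward externally supplied importance scores — can be illustrated with an importance-weighted information gain, as in the sketch below. The blending weight alpha and the toy data are assumptions; the exact weighting used by IADT is not specified here.

```python
# Sketch: attribute selection that blends information gain with an external
# feature-importance score, illustrating the general IADT idea.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    """Information gain of splitting on a discrete attribute index."""
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[attr], []).append(y)
    remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

def best_attribute(rows, labels, importance, alpha=0.5):
    """Pick the attribute maximizing (1 - alpha)*gain + alpha*importance."""
    scores = {a: (1 - alpha) * info_gain(rows, labels, a) + alpha * importance[a]
              for a in range(len(rows[0]))}
    return max(scores, key=scores.get)

rows   = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "hot")]
labels = ["no", "no", "yes", "yes"]
print(best_attribute(rows, labels, importance=[0.9, 0.1]))
```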

Proceedings ArticleDOI
19 Oct 2012
TL;DR: In this paper, a point-based local maxima algorithm was implemented to distinguish single tree from multiple tree components and then the changes were detected by comparing the parameters of corresponding tree components which were matched by a tree to tree matching algorithm using the overlapping of bounding boxes and point to point distances.
Abstract: Light detection and ranging (lidar) provides a promising way of detecting changes in vegetation in three dimensions (3D) because the laser beam may penetrate through the foliage of vegetation. This study aims at the detection of changes in trees in urban areas with a high level of automation using multi-temporal airborne lidar point clouds. Three datasets covering a part of Rotterdam, the Netherlands, have been classified into several classes including trees. A connected components algorithm was applied first to group the points of trees together. The attributes of the components were utilized to differentiate tree components from misclassified non-tree components. A point-based local maxima algorithm was implemented to distinguish single-tree from multiple-tree components. After that, the parameters of trees were derived in two independent ways: a point-based method using 3D alpha shapes and convex hulls, and a model-based method which fits a Pollock tree model to the points. The changes were then detected by comparing the parameters of corresponding tree components, which were matched by a tree-to-tree matching algorithm using the overlap of bounding boxes and point-to-point distances. The results were visualized and statistically analyzed. The difference of parameters and the difference of changes derived from the point-based and model-based methods were both lower than 10%. The comparison of these two methods illustrates the consistency and stability of the parameters. The detected changes show the potential to monitor the growth and pruning of trees.
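
The tree-to-tree matching step relies on the overlap of bounding boxes between epochs. A minimal sketch of that idea for axis-aligned 3D boxes follows; the box representation, the IoU criterion and the greedy matching are illustrative assumptions rather than the authors' exact algorithm.

```python
# Sketch: match tree components between two lidar epochs by the 3D overlap
# (intersection over union) of their axis-aligned bounding boxes.
import numpy as np

def box_iou_3d(a, b):
    """a, b: (min_x, min_y, min_z, max_x, max_y, max_z) of a tree component."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    inter_dims = np.minimum(a[3:], b[3:]) - np.maximum(a[:3], b[:3])
    if np.any(inter_dims <= 0):
        return 0.0
    inter = np.prod(inter_dims)
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return float(inter / (vol_a + vol_b - inter))

def match_trees(epoch1, epoch2, min_iou=0.3):
    """Greedy matching of tree bounding boxes between two epochs."""
    matches = []
    for i, box1 in enumerate(epoch1):
        ious = [box_iou_3d(box1, box2) for box2 in epoch2]
        j = int(np.argmax(ious)) if ious else -1
        if j >= 0 and ious[j] >= min_iou:
            matches.append((i, j, ious[j]))
    return matches

print(match_trees([(0, 0, 0, 4, 4, 10)], [(0.5, 0.3, 0, 4.2, 4.1, 11)]))
```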

Proceedings ArticleDOI
16 Jun 2012
TL;DR: Wang et al. propose a simple yet effective method to learn a hierarchical object shape model consisting of local contour fragments, which represents a category of shapes in the form of an And-Or tree.
Abstract: This paper proposes a simple yet effective method to learn the hierarchical object shape model consisting of local contour fragments, which represents a category of shapes in the form of an And-Or tree. This model extends the traditional hierarchical tree structures by introducing the “switch” variables (i.e. the or-nodes) that explicitly specify production rules to capture shape variations. We thus define the model with three layers: the leaf-nodes for detecting local contour fragments, the or-nodes specifying selection of leaf-nodes, and the root-node encoding the holistic distortion. In the training stage, for optimization of the And-Or tree learning, we extend the concave-convex procedure (CCCP) by embedding the structural clustering during the iterative learning steps. The inference of shape detection is consistent with the model optimization, which integrates the local testings via the leaf-nodes and or-nodes with the global verification via the root-node. The advantages of our approach are validated on the challenging shape databases (i.e., ETHZ and INRIA Horse) and summarized as follows. (1) The proposed method is able to accurately localize shape contours against unreliable edge detection and edge tracing. (2) The And-Or tree model enables us to well capture the intraclass variance.

Patent
16 May 2012
TL;DR: In this article, a decision tree based game cheat detection method was proposed for solving the problem of the prior art that the passive defensive mode cannot detect external hanging. But, this method was used in an active defensive mode to detect cheating players, thereby improving the external hanging preventing ability.
Abstract: The invention discloses a decision tree based game cheat detection method invented for solving the problem of the prior art that the passive defensive mode cannot detect external hanging. The method of the invention comprises the following procedures: the non-redundant characteristic attribute data set is extracted from the information database of a player at predetermined time; data in the characteristic attribute data set are divided into characteristic attribute exercise data and characteristic attribute test data; a decision tree is generated with the characteristic attribute exercise data and clipping on the decision tree through the characteristic attribute test data is performed to generate an objective decision tree model; the objective decision tree is assessed to obtain an appropriate objective decision tree model; analytical treatment is performed on the objective decision tree to produce categorized player databases and then on-line analysis of the categorized player databases is conducted to detect cheating players with external hanging. The invention can conduct data mining analysis with the data mining method in an active defensive mode to detect cheating players, thereby improving the external hanging preventing ability in online network games.

Journal ArticleDOI
TL;DR: The Self-Organizing Feature Map neural network is applied to establish a predictive model of lithology for the K-Means-optimized data set, and the decision tree and support vector machine are utilized to process four new wells in the complicated Carboniferous reservoirs of the Wucaiwan Sag, eastern Junggar Basin.

Dissertation
01 Jan 2012
TL;DR: This thesis investigates the power and limits of efficient joint computation in several computational models (query algorithms, circuits, and Turing machines); it significantly improves and extends past results on limits, identifies barriers to progress towards better circuit lower bounds for multiple-output operators, and begins an original line of inquiry into the complexity of joint computation.
Abstract: Joint computation is the ubiquitous scenario in which a computer is presented with not one, but many computational tasks to perform. A fundamental question arises: when can we cleverly combine computations, to perform them with greater efficiency or reliability than by tackling them separately? This thesis investigates the power and, especially, the limits of efficient joint computation, in several computational models: query algorithms, circuits, and Turing machines. We significantly improve and extend past results on limits to efficient joint computation for multiple independent tasks; identify barriers to progress towards better circuit lower bounds for multiple-output operators; and begin an original line of inquiry into the complexity of joint computation. In more detail, we make contributions in the following areas: Improved direct product theorems for randomized query complexity: The "direct product problem" seeks to understand how the difficulty of computing a function on each of k independent inputs scales with k. We prove the following direct product theorem (DPT) for query complexity: if every T-query algorithm has success probability at most 1 − ε in computing the Boolean function f on input distribution μ, then for α ≤ 1, every αεTk-query algorithm has success probability at most (2αε(1 − ε))^k in computing the k-fold direct product f^⊗k correctly on k independent inputs from μ. In light of examples due to Shaltiel, this statement gives an essentially optimal tradeoff between the query bound and the error probability. Using this DPT, we show that for an absolute constant α > 0, the worst-case success probability of any αR₂(f)k-query randomized algorithm for f^⊗k falls exponentially with k. The best previous statement of this type, due to Klauck, Špalek, and de Wolf, required a query bound of O(bs(f)·k). Our proof technique involves defining and analyzing a collection of martingales associated with an algorithm attempting to solve f^⊗k. Our method is quite general and yields a new XOR lemma and threshold DPT for the query model, as well as DPTs for the query complexity of learning tasks, search problems, and tasks involving interaction with dynamic entities. We also give a version of our DPT in which decision tree size is the resource of interest. Joint complexity in the Decision Tree Model: We study the diversity of possible behaviors of the joint computational complexity of a collection f1, …, fk of Boolean functions over a shared input. We focus on the deterministic decision tree model, with depth as the complexity measure; in this model, we prove a result to the effect that the "obvious" constraints on joint computational complexity are essentially the only ones. The proof uses an intriguing new type of cryptographic data structure called a "mystery bin," which we construct using a polynomial separation between deterministic and unambiguous query complexity shown by Savický. We also pose a conjecture in the communication model which, if proved, would extend our result to that model.
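
To make the DPT bound concrete: the threshold (2αε(1 − ε))^k falls geometrically in k whenever 2αε(1 − ε) < 1. A quick numeric check under assumed values of α and ε:

```python
# Numeric illustration of the DPT bound (2*alpha*eps*(1 - eps))**k: for fixed
# alpha and eps, the allowed success probability decays exponentially in k.
alpha, eps = 0.25, 0.2          # assumed values; alpha <= 1 as in the statement
for k in (1, 5, 10, 20):
    bound = (2 * alpha * eps * (1 - eps)) ** k
    print(f"k={k:2d}  success probability bound = {bound:.3e}")
```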

Journal ArticleDOI
TL;DR: Experiments with the Knowledge Discovery and Data Mining (KDD) data set, which contains information on traffic patterns during normal and intrusive behaviour, show that the proposed algorithm produces optimised decision rules and outperforms other machine-learning algorithms.
Abstract: The aim of this article is to construct a practical intrusion detection system (IDS) that properly analyses the statistics of network traffic patterns and classifies them as normal or anomalous. The objective of this article is to prove that the choice of effective network traffic features and a proficient machine-learning paradigm enhances the detection accuracy of the IDS. In this article, a rule-based approach with a family of six decision tree classifiers, namely Decision Stump, C4.5, Naive Bayes Tree, Random Forest, Random Tree and Representative Tree, is introduced to perform the detection of anomalous network patterns. In particular, the proposed swarm optimisation-based approach selects the instances that compose the training set, and an optimised decision tree operates over this training set, producing classification rules with improved coverage, classification capability and generalisation ability. Experiments with the Knowledge Discovery and Data Mining (KDD) data set, which contains information on traffic patterns during normal and intrusive behaviour, show that the proposed algorithm produces optimised decision rules and outperforms other machine-learning algorithms.

Patent
11 Jul 2012
TL;DR: In this paper, the authors propose a method for predicting gas card customer churn that integrates a multidimensional association rule with a decision tree model.
Abstract: The invention provides a method for predicting gas card customer churn. The method comprises the following steps: collecting initial data on the behaviour of each gas card customer within a certain period and creating a database; systemizing and summarizing the initial data; calculating a number of basic attributes related to the churn behaviour of the gas card customer; carrying out Boolean processing on the basic attributes; estimating the importance of the attributes using an information gain parameter; obtaining frequent itemsets of the attributes using a multidimensional association rule; merging the attributes in each frequent itemset; creating a model using a decision tree; and correcting the decision tree model according to the continuously changing gas card customer data, thereby predicting the customer churn situation and releasing alarm information. According to the method provided by the invention, attribute relevancy and the decision tree model are integrated and improved, the generation efficiency and the intelligibility of the decision tree are increased, and the merging of the attributes reflects the characteristics of the petrochemical industry, thus solving a problem that cannot be solved by the traditional decision tree model and providing a viable early-warning scheme for customer churn in the petrochemical industry.

Journal ArticleDOI
TL;DR: A new approach to tree induction is introduced to improve the efficiency of the CART algorithm by combining the existing functionality of CART with the addition of artificial neural networks (ANNs).
Abstract: Accuracy is a critical factor in predictive modeling. A predictive model such as a decision tree must be accurate to draw conclusions about the system being modeled. This research aims at analyzing and improving the performance of classification and regression trees (CART), a decision tree algorithm, by evaluating and deriving a new methodology based on the performance of real-world data sets that were studied. This paper introduces a new approach to tree induction to improve the efficiency of the CART algorithm by combining the existing functionality of CART with the addition of artificial neural networks (ANNs). Trained ANNs are utilized by the tree induction algorithm by generating new, synthetic data, which have been shown to improve the overall accuracy of the decision tree model when actual training samples are limited. In this paper, traditional decision trees developed by the standard CART methodology are compared with the enhanced decision trees that utilize the ANN’s synthetic data generation, or CART+. This research demonstrates the improved accuracies that can be obtained with CART+, which can ultimately improve the knowledge that can be extracted by researchers about a system being modeled.
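
The CART+ augmentation loop — train an ANN on the limited real samples, generate synthetic inputs, label them with the ANN, and fit the tree on the enlarged set — can be sketched with scikit-learn as below. The noise level, resampling scheme and model settings are assumptions; the paper's exact generation method may differ.

```python
# Sketch of CART+-style augmentation: train an ANN on the limited real samples,
# create synthetic inputs by perturbing them, label the synthetic inputs with the
# ANN, and fit the final decision tree on real + synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=80, n_features=6, random_state=0)  # small real set

ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
ann.fit(X, y)

rng = np.random.default_rng(0)
X_syn = X[rng.integers(0, len(X), size=400)]                              # resample real inputs
X_syn = X_syn + rng.normal(scale=0.1 * X.std(axis=0), size=X_syn.shape)   # add jitter
y_syn = ann.predict(X_syn)                                                # ANN supplies the labels

tree_plus = DecisionTreeClassifier(random_state=0)
tree_plus.fit(np.vstack([X, X_syn]), np.concatenate([y, y_syn]))
print(tree_plus.get_depth(), tree_plus.get_n_leaves())
```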

Proceedings Article
01 Jan 2012
TL;DR: Using a decision tree to model motorcycle accident occurrences and comparing its classification performance with Poisson Regression and Negative Binomial Regression models shows that the decision tree model using CART (Classification and Regression Tree) performs slightly better than the two regression models.
Abstract: The Poisson Regression and Negative Binomial Regression models are the conventional statistical models for count data. This paper uses a decision tree to model motorcycle accident occurrences and compares its classification performance with Poisson Regression and Negative Binomial Regression models. The frequency of motorcycle accidents that involve death or serious injury was converted into a categorical dependent variable (zero, low and high frequency), and the factors considered are collision type, road geometry, time, weather condition, road surface condition and type of day. Based on classification accuracy, results show that the decision tree model using CART (Classification and Regression Tree) performs slightly better (78.1%) than the Poisson Regression (76.3%) and Negative Binomial Regression (77.6%) models. The CART decision rules reveal that the most significant factor contributing to a high frequency of motorcycle accidents resulting in death or serious injury is when the accidents happen on a straight road, at a junction or on a bend.

Patent
19 Dec 2012
TL;DR: A multilevel power system fault diagnosis system based on a fault tree is presented, comprising a data collecting unit, a data processing unit, a fault tree analysis (FTA) diagnosis unit, an expert confirming unit and a data storage unit.
Abstract: The invention relates to the technical field of power system safety and fault treatment. Specifically, the invention relates to a multilevel power system fault diagnosis system based on a fault tree, which comprises a data collecting unit, a data processing unit, a fault tree analysis (FTA) diagnosis unit, an expert confirming unit and a data storage unit. The data collecting unit handles communication with external data sources; the data processing unit obtains the original state data through a standard communication interface to the data collecting unit and is responsible for integrating the original data; the FTA diagnosis unit provides a fault diagnosis algorithm and obtains the diagnosis result according to the variables of the fault tree; the expert confirming unit confirms or revises the diagnosis result of the FTA diagnosis unit; and the data storage unit stores the process data and conclusion data of every unit. Using the FTA technique, a classification tree model of power system internal faults is set up and applied in the process of building the diagnosis system, providing a simple way to construct the system diagnosis. In addition, the overall architecture is clearly layered and easy to extend and maintain.

Proceedings ArticleDOI
21 May 2012
TL;DR: A temporary installation of an ALPR system is used for training the decision tree model, which consequently provides reliable travel time estimation on the most widely used arterial in Prague and in the Czech Republic.
Abstract: This paper presents a travel time estimation model based on a decision tree. The proposed model was tested on the most widely used arterial in Prague and in the Czech Republic. This road section has many unmeasured inputs and outputs, and with only two detectors within the section it is difficult to estimate the travel time. A temporary installation of an ALPR system is used for training the decision tree model, which consequently provides reliable travel time estimation.


Proceedings ArticleDOI
10 Jun 2012
TL;DR: An innovization design principle for reconstructing procedural tree models of woody plants (trees) by multi-objective optimization is presented, which gives the decision maker a chance to select the final resulting model and helps determine the optimization criteria tradeoff weights for later production.
Abstract: This paper presents an innovization design principle for the reconstruction of procedural tree models of woody plants (trees) by multi-objective optimization. Reconstruction of a parameterized procedural model from imagery is addressed by a multi-objective differential evolution algorithm, which evolves a parametrized procedural model by fitting a set of its rendered images to a set of given projected images using bi-objective comparisons made at the pixel level of the images. The multi-objective approach gives the decision maker a chance to select the final resulting model and helps determine the optimization criteria tradeoff weights for later production.