
Showing papers on "Tree (data structure) published in 2012"


Journal ArticleDOI
TL;DR: A vocabulary tree that discretizes a binary descriptor space is built and used to speed up correspondences for geometrical verification, presenting competitive results with no false positives on very different datasets.
Abstract: We propose a novel method for visual place recognition using bags of words obtained from accelerated segment test (FAST) + BRIEF features. For the first time, we build a vocabulary tree that discretizes a binary descriptor space and use the tree to speed up correspondences for geometrical verification. We present competitive results with no false positives in very different datasets, using exactly the same vocabulary and settings. The whole technique, including feature extraction, requires 22 ms/frame in a sequence with 26,300 images, which is one order of magnitude faster than previous approaches.
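For illustration, a minimal sketch of the vocabulary-tree idea for binary descriptors, assuming hierarchical k-medians clustering under the Hamming distance; the branching factor, depth and descriptor size below are arbitrary choices, not the paper's settings:

    # Minimal sketch of a vocabulary tree over binary descriptors, built by
    # recursive k-medians clustering under the Hamming distance. The branching
    # factor K, depth L and 32-byte descriptors are illustrative assumptions.
    import numpy as np

    K, L = 4, 3  # branching factor and maximum depth (assumed values)

    def hamming(a, b):
        return int(np.unpackbits(a ^ b).sum())

    def majority_center(descs):
        # Bitwise majority vote gives a representative binary "center".
        return np.packbits(np.unpackbits(descs, axis=1).mean(axis=0) >= 0.5)

    def build_node(descs, depth=0):
        if depth == L or len(descs) <= K:
            return {"center": majority_center(descs), "children": []}
        centers = list(descs[np.random.choice(len(descs), K, replace=False)])
        for _ in range(5):  # a few crude k-medians iterations
            labels = np.array([np.argmin([hamming(d, c) for c in centers]) for d in descs])
            centers = [majority_center(descs[labels == j])
                       for j in range(len(centers)) if np.any(labels == j)]
        labels = np.array([np.argmin([hamming(d, c) for c in centers]) for d in descs])
        children = [build_node(descs[labels == j], depth + 1)
                    for j in range(len(centers)) if np.any(labels == j)]
        return {"center": majority_center(descs), "children": children}

    def quantize(node, d):
        # Descending the tree maps descriptor d to a leaf, i.e. a visual word.
        while node["children"]:
            node = min(node["children"], key=lambda c: hamming(d, c["center"]))
        return node

    descriptors = np.random.randint(0, 256, size=(500, 32), dtype=np.uint8)  # fake 256-bit descriptors
    tree = build_node(descriptors)
    word = quantize(tree, descriptors[0])

An image can then be described by the set of leaves its descriptors fall into, which is what makes the bag-of-words comparison fast.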

1,560 citations


Journal ArticleDOI
TL;DR: The Bayes tree is applied to obtain a completely novel algorithm for sparse nonlinear incremental optimization, named iSAM2, which achieves improvements in efficiency through incremental variable re-ordering and fluid relinearization, eliminating the need for periodic batch steps.
Abstract: We present a novel data structure, the Bayes tree, that provides an algorithmic foundation enabling a better understanding of existing graphical model inference algorithms and their connection to sparse matrix factorization methods. Similar to a clique tree, a Bayes tree encodes a factored probability density, but unlike the clique tree it is directed and maps more naturally to the square root information matrix of the simultaneous localization and mapping (SLAM) problem. In this paper, we highlight three insights provided by our new data structure. First, the Bayes tree provides a better understanding of the matrix factorization in terms of probability densities. Second, we show how the fairly abstract updates to a matrix factorization translate to a simple editing of the Bayes tree and its conditional densities. Third, we apply the Bayes tree to obtain a completely novel algorithm for sparse nonlinear incremental optimization, named iSAM2, which achieves improvements in efficiency through incremental variable re-ordering and fluid relinearization, eliminating the need for periodic batch steps. We analyze various properties of iSAM2 in detail, and show on a range of real and simulated datasets that our algorithm compares favorably with other recent mapping algorithms in both quality and efficiency.
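A purely structural sketch of the data structure and of the incremental-update pattern described above (an illustration of the idea, not iSAM2; the clique fields and the toy variables are assumptions):

    # Sketch of a Bayes tree as a tree of cliques over variables. When a new
    # factor arrives, only the cliques containing its variables, together with
    # their ancestors up to the root, need to be detached and re-eliminated.
    class Clique:
        def __init__(self, frontal, separator, parent=None):
            self.frontal = set(frontal)      # variables eliminated in this clique
            self.separator = set(separator)  # variables shared with the parent
            self.parent = parent
            self.children = []
            if parent:
                parent.children.append(self)

    def affected_cliques(root, new_factor_vars):
        """Cliques touched by the new factor plus all of their ancestors."""
        affected = set()
        stack = [root]
        while stack:
            c = stack.pop()
            stack.extend(c.children)
            if c.frontal & new_factor_vars:
                while c is not None and c not in affected:
                    affected.add(c)          # walk up toward the root
                    c = c.parent
        return affected

    # Toy example: poses x0..x2 and a landmark l1.
    root = Clique({"x2"}, set())
    c1 = Clique({"x1"}, {"x2"}, parent=root)
    c2 = Clique({"x0", "l1"}, {"x1"}, parent=c1)
    print([sorted(c.frontal) for c in affected_cliques(root, {"x0"})])
    # Only the path from the clique containing x0 up to the root is re-eliminated;
    # the rest of the tree, and hence most of the factorization, is reused.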

1,085 citations


Journal ArticleDOI
TL;DR: REBOUND is a multi-purpose N-body code designed for collisional dynamics such as planetary rings, but it can also solve the classical N-body problem; it is highly modular and can be customized easily to work on a wide variety of different problems in astrophysics and beyond.
Abstract: REBOUND is a new multi-purpose N-body code which is freely available under an open-source license. It was designed for collisional dynamics such as planetary rings but can also solve the classical N-body problem. It is highly modular and can be customized easily to work on a wide variety of different problems in astrophysics and beyond. REBOUND comes with three symplectic integrators: leap-frog, the symplectic epicycle integrator (SEI) and a Wisdom-Holman mapping (WH). It supports open, periodic and shearing-sheet boundary conditions. REBOUND can use a Barnes-Hut tree to calculate both self-gravity and collisions. These modules are fully parallelized with MPI as well as OpenMP. The former makes use of a static domain decomposition and a distributed essential tree. Two new collision detection modules based on a plane-sweep algorithm are also implemented. The performance of the plane-sweep algorithm is superior to a tree code for simulations in which one dimension is much longer than the other two and in simulations which are quasi-two dimensional with less than one million particles. In this work, we discuss the different algorithms implemented in REBOUND, the philosophy behind the code’s structure as well as implementation specific details of the different modules. We present results of accuracy and scaling tests which show that the code can run efficiently on both desktop machines and large computing clusters.
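The Barnes-Hut tree mentioned above can be sketched generically as follows (an illustration of the standard acceptance criterion, not REBOUND's implementation; the node layout and the opening angle are assumptions):

    # Generic Barnes-Hut force walk: a tree cell is treated as a single pseudo-
    # particle when its size/distance ratio is below the opening angle THETA;
    # otherwise its children are visited. Illustrative only, not REBOUND code.
    import math

    G = 1.0
    THETA = 0.5  # opening angle (a typical choice, assumed here)

    def accel_from_node(node, pos, acc):
        """node: dict with 'size', 'mass', 'com' (center of mass), 'children'."""
        dx = [node["com"][i] - pos[i] for i in range(3)]
        r = math.sqrt(sum(d * d for d in dx)) + 1e-12
        if not node["children"] or node["size"] / r < THETA:
            # Leaf, or far enough away: use the cell's total mass at its center of mass.
            f = G * node["mass"] / r**3
            for i in range(3):
                acc[i] += f * dx[i]
        else:
            for child in node["children"]:
                accel_from_node(child, pos, acc)
        return acc

    leaf = {"size": 1.0, "mass": 2.0, "com": (1.0, 0.0, 0.0), "children": []}
    print(accel_from_node(leaf, (0.0, 0.0, 0.0), [0.0, 0.0, 0.0]))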

699 citations


Journal ArticleDOI
TL;DR: The suitability of 8-band WorldView-2 satellite data for the identification of 10 tree species in a temperate forest in Austria is examined and an extensive literature review on tree species classification comprising about 20 studies is presented.
Abstract: Tree species diversity is a key parameter to describe forest ecosystems. It is, for example, important for issues such as wildlife habitat modeling and close-to-nature forest management. We examined the suitability of 8-band WorldView-2 satellite data for the identification of 10 tree species in a temperate forest in Austria. We performed a Random Forest (RF) classification (object-based and pixel-based) using spectra of manually delineated sunlit regions of tree crowns. The overall accuracy for classifying 10 tree species was around 82% (8 bands, object-based). The class-specific producer’s accuracies ranged between 33% (European hornbeam) and 94% (European beech) and the user’s accuracies between 57% (European hornbeam) and 92% (Lawson’s cypress). The object-based approach outperformed the pixel-based approach. We showed that the 4 new WorldView-2 bands (Coastal, Yellow, Red Edge, and Near Infrared 2) have only limited impact on classification accuracy if only the 4 main tree species (Norway spruce, Scots pine, European beech, and English oak) are to be separated. However, classification accuracy increased significantly using the full spectral resolution if further tree species were included. Besides the impact on overall classification accuracy, the importance of the spectral bands was evaluated with two measures provided by RF. An in-depth analysis of the RF output was carried out to evaluate the impact of reference data quality and the resulting reliability of final class assignments. Finally, an extensive literature review on tree species classification comprising about 20 studies is presented.
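As a rough illustration of this kind of analysis (scikit-learn rather than the authors' workflow; the file name and band columns are hypothetical placeholders):

    # Illustrative Random Forest classification of tree species from 8-band
    # crown spectra, including the variable-importance output mentioned above.
    # "crowns.csv" and its column names are hypothetical placeholders.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    bands = ["coastal", "blue", "green", "yellow", "red", "red_edge", "nir1", "nir2"]
    df = pd.read_csv("crowns.csv")            # one row per delineated sunlit crown region
    X, y = df[bands], df["species"]

    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    print("CV accuracy:", cross_val_score(rf, X, y, cv=10).mean())

    rf.fit(X, y)
    for band, imp in sorted(zip(bands, rf.feature_importances_), key=lambda t: -t[1]):
        print(f"{band:9s} importance = {imp:.3f}")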

624 citations


Journal ArticleDOI
TL;DR: The accuracy of tree height, after removing gross errors, was better than 0.5 m in all tree height classes with the best methods investigated in this experiment, suggesting minimum curvature-based tree detection accompanied by point cloud-based cluster detection for suppressed trees is a solution that deserves attention in the future.
Abstract: The objective of the “Tree Extraction” project organized by EuroSDR (European Spatial data Research) and ISPRS (International Society of Photogrammetry and Remote Sensing) was to evaluate the quality, accuracy, and feasibility of automatic tree extraction methods, mainly based on laser scanner data. In the final report of the project, Kaartinen and Hyyppa (2008) reported a high variation in the quality of the published methods under boreal forest conditions and with varying laser point densities. This paper summarizes the findings beyond the final report after analyzing the results obtained in different tree height classes. Omission/Commission statistics as well as neighborhood relations are taken into account. Additionally, four automatic tree detection and extraction techniques were added to the test. Several methods in this experiment were superior to manual processing in the dominant, co-dominant and suppressed tree storeys. In general, as expected, the taller the tree, the better the location accuracy. The accuracy of tree height, after removing gross errors, was better than 0.5 m in all tree height classes with the best methods investigated in this experiment. For forest inventory, minimum curvature-based tree detection accompanied by point cloud-based cluster detection for suppressed trees is a solution that deserves attention in the future.

434 citations


Proceedings ArticleDOI
10 Apr 2012
TL;DR: This work presents Masstree, a fast key-value database designed for SMP machines; its performance is comparable to that of memcached, a non-persistent hash table server, and higher than that of VoltDB, MongoDB, and Redis.
Abstract: We present Masstree, a fast key-value database designed for SMP machines. Masstree keeps all data in memory. Its main data structure is a trie-like concatenation of B+-trees, each of which handles a fixed-length slice of a variable-length key. This structure effectively handles arbitrary-length, possibly binary keys, including keys with long shared prefixes. B+-tree fanout was chosen to minimize total DRAM delay when descending the tree and prefetching each tree node. Lookups use optimistic concurrency control, a read-copy-update-like technique, and do not write shared data structures; updates lock only affected nodes. Logging and checkpointing provide consistency and durability. Though some of these ideas appear elsewhere, Masstree is the first to combine them. We discuss design variants and their consequences. On a 16-core machine, with logging enabled and queries arriving over a network, Masstree executes more than six million simple queries per second. This performance is comparable to that of memcached, a non-persistent hash table server, and higher (often much higher) than that of VoltDB, MongoDB, and Redis.
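A toy sketch of the key-slicing idea described above (plain dictionaries stand in for the concurrent B+-trees; everything except the fixed 8-byte slicing is a simplification):

    # Toy illustration of Masstree's layering: a variable-length key is cut into
    # fixed 8-byte slices, and each slice indexes one layer of the structure.
    # Python dicts stand in for the per-layer B+-trees of the real system.
    # (Keys that are exact prefixes of longer keys are not handled in this toy.)
    SLICE = 8

    def slices(key: bytes):
        return [key[i:i + SLICE] for i in range(0, max(len(key), 1), SLICE)]

    def put(layer, key, value):
        parts = slices(key)
        for part in parts[:-1]:
            layer = layer.setdefault(part, {})   # descend into / create the next layer
        layer[parts[-1]] = ("VALUE", value)

    def get(layer, key):
        parts = slices(key)
        for part in parts[:-1]:
            layer = layer.get(part)
            if not isinstance(layer, dict):
                return None
        entry = layer.get(parts[-1])
        return entry[1] if isinstance(entry, tuple) else None

    root = {}
    put(root, b"user:0000000001:name", b"alice")
    print(get(root, b"user:0000000001:name"))    # b'alice'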

412 citations


Proceedings ArticleDOI
16 Jun 2012
TL;DR: The cost aggregation problem is re-examined and a non-local solution is proposed which outperforms all local cost aggregation methods on the standard (Middlebury) benchmark and has the great advantage of extremely low computational complexity.
Abstract: Matching cost aggregation is one of the oldest and still most popular methods for stereo correspondence. While effective and efficient, cost aggregation methods typically aggregate the matching cost by summing/averaging over a user-specified, local support region. This is obviously only locally optimal, and the computational complexity of the full-kernel implementation usually depends on the region size. In this paper, the cost aggregation problem is re-examined and a non-local solution is proposed. The matching cost values are aggregated adaptively based on pixel similarity on a tree structure derived from the stereo image pair to preserve depth edges. The nodes of this tree are all the image pixels, and the edges are all the edges between the nearest neighboring pixels. The similarity between any two pixels is decided by their shortest distance on the tree. The proposed method is non-local as every node receives support from all other nodes on the tree. As can be expected, the proposed non-local solution outperforms all local cost aggregation methods on the standard (Middlebury) benchmark. Besides, it has a great advantage in its extremely low computational complexity: only a total of 2 addition/subtraction operations and 3 multiplication operations are required for each pixel at each disparity level. This is very close to the complexity of unnormalized box filtering using an integral image, which requires 6 addition/subtraction operations. The unnormalized box filter is the fastest local cost aggregation method but blurs across depth edges. The proposed method was tested on a MacBook Air laptop computer with a 1.8 GHz Intel Core i7 CPU and 4 GB memory. The average runtime on the Middlebury data sets is about 90 milliseconds, and is only about 1.25× slower than the unnormalized box filter. A non-local disparity refinement method is also proposed based on the non-local cost aggregation method.
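The constant per-pixel cost comes from a two-pass traversal of a minimum spanning tree built on the image graph. A sketch of the recurrences in the notation commonly used for this method (our reading of the abstract, not a verbatim excerpt; \sigma is the user parameter of the similarity):

    S(p, q) = \exp\!\bigl(-D(p, q)/\sigma\bigr), \quad D(p, q) = \text{distance between } p \text{ and } q \text{ along the tree}

    \text{leaf-to-root:} \quad C^{A\uparrow}_d(v) = C_d(v) + \sum_{c \in \mathrm{children}(v)} S(v, c)\, C^{A\uparrow}_d(c)

    \text{root-to-leaf:} \quad C^{A}_d(v) = S\bigl(P(v), v\bigr)\, C^{A}_d\bigl(P(v)\bigr) + \bigl(1 - S^{2}\bigl(P(v), v\bigr)\bigr)\, C^{A\uparrow}_d(v)

where C_d(v) is the raw matching cost of pixel v at disparity d and P(v) is the parent of v in the tree. The second pass reuses the totals of the first, which is what keeps the per-pixel, per-disparity operation count at the small constant quoted above.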

401 citations


Journal ArticleDOI
TL;DR: EvolView is a web application for visualizing, annotating and managing phylogenetic trees, as well as a tree and dataset management tool with which users can easily organize related trees into distinct projects, add new datasets to trees, and edit and manage existing trees and datasets.
Abstract: EvolView is a web application for visualizing, annotating and managing phylogenetic trees. First, EvolView is a phylogenetic tree viewer and customization tool; it visualizes trees in various formats, customizes them through built-in functions that can link information from external datasets, and exports the customized results to publication-ready figures. Second, EvolView is a tree and dataset management tool: users can easily organize related trees into distinct projects, add new datasets to trees and edit and manage existing trees and datasets. To make EvolView easy to use, it is equipped with an intuitive user interface. With a free account, users can save data and manipulations on the EvolView server. EvolView is freely available at: http://www.evolgenius.info/evolview.html.

358 citations


Journal ArticleDOI
01 Jan 2012 - Forestry
TL;DR: Airborne laser scanning data and corresponding field data were acquired from boreal forests in Norway and Sweden, coniferous and broadleaved forests in Germany, and tropical pulpwood plantations in Brazil; the results showed that forest structure strongly affected the performance of all algorithms.

322 citations


Journal ArticleDOI
TL;DR: This is the first reconciliation algorithm to capture all four evolutionary processes driving tree incongruence and the first to reconcile non-binary species trees with a transfer model.
Abstract: Motivation: Gene duplication (D), transfer (T), loss (L) and incomplete lineage sorting (I) are crucial to the evolution of gene families and the emergence of novel functions. The history of these events can be inferred via comparison of gene and species trees, a process called reconciliation, yet current reconciliation algorithms model only a subset of these evolutionary processes. Results: We present an algorithm to reconcile a binary gene tree with a nonbinary species tree under a DTLI parsimony criterion. This is the first reconciliation algorithm to capture all four evolutionary processes driving tree incongruence and the first to reconcile non-binary species trees with a transfer model. Our algorithm infers all optimal solutions and reports complete, temporally feasible event histories, giving the gene and species lineages in which each event occurred. It is fixed-parameter tractable, with polytime complexity when the maximum species outdegree is fixed. Application of our algorithms to prokaryotic and eukaryotic data shows that use of an incomplete event model has substantial impact on the events inferred and resulting biological conclusions. Availability: Our algorithms have been implemented in Notung, a freely available phylogenetic reconciliation software package, available at http://www.cs.cmu.edu/~durand/Notung. Contact: mstolzer@andrew.cmu.edu

295 citations


Journal ArticleDOI
TL;DR: In this paper, a 3D segmentation technique that operates directly on laser point clouds, combining normalized cut segmentation with a stem detection method, is applied to estimate stem volume and diameter at breast height.

Journal ArticleDOI
TL;DR: The lightweight IDS has been developed by using a wrapper-based feature selection algorithm that maximizes the specificity and sensitivity of the IDS, as well as by employing a neural ensemble decision tree iterative procedure to evolve optimal features.
Abstract: The objective of this paper is to construct a lightweight Intrusion Detection System (IDS) aimed at detecting anomalies in networks. The crucial part of building a lightweight IDS depends on preprocessing of network data, identifying important features, and the design of an efficient learning algorithm that classifies normal and anomalous patterns. Therefore, in this work, the design of the IDS is investigated from these three perspectives. The goals of this paper are (i) removing redundant instances so that the learning algorithm is not biased, (ii) identifying a suitable subset of features by employing a wrapper-based feature selection algorithm, and (iii) realizing the proposed IDS with a neurotree to achieve better detection accuracy. The lightweight IDS has been developed by using a wrapper-based feature selection algorithm that maximizes the specificity and sensitivity of the IDS, as well as by employing a neural ensemble decision tree iterative procedure to evolve optimal features. An extensive experimental evaluation of the proposed approach with a family of six decision tree classifiers, namely Decision Stump, C4.5, Naive Bayes Tree, Random Forest, Random Tree and Representative Tree models, to perform the detection of anomalous network patterns is presented.

Proceedings Article
01 Dec 2012
TL;DR: This work uses an improved oracle for the arc-eager transition system to train a deterministic left-to-right dependency parser that is less sensitive to error propagation and outperforms greedy parsers trained using conventional oracles on a range of data sets.
Abstract: The standard training regime for transition-based dependency parsers makes use of an oracle, which predicts an optimal transition sequence for a sentence and its gold tree. We present an improved oracle for the arc-eager transition system, which provides a set of optimal transitions for every valid parser configuration, including configurations from which the gold tree is not reachable. In such cases, the oracle provides transitions that will lead to the best reachable tree from the given configuration. The oracle is efficient to implement and provably correct. We use the oracle to train a deterministic left-to-right dependency parser that is less sensitive to error propagation, using an online training procedure that also explores parser configurations resulting from non-optimal sequences of transitions. This new parser outperforms greedy parsers trained using conventional oracles on a range of data sets, with an average improvement of over 1.2 LAS points and up to almost 3 LAS points on some data sets.

Journal ArticleDOI
TL;DR: This paper describes the leading algorithms for Monte-Carlo tree search and explains how they have advanced the state of the art in computer Go.
Abstract: The ancient oriental game of Go has long been considered a grand challenge for artificial intelligence. For decades, computer Go has defied the classical methods in game tree search that worked so successfully for chess and checkers. However, recent play in computer Go has been transformed by a new paradigm for tree search based on Monte-Carlo methods. Programs based on Monte-Carlo tree search now play at human-master levels and are beginning to challenge top professional players. In this paper, we describe the leading algorithms for Monte-Carlo tree search and explain how they have advanced the state of the art in computer Go.
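A minimal, generic UCT-style sketch of the search loop (not any particular Go program; the state object with legal_moves()/play()/random_playout() is a placeholder interface):

    # Generic Monte-Carlo tree search with UCT selection: descend by the UCB1
    # score, expand one untried move, run a random rollout, back the result up.
    # The `state` interface (legal_moves, play, random_playout) is a placeholder.
    import math, random

    class Node:
        def __init__(self, state, parent=None):
            self.state, self.parent = state, parent
            self.children, self.untried = [], list(state.legal_moves())
            self.wins, self.visits = 0.0, 0

        def ucb1(self, c=1.4):
            return (self.wins / self.visits
                    + c * math.sqrt(math.log(self.parent.visits) / self.visits))

    def mcts(root_state, iterations=1000):
        root = Node(root_state)
        for _ in range(iterations):
            node = root
            while not node.untried and node.children:      # selection
                node = max(node.children, key=Node.ucb1)
            if node.untried:                               # expansion
                move = node.untried.pop(random.randrange(len(node.untried)))
                node.children.append(Node(node.state.play(move), parent=node))
                node = node.children[-1]
            result = node.state.random_playout()           # simulation
            while node:                                    # backpropagation
                node.visits += 1
                node.wins += result  # a real two-player game would flip the result per ply
                node = node.parent
        return max(root.children, key=lambda n: n.visits)  # most-visited move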

Journal ArticleDOI
TL;DR: In this article, the authors investigate the performance of common MCMC proposal distributions in terms of median and variance of run time to convergence on 11 data sets and introduce two new Metropolized Gibbs Samplers for moving through "tree space."
Abstract: Increasingly, large data sets pose a challenge for computationally intensive phylogenetic methods such as Bayesian Markov chain Monte Carlo (MCMC). Here, we investigate the performance of common MCMC proposal distributions in terms of median and variance of run time to convergence on 11 data sets. We introduce two new Metropolized Gibbs Samplers for moving through "tree space." MCMC simulation using these new proposals shows faster average run time and dramatically improved predictability in performance, with a 20-fold reduction in the variance of the time to estimate the posterior distribution to a given accuracy. We also introduce conditional clade probabilities and demonstrate that they provide a superior means of approximating tree topology posterior probabilities from samples recorded during MCMC.
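The conditional clade idea mentioned in the last sentence can be sketched as follows (notation ours, not taken from the paper):

    \hat{P}(T) \;\approx\; \prod_{C \in \mathrm{splits}(T)} \hat{P}(C_{\ell}, C_{r} \mid C), \qquad \hat{P}(C_{\ell}, C_{r} \mid C) = \frac{f(C \to C_{\ell}, C_{r})}{f(C)}

where f(C) counts how often clade C appears in the recorded MCMC sample and f(C \to C_{\ell}, C_{r}) how often it is split into the child clades C_{\ell} and C_{r}. Multiplying conditional clade frequencies assigns probability to topologies that share clades with sampled trees even if that exact topology was never visited, rather than relying on raw topology counts.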

Journal ArticleDOI
TL;DR: This paper presents a methodology that combines the structure of mixed effects models for longitudinal and clustered data with the flexibility of tree-based estimation methods, and applies the resulting estimation method to pricing in online transactions, showing that the RE-EM tree is less sensitive to parametric assumptions and provides improved predictive power compared to linear models with random effects and regression trees without random effects.
Abstract: Longitudinal data refer to the situation where repeated observations are available for each sampled object. Clustered data, where observations are nested in a hierarchical structure within objects (without time necessarily being involved) represent a similar type of situation. Methodologies that take this structure into account allow for the possibilities of systematic differences between objects that are not related to attributes and autocorrelation within objects across time periods. A standard methodology in the statistics literature for this type of data is the mixed effects model, where these differences between objects are represented by so-called "random effects" that are estimated from the data (population-level relationships are termed "fixed effects," together resulting in a mixed effects model). This paper presents a methodology that combines the structure of mixed effects models for longitudinal and clustered data with the flexibility of tree-based estimation methods. We apply the resulting estimation method, called the RE-EM tree, to pricing in online transactions, showing that the RE-EM tree is less sensitive to parametric assumptions and provides improved predictive power compared to linear models with random effects and regression trees without random effects. We also apply it to a smaller data set examining accident fatalities, and show that the RE-EM tree strongly outperforms a tree without random effects while performing comparably to a linear model with random effects. We also perform extensive simulation experiments to show that the estimator improves predictive performance relative to regression trees without random effects and is comparable or superior to using linear models with random effects in more general situations.
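A compact sketch of the alternation suggested by the name (random effects plus an EM-style loop around a regression tree); scikit-learn and unshrunk group means serve as a stand-in here, not the authors' implementation:

    # Sketch of an RE-EM-style iteration: alternate between (a) fitting a
    # regression tree to the target with the current random effects removed and
    # (b) re-estimating a random intercept per object/group from the residuals.
    # A simplified stand-in for the method, not the authors' estimator.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def re_em_tree(X, y, groups, n_iter=10, max_depth=3):
        b = {g: 0.0 for g in np.unique(groups)}          # random intercepts
        tree = DecisionTreeRegressor(max_depth=max_depth)
        for _ in range(n_iter):
            offset = np.array([b[g] for g in groups])
            tree.fit(X, y - offset)                      # tree on the "fixed" part
            resid = y - tree.predict(X)
            for g in b:                                  # unshrunk group means
                b[g] = resid[groups == g].mean()
        return tree, b

    # Toy data: two groups with different intercepts around the same tree signal.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(200, 2))
    groups = np.repeat(["a", "b"], 100)
    y = 3 * (X[:, 0] > 0.5) + np.where(groups == "a", 2.0, -2.0) + rng.normal(0, 0.1, 200)
    tree, b = re_em_tree(X, y, groups)
    print(b)   # recovered random intercepts, roughly +2 and -2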

Proceedings ArticleDOI
Tero Karras1
25 Jun 2012
TL;DR: This work presents a novel approach that improves scalability by constructing the entire tree in parallel; its main contribution is an in-place algorithm for constructing binary radix trees, which is used as a building block for other types of trees.
Abstract: A number of methods for constructing bounding volume hierarchies and point-based octrees on the GPU are based on the idea of ordering primitives along a space-filling curve. A major shortcoming with these methods is that they construct levels of the tree sequentially, which limits the amount of parallelism that they can achieve. We present a novel approach that improves scalability by constructing the entire tree in parallel. Our main contribution is an in-place algorithm for constructing binary radix trees, which we use as a building block for other types of trees.
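A simplified, sequential sketch of the radix-tree building block (recursive here for readability; the paper's contribution is a per-node formulation that lets every internal node be built independently in parallel, and it also handles duplicate codes, which this toy ignores):

    # Build a binary radix tree over sorted Morton codes: each internal node
    # covers a contiguous code range and splits it where the highest differing
    # bit changes, found by binary search on the common-prefix length.
    def common_prefix(a, b, bits=30):
        x = a ^ b
        return bits if x == 0 else bits - x.bit_length()

    def find_split(codes, first, last):
        prefix = common_prefix(codes[first], codes[last])
        split, step = first, last - first
        while step > 1:                       # binary search for the last index
            step = (step + 1) // 2            # sharing > prefix bits with codes[first]
            mid = split + step
            if mid < last and common_prefix(codes[first], codes[mid]) > prefix:
                split = mid
        return split

    def build(codes, first, last):
        if first == last:
            return {"leaf": first}            # one primitive per leaf
        split = find_split(codes, first, last)
        return {"left": build(codes, first, split),
                "right": build(codes, split + 1, last)}

    codes = [0b000001, 0b000101, 0b001100, 0b101000, 0b110011]   # toy, already sorted, no duplicates
    tree = build(codes, 0, len(codes) - 1)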

Proceedings ArticleDOI
12 Aug 2012
TL;DR: This paper proposes a general branch-and-bound algorithm based on a (single) tree data structure and presents a dual-tree algorithm for the case where there are multiple queries, based on novel inner-product bounds.
Abstract: The problem of efficiently finding the best match for a query in a given set with respect to the Euclidean distance or the cosine similarity has been extensively studied. However, the closely related problem of efficiently finding the best match with respect to the inner-product has never been explored in the general setting to the best of our knowledge. In this paper we consider this problem and contrast it with the previous problems considered. First, we propose a general branch-and-bound algorithm based on a (single) tree data structure. Subsequently, we present a dual-tree algorithm for the case where there are multiple queries. Our proposed branch-and-bound algorithms are based on novel inner-product bounds. Finally we present a new data structure, the cone tree, for increasing the efficiency of the dual-tree algorithm. We evaluate our proposed algorithms on a variety of data sets from various applications, and exhibit up to five orders of magnitude improvement in query time over the naive search technique in some cases.
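The basic single-tree pruning rule can be sketched with the Cauchy-Schwarz inequality (our notation; the paper develops its bounds in detail, including the dual-tree and cone-tree cases):

    \max_{p \in B(c, r)} \langle q, p \rangle \;\le\; \langle q, c \rangle + r\,\lVert q \rVert

since any point p in the ball B(c, r) can be written as p = c + \delta with \lVert\delta\rVert \le r and \langle q, \delta\rangle \le \lVert q\rVert\,\lVert\delta\rVert. A subtree whose bound falls below the best inner product found so far can be pruned without visiting any of its points.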

Journal ArticleDOI
TL;DR: A water-resource-based approach to tree mortality is proposed that considers the tree as a complex organism with a distinct growth strategy; it provides insight into mortality mechanisms at the tree and landscape scales and presents promising avenues for modeling tree death from drought and temperature stress.

Proceedings ArticleDOI
01 Nov 2012
TL;DR: An experiment was conducted using the Weka application to compare the performance, in terms of tree-structure complexity and classification accuracy, of the J48, REPTree, PART, JRip, and Ridor algorithms on seven standard datasets from the UCI machine learning repository.
Abstract: The decision tree is one of the most popular and efficient techniques in data mining. This technique has been established and well explored by many researchers. However, some decision tree algorithms may produce a large tree structure that is difficult to understand, and misclassification of data often occurs in the learning process. Therefore, a decision tree algorithm that can produce a simple tree structure with high classification accuracy is needed for working with huge volumes of data. Pruning methods have been introduced to reduce the complexity of the tree structure without decreasing classification accuracy; one such method is Reduced Error Pruning (REP). To better understand pruning methods, an experiment was conducted using the Weka application to compare the performance, in terms of tree-structure complexity and classification accuracy, of the J48, REPTree, PART, JRip, and Ridor algorithms on seven standard datasets from the UCI machine learning repository. In data modeling, J48 and REPTree generate a tree structure as output, while PART, Ridor and JRip generate rules. In addition, J48, REPTree and PART use the REP method for pruning, while Ridor and JRip use improvements of the REP method, namely the IREP and RIPPER methods. The experimental results show that J48 and REPTree are competitive in producing better results: between J48 and REPTree, the average difference in classification accuracy is 7.1006% and the average difference in tree-structure complexity is 6.2857%. Among the rule-based algorithms, Ridor is the best compared with PART and JRip, achieving the highest classification accuracy on five of the seven datasets. An algorithm that produces high accuracy with a simple tree structure or simple rules can be regarded as the best decision tree algorithm.
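For readers who want to reproduce the accuracy-versus-complexity trade-off outside Weka, a rough scikit-learn analogue is sketched below (cost-complexity pruning rather than Reduced Error Pruning, and a stand-in dataset, so it mirrors the idea of the comparison rather than the study itself):

    # Rough analogue of the accuracy-vs-tree-size comparison: an unpruned
    # decision tree versus a cost-complexity-pruned one. scikit-learn has no
    # Reduced Error Pruning, so this only illustrates the trade-off.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)   # stand-in for the UCI datasets used in the study
    for name, clf in [("unpruned", DecisionTreeClassifier(random_state=0)),
                      ("pruned", DecisionTreeClassifier(random_state=0, ccp_alpha=0.02))]:
        acc = cross_val_score(clf, X, y, cv=10).mean()
        size = clf.fit(X, y).tree_.node_count
        print(f"{name:9s} accuracy={acc:.3f}  tree size={size}")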

Journal ArticleDOI
TL;DR: In this paper, the problem of constructing private classifiers using decision trees, within the framework of differential privacy, was studied and a differentially private decision tree ensemble algorithm based on random decision trees was proposed.
Abstract: In this paper, we study the problem of constructing private classifiers using decision trees, within the framework of differential privacy. We first present experimental evidence that creating a differentially private ID3 tree using differentially private low-level queries does not simultaneously provide good privacy and good accuracy, particularly for small datasets. In search of better privacy and accuracy, we then present a differentially private decision tree ensemble algorithm based on random decision trees. We demonstrate experimentally that this approach yields good prediction while maintaining good privacy, even for small datasets. We also present differentially private extensions of our algorithm to two settings: (1) new data is periodically appended to an existing database and (2) the database is horizontally or vertically partitioned between multiple users.
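A hedged sketch of the general recipe behind such an ensemble member (structure drawn independently of the data, Laplace noise on leaf class counts; the parameters and helper names are illustrative, not the authors' algorithm):

    # Sketch of one differentially private random decision tree: the structure
    # is drawn without looking at the data, so only the leaf class counts are
    # data-dependent; each record affects one count, so adding Laplace(1/epsilon)
    # noise to every count gives epsilon-differential privacy for this tree.
    import numpy as np

    rng = np.random.default_rng(0)

    def random_tree(n_features, depth):
        if depth == 0:
            return {"counts": None}
        return {"feature": int(rng.integers(n_features)),
                "threshold": float(rng.uniform(0, 1)),   # assumes features scaled to [0, 1]
                "left": random_tree(n_features, depth - 1),
                "right": random_tree(n_features, depth - 1)}

    def leaf_of(tree, x):
        while "feature" in tree:
            tree = tree["left"] if x[tree["feature"]] <= tree["threshold"] else tree["right"]
        return tree

    def fit_private(tree, X, y, n_classes, epsilon):
        leaves = []
        def init(t):
            if "feature" in t:
                init(t["left"]); init(t["right"])
            else:
                t["counts"] = np.zeros(n_classes); leaves.append(t)
        init(tree)
        for xi, yi in zip(X, y):
            leaf_of(tree, xi)["counts"][yi] += 1          # exact counts ...
        for leaf in leaves:
            leaf["counts"] += rng.laplace(scale=1.0 / epsilon, size=n_classes)  # ... then noise
        return tree

    X = rng.uniform(size=(200, 4)); y = (X[:, 0] > 0.5).astype(int)
    t = fit_private(random_tree(n_features=4, depth=3), X, y, n_classes=2, epsilon=1.0)
    print(leaf_of(t, X[0])["counts"])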

Proceedings ArticleDOI
10 Nov 2012
TL;DR: This work employs the disjoint-set data structure to break the access sequentiality of DBSCAN and uses a tree-based bottom-up approach to construct the clusters, which yields a better-balanced workload distribution.
Abstract: DBSCAN is a well-known density based clustering algorithm capable of discovering arbitrary shaped clusters and eliminating noise data. However, parallelization of DBSCAN is challenging as it exhibits an inherent sequential data access order. Moreover, existing parallel implementations adopt a master-slave strategy which can easily cause an unbalanced workload and hence result in low parallel efficiency. We present a new parallel DBSCAN algorithm (PDSDBSCAN) using graph algorithmic concepts. More specifically, we employ the disjoint-set data structure to break the access sequentiality of DBSCAN. In addition, we use a tree-based bottom-up approach to construct the clusters. This yields a better-balanced workload distribution. We implement the algorithm both for shared and for distributed memory. Using data sets containing up to several hundred million high-dimensional points, we show that PDSDBSCAN significantly outperforms the master-slave approach, achieving speedups up to 25.97 using 40 cores on shared memory architecture, and speedups up to 5,765 using 8,192 cores on distributed memory architecture.
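The disjoint-set structure referred to above is the standard union-find with path compression and union by rank; a minimal sketch of how density-connected core points would be merged (the DBSCAN-specific neighborhood queries are omitted):

    # Minimal union-find (disjoint-set): merging density-connected core points
    # through union() removes the need for DBSCAN's sequential cluster growing,
    # since the unions can be performed in any order.
    class DisjointSet:
        def __init__(self, n):
            self.parent = list(range(n))
            self.rank = [0] * n

        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra == rb:
                return
            if self.rank[ra] < self.rank[rb]:
                ra, rb = rb, ra
            self.parent[rb] = ra
            if self.rank[ra] == self.rank[rb]:
                self.rank[ra] += 1

    ds = DisjointSet(5)
    for a, b in [(0, 1), (1, 2), (3, 4)]:   # e.g. pairs of density-reachable core points
        ds.union(a, b)
    print([ds.find(i) for i in range(5)])   # two clusters: {0, 1, 2} and {3, 4}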

Journal ArticleDOI
TL;DR: The article presents four subcategories of IRTree models, all members of the family of generalized linear mixed models (GLMM). The first two are based on a tree representation for response categories: 1. linear response tree models and 2. nested response tree models; the last two are based on a tree representation for latent variables: 3. linear latent-variable tree models (e.g., models for change processes) and 4. nested latent-variable tree models (e.g., bi-factor models).
Abstract: A category of item response models is presented with two defining features: they all (i) have a tree representation, and (ii) are members of the family of generalized linear mixed models (GLMM). Because the models are based on trees, they are denoted as IRTree models. The GLMM nature of the models implies that they can all be estimated with the glmer function of the lme4 package in R. The aim of the article is to present four subcategories of models, the first two of which are based on a tree representation for response categories: 1. linear response tree models (e.g., missing response models), 2. nested response tree models (e.g., models for parallel observations regarding item responses such as agreement and certainty), while the last two are based on a tree representation for latent variables: 3. linear latent-variable tree models (e.g., models for change processes), and 4. nested latent-variable tree models (e.g., bi-factor models). The use of the glmer function is illustrated for all four subcategories. Simulated example data sets and two service functions useful in preparing the data for IRTree modeling with glmer are provided in the form of an R package, irtrees. For all four subcategories also a real data application is discussed.

Journal ArticleDOI
TL;DR: An efficient Bayesian uncertainty quantification framework using a novel treed Gaussian process model is developed, and its effectiveness in identifying discontinuities, local features and unimportant dimensions in the solution of stochastic differential equations is demonstrated numerically.

Journal ArticleDOI
TL;DR: This work introduces a shape-motion prototype-based approach for action recognition that enables robust action matching in challenging situations and allows automatic alignment of action sequences.
Abstract: A shape-motion prototype-based approach is introduced for action recognition. The approach represents an action as a sequence of prototypes for efficient and flexible action matching in long video sequences. During training, an action prototype tree is learned in a joint shape and motion space via hierarchical K-means clustering and each training sequence is represented as a labeled prototype sequence; then a look-up table of prototype-to-prototype distances is generated. During testing, based on a joint probability model of the actor location and action prototype, the actor is tracked while a frame-to-prototype correspondence is established by maximizing the joint probability, which is efficiently performed by searching the learned prototype tree; then actions are recognized using dynamic prototype sequence matching. Distance measures used for sequence matching are rapidly obtained by look-up table indexing, which is an order of magnitude faster than brute-force computation of frame-to-frame distances. Our approach enables robust action matching in challenging situations (such as moving cameras, dynamic backgrounds) and allows automatic alignment of action sequences. Experimental results demonstrate that our approach achieves recognition rates of 92.86 percent on a large gesture data set (with dynamic backgrounds), 100 percent on the Weizmann action data set, 95.77 percent on the KTH action data set, 88 percent on the UCF sports data set, and 87.27 percent on the CMU action data set.

Patent
07 Feb 2012
TL;DR: A framework for performing graphics animation and compositing operations has a layer tree for interfacing with the application and a render tree for interfacing with a render engine; layers in the layer tree can be content, windows, views, video, images, text, media, or any other type of object for a user interface of an application.
Abstract: A framework for performing graphics animation and compositing operations has a layer tree for interfacing with the application and a render tree for interfacing with a render engine. Layers in the layer tree can be content, windows, views, video, images, text, media, or any other type of object for a user interface of an application. The application commits change to the state of the layers of the layer tree. The application does not need to include explicit code for animating the changes to the layers. Instead, an animation is determined for animating the change in state. In determining the animation, the framework can define a set of predetermined animations based on motion, visibility, and transition. The determined animation is explicitly applied to the affected layers in the render tree. A render engine renders from the render tree into a frame buffer for display on the computer system. Those portions of the render tree that have changed relative to prior versions can be tracked to improve resource management.

Proceedings Article
03 Dec 2012
TL;DR: This paper introduces a tractable, sample-based method for approximate Bayes-optimal planning which exploits Monte-Carlo tree search and shows it working in an infinite state space domain which is qualitatively out of reach of almost all previous work in Bayesian exploration.
Abstract: Bayesian model-based reinforcement learning is a formally elegant approach to learning optimal behaviour under model uncertainty, trading off exploration and exploitation in an ideal way. Unfortunately, finding the resulting Bayes-optimal policies is notoriously taxing, since the search space becomes enormous. In this paper we introduce a tractable, sample-based method for approximate Bayes-optimal planning which exploits Monte-Carlo tree search. Our approach outperformed prior Bayesian model-based RL algorithms by a significant margin on several well-known benchmark problems - because it avoids expensive applications of Bayes rule within the search tree by lazily sampling models from the current beliefs. We illustrate the advantages of our approach by showing it working in an infinite state space domain which is qualitatively out of reach of almost all previous work in Bayesian exploration.

Journal ArticleDOI
TL;DR: This correspondence challenges the hypothesis that volcanic cooling caused widespread missing rings in tree-ring-based temperature reconstructions, pointing to problems in the tree-growth model used by Mann and colleagues, unaccounted-for uncertainty in volcanic forcing, and the absence of empirical evidence for misdated tree-ring chronologies.
Abstract: To the Editor — In their Letter, Mann and colleagues1 claim to have identified a discrepancy between the degree of volcanic cooling in climate model simulations and the analogous cooling indicated in a tree-ring-based Northern Hemisphere temperature reconstruction2, and attribute it to a putative temporary cessation of tree growth at some sites near the temperature limit for growth. They argue that this growth cessation would lead to missing rings in cool years, thus resulting in underestimation of cooling in the tree-ring record. This suggestion implies that periods of volcanic cooling could result in widespread chronological errors in tree-ring-based temperature reconstructions1,3. Mann and colleagues base their conclusions solely on the evidence of a tree-ring-growth model. Here we point to several factors that challenge this hypothesis of missing tree rings; specifically, we highlight problems in their implementation of the tree-ring model used1, a lack of consideration of uncertainty in the amplitude and spatial pattern of volcanic forcing and associated climate responses, and a lack of any empirical evidence for misdating of tree-ring chronologies. Several aspects of their tree-ring-growth simulations are erroneous. First, they use an algorithm that has not been tested for its ability to reflect actual observations (Supplementary Fig. 1), even though established growth models, such as the Vaganov–Shashkin model4,5, are available. They rely on a minimum growth temperature threshold of 10 °C that is incompatible with real-world observations. This condition is rarely met in regions near the limit of tree growth, where ring formation demonstrably occurs well below this temperature: there is abundant empirical evidence that the temperature limit for tree-ring formation is around 5 °C (refs 6,7). Mann and colleagues arbitrarily and without justification require 26 days with temperatures above their unrealistic threshold for ring formation. Their resulting growing season becomes unusually short, at 50–60 days rather than the more commonly observed 70–137 days4,7. Furthermore, they use a quadratic function to describe growth that has no basis in observation or theory, and they ignore any daylength and moisture constraints on growth. These assumptions all bias Mann and colleagues’ tree-growth model results1 towards erroneously producing missing tree rings. Reconstructing simulated temperatures in the same manner as Mann and colleagues, but using a well-tested tree-ring growth model5 and realistic parameters, provides no support for their hypothesis (Fig. 1). Instead we find good agreement between summertime temperatures reconstructed from pseudoproxies and those simulated with a climate model (CSM1.4)8 (Fig. 1a), for the whole record as well as in specific years following major volcanic eruptions (Fig. 1b–d). Mann and colleagues’ principal result arises from their failure to select a realistic minimum temperature for growth, use actual tree-ring chronology locations and recognize …

Journal ArticleDOI
TL;DR: An improved random forest algorithm for classifying text data, using a new feature-weighting method for subspace sampling together with a tree-selection method, can effectively reduce the subspace size and improve classification performance without increasing the error bound.
Abstract: This paper proposes an improved random forest algorithm for classifying text data. This algorithm is particularly designed for analyzing very high dimensional data with multiple classes, whose well-known representative is a text corpus. A novel feature weighting method and a tree selection method are developed and synergistically combined to make the random forest framework well suited to categorizing text documents with dozens of topics. With the new feature weighting method for subspace sampling and the tree selection method, we can effectively reduce the subspace size and improve classification performance without increasing the error bound. We apply the proposed method to six text data sets with diverse characteristics. The results demonstrate that the improved random forest outperforms popular text classification methods in terms of classification performance.
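A sketch of the weighted-subspace idea (chi-square scores and probability-proportional sampling are one plausible instantiation; the paper's exact weighting and tree-selection rules are not reproduced here):

    # Feature-weighted subspace sampling for a random forest on text: features
    # get weights (chi-square term-class association scores here, one plausible
    # choice), and each tree draws its candidate-feature subspace with
    # probability proportional to those weights instead of uniformly at random.
    import numpy as np
    from sklearn.feature_selection import chi2

    def weighted_subspace(X, y, subspace_size, rng):
        scores, _ = chi2(X, y)                   # needs non-negative features (e.g. term counts)
        weights = np.nan_to_num(scores)
        p = weights / weights.sum()
        return rng.choice(X.shape[1], size=subspace_size, replace=False, p=p)

    rng = np.random.default_rng(0)
    X = rng.integers(0, 3, size=(100, 50))       # toy term-count matrix
    y = rng.integers(0, 4, size=100)             # toy topic labels
    print(weighted_subspace(X, y, subspace_size=10, rng=rng))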

Book
15 Jun 2012
TL;DR: An entry for the book Tree Thinking: An Introduction to Phylogenetic Biology.