
Showing papers on "Tree (data structure) published in 2011"


Journal ArticleDOI
TL;DR: The current version of iTOL introduces numerous new features and greatly expands the number of supported data set types.
Abstract: Interactive Tree Of Life (http://itol.embl.de) is a web-based tool for the display, manipulation and annotation of phylogenetic trees. It is freely available and open to everyone. In addition to classical tree viewer functions, iTOL offers many novel ways of annotating trees with various additional data. Current version introduces numerous new features and greatly expands the number of supported data set types. Trees can be interactively manipulated and edited. A free personal account system is available, providing management and sharing of trees in user defined workspaces and projects. Export to various bitmap and vector graphics formats is supported. Batch access interface is available for programmatic access or inclusion of interactive trees into other web services.

1,446 citations


Journal ArticleDOI
TL;DR: This work presents an evolutionary placement algorithm (EPA) and a Web server for the rapid assignment of sequence fragments (short reads) to edges of a given phylogenetic tree under the maximum-likelihood model, and introduces both a slow but accurate and a fast but less accurate placement algorithm.
Abstract: We present an evolutionary placement algorithm (EPA) and a Web server for the rapid assignment of sequence fragments (short reads) to edges of a given phylogenetic tree under the maximum-likelihood model. The accuracy of the algorithm is evaluated on several real-world data sets and compared with placement by pair-wise sequence comparison, using edit distances and BLAST. We introduce a slow and accurate as well as a fast and less accurate placement algorithm. For the slow algorithm, we develop additional heuristic techniques that yield almost the same run times as the fast version with only a small loss of accuracy. When those additional heuristics are employed, the run time of the more accurate algorithm is comparable with that of a simple BLAST search for data sets with a high number of short query sequences. Moreover, the accuracy of the EPA is significantly higher, in particular when the sample of taxa in the reference topology is sparse or inadequate. Our algorithm, which has been integrated into RAxML, therefore provides an equally fast but more accurate alternative to BLAST for tree-based inference of the evolutionary origin and composition of short sequence reads. We are also actively developing a Web server that offers a freely available service for computing read placements on trees using the EPA.

451 citations


Journal ArticleDOI
TL;DR: The different algorithms implemented in REBOUND, the philosophy behind the code's structure and implementation-specific details of the different modules are discussed, and results of accuracy and scaling tests are presented which show that the code can run efficiently on both desktop machines and large computing clusters.
Abstract: REBOUND is a new multi-purpose N-body code which is freely available under an open-source license. It was designed for collisional dynamics such as planetary rings but can also solve the classical N-body problem. It is highly modular and can be customized easily to work on a wide variety of different problems in astrophysics and beyond. REBOUND comes with three symplectic integrators: leap-frog, the symplectic epicycle integrator (SEI) and a Wisdom-Holman mapping (WH). It supports open, periodic and shearing-sheet boundary conditions. REBOUND can use a Barnes-Hut tree to calculate both self-gravity and collisions. These modules are fully parallelized with MPI as well as OpenMP. The former makes use of a static domain decomposition and a distributed essential tree. Two new collision detection modules based on a plane-sweep algorithm are also implemented. The performance of the plane-sweep algorithm is superior to a tree code for simulations in which one dimension is much longer than the other two and in simulations which are quasi-two dimensional with less than one million particles. In this work, we discuss the different algorithms implemented in REBOUND, the philosophy behind the code's structure as well as implementation specific details of the different modules. We present results of accuracy and scaling tests which show that the code can run efficiently on both desktop machines and large computing clusters.
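The plane-sweep idea mentioned above can be illustrated with a short, self-contained sketch (illustrative only, not REBOUND's implementation; the particle data and radii are made up): particles are sorted along one axis and only pairs whose intervals overlap on that axis are tested for actual contact.

```python
# Plane-sweep collision detection sketch: sort particles along one axis and
# test only pairs whose x-intervals overlap (illustrative, not REBOUND's code).
import random

def sweep_collisions(particles):
    """particles: list of dicts with keys 'x', 'y', 'z', 'r'. Returns overlapping index pairs."""
    order = sorted(range(len(particles)), key=lambda i: particles[i]["x"] - particles[i]["r"])
    active, hits = [], []
    for i in order:
        pi = particles[i]
        xmin_i = pi["x"] - pi["r"]
        # Drop particles whose x-interval ends before this one begins.
        active = [j for j in active if particles[j]["x"] + particles[j]["r"] >= xmin_i]
        for j in active:
            pj = particles[j]
            d2 = sum((pi[k] - pj[k]) ** 2 for k in ("x", "y", "z"))
            if d2 <= (pi["r"] + pj["r"]) ** 2:
                hits.append((j, i))
        active.append(i)
    return hits

random.seed(1)
parts = [{"x": random.random(), "y": random.random(), "z": 0.0, "r": 0.05} for _ in range(200)]
print(len(sweep_collisions(parts)), "overlapping pairs")
```

The sweep keeps the active set small when one dimension is much longer than the others, which matches the regime in which the abstract reports the plane-sweep modules outperforming the tree code.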

433 citations


Book ChapterDOI
04 Dec 2011
TL;DR: In this article, the authors propose a novel technique in which they organize the O-RAM storage into a binary tree over data buckets, while moving data blocks obliviously along tree edges.
Abstract: Oblivious RAM is a useful primitive that allows a client to hide its data access patterns from an untrusted server in storage outsourcing applications. Until recently, most prior works on Oblivious RAM aim to optimize its amortized cost, while suffering from linear or even higher worst-case cost. Such poor worst-case behavior renders these schemes impractical in realistic settings, since a data access request can occasionally be blocked waiting for an unreasonably large number of operations to complete. This paper proposes novel Oblivious RAM constructions that achieves poly-logarithmic worst-case cost, while consuming constant client-side storage. To achieve the desired worst-case asymptotic performance, we propose a novel technique in which we organize the O-RAM storage into a binary tree over data buckets, while moving data blocks obliviously along tree edges.
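A highly simplified sketch of the binary-tree layout described above follows. It is a toy model only: the position map and buckets are kept in the clear, buckets are unbounded, and the oblivious eviction of blocks along tree edges that the construction relies on is omitted.

```python
# Toy sketch of O-RAM storage organized as a binary tree of buckets; the
# oblivious eviction along tree edges and all cryptography are omitted.
import random

class ToyTreeORAM:
    def __init__(self, depth):
        self.depth = depth                      # leaves are numbered 0 .. 2**depth - 1
        self.tree = {}                          # node id -> bucket: list of (block_id, data)
        self.position = {}                      # block_id -> leaf the block is assigned to

    def _path(self, leaf):
        node = leaf + 2 ** self.depth           # heap-style node ids, root = 1
        path = []
        while node >= 1:
            path.append(node)
            node //= 2
        return path

    def access(self, block_id, new_data=None):
        leaf = self.position.get(block_id, random.randrange(2 ** self.depth))
        data = None
        for node in self._path(leaf):           # read every bucket on the root-to-leaf path
            bucket = self.tree.setdefault(node, [])
            for k, (b, d) in enumerate(bucket):
                if b == block_id:
                    data = d
                    del bucket[k]
                    break
        if new_data is not None:
            data = new_data
        # Remap the block to a fresh random leaf and re-insert it near the root;
        # the real construction then obliviously percolates blocks down the tree.
        self.position[block_id] = random.randrange(2 ** self.depth)
        self.tree.setdefault(1, []).append((block_id, data))
        return data

oram = ToyTreeORAM(depth=4)
oram.access("b7", new_data=b"hello")
print(oram.access("b7"))
```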

383 citations


Journal ArticleDOI
TL;DR: The Monte-Carlo revolution in computer Go is surveyed, the key ideas that led to the success of MoGo and subsequent Go programs are outlined, and for the first time a comprehensive description, in theory and in practice, of this extended framework for Monte-Carlo tree search is provided.

375 citations


Journal ArticleDOI
TL;DR: In this paper, an approach is presented for predicting individual tree attributes, i.e., tree height, diameter at breast height (DBH) and stem volume, based on both physical and statistical features derived from airborne laser-scanning data, utilizing a new detection method for finding individual trees together with random forests as an estimation method.
Abstract: This paper depicts an approach for predicting individual tree attributes, i.e., tree height, diameter at breast height (DBH) and stem volume, based on both physical and statistical features derived from airborne laser-scanning data utilizing a new detection method for finding individual trees together with random forests as an estimation method. The random forests (also called regression forests) technique is a nonparametric regression method consisting of a set of individual regression trees. Tests of the method were performed, using 1476 trees in a boreal forest area in southern Finland and laser data with a density of 2.6 points per m². Correlation coefficients (R) between the observed and predicted values of 0.93, 0.79 and 0.87 for individual tree height, DBH and stem volume, respectively, were achieved, based on 26 laser-derived features. The corresponding relative root-mean-squared errors (RMSEs) were 10.03%, 21.35% and 45.77% (38% in best cases), which are similar to those obtained with the linear regression method, with maximum laser heights, laser-estimated DBH or crown diameters as predictors. With random forests, however, the forest models currently used for deriving the tree attributes are not needed. Based on the results, we conclude that the method is capable of providing a stable and consistent solution for determining individual tree attributes using small-footprint laser data.
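The estimation step can be illustrated with scikit-learn's random forest regressor; the feature matrix and heights below are synthetic stand-ins for the 26 laser-derived features and the field-measured attributes, not the paper's data.

```python
# Sketch of the estimation step with scikit-learn; X and the heights are
# synthetic stand-ins for the 26 laser-derived features and field measurements.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1476, 26))                                 # 26 features per detected tree
height = 15 + 2 * X[:, 0] + rng.normal(scale=1.5, size=1476)    # synthetic tree heights (m)

X_tr, X_te, y_tr, y_te = train_test_split(X, height, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)

r = np.corrcoef(y_te, pred)[0, 1]
rel_rmse = 100 * np.sqrt(np.mean((y_te - pred) ** 2)) / y_te.mean()
print(f"R = {r:.2f}, relative RMSE = {rel_rmse:.1f}%")
```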

324 citations


Journal ArticleDOI
TL;DR: In this article, a hierarchical computational structure is proposed to recognize emotions, mapping an input speech utterance into one of multiple emotion classes through successive layers of binary classifications.
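A minimal sketch of the hierarchical idea: a hand-built two-level tree of binary classifiers over synthetic features. The grouping into arousal levels, the feature set and the classifier choice are assumptions made for illustration, not the paper's actual structure.

```python
# Sketch of mapping an utterance to one of several emotion classes through
# successive binary classifications (illustrative tree, synthetic features).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))                      # stand-in acoustic features
y = rng.integers(0, 4, size=400)                    # 0 anger, 1 happiness, 2 sadness, 3 neutral

# Level 1: high-arousal {anger, happiness} vs low-arousal {sadness, neutral}.
arousal = LogisticRegression(max_iter=1000).fit(X, np.isin(y, [0, 1]))
# Level 2: one binary classifier inside each arousal group.
hi_mask, lo_mask = np.isin(y, [0, 1]), np.isin(y, [2, 3])
high = LogisticRegression(max_iter=1000).fit(X[hi_mask], y[hi_mask] == 0)
low = LogisticRegression(max_iter=1000).fit(X[lo_mask], y[lo_mask] == 2)

def classify(x):
    x = x.reshape(1, -1)
    if arousal.predict(x)[0]:
        return "anger" if high.predict(x)[0] else "happiness"
    return "sadness" if low.predict(x)[0] else "neutral"

print(classify(X[0]))
```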

291 citations


Journal ArticleDOI
TL;DR: This article categorizes and evaluates methods for automatic tree-crown detection and delineation, summarizes the commonalities of current algorithms, and outlines the new developments that can be expected in the future.
Abstract: Efficient forest management demands detailed, timely information. As high spatial resolution remotely sensed imagery becomes more available, there is a great potential for conducting high accuracy ...

286 citations


Journal ArticleDOI
TL;DR: This paper proposes an efficient and incremental stream mining algorithm that learns regression and model trees from possibly unbounded, high-speed and time-changing data streams, improves any-time performance and greatly reduces the cost of adaptation.
Abstract: The problem of real-time extraction of meaningful patterns from time-changing data streams is of increasing importance for the machine learning and data mining communities. Regression in time-changing data streams is a relatively unexplored topic, despite the apparent applications. This paper proposes an efficient and incremental stream mining algorithm which is able to learn regression and model trees from possibly unbounded, high-speed and time-changing data streams. The algorithm is evaluated extensively in a variety of settings involving artificial and real data. To the best of our knowledge there is no other general purpose algorithm for incremental learning regression/model trees able to perform explicit change detection and informed adaptation. The algorithm performs online and in real-time, observes each example only once at the speed of arrival, and maintains at any-time a ready-to-use model tree. The tree leaves contain linear models induced online from the examples assigned to them, a process with low complexity. The algorithm has mechanisms for drift detection and model adaptation, which enable it to maintain accurate and updated regression models at any time. The drift detection mechanism exploits the structure of the tree in the process of local change detection. As a response to local drift, the algorithm is able to update the tree structure only locally. This approach improves the any-time performance and greatly reduces the costs of adaptation.
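Two of the ingredients described above, a leaf that maintains a linear model online and a change detector that triggers local adaptation, can be sketched as follows. This is a simplified illustration; the learning rate, the Page-Hinkley-style detector and its thresholds are assumptions, not the paper's algorithm.

```python
# Sketch of two building blocks: a leaf updating a linear model online and a
# Page-Hinkley-style change detector (illustrative; not the paper's algorithm).
import numpy as np

class OnlineLinearLeaf:
    """Linear model in a tree leaf, trained online by stochastic gradient descent."""
    def __init__(self, n_features, lr=0.05):
        self.w = np.zeros(n_features + 1)        # last entry is the intercept
        self.lr = lr

    def predict(self, x):
        return float(self.w[:-1] @ x + self.w[-1])

    def update(self, x, y):
        err = self.predict(x) - y
        self.w[:-1] -= self.lr * err * x
        self.w[-1] -= self.lr * err
        return err

class PageHinkley:
    """Signals change when the cumulative deviation of the error exceeds a threshold."""
    def __init__(self, delta=0.05, threshold=50.0):
        self.delta, self.threshold = delta, threshold
        self.n, self.mean, self.cum, self.min_cum = 0, 0.0, 0.0, 0.0

    def add(self, error):
        self.n += 1
        self.mean += (error - self.mean) / self.n
        self.cum += error - self.mean - self.delta
        self.min_cum = min(self.min_cum, self.cum)
        return self.cum - self.min_cum > self.threshold

rng = np.random.default_rng(0)
leaf, detector = OnlineLinearLeaf(n_features=3), PageHinkley()
for t in range(5000):
    x = rng.normal(size=3)
    slope = 2.0 if t < 2500 else -2.0            # concept drift halfway through the stream
    y = slope * x[0] + rng.normal(scale=0.1)
    if detector.add(abs(leaf.update(x, y))):
        print("drift detected at example", t)
        leaf, detector = OnlineLinearLeaf(3), PageHinkley()   # adapt locally: rebuild
```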

280 citations


Proceedings ArticleDOI
24 Jul 2011
TL;DR: Functional matrix factorization, a novel cold-start recommendation method, solves the problem of initial interview construction within the context of learning user and item profiles; latent profiles are associated with each node of the interview tree, which allows the profiles to be gradually refined through the interview process based on user responses.
Abstract: A key challenge in recommender system research is how to effectively profile new users, a problem generally known as cold-start recommendation. Recently the idea of progressively querying user responses through an initial interview process has been proposed as a useful new user preference elicitation strategy. In this paper, we present functional matrix factorization (fMF), a novel cold-start recommendation method that solves the problem of initial interview construction within the context of learning user and item profiles. Specifically, fMF constructs a decision tree for the initial interview with each node being an interview question, enabling the recommender to query a user adaptively according to her prior responses. More importantly, we associate latent profiles with each node of the tree --- in effect restricting the latent profiles to be a function of possible answers to the interview questions --- which allows the profiles to be gradually refined through the interview process based on user responses. We develop an iterative optimization algorithm that alternates between decision tree construction and latent profile extraction as well as a regularization scheme that takes into account the tree structure. Experimental results on three benchmark recommendation data sets demonstrate that the proposed fMF algorithm significantly outperforms existing methods for cold-start recommendation.
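A structural sketch of the interview tree: each node asks about one item, holds a latent profile, and branches on the answers like/dislike/unknown. The profiles and item factors below are random placeholders rather than the learned fMF factorization.

```python
# Structural sketch of the interview tree described above (placeholder
# profiles and item factors, not the learned fMF model).
import numpy as np

class InterviewNode:
    def __init__(self, question_item, profile):
        self.question_item = question_item      # item the user is asked about
        self.profile = profile                  # latent user profile attached to this node
        self.children = {}                      # answer ("like"/"dislike"/"unknown") -> child

def interview(root, answer_fn):
    """Walk the tree using the new user's answers; return the profile of the node
    reached, so the profile is a function of the answers given."""
    node = root
    while node.children:
        node = node.children[answer_fn(node.question_item)]
    return node.profile

rng = np.random.default_rng(0)
k = 8                                           # latent dimensionality (illustrative)
root = InterviewNode("The Matrix", rng.normal(size=k))
for ans in ("like", "dislike", "unknown"):
    root.children[ans] = InterviewNode("Titanic", rng.normal(size=k))

item_factors = {"Alien": rng.normal(size=k), "Notting Hill": rng.normal(size=k)}
user_profile = interview(root, answer_fn=lambda item: "like")
scores = {name: float(user_profile @ v) for name, v in item_factors.items()}
print("recommend:", max(scores, key=scores.get))
```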

270 citations


Journal ArticleDOI
TL;DR: This work develops a novel recursive partitioning method for identifying subgroups of subjects with enhanced treatment effects based on a differential effect search algorithm, and provides guidance on key topics including generating multiple promising subgroups using different splitting criteria, choosing optimal values of complexity parameters via cross-validation, and addressing the Type I error rate inflation inherent in data mining applications using a resampling-based method.
Abstract: We propose a novel recursive partitioning method for identifying subgroups of subjects with enhanced treatment effects based on a differential effect search algorithm. The idea is to build a collection of subgroups by recursively partitioning a database into two subgroups at each parent group, such that the treatment effect within one of the two subgroups is maximized compared with the other subgroup. The process of data splitting continues until a predefined stopping condition has been satisfied. The method is similar to 'interaction tree' approaches that allow incorporation of a treatment-by-split interaction in the splitting criterion. However, unlike other tree-based methods, this method searches only within specific regions of the covariate space and generates multiple subgroups of potential interest. We develop this method and provide guidance on key topics of interest that include generating multiple promising subgroups using different splitting criteria, choosing optimal values of complexity parameters via cross-validation, and addressing Type I error rate inflation inherent in data mining applications using a resampling-based method. We evaluate the operating characteristics of the procedure using a simulation study and illustrate the method with a clinical trial example.
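A sketch of a single differential-effect split, assuming a binary treatment indicator and a continuous outcome; the full method applies such splits recursively with stopping rules and resampling-based error control. The covariates, effect sizes and minimum subgroup size below are made up.

```python
# One differential-effect split: pick the covariate cutoff that maximizes the
# difference in treatment effect between the two child subgroups (illustrative).
import numpy as np

def treatment_effect(y, treat):
    return y[treat == 1].mean() - y[treat == 0].mean()

def best_split(X, y, treat, min_size=30):
    best = None
    for j in range(X.shape[1]):
        for cut in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= cut
            groups = (left, ~left)
            if any(g.sum() < min_size or len(np.unique(treat[g])) < 2 for g in groups):
                continue
            gap = abs(treatment_effect(y[left], treat[left])
                      - treatment_effect(y[~left], treat[~left]))
            if best is None or gap > best[0]:
                best = (gap, j, cut)
    return best

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))
treat = rng.integers(0, 2, size=n)
y = 0.5 * treat * (X[:, 2] > 0) + rng.normal(size=n)   # enhanced effect only when x2 > 0
gap, feature, cut = best_split(X, y, treat)
print(f"best split: x{feature} <= {cut:.2f} (treatment-effect difference {gap:.2f})")
```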

Book
30 Apr 2011
TL;DR: The second edition of this book provides a tutorial and reference for creating, analyzing and presenting phylogenetic trees, covering sequence selection and alignment, the major tree-building methods and the software used at each step.
Abstract: INTRODUCTION: READ ME FIRST A Brief Overview of the Second Edition Learn More about the Principles Computer Programs Discussed and Where to Obtain Them Download Files and Utilities from the Website Some Conventions Used in This Book PART 1: TUTORIAL: CREATE A TREE! Why Create Phylogenetic Trees? Obtaining Related Sequences by a BLAST Search Creating the Multiple Alignment Phylogenetic Analysis Drawing the Tree Using TreeView Summary PART 2: BASIC ELEMENTS IN CREATING AND PRESENTING TREES Selecting Homologs: What Sequences Can Be Put onto a Single Tree? Fine Tuning Alignments Major Methods for Creating Trees Data Files Used to Illustrate Methods Using PAUP to Create Trees Creating Maximum Likelihood Protein Trees Using Puzzle Creating Bayesian Trees Using MrBayes Presenting and Printing Your Trees Choosing What Form of a Tree to Publish Making a Tree Pretty: Not Just a Cosmetic Matter DNA or Protein Phylogenies: Which Is Better? PART 3: ADVANCED ELEMENTS IN CONSTRUCTING TREES Reconstructing Ancestral DNA Sequences Using Protein Structure Information to Construct Very Deep Phylogenies Analyzing Trees for Evidence of Adaptive Evolution by Detecting Positive Selection in a Phylogeny PART 4: USING ALTERNATIVE SOFTWARE TO CONSTRUCT AND PRESENT TREES Using PAUP on a Windows or UNIX computer Using PHYLIP Appendix I: File Formats and Their Interconversion Appendix II: Printing Alignments Literature Cited Index to Major Programs Discussed Subject Index

Journal ArticleDOI
TL;DR: In this article, the problem of learning a latent tree graphical model where samples are available only from a subset of variables is studied, and two consistent and computationally efficient algorithms are proposed for learning minimal latent trees, that is, trees without any redundant hidden nodes.
Abstract: We study the problem of learning a latent tree graphical model where samples are available only from a subset of variables. We propose two consistent and computationally efficient algorithms for learning minimal latent trees, that is, trees without any redundant hidden nodes. Unlike many existing methods, the observed nodes (or variables) are not constrained to be leaf nodes. Our algorithms can be applied to both discrete and Gaussian random variables and our learned models are such that all the observed and latent variables have the same domain (state space). Our first algorithm, recursive grouping, builds the latent tree recursively by identifying sibling groups using so-called information distances. One of the main contributions of this work is our second algorithm, which we refer to as CLGrouping. CLGrouping starts with a pre-processing procedure in which a tree over the observed variables is constructed. This global step groups the observed nodes that are likely to be close to each other in the true latent tree, thereby guiding subsequent recursive grouping (or equivalent procedures such as neighbor-joining) on much smaller subsets of variables. This results in more accurate and efficient learning of latent trees. We also present regularized versions of our algorithms that learn latent tree approximations of arbitrary distributions. We compare the proposed algorithms to other methods by performing extensive numerical experiments on various latent tree graphical models such as hidden Markov models and star graphs. In addition, we demonstrate the applicability of our methods on real-world data sets by modeling the dependency structure of monthly stock returns in the S&P index and of the words in the 20 newsgroups data set.
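For Gaussian variables the information distances used by recursive grouping can be sketched directly: d_ij = -log|rho_ij|, and two observed leaves i, j share a hidden parent when d_ik - d_jk is (nearly) constant over all other nodes k. The small latent tree below is a synthetic example built only to show that test.

```python
# Information distances for Gaussian variables: d_ij = -log|rho_ij|.  For two
# observed leaves with the same hidden parent, d_0k - d_1k is (nearly) constant
# over every other node k; this is the test recursive grouping builds on.
import numpy as np

rng = np.random.default_rng(0)
n = 20000
h = rng.normal(size=n)                          # hidden node
g = 0.6 * h + 0.8 * rng.normal(size=n)          # second hidden node attached to h
x = np.stack([h + 0.5 * rng.normal(size=n),     # leaf 0, child of h
              h + 0.8 * rng.normal(size=n),     # leaf 1, child of h (sibling of leaf 0)
              h + 1.0 * rng.normal(size=n),     # leaf 2, child of h
              g + 0.7 * rng.normal(size=n)])    # leaf 3, child of g

d = -np.log(np.abs(np.corrcoef(x)))             # information distance matrix
print("d_0k - d_1k for k = 2, 3:", [round(d[0, k] - d[1, k], 3) for k in (2, 3)])
```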

Journal ArticleDOI
TL;DR: Tree-based, sequence similarity-based and synteny-based approaches can be combined into flexible hybrid methods; comparisons show that, despite conceptual differences, they produce similar sets of orthologs, especially at short evolutionary distances.
Abstract: Accurate inference of orthologous genes is a pre-requisite for most comparative genomics studies, and is also important for functional annotation of new genomes. Identification of orthologous gene sets typically involves phylogenetic tree analysis, heuristic algorithms based on sequence conservation, synteny analysis, or some combination of these approaches. The most direct tree-based methods typically rely on the comparison of an individual gene tree with a species tree. Once the two trees are accurately constructed, orthologs are straightforwardly identified by the definition of orthology as those homologs that are related by speciation, rather than gene duplication, at their most recent point of origin. Although ideal for the purpose of orthology identification in principle, phylogenetic trees are computationally expensive to construct for large numbers of genes and genomes, and they often contain errors, especially at large evolutionary distances. Moreover, in many organisms, in particular prokaryotes and viruses, evolution does not appear to have followed a simple ‘tree-like’ mode, which makes conventional tree reconciliation inapplicable. Other, heuristic methods identify probable orthologs as the closest homologous pairs or groups of genes in a set of organisms. These approaches are faster and easier to automate than tree-based methods, with efficient implementations provided by graph-theoretical algorithms enabling comparisons of thousands of genomes. Comparisons of these two approaches show that, despite conceptual differences, they produce similar sets of orthologs, especially at short evolutionary distances. Synteny also can aid in identification of orthologs. Often, tree-based, sequence similarity- and synteny-based approaches can be combined into flexible hybrid methods.
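The graph-based heuristic mentioned above can be sketched with reciprocal best hits: two genes are called orthologs when each is the other's highest-scoring hit. The similarity scores below are toy values; real pipelines add clustering, score thresholds and in-paralog handling.

```python
# Sketch of the heuristic "closest homologous pairs" idea: reciprocal best hits.
def reciprocal_best_hits(scores_ab):
    """scores_ab: dict (gene_a, gene_b) -> similarity score between genomes A and B."""
    best_a, best_b = {}, {}
    for (a, b), s in scores_ab.items():
        if s > best_a.get(a, (None, -1))[1]:
            best_a[a] = (b, s)
        if s > best_b.get(b, (None, -1))[1]:
            best_b[b] = (a, s)
    # Keep a pair only when each gene is the other's best hit.
    return sorted((a, b) for a, (b, _) in best_a.items() if best_b.get(b, (None,))[0] == a)

scores = {("a1", "b1"): 95, ("a1", "b2"): 40, ("a2", "b2"): 88,
          ("a3", "b2"): 60, ("a3", "b3"): 40}
print(reciprocal_best_hits(scores))   # [('a1', 'b1'), ('a2', 'b2')]
```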


Journal ArticleDOI
TL;DR: Experimental results show that the proposed approach performs better than Liu et al.'s two-phase algorithm in execution time; the numbers of tree nodes generated by three different item ordering methods are also compared, with results showing that frequency ordering produces fewer tree nodes than the other two.
Abstract: Research highlights: (1) In this paper, the high utility pattern tree (HUP tree) is designed. (2) The HUP-growth mining algorithm is proposed to derive high utility patterns effectively and efficiently. (3) The proposed approach integrates the previous two-phase procedure for utility mining and the FP-tree concept to utilize the downward-closure property and generate a compressed tree structure. In the past, many algorithms were proposed to mine association rules, most of which were based on item frequency values. Considering a customer may buy many copies of an item and each item may have different profits, mining frequent patterns from a traditional database is not suitable for some real-world applications. Utility mining was thus proposed to consider costs, profits and other measures according to user preference. In this paper, the high utility pattern tree (HUP tree) is designed and the HUP-growth mining algorithm is proposed to derive high utility patterns effectively and efficiently. The proposed approach integrates the previous two-phase procedure for utility mining and the FP-tree concept to utilize the downward-closure property and generate a compressed tree structure. Experimental results also show that the proposed approach has a better performance than Liu et al.'s two-phase algorithm in execution time. Finally, the numbers of tree nodes generated from three different item ordering methods are also compared, with results showing that the frequency ordering produces fewer tree nodes than the other two.
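The compressed tree structure can be sketched with a basic FP-tree-style insertion in which items are sorted by descending frequency; the HUP tree additionally carries utility information at its nodes. The transactions below are made up.

```python
# Sketch of the compressed tree: FP-tree-style insertion with frequency ordering.
from collections import Counter

class Node:
    def __init__(self, item):
        self.item, self.count, self.children = item, 0, {}

def build_tree(transactions):
    freq = Counter(item for t in transactions for item in t)
    root = Node(None)
    for t in transactions:
        ordered = sorted(t, key=lambda i: (-freq[i], i))   # frequency ordering
        node = root
        for item in ordered:
            node = node.children.setdefault(item, Node(item))
            node.count += 1
    return root

def count_nodes(node):
    return 1 + sum(count_nodes(c) for c in node.children.values())

transactions = [{"a", "b", "c"}, {"a", "c"}, {"a", "b", "d"}, {"b", "c"}, {"a", "c", "e"}]
tree = build_tree(transactions)
print("tree nodes (excluding root):", count_nodes(tree) - 1)
```

Sorting shared, frequent items first makes prefixes overlap, which is why frequency ordering tends to produce fewer tree nodes than the other orderings compared in the paper.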

Proceedings Article
12 Dec 2011
TL;DR: A novel approach to efficiently learn a label tree for large-scale classification with many classes, achieving less training time and more balanced trees compared to the previous state of the art by Bengio et al.
Abstract: We present a novel approach to efficiently learn a label tree for large scale classification with many classes. The key contribution of the approach is a technique to simultaneously determine the structure of the tree and learn the classifiers for each node in the tree. This approach also allows fine grained control over the efficiency vs accuracy trade-off in designing a label tree, leading to more balanced trees. Experiments are performed on large scale image classification with 10184 classes and 9 million images. We demonstrate significant improvements in test accuracy and efficiency with less training time and more balanced trees compared to the previous state of the art by Bengio et al.

Proceedings ArticleDOI
09 May 2011
TL;DR: iSAM2 is a fully incremental, graph-based version of incremental smoothing and mapping (iSAM), based on a novel graphical model-based interpretation of incremental sparse matrix factorization methods, afforded by the recently introduced Bayes tree data structure.
Abstract: We present iSAM2, a fully incremental, graph-based version of incremental smoothing and mapping (iSAM). iSAM2 is based on a novel graphical model-based interpretation of incremental sparse matrix factorization methods, afforded by the recently introduced Bayes tree data structure. The original iSAM algorithm incrementally maintains the square root information matrix by applying matrix factorization updates. We analyze the matrix updates as simple editing operations on the Bayes tree and the conditional densities represented by its cliques. Based on that insight, we present a new method to incrementally change the variable ordering which has a large effect on efficiency. The efficiency and accuracy of the new method is based on fluid relinearization, the concept of selectively relinearizing variables as needed. This allows us to obtain a fully incremental algorithm without any need for periodic batch steps. We analyze the properties of the resulting algorithm in detail, and show on various real and simulated datasets that the iSAM2 algorithm compares favorably with other recent mapping algorithms in both quality and efficiency.

Journal ArticleDOI
TL;DR: Because the list structure is parsimonious, it can be extended with additional information, which is used to develop a new approach for dealing with non-stationary training images; the list also allows one to parallelize the part of the algorithm in which the conditional probability density function is computed.
Abstract: Among the techniques used to simulate categorical variables, multiple-point statistics is becoming very popular because it allows the user to provide an explicit conceptual model via a training image. In classic implementations, the multiple-point statistics are inferred from the training image by storing all the observed patterns of a certain size in a tree structure. This type of algorithm has the advantage of being fast to apply, but it presents some critical limitations. In particular, a tree is extremely RAM demanding. For three-dimensional problems with numerous facies, large templates cannot be used. Complex structures are then difficult to simulate. In this paper, we propose to replace the tree by a list. This structure requires much less RAM. It has three main advantages. First, it allows for the use of larger templates. Second, the list structure being parsimonious, it can be extended to include additional information. Here, we show how this can be used to develop a new approach for dealing with non-stationary training images. Finally, an interesting aspect of the list is that it allows one to parallelize the part of the algorithm in which the conditional probability density function is computed. This is especially important for large problems that can be solved on clusters of PCs with distributed memory or on multicore machines with shared memory.
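The list idea can be sketched by storing every template pattern of the training image in a plain Python list and estimating the conditional probability of the central node by scanning it. The toy binary training image and 4-neighbour template below are illustrative; the actual algorithm adds compact encoding, non-stationarity handling and parallel scanning.

```python
# Sketch of list-based pattern storage: collect every template pattern of the
# training image in a list, then estimate the conditional pdf of the centre
# value given the observed neighbours by scanning the list.
from collections import Counter
import numpy as np

def extract_patterns(ti, offsets):
    """ti: 2-D array of facies codes; offsets: relative (di, dj) neighbour positions."""
    patterns = []
    for i in range(1, ti.shape[0] - 1):
        for j in range(1, ti.shape[1] - 1):
            neigh = tuple(ti[i + di, j + dj] for di, dj in offsets)
            patterns.append((neigh, ti[i, j]))
    return patterns

def conditional_pdf(patterns, observed):
    """observed: tuple of neighbour values, with None for not-yet-simulated positions."""
    counts = Counter(centre for neigh, centre in patterns
                     if all(o is None or o == v for o, v in zip(observed, neigh)))
    total = sum(counts.values())
    return {facies: c / total for facies, c in counts.items()} if total else {}

rng = np.random.default_rng(0)
ti = (rng.random((50, 50)) < 0.3).astype(int)           # toy binary training image
offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]             # 4-neighbour template
patterns = extract_patterns(ti, offsets)
print(conditional_pdf(patterns, observed=(1, 1, None, None)))
```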

Posted Content
TL;DR: RTED is a robust tree edit distance algorithm that is both efficient and worst-case optimal: its asymptotic complexity is smaller than or equal to that of the best competitors for any input instance.
Abstract: We consider the classical tree edit distance between ordered labeled trees, which is defined as the minimum-cost sequence of node edit operations that transform one tree into another. The state-of-the-art solutions for the tree edit distance are not satisfactory. The main competitors in the field either have optimal worst-case complexity, but the worst case happens frequently, or they are very efficient for some tree shapes, but degenerate for others. This leads to unpredictable and often infeasible runtimes. There is no obvious way to choose between the algorithms. In this paper we present RTED, a robust tree edit distance algorithm. The asymptotic complexity of RTED is smaller or equal to the complexity of the best competitors for any input instance, i.e., RTED is both efficient and worst-case optimal. We introduce the class of LRH (Left-Right-Heavy) algorithms, which includes RTED and the fastest tree edit distance algorithms presented in literature. We prove that RTED outperforms all previously proposed LRH algorithms in terms of runtime complexity. In our experiments on synthetic and real world data we empirically evaluate our solution and compare it to the state-of-the-art.

Journal ArticleDOI
TL;DR: In this article, the authors present a non-technical account of developments in tree-based methods for the analysis of survival data with censoring, from the initial extensions of basic tree methodologies to censored data through more recent work.
Abstract: This paper presents a non-technical account of the developments in tree-based methods for the analysis of survival data with censoring. This review describes the initial developments, which mainly extended existing basic tree methodologies to censored data, as well as more recent work. We also cover more complex models, more specialized methods, and more specific problems such as multivariate data, the use of time-varying covariates, discrete-scale survival data, and ensemble methods applied to survival trees. A data example is used to illustrate some methods that are implemented in R.

01 May 2011
TL;DR: This work proposes two consistent and computationally efficient algorithms for learning minimal latent trees, that is, trees without any redundant hidden nodes, and applies these algorithms to both discrete and Gaussian random variables.

Journal ArticleDOI
01 Dec 2011
TL;DR: The paper introduces the class of LRH (Left-Right-Heavy) algorithms, which includes RTED and the fastest tree edit distance algorithms presented in the literature, and proves that RTED outperforms all previously proposed LRH algorithms in terms of runtime complexity.
Abstract: We consider the classical tree edit distance between ordered labeled trees, which is defined as the minimum-cost sequence of node edit operations that transform one tree into another. The state-of-the-art solutions for the tree edit distance are not satisfactory. The main competitors in the field either have optimal worst-case complexity, but the worst case happens frequently, or they are very efficient for some tree shapes, but degenerate for others. This leads to unpredictable and often infeasible runtimes. There is no obvious way to choose between the algorithms.In this paper we present RTED, a robust tree edit distance algorithm. The asymptotic complexity of RTED is smaller or equal to the complexity of the best competitors for any input instance, i.e., RTED is both efficient and worst-case optimal. We introduce the class of LRH (Left-Right-Heavy) algorithms, which includes RTED and the fastest tree edit distance algorithms presented in literature. We prove that RTED outperforms all previously proposed LRH algorithms in terms of runtime complexity. In our experiments on synthetic and real world data we empirically evaluate our solution and compare it to the state-of-the-art.
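The recurrence that all such algorithms evaluate can be sketched naively: decompose ordered forests at their rightmost roots and memoize the subproblems. This is workable only for small trees; RTED's contribution is choosing the decomposition strategy so that the number of subproblems never degenerates, which the sketch does not attempt.

```python
# Naive memoized version of the forest edit-distance recurrence (ordered trees,
# unit costs); illustrative only and practical just for small trees.
from functools import lru_cache

# A tree is (label, (child, child, ...)); a forest is a tuple of trees.

@lru_cache(maxsize=None)
def forest_dist(f1, f2):
    if not f1 and not f2:
        return 0
    if not f1:
        w = f2[-1]
        return forest_dist((), f2[:-1] + w[1]) + 1            # insert node w
    if not f2:
        v = f1[-1]
        return forest_dist(f1[:-1] + v[1], ()) + 1            # delete node v
    v, w = f1[-1], f2[-1]
    rename = 0 if v[0] == w[0] else 1
    return min(
        forest_dist(f1[:-1] + v[1], f2) + 1,                  # delete rightmost root of f1
        forest_dist(f1, f2[:-1] + w[1]) + 1,                  # insert rightmost root of f2
        forest_dist(f1[:-1], f2[:-1]) + forest_dist(v[1], w[1]) + rename,  # map v to w
    )

def tree_edit_distance(t1, t2):
    return forest_dist((t1,), (t2,))

a = ("f", (("a", ()), ("b", (("c", ()),))))
b = ("f", (("a", ()), ("c", ())))
print(tree_edit_distance(a, b))    # 1: deleting "b" lifts its child "c" up to "f"
```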

Patent
09 Jun 2011
TL;DR: In this article, a gateway is configured to receive computer code from a non-trusted entity via the network; the gateway builds a tree representing the computer code and analyzes its statements to identify symbol data.
Abstract: Analyzing computer code using a tree is described. For example, a client device generates a data request for retrieving data from a non-trusted entity via a network. A gateway is communicatively coupled to the client device and to the network. The gateway is configured to receive computer code from the non-trusted entity via the network. The gateway builds a tree representing the computer code. The tree has one or more nodes. A node of the tree represents a statement from the computer code. The gateway analyzes the statement to identify symbol data. The symbol data describes a name of the variable and the value of the variable. The gateway stores the symbol data in a symbol table.
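The same idea can be sketched with Python's own ast module: parse untrusted code into a tree of statement nodes, then record variable names and literal values in a symbol table. The patent's gateway is language-agnostic; the language and helper below are illustrative only.

```python
# Sketch: parse untrusted code into a tree, visit assignment statements, and
# record variable names and (literal) values in a symbol table.
import ast

def build_symbol_table(source):
    table = {}
    tree = ast.parse(source)                    # one node per statement/expression
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name):
                    try:
                        value = ast.literal_eval(node.value)   # only safe literal values
                    except (ValueError, SyntaxError):
                        value = "<non-literal>"
                    table[target.id] = value
    return table

untrusted = "url = 'http://example.com'\nretries = 3\npayload = fetch(url)"
print(build_symbol_table(untrusted))
# {'url': 'http://example.com', 'retries': 3, 'payload': '<non-literal>'}
```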

Proceedings Article
12 Dec 2011
TL;DR: This paper shows that if, instead of a flat partitioning, the image is represented by a hierarchical segmentation tree, then the resulting energy combining unary and boundary terms can still be optimized using graph cut (with all the corresponding benefits of global optimality and efficiency).
Abstract: Graph cut optimization is one of the standard workhorses of image segmentation since for binary random field representations of the image, it gives globally optimal results and there are efficient polynomial time implementations. Often, the random field is applied over a flat partitioning of the image into non-intersecting elements, such as pixels or super-pixels. In the paper we show that if, instead of a flat partitioning, the image is represented by a hierarchical segmentation tree, then the resulting energy combining unary and boundary terms can still be optimized using graph cut (with all the corresponding benefits of global optimality and efficiency). As a result of such inference, the image gets partitioned into a set of segments that may come from different layers of the tree. We apply this formulation, which we call the pylon model, to the task of semantic segmentation where the goal is to separate an image into areas belonging to different semantic classes. The experiments highlight the advantage of inference on a segmentation tree (over a flat partitioning) and demonstrate that the optimization in the pylon model is able to flexibly choose the level of segmentation across the image. Overall, the proposed system has superior segmentation accuracy on several datasets (Graz-02, Stanford background) compared to previously suggested approaches.

Journal ArticleDOI
TL;DR: In this article, the authors used repeat photography, dendrochronological analysis, field observations along elevational transects and historical documents to study tree line dynamics in a sub-Arctic model area at different temporal and spatial scales.
Abstract: Aim: Models project that climate warming will cause the tree line to move to higher elevations in alpine areas and more northerly latitudes in Arctic environments. We aimed to document changes or stability of the tree line in a sub-Arctic model area at different temporal and spatial scales, and particularly to clarify the ambiguity that currently exists about tree line dynamics and their causes. Location: The study was conducted in the Torneträsk area in northern Sweden where climate warmed by 2.5 degrees C between 1913 and 2006. Mountain birch (Betula pubescens ssp. czerepanovii) sets the alpine tree line. Methods: We used repeat photography, dendrochronological analysis, field observations along elevational transects and historical documents to study tree line dynamics. Results: Since 1912, only four out of eight tree line sites had advanced: on average the tree line had shifted 24 m upslope (+0.2 m year⁻¹ assuming linear shifts). Maximum tree line advance was +145 m (+1.5 m year⁻¹ in elevation and +2.7 m year⁻¹ in actual distance), whereas maximum retreat was 120 m downslope. Counter-intuitively, tree line advance was most pronounced during the cooler late 1960s and 1970s. Tree establishment and tree line advance were significantly correlated with periods of low reindeer (Rangifer tarandus) population numbers. A decreased anthropozoogenic impact since the early 20th century was found to be the main factor shaping the current tree line ecotone and its dynamics. In addition, episodic disturbances by moth outbreaks and geomorphological processes resulted in descent and long-term stability of the tree line position, respectively. Main conclusions: In contrast to what is generally stated in the literature, this study shows that in a period of climate warming, disturbance may not only determine when tree line advance will occur but if tree line advance will occur at all. In the case of non-climatic climax tree lines, such as those in our study area, both climate-driven model projections of future tree line positions and the use of the tree line position for bioclimatic monitoring should be used with caution.

Proceedings ArticleDOI
03 Oct 2011
TL;DR: This paper uses the German Traffic Sign Benchmark data set to evaluate the performance of K-d trees and Random Forests for traffic sign classification using different size Histogram of Oriented Gradients (HOG) descriptors and Distance Transforms.
Abstract: In this paper, we evaluate the performance of K-d trees and Random Forests for traffic sign classification using different size Histogram of Oriented Gradients (HOG) descriptors and Distance Transforms. We use the German Traffic Sign Benchmark data set [1] containing 43 classes and more than 50,000 images. The K-d tree is fast to build and search in. We combine the tree classifiers with the HOG descriptors as well as the Distance Transforms and achieve classification rates of up to 97% and 81.8% respectively.
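A sketch of the HOG plus K-d tree pipeline using scikit-image and scikit-learn; random arrays stand in for the benchmark images, and the HOG parameters are illustrative rather than the descriptor sizes evaluated in the paper.

```python
# Sketch of the HOG + K-d tree pipeline (random images stand in for the
# German Traffic Sign Benchmark; parameters are illustrative).
import numpy as np
from skimage.feature import hog
from sklearn.neighbors import KDTree

rng = np.random.default_rng(0)
train_imgs = rng.random((100, 40, 40))             # stand-in 40x40 grayscale sign images
train_labels = rng.integers(0, 43, size=100)        # 43 traffic sign classes

def descriptor(img):
    return hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

X = np.array([descriptor(im) for im in train_imgs])
tree = KDTree(X)                                    # fast to build and to search in

query = rng.random((40, 40))
dist, idx = tree.query(descriptor(query).reshape(1, -1), k=1)
print("predicted class:", train_labels[idx[0, 0]])
```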


Proceedings ArticleDOI
16 Jul 2011
TL;DR: In this paper, the authors address the problem of optimal path finding for multiple agents where agents must not collide and their total travel cost should be minimized, and present a novel formalization for this problem which includes a search tree called the increasing cost tree (ICT) and a corresponding search algorithm that finds optimal solutions.
Abstract: We address the problem of optimal path finding for multiple agents where agents must not collide and their total travel cost should be minimized. Previous work used traditional single-agent search variants of the A* algorithm. We present a novel formalization for this problem which includes a search tree called the increasing cost tree (ICT) and a corresponding search algorithm that finds optimal solutions. We analyze this new formalization and compare it to the previous state-of-the-art A*-based approach. Experimental results on various domains show the benefits and drawbacks of this approach. A speedup of up to 3 orders of magnitude was obtained in a number of cases.
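The high-level ICT search can be sketched as a breadth-first enumeration of per-agent cost vectors; the low-level test for a conflict-free joint plan with exactly those costs is problem-specific and is stubbed out below with a toy predicate.

```python
# Sketch of the high-level increasing cost tree (ICT) search: nodes are vectors
# of per-agent path costs, each child increases one agent's cost by 1, and the
# tree is searched breadth-first so cheaper total costs are tried first.
from collections import deque

def ict_search(optimal_costs, has_conflict_free_plan, max_extra=10):
    root = tuple(optimal_costs)
    frontier, seen = deque([root]), {root}
    while frontier:
        costs = frontier.popleft()
        if has_conflict_free_plan(costs):          # low-level check, stubbed out here
            return costs
        if sum(costs) - sum(optimal_costs) >= max_extra:
            continue
        for i in range(len(costs)):                # children: one agent pays one more step
            child = costs[:i] + (costs[i] + 1,) + costs[i + 1:]
            if child not in seen:
                seen.add(child)
                frontier.append(child)
    return None

# Toy stand-in for the low-level check: pretend a conflict-free plan exists
# once agent 1 is allowed one extra step to wait out a collision.
print(ict_search((4, 6, 3), lambda c: c[1] >= 7))   # (4, 7, 3)
```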

Book ChapterDOI
01 Jan 2011
TL;DR: This chapter describes the first CUDA implementation of the classical Barnes Hut n-body algorithm that runs entirely on the GPU, concluding that GPUs can be used to accelerate irregular codes, not just regular codes.
Abstract: This chapter describes the first CUDA implementation of the classical Barnes Hut n-body algorithm that runs entirely on the GPU. The Barnes Hut force-calculation algorithm is widely used in n-body simulations such as modeling the motion of galaxies. It hierarchically decomposes the space around the bodies into successively smaller boxes, called cells, and computes summary information for the bodies contained in each cell, allowing the algorithm to quickly approximate the forces (e.g., gravitational, electric, or magnetic) that the n bodies induce upon each other. The Barnes Hut algorithm is challenging to implement efficiently in CUDA because it repeatedly builds and traverses an irregular tree-based data structure, it performs a lot of pointer-chasing memory operations, and it is typically expressed recursively. The implementation exploits GPU-specific operations such as thread-voting functions to greatly improve performance and makes use of fence instructions to implement lightweight synchronization without atomic operations. The main conclusion of the work is that GPUs can be used to accelerate irregular codes, not just regular codes. However, a great deal of programming effort is required to achieve good performance. The biggest performance win, though, came from turning some of the unique architectural features of GPUs, which are often regarded as performance hurdles for irregular codes, into assets. Future work will focus on writing high-performing CUDA implementations for other irregular algorithms.
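A compact 2-D Python sketch of the Barnes-Hut idea the chapter builds on: a quadtree of cells, each storing total mass and centre of mass, with distant cells approximated by those summaries. The opening angle and units are arbitrary; the chapter's CUDA implementation is far more involved.

```python
# Compact 2-D Barnes-Hut sketch: build a quadtree of cells holding summary
# information (total mass, centre of mass) and approximate distant cells by it.
import numpy as np

class Cell:
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half         # cell centre and half-size
        self.mass, self.com = 0.0, np.zeros(2)             # summary information
        self.body, self.children = None, None

    def _child(self, pos):
        return self.children[2 * (pos[0] > self.cx) + (pos[1] > self.cy)]

    def insert(self, pos, m):
        self.com = (self.com * self.mass + pos * m) / (self.mass + m)
        self.mass += m
        if self.children is None and self.body is None:    # empty leaf: keep the body here
            self.body = (pos, m)
            return
        if self.children is None:                          # occupied leaf: subdivide
            self.children = [Cell(self.cx + dx * self.half / 2,
                                  self.cy + dy * self.half / 2, self.half / 2)
                             for dx in (-1, 1) for dy in (-1, 1)]
            old_pos, old_m = self.body
            self.body = None
            self._child(old_pos).insert(old_pos, old_m)
        self._child(pos).insert(pos, m)

def accel(cell, pos, theta=0.5, eps=1e-3):
    """Gravitational acceleration (G = 1) on a body at `pos`."""
    if cell.mass == 0.0:
        return np.zeros(2)
    d = cell.com - pos
    r = np.hypot(*d)
    if cell.children is None or 2 * cell.half / (r + eps) < theta:
        return np.zeros(2) if r < eps else cell.mass * d / r ** 3
    return sum((accel(c, pos, theta, eps) for c in cell.children), np.zeros(2))

rng = np.random.default_rng(0)
bodies = rng.random((500, 2))
root = Cell(0.5, 0.5, 0.5)
for p in bodies:
    root.insert(p, 1.0)
print("approximate acceleration on body 0:", accel(root, bodies[0]))
```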