
Showing papers on "Breadth-first search published in 2003"


Proceedings ArticleDOI
06 Oct 2003
TL;DR: This work proposes a practical parallel on-the-fly algorithm for enumerative LTL (linear temporal logic) model checking on a cluster of workstations communicating via MPI (message passing interface), with promising experimental results.
Abstract: We propose a practical parallel on-the-fly algorithm for enumerative LTL (linear temporal logic) model checking. The algorithm is designed for a cluster of workstations communicating via MPI (message passing interface). The detection of cycles (faulty runs) effectively employs so-called back-level edges. In particular, a parallel level-synchronized breadth-first search of the graph is performed to discover back-level edges. For each level, the back-level edges are checked in parallel by a nested depth-first search to confirm or refute the presence of a cycle. Several optimizations of the basic algorithm are presented, and the advantages and drawbacks of their application to distributed LTL model checking are discussed. An experimental implementation of the algorithm shows promising results.
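
To make the notion of back-level edges concrete, here is a minimal sequential sketch (the paper's algorithm is distributed over MPI; the graph encoding and names below are illustrative):

```python
from collections import deque

def back_level_edges(graph, start):
    """Level-synchronized BFS. An edge (u, v) is a back-level edge when
    v lies on the same or an earlier BFS level than u; every cycle in
    the graph must contain at least one such edge."""
    level = {start: 0}
    frontier = deque([start])
    back = []
    while frontier:
        u = frontier.popleft()
        for v in graph.get(u, ()):
            if v not in level:
                level[v] = level[u] + 1      # first visit: next level
                frontier.append(v)
            elif level[v] <= level[u]:       # candidate edge closing a cycle
                back.append((u, v))
    return back

# Each collected edge would then seed a nested DFS to confirm or refute a cycle.
print(back_level_edges({0: [1], 1: [2], 2: [0, 1]}, 0))  # [(2, 0), (2, 1)]
```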

71 citations


Book ChapterDOI
TL;DR: This work presents a self-stabilizing loop-free routing algorithm that is also route preserving, and guarantees self-stabilization for many metrics (such as minimum distance, shortest path, best transmitter, and depth-first search metrics) by reusing previous results on r-operators.
Abstract: A distributed system is self-stabilizing if it returns to a legitimate state in a finite number of steps regardless of the initial state, and the system remains in a legitimate state until another fault occurs. A routing algorithm is loop-free if, once a path has been constructed between two processors p and q, any edge cost change induces a modification of the routing tables in such a way that at any time, there always exists a path from p to q. We present a self-stabilizing loop-free routing algorithm that is also route preserving. This last property means that, once a tree has been constructed, any message sent to the root is received in a bounded amount of time, even in the presence of continuous edge cost changes. Also, unlike previous approaches, we do not require that a bound on the network diameter be known to the processors that perform the routing algorithm. We guarantee self-stabilization for many metrics (such as minimum distance, shortest path, best transmitter, and depth-first search metrics) by reusing previous results on r-operators.
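
The r-operator framework generalizes the familiar distance-vector fixed point. As a rough illustration only (this is not the paper's self-stabilizing protocol), the sketch below relaxes a routing table for the min/+ metric, i.e. shortest paths, starting from an arbitrary table:

```python
import math

def relax_to_fixpoint(neighbors, cost, root):
    """Recompute each node's value purely from its neighbors until
    nothing changes. For the min/+ metric with positive edge costs
    this converges to shortest distances from an arbitrary initial
    table, which is the essence of self-stabilization for such metrics."""
    table = {v: 999.0 for v in neighbors}    # arbitrary (possibly faulty) start
    table[root] = 0.0
    changed = True
    while changed:
        changed = False
        for v in neighbors:
            if v == root:
                continue
            new = min((cost[v, u] + table[u] for u in neighbors[v]),
                      default=math.inf)
            if new != table[v]:
                table[v], changed = new, True
    return table

g = {"r": ["a"], "a": ["r", "b"], "b": ["a"]}
c = {("a", "r"): 1, ("r", "a"): 1, ("a", "b"): 2, ("b", "a"): 2}
print(relax_to_fixpoint(g, c, "r"))  # {'r': 0.0, 'a': 1.0, 'b': 3.0}
```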

43 citations


Patent
26 Nov 2003
TL;DR: Techniques are presented for generating a representation of an access control list, the representation being utilizable in a network processor or other type of processor to perform packet filtering or other access-control functions.
Abstract: Techniques are disclosed for generating a representation of an access control list, the representation being utilizable in a network processor or other type of processor to perform packet filtering or other type of access control list based function. A plurality of rules of the access-control list are determined, each of at least a subset of the rules having a plurality of fields and a corresponding action, and the rules are processed to generate a multi-level tree representation of the access control list, in which each of one or more of the levels of the tree representation is associated with a corresponding one of the fields. At least one level of the tree representation other than a root level of the tree representation comprises a plurality of nodes, with at least two of the nodes at that level each having a separate matching table associated therewith.
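
As a rough illustration of the multi-level idea (the patent's matching tables and node layout are more elaborate), one tree level can be built per rule field, with each node's dictionary acting as its matching table; all names below are hypothetical:

```python
WILDCARD = "*"

def build_acl_tree(rules):
    """rules: list of (field_values, action) pairs. Level i of the tree
    corresponds to field i; each node's dict is its matching table,
    keyed by field value, with '*' as a wildcard entry."""
    root = {}
    for fields, action in rules:
        node = root
        for value in fields[:-1]:
            node = node.setdefault(value, {})
        node[fields[-1]] = action            # leaf level stores the action
    return root

def classify(tree, packet, default="deny"):
    """Walk one tree level per packet field, falling back to a node's
    wildcard entry when there is no exact match."""
    node = tree
    for value in packet:
        node = node.get(value, node.get(WILDCARD))
        if node is None:
            return default
    return node

acl = build_acl_tree([
    (("10.0.0.1", "80"), "permit"),
    (("10.0.0.1", WILDCARD), "deny"),
    ((WILDCARD, "22"), "permit"),
])
print(classify(acl, ("10.0.0.1", "80")))  # permit
print(classify(acl, ("10.0.0.2", "22")))  # permit
```

Real ACL semantics (first matching rule wins, range fields) need more care; exact-before-wildcard lookup here is only one possible policy.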

38 citations


Journal ArticleDOI
TL;DR: A simplification of the algorithm of Salembier et al. is presented.

21 citations


01 Jan 2003
TL;DR: The algorithm's main features include depth-first search with a vertically compressed database, diffsets, parent equivalence pruning, dynamic reordering, and projection; experimental testing suggests that the algorithm and implementation significantly outperform existing algorithms/implementations.
Abstract: We present a new algorithm for mining frequent itemsets. Past studies have proposed various algorithms and techniques for improving the efficiency of the mining task. We integrate a combination of these techniques into an algorithm that applies them dynamically according to the input dataset. The algorithm's main features include depth-first search with a vertically compressed database, diffsets, parent equivalence pruning, dynamic reordering, and projection. Experimental testing suggests that our algorithm and implementation significantly outperform existing algorithms/implementations.
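
A minimal sketch of the vertical depth-first idea (Eclat-style tidset intersection; the described implementation layers diffsets, parent equivalence pruning, and dynamic reordering on top of this):

```python
def eclat(items, minsup, prefix=(), out=None):
    """items: list of (item, tidset) pairs in a fixed order. The DFS
    extends the current prefix one item at a time; support is the
    tidset size, and a child's tidset is an intersection (a diffset
    representation would store set differences instead, saving space)."""
    if out is None:
        out = {}
    for i, (item, tids) in enumerate(items):
        if len(tids) < minsup:
            continue
        itemset = prefix + (item,)
        out[itemset] = len(tids)
        suffix = [(o, tids & otids) for o, otids in items[i + 1:]]
        eclat(suffix, minsup, itemset, out)
    return out

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
vertical = {}
for tid, t in enumerate(transactions):       # build the vertical layout
    for item in t:
        vertical.setdefault(item, set()).add(tid)
print(eclat(sorted(vertical.items()), minsup=2))
```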

16 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present two new results on I/O-efficient depth-first search in an important class of sparse graphs, namely undirected embedded planar graphs.
Abstract: Even though a large number of I/O-efficient graph algorithms have been developed, a number of fundamental problems still remain open. For example, no space- and I/O-efficient algorithms are known for depth-first search or breadth-first search in sparse graphs. In this paper we present two new results on I/O-efficient depth-first search in an important class of sparse graphs, namely undirected embedded planar graphs. We develop a new efficient depth-first search algorithm and show how planar depth-first search in general can be reduced to planar breadth-first search. As part of the first result we develop the first I/O-efficient algorithm for finding a simple cycle separator of a biconnected planar graph. Together with other recent reducibility results, the second result provides further evidence that external memory breadth-first search is among the hardest problems on planar graphs.

15 citations


Book ChapterDOI
15 Dec 2003
TL;DR: The approach suggests a symbolic framework for tackling problems that are naturally solved by a DFS-based algorithm in the standard setting, using spine-sets, introduced in [8] for strongly connected components, as a substitute for DFS.
Abstract: We define an algorithm for determining, in a linear number of symbolic steps, the biconnected components of a graph implicitly represented with Ordered Binary Decision Diagrams (OBDDs). Working on symbolically represented data has clear potential: the graph sizes that can be handled (which play a crucial role, for example, in verification, VLSI design, and CAD) are definitely higher. On the other hand, symbolic algorithm design imposes constraints as well. For example, Depth First Search is not feasible in the symbolic setting, and our algorithm relies on the use of spine-sets, introduced in [8] for strongly connected components, as its substitute. Our approach suggests a symbolic framework for tackling those problems which are naturally solved by a DFS-based algorithm in the standard case.
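
DFS visits one state at a time and is therefore a poor fit for OBDDs, whose operations act on whole sets at once. The sketch below uses Python sets as a stand-in for OBDDs to show what a "symbolic step" looks like; it computes plain reachability, not the paper's spine-set-based biconnected components:

```python
def symbolic_reach(edges, init):
    """edges: set of (u, v) pairs; init: set of start states.
    Each loop iteration is one symbolic step: the image of the whole
    frontier is computed at once (with OBDDs this would be a single
    relational-product operation, not a per-state visit)."""
    reached = set(init)
    frontier = set(init)
    while frontier:
        image = {v for (u, v) in edges if u in frontier}
        frontier = image - reached
        reached |= frontier
    return reached

print(symbolic_reach({(0, 1), (1, 2), (2, 0), (3, 0)}, {0}))  # {0, 1, 2}
```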

14 citations


Proceedings Article
01 Jan 2003
TL;DR: A novel segmentation algorithm is suggested for separating moving objects from the background in video sequences without any prior information about the nature of the sequence.
Abstract: This paper suggests a novel segmentation algorithm for separating moving objects from the background in video sequences without any prior information about the nature of the sequence. We formulate the problem as a connectivity analysis of a region adjacency graph (RAG) based on temporal information. The nodes of the RAG represent homogeneous regions and the edges represent temporal information, which is obtained through iterative frame comparisons. Connectivity analysis of the RAG nodes is performed after each frame comparison by a breadth-first search (BFS) based algorithm. The set of nodes that accumulates the maximum weight over its surrounding edges is taken to be the moving object. The number of comparisons needed to gather the temporal information is determined automatically.
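
A minimal sketch of the BFS-based connectivity analysis on a weighted region adjacency graph (building the RAG and the temporal edge weights from frame comparisons is the paper's contribution and is assumed as given input here):

```python
from collections import deque

def heaviest_component(rag, weight):
    """rag: region -> adjacent regions; weight: (u, v) -> temporal edge
    weight. BFS extracts connected components; the component whose
    surrounding edges carry the most weight is returned as the moving
    object candidate. (Each internal edge is counted from both of its
    endpoints, which scales all components equally.)"""
    seen, best, best_w = set(), None, float("-inf")
    for seed in rag:
        if seed in seen:
            continue
        comp, w, queue = [], 0.0, deque([seed])
        seen.add(seed)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for v in rag[u]:
                w += weight.get((u, v), weight.get((v, u), 0.0))
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        if w > best_w:
            best, best_w = comp, w
    return best

rag = {"r1": ["r2"], "r2": ["r1"], "r3": []}
print(heaviest_component(rag, {("r1", "r2"): 5.0}))  # ['r1', 'r2']
```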

13 citations


Book ChapterDOI
09 May 2003
TL;DR: This work presents an algorithm for (explicit state) model checking under weak fairness that exploits symmetry for state space reduction and shows that it has significant advantages over the existing full-fledged model checking algorithms that exploit symmetry under weak fairness.
Abstract: We present an algorithm for (explicit state) model checking under weak fairness that exploits symmetry for state space reduction. It is assumed that the checked properties are given as Buchi automata. The algorithm is based on the Nested Depth First Search (NDFS) algorithm by Courcoubetis, Vardi, Wolper and Yannakakis. The weak fairness aspect is captured by a version of the Choueka flag algorithm. As the presented algorithm allows false positives, it is mainly intended for efficient systematic debugging. However, we show that for this more modest goal our algorithm has significant advantages over the existing full-fledged model checking algorithms that exploit symmetry under weak fairness. The prototype implementation on top of Spin showed encouraging results.
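
The underlying NDFS scheme of Courcoubetis et al. can be sketched as follows (plain Python, without the symmetry reduction or the Choueka-style fairness bookkeeping that the paper adds):

```python
def ndfs(succ, init, accepting):
    """Nested DFS for accepting-cycle detection: the outer (blue) DFS
    orders states; at each accepting state, in post-order, an inner
    (red) DFS checks whether that state can reach itself."""
    blue, red = set(), set()

    def blue_dfs(s):
        blue.add(s)
        return any(t not in blue and blue_dfs(t) for t in succ(s)) \
            or (s in accepting and red_dfs(s, s))

    def red_dfs(s, seed):
        red.add(s)
        return any(t == seed or (t not in red and red_dfs(t, seed))
                   for t in succ(s))

    return blue_dfs(init)

# Toy Buchi graph: state 1 is accepting and lies on the cycle 1 -> 2 -> 1.
graph = {0: [1], 1: [2], 2: [1]}
print(ndfs(lambda s: graph.get(s, []), 0, accepting={1}))  # True
```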

13 citations


Journal ArticleDOI
TL;DR: An object tracking algorithm in region-space is proposed for sports applications; it overcomes inter-frame inconsistencies in region-space by using the textual description of the object and the concept of picture tree traversal.
Abstract: An object tracking algorithm in region-space is proposed for sports applications. The region adjacency graphs are used to perform a search for the given object description. It is proposed to overcome any inter-frame inconsistencies in region-space by using the textual description of the object and the concept of picture tree traversal.

7 citations


Proceedings ArticleDOI
19 Nov 2003
TL;DR: A tree-based top-down indexing method that uses an iterative k-means algorithm for tree node splitting and combines three different search pruning criteria from BST, GHT and GNAT into one is presented.
Abstract: We address the problem of indexing data for the k nearest neighbors (k-nn) search. We present a tree-based top-down indexing method that uses an iterative k-means algorithm for tree node splitting and combines three different search pruning criteria from BST, GHT and GNAT into one. The experiments show that the presented indexing tree accelerates k-nn searching by up to several thousand times for large data sets.
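
A rough sketch of the indexing idea: recursive k-means splits plus a single ball-based pruning rule (the paper combines three pruning criteria from BST, GHT, and GNAT, which are simplified to one here):

```python
import math, random

class Node:
    def __init__(self, pts):
        self.pts = pts
        self.center = tuple(sum(c) / len(pts) for c in zip(*pts))
        self.radius = max(math.dist(self.center, p) for p in pts)
        self.kids = []

def build(pts, leaf=4, k=2, iters=5):
    """Split a node's points with plain k-means and recurse; a
    degenerate split (an empty cluster) simply leaves the node a leaf."""
    node = Node(pts)
    if len(pts) > leaf:
        centers = random.sample(pts, k)
        for _ in range(iters):
            buckets = [[] for _ in range(k)]
            for p in pts:
                buckets[min(range(k),
                            key=lambda i: math.dist(p, centers[i]))].append(p)
            centers = [tuple(sum(c) / len(b) for c in zip(*b)) if b
                       else centers[i] for i, b in enumerate(buckets)]
        if all(buckets):
            node.kids = [build(b, leaf, k, iters) for b in buckets]
    return node

def knn(node, q, k, best=None):
    """best holds (distance, point) pairs, ascending, at most k long.
    A child is pruned when its bounding ball cannot contain anything
    closer than the current k-th neighbour."""
    if best is None:
        best = []
    if not node.kids:
        for p in node.pts:
            best.append((math.dist(q, p), p))
        best.sort()
        del best[k:]
        return best
    for kid in sorted(node.kids, key=lambda c: math.dist(q, c.center)):
        if len(best) < k or math.dist(q, kid.center) - kid.radius < best[-1][0]:
            knn(kid, q, k, best)
    return best

random.seed(1)
data = [(random.random(), random.random()) for _ in range(200)]
print(knn(build(data), (0.5, 0.5), k=3))
```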

Journal Article
TL;DR: Based on geometric computation and graph theory, an algorithm for finding the path of an embroidering needle is proposed; from a defined start point, end point, and needle spacing, the path can be generated automatically, and the overlap-point problem is solved.
Abstract: Based on geometric computation and graph theory, an algorithm for finding the path of an embroidering needle is proposed in this paper. First, outline orientations are defined, and all local extreme points of the inner outline are found along their gravitational orientation to build sectioning lines that divide the picture. By exploiting the character of the intersection points, the overlap-point problem is solved. The picture is thereby divided into nodes, each of which can be embroidered on its own. Then, based on the connections between these nodes, an adjacency graph of the nodes is built. Using a half Hamilton path or depth-first search, both the embroidering sequence and the direction of these nodes can be obtained from the graph. Finally, from a defined start point, end point, and needle spacing, a path for the embroidering needle can be generated automatically.
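
As a small illustration of the final traversal step (the node names are hypothetical; the geometric stages that build the adjacency graph are the paper's contribution):

```python
def embroidery_order(adj, start):
    """Depth-first traversal of the node adjacency graph: the visit
    order is one feasible embroidering sequence over the sections."""
    order, seen = [], set()
    stack = [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        stack.extend(reversed(adj.get(node, [])))   # keep left-to-right bias
    return order

sections = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(embroidery_order(sections, "A"))  # ['A', 'B', 'D', 'C']
```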

Patent
22 Jan 2003
TL;DR: A system and method are presented for hierarchically invoking reentrant methods on XML objects, including a first store for storing XML objects as an input tree, a second store for storing a resultant tree, a processor for processing the input tree to generate the resultant tree, and action attribute indicia, associated with at least one object, representing an API to be invoked.
Abstract: A system and method for hierarchically invoking reentrant methods on XML objects includes a first store for storing XML objects as an input tree; a second store for storing a resultant tree; a processor for processing the input tree to generate the resultant tree; action attribute indicia representing an API to be invoked associated with at least one object; the processor executing a depth first search through the input tree for generating the resultant tree selectively including action status child nodes for XML objects having an action attribute and new script generated from processing selective action attributes; while generating the resultant tree, the processor removing the action attribute from XML action objects successfully processed; and a reentrant processing path through the processor for processing the resultant tree as a new input tree responsive to the resultant tree including said new script or failure status child nodes.
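
A much-simplified sketch of such a depth-first action pass over an XML tree (the element names, handler table, and status convention below are illustrative, not the patent's specification):

```python
import xml.etree.ElementTree as ET

def process(elem, handlers):
    """One depth-first pass: children first, then the element itself.
    An element carrying an 'action' attribute gets the named handler
    invoked; a status child records the outcome, and the attribute is
    removed on success so a re-entrant pass skips finished work."""
    for child in list(elem):
        process(child, handlers)
    action = elem.get("action")
    if action is not None:
        status = ET.SubElement(elem, "status")
        try:
            handlers[action](elem)                 # hypothetical API table
            status.text = "ok"
            del elem.attrib["action"]
        except Exception as exc:                   # keep attribute for retry
            status.text = "failed: %s" % exc

root = ET.fromstring('<doc><item action="greet"/></doc>')
process(root, {"greet": lambda e: None})
print(ET.tostring(root).decode())
# <doc><item><status>ok</status></item></doc>
```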

Proceedings ArticleDOI
07 Jul 2003
TL;DR: Test results reveal that the out-of-core compression method can compress large meshes on a desktop machine with moderate memory space within reasonable time, and that it achieves better compression ratios than an in-core method developed in previous research.
Abstract: In this paper, an out-of-core data compression method is presented to encode large Finite Element Analysis (FEA) meshes. The method comprises two stages. In the first stage, the input FEA mesh is divided into blocks, called octants, based on an octree structure. Each octant must contain fewer FEA cells than a predefined limit so that it can fit into the main memory. Octants produced in the data division are stored in disk files. In the second stage, the octree is traversed to enumerate all the octants. These octants are fetched into the main memory and compressed there one by one. To compress an octant, the cell connectivities of the octant are computed. The connectivities are represented using an adjacency graph. In the graph, a vertex represents an FEA cell, and if two cells are adjacent by sharing a face then an edge is drawn between the corresponding vertices of the cells. Next the adjacency graph is traversed using a depth-first search, and the mesh is split into tetrahedral strips. In a tetrahedral strip, every two consecutive cells share a face, and only one vertex reference is needed to specify a cell. Therefore, less memory space is required for storing the mesh. According to the different situations encountered during the depth-first search, the tetrahedral strips are encoded using four types of instructions. When the traversal is completed, the tetrahedral strips are converted into a byte string and written into a disk file. To decode the compressed mesh, the instructions kept in the disk file are fetched into the main memory in blocks. For each block of instructions, the instructions are executed one by one to reconstruct the mesh. Test results reveal that the out-of-core compression method can compress large meshes on a desktop machine with moderate memory space within reasonable time. The out-of-core method also achieves better compression ratios than an in-core method developed in previous research.
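
The strip-cutting step can be sketched as a DFS that follows a chain of unvisited face-neighbors as long as possible and opens a new strip on every branch (cell identifiers and the adjacency encoding below are illustrative):

```python
def strips_from_adjacency(adj):
    """Cut the cell adjacency graph into strips via DFS: inside a strip
    every two consecutive cells share a face, so each cell after the
    first needs only one vertex reference when encoded."""
    visited, strips = set(), []
    for seed in adj:
        if seed in visited:
            continue
        stack = [seed]
        while stack:
            cell = stack.pop()
            if cell in visited:
                continue
            strip = []
            while cell is not None:
                visited.add(cell)
                strip.append(cell)
                rest = [n for n in adj[cell] if n not in visited]
                stack.extend(rest[1:])       # branches start new strips
                cell = rest[0] if rest else None
            strips.append(strip)
    return strips

# Toy adjacency: a chain 0-1-2 with a branch 1-3.
print(strips_from_adjacency({0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}))
# [[0, 1, 2], [3]]
```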

01 Jan 2003
TL;DR: A breadth-first search algorithm is presented for solving the reachability problem for SPDIs, exploiting an important object of an SPDI's phase portrait, the invariance kernels, which can be computed non-iteratively and which play an important role in the termination of the algorithm.
Abstract: Polygonal hybrid systems are a subclass of planar hybrid automata which can be represented by piecewise constant differential inclusions (SPDIs). Using an important object of an SPDI's phase portrait, the invariance kernels, which can be computed non-iteratively, we present a breadth-first search algorithm for solving the reachability problem for SPDIs. Invariance kernels play an important role in the termination of the algorithm.
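
Stripped of the SPDI-specific machinery, the search skeleton is an ordinary BFS; the successor computation and the role of invariance kernels are abstracted behind callbacks here, and this simplification is mine, not the paper's:

```python
from collections import deque

def bfs_reach(initial, successors, is_target, absorbed_by_kernel):
    """Breadth-first exploration over symbolic states (e.g. edge
    signatures). The invariance-kernel test is only hinted at: states
    recognized as absorbed are not expanded further, which is the kind
    of cut that lets the search terminate."""
    seen = set(initial)
    queue = deque(initial)
    while queue:
        state = queue.popleft()
        if is_target(state):
            return True
        if absorbed_by_kernel(state):
            continue
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Toy use: integer states, no kernels.
print(bfs_reach({0}, lambda s: [s + 1] if s < 5 else [],
                lambda s: s == 3, lambda s: False))   # True
```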

Journal Article
TL;DR: Experiments show that spatial association discovery systems powered by FPT Generate are much more time-efficient and space-scalable than those powered by the classical algorithm, Apriori.
Abstract: Spatial association rule discovery in spatial databases is a very important data mining task. In this paper, a two-stage strategy for the discovery of spatial association rules in geographical databases is proposed. The spatial computational overhead is greatly reduced by top-down refinement of spatial predicate granularities and multiple recursions of the single-level Boolean association rule discovery step, which is the key step of the algorithm. The single-level Boolean association rule mining algorithm, FPT Generate, is detailed. FPT Generate uses the frequent-item prefix tree, FIPT, to compress and project frequent item sets, and discovers association rules by growing a frequent pattern tree, FPT, via depth-first search. The algorithm generates association rules without candidate generation and without redundant scans of the database. Optimization techniques for the implementation, such as pseudo-projection and pruning, dynamic threading and hashing, and disk-based partitioning, are also discussed. Experiments show that spatial association discovery systems powered by FPT Generate are much more time-efficient and space-scalable than those powered by the classical algorithm, Apriori. Finally, a spatial association rule discovery system, SmartMiner, built on top of MapInfo Professional, has been developed.
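
A minimal sketch of building a frequent-item prefix tree of the FIPT flavor (details such as header links, projection, and the FPT growth itself are omitted):

```python
from collections import Counter

class FPNode:
    __slots__ = ("item", "count", "children")
    def __init__(self, item):
        self.item, self.count, self.children = item, 0, {}

def build_fipt(transactions, minsup):
    """Build a frequent-item prefix tree: infrequent items are dropped
    and the remaining items of each transaction are inserted in one
    fixed frequency order, so shared prefixes collapse into shared
    branches. This compression is what lets pattern growth proceed
    without candidate generation."""
    freq = Counter(i for t in transactions for i in set(t))
    rank = {i: r for r, (i, c) in enumerate(freq.most_common())
            if c >= minsup}
    root = FPNode(None)
    for t in transactions:
        node = root
        for item in sorted((i for i in set(t) if i in rank), key=rank.get):
            node = node.children.setdefault(item, FPNode(item))
            node.count += 1
    return root

tree = build_fipt([{"a", "b"}, {"a", "b", "c"}, {"a", "c"}], minsup=2)
print({item: node.count for item, node in tree.children.items()})  # {'a': 3}
```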

Proceedings ArticleDOI
20 Oct 2003
TL;DR: The path query messages algorithm reduces the retry attempts in setting up a path, and also utilizes the network more effectively by gathering much more information about the resources.
Abstract: In constraint-based routing, a topology database is maintained on all participating nodes and used to calculate a path through the network. This database contains a list of the links in the network and the set of constraints the links can meet. Since these constraints change rapidly, the topology database will not be consistent with respect to the real network. A feedback mechanism was proposed by Ashwood-Smith et al. to help correct the errors in the database. It behaves like a depth-first search, and it was meant to be usable only when the database sees the availability of resources as greater than it really is. In this mechanism, the source node can learn from the successes or failures of its path selections by receiving feedback from the path it is attempting. The received information is used in subsequent path calculations. We validated the feedback algorithm to see how it behaves in all database situations, and found that the feedback algorithm was helpful in all cases (not only when the database was optimistic). We also propose adding query messages to make the feedback algorithm behave more like a breadth-first search. The path query messages algorithm reduces the retry attempts in setting up a path, and also utilizes the network more effectively by gathering much more information about the resources.
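
The contrast can be sketched as a retry loop: pure feedback learns about one failed link per attempt (DFS-like), while querying gathers several link states before recomputing (BFS-like). All names and the toy path model below are illustrative, not the protocol from the paper:

```python
def setup_with_feedback(compute_path, try_link, db):
    """DFS-like feedback: attempt a path; on the first failing link,
    correct the local database and recompute. Each retry learns about
    exactly one stale entry."""
    attempts = 0
    while True:
        attempts += 1
        path = compute_path(db)
        if path is None:
            return None, attempts
        failed = next((l for l in path if not try_link(l)), None)
        if failed is None:
            return path, attempts            # path established
        db[failed] = False                   # feedback: one correction

# Toy model: a "path" is a list of links; link ("B", "C") is stale in db.
believed = {("A", "B"): True, ("B", "C"): True,
            ("B", "D"): True, ("D", "C"): True}
actual = dict(believed, **{("B", "C"): False})

def compute_path(db):
    if db[("A", "B")] and db[("B", "C")]:
        return [("A", "B"), ("B", "C")]
    if db[("A", "B")] and db[("B", "D")] and db[("D", "C")]:
        return [("A", "B"), ("B", "D"), ("D", "C")]
    return None

print(setup_with_feedback(compute_path, lambda l: actual[l], believed))
# ([('A', 'B'), ('B', 'D'), ('D', 'C')], 2)
```

A query-message variant would refresh several database entries up front before recomputing, trading extra messages for fewer retry attempts.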

Journal ArticleDOI
TL;DR: This paper addresses the problem of finding optimal allocations of a given number of robots in a search engine to heterogeneous Internet domains such that a cost function which characterizes the combined consideration of network/server loads and database currency can be minimized.
Abstract: It is a conflicting requirement to reduce network traffic and loads on Web servers caused by the robots in a World Wide Web search engine and to increase the currency of the database in the search engine. This paper addresses the problem of finding optimal allocations of a given number of robots in a search engine to heterogeneous Internet domains such that a cost function which characterizes the combined consideration of network/server loads and database currency can be minimized. The cost function and the optimization problem are formulated based on a queueing model of search engines. Our algorithm to solve the problem uses the depth-first search strategy to traverse an enumeration tree which generates all possible allocations.