Showing papers in "ACM Computing Surveys in 2004"


Journal ArticleDOI
TL;DR: This survey proposes a framework for analyzing peer-to-peer content distribution technologies, focusing on nonfunctional characteristics such as security, scalability, performance, fairness, and resource-management potential, and examining how these characteristics are reflected in, and affected by, the architectural design decisions adopted by current peer-to-peer systems.
Abstract: Distributed computer architectures labeled "peer-to-peer" are designed for the sharing of computer resources (content, storage, CPU cycles) by direct exchange, rather than requiring the intermediation or support of a centralized server or authority. Peer-to-peer architectures are characterized by their ability to adapt to failures and accommodate transient populations of nodes while maintaining acceptable connectivity and performance. Content distribution is an important peer-to-peer application on the Internet that has received considerable research attention. Content distribution applications typically allow personal computers to function in a coordinated manner as a distributed storage medium by contributing, searching, and obtaining digital content. In this survey, we propose a framework for analyzing peer-to-peer content distribution technologies. Our approach focuses on nonfunctional characteristics such as security, scalability, performance, fairness, and resource management potential, and examines the way in which these characteristics are reflected in, and affected by, the architectural design decisions adopted by current peer-to-peer systems. We study current peer-to-peer systems and infrastructure technologies in terms of their distributed object location and routing mechanisms, their approach to content replication, caching and migration, their support for encryption, access control, authentication and identity, anonymity, deniability, accountability and reputation, and their use of resource trading and management schemes.

1,563 citations


Journal ArticleDOI
TL;DR: This work mainly targets the extraction of blood vessels, neurovascular structure in particular, but also reviews segmentation methods for tubular objects that show characteristics similar to vessels.
Abstract: Vessel segmentation algorithms are the critical components of circulatory blood vessel analysis systems. We present a survey of vessel extraction techniques and algorithms. We put the various vessel extraction approaches and techniques in perspective by means of a classification of the existing research. While we have mainly targeted the extraction of blood vessels, neurovascular structure in particular, we have also reviewed some of the segmentation methods for tubular objects that show characteristics similar to vessels. We have divided vessel segmentation algorithms and techniques into six main categories: (1) pattern recognition techniques, (2) model-based approaches, (3) tracking-based approaches, (4) artificial intelligence-based approaches, (5) neural network-based approaches, and (6) tube-like object detection approaches. Some of these categories are further divided into subcategories. We have also created tables to compare the papers in each category against such criteria as dimensionality, input type, preprocessing, user interaction, and result type.

1,020 citations


Journal ArticleDOI
TL;DR: Total order broadcast and multicast (also called atomic broadcast/multicast) pose an important problem in distributed systems, especially with respect to fault tolerance.
Abstract: Total order broadcast and multicast (also called atomic broadcast/multicast) pose an important problem in distributed systems, especially with respect to fault tolerance. In short, the primitive ensures that messages sent to a set of processes are delivered by all those processes in the same total order.
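
One classical realization of this primitive is the fixed-sequencer approach, one of the algorithm families such surveys classify. The sketch below is an illustrative reconstruction, not the paper's algorithm: it assumes reliable delivery and a non-failing sequencer, and it omits the fault-tolerance machinery that makes the problem genuinely hard.

```python
# Fixed-sequencer total order broadcast: one sequencer stamps every
# message with a global sequence number; each process delivers strictly
# in stamp order, buffering messages that arrive early.

class Sequencer:
    def __init__(self):
        self.next_seq = 0

    def order(self, msg):
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        return seq, msg

class Process:
    def __init__(self, name):
        self.name = name
        self.expected = 0   # next sequence number to deliver
        self.pending = {}   # early arrivals, keyed by sequence number

    def receive(self, seq, msg):
        self.pending[seq] = msg
        while self.expected in self.pending:   # deliver in total order
            delivered = self.pending.pop(self.expected)
            print(f"{self.name} delivers #{self.expected}: {delivered}")
            self.expected += 1

# Senders route messages through the sequencer, which relays (seq, msg)
# to all processes; every process delivers in the same total order even
# if the relayed messages reach it out of order.
sequencer = Sequencer()
processes = [Process("p1"), Process("p2")]
for m in ["a", "b", "c"]:
    seq, msg = sequencer.order(m)
    for p in processes:
        p.receive(seq, msg)
```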

535 citations


Journal ArticleDOI
TL;DR: This article discusses how dataflow programming evolved toward a hybrid von Neumann dataflow formulation and adopted a more coarse-grained approach.
Abstract: Many developments have taken place within dataflow programming languages in the past decade. In particular, there has been a great deal of activity and advancement in the field of dataflow visual programming languages. The motivation for this article is to review the content of these recent developments and how they came about. It is supported by an initial review of dataflow programming in the 1970s and 1980s that led to current topics of research. It then discusses how dataflow programming evolved toward a hybrid von Neumann dataflow formulation, and adopted a more coarse-grained approach. Recent trends toward dataflow visual programming languages are then discussed with reference to key graphical dataflow languages and their development environments. Finally, the article details four key open topics in dataflow programming languages.

455 citations


Journal ArticleDOI
TL;DR: It is shown that, in order to allow people to profit from all this visual information, tools are needed that help them locate the desired images with good precision in a reasonable time, and that such tools are useful for many applications and purposes.
Abstract: With the explosive growth of the World Wide Web, the public is gaining access to massive amounts of information. However, locating needed and relevant information remains a difficult task, whether the information is textual or visual. Text search engines have existed for some years now and have achieved a certain degree of success. However, despite the large number of images available on the Web, image search engines are still rare. In this article, we show that in order to allow people to profit from all this visual information, there is a need to develop tools that help them to locate the needed images with good precision in a reasonable time, and that such tools are useful for many applications and purposes. The article surveys the main characteristics of the existing systems most often cited in the literature, such as ImageRover, WebSeek, Diogenes, and Atlas WISE. It then examines the various issues related to the design and implementation of a Web image search engine, such as data gathering and digestion, indexing, query specification, retrieval and similarity, Web coverage, and performance evaluation. A general discussion is given for each of these issues, with examples of the ways they are addressed by existing engines, and 130 related references are given. Some concluding remarks and directions for future research are also presented.

338 citations


Journal ArticleDOI
TL;DR: A bird's eye view of the basic concepts in molecular cell biology is provided, the nature of the existing data is outlined, and the kinds of computer algorithms and techniques that are necessary to understand cell behavior are described.
Abstract: The article aims to introduce computer scientists to the new field of bioinformatics. This area has arisen from the needs of biologists to utilize and help interpret the vast amounts of data that are constantly being gathered in genomic research and its more recent counterparts, proteomics and functional genomics. The ultimate goal of bioinformatics is to develop in silico models that will complement in vitro and in vivo biological experiments. The article provides a bird's eye view of the basic concepts in molecular cell biology, outlines the nature of the existing data, and describes the kinds of computer algorithms and techniques that are necessary to understand cell behavior. The underlying motivation for many of the bioinformatics approaches is the evolution of organisms and the complexity of working with incomplete and noisy data. The topics covered include: descriptions of the current software especially developed for biologists, computer and mathematical cell models, and areas of computer science that play an important role in bioinformatics.

233 citations


Journal ArticleDOI
TL;DR: To identify the key issues in designing a wide-area replica hosting system, this work presents an architectural framework that assists in characterizing different systems in a systematic manner, categorizes different research efforts, and reviews their relative merits and demerits.
Abstract: Replication is a well-known technique to improve the accessibility of Web sites. It generally offers reduced client latencies and increases a site's availability. However, applying replication techniques is not trivial, and various Content Delivery Networks (CDNs) have been created to facilitate replication for digital content providers. The success of these CDNs has triggered further research efforts into developing advanced Web replica hosting systems. These are systems that host the documents of a website and manage replication automatically. To identify the key issues in designing a wide-area replica hosting system, we present an architectural framework. The framework assists in characterizing different systems in a systematic manner. We categorize different research efforts and review their relative merits and demerits. As an important side-effect, this review and characterization shows that there are a number of interesting research questions that have not received much attention yet, but which deserve exploration by the research community.

157 citations


Journal ArticleDOI
TL;DR: The Local Ratio Technique (LRT) as discussed by the authors is a methodology for the design and analysis of algorithms for a broad range of optimization problems, such as packing problems and scheduling problems.
Abstract: The local ratio technique is a methodology for the design and analysis of algorithms for a broad range of optimization problems. The technique is remarkably simple and elegant, and yet can be applied to several classical and fundamental problems (including covering problems, packing problems, and scheduling problems). The local ratio technique uses elementary math and requires combinatorial insight into the structure and properties of the problem at hand. Typically, when using the technique, one has to invent a weight function for a problem instance under which every "reasonable" solution is "good." The local ratio technique is closely related to the primal-dual schema, though it is not based on weak LP duality (which is the basis of the primal-dual approach) since it is not based on linear programming. In this survey, we introduce the local ratio technique and demonstrate its use in the design and analysis of algorithms for various problems. We trace the evolution path of the technique since its inception in the 1980s, culminating with the most recent development, namely, fractional local ratio, which can be viewed as a new LP rounding technique.
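
A concrete instance of the weight-function idea is the classical local ratio 2-approximation for weighted vertex cover. The sketch below is our own illustrative reconstruction, not code from the survey; the function and variable names are made up.

```python
# Local ratio for weighted vertex cover: repeatedly pick an edge whose
# endpoints both have positive residual weight and subtract the smaller
# weight from both (the "local ratio" step). Afterward every edge has a
# zero-weight endpoint, so the zero-weight vertices form a cover, and
# the local ratio argument bounds its cost by twice the optimum.

def vertex_cover_local_ratio(edges, weight):
    """edges: iterable of (u, v) pairs; weight: dict vertex -> weight."""
    w = dict(weight)                  # residual weights
    for u, v in edges:
        if w[u] > 0 and w[v] > 0:
            eps = min(w[u], w[v])     # mass of the invented weight function
            w[u] -= eps
            w[v] -= eps
    return {v for v in w if w[v] == 0}

cover = vertex_cover_local_ratio(
    edges=[("a", "b"), ("b", "c"), ("c", "d")],
    weight={"a": 2, "b": 1, "c": 3, "d": 1},
)
print(cover)   # {'b', 'd'}: covers all three edges at cost 2
```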

94 citations


Journal ArticleDOI
TL;DR: The notion of a local transaction as the basic building block for fault-tolerant mobile agent execution is introduced, and existing approaches are classified according to when and by whom the local transactions are committed.
Abstract: Over the past years, mobile agent technology has attracted considerable attention, and a significant body of literature has been published. To further develop mobile agent technology, reliability mechanisms such as fault tolerance and transaction support are required. This article aims at structuring the field of fault-tolerant and transactional mobile agent execution and thus at guiding the reader to understand the basic strengths and weaknesses of existing approaches. It starts with a discussion on providing fault tolerance in a system in which processes simply fail. For this purpose, we first identify two basic requirements for fault-tolerant mobile agent execution: (1) non-blocking (i.e., a single failure does not prevent progress of the mobile agent execution) and (2) exactly-once (i.e., multiple executions of the agent are prevented). This leads us to introduce the notion of a local transaction as the basic building block for fault-tolerant mobile agent execution and to classify existing approaches according to when and by whom the local transactions are committed. In the second part, we show that transactional mobile agent execution additionally ensures execution atomicity and present a survey of existing approaches. In the last part of the survey, we extend the notion of fault tolerance to arbitrary Byzantine failures and address security-related issues of mobile agent execution.

52 citations


Journal ArticleDOI
TL;DR: Bresenham's algorithm minimizes error in drawing lines on integer grid points; leap year calculations, surprisingly, are a generalization.
Abstract: Bresenham's algorithm minimizes error in drawing lines on integer grid points; leap year calculations, surprisingly, are a generalization. We compare the two calculations, explicate the pattern, and discuss the connection of the leap year/line pattern with integer division and Euclid's algorithm for computing the greatest common divisor.
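
For reference, here is the textbook integer-error form of Bresenham's algorithm for slopes between 0 and 1. It is a hedged sketch of the standard formulation rather than the article's own presentation; the leap-year rule spreads "extra days" across years by the same error-accumulation pattern this loop uses to spread y-steps across x-steps.

```python
# Integer-only Bresenham line drawing, restricted to lines with
# 0 <= slope <= 1 and x0 <= x1. The scaled error term stays integral,
# so no floating point is needed.

def bresenham(x0, y0, x1, y1):
    dx, dy = x1 - x0, y1 - y0
    err = 2 * dy - dx            # error relative to the segment midpoint
    y = y0
    for x in range(x0, x1 + 1):
        yield (x, y)
        if err > 0:              # crossed the midpoint: step up in y
            y += 1
            err -= 2 * dx
        err += 2 * dy

print(list(bresenham(0, 0, 7, 3)))
# [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2), (6, 3), (7, 3)]
```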

26 citations


Journal ArticleDOI
TL;DR: An overview is presented of object-based and image-based representations of objects by their interiors, including representations that achieve the goal of answering both the feature and location queries with one representation, without necessarily having to examine every cell.
Abstract: An overview is presented of object-based and image-based representations of objects by their interiors. The representations are distinguished by the manner in which they can be used to answer two fundamental queries in database applications: (1) Feature query: given an object, determine its constituent cells (i.e., their locations in space). (2) Location query: given a cell (i.e., a location in space), determine the identity of the object (or objects) of which it is a member as well as the remaining constituent cells of the object (or objects). Regardless of the representation that is used, the generation of responses to the feature and location queries is facilitated by building an index (i.e., the result of a sort) either on the objects or on their locations in space, and implementing it using an access structure that correlates the objects with the locations. Assuming the presence of an access structure, implicit (i.e., image-based) representations are described that are good for finding the objects associated with a particular location or cell (i.e., the location query), while requiring that all cells be examined when determining the locations associated with a particular object (i.e., the feature query). In contrast, explicit (i.e., object-based) representations are good for the feature query, while requiring that all objects be examined when trying to respond to the location query. The goal is to be able to answer both types of queries with one representation and without necessarily having to examine every cell. Representations are presented that achieve this goal by imposing containment hierarchies on either space (i.e., the cells in the space in which the objects are found) or objects. In the former case, space is aggregated into successively larger-sized chunks (i.e., blocks), while in the latter, objects are aggregated into successively larger groups (in terms of the number of objects that they contain). The former is applicable to image-based interior-based representations, of which the space pyramid is an example. The latter is applicable to object-based interior-based representations, of which the R-tree is an example. The actual mechanics of many of these representations are demonstrated in the VASCO JAVA applets found at http://www.cs.umd.edu/~hjs/quadtree/index.html.
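
A minimal sketch of the access structure described above: two correlated indexes over the same cell decomposition, one keyed on objects (answering the feature query directly) and one keyed on cells (answering the location query directly). The class and names are illustrative; real systems replace the plain dictionaries with structures such as the space pyramid or the R-tree.

```python
# Correlating objects with locations via two maps, so that neither the
# feature query nor the location query has to scan every cell or object.
from collections import defaultdict

class SpatialIndex:
    def __init__(self):
        self.cells_of = defaultdict(set)    # object -> its constituent cells
        self.objects_at = defaultdict(set)  # cell -> objects containing it

    def add(self, obj, cell):
        self.cells_of[obj].add(cell)
        self.objects_at[cell].add(obj)

    def feature_query(self, obj):
        """Given an object, return its constituent cells."""
        return set(self.cells_of[obj])

    def location_query(self, cell):
        """Given a cell, return each object there with its remaining cells."""
        return {o: self.cells_of[o] - {cell} for o in self.objects_at[cell]}

idx = SpatialIndex()
for cell in [(0, 0), (0, 1), (1, 1)]:
    idx.add("road", cell)
idx.add("house", (1, 1))

print(idx.feature_query("road"))   # {(0, 0), (0, 1), (1, 1)}
print(idx.location_query((1, 1)))  # road with its other cells, house with none
```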

Journal ArticleDOI
TL;DR: Categories and Subject Descriptors: D.2.1 [Software Engineering]: Requirements/Specifications-Methodologies; I.2.4 [Artificial Intelligence]: Knowledge Representation Formalisms and Methods.
Abstract: Categories and Subject Descriptors: D.2.1 [Software Engineering]: Requirements/Specifications-Methodologies; I.2.4 [Artificial Intelligence]: Knowledge Representation Formalisms and Methods.