
Showing papers in "Lecture Notes in Computer Science in 2001"


Book ChapterDOI
TL;DR: Pastry as mentioned in this paper is a scalable, distributed object location and routing substrate for wide-area peer-to-peer applications, which performs application-level routing and object location in a potentially very large overlay network of nodes connected via the Internet.
Abstract: This paper presents the design and evaluation of Pastry, a scalable, distributed object location and routing substrate for wide-area peer-to-peer applications. Pastry performs application-level routing and object location in a potentially very large overlay network of nodes connected via the Internet. It can be used to support a variety of peer-to-peer applications, including global data storage, data sharing, group communication and naming. Each node in the Pastry network has a unique identifier (nodeId). When presented with a message and a key, a Pastry node efficiently routes the message to the node with a nodeId that is numerically closest to the key, among all currently live Pastry nodes. Each Pastry node keeps track of its immediate neighbors in the nodeId space, and notifies applications of new node arrivals, node failures and recoveries. Pastry takes into account network locality; it seeks to minimize the distance messages travel, according to a scalar proximity metric such as the number of IP routing hops. Pastry is completely decentralized, scalable, and self-organizing; it automatically adapts to the arrival, departure and failure of nodes. Experimental results obtained with a prototype implementation on an emulated network of up to 100,000 nodes confirm Pastry's scalability and efficiency, its ability to self-organize and adapt to node failures, and its good network locality properties.
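The routing rule above reduces to a short procedure. Below is a minimal sketch (hypothetical Python, not from the paper; real Pastry also maintains a leaf set and a proximity-aware routing table with one row per prefix length):

```python
# Sketch of Pastry-style prefix routing. NodeIds and keys are
# fixed-length hex strings; each hop matches at least one more digit
# of the key, giving O(log N) hops in an N-node network.

def shared_prefix_len(a: str, b: str) -> int:
    """Length of the common hex-digit prefix of two ids."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def next_hop(self_id: str, key: str, known: list[str]) -> str:
    """Pick a node whose id shares a strictly longer prefix with the
    key than self_id does; fall back to whichever known id (or self)
    is numerically closest to the key."""
    p = shared_prefix_len(self_id, key)
    longer = [n for n in known if shared_prefix_len(n, key) > p]
    candidates = longer if longer else known + [self_id]
    return min(candidates, key=lambda n: abs(int(n, 16) - int(key, 16)))

print(next_hop("65a1fc", "d46a1c", ["d13da3", "d4213f", "d462ba"]))
# -> 'd462ba': the longest shared prefix with the key, then closest
```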

7,423 citations


Journal Article
TL;DR: AspectJ as mentioned in this paper is a simple and practical aspect-oriented extension to Java; with just a few new constructs, it provides support for modular implementation of a range of crosscutting concerns.
Abstract: AspectJ is a simple and practical aspect-oriented extension to Java. With just a few new constructs, AspectJ provides support for modular implementation of a range of crosscutting concerns. In AspectJ's dynamic join point model, join points are well-defined points in the execution of the program; pointcuts are collections of join points; advice are special method-like constructs that can be attached to pointcuts; and aspects are modular units of crosscutting implementation, comprising pointcuts, advice, and ordinary Java member declarations. AspectJ code is compiled into standard Java bytecode. Simple extensions to existing Java development environments make it possible to browse the crosscutting structure of aspects in the same kind of way as one browses the inheritance structure of classes. Several examples show that AspectJ is powerful, and that programs written using it are easy to understand.
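AspectJ's constructs have no direct Python equivalent, but the flavour of "before advice" attached to a join point can be mimicked with a decorator (an analogy only, not AspectJ: AspectJ declares pointcuts and advice in separate aspect modules and weaves them into the bytecode at compile time):

```python
# Not AspectJ: a Python decorator that mimics "before advice".
# The advice (a crosscutting concern such as logging) runs before
# each call of the wrapped method, which plays the role of the
# join point.
import functools

def before(advice):
    """Attach `advice` to run before every call of the wrapped method."""
    def deco(method):
        @functools.wraps(method)
        def wrapper(*args, **kwargs):
            advice(method.__name__, args, kwargs)  # crosscutting concern
            return method(*args, **kwargs)         # the original join point
        return wrapper
    return deco

def log_call(name, args, kwargs):
    print(f"about to execute {name}")

class Account:
    @before(log_call)
    def withdraw(self, amount):
        return amount

Account().withdraw(100)   # prints "about to execute withdraw" first
```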

2,947 citations


Book ChapterDOI
TL;DR: The Shuttle Radar Topography Mission (SRTM) is an interferometric SAR system that flew during 11-22 February 2000 aboard NASA's Space Shuttle Endeavour and collected highly specialized data that will allow the generation of Digital Terrain Elevation Data Level 2 (DTED® 2).
Abstract: Elevation data is vital to successful mission planning, operations and readiness. Traditional methods for producing elevation data are very expensive and time consuming; major cloud belts would never be completed with existing methods. The Shuttle Radar Topography Mission (SRTM) was selected in 1995 as the best means of supplying nearly global, accurate elevation data. The SRTM is an interferometric SAR system that flew during 11-22 February 2000 aboard NASA's Space Shuttle Endeavour and collected highly specialized data that will allow the generation of Digital Terrain Elevation Data Level 2 (DTED® 2). The result of the SRTM will increase the United States Government's coverage of vital and detailed DTED® 2 from less than 5% to 80% of the Earth's landmass. This paper describes the shuttle mission and its deliverables.

1,886 citations


Book ChapterDOI
TL;DR: A novel public key cryptosystem in which the public key of a subscriber can be chosen to be a publicly known value, such as his identity, is presented; its security is related to the difficulty of solving the quadratic residuosity problem.
Abstract: We present a novel public key cryptosystem in which the public key of a subscriber can be chosen to be a publicly known value, such as his identity. We discuss the security of the proposed scheme, and show that this is related to the difficulty of solving the quadratic residuosity problem.
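For intuition about the underlying hardness assumption: the quadratic residuosity problem asks whether a number is a square modulo N = pq. A toy Python illustration (mine, not the paper's; the parameters are far too small for any security): whoever knows the factors decides instantly via Euler's criterion, while no efficient method is known given only N.

```python
# Deciding quadratic residuosity *with* the trapdoor (the factors of N).
# Assumes gcd(a, N) = 1. Without p and q, no efficient test is known.

def is_qr_mod_prime(a: int, p: int) -> bool:
    """Euler's criterion: a is a square mod odd prime p iff
    a^((p-1)/2) = 1 (mod p)."""
    return pow(a, (p - 1) // 2, p) == 1

def is_qr_mod_n(a: int, p: int, q: int) -> bool:
    """a is a square mod N = p*q iff it is a square mod p and mod q."""
    return is_qr_mod_prime(a, p) and is_qr_mod_prime(a, q)

p, q = 1009, 1013          # toy primes; real keys use hundreds of digits
print(is_qr_mod_n(4, p, q))  # True: 4 = 2^2
```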

1,228 citations


Book ChapterDOI
TL;DR: It is shown that the electromagnetic attack obtains at least the same results as power-consumption analysis and consequently must be carefully taken into account.
Abstract: A processor can leak information in different ways [1]; electromagnetic radiation could be one of them. This idea was first introduced by Kocher, with timing and power measurements. Here we develop a continuation of his ideas by measuring the field radiated by the processor. We show that the electromagnetic attack obtains at least the same results as power-consumption analysis and consequently must be carefully taken into account. Finally, we enumerate countermeasures to be implemented.

1,183 citations


Journal Article
TL;DR: This work constructively proves that any system that fails this condition is incapable of tracing pirate-decoders that contain keys based on a superlogarithmic number of traitor keys, and investigates a weaker form of black-box tracing called single-query "black-box confirmation."
Abstract: We present a new generic black-box traitor tracing model in which the pirate-decoder employs a self-protection technique. This mechanism is simple, easy to implement in any (software or hardware) device and is a natural way by which a pirate (an adversary) that is black-box accessible may try to evade detection. We present a necessary combinatorial condition for black-box traitor tracing of self-protecting devices. We constructively prove that any system that fails this condition is incapable of tracing pirate-decoders that contain keys based on a superlogarithmic number of traitor keys. We then combine the above condition with specific properties of concrete systems. We show that the Boneh-Franklin (BF) scheme as well as the Kurosawa-Desmedt scheme have no black-box tracing capability in the self-protecting model when the number of traitors is superlogarithmic, unless the ciphertext size is as large as in a trivial system, namely linear in the number of users. This partially settles in the negative the open problem of Boneh and Franklin regarding the general black-box traceability of the BF scheme, at least for the case of superlogarithmic traitors. Our negative result does not apply to the Chor-Fiat-Naor (CFN) scheme (which, in fact, allows tracing in our self-protecting model); this separates CFN black-box traceability from that of BF. We also investigate a weaker form of black-box tracing called single-query black-box confirmation. We show that, when suspicion is modeled as a confidence weight (which biases the uniform distribution of traitors), such single-query confirmation is essentially not possible against a self-protecting pirate-decoder that contains keys based on a superlogarithmic number of traitor keys.

1,098 citations


Book ChapterDOI
TL;DR: A two-step process that allows both coarse detection and exact localization of faces is presented and an efficient implementation is described, making this approach suitable for real-time applications.
Abstract: The localization of human faces in digital images is a fundamental step in the process of face recognition. This paper presents a shape comparison approach to achieve fast, accurate face detection that is robust to changes in illumination and background. The proposed method is edge-based and works on grayscale still images. The Hausdorff distance is used as a similarity measure between a general face model and possible instances of the object within the image. The paper describes an efficient implementation, making this approach suitable for real-time applications. A two-step process that allows both coarse detection and exact localization of faces is presented. Experiments were performed on a large test set and rated with a new validation measure.
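For intuition, the directed Hausdorff distance underlying such a similarity measure can be computed in a few lines. An illustrative numpy sketch, not the paper's implementation (the full method additionally scans the model over positions and scales in the image):

```python
# Directed Hausdorff distance h(A, B) = max_{a in A} min_{b in B} ||a - b||,
# here between a set of model edge points and edge points extracted
# from an image. Small values indicate a good match.
import numpy as np

def directed_hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    """A: (n, 2) model points, B: (m, 2) image edge points."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # (n, m)
    return float(d.min(axis=1).max())

model = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
image_edges = model + 0.1                       # a slightly shifted instance
print(directed_hausdorff(model, image_edges))   # ~0.14: close match
```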

984 citations


Journal Article
TL;DR: In this article, the authors define the concept of sensor databases mixing stored data and sensor data represented as time series and describe the design and implementation of the COUGAR sensor database system.
Abstract: Sensor networks are being widely deployed for measurement, detection and surveillance applications. In these new applications, users issue long-running queries over a combination of stored data and sensor data. Most existing applications rely on a centralized system for collecting sensor data. These systems lack flexibility because data is extracted in a predefined way; also, they do not scale to a large number of devices because large volumes of raw data are transferred regardless of the queries that are submitted. In our new concept of sensor database system, queries dictate which data is extracted from the sensors. In this paper, we define the concept of sensor databases mixing stored data represented as relations and sensor data represented as time series. Each long-running query formulated over a sensor database defines a persistent view, which is maintained during a given time interval. We also describe the design and implementation of the COUGAR sensor database system.

794 citations


Book ChapterDOI
TL;DR: This paper addresses the problem of information fusion in verification systems and experimental results on combining three biometric modalities (face, fingerprint and hand geometry) are presented.
Abstract: User verification systems that use a single biometric indicator often have to contend with noisy sensor data, restricted degrees of freedom and unacceptable error rates. Attempting to improve the performance of individual matchers in such situations may not prove to be effective because of these inherent problems. Multimodal biometric systems seek to alleviate some of these drawbacks by providing multiple pieces of evidence of the same identity. These systems also help achieve an increase in performance that may not be possible by using a single biometric indicator. This paper addresses the problem of information fusion in verification systems. Experimental results on combining three biometric modalities (face, fingerprint and hand geometry) are also presented.
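As a concrete but deliberately generic illustration of score-level fusion (my sketch, not the paper's particular combination scheme): normalise each matcher's score to a common range, then combine with a weighted sum and threshold the result.

```python
# Generic score-level fusion of three matchers (e.g. face, fingerprint,
# hand geometry). Ranges and weights here are made-up illustrations.
import numpy as np

def minmax_norm(s, lo, hi):
    """Map a raw matcher score into [0, 1] given its observed range."""
    return (s - lo) / (hi - lo)

def fused_score(scores, ranges, weights):
    """scores: raw matcher outputs; ranges: per-matcher (min, max);
    weights: non-negative, summing to 1."""
    normed = [minmax_norm(s, lo, hi) for s, (lo, hi) in zip(scores, ranges)]
    return float(np.dot(weights, normed))

s = fused_score(scores=[72.0, 0.81, 13.0],
                ranges=[(0, 100), (0, 1), (0, 50)],
                weights=[0.4, 0.4, 0.2])
accept = s > 0.5   # threshold would be chosen on a validation set
```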

790 citations


Book ChapterDOI
TL;DR: This paper proposes an application-level multicast scheme capable of scaling to large group sizes without restricting the service model to a single source, and believes it offers the dual advantages of simplicity and scalability.
Abstract: Most currently proposed solutions to application-level multicast organise the group members into an application-level mesh over which a Distance-Vector routing protocol, or a similar algorithm, is used to construct source-rooted distribution trees. The use of a global routing protocol limits the scalability of these systems. Other proposed solutions that scale to larger numbers of receivers do so by restricting the multicast service model to be single-sourced. In this paper, we propose an application-level multicast scheme capable of scaling to large group sizes without restricting the service model to a single source. Our scheme builds on recent work on Content-Addressable Networks (CANs). Extending the CAN framework to support multicast comes at trivial additional cost and, because of the structured nature of CAN topologies, obviates the need for a multicast routing algorithm. Given the deployment of a distributed infrastructure such as a CAN, we believe our CAN-based multicast scheme offers the dual advantages of simplicity and scalability.
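A greatly simplified sketch of the idea (mine, not the paper's: directed flooding over CAN zone coordinates avoids duplicate deliveries without per-message state, whereas this toy version uses an explicit seen-set):

```python
# Toy multicast by flooding a CAN-like overlay: each node delivers the
# payload and forwards to its zone neighbours, suppressing duplicates.
def multicast(source, payload, neighbours, deliver):
    """neighbours: node -> iterable of adjacent nodes in the zone grid;
    deliver: callback invoked once per reached node."""
    seen = {source}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        deliver(node, payload)        # application-level delivery
        for n in neighbours(node):
            if n not in seen:         # a real system would instead use
                seen.add(n)           # the grid structure to decide
                frontier.append(n)    # forwarding directions
```

Because every node already knows its zone neighbours from CAN itself, forwarding needs no separate routing protocol, which is exactly the point the abstract makes.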

673 citations


Journal Article
TL;DR: In this article, the basic concepts behind access control design and enforcement are investigated, different security requirements that may need to be taken into consideration are pointed out, and several access control policies, and models formalizing them, are discussed.
Abstract: Access control is the process of mediating every request to resources and data maintained by a system and determining whether the request should be granted or denied. The access control decision is enforced by a mechanism implementing regulations established by a security policy. Different access control policies can be applied, corresponding to different criteria for defining what should, and what should not, be allowed, and, in some sense, to different definitions of what ensuring security means. In this chapter we investigate the basic concepts behind access control design and enforcement, and point out different security requirements that may need to be taken into consideration. We discuss several access control policies, and models formalizing them, that have been proposed in the literature or that are currently under investigation.
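The mediation step in the first sentence is easy to picture as a reference monitor. A minimal sketch, assuming a flat, closed-world permission set (real DAC, MAC, or RBAC models are far richer than this):

```python
# A toy reference monitor: every request (subject, action, object) is
# checked against an explicit policy; anything not granted is denied.
class ReferenceMonitor:
    def __init__(self, permissions):
        self.permissions = set(permissions)   # explicit triples

    def check(self, subject, action, obj) -> bool:
        """Default-deny mediation of a single request."""
        return (subject, action, obj) in self.permissions

rm = ReferenceMonitor({("alice", "read", "report.pdf")})
assert rm.check("alice", "read", "report.pdf")
assert not rm.check("bob", "read", "report.pdf")   # default deny
```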

Book ChapterDOI
TL;DR: Scribe is built on top of Pastry, a generic peer-to-peer object location and routing substrate overlayed on the Internet, and leverages Pastry's reliability, self-organization and locality properties.
Abstract: This paper presents Scribe, a large-scale event notification infrastructure for topic-based publish-subscribe applications. Scribe supports large numbers of topics, with a potentially large number of subscribers per topic. Scribe is built on top of Pastry, a generic peer-to-peer object location and routing substrate overlayed on the Internet, and leverages Pastry's reliability, self-organization and locality properties. Pastry is used to create a topic (group) and to build an efficient multicast tree for the dissemination of events to the topic's subscribers (members). Scribe provides weak reliability guarantees, but we outline how an application can extend Scribe to provide stronger ones.

Journal Article
TL;DR: The behaviour of several different hyperheuristic approaches for a real-world personnel scheduling problem is analysed, the effectiveness of the approach is shown, and wider applicability of hyperheuristic approaches to other problems of scheduling and combinatorial optimisation is suggested.
Abstract: The concept of a hyperheuristic is introduced as an approach that operates at a higher level of abstraction than current metaheuristic approaches. The hyperheuristic manages the choice of which lower-level heuristic method should be applied at any given time, depending upon the characteristics of the region of the solution space currently under exploration. We analyse the behaviour of several different hyperheuristic approaches for a real-world personnel scheduling problem. Results obtained show the effectiveness of our approach for this problem and suggest wider applicability of hyperheuristic approaches to other problems of scheduling and combinatorial optimisation.
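A minimal sketch of the idea, under my own assumptions about scoring and acceptance (the paper analyses several concrete selection strategies): keep a recency-weighted score per low-level heuristic, mostly apply the best-scoring one, and update scores from the observed change in solution cost.

```python
# Toy choice-function hyperheuristic: the low-level heuristics are
# black boxes that each return a modified solution.
import random

def hyperheuristic(solution, cost, heuristics, steps=1000):
    score = {h: 0.0 for h in heuristics}
    for _ in range(steps):
        # mostly exploit the best-scoring heuristic, sometimes explore
        h = (max(score, key=score.get) if random.random() < 0.9
             else random.choice(heuristics))
        candidate = h(solution)
        delta = cost(solution) - cost(candidate)   # positive = improvement
        score[h] = 0.7 * score[h] + 0.3 * delta    # recency-weighted update
        if delta >= 0:
            solution = candidate                   # accept non-worsening moves
    return solution
```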

Book ChapterDOI
TL;DR: In this paper, the authors trace how communities have changed from densely-knit "Little Boxes" (densely-knit, linking people door-to-door) to "Glocalized" networks (sparsely-knit but with clusters, linking households both locally and globally).
Abstract: Much thinking about digital cities is in terms of community groups. Yet, the world is composed of social networks and not of groups. This paper traces how communities have changed from densely-knit "Little Boxes" (densely-knit, linking people door-to-door) to "Glocalized" networks (sparsely-knit but with clusters, linking households both locally and globally) to "Networked Individualism" (sparsely-knit, linking individuals with little regard to space). The transformation affects design considerations for computer systems that would support digital cities.

Book ChapterDOI
TL;DR: This paper illustrates the approach by presenting a three-layer AUML representation for agent interaction protocols: templates and packages to represent the protocol as a whole; sequence and collaboration diagrams to capture inter-agent dynamics; and activity diagrams and state charts to capture both intra-agent and inter- agent dynamics.
Abstract: Gaining wide acceptance for the use of agents in industry requires both relating it to the nearest antecedent technology (object-oriented software development) and using artifacts to support the development environment throughout the full system lifecycle. We address both of these requirements using AUML, the Agent UML (Unified Modeling Language)--a set of UML idioms and extensions. This paper illustrates the approach by presenting a three-layer AUML representation for agent interaction protocols: templates and packages to represent the protocol as a whole; sequence and collaboration diagrams to capture inter-agent dynamics; and activity diagrams and state charts to capture both intra-agent and inter-agent dynamics.

Book ChapterDOI
TL;DR: The goal of the chapter is to present an overview of the background theory and current understanding of SVM, and to discuss the papers presented as well as the issues that arose during the workshop.
Abstract: This chapter presents a summary of the issues discussed during the one day workshop on “Support Vector Machines (SVM) Theory and Applications” organized as part of the Advanced Course on Artificial Intelligence (ACAI ’99) in Chania, Greece [19]. The goal of the chapter is twofold: to present an overview of the background theory and current understanding of SVM, and to discuss the papers presented as well as the issues that arose during the workshop.
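For reference, the decision function at the heart of the surveyed theory can be stated compactly (the standard soft-margin formulation, not text from the chapter):

```latex
% Standard kernel SVM: classify by the sign of a support-vector
% expansion; the dual multipliers \alpha_i are bounded by the
% regularisation parameter C, and only the x_i with \alpha_i > 0
% (the support vectors) contribute.
f(\mathbf{x}) = \operatorname{sign}\Bigl(\sum_{i=1}^{\ell} \alpha_i y_i K(\mathbf{x}_i, \mathbf{x}) + b\Bigr),
\qquad 0 \le \alpha_i \le C, \qquad \sum_{i=1}^{\ell} \alpha_i y_i = 0.
```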

Journal Article
TL;DR: The name Quilt suggests both the way in which features from several languages were assembled to make a new query language, and theway in which Quilt queries can combine information from diverse data sources into a query result with a new structure of its own.
Abstract: The World Wide Web promises to transform human society by making virtually all types of information instantly available everywhere. Two prerequisites for this promise to be realized are a universal markup language and a universal query language. The power and flexibility of XML make it the leading candidate for a universal markup language. XML provides a way to label information from diverse data sources including structured and semi-structured documents, relational databases, and object repositories. Several XML-based query languages have been proposed, each oriented toward a specific category of information. Quilt is a new proposal that attempts to unify concepts from several of these query languages, resulting in a new language that exploits the full versatility of XML. The name Quilt suggests both the way in which features from several languages were assembled to make a new query language, and the way in which Quilt queries can combine information from diverse data sources into a query result with a new structure of its own.

Book ChapterDOI
TL;DR: The potential security holes in a biometrics-based authentication scheme are outlined, the numerical strength of one method of fingerprint matching is quantified, then how to combat some of the remaining weaknesses are discussed.
Abstract: In recent years there has been exponential growth in the use of biometrics for user authentication applications because biometrics-based authentication offers several advantages over knowledge and possession-based methods such as password/PIN-based systems. However, it is important that biometrics-based authentication systems be designed to withstand different sources of attacks on the system when employed in security-critical applications. This is even more important for unattended remote applications such as e-commerce. In this paper we outline the potential security holes in a biometrics-based authentication scheme, quantify the numerical strength of one method of fingerprint matching, then discuss how to combat some of the remaining weaknesses.

Book ChapterDOI
TL;DR: It is shown that, given an appropriate clarification of their semantics, activity diagrams are able to capture situations arising in practice, which cannot be captured by most commercial Workflow Management Systems.
Abstract: If UML activity diagrams are to succeed as a standard in the area of organisational process modeling, they need to compare well to alternative languages such as those provided by commercial Workflow Management Systems. This paper examines the expressiveness and the adequacy of activity diagrams for workflow specification, by systematically evaluating their ability to capture a collection of workflow patterns. This analysis provides insights into the relative strengths and weaknesses of activity diagrams. In particular, it is shown that, given an appropriate clarification of their semantics, activity diagrams are able to capture situations arising in practice, which cannot be captured by most commercial Workflow Management Systems. On the other hand, the study shows that activity diagrams fail to capture some useful situations, thereby suggesting directions for improvement.

Book ChapterDOI
TL;DR: It is argued that fundamental principles for overcoming the problems in current meta-modeling theories need to be embodied within the metamodeling framework ultimately adopted for the UML 2.0 standard.
Abstract: As the UML attempts to make the transition from a single, albeit extensible, language to a framework for a family of languages, the nature and form of the underlying meta-modeling architecture will assume growing importance. It is generally recognized that without a simple, clean and intuitive theory of how metamodel levels are created and related to one another, the UML 2.0 vision of a coherent family of languages with a common core set of concepts will remain elusive. However, no entirely satisfactory metamodeling approach has yet been found. Current (meta-)modeling theories used or proposed for the UML all have at least one fundamental problem that makes them unsuitable in their present form. In this paper we bring these problems into focus, and present some fundamental principles for overcoming them. We believe that these principles need to be embodied within the metamodeling framework ultimately adopted for the UML 2.0 standard.

Book ChapterDOI
TL;DR: This paper introduces a methodology for designing systems of interacting agents and focuses on coordinating the local behavior of individual agents to provide an appropriate system-level behavior.
Abstract: To solve complex problems, agents work cooperatively with other agents in heterogeneous environments. We are interested in coordinating the local behavior of individual agents to provide an appropriate system-level behavior. The use of intelligent agents provides even greater flexibility in the abilities and configuration of the system itself. With these new intricacies, software development is becoming increasingly difficult. Therefore, it is critical that our processes for building the inherently complex distributed software that must run in this environment be adequate for the task. This paper introduces a methodology for designing these systems of interacting agents.

Book ChapterDOI
TL;DR: The results show that the proposed BN and Bayes multinet classifiers are competitive with (or superior to) the best known classifiers; and that the computational time for learning and using these classifiers is relatively small, arguing that BN-based classifiers deserve more attention in the data mining community.
Abstract: This paper investigates the methods for learning predictive classifiers based on Bayesian belief networks (BN) - primarily unrestricted Bayesian networks and Bayesian multi-nets. We present our algorithms for learning these classifiers, and discuss how these methods address the overfitting problem and provide a natural method for feature subset selection. Using a set of standard classification problems, we empirically evaluate the performance of various BN-based classifiers. The results show that the proposed BN and Bayes multinet classifiers are competitive with (or superior to) the best known classifiers, based on both BN and other formalisms; and that the computational time for learning and using these classifiers is relatively small. These results argue that BN-based classifiers deserve more attention in the data mining community.
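To make "BN-based classifier" concrete, here is the simplest member of the family, naive Bayes, written out from counts (my illustration; the paper's subject is unrestricted networks and Bayes multinets, which learn richer structure than this fixed one):

```python
# Discrete naive Bayes: the simplest Bayesian-network classifier,
# with a fixed class -> feature structure and Laplace smoothing.
from collections import Counter, defaultdict

def train_nb(rows, labels, alpha=1.0):
    """rows: list of discrete feature tuples; labels: class per row."""
    prior = Counter(labels)
    cond = defaultdict(Counter)          # (class, slot) -> value counts
    for x, y in zip(rows, labels):
        for j, v in enumerate(x):
            cond[(y, j)][v] += 1
    return prior, cond, alpha

def predict(model, x):
    prior, cond, alpha = model
    total = sum(prior.values())
    def posterior(y):
        p = prior[y] / total
        for j, v in enumerate(x):
            c = cond[(y, j)]
            # len(c) + 1 is a crude estimate of the value-domain size
            p *= (c[v] + alpha) / (sum(c.values()) + alpha * (len(c) + 1))
        return p
    return max(prior, key=posterior)

model = train_nb([("sunny", "hot"), ("rain", "cool")], ["no", "yes"])
print(predict(model, ("rain", "cool")))   # -> 'yes'
```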

Book ChapterDOI
TL;DR: In this paper, the authors developed a model of information processing and strategy choice for participants in a double auction, where sellers form beliefs that an offer will be accepted by some buyer, and buyers form belief that a bid will not be accepted, and traders choose an action that maximizes their own expected surplus.
Abstract: We develop a model of information processing and strategy choice for participants in a double auction. Sellers in this model form beliefs that an offer will be accepted by some buyer. Similarly, buyers form beliefs that a bid will be accepted. These beliefs are formed on the basis of observed market data, including frequencies of asks, bids, accepted asks, and accepted bids. Then traders choose an action that maximizes their own expected surplus. The trading activity resulting from these beliefs and strategies is sufficient to achieve transaction prices at competitive equilibrium and complete market efficiency after several periods of trading.
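The believe-then-maximise loop can be sketched for a seller (my reconstruction in the spirit of the abstract; the paper's belief function is defined precisely over the observed ask, bid, and acceptance frequencies, which this toy version only approximates):

```python
# A seller estimates the probability that an ask at price a succeeds,
# then picks the ask maximising expected surplus (a - cost) * p(a).

def acceptance_belief(a, accepted_asks, rejected_asks, bids):
    """Asks taken at prices >= a and bids at >= a are evidence that an
    ask of a would succeed; asks that went untaken at prices <= a are
    evidence that it would fail."""
    yes = (sum(1 for x in accepted_asks if x >= a)
           + sum(1 for b in bids if b >= a))
    no = sum(1 for x in rejected_asks if x <= a)
    return yes / (yes + no) if (yes + no) else 0.0

def best_ask(cost, accepted_asks, rejected_asks, bids, grid):
    return max(grid, key=lambda a:
               (a - cost) * acceptance_belief(a, accepted_asks,
                                              rejected_asks, bids))

print(best_ask(cost=95,
               accepted_asks=[100, 102, 105],
               rejected_asks=[110, 112],
               bids=[98, 101, 104],
               grid=range(96, 112)))   # -> 105 on this toy history
```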

Book ChapterDOI
TL;DR: This paper introduces three additional organisational concepts--organisational rules, organisational structures, and organisational patterns--that it believes are necessary for the complete specification of computational organisations.
Abstract: The architecture of a multi-agent system can naturally be viewed as a computational organisation. For this reason, we believe organisational abstractions should play a central role in the analysis and design of such systems. To this end, the concepts of agent roles and role models are increasingly being used to specify and design multi-agent systems. However, this is not the full picture. In this paper we introduce three additional organisational concepts--organisational rules, organisational structures, and organisational patterns--that we believe are necessary for the complete specification of computational organisations. We view the introduction of these concepts as a step towards a comprehensive methodology for agent-oriented systems.

Journal Article
TL;DR: A new method for accelerating elliptic curve point multiplication on classes of curves that have efficiently-computable endomorphisms is described, applicable to a larger class of curves than previous such methods.
Abstract: The fundamental operation in elliptic curve cryptographic schemes is the multiplication of an elliptic curve point by an integer. This paper describes a new method for accelerating this operation on classes of elliptic curves that have efficiently-computable endomorphisms. One advantage of the new method is that it is applicable to a larger class of curves than previous such methods. For this special class of curves, a speedup of up to 50% can be expected over the best general methods for point multiplication.
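The standard way such endomorphisms accelerate point multiplication (stated generically, not quoted from the paper) is to split the scalar:

```latex
% If the curve has an efficient endomorphism \phi acting on the point
% subgroup of order n as multiplication by \lambda, decompose the
% scalar as  k \equiv k_1 + k_2 \lambda \pmod{n}  with
% |k_1|, |k_2| = O(\sqrt{n}),  and compute
kP = k_1 P + k_2 \phi(P), \qquad \phi(P) = \lambda P .
% A simultaneous double-and-add over the two half-length scalars then
% replaces one full-length scalar multiplication, roughly halving the
% number of doublings.
```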

Book ChapterDOI
TL;DR: The heart of AGENT UML is described, i.e., mechanisms to model protocols for multiagent interaction, which include protocol diagrams, agent roles, multithreaded lifelines, extended UML message semantics, nested and interleaved protocols, and protocol templates.
Abstract: In the past, research on agent-oriented software engineering has largely lacked contact with the world of industrial software development. Recently, a cooperation has been established between the Foundation for Intelligent Physical Agents (FIPA) and the Object Management Group (OMG) aiming to increase acceptance of agent technology in industry by relating to de facto standards (object-oriented software development) and supporting the development environment throughout the full system lifecycle. As a first result of this cooperation, we proposed AGENT UML [1; 20], an extension of the Unified Modeling Language (UML), a de facto standard for object-oriented analysis and design. In this paper, we describe the heart of AGENT UML, i.e., mechanisms to model protocols for multiagent interaction. Particular UML extensions described in this paper include protocol diagrams, agent roles, multithreaded lifelines, extended UML message semantics, nested and interleaved protocols, and protocol templates.

Journal Article
TL;DR: In this article, the authors consider 3D object retrieval in which a polygonal mesh serves as a query and similar objects are retrieved from a collection of 3D objects using a normalization step in which models are transformed into canonical coordinates.
Abstract: We consider 3D object retrieval in which a polygonal mesh serves as a query and similar objects are retrieved from a collection of 3D objects. Algorithms proceed first by a normalization step in which models are transformed into canonical coordinates. Second, feature vectors are extracted and compared with those derived from normalized models in the search space. In the feature vector space nearest neighbors are computed and ranked. Retrieved objects are displayed for inspection, selection, and processing. Our feature vectors are based on rays cast from the center of mass of the object. For each ray the object extent in the ray direction yields a sample of a function on the sphere. We compared two kinds of representations of this function, namely spherical harmonics and moments. Our empirical comparison using precision-recall diagrams for retrieval results in a database of 3D models showed that the method using spherical harmonics performed better.
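A sketch of the ray-based spherical-harmonics feature (a hypothetical reconstruction from the abstract; the grid resolution, band limit, quadrature, and the use of coefficient magnitudes are my choices, not the paper's):

```python
# Approximate spherical-harmonic coefficients of the ray-extent
# function r(theta, phi), sampled on a grid after pose normalisation,
# and stack their magnitudes as a feature vector for k-NN retrieval.
import numpy as np
from scipy.special import sph_harm

def sh_feature(r, theta, phi, lmax=8):
    """r[i, j]: object extent along the ray with azimuth theta[i] and
    polar angle phi[j], measured from the centre of mass."""
    T, P = np.meshgrid(theta, phi, indexing="ij")
    dt, dp = theta[1] - theta[0], phi[1] - phi[0]
    feats = []
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            Y = sph_harm(m, l, T, P)       # scipy order: m, l, azimuth, polar
            c = np.sum(r * np.conj(Y) * np.sin(P)) * dt * dp  # quadrature
            feats.append(abs(c))
    return np.asarray(feats)

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
phi = np.linspace(0, np.pi, 32)
r = np.ones((64, 32))             # a unit sphere as a trivial "mesh"
vec = sh_feature(r, theta, phi)   # only the l=0 term is (near) nonzero
```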

Journal Article
TL;DR: This chapter defines a cohesive set of conceptual and measurable constructs that captures the essence of trust and distrust definitions across several disciplines, and discusses the importance of viewing trust and distrust as separate, simultaneously operating concepts.
Abstract: Researchers have remarked and recoiled at the literature confusion regarding the meanings of trust and distrust. The problem involves both the proliferation of narrow intra-disciplinary research definitions of trust and the multiple meanings the word trust possesses in everyday use. To enable trust researchers to more easily compare empirical results, we define a cohesive set of conceptual and measurable constructs that captures the essence of trust and distrust definitions across several disciplines. This chapter defines disposition to trust (and -distrust) constructs from psychology and economics, institution-based trust (and -distrust) constructs from sociology, and trusting/distrusting beliefs, trusting/distrusting intentions, and trust/distrust-related behavior constructs from social psychology and other disciplines. Distrust concepts are defined as separate and opposite from trust concepts. We conclude by discussing the importance of viewing trust and distrust as separate, simultaneously operating concepts.

Book ChapterDOI
TL;DR: SODA promotes the separation of individual and social issues, focuses on the social aspects of agent-oriented software engineering, and allows the agent environment to be explicitly modelled and mapped onto suitably-defined agent infrastructures.
Abstract: The notion of society should play a central role in agent-oriented software engineering as a first-class abstraction around which complex systems can be designed and built as multi-agent systems. We argue that an effective agent-oriented methodology should account for inter-agent aspects by providing engineers with specific abstractions and tools for the analysis and design of agent societies and agent environments. In this paper, we outline the SODA agent-oriented methodology for the analysis and design of Internet-based systems. Based on the core notion of task, SODA promotes the separation of individual and social issues, and focuses on the social aspects of agent-oriented software engineering. In particular, SODA allows the agent environment to be explicitly modelled and mapped onto suitably-defined agent infrastructures.

Book ChapterDOI
TL;DR: It is argued that open agent organisations can be effectively designed and implemented as institutionalized electronic organisations (electronic institutions) composed of a vast number of heterogeneous agents playing different roles and interacting by means of speech acts.
Abstract: In this article we argue that open agent organisations can be effectively designed and implemented as institutionalized electronic organisations (electronic institutions) composed of a vast number of heterogeneous (human and software) agents playing different roles and interacting by means of speech acts. Here we take the view that the design and development of electronic institutions must be guided by a principled methodology. Along this direction, we advocate the presence of an underlying formal method that underpins the use of structured design techniques and formal analysis, facilitating development, composition and reuse. For this purpose we propose a specification formalism for electronic institutions that founds their design, analysis and development.