
Showing papers in "Lecture Notes in Computer Science in 1999"


Journal Article
TL;DR: In this article, the authors explore the effect of dimensionality on the nearest neighbor problem and show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance to the farthest data point.
Abstract: We explore the effect of dimensionality on the nearest neighbor problem. We show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance to the farthest data point. To provide a practical perspective, we present empirical results on both real and synthetic data sets that demonstrate that this effect can occur for as few as 10-15 dimensions. These results should not be interpreted to mean that high-dimensional indexing is never meaningful; we illustrate this point by identifying some high-dimensional workloads for which this effect does not occur. However, our results do emphasize that the methodology used almost universally in the database literature to evaluate high-dimensional indexing techniques is flawed, and should be modified. In particular, most such techniques proposed in the literature are not evaluated versus simple linear scan, and are evaluated over workloads for which nearest neighbor is not meaningful. Often, even the reported experiments, when analyzed carefully, show that linear scan would outperform the techniques being proposed on the workloads studied in high (10-15) dimensionality.

1,992 citations
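
As a hedged illustration of the concentration effect described in the abstract above (an independent toy simulation, not the authors' experiment), the following Python snippet draws i.i.d. uniform data and shows the ratio between the farthest and nearest distance from a random query shrinking toward 1 as dimensionality grows; the sample sizes and dimensions are arbitrary choices.

# Illustrative simulation (not from the paper): nearest vs. farthest distances
# for i.i.d. uniform data as dimensionality grows.
import numpy as np

rng = np.random.default_rng(0)
for dim in (2, 10, 100, 1000):
    data = rng.random((10_000, dim))   # 10,000 points in [0, 1]^dim
    query = rng.random(dim)            # one random query point
    dists = np.linalg.norm(data - query, axis=1)
    print(f"dim={dim:5d}  farthest/nearest distance ratio = {dists.max() / dists.min():.2f}")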


Book Chapter
TL;DR: The Aware Home project is introduced and some of the technology- and human-centered research objectives in creating the Aware Home are outlined; the goal is to create a living laboratory for research in ubiquitous computing for everyday activities.
Abstract: We are building a home, called the Aware Home, to create a living laboratory for research in ubiquitous computing for everyday activities. This paper introduces the Aware Home project and outlines some of our technology- and human-centered research objectives in creating the Aware Home.

1,119 citations


Book Chapter
TL;DR: This work indexes the blob descriptions using a tree, with a lower-rank approximation to the high-dimensional distance to make large-scale retrieval feasible, and shows encouraging results for both querying and indexing.
Abstract: Blobworld is a system for image retrieval based on finding coherent image regions which roughly correspond to objects. Each image is automatically segmented into regions ("blobs") with associated color and texture descriptors. Querying is based on the attributes of one or two regions of interest, rather than a description of the entire image. In order to make large-scale retrieval feasible, we index the blob descriptions using a tree. Because indexing in the high-dimensional feature space is computationally prohibitive, we use a lower-rank approximation to the high-dimensional distance. Experiments show encouraging results for both querying and indexing.

896 citations


Book Chapter
TL;DR: 3D shape histograms are introduced as an intuitive and powerful similarity model for 3D objects that efficiently supports similarity search based on quadratic forms and has high classification accuracy and good performance.
Abstract: Classification is one of the basic tasks of data mining in modern database applications including molecular biology, astronomy, mechanical engineering, medical imaging or meteorology. The underlying models have to consider spatial properties such as shape or extension as well as thematic attributes. We introduce 3D shape histograms as an intuitive and powerful similarity model for 3D objects. Particular flexibility is provided by using quadratic form distance functions in order to account for errors of measurement, sampling, and numerical rounding that all may result in small displacements and rotations of shapes. For query processing, a general filter-refinement architecture is employed that efficiently supports similarity search based on quadratic forms. An experimental evaluation in the context of molecular biology demonstrates both the high classification accuracy of more than 90% and the good performance of the approach.

608 citations
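
As a brief aside in standard notation (not quoted from the paper above): the quadratic form distance underlying this similarity model compares two shape histograms x and y through a bin-similarity matrix A = [a_ij], whose off-diagonal entries let neighboring bins partially match each other, which is what absorbs small displacements and rotations:

\[ d_A(x, y) = \sqrt{(x - y)^{T} A \,(x - y)} \]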


Book Chapter
TL;DR: This paper reports on ongoing research into representing the positions of moving-point objects: object positions are sampled using the Global Positioning System, and interpolation is applied to determine positions in-between the samples.
Abstract: Spatiotemporal applications, such as fleet management and air traffic control, involving continuously moving objects are increasingly at the focus of research efforts. The representation of the continuously changing positions of the objects is fundamentally important in these applications. This paper reports on on-going research in the representation of the positions of moving-point objects. More specifically, object positions are sampled using the Global Positioning System, and interpolation is applied to determine positions in-between the samples. Special attention is given in the representation to the quantification of the position uncertainty introduced by the sampling technique and the interpolation. In addition, the paper considers the use for query processing of the proposed representation in conjunction with indexing. It is demonstrated how queries involving uncertainty may be answered using the standard filter-and-refine approach known from spatial query processing.

516 citations
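
A minimal Python sketch of the kind of position estimation described above, assuming plain linear interpolation between two timestamped GPS samples; the function and field layout are illustrative and do not reproduce the paper's representation or its uncertainty model.

# Illustrative sketch: linear interpolation between two timestamped GPS samples.
def interpolate(t, t0, p0, t1, p1):
    """Estimate the (x, y) position at time t, where t0 <= t <= t1."""
    frac = (t - t0) / (t1 - t0)
    return (p0[0] + frac * (p1[0] - p0[0]),
            p0[1] + frac * (p1[1] - p0[1]))

# Example: samples taken at t = 0 and t = 10 seconds.
print(interpolate(4.0, 0.0, (0.0, 0.0), 10.0, (100.0, 40.0)))  # -> (40.0, 16.0)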


Journal Article
TL;DR: A new construction for PVSS schemes is presented which, compared to previous solutions by Stadler and later by Fujisaki and Okamoto, achieves improvements both in efficiency and in the type of intractability assumptions.
Abstract: A publicly verifiable secret sharing (PVSS) scheme is a verifiable secret sharing scheme with the property that the validity of the shares distributed by the dealer can be verified by any party; hence verification is not limited to the respective participants receiving the shares. We present a new construction for PVSS schemes which, compared to previous solutions by Stadler and later by Fujisaki and Okamoto, achieves improvements both in efficiency and in the type of intractability assumptions. The running time is O(nk), where k is a security parameter and n is the number of participants, hence essentially optimal. The intractability assumptions are the standard Diffie-Hellman assumption and its decisional variant. We present several applications of our PVSS scheme, among which is a new type of universally verifiable election scheme based on PVSS. The election scheme becomes quite practical and combines several advantages of related electronic voting schemes, which makes it of interest in its own right.

503 citations
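
For background only, here is a minimal Shamir-style (t, n) secret sharing sketch in Python; the public-verifiability layer that is the actual contribution of the paper above is deliberately omitted, and the prime and parameters are arbitrary illustrative choices.

# Background sketch: Shamir (t, n) secret sharing over a prime field.
# The public-verifiability machinery of a PVSS scheme is NOT shown here.
import random

P = 2**127 - 1  # a convenient prime for illustration

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 using any t shares."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
print(reconstruct(shares[:3]) == 123456789)  # True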


Book Chapter
TL;DR: A new active contour model for detecting objects in a given image is proposed, based on techniques of curve evolution, the Mumford-Shah functional for segmentation, and level sets; it can detect objects whose boundaries are not necessarily defined by gradient.
Abstract: In this paper, we propose a new model for active contours to detect objects in a given image, based on techniques of curve evolution, the Mumford-Shah functional for segmentation, and level sets. Our model can detect objects whose boundaries are not necessarily defined by gradient. The model is a combination of more classical active contour models using mean curvature motion techniques and the Mumford-Shah model for segmentation. We minimize an energy which can be seen as a particular case of the so-called minimal partition problem. In the level set formulation, the problem becomes a "mean-curvature flow"-like evolution of the active contour, which will stop on the desired boundary. However, the stopping term does not depend on the gradient of the image, as in the classical active contour models, but is instead related to a particular segmentation of the image. Finally, we present various experimental results and, in particular, some examples for which the classical snakes methods based on the gradient are not applicable.

487 citations
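
For reference, the energy minimized by this kind of region-based model is commonly written as follows (a standard presentation rather than a quotation; u_0 is the observed image, c_1 and c_2 are the average intensities inside and outside the evolving curve C, and the weights mu, nu, lambda_1, lambda_2 are nonnegative parameters):

\[
F(c_1, c_2, C) = \mu \,\mathrm{Length}(C) + \nu \,\mathrm{Area}(\mathrm{inside}(C))
+ \lambda_1 \int_{\mathrm{inside}(C)} |u_0(x, y) - c_1|^2 \, dx \, dy
+ \lambda_2 \int_{\mathrm{outside}(C)} |u_0(x, y) - c_2|^2 \, dx \, dy
\]

Minimizing over c_1, c_2 and C drives the curve toward a two-region partition that best explains the image, which is why no image gradient is needed in the stopping term.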


Book Chapter
TL;DR: The event calculus, a logic-based formalism for representing actions and their effects, is presented; a circumscriptive solution to the frame problem is deployed which reduces to monotonic predicate completion, and the formalism is shown to apply to a variety of domains, including those featuring actions with indirect effects, actions with nondeterministic effects, concurrent actions, and continuous change.
Abstract: This article presents the event calculus, a logic-based formalism for representing actions and their effects. A circumscriptive solution to the frame problem is deployed which reduces to monotonic predicate completion. Using a number of benchmark examples from the literature, the formalism is shown to apply to a variety of domains, including those featuring actions with indirect effects, actions with nondeterministic effects, concurrent actions, and continuous change.

485 citations


Journal Article
TL;DR: This conversion is the first generic transformation from an arbitrary one-way asymmetric encryption scheme to a chosen-ciphertext secure asymmetric encryption scheme in the random oracle model.
Abstract: This paper shows a generic and simple conversion from weak asymmetric and symmetric encryption schemes into an asymmetric encryption scheme which is secure in a very strong sense: indistinguishability against adaptive chosen-ciphertext attacks in the random oracle model. In particular, this conversion can be applied efficiently to an asymmetric encryption scheme that provides a large enough coin space and, for every message, sufficiently many variants of the encryption, like the ElGamal encryption scheme.

457 citations
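
A rough Python sketch of the general flavor of such conversions, assuming a caller-supplied asymmetric scheme and hash functions G and H modelled as random oracles; this is a schematic illustration only, not the exact transformation or the parameterization proven secure in the paper above.

# Schematic hybrid-encryption sketch (not the paper's exact construction):
# the asymmetric coins and the symmetric key are both derived from a random
# seed via hash functions, and decryption re-encrypts to check well-formedness.
import os, hashlib

def G(seed):            # key-derivation hash
    return hashlib.sha256(b"G" + seed).digest()

def H(seed, msg):       # hash used to derive the asymmetric coins
    return hashlib.sha256(b"H" + seed + msg).digest()

def encrypt(pk, msg, asym_encrypt):
    assert len(msg) <= 32               # keep the one-time-pad step simple
    seed = os.urandom(32)
    c1 = asym_encrypt(pk, seed, H(seed, msg))          # encrypt the seed
    c2 = bytes(m ^ k for m, k in zip(msg, G(seed)))    # mask the message
    return c1, c2

def decrypt(sk, pk, c1, c2, asym_decrypt, asym_encrypt):
    seed = asym_decrypt(sk, c1)
    msg = bytes(c ^ k for c, k in zip(c2, G(seed)))
    if asym_encrypt(pk, seed, H(seed, msg)) != c1:     # re-encryption check
        raise ValueError("invalid ciphertext")
    return msg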


Journal Article
TL;DR: In this paper, the authors describe a message authentication algorithm, UMAC, which can authenticate messages (in software, on contemporary machines) roughly an order of magnitude faster than current practice (e.g., HMAC-SHA1), and about twice as fast as times previously reported for the universal hash function family MMH.
Abstract: We describe a message authentication algorithm, UMAC, which can authenticate messages (in software, on contemporary machines) roughly an order of magnitude faster than current practice (e.g., HMAC-SHA1), and about twice as fast as times previously reported for the universal hash-function family MMH. To achieve such speeds, UMAC uses a new universal hash-function family, NH, and a design which allows effective exploitation of SIMD parallelism. The cryptographic work of UMAC is done using standard primitives of the user's choice, such as a block cipher or cryptographic hash function; no new heuristic primitives are developed here. Instead, the security of UMAC is rigorously proven, in the sense of giving exact and quantitatively strong results which demonstrate an inability to forge UMAC-authenticated messages assuming an inability to break the underlying cryptographic primitive. Unlike conventional, inherently serial MACs, UMAC is parallelizable, and will have ever-faster implementation speeds as machines offer up increasing amounts of parallelism. We envision UMAC as a practical algorithm for next-generation message authentication.

359 citations
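
A small Python sketch of an NH-style inner hash, the kind of universal hash on which UMAC's speed argument rests; the word size and key handling are simplified here, and this is not a complete or interoperable UMAC implementation.

# Simplified NH-style universal hash: pairs of w-bit words are added to key
# words mod 2^w, multiplied, and the products are summed mod 2^(2w).
W = 32
MASK_W = (1 << W) - 1
MASK_2W = (1 << 2 * W) - 1

def nh(key_words, msg_words):
    assert len(msg_words) % 2 == 0 and len(key_words) >= len(msg_words)
    total = 0
    for i in range(0, len(msg_words), 2):
        a = (msg_words[i] + key_words[i]) & MASK_W
        b = (msg_words[i + 1] + key_words[i + 1]) & MASK_W
        total = (total + a * b) & MASK_2W
    return total

# Toy example (a real implementation derives the key words from a PRF).
print(hex(nh([7, 11, 13, 17], [1, 2, 3, 4])))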


Journal Article
TL;DR: In this article, the authors represent HFE's published multivariate system by a single univariate polynomial over an extension field and develop a new relinearization method that solves the resulting quadratic systems in expected polynomial time for any constant ε > 0. However, the complexity of the attack is infeasibly large for some choices of the parameters, and thus some variants of these schemes may remain practically unbroken in spite of the new attack.
Abstract: The RSA public key cryptosystem is based on a single modular equation in one variable. A natural generalization of this approach is to consider systems of several modular equations in several variables. In this paper we consider Patarin's Hidden Field Equations (HFE) scheme, which is believed to be one of the strongest schemes of this type. We represent the published system of multivariate polynomials by a single univariate polynomial of a special form over an extension field, and use it to reduce the cryptanalytic problem to a system of εm^2 quadratic equations in m variables over the extension field. Finally, we develop a new relinearization method for solving such systems for any constant ε > 0 in expected polynomial time. The new type of attack is quite general, and in a companion paper we use it to attack other multivariate algebraic schemes, such as the Dragon encryption and signature schemes. However, we would like to emphasize that the polynomial time complexities may be infeasibly large for some choices of the parameters, and thus some variants of these schemes may remain practically unbroken in spite of the new attack.
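
In outline, relinearization works as follows (a generic statement of the technique, not the paper's detailed analysis). Every quadratic monomial is renamed as a fresh variable,

\[ y_{ij} := x_i x_j \qquad (1 \le i \le j \le m), \]

so that each quadratic equation becomes linear in the y_{ij}. Because the y_{ij} are not independent, products of the original variables must agree however they are grouped, which yields additional constraints of the form

\[ y_{ij}\, y_{kl} = y_{ik}\, y_{jl} = y_{il}\, y_{jk}, \]

and these extra equations can be linearized again in the same way; with about εm^2 initial equations for a constant ε > 0, the process is expected to produce a solvable linear system.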

Journal Article
TL;DR: A novel technique is presented that adapts pre-defined state-based specification test data generation criteria to generate test cases from UML statecharts to enable highly effective tests to be developed.
Abstract: Although most industry testing of complex software is conducted at the system level, most formal research has focused on the unit level. As a result, most system level testing techniques are only described informally. This paper presents a novel technique that adapts pre-defined state-based specification test data generation criteria to generate test cases from UML statecharts. UML statecharts provide a solid basis for test generation in a form that can be easily manipulated. This technique includes coverage criteria that enable highly effective tests to be developed. To demonstrate this technique, a tool has been developed that uses UML statecharts produced by Rational Software Corporation's Rational Rose tool to generate test data. Experimental results from using this tool are presented.

Book Chapter
TL;DR: A formal framework is introduced for the analysis and specification of models for trust evolution and trust update and different properties of these models are formally defined.
Abstract: The aim of this paper is to analyse and formalise the dynamics of trust in the light of experiences. A formal framework is introduced for the analysis and specification of models for trust evolution and trust update. Different properties of these models are formally defined.

Book Chapter
TL;DR: An algorithm called "Generate_Spatio_Temporal_Data" (GSTD) is proposed, which generates sets of moving point or rectangular data that follow an extended set of distributions; some actual generated datasets are also presented.
Abstract: An efficient benchmarking environment for spatiotemporal access methods should at least include modules for generating synthetic datasets, storing datasets (real datasets included), collecting and running access structures, and visualizing experimental results. Focusing on the dataset repository module, a collection of synthetic data that would simulate a variety of real life scenarios is required. Several algorithms have been implemented in the past to generate static spatial (point or rectangular) data, for instance, following a predefined distribution in the workspace. However, by introducing motion, and thus temporal evolution in spatial object definition, generating synthetic data tends to be a complex problem. In this paper, we discuss the parameters to be considered by a generator for such types of data, and propose an algorithm, called "Generate_Spatio_Temporal_Data" (GSTD), which generates sets of moving point or rectangular data that follow an extended set of distributions. Some actual generated datasets are also presented. The GSTD source code and several illustrative examples are currently available to all researchers through the Internet.
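
A toy Python sketch of what such a generator does conceptually, assuming uniformly distributed initial positions and a bounded random displacement per timestamp; the parameter names and distributions are purely illustrative and do not reproduce GSTD's actual parameterization.

# Toy moving-point dataset generator (not GSTD itself): uniform initial
# positions in the unit square, plus a small random displacement per timestamp.
import random

def generate_moving_points(num_points, num_timestamps, max_step=0.05, seed=42):
    random.seed(seed)
    points = [(random.random(), random.random()) for _ in range(num_points)]
    snapshots = []
    for t in range(num_timestamps):
        snapshots.append([(t, i, x, y) for i, (x, y) in enumerate(points)])
        points = [(min(1.0, max(0.0, x + random.uniform(-max_step, max_step))),
                   min(1.0, max(0.0, y + random.uniform(-max_step, max_step))))
                  for x, y in points]
    return snapshots

for snapshot in generate_moving_points(num_points=3, num_timestamps=4):
    print(snapshot)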

Journal Article
TL;DR: This paper presents a technique which exploits the duality between sets of patterns and relations to grow the target relation starting from a small sample and uses it to extract a relation of (author,title) pairs from the World Wide Web.
Abstract: The World Wide Web is a vast resource for information. At the same time it is extremely distributed. A particular type of data such as restaurant lists may be scattered across thousands of independent information sources in many different formats. In this paper, we consider the problem of extracting a relation for such a data type from all of these sources automatically. We present a technique which exploits the duality between sets of patterns and relations to grow the target relation starting from a small sample. To test our technique we use it to extract a relation of (author,title) pairs from the World Wide Web.
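
A schematic Python sketch of the pattern/relation bootstrapping loop described above; the pattern model (the literal text between an author and a title) and the tiny in-memory corpus are deliberate simplifications for illustration, not the paper's pattern representation.

# Schematic pattern/relation bootstrapping: seed pairs -> patterns -> new pairs.
import re

def find_patterns(corpus, pairs):
    """Turn each occurrence of a known (author, title) pair into a pattern."""
    patterns = set()
    for author, title in pairs:
        for doc in corpus:
            ia, it = doc.find(author), doc.find(title)
            if ia != -1 and it != -1 and ia < it:
                middle = doc[ia + len(author):it]
                if 0 < len(middle) <= 20:
                    patterns.add(re.escape(middle))
    return patterns

def find_pairs(corpus, patterns):
    """Apply the learned patterns to extract new candidate pairs."""
    pairs = set()
    for pattern in patterns:
        regex = re.compile(r"([A-Z][\w. ]+?)" + pattern + r"([A-Z][\w' ]+)")
        for doc in corpus:
            for author, title in regex.findall(doc):
                pairs.add((author.strip(), title.strip()))
    return pairs

corpus = ["We loved Isaac Asimov, author of Foundation, as kids.",
          "Arthur C. Clarke, author of Rendezvous with Rama, also wrote scripts."]
seeds = {("Isaac Asimov", "Foundation")}
print(find_pairs(corpus, find_patterns(corpus, seeds)))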

Journal Article
TL;DR: This article develops algorithms to select a set of views to materialize in a data warehouse in order to minimize the total query response time under the constraint of a given total view maintenance time, and designs an A* heuristic that delivers an optimal solution.
Abstract: A data warehouse stores materialized views derived from one or more sources for the purpose of efficiently implementing decision-support or OLAP queries. One of the most important decisions in designing a data warehouse is the selection of materialized views to be maintained at the warehouse. The goal is to select an appropriate set of views that minimizes total query response time and/or the cost of maintaining the selected views, given a limited amount of resource such as materialization time, storage space, or total view maintenance time. In this article, we develop algorithms to select a set of views to materialize in a data warehouse in order to minimize the total query response time under the constraint of a given total view maintenance time. As the above maintenance-cost view-selection problem is extremely intractable, we tackle some special cases and design approximation algorithms. First, we design an approximation greedy algorithm for the maintenance-cost view-selection problem in OR view graphs, which arise in many practical applications, e.g., data cubes. We prove that the query benefit of the solution delivered by the proposed greedy heuristic is within 63% of that of the optimal solution. Second, we also design an A* heuristic, which delivers an optimal solution, for the general case of AND-OR view graphs. We implemented our algorithms and a performance study of the algorithms shows that the proposed greedy algorithm for OR view graphs almost always delivers an optimal solution.
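
A schematic greedy selection in Python, illustrating the generic "benefit per unit of maintenance cost under a maintenance-time budget" idea; the scalar benefit and cost values are placeholders and do not capture the paper's OR/AND-OR view-graph cost model or its 63% guarantee.

# Schematic greedy view selection under a maintenance-time budget.
# `views` maps a view name to (query_benefit, maintenance_cost) placeholders.
def greedy_select(views, maintenance_budget):
    selected, total_maintenance = [], 0.0
    remaining = dict(views)
    while remaining:
        feasible = {v: (b, c) for v, (b, c) in remaining.items()
                    if total_maintenance + c <= maintenance_budget}
        if not feasible:
            break
        # Pick the view with the best benefit per unit of maintenance cost.
        best = max(feasible, key=lambda v: feasible[v][0] / feasible[v][1])
        _, cost = remaining.pop(best)
        selected.append(best)
        total_maintenance += cost
    return selected

views = {"v1": (100.0, 10.0), "v2": (60.0, 3.0), "v3": (40.0, 8.0)}
print(greedy_select(views, maintenance_budget=12.0))  # ['v2', 'v3']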

Book Chapter
TL;DR: Hspr is a heuristic search planner that searches backward from the goal rather than forward from the initial state, which allows hspr to compute the heuristic estimates only once, and can produce better plans, often in less time.
Abstract: In the recent AIPS98 Planning Competition, the hsp planner, based on a forward state search and a domain-independent heuristic, showed that heuristic search planners can be competitive with state of the art Graphplan and Satisfiability planners. hsp solved more problems than the other planners but it often took more time or produced longer plans. The main bottleneck in hsp is the computation of the heuristic for every new state. This computation may take up to 85% of the processing time. In this paper, we present a solution to this problem that uses a simple change in the direction of the search. The new planner, that we call hspr, is based on the same ideas and heuristic as hsp, but searches backward from the goal rather than forward from the initial state. This allows hspr to compute the heuristic estimates only once. As a result, hspr can produce better plans, often in less time. For example, hspr solves each of the 30 logistics problems from Kautz and Selman in less than 3 seconds. This is two orders of magnitude faster than blackbox. At the same time, in almost all cases, the plans are substantially smaller. hspr is also more robust than hsp as it visits a larger number of states, makes deterministic decisions, and relies on a single adjustable parameter that can be fixed for most domains. hspr, however, is not better than hsp across all domains and, in particular, in the blocks world, hspr fails on some large instances that hsp can solve. We also discuss the relation between hspr and Graphplan, and argue that Graphplan can also be understood as a heuristic search planner with a precise heuristic function and search algorithm.
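
In outline, the heuristic shared by hsp and hspr estimates the cost g(p) of achieving each atom p from the initial state s_0 by the additive fixed-point equations below (a standard formulation of the additive heuristic, not a quotation from the paper; hspr computes it once and then scores each regressed state s by summing over its atoms):

\[
g(p) =
\begin{cases}
0 & \text{if } p \in s_0,\\
\min_{a \,:\, p \in \mathrm{Add}(a)} \Bigl[\, 1 + \sum_{q \in \mathrm{Prec}(a)} g(q) \Bigr] & \text{otherwise,}
\end{cases}
\qquad
h(s) = \sum_{p \in s} g(p).
\]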

Journal Article
TL;DR: The notion of abuse-free distributed contract signing is introduced, that is, distributed contract signing in which no party ever can prove to a third party that he is capable of choosing whether to validate or invalidate the contract.
Abstract: We introduce the notion of abuse-free distributed contract signing, that is, distributed contract signing in which no party ever can prove to a third party that he is capable of choosing whether to validate or invalidate the contract. Assume Alice and Bob are signing a contract. If the contract protocol they use is not abuse-free, then it is possible for one party, say Alice, at some point to convince a third party, Val, that Bob is committed to the contract, whereas she is not yet. Contract protocols with this property are therefore not favorable to Bob, as there is a risk that Alice does not really want to sign the contract with him, but only use his willingness to sign to get leverage for another contract. Most existing optimistic contract signing schemes are not abuse-free. (The only optimistic contract signing scheme to date that does not have this property is inefficient, and is only abuse-free against an off-line attacker.) We give an efficient abuse-free optimistic contract-signing protocol based on ideas introduced for designated verifier proofs (i.e., proofs for which only a designated verifier can be convinced). Our basic solution is for two parties. We show that straightforward extensions to n > 2 party contracts do not work, and then show how to construct a three-party abuse-free optimistic contract-signing protocol. An important technique we introduce is a type of signature we call a private contract signature. Roughly, these are designated verifier signatures that can be converted into universally-verifiable signatures by either the signing party or a trusted third party appointed by the signing party, whose identity and power to convert can be verified (without interaction) by the party who is the designated verifier.

Journal Article
TL;DR: In this article, the authors propose integrative negotiation as a more suitable approach to retail electronic commerce, and identify promising techniques (e.g., multi-attribute utility theory, distributed constraint satisfaction, and conjoint analysis) for implementing agent-mediated integrative negotiations.
Abstract: Software agents help automate a variety of tasks including those involved in buying and selling products over the Internet. Although shopping agents provide convenience for consumers and yield more efficient markets, today's first-generation shopping agents are limited to comparing merchant offerings only on price instead of their full range of value. As such, they do a disservice to both consumers and retailers by hiding important merchant value-added services from consumer consideration. Likewise, the increasingly popular online auctions pit sellers against buyers in distributive negotiation tug-of-wars over price. This paper analyzes these approaches from economic, behavioral, and software agent perspectives, and then proposes integrative negotiation as a more suitable approach to retail electronic commerce. Finally, we identify promising techniques (e.g., multi-attribute utility theory, distributed constraint satisfaction, and conjoint analysis) for implementing agent-mediated integrative negotiation.


Journal Article
TL;DR: In this paper, the authors proposed a public key encryption scheme in which there is one public encryption key, but many private decryption keys, and the tracing algorithm is deterministic and catches all active traitors.
Abstract: We construct a public key encryption scheme in which there is one public encryption key, but many private decryption keys. If some digital content (e.g., a music clip) is encrypted using the public key and distributed through a broadcast channel, then each legitimate user can decrypt using its own private key. Furthermore, if a coalition of users collude to create a new decryption key then there is an efficient algorithm to trace the new key to its creators. Hence, our system provides a simple and efficient solution to the traitor tracing problem. Our tracing algorithm is deterministic, and catches all active traitors while never accusing innocent users, although it is only partially black box. A minor modification to the scheme enables it to resist an adaptive chosen ciphertext attack. Our techniques apply error correcting codes to the discrete log representation problem.

Book Chapter
TL;DR: TLC is a new model checker for debugging a TLA+ specification by checking invariance properties of a finite-state model of the specification.
Abstract: TLA+ is a specification language for concurrent and reactive systems that combines the temporal logic TLA with full first-order logic and ZF set theory. TLC is a new model checker for debugging a TLA+ specification by checking invariance properties of a finite-state model of the specification. It accepts a subclass of TLA+ specifications that should include most descriptions of real system designs. It has been used by engineers to find errors in the cache coherence protocol for a new Compaq multiprocessor. We describe TLA+ specifications and their TLC models, how TLC works, and our experience using it.

Book Chapter
TL;DR: On January 2, 1997, the National Institute of Standards and Technology in the US announced that they intended to initiate the development of a new world-wide encryption standard to replace the Data Encryption Standard (DES); a call for candidates was announced worldwide with a deadline of 15th June 1998.
Abstract: On January 2, 1997, the National Institute of Standards and Technology in the US announced that they intended to initiate the development of a new world-wide encryption standard to replace the Data Encryption Standard (DES). A call for candidates was announced world-wide with the deadline of 15th June 1998. In total, 15 candidates were submitted from the US, Canada, Europe, Asia and Australia. The author is the designer of one of the candidates, and a codesigner of another proposal.

Book Chapter
TL;DR: This paper presents the research philosophy behind the Oz Project, a research group at CMU that has spent the last ten years studying believable agents and interactive drama, and then surveys current work from an Oz perspective.
Abstract: Believable agents are autonomous agents that exhibit rich personalities. Interactive dramas take place in virtual worlds inhabited by believable agents with whom an audience interacts. In the course of this interaction, the audience experiences a story. This paper presents the research philosophy behind the Oz Project, a research group at CMU that has spent the last ten years studying believable agents and interactive drama. The paper then surveys current work from an Oz perspective.

Book Chapter
TL;DR: The goal of this paper is to provide an introduction, with various elements of novelty, to the Planning as Model Checking paradigm.
Abstract: The goal of this paper is to provide an introduction, with various elements of novelty, to the Planning as Model Checking paradigm.

Book Chapter
TL;DR: This contribution develops a new technique for content-based image retrieval that classifies images based on local invariants; these features represent the image in a very compact way and allow fast comparison and feature matching with images in the database.
Abstract: This contribution develops a new technique for content-based image retrieval. Where most existing image retrieval systems mainly focus on color and color distribution or texture, we classify the images based on local invariants. These features represent the image in a very compact way and allow fast comparison and feature matching with images in the database. Using local features makes the system robust to occlusions and changes in the background. Using invariants makes it robust to changes in viewpoint and illumination.

Book Chapter
TL;DR: Experimental results presented in this paper show that the algorithm outperforms in practice the algorithms by Eppstein and by Martins and Santos for different kinds of randomly generated graphs.
Abstract: A new algorithm to compute the K shortest paths (in order of increasing length) between a given pair of nodes in a digraph with n nodes and m arcs is presented. The algorithm recursively and efficiently solves a set of equations which generalize the Bellman equations for the (single) shortest path problem and allows a straightforward implementation. After the shortest path from the initial node to every other node has been computed, the algorithm finds the K shortest paths in O(m+Knlog(m/n)) time. Experimental results presented in this paper show that the algorithm outperforms in practice the algorithms by Eppstein [7,8] and by Martins and Santos [15] for different kinds of randomly generated graphs.
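
Roughly, the generalized Bellman equations solved by the algorithm can be stated as follows (a paraphrase assuming paths need not be simple, not the paper's exact notation). Writing L^k(v) for the length of the k-th shortest path from the source s to node v and w(u, v) for the length of arc (u, v):

\[
L^{1}(s) = 0, \qquad L^{1}(v) = \min_{(u, v) \in E} \bigl[ L^{1}(u) + w(u, v) \bigr],
\]
\[
L^{k}(v) = k\text{-th smallest element of } \bigl\{ L^{j}(u) + w(u, v) : (u, v) \in E,\ j \ge 1 \bigr\} \quad (k > 1),
\]

so once the shortest-path tree is known, each further path length at a node is obtained by selecting the next-best candidate arriving from one of its predecessors.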

Journal Article
TL;DR: A complete formalisation of UML state machine semantics is given in terms of an operational semantics and it can be used as the basis for code-generation, simulation and verification tools for UML Statecharts diagrams.
Abstract: The paper discusses a complete formalisation of UML state machine semantics. This formalisation is given in terms of an operational semantics and it can be used as the basis for code-generation, simulation and verification tools for UML Statecharts diagrams. The formalisation is done in two steps. First, the structure of a UML state machine is translated into a term rewriting system. In the second step, the operational semantics of state machines is defined. In addition, some problematic situations that may arise are discussed. Our formalisation is able to deal with all the features of UML state machines and it has been implemented in the vUML tool, a tool for model-checking UML models.

Journal Article
TL;DR: In this article, a model checking algorithm for continuous-time Markov chains for an extension of the continuous stochastic logic CSL of Aziz et al. is presented, which contains a time-bounded until operator and a novel operator to express steady-state probabilities.
Abstract: This paper presents a symbolic model checking algorithm for continuous-time Markov chains for an extension of the continuous stochastic logic CSL of Aziz et al. [1]. The considered logic contains a time-bounded until operator and a novel operator to express steady-state probabilities. We show that the model checking problem for this logic reduces to a system of linear equations (for unbounded until and the steady-state operator) and a Volterra integral equation system for time-bounded until. We propose a symbolic approximate method for solving the integrals using MTDDs (multi-terminal decision diagrams), a generalisation of MTBDDs. These new structures are suitable for numerical integration using quadrature formulas based on equally-spaced abscissas, like the trapezoidal, Simpson and Romberg integration schemes.

Journal Article
TL;DR: In this article, a new approximation algorithm for maximum weighted matching in general edge-weighted graphs is presented, which calculates a matching with an edge weight of at least one-half of the edge weight of a maximum weighted matching.
Abstract: A new approximation algorithm for maximum weighted matching in general edge-weighted graphs is presented. It calculates a matching with an edge weight of at least one-half of the edge weight of a maximum weighted matching. Its time complexity is O(|E|), with |E| being the number of edges in the graph. This improves over the previously known 1/2-approximation algorithms for maximum weighted matching, which require O(|E| log(|V|)) steps, where |V| is the number of vertices.
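
For contrast with the linear-time approach above, here is a minimal Python sketch of the classical greedy that attains the same one-half guarantee by scanning edges in decreasing order of weight; its sort makes it O(|E| log |E|), which is precisely the overhead the paper's algorithm avoids.

# Classical greedy 1/2-approximation for maximum weighted matching: scan edges
# by decreasing weight and keep every edge whose endpoints are still free.
# (This is the older O(|E| log |E|) baseline, not the paper's linear-time method.)
def greedy_matching(edges):
    """edges: iterable of (weight, u, v). Returns a list of matched (u, v) pairs."""
    matched_nodes, matching = set(), []
    for _, u, v in sorted(edges, reverse=True):
        if u not in matched_nodes and v not in matched_nodes:
            matching.append((u, v))
            matched_nodes.update((u, v))
    return matching

edges = [(4.0, "a", "b"), (3.0, "b", "c"), (3.0, "c", "d"), (1.0, "a", "d")]
print(greedy_matching(edges))  # [('a', 'b'), ('c', 'd')]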