
Showing papers in "Lecture Notes in Computer Science in 2000"


Book ChapterDOI
TL;DR: The SIMPLIcity system represents an image by a set of regions, roughly corresponding to objects and characterized by color, texture, shape, and location, and classifies images into categories intended to distinguish semantically meaningful differences.
Abstract: We present here SIMPLIcity (Semantics-sensitive Integrated Matching for Picture LIbraries), an image retrieval system using semantics classification and integrated region matching (IRM) based upon image segmentation. The SIMPLIcity system represents an image by a set of regions, roughly corresponding to objects, which are characterized by color, texture, shape, and location. The system classifies images into categories which are intended to distinguish semantically meaningful differences, such as textured versus nontextured, indoor versus outdoor, and graph versus photograph. Retrieval is enhanced by narrowing the search range in a database to a particular category and exploiting semantically adaptive search methods. A measure for the overall similarity between images, the IRM distance, is defined by a region-matching scheme that integrates properties of all the regions in the images. This overall similarity approach reduces the adverse effect of inaccurate segmentation, helps to clarify the semantics of a particular region, and enables a simple querying interface for region-based image retrieval systems. The application of SIMPLIcity to a database of about 200,000 general-purpose images demonstrates accurate retrieval at high speed. The system is also robust to image alterations.
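The region-matching idea can be illustrated with a toy sketch (hypothetical feature vectors and area weights, not the actual SIMPLIcity features): significance credit is allocated greedily to the most similar region pairs, and the IRM distance is the significance-weighted sum of the regional distances.

```python
import math

def region_distance(f1, f2):
    """Euclidean distance between two region feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

def irm_distance(regions1, regions2):
    """Toy integrated region matching (IRM) distance.

    Each image is a list of (feature_vector, area_weight) pairs whose
    weights sum to 1.  Significance credit is assigned greedily to the
    most similar region pairs ("most similar highest priority"), and
    the distance is the significance-weighted sum of regional distances.
    """
    pairs = sorted(
        ((region_distance(f1, f2), i, j)
         for i, (f1, _) in enumerate(regions1)
         for j, (f2, _) in enumerate(regions2)),
        key=lambda t: t[0])
    w1 = [w for _, w in regions1]
    w2 = [w for _, w in regions2]
    total = 0.0
    for d, i, j in pairs:
        s = min(w1[i], w2[j])   # significance credit for this pair
        if s > 0:
            total += s * d
            w1[i] -= s
            w2[j] -= s
    return total
```

Because every region of either image eventually receives credit, a badly segmented region cannot dominate the distance, which is the robustness property claimed above.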

1,475 citations


Journal Article
TL;DR: This work introduces a new provably secure group signature and a companion identity escrow scheme that are significantly more efficient than the state of the art.
Abstract: A group signature scheme allows a group member to sign messages anonymously on behalf of the group. However, in the case of a dispute, the identity of a signature's originator can be revealed (only) by a designated entity. The interactive counterparts of group signatures are identity escrow schemes or group identification schemes with revocable anonymity. This work introduces a new provably secure group signature and a companion identity escrow scheme that are significantly more efficient than the state of the art. In its interactive, identity escrow form, our scheme is proven secure and coalition-resistant under the strong RSA and the decisional Diffie-Hellman assumptions. The security of the non-interactive variant, i.e., the group signature scheme, relies additionally on the Fiat-Shamir heuristic (also known as the random oracle model).

744 citations


Journal Article
TL;DR: This paper introduces the concept of privacy preserving data mining, and presents a solution that is considerably more efficient than generic solutions, and demonstrates that secure multi-party computation can be made practical, even for complex problems and large inputs.
Abstract: In this paper we introduce the concept of privacy preserving data mining. In our model, two parties owning confidential databases wish to run a data mining algorithm on the union of their databases, without revealing any unnecessary information. This problem has many practical and important applications, such as in medical research with confidential patient records. Data mining algorithms are usually complex, especially as the size of the input is measured in megabytes, if not gigabytes. A generic secure multi-party computation solution, based on evaluation of a circuit computing the algorithm on the entire input, is therefore of no practical use. We focus on the problem of decision tree learning and use ID3, a popular and widely used algorithm for this problem. We present a solution that is considerably more efficient than generic solutions. It demands very few rounds of communication and reasonable bandwidth. In our solution, each party performs by itself a computation of the same order as computing the ID3 algorithm for its own database. The results are then combined using efficient cryptographic protocols, whose overhead is only logarithmic in the number of transactions in the databases. We feel that our result is a substantial contribution, demonstrating that secure multi-party computation can be made practical, even for complex problems and large inputs.
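For reference, the local computation each party mirrors is the core (non-private) ID3 step: choosing the attribute with maximal information gain. A plain sketch of that step follows; the cryptographic subprotocols that combine the two parties' results are not shown here.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attribute, label_key="label"):
    """Information gain of splitting `examples` (list of dicts) on `attribute`."""
    labels = [e[label_key] for e in examples]
    base = entropy(labels)
    n = len(examples)
    split = Counter(e[attribute] for e in examples)
    remainder = 0.0
    for value, count in split.items():
        subset = [e[label_key] for e in examples if e[attribute] == value]
        remainder += (count / n) * entropy(subset)
    return base - remainder

def best_attribute(examples, attributes):
    """The ID3 choice: the attribute with maximal information gain."""
    return max(attributes, key=lambda a: information_gain(examples, a))
```

In the protocol of the paper, each party evaluates these counts on its own database and the parties combine them obliviously; the sketch above shows only what is being computed, not how it is kept private.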

669 citations


Book ChapterDOI
TL;DR: This paper gives simple greedy approximation algorithms for these optimization problems of finding subgraphs maximizing these notions of density for undirected and directed graphs and answers an open question about the complexity of the optimization problem for directed graphs.
Abstract: We study the problem of finding highly connected subgraphs of undirected and directed graphs. For undirected graphs, the notion of density of a subgraph we use is the average degree of the subgraph. For directed graphs, a corresponding notion of density was introduced recently by Kannan and Vinay. It is designed to quantify the connectedness of substructures in a sparse directed graph such as the web graph. We study the optimization problems of finding subgraphs maximizing these notions of density for undirected and directed graphs. This paper gives simple greedy approximation algorithms for these optimization problems. We also answer an open question about the complexity of the optimization problem for directed graphs.
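For the undirected case, the greedy algorithm can be sketched as follows (a minimal reimplementation under the average-degree density notion, not the authors' code): repeatedly delete a minimum-degree vertex and keep the best intermediate subgraph.

```python
def densest_subgraph_greedy(n, edges):
    """Greedy peeling for the maximum average-degree subgraph.

    Repeatedly remove a minimum-degree vertex; return the vertex set of
    the intermediate subgraph with the largest average degree 2|E|/|V|.
    This simple procedure is a 2-approximation for undirected graphs.
    """
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive = set(range(n))
    m = len(edges)
    best_density, best_set = -1.0, set()
    while alive:
        density = 2 * m / len(alive)        # average degree of current subgraph
        if density > best_density:
            best_density, best_set = density, set(alive)
        u = min(alive, key=lambda v: len(adj[v]))   # min-degree vertex
        m -= len(adj[u])
        for w in adj[u]:
            adj[w].discard(u)
        adj[u].clear()
        alive.discard(u)
    return best_set, best_density
```

On a 4-clique with one pendant vertex attached, peeling removes the pendant first and correctly returns the clique as the densest subgraph.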

523 citations


Book ChapterDOI
TL;DR: This note is intended as a companion to the lecture at CONF 2000, mainly to give pointers to the appropriate references.
Abstract: One of the most flourishing areas of research in the design and analysis of approximation algorithms has been for facility location problems. In particular, for the metric case of two simple models, the uncapacitated facility location and the k-median problems, there are now a variety of techniques that yield constant performance guarantees. These methods include LP rounding, primal-dual algorithms, and local search techniques. Furthermore, the salient ideas in these algorithms and their analyses are simple to explain and reflect a surprising degree of commonality. This note is intended as a companion to our lecture at CONF 2000, mainly to give pointers to the appropriate references.

499 citations


Journal Article
TL;DR: A slightly different proof is presented that provides a tighter security reduction for the full domain hash (FDH) scheme in the random oracle model, assuming that inverting RSA is hard; the tighter reduction means smaller RSA moduli can be used for the same level of security.
Abstract: The Full Domain Hash (FDH) scheme is an RSA-based signature scheme in which the message is hashed onto the full domain of the RSA function. The FDH scheme is provably secure in the random oracle model, assuming that inverting RSA is hard. In this paper we exhibit a slightly different proof which provides a tighter security reduction. This in turn improves the efficiency of the scheme since smaller RSA moduli can be used for the same level of security. The same method can be used to obtain a tighter security reduction for the Rabin, Paillier, and Gennaro-Halevi-Rabin signature schemes.
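The scheme itself is short enough to sketch. Below is a toy Python illustration with deliberately insecure parameters; a real FDH instantiation expands the hash output onto the full domain of the modulus, whereas reducing a single SHA-256 digest mod N is a simplification for this tiny key.

```python
import hashlib

# Toy RSA key -- illustration only; these parameters are trivially breakable.
p, q = 61, 53
N = p * q                      # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

def full_domain_hash(message: bytes) -> int:
    """Hash the message into Z_N (simplified: one SHA-256 digest mod N)."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % N

def sign(message: bytes) -> int:
    """FDH signature: s = H(m)^d mod N."""
    return pow(full_domain_hash(message), d, N)

def verify(message: bytes, signature: int) -> bool:
    """Accept iff s^e mod N equals the hash of the message."""
    return pow(signature, e, N) == full_domain_hash(message)
```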

456 citations


Journal Article
TL;DR: The authors present an extensive and careful study of the software implementation on workstations of the NIST-recommended elliptic curves over binary fields, and report the results of their implementation in C on a 400 MHz Pentium II workstation.
Abstract: This paper presents an extensive and careful study of the software implementation on workstations of the NIST-recommended elliptic curves over binary fields. We also present the results of our implementation in C on a Pentium II 400MHz workstation.

425 citations


Book ChapterDOI
TL;DR: Using the Wilcoxon matched-pairs signed rank test, it is concluded that the C4.5 approach and the method of ignoring examples with missing attribute values are the best methods among all nine approaches.
Abstract: In this paper nine different approaches to missing attribute values are presented and compared. Ten input data files were used to investigate the performance of the nine methods to deal with missing attribute values. For testing both naive classification and new classification techniques of LERS (Learning from Examples based on Rough Sets) were used. The quality criterion was the average error rate achieved by ten-fold cross-validation. Using the Wilcoxon matched-pairs signed rank test, we conclude that the C4.5 approach and the method of ignoring examples with missing attribute values are the best methods among all nine approaches; the most common attribute-value method is the worst method among all nine approaches; while some methods do not differ from other methods significantly. The method of assigning to the missing attribute value all possible values of the attribute and the method of assigning to the missing attribute value all possible values of the attribute restricted to the same concept are excellent approaches based on our limited experimental results. However, we do not have enough evidence to support the claim that these approaches are superior.
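Two of the compared approaches are simple enough to sketch directly (hypothetical data representation, with `None` marking a missing value):

```python
def ignore_missing(examples):
    """One of the best-performing approaches: drop every example that
    contains a missing value."""
    return [e for e in examples if None not in e.values()]

def most_common_value(examples, attribute):
    """Most common value of `attribute` among examples where it is known."""
    values = [e[attribute] for e in examples if e[attribute] is not None]
    return max(set(values), key=values.count)

def fill_most_common(examples):
    """The most-common-attribute-value method (the worst performer in
    the paper's comparison): replace each missing value with the
    attribute's most common value."""
    filled = []
    for e in examples:
        filled.append({a: (most_common_value(examples, a) if v is None else v)
                       for a, v in e.items()})
    return filled
```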

406 citations


Journal Article
TL;DR: Different ways to attack devices featuring random process interrupts and noisy power consumption are examined.
Abstract: The silicon industry has lately been focusing on side channel attacks, that is, attacks that exploit information that leaks from the physical devices. Although different countermeasures to thwart these attacks have been proposed and implemented, in general such protections do not make attacks infeasible, but increase the attacker's experimental (data acquisition) and computational (data processing) workload beyond reasonable limits. This paper examines different ways to attack devices featuring random process interrupts and noisy power consumption.

406 citations


Journal Article
TL;DR: A new algorithm is developed for deciding the winner in parity games, and hence also for modal μ-calculus model checking, based on a notion of game progress measures.
Abstract: In this paper we develop a new algorithm for deciding the winner in parity games, and hence also for modal μ-calculus model checking. The design and analysis of the algorithm are based on a notion of game progress measures: they are witnesses for winning strategies in parity games. We characterize game progress measures as pre-fixed points of certain monotone operators on a complete lattice. As a result we get the existence of the least game progress measures and a straightforward way to compute them. The worst-case running time of our algorithm matches the best worst-case running time bounds known so far for the problem, achieved by the algorithms due to Browne et al., and Seidl. Our algorithm has better space complexity: it works in small polynomial space; the other two algorithms have exponential worst-case space complexity.

381 citations


Journal Article
TL;DR: Two distinct, rigorous views of cryptography have developed in mostly separate communities; this paper starts to bridge the gap between them by providing a computational justification for a formal treatment of encryption.
Abstract: Two distinct, rigorous views of cryptography have developed over the years, in two mostly separate communities. One of the views relies on a simple but effective formal approach; the other, on a detailed computational model that considers issues of complexity and probability. There is an uncomfortable and interesting gap between these two approaches to cryptography. This paper starts to bridge the gap, by providing a computational justification for a formal treatment of encryption.

Journal Article
TL;DR: This paper presents an extension to the reflection API of Java that enables structural reflection, and presents the design principles of Javassist, which distinguish Javassist from related work.
Abstract: The standard reflection API of Java provides the ability to introspect a program but not to alter program behavior. This paper presents an extension to the reflection API for addressing this limitation. Unlike other extensions enabling behavioral reflection, our extension, called Javassist, enables structural reflection in Java. To work with a standard Java virtual machine (JVM) and avoid performance problems, Javassist allows structural reflection only before a class is loaded into the JVM. However, Javassist still covers various applications, including a language extension emulating behavioral reflection. This paper also presents the design principles of Javassist, which distinguish Javassist from related work.

Book ChapterDOI
TL;DR: A modified XCS classifier system is described that learns a non-linear real-vector classification task.
Abstract: Classifier systems have traditionally taken binary strings as inputs, yet in many real problems such as data inference, the inputs have real components. A modified XCS classifier system is described that learns a non-linear real-vector classification task.

Journal Article
TL;DR: Data about students' use of the help facilities of the PACT Geometry Tutor, a cognitive tutor for high school geometry, suggest that students do not always have the metacognitive skills needed to use help effectively, and that intelligent tutoring systems should support these skills just as they support the learning of domain-specific skills and knowledge.
Abstract: Intelligent tutoring systems often emphasize learner control: They let the students decide when and how to use the system's intelligent and unintelligent help facilities. This means that students must judge when help is needed and which form of help is appropriate. Data about students' use of the help facilities of the PACT Geometry Tutor, a cognitive tutor for high school geometry, suggest that students do not always have these metacognitive skills. Students rarely used the tutor's on-line Glossary of geometry knowledge. They tended to wait long before asking for hints, and tended to focus only on the most specific hints, ignoring the higher hint levels. This suggests that intelligent tutoring systems should support students in learning these skills, just as they support students in learning domain-specific skills and knowledge. Within the framework of cognitive tutors, this requires creating a cognitive model of the metacognitive help-seeking strategies, in the form of production rules. The tutor then can use the model to monitor students' metacognitive strategies and provide feedback.

Book ChapterDOI
TL;DR: Using the closure of the Galois connection, the authors define two new bases for association rules whose union is a generating set for all valid association rules with support and confidence; these bases consist of the nonredundant exact and approximate association rules having minimal antecedents and maximal consequents.
Abstract: The problem of the relevance and the usefulness of extracted association rules is of primary importance because, in the majority of cases, real-life databases lead to several thousand association rules with high confidence and among which are many redundancies. Using the closure of the Galois connection, we define two new bases for association rules whose union is a generating set for all valid association rules with support and confidence. These bases are characterized using frequent closed itemsets and their generators; they consist of the nonredundant exact and approximate association rules having minimal antecedents and maximal consequents, i.e., the most relevant association rules. Algorithms for extracting these bases are presented and results of experiments carried out on real-life databases show that the proposed bases are useful, and that their generation is not time consuming.
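The closure operator of the Galois connection is simple to state: the closure of an itemset is the intersection of all transactions containing it, and an itemset is closed iff it equals its own closure. A toy sketch:

```python
def closure(itemset, transactions):
    """Galois closure: intersection of all transactions containing `itemset`.

    Closed itemsets (fixed points of this operator) and their generators
    are the building blocks of the two rule bases described above.
    """
    itemset = frozenset(itemset)
    covering = [t for t in transactions if itemset <= t]
    if not covering:
        return itemset            # convention: no transaction covers it
    return frozenset.intersection(*covering)

def support(itemset, transactions):
    """Fraction of transactions containing `itemset`."""
    itemset = frozenset(itemset)
    return sum(1 for t in transactions if itemset <= t) / len(transactions)
```

An itemset and its closure always have the same support, which is why rules can be restricted to closed itemsets without losing information.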

Journal Article
TL;DR: An algorithm to generate small Büchi automata for LTL formulae using a heuristic approach consisting of three phases: rewriting of the formula, an optimized translation procedure, and simplification of the resulting automaton.
Abstract: We present an algorithm to generate small Büchi automata for LTL formulae. We describe a heuristic approach consisting of three phases: rewriting of the formula, an optimized translation procedure, and simplification of the resulting automaton. We present a translation procedure that is optimal within a certain class of translation procedures. The simplification algorithm can be used for Büchi automata in general. It reduces the number of states and transitions, as well as the number and size of the accepting sets, possibly reducing the strength of the resulting automaton. This leads to more efficient model checking of linear-time logic formulae. We compare our method to previous work, and show that it is significantly more efficient for both random formulae and formulae in common use and from the literature.

Journal Article
TL;DR: This paper presents a method using an extended logical system for obtaining programs from specifications written in a sublanguage of CASL, and provides a method for producing a program module that maximally respects the original structure of the specification.
Abstract: We present a method using an extended logical system for obtaining programs from specifications written in a sublanguage of CASL. These programs are correct in the sense that they satisfy their specifications. The technique we use is to extract programs from proofs in formal logic by techniques due to Curry and Howard. The logical calculus, however, is novel because it adds structural rules corresponding to the standard ways of modifying specifications: translating (renaming), taking unions, and hiding signatures. Although programs extracted by the Curry-Howard process can be very cumbersome, we use a number of simplifications that ensure that the programs extracted are in a language close to a standard high-level programming language. We use this to produce an executable refinement of a given specification and we then provide a method for producing a program module that maximally respects the original structure of the specification. Throughout the paper we demonstrate the technique with a simple example.

Journal Article
TL;DR: Regular model checking is presented, a framework for algorithmic verification of infinite-state systems with, e.g., queues, stacks, integers, or a parameterized linear topology, by computation of the transitive closure of a transition relation.
Abstract: We present regular model checking, a framework for algorithmic verification of infinite-state systems with, e.g., queues, stacks, integers, or a parameterized linear topology. States are represented by strings over a finite alphabet and the transition relation by a regular length-preserving relation on strings. Major problems in the verification of parameterized and infinite-state systems are to compute the set of states that are reachable from some set of initial states, and to compute the transitive closure of the transition relation. We present two complementary techniques for these problems. One is a direct automata-theoretic construction, and the other is based on widening. Both techniques are incomplete in general, but we give sufficient conditions under which they work. We also present a method for verifying ω-regular properties of parameterized systems, by computation of the transitive closure of a transition relation.

Journal ArticleDOI
TL;DR: dynamicTAO is a CORBA-compliant reflective ORB that supports dynamic configuration; it maintains an explicit representation of its own internal structure and uses it to carry out run time customization safely.
Abstract: Conventional middleware systems fail to address important issues related to dynamism. Modern computer systems have to deal not only with heterogeneity in the underlying hardware and software platforms but also with highly dynamic environments. Mobile and distributed applications are greatly affected by dynamic changes of environment characteristics such as security constraints and resource availability. Existing middleware is not prepared to react to these changes. In many cases, application developers know when adaptive changes in communication and security strategies would improve system performance. But often, they are not able to benefit from it because the middleware lacks the mechanisms to support monitoring (to detect when adaptation should take place) and on-the-fly reconfiguration. dynamicTAO is a CORBA-compliant reflective ORB that supports dynamic configuration. It maintains an explicit representation of its own internal structure and uses it to carry out run time customization safely. After describing dynamicTAO's design and implementation, we discuss our experience with the development of two systems benefiting from the reflective nature of our ORB: a flexible monitoring system for distributed objects and a mechanism for enforcing access control based on dynamic security policies.

Journal Article
TL;DR: This work presents both visual attacks, making use of the ability of humans to clearly discern between noise and visual patterns, and statistical attacks which are much easier to automate.
Abstract: The majority of steganographic utilities for the camouflage of confidential communication suffers from fundamental weaknesses. On the way to more secure steganographic algorithms, the development of attacks is essential to assess security. We present both visual attacks, making use of the ability of humans to clearly discern between noise and visual patterns, and statistical attacks which are much easier to automate.
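The visual attack can be sketched in a few lines: extract the least-significant-bit plane of the cover's sample values and render it as an image; structured covers show visible patterns, while embedded random message bits look like uniform noise. A minimal sketch on raw sample values (no image library; data and function names are hypothetical):

```python
def lsb_plane(samples):
    """Least-significant-bit plane of a sequence of sample values.

    Rendering each bit as a black/white pixel is the 'visual attack':
    a structured cover produces visible patterns here, while
    LSB-embedded message bits look like uniform noise.
    """
    return [s & 1 for s in samples]

def lsb_embed(samples, bits):
    """Naive sequential LSB embedding -- the kind of scheme such
    attacks target (for demonstration, not for use)."""
    out = list(samples)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out
```

The statistical attacks mentioned above instead test the frequencies of pairs of values differing only in the LSB, which embedding tends to equalize.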

Journal Article
TL;DR: This report is a brief overview of the use of feature-based methods in structure and motion computation; a companion paper by Irani and Anandan [16] reviews direct methods.
Abstract: This report is a brief overview of the use of “feature based” methods in structure and motion computation. A companion paper by Irani and Anandan [16] reviews “direct” methods.

Book ChapterDOI
TL;DR: The simple noun phrase-based system performs roughly as well as a state-of-the-art, corpus-trained keyphrase extractor; ratings for individual keyphrases do not necessarily correlate with ratings for sets of keyphrases for a document.
Abstract: Automatically extracting keyphrases from documents is a task with many applications in information retrieval and natural language processing. Document retrieval can be biased towards documents containing relevant keyphrases; documents can be classified or categorized based on their keyphrases; automatic text summarization may extract sentences with high keyphrase scores. This paper describes a simple system for choosing noun phrases from a document as keyphrases. A noun phrase is chosen based on its length, its frequency and the frequency of its head noun. Noun phrases are extracted from a text using a base noun phrase skimmer and an off-the-shelf online dictionary. Experiments involving human judges reveal several interesting results: the simple noun phrase-based system performs roughly as well as a state-of-the-art, corpus-trained keyphrase extractor; ratings for individual keyphrases do not necessarily correlate with ratings for sets of keyphrases for a document; agreement among unbiased judges on the keyphrase rating task is poor.
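The three scoring signals named above (phrase length, phrase frequency, and head-noun frequency) can be sketched as follows; the multiplicative combination is illustrative only, not the paper's actual formula, and noun-phrase extraction itself is assumed done upstream.

```python
from collections import Counter

def score_noun_phrases(phrases):
    """Score candidate noun phrases (lists of words, head noun last)
    by phrase frequency, head-noun frequency, and phrase length.
    The product used here is a hypothetical combination of the three
    signals the paper describes."""
    keys = [tuple(p) for p in phrases]
    phrase_freq = Counter(keys)
    head_freq = Counter(p[-1] for p in keys)
    return {p: phrase_freq[p] * head_freq[p[-1]] * len(p)
            for p in set(keys)}
```

Ranking the resulting dictionary by score and keeping the top few entries yields the document's candidate keyphrases.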

Journal Article
TL;DR: The authors show that the 'BooleanToArithmetic' algorithm proposed by T. Messerges is not sufficient to prevent Differential Power Analysis, and that the 'ArithmeticToBoolean' algorithm is not secure either.
Abstract: Since the announcement of the Differential Power Analysis (DPA) by Paul Kocher et al., several countermeasures were proposed in order to protect software implementations of cryptographic algorithms. In an attempt to reduce the resulting memory and execution time overhead, Thomas Messerges recently proposed a general method that masks all the intermediate data. This masking strategy is possible if all the fundamental operations used in a given algorithm can be rewritten with masked input data, giving masked output data. This is easily seen to be the case in classical algorithms such as DES or RSA. However, for algorithms that combine Boolean and arithmetic functions, such as IDEA or several of the AES candidates, two different kinds of masking have to be used. There is thus a need for a method to convert back and forth between Boolean masking and arithmetic masking. In the present paper, we show that the 'BooleanToArithmetic' algorithm proposed by T. Messerges is not sufficient to prevent Differential Power Analysis. In a similar way, the 'ArithmeticToBoolean' algorithm is not secure either.
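For concreteness, the two masking forms in question are Boolean masking (x' = x XOR r) and arithmetic masking (x' = (x - r) mod 2^k) for a random mask r. A sketch of both follows, plus a naive conversion that is functionally correct but handles the secret in the clear, which is exactly what a DPA-resistant conversion must avoid:

```python
MASK = 0xFFFFFFFF  # work on 32-bit words

def boolean_mask(x, r):
    """Boolean masking: x is represented as the pair (x ^ r, r)."""
    return x ^ r

def boolean_unmask(xm, r):
    return xm ^ r

def arithmetic_mask(x, r):
    """Arithmetic masking: x is represented as ((x - r) mod 2^32, r)."""
    return (x - r) & MASK

def arithmetic_unmask(xm, r):
    return (xm + r) & MASK

def insecure_bool_to_arith(xm, r):
    """Naive conversion: unmask, then remask arithmetically.  Correct as
    arithmetic, but the secret x appears in the clear, leaking under DPA.
    Building a conversion that never exposes x is the hard problem the
    paper shows Messerges' algorithms fail to solve."""
    x = xm ^ r                    # secret visible here -- the flaw
    return (x - r) & MASK
```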


Journal Article
TL;DR: This paper introduces the XTR public key system, based on a new method to represent elements of a subgroup of a multiplicative group of a finite field, which leads to substantial savings in both communication and computational overhead without compromising security.
Abstract: This paper introduces the XTR public key system. XTR is based on a new method to represent elements of a subgroup of a multiplicative group of a finite field. Application of XTR in cryptographic protocols leads to substantial savings both in communication and computational overhead without compromising security.

Journal Article
Toby Walsh1
TL;DR: A comprehensive study of mappings between constraint satisfaction problems (CSPs) and propositional satisfiability (SAT), comparing the impact of achieving arc-consistency on the CSP with unit propagation on the SAT problem.
Abstract: We perform a comprehensive study of mappings between constraint satisfaction problems (CSPs) and propositional satisfiability (SAT). We analyse four different mappings of SAT problems into CSPs, and two of CSPs into SAT problems. For each mapping, we compare the impact of achieving arc-consistency on the CSP with unit propagation on the SAT problem. We then extend these results to CSP algorithms that maintain (some level of) arc-consistency during search like FC and MAC, and to the Davis-Putnam procedure (which performs unit propagation at each search node). Because of differences in the branching structure of their search, a result showing the dominance of achieving arc-consistency on the CSP over unit propagation on the SAT problem does not necessarily translate to the dominance of MAC over the Davis-Putnam procedure. These results provide insight into the relationship between propositional satisfiability and constraint satisfaction.
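One standard mapping of CSPs into SAT, the direct encoding, can be sketched concretely (a minimal version for binary constraints; clause representation and names are our own): one propositional variable per (CSP variable, value) pair, at-least-one and at-most-one clauses per CSP variable, and a conflict clause per forbidden pair of assignments.

```python
def direct_encoding(domains, conflicts):
    """Direct encoding of a binary CSP into CNF.

    domains:   {var: [values]}
    conflicts: iterable of forbidden pairs ((v1, a), (v2, b))
    Returns clauses as lists of literals (var, value, polarity).
    """
    clauses = []
    for v, dom in domains.items():
        # at-least-one value per CSP variable
        clauses.append([(v, a, True) for a in dom])
        # at-most-one value per CSP variable
        for i, a in enumerate(dom):
            for b in dom[i + 1:]:
                clauses.append([(v, a, False), (v, b, False)])
    for (v1, a), (v2, b) in conflicts:
        # forbidden pair: not (v1=a and v2=b)
        clauses.append([(v1, a, False), (v2, b, False)])
    return clauses

def satisfied(clauses, assignment):
    """Check a CSP assignment {var: value} against the CNF encoding:
    a positive literal (v, a, True) holds iff assignment[v] == a."""
    return all(any((assignment[v] == a) == pol for v, a, pol in clause)
               for clause in clauses)
```

By construction, the CNF is satisfied exactly by the solutions of the original CSP, as the small 2-coloring example below illustrates.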

Book ChapterDOI
TL;DR: In this article, the authors derived an upper bound on the queuing delay as a function of priority traffic utilization and the maximum hop count of any flow, and the shaping parameters at the network ingress.
Abstract: A large number of products implementing aggregate buffering and scheduling mechanisms have been developed and deployed, and still more are under development. With the rapid increase in the demand for reliable end-to-end QoS solutions, it becomes increasingly important to understand the implications of aggregate scheduling on the resulting QoS capabilities. This paper studies the bounds on the worst case delay in a network implementing aggregate scheduling. We derive an upper bound on the queuing delay as a function of the priority traffic utilization, the maximum hop count of any flow, and the shaping parameters at the network ingress. Our bound explodes at a certain utilization level which is a function of the hop count. We show that for a general network configuration and larger utilizations, an upper bound on delay, if it exists, must be a function of the number of nodes and/or the number of flows in the network.

Journal Article
TL;DR: A new type of timing attack is introduced which enables the factorization of an RSA-modulus if the exponentiation with the secret exponent uses the Chinese Remainder Theorem and Montgomery's algorithm.
Abstract: We introduce a new type of timing attack which enables the factorization of an RSA-modulus if the exponentiation with the secret exponent uses the Chinese Remainder Theorem and Montgomery's algorithm. Its standard variant assumes that both exponentiations are carried out with a simple square and multiply algorithm. However, although its efficiency decreases, our attack can also be adapted to more advanced exponentiation algorithms. The previously known timing attacks do not work if the Chinese Remainder Theorem is used.
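The exponentiation being attacked can be sketched as follows (toy parameters, far too small for real use; Montgomery's multiplication routine, whose extra reductions leak the timing information, is not modeled here):

```python
# Toy RSA-CRT signing: compute m^d mod N via the two prime factors.
p, q = 89, 101
N = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))
q_inv = pow(q, -1, p)          # precomputed CRT coefficient

def crt_sign(m):
    """m^d mod N using the Chinese Remainder Theorem.

    Two half-size exponentiations plus Garner recombination -- roughly
    four times faster than a direct modular exponentiation, but it is
    these per-prime exponentiations that the timing attack latches onto.
    """
    sp = pow(m % p, d % (p - 1), p)
    sq = pow(m % q, d % (q - 1), q)
    h = (q_inv * (sp - sq)) % p    # Garner recombination
    return sq + h * q
```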

Journal ArticleDOI
TL;DR: Novel test criteria are defined for both static and dynamic testing of specification-level and instance-level collaboration diagrams; they allow formal integration tests to be based on high-level design notations, which can help lead to software that is significantly more reliable.
Abstract: Software testing can only be formalized and quantified when a solid basis for test generation can be defined. Tests are commonly generated from program source code, graphical models of software (such as control flow graphs), and specifications/requirements. UML collaboration diagrams represent a significant opportunity for testing because they precisely describe how the functions the software provides are connected in a form that can be easily manipulated by automated means. This paper presents novel test criteria that are based on UML collaboration diagrams. The most novel aspect of this is that tests can be generated automatically from the software design, rather than the code or the specifications. Criteria are defined for both static and dynamic testing of specification-level and instance-level collaboration diagrams. These criteria allow formal integration tests to be based on high-level design notations, which can help lead to software that is significantly more reliable.

Journal Article
TL;DR: Andes as discussed by the authors is an Intelligent Tutoring System for introductory college physics that encourages the student to construct new knowledge by providing hints that require them to derive most of the solution on their own, and facilitates transfer from the system by making the interface as much like a piece of paper as possible.
Abstract: Andes is an Intelligent Tutoring System for introductory college physics. The fundamental principles underlying the design of Andes are: (1) encourage the student to construct new knowledge by providing hints that require them to derive most of the solution on their own, (2) facilitate transfer from the system by making the interface as much like a piece of paper as possible, (3) give immediate feedback after each action to maximize the opportunities for learning and minimize the amount of time spent going down wrong paths, and (4) give the student flexibility in the order in which actions are performed, and allow them to skip steps when appropriate. This paper gives an overview of Andes, focusing on the overall architecture and the student's experience using the system.