Showing papers in "Lecture Notes in Computer Science in 2005"


Journal Article
TL;DR: An independence criterion based on the eigen-spectrum of covariance operators in reproducing kernel Hilbert spaces (RKHSs), consisting of an empirical estimate of the Hilbert-Schmidt norm of the cross-covariance operator, or HSIC, is proposed.
Abstract: We propose an independence criterion based on the eigen-spectrum of covariance operators in reproducing kernel Hilbert spaces (RKHSs), consisting of an empirical estimate of the Hilbert-Schmidt norm of the cross-covariance operator (we term this a Hilbert-Schmidt Independence Criterion, or HSIC). This approach has several advantages, compared with previous kernel-based independence criteria. First, the empirical estimate is simpler than any other kernel dependence test, and requires no user-defined regularisation. Second, there is a clearly defined population quantity which the empirical estimate approaches in the large sample limit, with exponential convergence guaranteed between the two: this ensures that independence tests based on HSIC do not suffer from slow learning rates. Finally, we show in the context of independent component analysis (ICA) that the performance of HSIC is competitive with that of previously published kernel-based criteria, and of other recently published ICA methods.
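To make the empirical estimate concrete, here is a minimal NumPy sketch of the biased HSIC statistic trace(KHLH)/(n-1)^2 with Gaussian kernels; the kernel choice, bandwidths and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    """Pairwise Gaussian (RBF) kernel matrix for the rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic_biased(X, Y, sigma_x=1.0, sigma_y=1.0):
    """Biased empirical HSIC estimate: trace(K H L H) / (n - 1)^2."""
    n = X.shape[0]
    K = gaussian_kernel(X, sigma_x)
    L = gaussian_kernel(Y, sigma_y)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Toy check: dependent pairs should score noticeably higher than independent ones.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
print(hsic_biased(X, X + 0.1 * rng.normal(size=(200, 1))))  # dependent
print(hsic_biased(X, rng.normal(size=(200, 1))))            # independent
```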

1,134 citations


Book ChapterDOI
TL;DR: Partial Least Squares (PLS) as mentioned in this paper is a wide class of methods for modeling relations between sets of observed variables by means of latent variables; it comprises regression and classification tasks as well as dimension reduction techniques and modeling tools.
Abstract: Partial Least Squares (PLS) is a wide class of methods for modeling relations between sets of observed variables by means of latent variables. It comprises regression and classification tasks as well as dimension reduction techniques and modeling tools. The underlying assumption of all PLS methods is that the observed data is generated by a system or process which is driven by a small number of latent (not directly observed or measured) variables. Projection of the observed data onto its latent structure by means of PLS was developed by Herman Wold and coworkers [48,49,52].
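As a quick illustration of PLS regression in practice, the following sketch uses scikit-learn's PLSRegression on toy data generated from two latent variables; the library and all parameter choices are our own assumptions, not part of the chapter.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Toy data: 10 observed predictors driven by 2 latent variables.
rng = np.random.default_rng(0)
T = rng.normal(size=(100, 2))                                   # latent scores
X = T @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(100, 10))
y = T @ np.array([1.5, -2.0]) + 0.1 * rng.normal(size=100)

pls = PLSRegression(n_components=2)    # extract 2 latent components
pls.fit(X, y)
print(pls.score(X, y))                 # R^2 of the fitted regression
print(pls.x_scores_.shape)             # projections of X onto its latent structure
```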

981 citations


Journal Article
TL;DR: Topics discussed include Cryptography in High Dimensional Tori, A Tool Kit for Finding Small Roots of Bivariate Polynomials over the Integers, and Reducing Complexity Assumptions for Statistically-Hiding Commitment.

965 citations


BookDOI
TL;DR: RSFDGrC 2013, the 14th International Conference on Rough Sets, Fuzzy Sets, Data Mining and Granular Computing, was held in Halifax, NS, Canada, October 11-14, 2013.
Abstract: 14th International Conference, RSFDGrC 2013, Halifax, NS, Canada, October 11-14, 2013. Proceedings - Part of the Lecture Notes in Computer Science book series

535 citations


Journal Article
TL;DR: In this article, the authors propose Semantic Overlay Networks (SONs) for peer-to-peer (P2P) systems, in which node connections are influenced by content; SONs significantly improve query performance while still allowing users to decide what content to put in their computers and to whom to connect.
Abstract: In a peer-to-peer (P2P) system, nodes typically connect to a small set of random nodes (their neighbors), and queries are propagated along these connections. Such query flooding tends to be very expensive. We propose that node connections be influenced by content, so that for example, nodes having many Jazz files will connect to other similar nodes. Thus, semantically related nodes form a Semantic Overlay Network (SON). Queries are routed to the appropriate SONs, increasing the chances that matching files will be found quickly, and reducing the search load on nodes that have unrelated content. We have evaluated SONs by using an actual snapshot of music-sharing clients. Our results show that SONs can significantly improve query performance while at the same time allowing users to decide what content to put in their computers and to whom to connect.
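The following toy sketch (all names hypothetical) illustrates the basic SON idea of registering nodes under content categories and routing a query only to the matching overlay rather than flooding every neighbor; the hard step of classifying a node's content is glossed over.

```python
from collections import defaultdict

# Toy Semantic Overlay Network registry: content category -> set of node ids.
sons = defaultdict(set)

def join(node_id, genres):
    """A node joins the SON of every genre it holds content for."""
    for g in genres:
        sons[g].add(node_id)

def route_query(genre):
    """Route a query only to nodes in the matching SON instead of flooding."""
    return sons.get(genre, set())

join("n1", ["jazz", "blues"])
join("n2", ["jazz"])
join("n3", ["rock"])
print(route_query("jazz"))   # {'n1', 'n2'} -- nodes with unrelated content are not bothered
```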

457 citations


BookDOI
TL;DR: This book provides a comprehensive survey of model-based testing, covering testing of finite state machines and labeled transition systems, model-based test case generation, tools and case studies, standardized test notation and execution architecture (TTCN-3 and the UML 2.0 Testing Profile), and topics beyond testing such as run-time verification and model checking.
Abstract: Contents:
I. Testing of Finite State Machines: 1. Homing and Synchronizing Sequences; 2. State Identification; 3. State Verification; 4. Conformance Testing.
II. Testing of Labeled Transition Systems: 5. Preorder Relations; 6. Test Generation Algorithms Based on Preorder Relations; 7. I/O-automata Based Testing; 8. Test Derivation from Timed Automata; 9. Testing Theory for Probabilistic Systems.
III. Model-Based Test Case Generation: 10. Methodological Issues in Model-Based Testing; 11. Evaluating Coverage Based Testing; 12. Technology of Test-Case Generation; 13. Real-Time and Hybrid Systems Testing.
IV. Tools and Case Studies: 14. Tools for Test Case Generation; 15. Case Studies.
V. Standardized Test Notation and Execution Architecture: 16. TTCN-3; 17. UML 2.0 Testing Profile.
VI. Beyond Testing: 18. Run-Time Verification; 19. Model Checking.
VII. Appendices: 20. Model-Based Testing - A Glossary; 21. Finite State Machines; 22. Labelled Transition Systems.

443 citations


Journal Article
TL;DR: In this article, a new identity-based signcryption (IBSC) scheme built upon bilinear maps is proposed that is more efficient than all previously proposed schemes; as a result of independent interest, a new provably secure identity-based signature (IBS) scheme that is faster than all known pairing-based IBS methods is also given.
Abstract: In this paper we describe a new identity-based signcryption (IBSC) scheme built upon bilinear maps. This scheme turns out to be more efficient than all others proposed so far. We prove its security in a formal model under recently studied computational assumptions and in the random oracle model. As a result of independent interest, we propose a new provably secure identity-based signature (IBS) scheme that is also faster than all known pairing-based IBS methods.

428 citations


Journal Article
TL;DR: In this paper, the 8-bit Galois field inversion of the AES S-box is computed using subfields of 4 bits and of 2 bits; many choices of basis for each subfield are examined, not only polynomial bases but also normal bases, and the isomorphism bit matrices are fully optimized, improving on the greedy algorithm.
Abstract: A key step in the Advanced Encryption Standard (AES) algorithm is the S-box. Many implementations of AES have been proposed, for various goals, that effect the S-box in various ways. In particular, the most compact implementations to date of Satoh et al. [14] and Mentens et al. [6] perform the 8-bit Galois field inversion of the S-box using subfields of 4 bits and of 2 bits. Our work refines this approach to achieve a more compact S-box. We examined many choices of basis for each subfield, not only polynomial bases as in previous work, but also normal bases, giving 432 cases. The isomorphism bit matrices are fully optimized, improving on the greedy algorithm. Introducing some NOR gates gives further savings. The best case improves on [14] by 20%. This decreased size could help for area-limited hardware implementations, e.g., smart cards, and to allow more copies of the S-box for parallelism and/or pipelining of AES.
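For orientation, the sketch below computes the standard AES S-box as GF(2^8) inversion followed by the affine map; it is deliberately the plain textbook construction, not the compact subfield/tower-field circuit optimized in the paper, and serves only to make the function being implemented concrete.

```python
# Standard AES S-box: multiplicative inverse in GF(2^8) with reduction polynomial
# x^8 + x^4 + x^3 + x + 1 (0x11B), followed by the AES affine transformation.

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with the AES reduction polynomial."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B          # reduce modulo x^8 + x^4 + x^3 + x + 1
        b >>= 1
    return p

def gf_inv(a):
    """Brute-force inverse in GF(2^8); 0 maps to 0 by convention."""
    if a == 0:
        return 0
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

def sbox(x):
    b = gf_inv(x)
    rot = lambda v, r: ((v << r) | (v >> (8 - r))) & 0xFF
    return b ^ rot(b, 1) ^ rot(b, 2) ^ rot(b, 3) ^ rot(b, 4) ^ 0x63

assert sbox(0x00) == 0x63 and sbox(0x01) == 0x7C  # two known S-box entries
```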

406 citations


BookDOI
TL;DR: Proceedings of the 8th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2005).
Abstract: Proceedings of the 8th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2005), published in the Lecture Notes in Computer Science book series.

404 citations


Book ChapterDOI
TL;DR: It is shown that emergence and self-organisation each emphasise different properties of a system and can exist in isolation; their combination is considered a promising approach in complex multi-agent systems.
Abstract: A clear terminology is essential in every research discipline. In the context of ESOA, a lot of confusion exists about the meaning of the terms emergence and self-organisation. One of the sources of the confusion comes from the fact that a combination of both phenomena often occurs in dynamical systems. In this paper a historic overview of the use of each concept as well as a working definition, that is compatible with the historic and current meaning of the concepts, is given. Each definition is explained by supporting it with important characteristics found in the literature. We show that emergence and self-organisation each emphasise different properties of a system. Both phenomena can exist in isolation. The paper also outlines some examples of such systems and considers the combination of emergence and self-organisation as a promising approach in complex multi-agent systems.

402 citations


Book ChapterDOI
TL;DR: This paper explores the realization of a previously proposed cryptographic construct, called fuzzy vault, with the fingerprint minutiae data, which aims to secure critical data with the fingerprints in a way that only the authorized user can access the secret by providing the valid fingerprint.
Abstract: Biometrics-based user authentication has several advantages over traditional password-based systems for standalone authentication applications, such as secure cellular phone access. This is also true for new authentication architectures known as crypto-biometric systems, where cryptography and biometrics are merged to achieve high security and user convenience at the same time. In this paper, we explore the realization of a previously proposed cryptographic construct, called fuzzy vault, with the fingerprint minutiae data. This construct aims to secure critical data (e.g., secret encryption key) with the fingerprint data in a way that only the authorized user can access the secret by providing the valid fingerprint. The results show that 128-bit AES keys can be secured with fingerprint minutiae data using the proposed system.
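A toy sketch of the vault-locking step (in the spirit of the Juels-Sudan fuzzy vault) is shown below: the secret is encoded as polynomial coefficients, evaluated at quantized minutiae values, and hidden among chaff points. The field size, point counts and encoding are illustrative assumptions, and the error-correcting unlocking step is omitted.

```python
import random

P = 2**16 + 1        # small prime field for this toy example (not from the paper)

def poly_eval(coeffs, x):
    """Evaluate the secret polynomial (coeffs[0] + coeffs[1]*x + ...) modulo P."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

def lock_vault(secret_coeffs, minutiae, num_chaff=200):
    """Fuzzy vault locking: genuine points lie on the secret polynomial,
    chaff points do not. Unlocking (error-correcting decoding) is omitted."""
    genuine = {(m, poly_eval(secret_coeffs, m)) for m in minutiae}
    used_x = set(minutiae)
    vault = set(genuine)
    while len(vault) < len(genuine) + num_chaff:
        x, y = random.randrange(P), random.randrange(P)
        if x not in used_x and y != poly_eval(secret_coeffs, x):
            vault.add((x, y))
            used_x.add(x)
    points = list(vault)
    random.shuffle(points)   # hide which points are genuine
    return points

# 128-bit secret split into eight ~16-bit coefficients (illustrative encoding).
secret = [random.randrange(P) for _ in range(8)]
minutiae = random.sample(range(1, P), 24)     # quantized minutiae as field elements
print(len(lock_vault(secret, minutiae)))      # genuine + chaff points
```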

Book ChapterDOI
TL;DR: The feasibility of template-protecting biometric authentication systems is shown; the scheme achieves an EER of approximately 4.2% with a secret length of 40 bits in experiments.
Abstract: In this paper we show the feasibility of template protecting biometric authentication systems. In particular, we apply template protection schemes to fingerprint data. Therefore we first make a fixed length representation of the fingerprint data by applying Gabor filtering. Next we introduce the reliable components scheme. In order to make a binary representation of the fingerprint images, we extract and then quantize during the enrollment phase the reliable components with the highest signal to noise ratio. Finally, error correction coding is applied to the binary representation. It is shown that the scheme achieves an EER of approximately 4.2% with a secret length of 40 bits in experiments.
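The sketch below illustrates the reliable-components idea in a simplified form: from several enrollment feature vectors, keep the components with the highest SNR-style reliability score and binarize them against a population mean. The scoring rule, dimensions and the omitted error-correction step are our own simplifications, not the paper's exact scheme.

```python
import numpy as np

def reliable_components(enroll, population_mean, k=40):
    """Pick the k components whose enrollment mean deviates most from the
    population mean relative to their within-user standard deviation (a
    simple SNR-style reliability score), then binarize by thresholding."""
    mu = enroll.mean(axis=0)                   # per-component user mean
    sigma = enroll.std(axis=0) + 1e-9          # per-component user spread
    snr = np.abs(mu - population_mean) / sigma
    idx = np.argsort(snr)[-k:]                 # k most reliable components
    bits = (mu[idx] > population_mean[idx]).astype(int)
    return idx, bits                           # error-correction coding omitted

rng = np.random.default_rng(0)
enroll = rng.normal(loc=0.3, scale=0.1, size=(6, 256))   # 6 enrollment vectors
pop_mean = np.zeros(256)
idx, bits = reliable_components(enroll, pop_mean, k=40)
print(bits)                                    # 40-bit binary template
```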

Journal Article
TL;DR: In this paper, a side-channel analysis resistant logic style called MDPL is proposed that avoids implementation constraints that are costly to satisfy, such as balancing the capacitive load of complementary wires in an integrated circuit.
Abstract: During the last years, several logic styles that counteract side-channel attacks have been proposed. They all have in common that their level of resistance heavily depends on implementation constraints that are costly to satisfy. For example, the capacitive load of complementary wires in an integrated circuit may need to be balanced. This article describes a novel side-channel analysis resistant logic style called MDPL that completely avoids such constraints. It is a masked and dual-rail pre-charge logic style and can be implemented using common CMOS standard cell libraries. This makes MDPL perfectly suitable for semi-custom designs.

Book ChapterDOI
TL;DR: It is shown experimentally that the machine expert based on local information outperforms the system based on global analysis when enough training data is available and it is found that global analysis is more appropriate in the case of small training set size.
Abstract: An on-line signature verification system exploiting both local and global information through decision-level fusion is presented. Global information is extracted with a feature-based representation and recognized by using Parzen Windows Classifiers. Local information is extracted as time functions of various dynamic properties and recognized by using Hidden Markov Models. Experimental results are given on the large MCYT signature database (330 signers, 16500 signatures) for random and skilled forgeries. Feature selection experiments based on feature ranking are carried out. It is shown experimentally that the machine expert based on local information outperforms the system based on global analysis when enough training data is available. Conversely, it is found that global analysis is more appropriate in the case of small training set size. The two proposed systems are also shown to give complementary recognition information which is successfully exploited using decision-level score fusion.
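As a simplified illustration of score-level fusion of the two experts, the sketch below min-max normalizes a local (HMM) score and a global (Parzen) score and combines them with a weighted sum; the normalization bounds, weight and threshold are made-up values, not those of the paper.

```python
def minmax(score, lo, hi):
    """Map a raw matcher score into [0, 1] using bounds seen in training."""
    return (score - lo) / (hi - lo)

def fuse(local_score, global_score, w_local=0.6, threshold=0.5):
    """Weighted-sum fusion of the two normalized expert scores; the weight
    could favour the local (HMM) expert when enough training data exists."""
    s_local = minmax(local_score, lo=-120.0, hi=-20.0)   # e.g. HMM log-likelihood range
    s_global = minmax(global_score, lo=0.0, hi=1.0)      # e.g. Parzen-based posterior
    fused = w_local * s_local + (1.0 - w_local) * s_global
    return fused, fused >= threshold                     # (score, accept?)

print(fuse(local_score=-35.0, global_score=0.8))
```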

Journal Article
TL;DR: In this paper, the authors propose a definition for swarm robotics, put forward a set of criteria that can be used to distinguish swarm robotics research from other multi-robot studies, and provide a review of studies that can act as sources of inspiration.
Abstract: Swarm robotics is a novel approach to the coordination of large numbers of relatively simple robots which takes its inspiration from social insects. This paper proposes a definition to this newly emerging approach by 1) describing the desirable properties of swarm robotic systems, as observed in the system-level functioning of social insects, 2) proposing a definition for the term swarm robotics, and putting forward a set of criteria that can be used to distinguish swarm robotics research from other multi-robot studies, 3) providing a review of some studies which can act as sources of inspiration, and a list of promising domains for the utilization of swarm robotic systems.

BookDOI
TL;DR: A new improved distinguishing attack on the stream cipher SNOW 2.0 is given, based on efficiently computing the distributions of XORs and modular additions of independent $n$-bit random variables, e.g. the approximation obtained when additions modulo $2^n$ are replaced by bitwise addition.
Abstract: Let $X_1, X_2, \ldots, X_k$ be independent $n$-bit random variables. If they have arbitrary distributions, we show how to compute distributions like $\Pr\{X_1 \oplus X_2 \oplus \cdots \oplus X_k\}$ and $\Pr\{X_1 \boxplus X_2 \boxplus \cdots \boxplus X_k\}$ in complexity $O(k n 2^n)$. Furthermore, if $X_1, X_2, \ldots, X_k$ are uniformly distributed, we demonstrate a large class of functions $F(X_1, X_2, \ldots, X_k)$ for which we can compute their distributions efficiently. These results have applications in linear cryptanalysis of stream ciphers as well as block ciphers. A typical example is the approximation obtained when additions modulo $2^n$ are replaced by bitwise addition. The efficiency of such an approach is given by the bias of a distribution of the above kind. As an example, we give a new improved distinguishing attack on the stream cipher SNOW 2.0.
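To illustrate the kind of computation involved, the sketch below computes the distribution of $X_1 \oplus \cdots \oplus X_k$ via the fast Walsh-Hadamard transform (XOR-convolution becomes pointwise multiplication) and the distribution of $X_1 \boxplus \cdots \boxplus X_k$ by cyclic convolution; this is a generic illustration, not the paper's specific algorithms.

```python
import numpy as np

def fwht(a):
    """Unnormalized fast Walsh-Hadamard transform of a length-2^n vector."""
    a = np.asarray(a, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x = a[i:i + h].copy()
            y = a[i + h:i + 2 * h].copy()
            a[i:i + h] = x + y
            a[i + h:i + 2 * h] = x - y
        h *= 2
    return a

def xor_sum_distribution(dists):
    """Distribution of X1 xor ... xor Xk: pointwise product of WHT spectra."""
    n = len(dists[0])
    spec = np.ones(n)
    for p in dists:
        spec *= fwht(p)
    return fwht(spec) / n          # inverse transform = forward transform / n

def modadd_sum_distribution(dists):
    """Distribution of X1 + ... + Xk mod N via repeated cyclic convolution."""
    out = np.asarray(dists[0], dtype=float)
    for p in dists[1:]:
        n = len(out)
        out = np.array([sum(out[x] * p[(z - x) % n] for x in range(n))
                        for z in range(n)])
    return out

# Two uniform 3-bit variables: both the XOR and the modular sum stay uniform.
u = [1.0 / 8] * 8
print(xor_sum_distribution([u, u]))
print(modadd_sum_distribution([u, u]))
```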

Book ChapterDOI
TL;DR: This work gives a surprisingly simple enhancement of a well known algorithm that performs best, and makes triangle listing and counting in huge networks feasible.
Abstract: In the past, the fundamental graph problem of triangle counting and listing has been studied intensively from a theoretical point of view. Recently, triangle counting has also become a widely used tool in network analysis. Due to the very large size of networks like the Internet, WWW or social networks, the efficiency of algorithms for triangle counting and listing is an important issue. The main intention of this work is to evaluate the practicability of triangle counting and listing in very large graphs with various degree distributions. We give a surprisingly simple enhancement of a well known algorithm that performs best, and makes triangle listing and counting in huge networks feasible. This paper is a condensed presentation of [SW05].
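For readers unfamiliar with this family of algorithms, here is a compact triangle-listing routine that orients edges by a degree-based ordering and intersects forward-adjacency sets, so that every triangle is reported exactly once; it is one standard member of the family studied, not necessarily the enhanced variant evaluated in the paper.

```python
from collections import defaultdict

def list_triangles(edges):
    """List each triangle once: orient every edge from the lower-ranked to the
    higher-ranked endpoint (rank = degree, ties by id) and intersect the
    forward-adjacency sets of the two endpoints of each edge."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    rank = lambda v: (deg[v], v)
    fwd = defaultdict(set)
    for u, v in edges:
        a, b = (u, v) if rank(u) < rank(v) else (v, u)
        fwd[a].add(b)
    triangles = []
    for u, v in edges:
        a, b = (u, v) if rank(u) < rank(v) else (v, u)
        for w in fwd[a] & fwd[b]:        # common higher-ranked neighbours
            triangles.append((a, b, w))
    return triangles

edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
print(list_triangles(edges))   # [(1, 2, 3)]
```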

Journal Article
TL;DR: A Hierarchical Identity Based Encryption (HIBE) system is presented where the ciphertext consists of just three group elements and decryption requires only two bilinear map computations, regardless of the hierarchy depth, while encryption is as efficient as in other HIBE systems.
Abstract: We present a Hierarchical Identity Based Encryption (HIBE) system where the ciphertext consists of just three group elements and decryption requires only two bilinear map computations, regardless of the hierarchy depth. Encryption is as efficient as in other HIBE systems. We prove that the scheme is selective-ID secure in the standard model and fully secure in the random oracle model. Our system has a number of applications: it gives very efficient forward secure public key and identity based cryptosystems (with short ciphertexts), it converts the NNL broadcast encryption system into an efficient public key broadcast system, and it provides an efficient mechanism for encrypting to the future. The system also supports limited delegation where users can be given restricted private keys that only allow delegation to bounded depth. The HIBE system can be modified to support sublinear size private keys at the cost of some ciphertext expansion.

Book ChapterDOI
TL;DR: Cryptographic protocols based on the Weil and Tate pairings on elliptic curves, such as Boneh and Franklin's identity-based encryption scheme [8], have attracted much attention in recent years; the authors examine the implications of heightened security needs for pairing-based cryptosystems.
Abstract: In recent years cryptographic protocols based on the Weil and Tate pairings on elliptic curves have attracted much attention. A notable success in this area was the elegant solution by Boneh and Franklin [8] of the problem of efficient identity-based encryption. At the same time, the security standards for public key cryptosystems are expected to increase, so that in the future they will be capable of providing security equivalent to 128-, 192-, or 256-bit AES keys. In this paper we examine the implications of heightened security needs for pairing-based cryptosystems. We first describe three different reasons why high-security users might have concerns about the long-term viability of these systems. However, in our view none of the risks inherent in pairing-based systems are sufficiently serious to warrant pulling them from the shelves. We next discuss two families of elliptic curves E for use in pairing-based cryptosystems. The first has the property that the pairing takes values in the prime field $\mathbb{F}_p$ over which the curve is defined; the second family consists of supersingular curves with embedding degree k = 2. Finally, we examine the efficiency of the Weil pairing as opposed to the Tate pairing and compare a range of choices of embedding degree k, including k = 1 and k = 24.

Book ChapterDOI
TL;DR: Two quality indices for fingerprint images are developed, and by applying a quality-based weighting scheme in the matching algorithm the overall matching performance can be improved; a decrease of 1.94% in EER is observed on the FVC2002 DB3 database.
Abstract: The performance of an automatic fingerprint authentication system relies heavily on the quality of the captured fingerprint images. In this paper, two new quality indices for fingerprint images are developed. The first index measures the energy concentration in the frequency domain as a global feature. The second index measures the spatial coherence in local regions. We present a novel framework for evaluating and comparing quality indices in terms of their capability of predicting the system performance at three different stages, namely, image enhancement, feature extraction and matching. Experimental results on the IBM-HURSLEY and FVC2002 DB3 databases demonstrate that the global index is better than the local index in the enhancement stage (correlation of 0.70 vs. 0.50) and comparative in the feature extraction stage (correlation of 0.70 vs. 0.71). Both quality indices are effective in predicting the matching performance, and by applying a quality-based weighting scheme in the matching algorithm, the overall matching performance can be improved; a decrease of 1.94% in EER is observed on the FVC2002 DB3 database.
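The sketch below illustrates one common way to measure local spatial coherence, via the eigenvalues of the block gradient structure tensor; it is meant only to make the notion of a local quality index concrete, and the paper's precise definitions of its two indices may differ.

```python
import numpy as np

def block_coherence(block):
    """Spatial coherence of a block from the gradient structure tensor:
    (l1 - l2) / (l1 + l2), close to 1 for a clear ridge orientation."""
    gy, gx = np.gradient(block.astype(float))
    j11, j22, j12 = np.sum(gx * gx), np.sum(gy * gy), np.sum(gx * gy)
    trace = j11 + j22
    if trace == 0:
        return 0.0
    root = np.sqrt((j11 - j22) ** 2 + 4 * j12 ** 2)
    l1, l2 = (trace + root) / 2, (trace - root) / 2
    return (l1 - l2) / (l1 + l2)

def local_quality_index(img, block=16):
    """Average coherence over non-overlapping blocks as a crude local index."""
    h, w = (img.shape[0] // block) * block, (img.shape[1] // block) * block
    scores = [block_coherence(img[i:i + block, j:j + block])
              for i in range(0, h, block) for j in range(0, w, block)]
    return float(np.mean(scores))

rng = np.random.default_rng(0)
noise = rng.normal(size=(128, 128))                                  # structureless image
ridges = np.sin(np.arange(128) / 3.0)[None, :] * np.ones((128, 1))   # oriented pattern
print(local_quality_index(noise), local_quality_index(ridges))       # low vs. high coherence
```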

Book ChapterDOI
TL;DR: T-MAN as discussed by the authors is a generic protocol for constructing and maintaining a large class of topologies, where a topology is defined with the help of a ranking function, and nodes participating in the protocol can use this ranking function to order any set of other nodes according to preference for choosing them as a neighbor.
Abstract: Overlay topology plays an important role in P2P systems. Topology serves as a basis for achieving functions such as routing, searching and information dissemination, and it has a major impact on their efficiency, cost and robustness. Furthermore, the solution to problems such as sorting and clustering of nodes can also be interpreted as a topology. In this paper we propose a generic protocol, T-MAN, for constructing and maintaining a large class of topologies. In the proposed framework, a topology is defined with the help of a ranking function. The nodes participating in the protocol can use this ranking function to order any set of other nodes according to preference for choosing them as a neighbor. This simple abstraction makes it possible to control the self-organization process of topologies in a straightforward, intuitive and flexible manner. At the same time, the T-MAN protocol involves only local communication to increase the quality of the current set of neighbors of each node. We show that this bottom-up approach results in fast convergence and high robustness in dynamic environments. The protocol can be applied as a standalone solution as well as a component for recovery or bootstrapping of other protocols.
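A toy sketch of one gossip round in a T-MAN-style protocol is given below, with distance on a ring as the ranking function; the view size, ring size and peer-selection rule are illustrative choices rather than the paper's exact parameters.

```python
import random

def ring_distance(a, b, n=64):
    """Ranking function example: distance between node ids on a ring of size n."""
    d = abs(a - b)
    return min(d, n - d)

def tman_round(views, view_size=4, n=64):
    """One gossip round: every node exchanges views with a peer from its own
    view and both keep the view_size entries ranked closest to themselves."""
    for node in list(views):
        peer = random.choice(list(views[node]))
        merged_node = views[node] | views[peer] | {peer}
        merged_peer = views[peer] | views[node] | {node}
        rank = lambda center: (lambda x: ring_distance(center, x, n))
        views[node] = set(sorted(merged_node - {node}, key=rank(node))[:view_size])
        views[peer] = set(sorted(merged_peer - {peer}, key=rank(peer))[:view_size])
    return views

# Random initial views; after a few rounds each view converges to ring neighbours.
nodes = list(range(64))
views = {v: set(random.sample([u for u in nodes if u != v], 4)) for v in nodes}
for _ in range(10):
    views = tman_round(views)
print(sorted(views[0]))   # ids close to 0 on the ring, e.g. 1, 2, 62, 63
```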

Journal Article
TL;DR: A technique is presented that selects, from a large set of test inputs, a small subset likely to reveal faults in the software under test; it can also be seen as an error-detection technique and is implemented in the Eclat tool.
Abstract: This paper describes a technique that selects, from a large set of test inputs, a small subset likely to reveal faults in the software under test. The technique takes a program or software component, plus a set of correct executions (say, from observations of the software running properly, or from an existing test suite that a user wishes to enhance). The technique first infers an operational model of the software's operation. Then, inputs whose operational pattern of execution differs from the model in specific ways are suggestive of faults. These inputs are further reduced by selecting only one input per operational pattern. The result is a small portion of the original inputs, deemed by the technique as most likely to reveal faults. Thus, the technique can also be seen as an error-detection technique. The paper describes two additional techniques that complement test input selection. One is a technique for automatically producing an oracle (a set of assertions) for a test input from the operational model, thus transforming the test input into a test case. The other is a classification-guided test input generation technique that also makes use of operational models and patterns. When generating inputs, it filters out code sequences that are unlikely to contribute to legal inputs, improving the efficiency of its search for fault-revealing inputs. We have implemented these techniques in the Eclat tool, which generates unit tests for Java classes. Eclat's input is a set of classes to test and an example program execution (say, a passing test suite). Eclat's output is a set of JUnit test cases, each containing a potentially fault-revealing input and a set of assertions at least one of which fails. In our experiments, Eclat successfully generated inputs that exposed fault-revealing behavior; we have used Eclat to reveal real errors in programs. The inputs it selects as fault-revealing are an order of magnitude more likely to reveal a fault than all generated inputs.
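The sketch below illustrates only the reduction step, selecting one representative input per distinct pattern of violated model properties; the operational-model inference, oracle generation and the rest of the Eclat pipeline are not shown, and the helper names are hypothetical.

```python
def select_candidates(inputs, violated_properties):
    """Keep one representative input per distinct violation pattern.
    `violated_properties(x)` returns the set of operational-model properties
    that input x violates; inputs violating nothing are discarded."""
    chosen = {}
    for x in inputs:
        pattern = frozenset(violated_properties(x))
        if pattern and pattern not in chosen:
            chosen[pattern] = x
    return list(chosen.values())

# Toy usage: the "model" says the argument should be non-null and non-negative.
def violations(x):
    props = set()
    if x is None:
        props.add("arg != null")
    elif x < 0:
        props.add("arg >= 0")
    return props

print(select_candidates([3, -1, -7, None, 5], violations))   # [-1, None]
```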

Journal Article
TL;DR: A comprehensive overview of the state-of-the-art in web personalization can be found in this article, where the authors discuss the various sources of data available to personalization systems, the modelling approaches employed and the current approaches to evaluating these systems.
Abstract: In this chapter we provide a comprehensive overview of the topic of Intelligent Techniques for Web Personalization. Web personalization is viewed as an application of data mining and machine learning techniques to build models of user behaviour that can be applied to the task of predicting user needs and adapting future interactions, with the ultimate goal of improved user satisfaction. This chapter surveys the state-of-the-art in Web personalization. We start by providing a description of the personalization process and a classification of the current approaches to Web personalization. We discuss the various sources of data available to personalization systems, the modelling approaches employed and the current approaches to evaluating these systems. A number of challenges faced by researchers developing these systems are described, as are solutions to these challenges proposed in the literature. The chapter concludes with a discussion of the open challenges that must be addressed by the research community if this technology is to make a positive impact on user satisfaction with the Web.

Book ChapterDOI
TL;DR: A new approach to the prediction of probable biological units from protein structures obtained by means of protein crystallography is described; it employs a graph-theoretical technique to find all possible assemblies in the crystal.
Abstract: The paper describes a new approach to the prediction of probable biological units from protein structures obtained by means of protein crystallography. The method first employs a graph-theoretical technique to find all possible assemblies in the crystal. In the second step, the assemblies found are analysed for chemical stability, and only stable oligomers are retained as potential solutions. We also discuss theoretical models for the assessment of protein affinity and entropy loss on complex formation, used in the stability analysis.

Journal Article
TL;DR: A novel automatic specification mining algorithm is presented that uses information about error handling to learn temporal safety rules, based on the observation that programs often make mistakes along exceptional control-flow paths even when they behave correctly on normal execution paths.
Abstract: Specifications are necessary in order to find software bugs using program verification tools. This paper presents a novel automatic specification mining algorithm that uses information about error handling to learn temporal safety rules. Our algorithm is based on the observation that programs often make mistakes along exceptional control-flow paths, even when they behave correctly on normal execution paths. We show that this focus improves the effectiveness of the miner for discovering specifications beneficial for bug finding. We present quantitative results comparing our technique to four existing miners. We highlight assumptions made by various miners that are not always borne out in practice. Additionally, we apply our algorithm to existing Java programs and analyze its ability to learn specifications that find bugs in those programs. In our experiments, we find filtering candidate specifications to be more important than ranking them. We find 430 bugs in 1 million lines of code. Notably, we find 250 more bugs using per-program specifications learned by our algorithm than with generic specifications that apply to all programs.

Journal Article
TL;DR: In this article, it is shown that Shannon entropy can be generalized to smooth Renyi entropies, which are tight bounds for data compression (information reconciliation) and randomness extraction (privacy amplification) for general distributions, not only in the case of independent repetitions.
Abstract: Shannon entropy is a useful and important measure in information processing, for instance, data compression or randomness extraction, under the assumption (which can typically safely be made in communication theory) that a certain random experiment is independently repeated many times. In cryptography, however, where a system's working has to be proven with respect to a malicious adversary, this assumption usually translates to a restriction on the latter's knowledge or behavior and is generally not satisfied. An example is quantum key agreement, where the adversary can attack each particle sent through the quantum channel differently or even carry out coherent attacks, combining a number of particles together. In information-theoretic key agreement, the central functionalities of information reconciliation and privacy amplification have, therefore, been extensively studied in the scenario of general distributions: Partial solutions have been given, but the obtained bounds are arbitrarily far from tight, and a full analysis appeared to be rather involved to do. We show that, actually, the general case is not more difficult than the scenario of independent repetitions; in fact, given our new point of view, it is even simpler. When one analyzes the possible efficiency of data compression and randomness extraction in the case of independent repetitions, then Shannon entropy H is the answer. We show that H can, in these two contexts, be generalized to two very simple quantities, $H_0^\varepsilon$ and $H_\infty^\varepsilon$, called smooth Renyi entropies, which are tight bounds for data compression (hence, information reconciliation) and randomness extraction (privacy amplification), respectively. It is shown that the two new quantities, and related notions, do not only extend Shannon entropy in the described contexts, but they also share central properties of the latter such as the chain rule as well as sub-additivity and monotonicity.
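For readers unfamiliar with these quantities, one common formulation of the smooth Renyi entropies is given below; the notation and exact form are an assumption on our part, following the standard smooth-entropy literature rather than this particular paper.

$$
H_\infty^{\varepsilon}(P) \;=\; \max_{Q\,:\,\delta(P,Q)\le\varepsilon}\Bigl(-\log \max_x Q(x)\Bigr),
\qquad
H_0^{\varepsilon}(P) \;=\; \min_{Q\,:\,\delta(P,Q)\le\varepsilon}\log\bigl|\{x : Q(x) > 0\}\bigr|,
$$

where $\delta(\cdot,\cdot)$ denotes statistical distance. For $n$ independent repetitions (and any fixed $\varepsilon > 0$), both $\tfrac{1}{n}H_\infty^{\varepsilon}(P^{n})$ and $\tfrac{1}{n}H_0^{\varepsilon}(P^{n})$ approach the Shannon entropy $H(P)$, consistent with the classical results recovered in the abstract.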

Book ChapterDOI
Ueli Maurer
TL;DR: An abstract model of computation is proposed which makes it possible to capture reasonable restrictions on the power of algorithms, and it is proved that computing discrete logarithms is generically hard even if an oracle for the decisional Diffie-Hellman problem and/or other low degree relations were available.
Abstract: Computational security proofs in cryptography, without unproven intractability assumptions, exist today only if one restricts the computational model. For example, one can prove a lower bound on the complexity of computing discrete logarithms in a cyclic group if one considers only generic algorithms which cannot exploit the properties of the representation of the group elements. We propose an abstract model of computation which makes it possible to capture such reasonable restrictions on the power of algorithms. The algorithm interacts with a black-box with hidden internal state variables, which allows it to perform a certain set of operations on the internal state variables, and which provides output only by allowing it to check whether some state variables satisfy certain relations. For example, generic algorithms correspond to the special case where only the equality relation, and possibly also an abstract total order relation, can be tested. We consider several instantiations of the model and different types of computational problems, and prove a few known and new lower bounds for computational problems of interest in cryptography, for example that computing discrete logarithms is generically hard even if an oracle for the decisional Diffie-Hellman problem and/or other low degree relations were available.

Journal Article
TL;DR: It is shown that under the notion of a shape that is independent of scale this is indeed so: in the Tile Assembly Model, the minimal number of distinct tile types necessary to self-assemble an arbitrarily scaled shape can be bounded both above and below in terms of the shape's Kolmogorov complexity.
Abstract: The connection between self-assembly and computation suggests that a shape can be considered the output of a self-assembly program, a set of tiles that fit together to create a shape. It seems plausible that the size of the smallest self-assembly program that builds a shape and the shape's descriptional (Kolmogorov) complexity should be related. We show that under the notion of a shape that is independent of scale this is indeed so: in the Tile Assembly Model, the minimal number of distinct tile types necessary to self-assemble an arbitrarily scaled shape can be bounded both above and below in terms of the shape's Kolmogorov complexity. As part of the proof of the main result, we sketch a general method for converting a program outputting a shape as a list of locations into a set of tile types that self-assembles into a scaled up version of that shape. Our result implies, somewhat counter-intuitively, that self-assembly of a scaled up version of a shape often requires fewer tile types, and suggests that the independence of scale in self-assembly theory plays the same crucial role as the independence of running time in the theory of computability.

Book ChapterDOI
TL;DR: It is shown how concept map-based knowledge models can be used to organize repositories of information in a way that makes them easily browsable, and how concept maps can improve searching algorithms for the Web.
Abstract: Information visualization has been a research topic for many years, leading to a mature field where guidelines and practices are well established. Knowledge visualization, in contrast, is a relatively new area of research that has received more attention recently due to the interest from the business community in Knowledge Management. In this paper we present the CmapTools software as an example of how concept maps, a knowledge visualization tool, can be combined with recent technology to provide integration between knowledge and information visualizations. We show how concept map-based knowledge models can be used to organize repositories of information in a way that makes them easily browsable, and how concept maps can improve searching algorithms for the Web. We also report on how information can be used to complement knowledge models and, based on the searching algorithms, improve the process of constructing concept maps.

Journal Article
TL;DR: The Mobile Resource Guarantees framework is presented: a system for ensuring that downloaded programs are free from run-time violations of resource bounds, and a novel programming language with resource constraints encoded in function types is used to streamline the generation of proofs of resource usage.
Abstract: We present the Mobile Resource Guarantees framework: a system for ensuring that downloaded programs are free from run-time violations of resource bounds. Certificates are attached to code in the form of efficiently checkable proofs of resource bounds; in contrast to cryptographic certificates of code origin, these are independent of trust networks. A novel programming language with resource constraints encoded in function types is used to streamline the generation of proofs of resource usage.