Showing papers in "Lecture Notes in Computer Science in 2006"
Journal Article•
TL;DR: The study is extended to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f, which is the amount that any single argument to f can change its output.
Abstract: We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = Σ_i g(x_i), where x_i denotes the i-th row of the database and g maps database rows to [0,1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive.
3,629 citations
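A minimal sketch of the calibration idea described above, assuming the Laplace-noise formulation used in this line of work and a hypothetical counting query (sensitivity 1); it is illustrative only, not the paper's construction:

```python
import numpy as np

def sensitivity_count_query():
    # A counting query changes by at most 1 when a single row changes,
    # so its sensitivity is 1 (an assumption used for this illustration).
    return 1.0

def noisy_answer(true_answer, sensitivity, epsilon, rng=np.random.default_rng()):
    # Perturb the true answer with Laplace noise whose scale is
    # calibrated to sensitivity / epsilon.
    scale = sensitivity / epsilon
    return true_answer + rng.laplace(loc=0.0, scale=scale)

# Hypothetical database: each row is 0/1, and f is the sum over rows.
db = np.array([0, 1, 1, 0, 1])
print(noisy_answer(db.sum(), sensitivity_count_query(), epsilon=0.5))
```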
Journal Article•
TL;DR: A machine-learned, high-speed feature detector is derived, and corner detectors are compared on 3D scenes using the criterion that the same scene viewed from two different positions should yield features corresponding to the same real-world 3D locations.
Abstract: Where feature points are used in real-time frame-rate applications, a high-speed feature detector is necessary. Feature detectors such as SIFT (DoG), Harris and SUSAN are good methods which yield high quality features, however they are too computationally intensive for use in real-time applications of any complexity. Here we show that machine learning can be used to derive a feature detector which can fully process live PAL video using less than 7% of the available processing time. By comparison, neither the Harris detector (120%) nor the detection stage of SIFT (300%) can operate at full frame rate. Clearly a high-speed detector is of limited use if the features produced are unsuitable for downstream processing. In particular, the same scene viewed from two different positions should yield features which correspond to the same real-world 3D locations[1]. Hence the second contribution of this paper is a comparison of corner detectors based on this criterion applied to 3D scenes. This comparison supports a number of claims made elsewhere concerning existing corner detectors. Further, contrary to our initial expectations, we show that despite being principally constructed for speed, our detector significantly outperforms existing feature detectors according to this criterion.
3,413 citations
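The abstract does not spell out the underlying test, but this detector (widely known as FAST) is usually described as a segment test on a 16-pixel circle around each candidate. The sketch below is a slow reference version of such a test, with threshold t and arc length n as illustrative parameters, not the learned decision tree the paper actually derives:

```python
import numpy as np

# Offsets of the 16-pixel Bresenham circle of radius 3 used by FAST-style
# segment tests (listed clockwise starting from the top).
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_corner(img, y, x, t=20, n=12):
    """Slow reference segment test: the pixel is a corner if n contiguous
    circle pixels are all brighter than center+t or all darker than center-t."""
    c = int(img[y, x])
    ring = [int(img[y + dy, x + dx]) for dx, dy in CIRCLE]
    for sign in (+1, -1):                       # brighter arc, then darker arc
        flags = [sign * (p - c) > t for p in ring]
        flags += flags                          # wrap around the circle
        run = 0
        for f in flags[:len(ring) + n]:
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False
```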
Journal Article•
TL;DR: The PASCAL Network of Excellence first Recognising Textual Entailment (RTE-1) Challenge as mentioned in this paper was defined as recognizing, given two text fragments, whether the meaning of one text can be inferred from the other.
Abstract: This paper describes the PASCAL Network of Excellence first Recognising Textual Entailment (RTE-1) Challenge benchmark. The RTE task is defined as recognizing, given two text fragments, whether the meaning of one text can be inferred (entailed) from the other. This application-independent task is suggested as capturing major inferences about the variability of semantic expression which are commonly needed across multiple applications. The Challenge has raised noticeable attention in the research community, attracting 17 submissions from diverse groups, suggesting the generic relevance of the task.
1,735 citations
Journal Article•
TL;DR: A fast method for computing region covariance matrices from integral images is described; it is more general than previously published image sums or histograms, and with a series of integral images the covariances are obtained with a few arithmetic operations.
Abstract: We describe a new region descriptor and apply it to two problems, object detection and texture classification. The covariance of d-features, e.g., the three-dimensional color vector, the norm of first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. We describe a fast method for computation of covariances based on integral images. The idea presented here is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie on Euclidean space, therefore we use a distance metric involving generalized eigenvalues which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest neighbor search under the distance metric and performed extremely rapidly using the integral images. The performance of the covariance features is superior to other methods, as it is shown, and large rotations and illumination changes are also absorbed by the covariance matrix.
1,057 citations
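A sketch of the integral-image computation the abstract refers to: with one integral image of the feature vectors and one of their outer products, the covariance of any rectangular region follows from a few additions and one outer product. The particular features used here (intensity and gradient magnitudes) are assumptions for illustration:

```python
import numpy as np

def integral_tensors(features):
    """features: H x W x d array of per-pixel feature vectors.
    Returns cumulative sums P (H x W x d) and Q (H x W x d x d)."""
    P = features.cumsum(0).cumsum(1)
    outer = np.einsum('hwi,hwj->hwij', features, features)
    Q = outer.cumsum(0).cumsum(1)
    return P, Q

def rect_sum(I, y0, x0, y1, x1):
    """Inclusive-rectangle sum from a 2-D cumulative-sum tensor."""
    s = I[y1, x1].copy()
    if y0 > 0: s -= I[y0 - 1, x1]
    if x0 > 0: s -= I[y1, x0 - 1]
    if y0 > 0 and x0 > 0: s += I[y0 - 1, x0 - 1]
    return s

def region_covariance(P, Q, y0, x0, y1, x1):
    n = (y1 - y0 + 1) * (x1 - x0 + 1)
    s1 = rect_sum(P, y0, x0, y1, x1)              # sum of feature vectors
    s2 = rect_sum(Q, y0, x0, y1, x1)              # sum of outer products
    return (s2 - np.outer(s1, s1) / n) / (n - 1)

# Illustrative features: intensity and |dI/dx|, |dI/dy| (an assumption).
img = np.random.rand(64, 64)
gy, gx = np.gradient(img)
F = np.dstack([img, np.abs(gx), np.abs(gy)])
P, Q = integral_tensors(F)
print(region_covariance(P, Q, 10, 10, 40, 40))
```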
Journal Article•
TL;DR: FaCT++ as discussed by the authors implements a tableaux decision procedure for the well known SHOIQ description logic, with additional support for datatypes, including strings and integers, and can be used to provide reasoning services for ontology engineering tools supporting the OWL DL ontology language.
Abstract: This is a system description of the Description Logic reasoner FaCT++. The reasoner implements a tableaux decision procedure for the well known SHOIQ description logic, with additional support for datatypes, including strings and integers. The system employs a wide range of performance enhancing optimisations, including both standard techniques (such as absorption and model merging) and newly developed ones (such as ordering heuristics and taxonomic classification). FaCT++ can, via the standard DIG interface, be used to provide reasoning services for ontology engineering tools supporting the OWL DL ontology language.
1,041 citations
Journal Article•
TL;DR: This paper introduces the transactional locking II (TL2) algorithm, a software transactional memory (STM) algorithm based on a combination of commit-time locking and a novel global version-clock based validation technique, which is ten-fold faster than a single lock.
Abstract: The transactional memory programming paradigm is gaining momentum as the approach of choice for replacing locks in concurrent programming. This paper introduces the transactional locking II (TL2) algorithm, a software transactional memory (STM) algorithm based on a combination of commit-time locking and a novel global version-clock based validation technique. TL2 improves on state-of-the-art STMs in the following ways: (1) unlike all other STMs it fits seamlessly with any systems memory life-cycle, including those using malloc/free (2) unlike all other lock-based STMs it efficiently avoids periods of unsafe execution, that is, using its novel version-clock validation, user code is guaranteed to operate only on consistent memory states, and (3) in a sequence of high performance benchmarks, while providing these new properties, it delivered overall performance comparable to (and in many cases better than) that of all former STM algorithms, both lock-based and non-blocking. Perhaps more importantly, on various benchmarks, TL2 delivers performance that is competitive with the best hand-crafted fine-grained concurrent structures. Specifically, it is ten-fold faster than a single lock. We believe these characteristics make TL2 a viable candidate for deployment of transactional memory today, long before hardware transactional support is available.
893 citations
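A deliberately simplified sketch of the commit-time locking and global version-clock validation described above (write-back at commit, no contention management or retry policy); it illustrates the read-version / write-version bookkeeping, not the authors' implementation:

```python
import threading

class VersionedCell:
    """A memory word protected by a versioned write-lock (simplified)."""
    def __init__(self, value):
        self.value, self.version = value, 0
        self.lock = threading.Lock()

class GlobalClock:
    """Global version clock shared by all transactions."""
    def __init__(self):
        self._v, self._lock = 0, threading.Lock()
    def read(self):
        with self._lock:
            return self._v
    def tick(self):
        with self._lock:
            self._v += 1
            return self._v

CLOCK = GlobalClock()

class Abort(Exception):
    """Raised when a transaction observes an inconsistent state; the caller retries."""

class Transaction:
    def __init__(self):
        self.rv = CLOCK.read()               # read-version sampled at start
        self.reads, self.writes = [], {}

    def read(self, cell):
        if cell in self.writes:              # read-own-write
            return self.writes[cell]
        version, value = cell.version, cell.value
        if cell.lock.locked() or cell.version != version or version > self.rv:
            raise Abort()                    # stale or concurrently written
        self.reads.append(cell)
        return value

    def write(self, cell, value):
        self.writes[cell] = value            # buffered until commit

    def commit(self):
        acquired = []
        try:
            for cell in self.writes:                      # lock the write-set
                if not cell.lock.acquire(blocking=False):
                    raise Abort()
                acquired.append(cell)
            wv = CLOCK.tick()                             # new write-version
            for cell in self.reads:                       # validate the read-set
                if cell not in self.writes and (cell.lock.locked() or cell.version > self.rv):
                    raise Abort()
            for cell, value in self.writes.items():       # write back and stamp wv
                cell.value, cell.version = value, wv
        finally:
            for cell in acquired:
                cell.lock.release()
```

A caller would wrap the transaction body in a retry loop, starting a fresh Transaction whenever Abort is raised.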
Journal Article•
TL;DR: A program verifier as discussed by the authors is a complex system that uses compiler technology, program semantics, property inference, verification-condition generation, automatic decision procedures, and a user interface.
Abstract: A program verifier is a complex system that uses compiler technology, program semantics, property inference, verification-condition generation, automatic decision procedures, and a user interface. This paper describes the architecture of a state-of-the-art program verifier for object-oriented programs.
757 citations
Journal Article•
TL;DR: This paper presents an overview of all the main features of PRISM, a probabilistic model checking tool which has already been successfully deployed in a wide range of application domains, from real-time communication protocols to biological signalling pathways.
Abstract: Probabilistic model checking is an automatic formal verification technique for analysing quantitative properties of systems which exhibit stochastic behaviour. PRISM is a probabilistic model checking tool which has already been successfully deployed in a wide range of application domains, from real-time communication protocols to biological signalling pathways. The tool has recently undergone a significant amount of development. Major additions include facilities to manually explore models, Monte-Carlo discrete-event simulation techniques for approximate model analysis (including support for distributed simulation) and the ability to compute cost- and reward-based measures, e.g. the expected energy consumption of the system before the first failure occurs. This paper presents an overview of all the main features of PRISM. More information can be found on the website: www.cs.bham.ac.uk/∼dxp/prism.
723 citations
Journal Article•
TL;DR: It is demonstrated that for DES parameters (56-bit keys and 64-bit plaintexts) an adversary's maximal advantage against triple encryption is small until it asks about 2^78 queries.
Abstract: We show that, in the ideal-cipher model, triple encryption (the cascade of three independently-keyed blockciphers) is more secure than single or double encryption, thereby resolving a long-standing open problem. Our result demonstrates that for DES parameters (56-bit keys and 64-bit plaintexts) an adversary's maximal advantage against triple encryption is small until it asks about 2^78 queries. Our proof uses code-based game-playing in an integral way, and is facilitated by a framework for such proofs that we provide.
704 citations
Journal Article•
TL;DR: This work presents an Identity Based Encryption system that is fully secure in the standard model and has several advantages over previous such systems – namely, computational efficiency, shorter public parameters, and a “tight” security reduction, albeit to a stronger assumption that depends on the number of private key generation queries made by the adversary.
Abstract: We present an Identity Based Encryption (IBE) system that is fully secure in the standard model and has several advantages over previous such systems - namely, computational efficiency, shorter public parameters, and a tight security reduction, albeit to a stronger assumption that depends on the number of private key generation queries made by the adversary. Our assumption is a variant of Boneh et al.'s decisional Bilinear Diffie-Hellman Exponent assumption, which has been used to construct efficient hierarchical IBE and broadcast encryption systems. The construction is remarkably simple. It also provides recipient anonymity automatically, providing a second (and more efficient) solution to the problem of achieving anonymous IBE without random oracles. Finally, our proof of CCA2 security, which has more in common with the security proof for the Cramer-Shoup encryption scheme than with security proofs for other IBE systems, may be of independent interest.
666 citations
Journal Article•
TL;DR: A new Simplex-based linear arithmetic solver is presented that can be integrated efficiently in the DPLL(T) framework; it improves over existing approaches by enabling fast backtracking, supporting a priori simplification to reduce the problem size, and providing an efficient form of theory propagation.
Abstract: We present a new Simplex-based linear arithmetic solver that can be integrated efficiently in the DPLL(T) framework. The new solver improves over existing approaches by enabling fast backtracking, supporting a priori simplification to reduce the problem size, and providing an efficient form of theory propagation. We also present a new and simple approach for solving strict inequalities. Experimental results show substantial performance improvements over existing tools that use other Simplex-based solvers in DPLL(T) decision procedures. The new solver is even competitive with state-of-the-art tools specialized for the difference logic fragment.
Journal Article•
TL;DR: In this paper, a personal activity recognition system based on a cell phone platform augmented with a Bluetooth-connected sensor board is proposed to recognize 8 different activities collected from 12 different subjects.
Abstract: We are developing a personal activity recognition system that is practical, reliable, and can be incorporated into a variety of health-care related applications ranging from personal fitness to elder care. To make our system appealing and useful, we require it to have the following properties: (i) data only from a single body location needed, and it is not required to be from the same point for every user; (ii) should work out of the box across individuals, with personalization only enhancing its recognition abilities; and (iii) should be effective even with a cost-sensitive subset of the sensors and data features. In this paper, we present an approach to building a system that exhibits these properties and provide evidence based on data for 8 different activities collected from 12 different subjects. Our results indicate that the system has an accuracy rate of approximately 90% while meeting our requirements. We are now developing a fully embedded version of our system based on a cell-phone platform augmented with a Bluetooth-connected sensor board.
Journal Article•
TL;DR: In this paper, the authors treated the challenge of automatically inferring aesthetic quality of pictures using their visual content as a machine learning problem, with a peer-rated online photo sharing website as data source.
Abstract: Aesthetics, in the world of art and photography, refers to the principles of the nature and appreciation of beauty. Judging beauty and other aesthetic qualities of photographs is a highly subjective task. Hence, there is no unanimously agreed standard for measuring aesthetic value. In spite of the lack of firm rules, certain features in photographic images are believed, by many, to please humans more than certain others. In this paper, we treat the challenge of automatically inferring aesthetic quality of pictures using their visual content as a machine learning problem, with a peer-rated online photo sharing Website as data source. We extract certain visual features based on the intuition that they can discriminate between aesthetically pleasing and displeasing images. Automated classifiers are built using support vector machines and classification trees. Linear regression on polynomial terms of the features is also applied to infer numerical aesthetics ratings. The work attempts to explore the relationship between emotions which pictures arouse in people, and their low-level content. Potential applications include content-based image retrieval and digital photography.
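A small sketch of the classification and regression setup the abstract describes, with stand-in features and ratings (the real system extracts visual features from the photographs themselves and uses peer ratings as labels):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

# Hypothetical per-image features (e.g. mean brightness, colorfulness, saturation);
# real features would be computed from the images.
X = np.random.rand(200, 3)
ratings = np.random.uniform(1, 7, size=200)     # stand-in peer ratings
labels = (ratings > 4.5).astype(int)            # pleasing vs. displeasing

svm = SVC(kernel='rbf').fit(X, labels)
tree = DecisionTreeClassifier(max_depth=4).fit(X, labels)

# Linear regression on polynomial terms of the features, per the abstract.
reg = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, ratings)
print(svm.score(X, labels), tree.score(X, labels), reg.score(X, ratings))
```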
Journal Article•
TL;DR: Experimental results using six test functions demonstrate that CSO has much better performance than Particle Swarm Optimization (PSO).
Abstract: In this paper, we present a new algorithm of swarm intelligence, namely, Cat Swarm Optimization (CSO). CSO is generated by observing the behaviors of cats, and is composed of two sub-models, i.e., tracing mode and seeking mode, which model the behaviors of cats. Experimental results using six test functions demonstrate that CSO has much better performance than Particle Swarm Optimization (PSO).
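A compact sketch of the two modes; the parameter names (MR, SMP, SRD, CDC) and the greedy selection in seeking mode follow common descriptions of CSO and are assumptions here, since the abstract does not spell them out:

```python
import numpy as np

def cso(f, dim=2, n_cats=20, iters=100, mr=0.3,
        smp=5, srd=0.2, cdc=1, c=2.0, bounds=(-5.0, 5.0), seed=0):
    """Minimal Cat Swarm Optimization sketch: cats alternate between a
    seeking mode (local mutation of copies) and a PSO-like tracing mode."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_cats, dim))
    v = np.zeros((n_cats, dim))
    best = min(x, key=f).copy()

    for _ in range(iters):
        tracing = rng.random(n_cats) < mr          # mixture ratio MR
        for i in range(n_cats):
            if tracing[i]:
                # Tracing mode: move toward the best cat seen so far.
                v[i] += c * rng.random(dim) * (best - x[i])
                x[i] = np.clip(x[i] + v[i], lo, hi)
            else:
                # Seeking mode: mutate SMP copies in CDC dimensions by +-SRD,
                # then keep the best copy (a greedy simplification).
                copies = np.repeat(x[i][None, :], smp, axis=0)
                for copy in copies:
                    dims = rng.choice(dim, size=cdc, replace=False)
                    copy[dims] *= 1 + rng.uniform(-srd, srd, size=cdc)
                copies = np.clip(copies, lo, hi)
                x[i] = min(copies, key=f)
            if f(x[i]) < f(best):
                best = x[i].copy()
    return best, f(best)

# Example run on one of the usual benchmark functions (sphere), assumed here.
print(cso(lambda z: float(np.sum(z ** 2))))
```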
Journal Article•
TL;DR: The ATL (ATLAS Transformation Language) as discussed by the authors is a hybrid model transformation language that allows both declarative and imperative constructs to be used in transformation definitions, and it is supported by a set of development tools such as an editor, a compiler, a virtual machine, and a debugger.
Abstract: This paper presents ATL (ATLAS Transformation Language): a hybrid model transformation language that allows both declarative and imperative constructs to be used in transformation definitions. The paper describes the language syntax and semantics by using examples. ATL is supported by a set of development tools such as an editor, a compiler, a virtual machine, and a debugger. A case study shows the applicability of the language constructs. Alternative ways for implementing the case study are outlined. In addition to the current features, the planned future ATL features are briefly discussed.
Journal Article•
TL;DR: In particular, for embedding degree k = 2q where q is prime, the authors showed that the ability to handle log(D)/log(r) ∼ (q − 3)/(q − 1) enables building elliptic curves with ρ ∼ q/(q − 1).
Abstract: Previously known techniques to construct pairing-friendly curves of prime or near-prime order are restricted to embedding degree k ≤ 6. More general methods produce curves over F_p where the bit length of p is often twice as large as that of the order r of the subgroup with embedding degree k; the best published results achieve ρ = log(p)/log(r) ∼ 5/4. In this paper we make the first step towards surpassing these limitations by describing a method to construct elliptic curves of prime order and embedding degree k = 12. The new curves lead to very efficient implementation: non-pairing operations need no more than F_{p^4} arithmetic, and pairing values can be compressed to one third of their length in a way compatible with point reduction techniques. We also discuss the role of large CM discriminants D to minimize ρ; in particular, for embedding degree k = 2q where q is prime we show that the ability to handle log(D)/log(r) ∼ (q − 3)/(q − 1) enables building curves with ρ ∼ q/(q − 1).
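For reference, the prime-order k = 12 family introduced here is usually quoted through the following parameterization (taken from later expositions of these curves, not from the abstract above): an integer u is chosen so that

p(u) = 36u^4 + 36u^3 + 24u^2 + 6u + 1 and r(u) = 36u^4 + 36u^3 + 18u^2 + 6u + 1

are both prime, with trace of Frobenius t(u) = 6u^2 + 1 so that r = p + 1 − t, giving ρ = log(p)/log(r) ≈ 1.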
Journal Article•
TL;DR: A model checker for infinite-state sequential programs, based on Craig interpolation and the lazy abstraction paradigm, is described; on device driver benchmarks it achieves a speedup of up to two orders of magnitude over a similar tool using predicate abstraction.
Abstract: We describe a model checker for infinite-state sequential programs, based on Craig interpolation and the lazy abstraction paradigm. On device driver benchmarks, we observe a speedup of up to two orders of magnitude relative to a similar tool using predicate abstraction.
Journal Article•
TL;DR: A new convolution kernel, namely the Partial Tree (PT) kernel, is proposed to fully exploit dependency trees, together with an efficient algorithm for its computation which is furthermore sped up by applying the selection of tree nodes with non-null kernel.
Abstract: In this paper, we provide a study on the use of tree kernels to encode syntactic parsing information in natural language learning. In particular, we propose a new convolution kernel, namely the Partial Tree (PT) kernel, to fully exploit dependency trees. We also propose an efficient algorithm for its computation which is furthermore sped up by applying the selection of tree nodes with non-null kernel. The experiments with Support Vector Machines on the task of semantic role labeling and question classification show that (a) the kernel running time is linear in the average case and (b) the PT kernel improves on the other tree kernels when applied to the appropriate parsing paradigm.
Journal Article•
TL;DR: A novel method for real-time, simultaneous multi-view face detection and facial pose estimation that employs a convolutional network to map face images to points on a manifold, parametrized by pose, and non-face images to Points far from that manifold is described.
Abstract: We describe a novel method for real-time, simultaneous multi-view face detection and facial pose estimation. The method employs a convolutional network to map face images to points on a manifold, parametrized by pose, and non-face images to points far from that manifold. This network is trained by optimizing a loss function of three variables: image, pose, and face/non-face label. We test the resulting system, in a single configuration, on three standard data sets - one for frontal pose, one for rotated faces, and one for profiles - and find that its performance on each set is comparable to previous multi-view face detectors that can only handle one form of pose variation. We also show experimentally that the system's accuracy on both face detection and pose estimation is improved by training for the two tasks together.
Journal Article•
TL;DR: This work provides efficient distributed protocols for generating shares of random noise, secure against malicious participants, and introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches.
Abstract: In this work we provide efficient distributed protocols for generating shares of random noise, secure against malicious participants. The purpose of the noise generation is to create a distributed implementation of the privacy-preserving statistical databases described in recent papers [14,4,13]. In these databases, privacy is obtained by perturbing the true answer to a database query by the addition of a small amount of Gaussian or exponentially distributed random noise. The computational power of even a simple form of these databases, when the query is just of the form Σ_i f(d_i), that is, the sum over all rows i in the database of a function f applied to the data in row i, has been demonstrated in [4]. A distributed implementation eliminates the need for a trusted database administrator. The results for noise generation are of independent interest. The generation of Gaussian noise introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches (reduced by a factor of n). The generation of exponentially distributed noise uses two shallow circuits: one for generating many arbitrarily but identically biased coins at an amortized cost of two unbiased random bits apiece, independent of the bias, and the other to combine bits of appropriate biases to obtain an exponential distribution.
Journal Article•
TL;DR: A Bayesian framework for parsing images into their constituent visual patterns that optimizes the posterior probability and outputs a scene representation as a “parsing graph”, in a spirit similar to parsing sentences in speech and natural language is presented.
Abstract: In this chapter we present a Bayesian framework for parsing images into their constituent visual patterns. The parsing algorithm optimizes the posterior probability and outputs a scene representation as a parsing graph, in a spirit similar to parsing sentences in speech and natural language. The algorithm constructs the parsing graph and re-configures it dynamically using a set of moves, which are mostly reversible Markov chain jumps. This computational framework integrates two popular inference approaches - generative (top-down) methods and discriminative (bottom-up) methods. The former formulates the posterior probability in terms of generative models for images defined by likelihood functions and priors. The latter computes discriminative probabilities based on a sequence (cascade) of bottom-up tests/filters. In our Markov chain algorithm design, the posterior probability, defined by the generative models, is the invariant (target) probability for the Markov chain, and the discriminative probabilities are used to construct proposal probabilities to drive the Markov chain. Intuitively, the bottom-up discriminative probabilities activate top-down generative models. In this chapter, we focus on two types of visual patterns - generic visual patterns, such as texture and shading, and object patterns including human faces and text. These types of patterns compete and cooperate to explain the image and so image parsing unifies image segmentation, object detection, and recognition (if we use generic visual patterns only then image parsing will correspond to image segmentation [48].). We illustrate our algorithm on natural images of complex city scenes and show examples where image segmentation can be improved by allowing object specific knowledge to disambiguate low-level segmentation cues, and conversely where object detection can be improved by using generic visual patterns to explain away shadows and occlusions.
Journal Article•
TL;DR: In this paper, the authors investigated how an RFID-tag can be made unclonable by linking it inseparably to a Physical Unclonable Function (PUF) and presented the security protocols that are needed for the detection of the authenticity of a product when it is equipped with such a system.
Abstract: RFID-tags are becoming very popular tools for identification of products. As they have a small microchip on board, they offer functionality that can be used for security purposes. This chip functionality makes it possible to verify the authenticity of a product and hence to detect and prevent counterfeiting. In order to be successful for these security purposes too, RFID-tags have to be resistant against many attacks, in particular against cloning of the tag. In this paper, we investigate how an RFID-tag can be made unclonable by linking it inseparably to a Physical Unclonable Function (PUF). We present the security protocols that are needed for the detection of the authenticity of a product when it is equipped with such a system. We focus on off-line authentication because it is very attractive from a practical point of view. We show that a PUF based solution for RFID-tags is feasible in the off-line case.
Journal Article•
TL;DR: A novel technique based on the structural analysis of binary code that allows one to identify structural similarities between different worm mutations is presented, which has been used as a basis for a worm detection system that is resilient to many of the mechanisms used to evade approaches based on instruction sequences only.
Abstract: Network worms are malicious programs that spread automatically across networks by exploiting vulnerabilities that affect a large number of hosts. Because of the speed at which worms spread to large computer populations, countermeasures based on human reaction time are not feasible. Therefore, recent research has focused on devising new techniques to detect and contain network worms without the need of human supervision. In particular, a number of approaches have been proposed to automatically derive signatures to detect network worms by analyzing a number of worm-related network streams. Most of these techniques, however, assume that the worm code does not change during the infection process. Unfortunately, worms can be polymorphic. That is, they can mutate as they spread across the network. To detect these types of worms, it is necessary to devise new techniques that are able to identify similarities between different mutations of a worm. This paper presents a novel technique based on the structural analysis of binary code that allows one to identify structural similarities between different worm mutations. The approach is based on the analysis of a worm's control flow graph and introduces an original graph coloring technique that supports a more precise characterization of the worm's structure. The technique has been used as a basis to implement a worm detection system that is resilient to many of the mechanisms used to evade approaches based on instruction sequences only.
Journal Article•
TL;DR: In this article, the authors propose a three-stage model called SAIBA, where the stages represent intent planning, behavior planning and behavior realization, and a Function Markup Language (FML), describing intent without referring to physical behavior, mediates between the first two stages.
Abstract: This paper describes an international effort to unify a multimodal behavior generation framework for Embodied Conversational Agents (ECAs). We propose a three-stage model we call SAIBA where the stages represent intent planning, behavior planning and behavior realization. A Function Markup Language (FML), describing intent without referring to physical behavior, mediates between the first two stages, and a Behavior Markup Language (BML), describing desired physical realization, mediates between the last two stages. In this paper we will focus on BML. The hope is that this abstraction and modularization will help ECA researchers pool their resources to build more sophisticated virtual humans.
Journal Article•
TL;DR: A use-case model for an architectural knowledge base, together with its underlying ontology, is described, along with a small case study in which available architectural knowledge is modeled in a commercial tool, the Aduna Cluster Map Viewer, which is aimed at ontology-based visualization.
Abstract: Architectural knowledge consists of architecture design as well as the design decisions, assumptions, context, and other factors that together determine why a particular solution is the way it is. Except for the architecture design part, most of the architectural knowledge usually remains hidden, tacit in the heads of the architects. We conjecture that an explicit representation of architectural knowledge is helpful for building and evolving quality systems. If we had a repository of architectural knowledge for a system, what would it ideally contain, how would we build it, and exploit it in practice? In this paper we describe a use-case model for an architectural knowledge base, together with its underlying ontology. We present a small case study in which we model available architectural knowledge in a commercial tool, the Aduna Cluster Map Viewer, which is aimed at ontology-based visualization. Putting together ontologies, use cases and tool support, we are able to reason about which types of architecting tasks can be supported, and how this can be done.
Journal Article•
TL;DR: Masking techniques counter side-channel attacks that are based on multiple measurements of the same operation on different data, but most known techniques are not effective in the presence of glitches; a new protection method is presented that requires random values only at the start and stays effective in the presence of glitches.
Abstract: Implementations of cryptographic algorithms are vulnerable to side-channel attacks. Masking techniques are employed to counter side-channel attacks that are based on multiple measurements of the same operation on different data. Most currently known techniques require new random values after every nonlinear operation and they are not effective in the presence of glitches. We present a new method to protect implementations. Our method has a higher computational complexity, but requires random values only at the start, and stays effective in the presence of glitches.
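The abstract does not give the construction, but this style of countermeasure is commonly illustrated with a three-share realization of a two-input AND in which every output share is independent of at least one input share. With x = x1 ⊕ x2 ⊕ x3 and y = y1 ⊕ y2 ⊕ y3, one such sharing (quoted from later expositions of the approach, and only an illustration here) is z1 = x2·y2 ⊕ x2·y3 ⊕ x3·y2, z2 = x3·y3 ⊕ x3·y1 ⊕ x1·y3, z3 = x1·y1 ⊕ x1·y2 ⊕ x2·y1. A short exhaustive check that the shares recombine to the AND of the unshared inputs:

```python
from itertools import product

def shares_and(x1, x2, x3, y1, y2, y3):
    # Each output share omits share i of both inputs (non-completeness).
    z1 = (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)
    z2 = (x3 & y3) ^ (x3 & y1) ^ (x1 & y3)
    z3 = (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)
    return z1, z2, z3

# Exhaustive check that the shared output recombines to AND of the inputs.
for x1, x2, x3, y1, y2, y3 in product((0, 1), repeat=6):
    z1, z2, z3 = shares_and(x1, x2, x3, y1, y2, y3)
    assert z1 ^ z2 ^ z3 == (x1 ^ x2 ^ x3) & (y1 ^ y2 ^ y3)
```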
Journal Article•
TL;DR: In this paper, a new stream cipher construction based on block cipher design principles is proposed, where the building blocks used in block ciphers are replaced by equivalent stream cipher components.
Abstract: In this paper, we propose a new stream cipher construction based on block cipher design principles. The main idea is to replace the building blocks used in block ciphers by equivalent stream cipher components. In order to illustrate this approach, we construct a very simple synchronous stream cipher which provides a lot of flexibility for hardware implementations, and seems to have a number of desirable cryptographic properties.
Journal Article•
TL;DR: In this paper, it was shown that while the function proposed by Micciancio is not collision resistant, it can be easily modified to achieve collision resistance under essentially the same complexity assumptions on cyclic lattices.
Abstract: In (Micciancio, FOCS 2002), it was proved that solving the generalized compact knapsack problem on the average is as hard as solving certain worst-case problems for cyclic lattices. This result immediately yielded very efficient one-way functions whose security was based on worst-case hardness assumptions. In this work, we show that, while the function proposed by Micciancio is not collision resistant, it can be easily modified to achieve collision resistance under essentially the same complexity assumptions on cyclic lattices. Our modified function is obtained as a special case of a more general result, which yields efficient collision-resistant hash functions based on the worst-case hardness of various new problems. These include new problems from algebraic number theory as well as classic lattice problems (e.g., the shortest vector problem) over ideal lattices, a class of lattices that includes cyclic lattices as a special case.
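A sketch of the generalized compact knapsack function discussed above: the key is m random ring elements a_1, ..., a_m and the input is m polynomials x_i with small coefficients, hashed to Σ_i a_i · x_i in Z_p[X]/(X^n − 1). The parameters below are toy values for illustration only, and this is the cyclic instance the abstract says is one-way but not collision resistant; the modified, collision-resistant variants change the underlying ring:

```python
import numpy as np

def cyclic_mul(a, b, p):
    """Multiply two polynomials of degree < n in Z_p[X]/(X^n - 1)."""
    n = len(a)
    out = np.zeros(n, dtype=np.int64)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[(i + j) % n] = (out[(i + j) % n] + ai * bj) % p
    return out

def knapsack_hash(key, x, p):
    """h_a(x) = sum_i a_i * x_i in Z_p[X]/(X^n - 1); the x_i have small coefficients."""
    n = len(key[0])
    acc = np.zeros(n, dtype=np.int64)
    for a_i, x_i in zip(key, x):
        acc = (acc + cyclic_mul(a_i, x_i, p)) % p
    return acc

# Toy, insecure parameters for illustration only.
rng = np.random.default_rng(0)
n, m, p = 8, 4, 257
key = [rng.integers(0, p, n) for _ in range(m)]      # random ring elements a_i
msg = [rng.integers(0, 2, n) for _ in range(m)]      # short (0/1) inputs x_i
print(knapsack_hash(key, msg, p))
```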
Journal Article•
TL;DR: This paper revisits the notion of Tag-Based Encryption (TBE) and provides security definitions for the selective-tag case and shows how to apply the techniques gained from the TBE construction to directly design a new Key Encapsulation Mechanism.
Abstract: One of the celebrated applications of Identity-Based Encryption (IBE) is the Canetti, Halevi, and Katz (CHK) transformation from any (selective-identity secure) IBE scheme into a full chosen-ciphertext secure encryption scheme. Since such IBE schemes in the standard model are known from previous work this immediately provides new chosen-ciphertext secure encryption schemes in the standard model. This paper revisits the notion of Tag-Based Encryption (TBE) and provides security definitions for the selective-tag case. Even though TBE schemes belong to a more general class of cryptographic schemes than IBE, we observe that (selective-tag secure) TBE is a sufficient primitive for the CHK transformation and therefore implies chosen-ciphertext secure encryption. We construct efficient and practical TBE schemes and give tight security reductions in the standard model from the Decisional Linear Assumption in gap-groups. In contrast to all known IBE schemes our TBE construction does not directly deploy pairings. Instantiating the CHK transformation with our TBE scheme results in an encryption scheme whose decryption can be carried out in one single multi-exponentiation. Furthermore, we show how to apply the techniques gained from the TBE construction to directly design a new Key Encapsulation Mechanism. Since in this case we can avoid the CHK transformation the scheme results in improved efficiency.
Journal Article•
TL;DR: In this article, the authors proposed a measure on local outliers based on a symmetric neighborhood relationship, which considers both neighbors and reverse neighbors of an object when estimating its density distribution.
Abstract: Mining outliers in a database is to find exceptional objects that deviate from the rest of the data set. Besides classical outlier analysis algorithms, recent studies have focused on mining local outliers, i.e., the outliers that have density distribution significantly different from their neighborhood. The estimation of density distribution at the location of an object has so far been based on the density distribution of its k-nearest neighbors [2,11]. However, when outliers are in the location where the density distributions in the neighborhood are significantly different, for example, in the case of objects from a sparse cluster close to a denser cluster, this may result in wrong estimation. To avoid this problem, here we propose a simple but effective measure on local outliers based on a symmetric neighborhood relationship. The proposed measure considers both neighbors and reverse neighbors of an object when estimating its density distribution. As a result, outliers so discovered are more meaningful. To compute such local outliers efficiently, several mining algorithms are developed that detect top-n outliers based on our definition. A comprehensive performance evaluation and analysis shows that our methods are not only efficient in the computation but also more effective in ranking outliers.
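A sketch of the symmetric-neighborhood idea: density is taken as the inverse k-distance, and each object's score compares the average density over its k-nearest neighbors together with its reverse k-nearest neighbors against its own density. Names and the exact normalization follow common presentations of this measure and are assumptions here, not necessarily the paper's definition:

```python
import numpy as np

def influenced_outlierness(X, k=3):
    """Sketch of a symmetric-neighborhood local outlier score: density is the
    inverse k-distance, and each object's score compares its own density with
    the average density of its k-NN together with its reverse k-NN."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    knn = np.argsort(d, axis=1)[:, :k]                          # k nearest neighbors
    density = 1.0 / d[np.arange(n)[:, None], knn].max(axis=1)   # 1 / k-distance
    rnn = [np.flatnonzero((knn == i).any(axis=1)) for i in range(n)]
    scores = np.empty(n)
    for i in range(n):
        influence = np.union1d(knn[i], rnn[i])                  # symmetric neighborhood
        scores[i] = density[influence].mean() / density[i]
    return scores                                               # larger => more outlying

pts = np.vstack([np.random.default_rng(1).normal(0, 1, (30, 2)), [[6.0, 6.0]]])
print(np.argsort(-influenced_outlierness(pts))[:3])             # indices of top-3 outliers
```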