
Showing papers in "Lecture Notes in Computer Science in 2003"


Book ChapterDOI
TL;DR: In this paper, the authors give a basic introduction to Gaussian Process regression models and present the simple equations for incorporating training data and examine how to learn the hyperparameters using the marginal likelihood.
Abstract: We give a basic introduction to Gaussian Process regression models. We focus on understanding the role of the stochastic process and how it is used to define a distribution over functions. We present the simple equations for incorporating training data and examine how to learn the hyperparameters using the marginal likelihood. We explain the practical advantages of Gaussian Processes and end with conclusions and a look at the current trends in GP work.

6,295 citations
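
As a rough, minimal sketch of the equations the paper walks through, the following Python snippet performs GP regression with a squared-exponential kernel; the fixed hyperparameters and function names are illustrative, and the marginal-likelihood tuning mentioned in the abstract is not shown.

```python
# Minimal Gaussian Process regression sketch (squared-exponential kernel).
# Hyperparameters are fixed for illustration; the paper's marginal-likelihood
# optimization is not reproduced here.
import numpy as np

def sq_exp_kernel(A, B, lengthscale=1.0, signal_var=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return signal_var * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X, y, X_star, noise_var=0.1):
    K = sq_exp_kernel(X, X) + noise_var * np.eye(len(X))
    K_s = sq_exp_kernel(X, X_star)
    K_ss = sq_exp_kernel(X_star, X_star)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s.T @ alpha                 # predictive mean
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v                 # predictive covariance
    return mean, cov

X = np.linspace(0, 5, 10)[:, None]
y = np.sin(X).ravel()
mu, cov = gp_predict(X, y, np.linspace(0, 5, 50)[:, None])
```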


Journal Article
TL;DR: In this article, the concept of certificateless public key cryptography (CL-PKC) was introduced and made concrete, which does not require certificates to guarantee the authenticity of public keys.
Abstract: This paper introduces and makes concrete the concept of certificateless public key cryptography (CL-PKC), a model for the use of public key cryptography which avoids the inherent escrow of identity-based cryptography and yet which does not require certificates to guarantee the authenticity of public keys. The lack of certificates and the presence of an adversary who has access to a master key necessitates the careful development of a new security model. We focus on certificateless public key encryption (CL-PKE), showing that a concrete pairing-based CL-PKE scheme is secure provided that an underlying problem closely related to the Bilinear Diffie-Hellman Problem is hard.

1,568 citations


Book ChapterDOI
TL;DR: This paper proposes a novel kNN type method for classification that reduces the dependency on k, makes classification faster, and compares well with C5.0 and kNN in terms of classification accuracy.
Abstract: The k-Nearest-Neighbours (kNN) is a simple but effective method for classification. The major drawbacks with respect to kNN are (1) its low efficiency – being a lazy learning method prohibits it in many applications such as dynamic web mining for a large repository, and (2) its dependency on the selection of a “good value” for k. In this paper, we propose a novel kNN type method for classification that is aimed at overcoming these shortcomings. Our method constructs a kNN model for the data, which replaces the data to serve as the basis of classification. The value of k is automatically determined, is varied for different data, and is optimal in terms of classification accuracy. The construction of the model reduces the dependency on k and makes classification faster. Experiments were carried out on some public datasets collected from the UCI machine learning repository in order to test our method. The experimental results show that the kNN-based model compares well with C5.0 and kNN in terms of classification accuracy, but is more efficient than the standard kNN.

1,024 citations
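
For context, a plain kNN classifier of the kind the paper takes as its baseline might look like the sketch below; the model-construction step that removes the dependency on k is not reproduced, and all names are illustrative.

```python
# Baseline k-Nearest-Neighbours classifier (lazy learning); the paper's
# model-based variant, which avoids choosing k, is not shown here.
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=5):
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    return Counter(y_train[nearest]).most_common(1)[0][0]

X_train = np.array([[0.0], [1.0], [2.0], [10.0]])
y_train = np.array(["a", "a", "a", "b"])
print(knn_predict(X_train, y_train, np.array([1.5]), k=3))   # 'a'
```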


Journal Article
TL;DR: In this paper, an efficient identity based signature scheme based on pairings whose security relies on the hardness of the Diffie-Hellman problem in the random oracle model was proposed.
Abstract: We develop an efficient identity based signature scheme based on pairings whose security relies on the hardness of the Diffie-Hellman problem in the random oracle model. We describe how this scheme is obtained as a special version of a more general generic scheme which yields further new provably secure identity based signature schemes if pairings are used. The generic scheme also includes traditional public key signature schemes. We further discuss issues of key escrow and the distribution of keys to multiple trust authorities. The appendix contains a brief description of the relevant properties of supersingular elliptic curves and the Weil and Tate pairings.

885 citations


Book ChapterDOI
TL;DR: EvalVid is targeted for researchers who want to evaluate their network designs or setups in terms of user perceived video quality, and has a modular construction, making it possible to exchange both the network and the codec.
Abstract: With EvalVid we present a complete framework and tool-set for evaluation of the quality of video transmitted over a real or simulated communication network. Besides measuring QoS parameters of the underlying network, like loss rates, delays, and jitter, we also support a subjective video quality evaluation of the received video based on the frame-by-frame PSNR calculation. The tool-set has a modular construction, making it possible to exchange both the network and the codec. We present here its application for MPEG-4 as an example. EvalVid is targeted at researchers who want to evaluate their network designs or setups in terms of user-perceived video quality. The tool-set is publicly available [11].

825 citations
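
The frame-by-frame PSNR that underlies this kind of evaluation is simple to state; the snippet below is an illustrative implementation, not EvalVid's own code.

```python
# PSNR between a reference frame and the corresponding received frame,
# as used in EvalVid-style quality evaluation (illustrative only).
import numpy as np

def psnr(reference, received, max_val=255.0):
    mse = np.mean((reference.astype(np.float64) - received.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                     # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)  # in dB
```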


Journal Article
TL;DR: In this article, a novel set of rotated Haar-like features is introduced to enrich the simple features of Viola et al.'s approach; these features can also be calculated efficiently.
Abstract: Recently Viola et al. have introduced a rapid object detection scheme based on a boosted cascade of simple feature classifiers. In this paper we introduce and empirically analyze two extensions to their approach. Firstly, a novel set of rotated Haar-like features is introduced. These novel features significantly enrich the simple features of [6] and can also be calculated efficiently. With these new rotated features our sample face detector achieves on average a 10% lower false alarm rate at a given hit rate. Secondly, we present a thorough analysis of the effect of different boosting algorithms (namely Discrete, Real and Gentle Adaboost) and weak classifiers on the detection performance and computational complexity. We will see that Gentle Adaboost with small CART trees as base classifiers outperforms Discrete Adaboost and stumps. The complete object detection training and detection system as well as a trained face detector are available in the Open Computer Vision Library at sourceforge.net [8].

823 citations
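
The upright Haar-like features these detectors use can be evaluated in constant time with an integral image; the sketch below shows that idea for a two-rectangle feature. The 45-degree rotated features introduced in the paper rely on an analogous "rotated integral image", which is not shown, and all function names are illustrative.

```python
# Upright two-rectangle Haar-like feature via an integral image.
import numpy as np

def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    # Sum of img[r0:r1, c0:c1] in O(1) using the integral image.
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0: total -= ii[r0 - 1, c1 - 1]
    if c0 > 0: total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

def two_rect_feature(ii, r, c, h, w):
    # Left half minus right half of an h-by-w window at (r, c).
    left = rect_sum(ii, r, c, r + h, c + w // 2)
    right = rect_sum(ii, r, c + w // 2, r + h, c + w)
    return left - right

img = np.arange(36, dtype=np.float64).reshape(6, 6)
ii = integral_image(img)
print(two_rect_feature(ii, 0, 0, 4, 4))
```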


Journal Article
TL;DR: In benchmark studies using a set of large industrial circuit verification instances, this method is greatly more efficient than BDD-based symbolic model checking, and compares favorably to some recent SAT-based model checking methods on positive instances.
Abstract: We consider a fully SAT-based method of unbounded symbolic model checking based on computing Craig interpolants. In benchmark studies using a set of large industrial circuit verification instances, this method is greatly more efficient than BDD-based symbolic model checking, and compares favorably to some recent SAT-based model checking methods on positive instances.

775 citations


Journal Article
TL;DR: An alternative information theoretic measure of anonymity is proposed which takes into account the probabilities of users sending and receiving the messages, and it is shown how to calculate it for a message in a standard mix-based anonymity system.
Abstract: In this paper we look closely at the popular metric of anonymity, the anonymity set, and point out a number of problems associated with it. We then propose an alternative information theoretic measure of anonymity which takes into account the probabilities of users sending and receiving the messages and show how to calculate it for a message in a standard mix-based anonymity system. We also use our metric to compare a pool mix to a traditional threshold mix, which was impossible using anonymity sets. We also show how the maximum route length restriction which exists in some fielded anonymity systems can lead to the attacker performing more powerful traffic analysis. Finally, we discuss open problems and future work on anonymity measurements.

760 citations
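
The core of such an information-theoretic metric is the entropy of the attacker's probability distribution over possible senders (or receivers) of a message; the snippet below illustrates that calculation only, not the mix-specific probability modelling in the paper.

```python
# Entropy-based anonymity: higher entropy of the attacker's distribution over
# candidate senders means more anonymity (illustrative sketch).
import math

def anonymity_entropy(probs):
    # probs: probability that each user sent the message; assumed to sum to 1.
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(anonymity_entropy([1/8] * 8))              # uniform over 8 users -> 3.0 bits
print(anonymity_entropy([0.5, 0.3, 0.1, 0.1]))   # skewed -> less anonymity
```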


Book ChapterDOI
TL;DR: In this article, the authors propose an organization centered multi-agent system (OCMAS) model called AGR for agent/group/role and propose a set of notations and a methodological framework to help the designer to build MAS using AGR.
Abstract: While multi-agent systems seem to provide a good basis for building complex software systems, this paper points out some of the drawbacks of classical “agent centered” multi-agent systems. To resolve these difficulties we claim that organization centered multi-agent system, or OCMAS for short, may be used. We propose a set of general principles from which true OCMAS may be designed. One of these principles is not to assume anything about the cognitive capabilities of agents. In order to show how OCMAS models may be designed, we propose a very concise and minimal OCMAS model called AGR, for Agent/Group/Role. We propose a set of notations and a methodological framework to help the designer to build MAS using AGR. We then show that it is possible to design multi-agent systems using only OCMAS models.

704 citations
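
To make the Agent/Group/Role vocabulary concrete, here is a minimal data-structure sketch: agents play roles within groups, and nothing is assumed about their internal architecture. The class and method names are illustrative and not taken from the paper's notation.

```python
# Minimal sketch of AGR-style bookkeeping: groups map agents to role sets.
from collections import defaultdict

class AGROrganization:
    def __init__(self):
        self.groups = defaultdict(dict)          # group -> {agent: set(roles)}

    def enter_group(self, agent, group):
        self.groups[group].setdefault(agent, set())

    def take_role(self, agent, group, role):
        self.groups[group].setdefault(agent, set()).add(role)

    def agents_with_role(self, group, role):
        return [a for a, roles in self.groups[group].items() if role in roles]

org = AGROrganization()
org.take_role("agent1", "auction", "seller")
org.take_role("agent2", "auction", "buyer")
print(org.agents_with_role("auction", "buyer"))   # ['agent2']
```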


Journal Article
TL;DR: In this paper, the first constructions of a (non-interactive) forward-secure public-key encryption scheme were presented, and the main construction achieves security against chosen plaintext attacks under the decisional bilinear Diffie-Hellman assumption in the standard model.
Abstract: Cryptographic computations are often carried out on insecure devices for which the threat of key exposure represents a serious and realistic concern. In an effort to mitigate the damage caused by exposure of secret data (e.g., keys) stored on such devices, the paradigm of forward security was introduced. In a forward-secure scheme, secret keys are updated at regular periods of time; furthermore, exposure of a secret key corresponding to a given time period does not enable an adversary to break the scheme (in the appropriate sense) for any prior time period. A number of constructions of forward-secure digital signature schemes, key-exchange protocols, and symmetric-key schemes are known. We present the first constructions of a (non-interactive) forward-secure public-key encryption scheme. Our main construction achieves security against chosen plaintext attacks under the decisional bilinear Diffie-Hellman assumption in the standard model. It is practical, and all complexity parameters grow at most logarithmically with the total number of time periods. The scheme can also be extended to achieve security against chosen ciphertext attacks.

677 citations



Journal Article
TL;DR: This work presents the first experiences in using PROB on several case studies, highlighting that PROB enables users to uncover errors that are not easily discovered by existing tools.
Abstract: We present PROB, an animation and model checking tool for the B method. PROB's animation facilities allow users to gain confidence in their specifications, and unlike the animator provided by the B-Toolkit, the user does not have to guess the right values for the operation arguments or choice variables. PROB contains a model checker and a constraint-based checker, both of which can be used to detect various errors in B specifications. We present our first experiences in using PROB on several case studies, highlighting that PROB enables users to uncover errors that are not easily discovered by existing tools.

Book ChapterDOI
TL;DR: Experimental results on an image dataset from 100 users confirm the utility of combining hand geometry features with those from palmprints and achieve promising results with a simple image acquisition setup.
Abstract: A new approach for personal identification using hand images is presented. This paper attempts to improve the performance of a palmprint-based verification system by integrating hand geometry features. Unlike other bimodal biometric systems, the users do not have to undergo the inconvenience of passing through two sensors, since the palmprint and hand geometry features are acquired from the same image, using a digital camera, at the same time. Each of these gray-level images is aligned and then used to extract palmprint and hand geometry features. These features are then examined for their individual and combined performance. The image acquisition setup used in this work is inherently simple: it does not employ any special illumination, nor does it use any pegs that might inconvenience the users. Our experimental results on the image dataset from 100 users confirm the utility of combining hand geometry features with those from palmprints and achieve promising results with a simple image acquisition setup.

Journal Article
TL;DR: In this paper, a new enhancement of RANSAC, the locally optimized version (LO-RANSAC), is introduced, which makes the above-mentioned assumption valid by applying local optimization to the solution estimated from the random sample.
Abstract: A new enhancement of RANSAC, the locally optimized RANSAC (LO-RANSAC), is introduced. It has been observed that, to find an optimal solution (with a given probability), the number of samples drawn in RANSAC is significantly higher than predicted from the mathematical model. This is due to the incorrect assumption that a model with parameters computed from an outlier-free sample is consistent with all inliers. The assumption rarely holds in practice. The locally optimized RANSAC makes no new assumptions about the data; on the contrary, it makes the above-mentioned assumption valid by applying local optimization to the solution estimated from the random sample. The performance of the improved RANSAC is evaluated in a number of epipolar geometry and homography estimation experiments. Compared with standard RANSAC, the speed-up achieved is two- to three-fold and the quality of the solution (measured by the number of inliers) is increased by 10-20%. The number of samples drawn is in good agreement with theoretical predictions.
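
The sketch below illustrates the flavour of the approach on simple 2D line fitting: whenever a minimal sample yields a new best consensus, the model is re-estimated from all of its inliers and re-scored. This is a simplified stand-in for the paper's local-optimization step (which considers epipolar geometry and homographies), and all names are illustrative.

```python
# RANSAC with a simple local-optimization step, illustrated on 2D line fitting.
import numpy as np

def fit_line(pts):
    # Total least squares fit: returns unit normal n and offset d (n . x = d).
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    return n, n @ centroid

def lo_ransac_line(points, iters=200, thresh=1.0, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 2, replace=False)]
        n, d = fit_line(sample)
        inliers = np.abs(points @ n - d) < thresh
        if inliers.sum() > best_inliers.sum():
            # Local optimization: refit on the whole consensus set and re-score.
            n, d = fit_line(points[inliers])
            inliers = np.abs(points @ n - d) < thresh
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers
```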

Journal Article
TL;DR: In this article, the problem representation together with the variation operators is seen as an integral part of the optimization problem and can hence be easily separated from the selection operators, which makes it possible to specify and implement representation-independent selection modules, which form the essence of modern multiobjective optimization algorithms.
Abstract: This paper introduces an interface specification (PISA) that allows the problem-specific part of an optimizer to be separated from the problem-independent part. We propose a view of the general optimization scenario in which the problem representation, together with the variation operators, is seen as an integral part of the optimization problem and can hence be easily separated from the selection operators. Both parts are implemented as independent programs that can be provided as ready-to-use packages and arbitrarily combined. This makes it possible to specify and implement representation-independent selection modules, which form the essence of modern multiobjective optimization algorithms. The variation operators, on the other hand, have to be defined in one module together with the optimization problem, facilitating a customized problem description. Besides the specification, the paper contains a correctness proof for the protocol and measured efficiency results.

Book ChapterDOI
TL;DR: An introduction to theoretical and practical aspects of Boosting and Ensemble learning is provided, giving a useful reference for researchers in the field of Boosting as well as for those seeking to enter this fascinating area of research.
Abstract: We provide an introduction to theoretical and practical aspects of Boosting and Ensemble learning, providing a useful reference for researchers in the field of Boosting as well as for those seeking to enter this fascinating area of research. We begin with a short background concerning the necessary learning theoretical foundations of weak learners and their linear combinations. We then point out the useful connection between Boosting and the Theory of Optimization, which facilitates the understanding of Boosting and later on enables us to move on to new Boosting algorithms, applicable to a broad spectrum of problems. In order to increase the relevance of the paper to practitioners, we have added remarks, pseudo code, "tricks of the trade", and algorithmic considerations where appropriate. Finally, we illustrate the usefulness of Boosting algorithms by giving an overview of some existing applications. The main ideas are illustrated on the problem of binary classification, although several extensions are discussed.
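
As a small, self-contained illustration of the boosting recipe (not the paper's own pseudo code), here is Discrete AdaBoost with decision stumps on labels in {-1, +1}; all names are illustrative.

```python
# Minimal Discrete AdaBoost with decision stumps (binary labels in {-1, +1}).
import numpy as np

def train_stump(X, y, w):
    best = (None, None, None, np.inf)          # (feature, threshold, polarity, error)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for polarity in (1, -1):
                pred = np.where(polarity * (X[:, j] - t) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, polarity, err)
    return best

def adaboost(X, y, rounds=20):
    w = np.full(len(y), 1.0 / len(y))          # uniform initial weights
    ensemble = []
    for _ in range(rounds):
        j, t, pol, err = train_stump(X, y, w)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = np.where(pol * (X[:, j] - t) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)         # up-weight misclassified points
        w /= w.sum()
        ensemble.append((alpha, j, t, pol))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(p * (X[:, j] - t) > 0, 1, -1) for a, j, t, p in ensemble)
    return np.sign(score)

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
print(predict(adaboost(X, y, rounds=5), X))    # [-1. -1.  1.  1.]
```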

Book ChapterDOI
TL;DR: A protocol for evaluating verification algorithms on the BANCA database, a new large, realistic and challenging multi-modal database intended for training and testing multi- modal verification systems, is described.
Abstract: In this paper we describe the acquisition and content of a new large, realistic and challenging multi-modal database intended for training and testing multi-modal verification systems. The BANCA database was captured in four European languages in two modalities (face and voice). For recording, both high and low quality microphones and cameras were used. The subjects were recorded in three different scenarios, controlled, degraded and adverse over a period of three months. In total 208 people were captured, half men and half women. In this paper we also describe a protocol for evaluating verification algorithms on the database. The database will be made available to the research community through http://www.ee.surrey.ac.uk/Research/VSSP/banca.

Book ChapterDOI
TL;DR: This work introduces the concept of a delta-contracting and epsilon-revealing function which executes preprocessing in the biometric authentication scheme; it is believed that this concept can become a building block of a public infrastructure for biometric authentication that nonetheless preserves the privacy of the participants.
Abstract: In biometrics, a human being needs to be identified based on some characteristic physiological parameters. Often this recognition is part of some security system. Secure storage of reference data (i.e., user templates) of individuals is a key concern. It is undesirable that a dishonest verifier can misuse parameters that he obtains before or during a recognition process. We propose a method that allows a verifier to check the authenticity of the prover in a way that the verifier does not learn any information about the biometrics of the prover, unless the prover willingly releases these parameters. To this end, we introduce the concept of a delta-contracting and epsilon-revealing function which executes preprocessing in the biometric authentication scheme. It is believed that this concept can become a building block of a public infrastructure for biometric authentication that nonetheless preserves privacy of the participants.

Journal Article
TL;DR: A steganalytic method that can reliably detect messages (and estimate their size) hidden in JPEG images using the steganographic algorithm F5 is presented.
Abstract: In this paper, we present a steganalytic method that can reliably detect messages (and estimate their size) hidden in JPEG images using the steganographic algorithm F5. The key element of the method is estimation of the cover-image histogram from the stego-image. This is done by decompressing the stego-image, cropping it by four pixels in both directions to remove the quantization in the frequency domain, and recompressing it using the same quality factor as the stego-image. The number of relative changes introduced by F5 is determined using the least square fit by comparing the estimated histograms of selected DCT coefficients with those of the stego-image. Experimental results indicate that relative modifications as small as 10% of the usable DCT coefficients can be reliably detected. The method is tested on a diverse set of test images that include both raw and processed images in the JPEG and BMP formats.
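
The calibration idea (crop the decompressed stego image by four pixels, recompress, and compare DCT histograms) can be sketched as below. This is a simplified stand-in: it uses rounded, unquantized block-DCT histograms rather than the quantized-coefficient histograms the paper uses, assumes the JPEG quality factor is known rather than recovered from the quantization tables, and omits the F5-specific least-squares estimation of the change rate. Function names and the `quality` parameter are illustrative.

```python
# Sketch of histogram calibration for JPEG steganalysis (simplified).
import io
import numpy as np
from PIL import Image
from scipy.fftpack import dct

def block_dct_histogram(gray, mode=(2, 1), bins=range(-8, 9)):
    coeffs = []
    for r in range(0, gray.shape[0] - 7, 8):
        for c in range(0, gray.shape[1] - 7, 8):
            block = gray[r:r + 8, c:c + 8].astype(np.float64) - 128.0
            d = dct(dct(block.T, norm='ortho').T, norm='ortho')   # 2D DCT
            coeffs.append(round(d[mode]))
    return np.histogram(coeffs, bins=list(bins))[0]

def calibrated_histograms(stego_path, quality=75, mode=(2, 1)):
    stego = Image.open(stego_path).convert('L')
    cropped = stego.crop((4, 4, stego.width, stego.height))       # shift the 8x8 grid
    buf = io.BytesIO()
    cropped.save(buf, format='JPEG', quality=quality)             # re-compress estimate
    buf.seek(0)
    estimate = Image.open(buf).convert('L')
    return (block_dct_histogram(np.array(stego), mode),
            block_dct_histogram(np.array(estimate), mode))
```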

Book ChapterDOI
TL;DR: The authorization requirements of the DataGrid and DataTAG Projects are briefly described, and the architecture of a new service developed to manage authorization information in Virtual Organization scope, the Virtual Organization Membership Service (VOMS), is illustrated.
Abstract: We briefly describe the authorization requirements, focusing on the framework of the DataGrid and DataTAG Projects and illustrate the architecture of a new service we have developed, the Virtual Organization Membership Service (VOMS), to manage authorization information in Virtual Organization scope.

Journal Article
TL;DR: A theorem is presented that shows that the maximization of this scalar value constitutes the necessary and sufficient condition for the function's arguments to be maximally diverse Pareto optimal solutions of a discrete, multi-objective, optimization problem.
Abstract: This article describes a set function that maps a set of Pareto optimal points to a scalar. A theorem is presented that shows that the maximization of this scalar value constitutes the necessary and sufficient condition for the function's arguments to be maximally diverse Pareto optimal solutions of a discrete, multi-objective, optimization problem. This scalar quantity, a hypervolume based on a Lebesgue measure, is therefore the best metric to assess the quality of multiobjective optimization algorithms. Moreover, it can be used as the objective function in simulated annealing (SA) to induce convergence in probability to the Pareto optima. An efficient, polynomial-time algorithm for calculating this scalar and an analysis of its complexity are also presented.
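
For intuition, the two-objective case of this hypervolume is easy to compute by sweeping the sorted front; the snippet below is a small illustration for minimization problems, not the paper's general polynomial-time algorithm.

```python
# Hypervolume (Lebesgue measure) of a 2-D Pareto front, minimization assumed,
# measured against a reference point that is dominated by every front point.
def hypervolume_2d(front, ref):
    pts = sorted(front)                    # ascending in f1, hence descending in f2
    volume, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        volume += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return volume

print(hypervolume_2d([(1, 4), (2, 2), (4, 1)], ref=(5, 5)))   # 11.0
```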

Journal Article
TL;DR: The storage, indexing and query processing architecture of eXist, an Open Source native XML database system, is presented; an enhanced indexing scheme at its core supports quick identification of structural node relationships and enables keyword search on element and attribute contents.
Abstract: With the advent of native and XML enabled database systems, techniques for efficiently storing, indexing and querying large collections of XML documents have become an important research topic. This paper presents the storage, indexing and query processing architecture of eXist, an Open Source native XML database system. eXist is tightly integrated with existing tools and covers most of the native XML database features. An enhanced indexing scheme at the architecture's core supports quick identification of structural node relationships. Based on this scheme, we extend the application of path join algorithms to implement most parts of the XPath query language specification and add support for keyword search on element and attribute contents.
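
The following is not eXist itself, but a small lxml example of the kind of XPath query (structural navigation combined with a content predicate) that such a database indexes and evaluates; the document and element names are made up for illustration.

```python
# Illustrative XPath query mixing structure and content, evaluated with lxml.
from lxml import etree

doc = etree.fromstring("""
<library>
  <book year="2003"><title>XML Databases</title><author>Smith</author></book>
  <book year="1999"><title>Query Processing</title><author>Jones</author></book>
</library>""")

titles = doc.xpath('//book[@year > 2000 and contains(author, "Smith")]/title/text()')
print(titles)   # ['XML Databases']
```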

Book ChapterDOI
TL;DR: It is rigorously established that, even in this setting, the area under the ROC (Receiver Operating Characteristics) curve, or simply AUC, provides a better measure than accuracy when measuring and comparing classification systems.
Abstract: Predictive accuracy has been widely used as the main criterion for comparing the predictive ability of classification systems (such as C4.5, neural networks, and Naive Bayes). Most of these classifiers also produce probability estimations of the classification, but they are completely ignored in the accuracy measure. This is often taken for granted because both training and testing sets only provide class labels. In this paper we establish rigorously that, even in this setting, the area under the ROC (Receiver Operating Characteristics) curve, or simply AUC, provides a better measure than accuracy. Our result is quite significant for three reasons. First, we establish, for the first time, rigorous criteria for comparing evaluation measures for learning algorithms. Second, it suggests that AUC should replace accuracy when measuring and comparing classification systems. Third, our result also prompts us to reevaluate many well-established conclusions based on accuracy in machine learning. For example, it is well accepted in the machine learning community that, in terms of predictive accuracy, Naive Bayes and decision trees are very similar. Using AUC, however, we show experimentally that Naive Bayes is significantly better than the decision-tree learning algorithms.
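
AUC itself is just the rank-based (Wilcoxon-Mann-Whitney) statistic over classifier scores; the snippet below computes it directly and is purely illustrative of the measure the paper advocates.

```python
# Rank-based AUC from scores and binary labels (Wilcoxon-Mann-Whitney form).
def auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.4, 0.3], [1, 0, 1, 0]))   # 0.75
```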

Book ChapterDOI
TL;DR: This work proposes a new image preprocessing algorithm that compensates for illumination variations using only a single brightness image; it does not require any training steps, knowledge of 3D face models or reflective surface models, and it demonstrates large performance improvements.
Abstract: Face recognition algorithms have to deal with significant amounts of illumination variations between gallery and probe images. State-of-the-art commercial face recognition algorithms still struggle with this problem. We propose a new image preprocessing algorithm that compensates for illumination variations in images. From a single brightness image the algorithm first estimates the illumination field and then compensates for it to mostly recover the scene reflectance. Unlike previously proposed approaches for illumination compensation, our algorithm does not require any training steps, knowledge of 3D face models or reflective surface models. We apply the algorithm to face images prior to recognition. We demonstrate large performance improvements with several standard face recognition algorithms across multiple, publicly available face databases.
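
A generic version of the estimate-then-divide idea can be sketched as below: the illumination field is approximated by heavy low-pass filtering and divided out to approximate reflectance. This is a stand-in illustration under that assumption, not the authors' specific algorithm, and the parameter values are arbitrary.

```python
# Generic illumination compensation: estimate a smooth lighting field and
# divide it out to approximate scene reflectance (illustrative only).
import numpy as np
from scipy.ndimage import gaussian_filter

def compensate_illumination(gray, sigma=30.0, eps=1e-6):
    img = gray.astype(np.float64) + 1.0            # avoid division problems
    illumination = gaussian_filter(img, sigma)     # smooth estimate of lighting
    reflectance = img / (illumination + eps)       # approximate reflectance
    reflectance -= reflectance.min()               # rescale to 0-255 for later stages
    return (255.0 * reflectance / (reflectance.max() + eps)).astype(np.uint8)
```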

Book ChapterDOI
TL;DR: A 3D morphable model is used to compute 3D face models from three input images of each subject in the training database and the system achieved a recognition rate significantly better than a comparable global face recognition system.
Abstract: We present a novel approach to pose and illumination invariant face recognition that combines two recent advances in the computer vision field: component-based recognition and 3D morphable models. First, a 3D morphable model is used to generate 3D face models from three input images from each person in the training database. The 3D models are rendered under varying pose and illumination conditions to build a large set of synthetic images. These images are then used to train a component-based face recognition system. The resulting system achieved 90% accuracy on a database of 1200 real images of six people and significantly outperformed a comparable global face recognition system. The results show the potential of the combination of morphable models and component-based recognition towards pose and illumination invariant face recognition based on only three training images of each subject.

Book ChapterDOI
TL;DR: Some procedures for DNA-based cryptography based on one-time-pads that are in principle unbreakable are presented, and a class of DNA steganography systems, which secretly tag the input DNA and then hide it within collections of other DNA are examined.
Abstract: Recent research has considered DNA as a medium for ultra-scale computation and for ultra-compact information storage. One potential key application is DNA-based, molecular cryptography systems. We present some procedures for DNA-based cryptography based on one-time-pads that are in principle unbreakable. Practical applications of cryptographic systems based on one-time-pads are limited in conventional electronic media by the size of the one-time-pad; however DNA provides a much more compact storage medium, and an extremely small amount of DNA suffices even for huge one-time-pads. We detail procedures for two DNA one-time-pad encryption schemes: (i) a substitution method using libraries of distinct pads, each of which defines a specific, randomly generated, pair-wise mapping; and (ii) an XOR scheme utilizing molecular computation and indexed, random key strings. These methods can be applied either for the encryption of natural DNA or for artificial DNA encoding binary data. In the latter case, we also present a novel use of chip-based DNA micro-array technology for 2D data input and output. Finally, we examine a class of DNA steganography systems, which secretly tag the input DNA and then hide it within collections of other DNA. We consider potential limitations of these steganographic techniques, proving that in theory the message hidden with such a method can be recovered by an adversary. We also discuss various modified DNA steganography methods which appear to have improved security.
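
In purely digital terms, the XOR one-time-pad scheme reduces to the sketch below, with a toy 2-bits-per-base DNA encoding of the ciphertext; the wet-lab molecular implementation described in the paper is of course not captured, and the encoding convention is an assumption for illustration.

```python
# XOR one-time pad over bytes, plus a toy 2-bits-per-nucleotide DNA encoding.
import secrets

BASES = "ACGT"   # assumed mapping: 00->A, 01->C, 10->G, 11->T

def xor_otp(data: bytes, pad: bytes) -> bytes:
    assert len(pad) >= len(data), "pad must be at least as long as the message"
    return bytes(d ^ p for d, p in zip(data, pad))

def to_dna(data: bytes) -> str:
    return "".join(BASES[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))   # used once, then discarded
cipher = xor_otp(message, pad)
print(to_dna(cipher))                     # DNA-encoded ciphertext
print(xor_otp(cipher, pad))               # b'ATTACK AT DAWN'
```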

Book ChapterDOI
TL;DR: Two different strategies for fusing iris and face classifiers are used; the second treats the matching distances of the face and iris classifiers as a two-dimensional feature vector and uses a classifier such as Fisher's discriminant analysis or a radial basis function neural network to classify the vector as being genuine or an impostor.
Abstract: Face and iris identification have been employed in various biometric applications. Besides improving verification performance, the fusion of these two biometrics has several other advantages. We use two different strategies for fusing iris and face classifiers. The first strategy is to compute either an unweighted or weighted sum and to compare the result to a threshold. The second strategy is to treat the matching distances of face and iris classifiers as a two-dimensional feature vector and to use a classifier such as Fisher's discriminant analysis and a neural network with radial basis function (RBFNN) to classify the vector as being genuine or an impostor. We compare the results of the combined classifier with the results of the individual face and iris classifiers.
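
The first fusion strategy amounts to a weighted sum of the two matching distances compared against a threshold, as in the toy sketch below; the weights and threshold shown are arbitrary placeholders that would in practice be tuned on validation data.

```python
# Weighted-sum score fusion with a decision threshold (first strategy, sketched).
def fuse_and_decide(face_dist, iris_dist, w_face=0.4, w_iris=0.6, threshold=0.5):
    score = w_face * face_dist + w_iris * iris_dist   # smaller distance = better match
    return "genuine" if score < threshold else "impostor"

print(fuse_and_decide(face_dist=0.32, iris_dist=0.21))   # genuine
```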

Journal Article
TL;DR: In this paper, the authors present a new and efficient attack on this cryptosystem based on fast algorithms for computing Grobner bases, which was able to break the first HFE challenge in only two days of CPU time using the new algorithm F5 implemented in C.
Abstract: In this paper, we review and explain the existing algebraic cryptanalysis of multivariate cryptosystems from the hidden field equation (HFE) family. These cryptanalyses break cryptosystems in the HFE family by solving multivariate systems of equations. We present a new and efficient attack on this cryptosystem based on fast algorithms for computing Grobner bases. In particular, it was possible to break the first HFE challenge (80 bits) in only two days of CPU time by using the new algorithm F5 implemented in C. From a theoretical point of view we study the algebraic properties of the equations produced by instances of the HFE cryptosystem and show why they yield systems of equations easier to solve than random systems of quadratic equations of the same sizes. Moreover, we are able to bound the maximal degree occurring in the Grobner basis computation. As a consequence, we gain a deeper understanding of the algebraic cryptanalysis against these cryptosystems. We use this understanding to devise a specific algorithm based on sparse linear algebra. In general, we conclude that the cryptanalysis of HFE can be performed in polynomial time. We also revisit the security estimates for existing schemes in the *FE family.

Journal Article
TL;DR: In this article, the concept of related-key deriving (RKD) functions is introduced, and a theoretical investigation of the block-cipher design-goal of security against RKAs is initiated.
Abstract: We initiate a theoretical investigation of the popular block-cipher design-goal of security against related-key attacks (RKAs). We begin by introducing definitions for the concepts of PRPs and PRFs secure against classes of RKAs, each such class being specified by an associated set of related-key deriving (RKD) functions. Then for some such classes of attacks, we prove impossibility results, showing that no block-cipher can resist these attacks while, for other, related classes of attacks that include popular targets in the block cipher community, we prove possibility results that provide theoretical support for the view that security against them is achievable. Finally we prove security of various block-cipher based constructs that use related keys, including a tweakable block cipher given in [14].

Journal Article
TL;DR: It is shown that the length of hidden messages embedded in the least significant bits of signal samples can be estimated with relatively high precision and the new steganalytic approach is based on some statistical measures of sample pairs that are highly sensitive to LSB embedding operations.
Abstract: This paper introduces a new, principled approach to detecting LSB steganography in digital signals such as images and audio. It is shown that the length of hidden message embedded in the least significant bits of signal samples can be estimated with relatively high precision. The new steganalytic approach is based on some statistical measures of sample pairs that are highly sensitive to LSB embedding operations. The resulting detection algorithm is simple and fast. To evaluate the robustness of the proposed steganalytic approach, bounds on estimation errors are developed. Furthermore, the vulnerability of the new approach to possible attacks is also assessed, and counter measures are suggested.