Showing papers in "Journal of Cryptology in 2010"


Journal ArticleDOI
TL;DR: This paper gives a single coherent framework that encompasses all of the constructions of pairing-friendly elliptic curves currently existing in the literature and provides recommendations as to which pairing-friendly curves to choose to best satisfy a variety of performance and security requirements.
Abstract: Elliptic curves with small embedding degree and large prime-order subgroup are key ingredients for implementing pairing-based cryptographic systems. Such “pairing-friendly” curves are rare and thus require specific constructions. In this paper we give a single coherent framework that encompasses all of the constructions of pairing-friendly elliptic curves currently existing in the literature. We also include new constructions of pairing-friendly curves that improve on the previously known constructions for certain embedding degrees. Finally, for all embedding degrees up to 50, we provide recommendations as to which pairing-friendly curves to choose to best satisfy a variety of performance and security requirements.
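The defining quantity is easy to compute: the embedding degree of a curve whose subgroup has prime order r over $\mathbb{F}_{p}$ is the multiplicative order of p modulo r. A minimal Python sketch of that check (my illustration, not code from the paper; it assumes r is prime and does not divide p):

    def embedding_degree(p, r):
        # Smallest k with r | p^k - 1, i.e. the order of p in (Z/rZ)*.
        k, t = 1, p % r
        while t != 1:
            t = t * p % r
            k += 1
        return k

A curve is pairing-friendly when this k is small (the paper tabulates constructions for all k up to 50) while r remains large enough for security.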

542 citations


Journal ArticleDOI
TL;DR: An extremely strong type of attack is demonstrated, which requires knowledge of neither the specific plaintexts nor ciphertexts and works by merely monitoring the effect of the cryptographic process on the cache.
Abstract: We describe several software side-channel attacks based on inter-process leakage through the state of the CPU’s memory cache. This leakage reveals memory access patterns, which can be used for cryptanalysis of cryptographic primitives that employ data-dependent table lookups. The attacks allow an unprivileged process to attack other processes running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing, and virtualization. Some of our methods require only the ability to trigger services that perform encryption or MAC using the unknown key, such as encrypted disk partitions or secure network links. Moreover, we demonstrate an extremely strong type of attack, which requires knowledge of neither the specific plaintexts nor ciphertexts and works by merely monitoring the effect of the cryptographic process on the cache. We discuss in detail several attacks on AES and experimentally demonstrate their applicability to real systems, such as OpenSSL and Linux’s dm-crypt encrypted partitions (in the latter case, the full key was recovered after just 800 writes to the partition, taking 65 milliseconds). Finally, we discuss a variety of countermeasures which can be used to mitigate such attacks.
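To see concretely why data-dependent table lookups leak, consider the synchronous first-round AES analysis: with 4-byte table entries and 64-byte cache lines, observing which line the lookup T[p XOR k] touches reveals the top four bits of p XOR k. A toy model of that observation (my sketch; the line and entry sizes are assumed parameters):

    LINE_BYTES = 64    # assumed cache-line size
    ENTRY_BYTES = 4    # assumed T-table entry size

    def observed_line(p_byte, k_byte):
        # Cache line touched by a first-round lookup T[p ^ k]:
        # with 16 entries per line this equals the high 4 bits of p ^ k.
        return (p_byte ^ k_byte) // (LINE_BYTES // ENTRY_BYTES)

With known plaintext bytes, each such observation pins down the high nibble of a key byte; the attacks in the paper recover the remaining bits from later rounds.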

500 citations


Journal ArticleDOI
TL;DR: The notion of covert adversaries is introduced, which is believed to faithfully model the adversarial behavior in many commercial, political, and social settings, and it is shown that it is possible to obtain highly efficient protocols that are secure against such adversaries.
Abstract: In the setting of secure multiparty computation, a set of mutually distrustful parties wish to securely compute some joint function of their private inputs. The computation should be carried out in a secure way, meaning that no coalition of corrupted parties should be able to learn more than specified or somehow cause the result to be “incorrect.” Typically, corrupted parties are either assumed to be semi-honest (meaning that they follow the protocol specification) or malicious (meaning that they may deviate arbitrarily from the protocol). However, in many settings, the assumption regarding semi-honest behavior does not suffice and security in the presence of malicious adversaries is excessive and expensive to achieve. In this paper, we introduce the notion of covert adversaries, which we believe faithfully models the adversarial behavior in many commercial, political, and social settings. Covert adversaries have the property that they may deviate arbitrarily from the protocol specification in an attempt to cheat, but do not wish to be “caught” doing so. We provide a definition of security for covert adversaries and show that it is possible to obtain highly efficient protocols that are secure against such adversaries. We stress that in our definition, we quantify over all (possibly malicious) adversaries and do not assume that the adversary behaves in any particular way. Rather, we guarantee that if an adversary deviates from the protocol in a way that would enable it to “cheat” (meaning that it can achieve something that is impossible in an ideal model where a trusted party is used to compute the function), then the honest parties are guaranteed to detect this cheating with good probability. We argue that this level of security is sufficient in many settings.
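One standard mechanism for achieving the "caught with good probability" guarantee (an illustration of the flavor of covert security, not necessarily this paper's construction) is cut-and-choose: prepare n instances, let the verifier open all but one randomly chosen instance, and evaluate the survivor. Cheating in a single instance escapes only if that instance is the unopened one:

    import random

    def cheater_caught(n, cheat_index):
        # The verifier leaves one random instance unopened and opens the rest.
        evaluated = random.randrange(n)
        return cheat_index != evaluated   # opened instances expose cheating

    # Pr[caught] = (n - 1) / n, so even a handful of instances deters a
    # covert adversary that fears being identified.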

225 citations


Journal ArticleDOI
TL;DR: The authors construct efficient secure protocols for set intersection and pattern matching based on secure pseudorandom function evaluation. Their results are presented in two adversary models: a relaxed malicious model in which one corruption case is simulatable and the other guarantees only privacy, and the covert-adversary model in which the protocol is fully simulatable.
Abstract: In this paper, we construct efficient secure protocols for set intersection and pattern matching. Our protocols for securely computing the set intersection functionality are based on secure pseudorandom function evaluations, in contrast to previous protocols that are based on polynomials. In addition to the above, we also use secure pseudorandom function evaluation in order to achieve secure pattern matching. In this case, we utilize specific properties of the Naor–Reingold pseudorandom function in order to achieve high efficiency. Our results are presented in two adversary models. Our protocol for secure pattern matching and one of our protocols for set intersection achieve security against malicious adversaries under a relaxed definition where one corruption case is simulatable and, for the other, only privacy (formalized through indistinguishability) is guaranteed. We also present a protocol for set intersection that is fully simulatable in the model of covert adversaries. Loosely speaking, this means that a malicious adversary can cheat but will then be caught with good probability.
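Abstracting away the oblivious evaluation, the PRF-based intersection idea is: P1 holds a PRF key and publishes the PRF values of its set; P2 obtains the PRF values of its own elements through oblivious evaluation and intersects the tags. A local, insecure sketch with HMAC-SHA256 standing in for the Naor–Reingold PRF (the names and the PRF choice are my assumptions):

    import hmac, hashlib

    def prf(key, elem):
        return hmac.new(key, elem.encode(), hashlib.sha256).digest()

    def intersect(p1_set, p2_set, key):
        tags = {prf(key, x) for x in p1_set}   # sent by P1
        # In the protocol, P2 learns prf(key, y) obliviously, never the key.
        return {y for y in p2_set if prf(key, y) in tags}

The actual protocols exploit the algebraic structure of Naor–Reingold to evaluate the PRF obliviously; that interaction is what this sketch omits.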

119 citations


Journal ArticleDOI
TL;DR: Hopper and Blum (Asiacrypt 2001) and Juels and Weis (Crypto 2005) proposed two shared-key authentication protocols (HB and HB+, respectively) whose extremely low computational cost makes them attractive for low-cost devices such as radio-frequency identification (RFID) tags.
Abstract: Hopper and Blum (Asiacrypt 2001) and Juels and Weis (Crypto 2005) recently proposed two shared-key authentication protocols--HB and HB+, respectively--whose extremely low computational cost makes them attractive for low-cost devices such as radio-frequency identification (RFID) tags. The security of these protocols is based on the conjectured hardness of the "learning parity with noise" (LPN) problem, which is equivalent to the problem of decoding random binary linear codes. The HB protocol is proven secure against a passive (eavesdropping) adversary, while the HB+ protocol is proven secure against active attacks.
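A single HB round is a noisy inner product over GF(2): the reader sends a random challenge a, and the tag replies $z=\langle a,x\rangle\oplus\nu$, where the noise bit ν equals 1 with probability η; the reader accepts a tag whose answers disagree with $\langle a,x\rangle$ in roughly an η fraction of rounds. A sketch with illustrative parameters:

    import random

    def hb_round(x, eta=0.125):
        a = [random.randrange(2) for _ in x]       # reader's challenge
        v = int(random.random() < eta)             # Bernoulli(eta) noise bit
        z = (sum(ai & xi for ai, xi in zip(a, x)) % 2) ^ v
        return a, z

    # Recovering x from many (a, z) pairs is exactly the LPN problem.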

87 citations


Journal ArticleDOI
TL;DR: An honest verifier zero-knowledge argument is suggested for the correctness of a shuffle of homomorphic encryptions based on ElGamal encryption, which is more efficient than previous schemes both in terms of communication and computation.
Abstract: A shuffle consists of a permutation and re-encryption of a set of input ciphertexts. One application of shuffles is to build mix-nets. We suggest an honest verifier zero-knowledge argument for the correctness of a shuffle of homomorphic encryptions. Our scheme is more efficient than previous schemes both in terms of communication and computation. The honest verifier zero-knowledge argument has a size that is independent of the actual cryptosystem being used and will typically be smaller than the size of the shuffle itself. Moreover, our scheme is well suited for the use of multi-exponentiation and batch-verification techniques. Additionally, we suggest a more efficient honest verifier zero-knowledge argument for a commitment containing a permutation of a set of publicly known messages. We also suggest an honest verifier zero-knowledge argument for the correctness of a combined shuffle-and-decrypt operation that can be used in connection with decrypting mix-nets based on ElGamal encryption. All our honest verifier zero-knowledge arguments can be turned into honest verifier zero-knowledge proofs. We use homomorphic commitments as an essential part of our schemes. When the commitment scheme is statistically hiding we obtain statistical honest verifier zero-knowledge arguments; when the commitment scheme is statistically binding, we obtain computational honest verifier zero-knowledge proofs.
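The operation whose correctness is being argued is easy to state in code: permute the input ciphertexts and re-randomize each one, so the output decrypts to the same multiset of plaintexts while being unlinkable to the inputs. A toy ElGamal shuffle (sketch only; the paper's contribution is the zero-knowledge argument that this was done honestly, which is omitted here):

    import random

    def reencrypt(ct, g, y, p, q):
        # ElGamal re-randomization under public key y = g^sk, group order q.
        a, b = ct
        r = random.randrange(q)
        return (a * pow(g, r, p) % p, b * pow(y, r, p) % p)

    def shuffle(cts, g, y, p, q):
        pi = random.sample(range(len(cts)), len(cts))   # secret permutation
        return [reencrypt(cts[i], g, y, p, q) for i in pi]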

81 citations


Journal ArticleDOI
TL;DR: It is shown that, in the ideal-cipher model, the 12 schemes considered secure by PGV really are secure; an additional 8 of the PGV schemes are shown to be just as collision resistant (up to a constant).
Abstract: Preneel, Govaerts, and Vandewalle (1993) considered the 64 most basic ways to construct a hash function $H:\{0,1\}^{*}\rightarrow\{0,1\}^{n}$ from a blockcipher $E:\{0,1\}^{n}\times\{0,1\}^{n}\rightarrow\{0,1\}^{n}$. They regarded 12 of these 64 schemes as secure, though no proofs or formal claims were given. Here we provide a proof-based treatment of the PGV schemes. We show that, in the ideal-cipher model, the 12 schemes considered secure by PGV really are secure: we give tight upper and lower bounds on their collision resistance. Furthermore, by stepping outside of the Merkle–Damgård approach to analysis, we show that an additional 8 of the PGV schemes are just as collision resistant (up to a constant). Nonetheless, we are able to differentiate among the 20 collision-resistant schemes by considering their preimage resistance: only the 12 initial schemes enjoy optimal preimage resistance. Our work demonstrates that proving ideal-cipher-model bounds is a feasible and useful step for understanding the security of blockcipher-based hash-function constructions.
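For concreteness, Davies–Meyer is one of the 12 schemes PGV deemed secure: $H_{i}=E_{m_{i}}(H_{i-1})\oplus H_{i-1}$. The other modes vary which of the chaining value, the message block, and their XOR plays the key, the plaintext, and the feed-forward. A sketch over an abstract blockcipher interface (E is a placeholder, not a real cipher):

    def davies_meyer(E, iv, msg_blocks):
        # H_i = E(key=m_i, x=H_{i-1}) ^ H_{i-1}, iterated Merkle-Damgard style.
        h = iv
        for m in msg_blocks:
            h = E(m, h) ^ h   # the feed-forward XOR blocks inversion of E
        return h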

69 citations


Journal ArticleDOI
TL;DR: The strong Diffie–Hellman problem and related problems with public $g,g^{\alpha},\ldots,g^{\alpha^{d}}$ have computational complexity up to $O(\sqrt{d}/\log p)$ less than the generic algorithm complexity of the discrete logarithm problem when $p-1$ has a divisor $d\leq p^{1/2}$.
Abstract: Let g be an element of prime order p in an abelian group, and let $\alpha\in\mathbb{Z}_{p}$. We show that if $g,g^{\alpha}$, and $g^{\alpha^{d}}$ are given for a positive divisor d of p−1, the secret key α can be computed deterministically in $O(\sqrt{p/d}+\sqrt{d})$ exponentiations by using $O(\max\{\sqrt{p/d},\sqrt{d}\})$ storage. If $g^{\alpha^{i}}$ (i=0,1,2,…,2d) is given for a positive divisor d of p+1, α can be computed in $O(\sqrt{p/d}+d)$ exponentiations by using $O(\max\{\sqrt{p/d},\sqrt{d}\})$ storage. We also propose space-efficient but probabilistic algorithms for the same problem, which have the same computational complexities as the deterministic algorithms. As applications of the proposed algorithms, we show that the strong Diffie–Hellman problem and related problems with public $g^{\alpha},\ldots,g^{\alpha^{d}}$ have computational complexity up to $O(\sqrt{d}/\log p)$ less than the generic algorithm complexity of the discrete logarithm problem when p−1 (resp. p+1) has a divisor $d\leq p^{1/2}$ (resp. $d\leq p^{1/3}$). Under the same conditions for d, the algorithm is also applicable to recovering the secret key in $O(\sqrt{p/d}\cdot\log p)$ for Boldyreva's blind signature scheme and the textbook ElGamal scheme when d signature or decryption queries are allowed.
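The workhorse inside these complexity claims is generic square-root discrete log run in subgroups whose sizes are tuned by the divisor d. For reference, a baby-step giant-step sketch in a group of known order n (a standard building block, shown here for orientation rather than as the paper's algorithm):

    import math

    def bsgs(g, h, n, p):
        # Find x < n with g^x = h (mod p) in O(sqrt(n)) time and space.
        m = math.isqrt(n) + 1
        baby = {pow(g, j, p): j for j in range(m)}
        step = pow(g, -m, p)          # giant-step factor (Python 3.8+)
        cur = h
        for i in range(m):
            if cur in baby:
                return i * m + baby[cur]
            cur = cur * step % p
        return None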

62 citations


Journal ArticleDOI
TL;DR: It is shown that a five-layer scheme with 128-bit plaintexts and 8-bit S-boxes is surprisingly weak against what is called a multiset attack, even when all the S- boxes and affine mappings are key dependent (and thus completely unknown to the attacker).
Abstract: In this paper we consider the security of block ciphers which contain alternate layers of invertible S-boxes and affine mappings (there are many popular cryptosystems which use this structure, including the winner of the AES competition, Rijndael). We show that a five-layer scheme with 128-bit plaintexts and 8-bit S-boxes is surprisingly weak against what we call a multiset attack, even when all the S-boxes and affine mappings are key dependent (and thus completely unknown to the attacker). We tested the multiset attack with an actual implementation, which required just $2^{16}$ chosen plaintexts and a few seconds on a single PC to find the $2^{17}$ bits of information in all the unknown elements of the scheme.
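The basic invariant behind such attacks is the saturation property: feed all 256 values into one byte position and, because every layer's S-boxes are bijective, the multiset at suitable intermediate positions remains a permutation of {0,...,255}, whose XOR is zero. A one-line check of the invariant (illustration only, not the attack itself):

    from functools import reduce

    def is_balanced(outputs):
        # XOR over a multiset; any permutation of 0..255 XORs to 0.
        return reduce(lambda a, b: a ^ b, outputs) == 0

    # For every bijective 8-bit sbox, is_balanced([sbox[v] for v in range(256)])
    # holds; tracking where this property survives or dies locates key material.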

44 citations


Journal ArticleDOI
TL;DR: In this paper, two new variants of obfuscation definitions are proposed and investigated, which are simulation-based (i.e., require the existence of a simulator that can efficiently generate fake obfuscations) and demand only security on average.
Abstract: Loosely speaking, an obfuscation O of a function f should satisfy two requirements: firstly, using O, it should be possible to evaluate f; secondly, O should not reveal anything about f that cannot be learnt from oracle access to f alone. Several definitions for obfuscation exist. However, most of them are very hard to satisfy, even when focusing on specific applications such as obfuscating a point function (e.g., for authentication purposes). In this work, we propose and investigate two new variants of obfuscation definitions. Our definitions are simulation-based (i.e., require the existence of a simulator that can efficiently generate fake obfuscations) and demand only security on average (over the choice of the obfuscated function). We stress that our notions are not free from generic impossibilities: there exist natural classes of function families that cannot be securely obfuscated. Hence we cannot hope for a general-purpose obfuscator with respect to our definition. However, we prove that there also exist several natural classes of functions for which our definitions yield interesting results. Specifically, we show that our definitions have the following properties. Usefulness: securely obfuscating (the encryption function of) a secure private-key encryption scheme yields a secure public-key encryption scheme. Achievability: there exist obfuscatable private-key encryption schemes; also, a point function chosen uniformly at random can easily be obfuscated with respect to the weaker one (but not the stronger one) of our definitions. (Previous work focused on obfuscating point functions from arbitrary distributions.) Generic impossibilities: there exist unobfuscatable private-key encryption schemes; furthermore, pseudorandom functions cannot be obfuscated with respect to our definitions. Our results show that, while it is hard to avoid generic impossibilities, useful and reasonable obfuscation definitions are possible when considering specific tasks (i.e., function families).
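For intuition, the classic point-function construction matches the simulation-based style: publish a salted hash of the secret point, and evaluate by comparing hashes. Since the obfuscation of a uniformly random point looks like random bits, a simulator can emit a hash of garbage. A sketch (random-oracle flavored; using SHA-256 as the oracle is my assumption):

    import hashlib, os

    def obfuscate_point(x_star):
        salt = os.urandom(16)
        return salt, hashlib.sha256(salt + x_star).digest()

    def evaluate(obf, x):
        salt, tag = obf
        return hashlib.sha256(salt + x).digest() == tag   # True iff x == x_star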

Journal ArticleDOI
TL;DR: The main contribution of the paper is a modular and generic proof of security for a slightly modified version of TLS, showing that the protocol is secure even if the pre-master and master keys satisfy only weak security requirements.
Abstract: We study the security of the widely deployed Secure Sockets Layer/Transport Layer Security (TLS) key agreement protocol. Our analysis identifies, justifies, and exploits the modularity present in the design of the protocol: the application keys offered to higher-level applications are obtained from a master key, which in turn is derived through interaction from a pre-master key. We define models (following well-established paradigms) that clarify the security level enjoyed by each of these types of keys. We capture the realistic setting where only one of the two parties involved in the execution of the protocol (namely the server) has a certified public key, and where the same master key is used to generate multiple application keys. The main contribution of the paper is a modular and generic proof of security for a slightly modified version of TLS. Our proofs show that the protocol is secure even if the pre-master and master keys satisfy only weak security requirements. Our proofs make crucial use of modelling the key derivation function of TLS as a random oracle.
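The modularity being exploited is the layered key schedule: a pre-master key yields the master key, which yields the application keys, each step through the key derivation function that the proof models as a random oracle. A schematic rendition with HMAC-SHA256 standing in for TLS's PRF (the labels and lengths here are illustrative, not the exact TLS encoding):

    import hmac, hashlib, os

    def kdf(secret, label, seed):
        return hmac.new(secret, label + seed, hashlib.sha256).digest()

    client_rand, server_rand = os.urandom(32), os.urandom(32)
    pre_master = os.urandom(48)   # output of the key-exchange step
    master = kdf(pre_master, b"master secret", client_rand + server_rand)
    app_key = kdf(master, b"key expansion", server_rand + client_rand)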

Journal ArticleDOI
TL;DR: This work proves that d-multiplicative schemes do not exist if the set of players is covered by d unauthorized sets, which implies a limitation on the usefulness of secret sharing in the context of MPC.
Abstract: A multiplicative secret sharing scheme allows players to multiply two secret-shared field elements by locally converting their shares of the two secrets into an additive sharing of their product. Multiplicative secret sharing serves as a central building block in protocols for secure multiparty computation (MPC). Motivated by open problems in the area of MPC, we introduce the more general notion of d-multiplicative secret sharing, which allows players to locally multiply d shared secrets, and study the type of access structures for which such secret sharing schemes exist. While it is easy to show that d-multiplicative schemes exist if no d unauthorized sets of players cover the whole set of players, the converse direction is less obvious for d≥3. Our main result is a proof of this converse direction, namely that d-multiplicative schemes do not exist if the set of players is covered by d unauthorized sets. In particular, t-private d-multiplicative secret sharing among k players is possible if and only if k>dt. Our negative result holds for arbitrary (possibly inefficient or even nonlinear) secret sharing schemes and implies a limitation on the usefulness of secret sharing in the context of MPC. Its proof relies on a quantitative argument inspired by communication complexity lower bounds.
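The threshold k > dt is exactly the Shamir picture: multiplying d degree-t share polynomials pointwise yields shares of a degree-dt polynomial whose constant term is the product, and dt+1 evaluation points determine it. A small-field sketch of why the bound is tight for Shamir sharing (my illustration; the paper's result covers arbitrary schemes):

    import random

    P = 2**31 - 1   # toy prime field

    def share(secret, t, k):
        # Degree-t polynomial with constant term `secret`, evaluated at 1..k.
        coeffs = [secret] + [random.randrange(P) for _ in range(t)]
        return [sum(c * pow(i, e, P) for e, c in enumerate(coeffs)) % P
                for i in range(1, k + 1)]

    def reconstruct(points):
        # Lagrange interpolation at 0; needs degree+1 correct (x, y) points.
        total = 0
        for xi, yi in points:
            num = den = 1
            for xj, _ in points:
                if xj != xi:
                    num = num * -xj % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, -1, P)) % P
        return total

    # Multiply d share vectors pointwise to share the product; reconstruction
    # from k points succeeds iff k >= d*t + 1, i.e. k > d*t.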

Journal ArticleDOI
TL;DR: A new encryption scheme is presented which is secure against adaptive chosen-ciphertext attack (or CCA2-secure) in the standard model (i.e., without the use of a random oracle); it is also shown that security holds if projective hash families are used (as in the original Cramer–Shoup scheme) and, in the random oracle model, under the weaker Computational Diffie–Hellman (CDH) Assumption.
Abstract: We present a new encryption scheme which is secure against adaptive chosen-ciphertext attack (or CCA2-secure) in the standard model (i.e., without the use of random oracle). Our scheme is a hybrid one: it first uses a public-key step (the Key Encapsulation Module or KEM) to encrypt a random key, which is then used to encrypt the actual message using a symmetric encryption algorithm (the Data Encapsulation Module or DEM). Our scheme is a modification of the hybrid scheme presented by Shoup in (Eurocrypt'97, Springer LNCS, vol. 1233, pp. 256–266, 1997) (based on the Cramer–Shoup scheme in CRYPTO'98, Springer LNCS, vol. 1462, pp. 13–25, 1998). Its major practical advantage is that it saves the computation of one exponentiation and produces shorter ciphertexts. This efficiency improvement is the result of a surprising observation: previous hybrid schemes were proven secure by proving that both the KEM and the DEM were CCA2-secure. On the other hand, our KEM is not CCA2-secure, yet the whole scheme is, assuming the Decisional Diffie–Hellman (DDH) Assumption. Finally we generalize our new scheme in two ways: (i) we show that security holds also if we use projective hash families (as the original Cramer–Shoup), and (ii) we show that in the random oracle model we can prove security under the weaker Computational Diffie–Hellman (CDH) Assumption.
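To fix notation, the generic hybrid flow looks as follows (a sketch of the KEM/DEM composition in the abstract, not the paper's specific DDH-based instantiation):

    def hybrid_encrypt(kem_encap, dem_encrypt, pk, msg):
        key, encapsulation = kem_encap(pk)   # public-key step: fresh session key
        return encapsulation, dem_encrypt(key, msg)

    def hybrid_decrypt(kem_decap, dem_decrypt, sk, ct):
        encapsulation, body = ct
        return dem_decrypt(kem_decap(sk, encapsulation), body)

The paper's surprise lives inside kem_encap: its KEM is not CCA2-secure on its own, yet the composition is, under DDH.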

Journal ArticleDOI
TL;DR: In this paper, the authors show that the usual set-up assumptions used for UC protocols (e.g., a common reference string) are not sufficient to achieve long-term secure and composable protocols for commitments or zero-knowledge protocols.
Abstract: Algorithmic progress and future technological advances threaten today's cryptographic protocols. This may allow adversaries to break a protocol retrospectively by breaking the underlying complexity assumptions long after the execution of the protocol. Long-term secure protocols, protocols that after the end of the execution do not reveal any information to a then possibly unlimited adversary, could meet this threat. On the other hand, in many applications, it is necessary that a protocol is secure not only when executed alone, but within arbitrary contexts. The established notion of universal composability (UC) captures this requirement. This is the first paper to study protocols which are simultaneously long-term secure and universally composable. We show that the usual set-up assumptions used for UC protocols (e.g., a common reference string) are not sufficient to achieve long-term secure and composable protocols for commitments or zero-knowledge protocols. We give practical alternatives (e.g., signature cards) to these usual set-up assumptions and show that these enable the implementation of the important primitives of commitment and zero-knowledge.

Journal ArticleDOI
TL;DR: This paper investigates two-party and multi-party protocols for both the semi-honest and malicious cases and proves that the problem can be solved in a number of rounds that is logarithmic in k, where each round requires communication and computation cost that is linear in b, the number of bits needed to describe each element of the input data.
Abstract: We consider the problem of securely computing the kth-ranked element of the union of two or more large, confidential data sets. This is a fundamental question motivated by many practical contexts. For example, two competitive companies may wish to compute the median salary of their combined employee populations without revealing to each other the exact salaries of their employees. While protocols do exist for computing the kth-ranked element, they require time that is at least linear in the sum of the sizes of their combined inputs. This paper investigates two-party and multi-party protocols for both the semi-honest and malicious cases. In the two-party setting, we prove that the problem can be solved in a number of rounds that is logarithmic in k, where each round requires communication and computation cost that is linear in b, the number of bits needed to describe each element of the input data. In the multi-party setting, we prove that the number of rounds is linear in b, where each round has overhead proportional to b multiplied by the number of parties. The multi-party protocol can be used in the two-party case. The overhead introduced by our protocols closely matches the communication complexity lower bound. Our protocols can handle a malicious adversary via simple consistency checks.
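The multi-party round structure can be pictured as a binary search over the b-bit value domain: each round proposes a pivot, the parties securely aggregate how many of their elements lie at or below it, and half of the domain is discarded, for at most b rounds. An insecure, plaintext rendition of that skeleton (the real protocol replaces the sum with a secure computation and adds consistency checks):

    def kth_element(datasets, k, bits):
        lo, hi = 0, (1 << bits) - 1
        while lo < hi:                        # at most `bits` rounds
            mid = (lo + hi) // 2
            below = sum(sum(1 for x in ds if x <= mid) for ds in datasets)
            if below >= k:                    # securely aggregated in the protocol
                hi = mid
            else:
                lo = mid + 1
        return lo   # the k-th ranked element of the union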

Journal ArticleDOI
TL;DR: This paper contains the state-of-the-art cryptanalytic results on MD2, in particular collision and preimage attacks on the full hash function, the latter having complexity $2^{73}$, which should be compared to a brute-force attack of complexity $2^{128}$.
Abstract: This paper considers the hash function MD2 which was developed by Ron Rivest in 1989. Despite its age, MD2 has withstood cryptanalytic attacks until recently. This paper contains the state-of-the-art cryptanalytic results on MD2, in particular collision and preimage attacks on the full hash function, the latter having complexity $2^{73}$, which should be compared to a brute-force attack of complexity $2^{128}$.

Journal ArticleDOI
TL;DR: This work proves, under the strong RSA assumption, that the group of invertible integers modulo the product of two safe primes is pseudo-free; this is the first provably secure construction of pseudo-free Abelian groups under a standard cryptographic assumption.
Abstract: We prove, under the strong RSA assumption, that the group of invertible integers modulo the product of two safe primes is pseudo-free. More specifically, no polynomial-time algorithm can output (with non-negligible probability) an unsatisfiable system of equations over the free Abelian group generated by the symbols $g_{1},\ldots,g_{n}$, together with a solution modulo the product of two randomly chosen safe primes when $g_{1},\ldots,g_{n}$ are instantiated to randomly chosen quadratic residues. Ours is the first provably secure construction of pseudo-free Abelian groups under a standard cryptographic assumption and resolves a conjecture of Rivest (Theory of Cryptography Conference, TCC 2004, LNCS, vol. 2951, pp. 505–521, 2004).

Journal ArticleDOI
TL;DR: A natural subclass of black-box simulators, called normal, is identified, and it is proved that security composition theorems of the type known for strict PPT hold for the suggested restricted definitions of expected PPT.
Abstract: This paper concerns the possibility of developing a coherent theory of security when feasibility is associated with expected probabilistic polynomial time (expected PPT). The source of difficulty is that the known definitions of expected PPT strategies (i.e., expected PPT interactive machines) do not support natural results of the type presented below. To overcome this difficulty, we suggest new definitions of expected PPT strategies, which are more restrictive than the known definitions (but nevertheless extend the notion of expected PPT noninteractive algorithms). We advocate the conceptual adequacy of these definitions and point out their technical advantages. Specifically, identifying a natural subclass of black-box simulators, called normal, we prove security composition theorems of the type known for strict PPT. (A normal black-box simulator is required to make an expected polynomial number of steps when given oracle access to any strategy, where each oracle call is counted as a single step; this natural property is satisfied by most known simulators and is easy to verify.)

Journal ArticleDOI
TL;DR: This work shows how to overcome the difficulties of constructing prime-order ECs from Weber polynomials, provides efficient generation methods supported by a thorough experimental study, and investigates the time efficiency of the CM variant under four different implementations of a crucial step of the variant.
Abstract: We consider the generation of prime-order elliptic curves (ECs) over a prime field $\mathbb{F}_{p}$ using the Complex Multiplication (CM) method. A crucial step of this method is to compute the roots of a special type of class field polynomials, the most commonly used being the Hilbert and Weber ones. These polynomials are uniquely determined by the CM discriminant D. In this paper, we consider a variant of the CM method for constructing elliptic curves (ECs) of prime order using Weber polynomials. In attempting to construct prime-order ECs using Weber polynomials, two difficulties arise (in addition to the necessary transformations of the roots of such polynomials to those of their Hilbert counterparts). The first one is that the requirement of prime order necessitates that $D\equiv 3\pmod{8}$, which gives Weber polynomials with degree three times larger than the degree of their corresponding Hilbert polynomials (a fact that could affect efficiency). The second difficulty is that these Weber polynomials do not have roots in $\mathbb{F}_{p}$. In this work, we show how to overcome the above difficulties and provide efficient methods for generating ECs of prime order, supported by a thorough experimental study. In particular, we show that such Weber polynomials have roots in the extension field $\mathbb{F}_{p^{3}}$ and present a set of transformations for mapping roots of Weber polynomials in $\mathbb{F}_{p^{3}}$ to roots of their corresponding Hilbert polynomials in $\mathbb{F}_{p}$. We also show how an alternative class of polynomials, with degree equal to their corresponding Hilbert counterparts (and hence having roots in $\mathbb{F}_{p}$), can be used in the CM method to generate prime-order ECs. We conduct an extensive experimental study comparing the efficiency of using this alternative class against the use of the aforementioned Weber polynomials. Finally, we investigate the time efficiency of the CM variant under four different implementations of a crucial step of the variant and demonstrate the superiority of two of them.
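The final step of the CM method is mechanical once a root of the class polynomial is found: a root j of the Hilbert polynomial modulo p is a j-invariant, and for j different from 0 and 1728 the curve $y^{2}=x^{3}+3kx+2k$ with $k=j/(1728-j)$ realizes it (one then counts points and twists as needed to hit the desired prime order). A sketch of that conversion (Python 3.8+ for the modular inverse):

    def curve_from_j(j, p):
        # Coefficients (a, b) of y^2 = x^3 + ax + b over F_p with j-invariant j,
        # valid for j not congruent to 0 or 1728 mod p.
        k = j * pow(1728 - j, -1, p) % p
        return 3 * k % p, 2 * k % p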

Journal ArticleDOI
TL;DR: This work puts forward stronger definitions of privacy for search problems that provide reasonable privacy guarantees, and supplies algorithmic machinery for designing protocols that meet them for a broad selection of search problems.
Abstract: Secure multiparty computation allows a group of distrusting parties to jointly compute a (possibly randomized) function of their inputs. However, it is often the case that the parties executing a computation try to solve a search problem, where one input may have a multitude of correct answers—such as when the parties compute a shortest path in a graph or find a solution of a set of linear equations. The algorithm for arbitrarily picking one output from the solution set has significant implications on the privacy of the computation. A minimal privacy requirement was put forward by Beimel et al. [STOC 2006] with focus on proving impossibility results. Their definition, however, guarantees a very weak notion of privacy, which is probably insufficient for most applications. In this work we aim for stronger definitions of privacy for search problems that provide reasonable privacy. We give two alternative definitions and discuss their privacy guarantees. We also supply algorithmic machinery for designing such protocols for a broad selection of search problems.
