
Showing papers on "Encryption" published in 2016


Proceedings Article
19 Jun 2016
TL;DR: It is shown that the cloud service is capable of applying the neural network to the encrypted data to make encrypted predictions and return them in encrypted form, which allows high-throughput, accurate, and private predictions.
Abstract: Applying machine learning to a problem which involves medical, financial, or other types of sensitive data, not only requires accurate predictions but also careful attention to maintaining data privacy and security. Legal and ethical requirements may prevent the use of cloud-based machine learning solutions for such tasks. In this work, we will present a method to convert learned neural networks to CryptoNets, neural networks that can be applied to encrypted data. This allows a data owner to send their data in an encrypted form to a cloud service that hosts the network. The encryption ensures that the data remains confidential since the cloud does not have access to the keys needed to decrypt it. Nevertheless, we will show that the cloud service is capable of applying the neural network to the encrypted data to make encrypted predictions, and also return them in encrypted form. These encrypted predictions can be sent back to the owner of the secret key who can decrypt them. Therefore, the cloud service does not gain any information about the raw data nor about the prediction it made. We demonstrate CryptoNets on the MNIST optical character recognition tasks. CryptoNets achieve 99% accuracy and can make around 59000 predictions per hour on a single PC. Therefore, they allow high throughput, accurate, and private predictions.

1,246 citations
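
As a rough illustration of the idea described above (a sketch, not the authors' CryptoNets code; the layer sizes and weights below are made up), the key design constraint is that the network uses only additions and multiplications, e.g. a square activation instead of ReLU, so that a leveled homomorphic encryption scheme can in principle evaluate it on ciphertexts:

    import numpy as np

    def square_activation(x):
        # Degree-2 polynomial activation: expressible with only + and *,
        # which is what homomorphic encryption schemes support natively.
        return x * x

    def forward(x, w1, b1, w2, b2):
        # Tiny two-layer network built solely from additions and multiplications.
        h = square_activation(x @ w1 + b1)
        return h @ w2 + b2          # logits; argmax is taken after decryption

    rng = np.random.default_rng(0)
    x = rng.random((1, 784))                         # a flattened 28x28 image
    w1, b1 = rng.random((784, 32)), rng.random(32)   # toy weights
    w2, b2 = rng.random((32, 10)), rng.random(10)
    print(forward(x, w1, b1, w2, b2).shape)          # (1, 10)

In the actual scheme, every arithmetic operation in forward would be carried out by the cloud on encrypted values; only the data owner holding the secret key can decrypt the resulting prediction.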


Journal ArticleDOI
TL;DR: This paper constructs a special tree-based index structure and proposes a “Greedy Depth-first Search” algorithm to provide efficient multi-keyword ranked search over encrypted cloud data, which simultaneously supports dynamic update operations like deletion and insertion of documents.
Abstract: Due to the increasing popularity of cloud computing, more and more data owners are motivated to outsource their data to cloud servers for great convenience and reduced cost in data management. However, sensitive data should be encrypted before outsourcing for privacy requirements, which obsoletes data utilization like keyword-based document retrieval. In this paper, we present a secure multi-keyword ranked search scheme over encrypted cloud data, which simultaneously supports dynamic update operations like deletion and insertion of documents. Specifically, the vector space model and the widely-used TF $\;\times\;$ IDF model are combined in the index construction and query generation. We construct a special tree-based index structure and propose a “Greedy Depth-first Search” algorithm to provide efficient multi-keyword ranked search. The secure kNN algorithm is utilized to encrypt the index and query vectors, and meanwhile ensure accurate relevance score calculation between encrypted index and query vectors. In order to resist statistical attacks, phantom terms are added to the index vector for blinding search results. Due to the use of our special tree-based index structure, the proposed scheme can achieve sub-linear search time and deal with the deletion and insertion of documents flexibly. Extensive experiments are conducted to demonstrate the efficiency of the proposed scheme.

976 citations
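
To make the relevance computation concrete, here is a minimal plaintext sketch of TF×IDF scoring by inner product, the quantity the paper's secure kNN encryption is designed to preserve (the document contents and helper names are invented for illustration):

    import math
    from collections import Counter

    docs = {"d1": "cloud data encryption search".split(),
            "d2": "ranked keyword search over encrypted cloud data".split()}
    vocab = sorted({w for words in docs.values() for w in words})
    idf = {w: math.log(len(docs) / sum(w in words for words in docs.values()))
           for w in vocab}

    def tfidf_vector(words):
        tf = Counter(words)
        return [tf[w] / len(words) * idf[w] for w in vocab]

    def query_vector(keywords):
        return [idf[w] if w in keywords else 0.0 for w in vocab]

    query = {"encrypted", "ranked"}
    scores = {d: sum(a * b for a, b in zip(tfidf_vector(ws), query_vector(query)))
              for d, ws in docs.items()}
    print(sorted(scores, key=scores.get, reverse=True))   # ranked document list

In the scheme itself, such index and query vectors are encrypted with the secure kNN technique and organized in the tree-based index, so the server can compute the scores and traverse the tree without seeing the underlying keywords.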


Journal ArticleDOI
TL;DR: This paper studies and solves the problem of personalized multi-keyword ranked search over encrypted data (PRSE) while preserving privacy in cloud computing with the help of the semantic ontology WordNet, and proposes two PRSE schemes for different search intentions.
Abstract: In cloud computing, searchable encryption scheme over outsourced data is a hot research field. However, most existing works on encrypted search over outsourced cloud data follow the model of “one size fits all” and ignore personalized search intention. Moreover, most of them support only exact keyword search, which greatly affects data usability and user experience. So how to design a searchable encryption scheme that supports personalized search and improves user search experience remains a very challenging task. In this paper, for the first time, we study and solve the problem of personalized multi-keyword ranked search over encrypted data (PRSE) while preserving privacy in cloud computing. With the help of the semantic ontology WordNet, we build a user interest model for each individual user by analyzing the user’s search history, and adopt a scoring mechanism to express user interest smartly. To address the limitations of the “one size fits all” model and exact keyword search, we propose two PRSE schemes for different search intentions. Extensive experiments on a real-world dataset validate our analysis and show that our proposed solution is very efficient and effective.

665 citations


Journal ArticleDOI
TL;DR: A unique watermark is directly embedded into the encrypted images by the cloud server before the images are sent to the query user, and when an image copy is found, the unlawful query user who distributed the image can be traced through watermark extraction.
Abstract: With the increasing importance of images in people’s daily life, content-based image retrieval (CBIR) has been widely studied. Compared with text documents, images consume much more storage space. Hence, its maintenance is considered to be a typical example for cloud storage outsourcing. For privacy-preserving purposes, sensitive images, such as medical and personal images, need to be encrypted before outsourcing, which makes the CBIR technologies in plaintext domain to be unusable. In this paper, we propose a scheme that supports CBIR over encrypted images without leaking the sensitive information to the cloud server. First, feature vectors are extracted to represent the corresponding images. After that, the pre-filter tables are constructed by locality-sensitive hashing to increase search efficiency. Moreover, the feature vectors are protected by the secure kNN algorithm, and image pixels are encrypted by a standard stream cipher. In addition, considering the case that the authorized query users may illegally copy and distribute the retrieved images to someone unauthorized, we propose a watermark-based protocol to deter such illegal distributions. In our watermark-based protocol, a unique watermark is directly embedded into the encrypted images by the cloud server before images are sent to the query user. Hence, when image copy is found, the unlawful query user who distributed the image can be traced by the watermark extraction. The security analysis and the experiments show the security and efficiency of the proposed scheme.

563 citations
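
As a small sketch of the pre-filter idea (the hash family and table layout below are assumptions, not the paper's exact construction), feature vectors can be bucketed with locality-sensitive hashing so that only candidate buckets need to be compared at query time:

    import numpy as np

    rng = np.random.default_rng(1)
    dim, n_bits = 128, 16
    planes = rng.normal(size=(n_bits, dim))     # random-hyperplane LSH family

    def lsh_key(feature):
        bits = (planes @ feature) > 0
        return bits.tobytes()                   # bucket identifier

    # Build the pre-filter table over (image id, feature vector) pairs.
    features = {f"img{i}": rng.normal(size=dim) for i in range(1000)}
    table = {}
    for img_id, f in features.items():
        table.setdefault(lsh_key(f), []).append(img_id)

    # A slightly perturbed feature will usually (probabilistically) hit the
    # same bucket, so only that bucket's images are scored further.
    query = features["img42"] + 0.01 * rng.normal(size=dim)
    print("candidates:", table.get(lsh_key(query), [])[:5])

In the scheme, the feature vectors themselves are additionally protected with the secure kNN algorithm and the image pixels with a stream cipher, so the server only ever handles encrypted material.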


Proceedings ArticleDOI
19 Feb 2016
TL;DR: This paper studies the effectiveness of flow-based time-related features to detect VPN traffic and to characterize encrypted traffic into different categories, according to the type of traffic (e.g., browsing, streaming, etc.).
Abstract: Traffic characterization is one of the major challenges in today’s security industry. The continuous evolution and generation of new applications and services, together with the expansion of encrypted communications makes it a difficult task. Virtual Private Networks (VPNs) are an example of encrypted communication service that is becoming popular, as method for bypassing censorship as well as accessing services that are geographically locked. In this paper, we study the effectiveness of flow-based time-related features to detect VPN traffic and to characterize encrypted traffic into different categories, according to the type of traffic e.g., browsing, streaming, etc. We use two different well-known machine learning techniques (C4.5 and KNN) to test the accuracy of our features. Our results show high accuracy and performance, confirming that time-related features are good classifiers for encrypted traffic characterization.

562 citations
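
A minimal sketch of the evaluation setup (the feature columns and labels below are synthetic placeholders, and scikit-learn's CART tree stands in for C4.5): time-related flow features are fed to k-NN and a decision tree to separate VPN from non-VPN traffic:

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # columns: flow duration, mean inter-arrival time, idle time, active time (synthetic)
    X = rng.random((500, 4))
    y = rng.integers(0, 2, size=500)     # 0 = non-VPN, 1 = VPN (synthetic labels)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for clf in (KNeighborsClassifier(n_neighbors=5), DecisionTreeClassifier()):
        clf.fit(X_tr, y_tr)
        print(type(clf).__name__, clf.score(X_te, y_te))

With real flow statistics in place of the random arrays, the same loop reproduces the style of comparison between the two classifiers reported in the paper.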


Journal ArticleDOI
TL;DR: A two-dimensional Logistic-adjusted-Sine map (2D-LASM) is proposed that has better ergodicity and unpredictability, and a wider chaotic range than many existing chaotic maps.

496 citations


Book ChapterDOI
14 Aug 2016
TL;DR: A new tweakable block cipher family SKINNY is presented, whose goal is to compete with the NSA's recent design SIMON in terms of hardware/software performance, while in addition proving much stronger security guarantees with regard to differential/linear attacks.
Abstract: We present a new tweakable block cipher family SKINNY, whose goal is to compete with the NSA's recent design SIMON in terms of hardware/software performance, while in addition proving much stronger security guarantees with regard to differential/linear attacks. In particular, unlike SIMON, we are able to provide strong bounds for all versions, and not only in the single-key model, but also in the related-key or related-tweak model. SKINNY has flexible block/key/tweak sizes and can also benefit from very efficient threshold implementations for side-channel protection. Regarding performance, it outperforms all known ciphers for ASIC round-based implementations, while still reaching an extremely small area for serial implementations and a very good efficiency for software and micro-controller implementations; SKINNY has the smallest total number of AND/OR/XOR gates used for the encryption process. Secondly, we present MANTIS, a dedicated variant of SKINNY for low-latency implementations, that constitutes a very efficient solution to the problem of designing a tweakable block cipher for memory encryption. MANTIS basically reuses well-understood, previously studied, known components. Yet, by putting those components together in a new fashion, we obtain a cipher that is competitive with PRINCE in latency and area, while being enhanced with a tweak input.

492 citations


Journal ArticleDOI
TL;DR: A new method of keyword transformation based on the uni-gram is developed, which simultaneously improves the accuracy and creates the ability to handle other spelling mistakes; the scheme also considers the keyword weight when selecting an adequate matching file set.
Abstract: Keyword-based search over encrypted outsourced data has become an important tool in the current cloud computing scenario. The majority of the existing techniques are focusing on multi-keyword exact match or single keyword fuzzy search. However, those existing techniques find less practical significance in real-world applications compared with the multi-keyword fuzzy search technique over encrypted data. The first attempt to construct such a multi-keyword fuzzy search scheme was reported by Wang et al., who used locality-sensitive hashing functions and Bloom filtering to meet the goal of multi-keyword fuzzy search. Nevertheless, Wang’s scheme was only effective for a one-letter mistake in a keyword but was not effective for other common spelling mistakes. Moreover, Wang’s scheme was vulnerable to server out-of-order problems during the ranking process and did not consider the keyword weight. In this paper, we propose an efficient multi-keyword fuzzy ranked search scheme based on Wang et al.’s scheme that is able to address the aforementioned problems. First, we develop a new method of keyword transformation based on the uni-gram, which simultaneously improves the accuracy and creates the ability to handle other spelling mistakes. In addition, keywords with the same root can be queried using the stemming algorithm. Furthermore, we consider the keyword weight when selecting an adequate matching file set. Experiments using real-world data show that our scheme is practically efficient and achieves high accuracy.

464 citations
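
A conceptual sketch of the uni-gram keyword transformation (the exact encoding, filter length, and hashing are assumptions rather than the paper's construction): each keyword is expanded into uni-grams, inserted into a Bloom-filter-style vector, and fuzzy matching reduces to vector overlap, so common misspellings still score highly:

    import hashlib

    M = 256                                    # Bloom filter length (assumed)

    def unigrams(word):
        # letter plus its occurrence count, e.g. "letter" -> l1,e1,t1,t2,e2,r1
        seen, grams = {}, []
        for ch in word:
            seen[ch] = seen.get(ch, 0) + 1
            grams.append(f"{ch}{seen[ch]}")
        return grams

    def bloom(grams, k=3):
        vec = [0] * M
        for g in grams:
            for i in range(k):
                h = int(hashlib.sha256(f"{i}:{g}".encode()).hexdigest(), 16) % M
                vec[h] = 1
        return vec

    index = bloom(unigrams("network") + unigrams("security"))
    query = bloom(unigrams("netwrok"))         # a misspelled query keyword
    print(sum(a & b for a, b in zip(index, query)))   # overlap acts as a fuzzy score

Note that the transposed spelling "netwrok" produces exactly the same uni-gram multiset as "network", which is why this encoding tolerates more misspelling types than a bigram-based one.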


Journal ArticleDOI
Lu Xu, Zhi Li, Jian Li, Wei Hua
TL;DR: A novel bit-level image encryption algorithm based on piecewise linear chaotic maps (PWLCM) is proposed and shown to be both secure and reliable for image encryption.

449 citations
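
For reference, a standalone sketch of the chaotic component only (parameters chosen arbitrarily; this is not the paper's full bit-level algorithm): the standard piecewise linear chaotic map (PWLCM) iterated to produce key bytes that could drive bit-level permutation and diffusion:

    def pwlcm(x, p):
        # control parameter p in (0, 0.5), state x in (0, 1)
        if x < p:
            return x / p
        if x < 0.5:
            return (x - p) / (0.5 - p)
        return pwlcm(1.0 - x, p)        # the map is symmetric about x = 0.5

    def keystream(x0=0.3, p=0.27, n=16):
        x, out = x0, []
        for _ in range(n):
            x = pwlcm(x, p)
            out.append(int(x * 256) & 0xFF)   # quantize the state to a key byte
        return out

    print(keystream())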


Journal ArticleDOI
TL;DR: A novel image encryption approach based on a permutation-substitution (SP) network and chaotic systems that shows superior performance to previous schemes.

399 citations


Journal ArticleDOI
TL;DR: A new two-dimensional Sine ICMIC modulation map (2D-SIMM) is proposed based on a closed-loop modulation coupling (CMC) model, and its chaotic performance is analyzed by means of phase diagram, Lyapunov exponent spectrum and complexity.

Journal ArticleDOI
TL;DR: This work gives constructions for indistinguishability obfuscation and functional encryption that support all polynomial-size circuits and describes a candidate construction for indistinguishability obfuscation for $\mathbf{NC}^1$ circuits.
Abstract: In this work, we study indistinguishability obfuscation and functional encryption for general circuits: Indistinguishability obfuscation requires that given any two equivalent circuits $C_0$ and $C_1$ of similar size, the obfuscations of $C_0$ and $C_1$ should be computationally indistinguishable. In functional encryption, ciphertexts encrypt inputs $x$ and keys are issued for circuits $C$. Using the key $\mathrm{SK}_C$ to decrypt a ciphertext $\mathrm{CT}_x={\sf Enc}(x)$ yields the value $C(x)$ but does not reveal anything else about $x$. Furthermore, no collusion of secret key holders should be able to learn anything more than the union of what they can each learn individually. We give constructions for indistinguishability obfuscation and functional encryption that supports all polynomial-size circuits. We accomplish this goal in three steps: (1) We describe a candidate construction for indistinguishability obfuscation for $\mathbf{NC}^1$ circuits. The security of this construction is based on a new al...

Journal ArticleDOI
TL;DR: An overview of the potential, recent advances, and challenges of optical security and encryption using free space optics is presented, highlighting the need for more specialized hardware and image processing algorithms.
Abstract: Information security and authentication are important challenges facing society. Recent attacks by hackers on the databases of large commercial and financial companies have demonstrated that more research and development of advanced approaches are necessary to deny unauthorized access to critical data. Free space optical technology has been investigated by many researchers in information security, encryption, and authentication. The main motivation for using optics and photonics for information security is that optical waveforms possess many complex degrees of freedom such as amplitude, phase, polarization, large bandwidth, nonlinear transformations, quantum properties of photons, and multiplexing that can be combined in many ways to make information encryption more secure and more difficult to attack. This roadmap article presents an overview of the potential, recent advances, and challenges of optical security and encryption using free space optics. The roadmap on optical security is comprised of six categories that together include 16 short sections written by authors who have made relevant contributions in this field. The first category of this roadmap describes novel encryption approaches, including secure optical sensing which summarizes double random phase encryption applications and flaws [Yamaguchi], the digital holographic encryption in free space optical technique which describes encryption using multidimensional digital holography [Nomura], simultaneous encryption of multiple signals [Perez-Cabre], asymmetric methods based on information truncation [Nishchal], and dynamic encryption of video sequences [Torroba]. Asymmetric and one-way cryptosystems are analyzed by Peng. The second category is on compression for encryption. In their respective contributions, Alfalou and Stern propose similar goals involving compressed data and compressive sensing encryption. The very important area of cryptanalysis is the topic of the third category with two sections: Sheridan reviews phase retrieval algorithms to perform different attacks, whereas Situ discusses nonlinear optical encryption techniques and the development of a rigorous optical information security theory. The fourth category with two contributions reports how encryption could be implemented at the nano- or micro-scale. Naruse discusses the use of nanostructures in security applications and Carnicer proposes encoding information in a tightly focused beam. In the fifth category, encryption based on ghost imaging using single-pixel detectors is also considered. In particular, the authors [Chen, Tajahuerce] emphasize the need for more specialized hardware and image processing algorithms. Finally, in the sixth category, Mosk and Javidi analyze in their corresponding papers how quantum imaging can benefit optical encryption systems. Sources that use few photons make encryption systems much more difficult to attack, providing a secure method for authentication.

Journal ArticleDOI
TL;DR: The proposed cryptosystem decreases the volume of data to be transmitted and simplifies key distribution, while simultaneously acting as a nonlinear encryption system with acceptable compression and security performance.
Abstract: Most image encryption algorithms based on low-dimensional chaos systems bear security risks and suffer encryption data expansion when adopting nonlinear transformation directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression–encryption scheme based on hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by the measurement matrices in two directions to achieve compression and encryption simultaneously, and then the resulting image is re-encrypted by the cycle shift operation controlled by a hyper-chaotic system. Cycle shift operation can change the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and simplifies the key distribution, while simultaneously acting as a nonlinear encryption system. Simulation results verify the validity and the reliability of the proposed algorithm with acceptable compression and security performance.
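
A rough numerical sketch of the measurement step (the matrix construction and the chaotic sequence are simplified stand-ins, not the paper's exact design): the image is measured by random matrices in two directions, and the rows of the result are then cycle-shifted by chaos-derived amounts:

    import numpy as np

    rng = np.random.default_rng(2)
    img = rng.integers(0, 256, size=(64, 64)).astype(float)

    m = 48                                   # compressed dimension (m < 64)
    phi1 = rng.normal(size=(m, 64))          # measurement matrix, direction 1
    phi2 = rng.normal(size=(m, 64))          # measurement matrix, direction 2
    measured = phi1 @ img @ phi2.T           # 2D compressive sensing measurement

    shifts = (np.cumsum(rng.random(m)) * 1e6).astype(int) % m   # stand-in for the hyper-chaotic sequence
    cipher = np.stack([np.roll(row, s) for row, s in zip(measured, shifts)])
    print(cipher.shape)                      # (48, 48): compressed and scrambled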

Journal ArticleDOI
TL;DR: Simulated experimental results, both quantitative and qualitative, prove the encryption quality; the efficiency and robustness against different noises make the proposed cipher a good candidate for real-time applications.
Abstract: A novel image encryption algorithm in streaming mode is proposed which exhaustively employs an entire set of DNA complementary rules along with one-dimensional chaotic maps. The proposed algorithm is highly efficient due to encrypting the subset of the digital image which contains 92.125 % of the information. DNA addition operation is carried out on this MSB part. The core idea of the proposed scheme is to scramble the whole image by means of a piecewise linear chaotic map (PWLCM) followed by decomposition of the image into most significant bits (MSB) and least significant bits (LSB). The logistic sequence is XORed with the decoded MSB and LSB parts separately and finally these two parts are combined to get the ciphered image. The parameters for PWLCM, logistic map and selection of different DNA rules for encoding and decoding of both parts of an image are derived from the 128-bit MD5 hash of the plain image. Simulated experimental results, both quantitative and qualitative, prove the encryption quality. Efficiency and robustness against different noises make the proposed cipher a good candidate for real-time applications.

Journal ArticleDOI
TL;DR: The theoretical analysis and experimental results show that the algorithm improves the encoding efficiency, enhances the security of the ciphertext and has a large key space and a high key sensitivity, and it is able to resist against the statistical and exhaustive attacks.
Abstract: In this paper, we propose a novel image encryption algorithm based on a hybrid model of deoxyribonucleic acid (DNA) masking, a Secure Hash Algorithm SHA-2 and the Lorenz system. Our study uses DNA sequences and operations and the chaotic Lorenz system to strengthen the cryptosystem. The significant advantages of this approach are improving the information entropy which is the most important feature of randomness, resisting against various typical attacks and getting good experimental results. The theoretical analysis and experimental results show that the algorithm improves the encoding efficiency, enhances the security of the ciphertext and has a large key space and a high key sensitivity, and it is able to resist against the statistical and exhaustive attacks.
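
As a sketch of the chaotic driver only (the step size, initial state, and quantization are assumptions), the Lorenz system can be integrated to produce sequences that seed the DNA rule selection and masking described above:

    # Euler integration of the Lorenz system with the classic parameters
    # sigma = 10, rho = 28, beta = 8/3.
    def lorenz_stream(n, x=0.1, y=0.0, z=0.0, sigma=10.0, rho=28.0, beta=8.0/3.0, dt=0.002):
        seq = []
        for _ in range(n):
            dx = sigma * (y - x)
            dy = x * (rho - z) - y
            dz = x * y - beta * z
            x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
            seq.append((x, y, z))
        return seq

    # Quantize one coordinate into byte values usable as a mask / rule selector.
    mask = [int(abs(x) * 1e4) % 256 for x, _, _ in lorenz_stream(16)]
    print(mask)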

Book ChapterDOI
08 May 2016
TL;DR: A general multiparty computation (MPC) protocol with only two rounds of interaction in the common random string model, which is known to be optimal, is constructed for both the honest-but-curious setting and the fully malicious setting.
Abstract: We construct a general multiparty computation (MPC) protocol with only two rounds of interaction in the common random string model, which is known to be optimal. In the honest-but-curious setting we only rely on the learning with errors (LWE) assumption, and in the fully malicious setting we additionally assume the existence of non-interactive zero knowledge arguments (NIZKs). Previously, Asharov et al. (EUROCRYPT '12) showed how to achieve three rounds based on LWE and NIZKs, while Garg et al. (TCC '14) showed how to achieve the optimal two rounds based on indistinguishability obfuscation, but it was unknown if two rounds were possible under standard assumptions without obfuscation. Our approach relies on multi-key fully homomorphic encryption (MFHE), introduced by Lopez-Alt et al. (STOC '12), which enables homomorphic computation over data encrypted under different keys. We present a construction of MFHE based on LWE that significantly simplifies a recent scheme of Clear and McGoldrick (CRYPTO '15). We then extend this construction to allow for a one-round distributed decryption of a multi-key ciphertext. Our entire MPC protocol consists of the following two rounds: (1) Each party individually encrypts its input under its own key and broadcasts the ciphertext. All parties can then homomorphically compute a multi-key encryption of the output. (2) Each party broadcasts a partial decryption of the output using its secret key. The partial decryptions can be combined to recover the output in plaintext.

Journal ArticleDOI
TL;DR: This paper presents the first attribute-based keyword search scheme with efficient user revocation (ABKS-UR) that enables scalable fine-grained (i.e., file-level) search authorization, formalizes the security definition, and proves the proposed ABKS-UR scheme selectively secure against chosen-keyword attack.
Abstract: Search over encrypted data is a critically important enabling technique in cloud computing, where encryption-before-outsourcing is a fundamental solution to protecting user data privacy in the untrusted cloud server environment. Many secure search schemes have been focusing on the single-contributor scenario, where the outsourced dataset or the secure searchable index of the dataset are encrypted and managed by a single owner, typically based on symmetric cryptography. In this paper, we focus on a different yet more challenging scenario where the outsourced dataset can be contributed from multiple owners and are searchable by multiple users, i.e., multi-user multi-contributor case. Inspired by attribute-based encryption (ABE), we present the first attribute-based keyword search scheme with efficient user revocation (ABKS-UR) that enables scalable fine-grained (i.e., file-level) search authorization. Our scheme allows multiple owners to encrypt and outsource their data to the cloud server independently. Users can generate their own search capabilities without relying on an always online trusted authority. Fine-grained search authorization is also implemented by the owner-enforced access policy on the index of each file. Further, by incorporating proxy re-encryption and lazy re-encryption techniques, we are able to delegate heavy system update workload during user revocation to the resourceful semi-trusted cloud server. We formalize the security definition and prove the proposed ABKS-UR scheme selectively secure against chosen-keyword attack. To build confidence of data user in the proposed secure search system, we also design a search result verification scheme. Finally, performance evaluation shows the efficiency of our scheme.

Journal ArticleDOI
TL;DR: A general Inc-VDB framework is proposed by incorporating the primitive of vector commitment and the encrypt-then-incremental MAC mode of encryption, and the construction is proved to achieve the desired security properties.
Abstract: The notion of verifiable database (VDB) enables a resource-constrained client to securely outsource a very large database to an untrusted server so that it could later retrieve a database record and update a record by assigning a new value. Also, any attempt by the server to tamper with the data will be detected by the client. When the database undergoes frequent but small modifications, the client must re-compute and update the encrypted version (ciphertext) on the server at all times. For very large data, it is extremely expensive for the resource-constrained client to perform both operations from scratch. In this paper, we formalize the notion of verifiable database with incremental updates (Inc-VDB). Besides, we propose a general Inc-VDB framework by incorporating the primitive of vector commitment and the encrypt-then-incremental MAC mode of encryption. We also present a concrete Inc-VDB scheme based on the computational Diffie-Hellman (CDH) assumption. Furthermore, we prove that our construction can achieve the desired security properties.

Proceedings ArticleDOI
Toshinori Araki, Jun Furukawa, Yehuda Lindell, Ariel Nof, Kazuma Ohara
24 Oct 2016
TL;DR: In this paper, the authors describe a new information-theoretic protocol (and a computationally secure variant) for secure three-party computation with an honest majority, and demonstrate that high-throughput secure computation is possible on standard hardware.
Abstract: In this paper, we describe a new information-theoretic protocol (and a computationally-secure variant) for secure three-party computation with an honest majority. The protocol has very minimal computation and communication; for Boolean circuits, each party sends only a single bit for every AND gate (and nothing is sent for XOR gates). Our protocol is (simulation-based) secure in the presence of semi-honest adversaries, and achieves privacy in the client/server model in the presence of malicious adversaries. On a cluster of three 20-core servers with a 10Gbps connection, the implementation of our protocol carries out over 1.3 million AES computations per second, which involves processing over 7 billion gates per second. In addition, we developed a Kerberos extension that replaces the ticket-granting-ticket encryption on the Key Distribution Center (KDC) in MIT-Kerberos with our protocol, using keys/ passwords that are shared between the servers. This enables the use of Kerberos while protecting passwords. Our implementation is able to support a login storm of over 35,000 logins per second, which suffices even for very large organizations. Our work demonstrates that high-throughput secure computation is possible on standard hardware.
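
To give a feel for the style of secret sharing such protocols build on (the share layout here is a common 2-out-of-3 replicated sharing and is an assumption, not a transcription of the paper's protocol): XOR gates are evaluated locally on shares, which is why only AND gates cost communication:

    import secrets

    def share(bit):
        # Split bit into three random shares with x1 ^ x2 ^ x3 = bit;
        # party i holds the pair (x_i, x_{i+1 mod 3}).
        x1, x2 = secrets.randbelow(2), secrets.randbelow(2)
        x3 = bit ^ x1 ^ x2
        return [(x1, x2), (x2, x3), (x3, x1)]

    def xor_gate(shares_a, shares_b):
        # Purely local: each party XORs its own share pairs, no communication.
        return [(a0 ^ b0, a1 ^ b1) for (a0, a1), (b0, b1) in zip(shares_a, shares_b)]

    def reconstruct(shares):
        (x1, _), (x2, _), (x3, _) = shares
        return x1 ^ x2 ^ x3

    a, b = 1, 0
    print(reconstruct(xor_gate(share(a), share(b))), "== a ^ b ==", a ^ b)

Evaluating an AND gate on such shares requires each party to send a single bit of correlated randomness to its neighbor, which is exactly the per-gate communication cost quoted in the abstract.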

Proceedings ArticleDOI
24 Oct 2016
TL;DR: This work proposes Sophos as a forward private SSE scheme with performance similar to existing less secure schemes, and that is conceptually simpler (and also more efficient) than previous forward private constructions.
Abstract: Searchable Symmetric Encryption aims at making possible searching over an encrypted database stored on an untrusted server while keeping privacy of both the queries and the data, by allowing some small controlled leakage to the server. Recent work shows that dynamic schemes -- in which the data is efficiently updatable -- leaking some information on updated keywords are subject to devastating adaptive attacks breaking the privacy of the queries. The only way to thwart this attack is to design forward private schemes whose update procedure does not leak if a newly inserted element matches previous search queries. This work proposes Sophos as a forward private SSE scheme with performance similar to existing less secure schemes, and that is conceptually simpler (and also more efficient) than previous forward private constructions. In particular, it only relies on trapdoor permutations and does not use an ORAM-like construction. We also explain why Sophos is an optimal point of the security/performance tradeoff for SSE. Finally, an implementation and evaluation results demonstrate its practical efficiency.
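
A toy illustration of the forward-privacy idea (the protocol details are simplified and assumed; a textbook RSA permutation with insecure parameters stands in for the trapdoor permutation): update tokens move backwards through the permutation using the secret key, so the server, which can only apply the public direction, cannot link past queries to future insertions:

    import hashlib

    p, q, e = 61, 53, 17                       # toy RSA parameters (insecure sizes)
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))
    pi = lambda x: pow(x, e, n)                # public direction (server can compute)
    pi_inv = lambda x: pow(x, d, n)            # secret direction (client only)

    H = lambda st: hashlib.sha256(str(st).encode()).hexdigest()
    server_store = {}
    client_state = {"kw": (1000, 0)}           # keyword -> (current token, count)

    def update(kw, doc_id):                    # client-side insertion
        st, c = client_state[kw]
        new_st = pi_inv(st)                    # next token needs the secret key
        server_store[H(new_st)] = doc_id       # server only stores an opaque entry
        client_state[kw] = (new_st, c + 1)

    def search(kw):                            # client reveals the latest token
        st, c = client_state[kw]
        results = []
        for _ in range(c):
            results.append(server_store[H(st)])
            st = pi(st)                        # public direction recovers older tokens
        return results

    update("kw", "doc7"); update("kw", "doc9")
    print(search("kw"))                        # ['doc9', 'doc7']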

Journal ArticleDOI
TL;DR: This paper introduces relevance scores and preference factors upon keywords, which enable precise keyword search and a personalized user experience; the security analysis and experimental results demonstrate that the proposed schemes achieve the same security level as existing ones and better performance in terms of functionality, query complexity and efficiency.
Abstract: Using cloud computing, individuals can store their data on remote servers and allow data access to public users through the cloud servers. As the outsourced data are likely to contain sensitive privacy information, they are typically encrypted before being uploaded to the cloud. This, however, significantly limits the usability of outsourced data due to the difficulty of searching over the encrypted data. In this paper, we address this issue by developing the fine-grained multi-keyword search schemes over encrypted cloud data. Our original contributions are three-fold. First, we introduce the relevance scores and preference factors upon keywords which enable the precise keyword search and personalized user experience. Second, we develop a practical and very efficient multi-keyword search scheme. The proposed scheme can support complicated logic search, i.e., the mixed “AND”, “OR” and “NO” operations of keywords. Third, we further employ the classified sub-dictionaries technique to achieve better efficiency on index building, trapdoor generating and query. Lastly, we analyze the security of the proposed schemes in terms of confidentiality of documents, privacy protection of index and trapdoor, and unlinkability of trapdoor. Through extensive experiments using the real-world dataset, we validate the performance of the proposed schemes. Both the security analysis and experimental results demonstrate that the proposed schemes can achieve the same security level compared to the existing ones and better performance in terms of functionality, query complexity and efficiency.

Journal ArticleDOI
TL;DR: An efficient file hierarchy attribute-based encryption scheme is proposed in cloud computing that combines layered access structures into a single access structure, and then, the hierarchical files are encrypted with the integrated access structure.
Abstract: Ciphertext-policy attribute-based encryption (CP-ABE) has been a preferred encryption technology to solve the challenging problem of secure data sharing in cloud computing. The shared data files generally have the characteristic of multilevel hierarchy, particularly in the area of healthcare and the military. However, the hierarchy structure of shared files has not been explored in CP-ABE. In this paper, an efficient file hierarchy attribute-based encryption scheme is proposed in cloud computing. The layered access structures are integrated into a single access structure, and then, the hierarchical files are encrypted with the integrated access structure. The ciphertext components related to attributes could be shared by the files. Therefore, both ciphertext storage and time cost of encryption are saved. Moreover, the proposed scheme is proved to be secure under the standard assumption. Experimental simulation shows that the proposed scheme is highly efficient in terms of encryption and decryption. With the number of the files increasing, the advantages of our scheme become more and more conspicuous.

Journal ArticleDOI
TL;DR: To improve the efficiency of big data feature learning, the paper proposes a privacy-preserving deep computation model that offloads the expensive operations to the cloud, using the BGV encryption scheme and employing cloud servers to perform the high-order back-propagation algorithm on the encrypted data efficiently for deep computation model training.
Abstract: To improve the efficiency of big data feature learning, the paper proposes a privacy preserving deep computation model by offloading the expensive operations to the cloud. Privacy concerns become evident because there are a large number of private data by various applications in the smart city, such as sensitive data of governments or proprietary information of enterprises. To protect the private data, the proposed model uses the BGV encryption scheme to encrypt the private data and employs cloud servers to perform the high-order back-propagation algorithm on the encrypted data efficiently for deep computation model training. Furthermore, the proposed scheme approximates the Sigmoid function as a polynomial function to support the secure computation of the activation function with the BGV encryption. In our scheme, only the encryption operations and the decryption operations are performed by the client while all the computation tasks are performed on the cloud. Experimental results show that our scheme is improved by approximately 2.5 times in the training efficiency compared to the conventional deep computation model without disclosing the private data using the cloud computing including ten nodes. More importantly, our scheme is highly scalable by employing more cloud servers, which is particularly suitable for big data.
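
The activation replacement can be sketched directly (the polynomial degree and fitting interval are assumptions): approximating the Sigmoid by a low-degree polynomial makes it computable with only the additions and multiplications that a scheme like BGV supports:

    import numpy as np

    xs = np.linspace(-8, 8, 400)
    sigmoid = 1.0 / (1.0 + np.exp(-xs))
    coeffs = np.polyfit(xs, sigmoid, deg=3)       # least-squares cubic fit
    poly = np.poly1d(coeffs)

    print("max |sigmoid - poly| on [-8, 8]:", np.max(np.abs(sigmoid - poly(xs))))
    print("poly(0) ~ 0.5:", poly(0.0))

The cloud then evaluates the polynomial on ciphertexts during back-propagation, while the client only performs the encryption and decryption steps.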

Proceedings ArticleDOI
24 Oct 2016
TL;DR: This work proposes abstract models that capture secure outsourced storage systems in sufficient generality, identifies two basic sources of leakage, namely access pattern and communication volume, and develops generic reconstruction attacks on any system supporting range queries where either access pattern or communication volume is leaked.
Abstract: Recently, various protocols have been proposed for securely outsourcing database storage to a third party server, ranging from systems with "full-fledged" security based on strong cryptographic primitives such as fully homomorphic encryption or oblivious RAM, to more practical implementations based on searchable symmetric encryption or even on deterministic and order-preserving encryption. On the flip side, various attacks have emerged that show that for some of these protocols confidentiality of the data can be compromised, usually given certain auxiliary information. We take a step back and identify a need for a formal understanding of the inherent efficiency/privacy trade-off in outsourced database systems, independent of the details of the system. We propose abstract models that capture secure outsourced storage systems in sufficient generality, and identify two basic sources of leakage, namely access pattern and communication volume. We use our models to distinguish certain classes of outsourced database systems that have been proposed, and deduce that all of them exhibit at least one of these leakage sources. We then develop generic reconstruction attacks on any system supporting range queries where either access pattern or communication volume is leaked. These attacks are in a rather weak passive adversarial model, where the untrusted server knows only the underlying query distribution. In particular, to perform our attack the server need not have any prior knowledge about the data, and need not know any of the issued queries nor their results. Yet, the server can reconstruct the secret attribute of every record in the database after about $N^4$ queries, where N is the domain size. We provide a matching lower bound showing that our attacks are essentially optimal. Our reconstruction attacks using communication volume apply even to systems based on homomorphic encryption or oblivious RAM in the natural way. Finally, we provide experimental results demonstrating the efficacy of our attacks on real datasets with a variety of different features. On all these datasets, after the required number of queries our attacks successfully recovered the secret attributes of every record in at most a few seconds.

Journal ArticleDOI
TL;DR: A novel scheme of reversible data hiding in encrypted images using distributed source coding, in which selected bits taken from the encrypted image are Slepian-Wolf encoded using low-density parity check codes; the method outperforms previously published ones.
Abstract: This paper proposes a novel scheme of reversible data hiding in encrypted images using distributed source coding. After the original image is encrypted by the content owner using a stream cipher, the data-hider compresses a series of selected bits taken from the encrypted image to make room for the secret data. The selected bit series is Slepian–Wolf encoded using low-density parity check codes. On the receiver side, the secret bits can be extracted if the image receiver has the embedding key only. In case the receiver has the encryption key only, he/she can recover the original image approximately with high quality using an image estimation algorithm. If the receiver has both the embedding and encryption keys, he/she can extract the secret data and perfectly recover the original image using the distributed source decoding. The proposed method outperforms the previously published ones.

Journal ArticleDOI
TL;DR: This paper proposes lossless, reversible, and combined data hiding schemes for ciphertext images encrypted by public-key cryptosystems with probabilistic and homomorphic properties.
Abstract: This paper proposes lossless, reversible, and combined data hiding schemes for ciphertext images encrypted by public-key cryptosystems with probabilistic and homomorphic properties. In the lossless scheme, the ciphertext pixels are replaced with new values to embed the additional data into several least significant bit planes of ciphertext pixels by multilayer wet paper coding. Then, the embedded data can be directly extracted from the encrypted domain, and the data-embedding operation does not affect the decryption of original plaintext image. In the reversible scheme, a preprocessing is employed to shrink the image histogram before image encryption, so that the modification on encrypted images for data embedding will not cause any pixel oversaturation in plaintext domain. Although a slight distortion is introduced, the embedded data can be extracted and the original image can be recovered from the directly decrypted image. Due to the compatibility between the lossless and reversible schemes, the data-embedding operations in the two manners can be simultaneously performed in an encrypted image. With the combined technique, a receiver may extract a part of embedded data before decryption, and extract another part of embedded data and recover the original plaintext image after decryption.

Journal ArticleDOI
TL;DR: This paper investigates to what extent an external attacker can identify the specific actions that a user is performing on her mobile apps, designs a system that achieves this goal using advanced machine learning techniques, and compares the solution with three state-of-the-art algorithms.
Abstract: Mobile devices can be maliciously exploited to violate the privacy of people. In most attack scenarios, the adversary takes the local or remote control of the mobile device, by leveraging a vulnerability of the system, hence sending back the collected information to some remote web service. In this paper, we consider a different adversary, who does not interact actively with the mobile device, but he is able to eavesdrop on the network traffic of the device from the network side (e.g., controlling a Wi-Fi access point). The fact that the network traffic is often encrypted makes the attack even more challenging. In this paper, we investigate to what extent such an external attacker can identify the specific actions that a user is performing on her mobile apps. We design a system that achieves this goal using advanced machine learning techniques. We built a complete implementation of this system, and we also run a thorough set of experiments, which show that our attack can achieve accuracy and precision higher than 95%, for most of the considered actions. We compared our solution with three state-of-the-art algorithms, confirming that our system outperforms all these direct competitors.

Journal ArticleDOI
TL;DR: An identity-based signature scheme and an identity-based encryption scheme are used to propose a new anonymous key distribution scheme for smart grid environments that allows a smart meter to anonymously access services provided by service providers using one private key without the help of the trusted anchor during authentication.
Abstract: To fully support information management among various stakeholders in smart grid domains, how to establish secure communication sessions has become an important issue for smart grid environments. In order to support secure communications between smart meters and service providers, key management for authentication becomes a crucial security topic. Recently, several key distribution schemes have been proposed to provide secure communications for smart grid. However, these schemes do not support smart meter anonymity and possess security weaknesses. This paper utilizes an identity-based signature scheme and an identity-based encryption scheme to propose a new anonymous key distribution scheme for smart grid environments. In the proposed scheme, a smart meter can anonymously access services provided by service providers using one private key without the help of the trusted anchor during authentication. In addition, the proposed scheme requires only a few of computation operations at the smart meter side. Security analysis is conducted to prove the proposed scheme is secure under random oracle model.

Proceedings ArticleDOI
30 May 2016
TL;DR: This paper shows that the page fault side-channel has sufficient channel capacity to extract bits of encryption keys from commodity implementations of cryptographic routines in OpenSSL and Libgcrypt -- leaking 27% on average and up to 100% of the secret bits in many case-studies.
Abstract: New hardware primitives such as Intel SGX secure a user-level process in presence of an untrusted or compromised OS. Such "enclaved execution" systems are vulnerable to several side-channels, one of which is the page fault channel. In this paper, we show that the page fault side-channel has sufficient channel capacity to extract bits of encryption keys from commodity implementations of cryptographic routines in OpenSSL and Libgcrypt -- leaking 27% on average and up to 100% of the secret bits in many case-studies. To mitigate this, we propose a software-only defense that masks page fault patterns by determinising the program's memory access behavior. We show that such a technique can be built into a compiler, and implement it for a subset of C which is sufficient to handle the cryptographic routines we study. This defense when implemented generically can have significant overhead of up to 4000X, but with help of developer-assisted compiler optimizations, the overhead reduces to at most 29.22% in our case studies. Finally, we discuss scope for hardware-assisted defenses, and show one solution that can reduce overheads to 6.77% with support from hardware changes.