Journal ArticleDOI

SVD-based image compression, encryption, and identity authentication algorithm on cloud

01 Oct 2019-Iet Image Processing (The Institution of Engineering and Technology)-Vol. 13, Iss: 12, pp 2224-2232
TL;DR: The authors propose a novel authentication value calculation algorithm that derives the authentication value from the related data and retains its authentication performance even in scenarios where the image is cropped or corrupted by noise.
Abstract: Based on singular value decomposition (SVD), an image compression, encryption, and identity authentication scheme is proposed here. This scheme can not only encrypt image data to be stored in the cloud but also implement identity authentication. The authors use the SVD to decompose the image data into three parts: the left singular value matrix, the right singular value matrix, and the singular value matrix. Since the left and right singular value matrices are not as important as the singular value matrix, a logistic-tent-sine chaotic system is proposed to encrypt them. The authors also propose a novel authentication value calculation algorithm, which computes the authentication value from the related data. Because the authentication value is calculated from the ciphertext, the algorithm retains its authentication performance even in scenarios where the image is cropped or corrupted by noise. Theoretical analysis and empirical evaluations show that the proposed system can achieve better compression performance, satisfactory security performance, and low computational complexity.
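A minimal sketch of the general idea described in the abstract, assuming a rank-k truncated SVD for compression and a logistic-tent-sine style keystream that masks the less critical U and V factors; the map combination, the additive masking, and all parameter values are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def lts_keystream(x0, r, n):
    """Illustrative logistic-tent-sine style keystream (an assumed combination of the
    three seed maps, not necessarily the exact formulation used in the paper)."""
    x, out = x0, np.empty(n)
    for i in range(n):
        logistic = r * x * (1.0 - x)
        tent = r * x / 2.0 if x < 0.5 else r * (1.0 - x) / 2.0
        x = (logistic + tent + (4.0 - r) * np.sin(np.pi * x) / 4.0) % 1.0
        out[i] = x
    return out

def compress_and_mask(img, k, x0=0.37, r=3.77):
    """Rank-k SVD compression; the U and V factors are masked with a chaotic keystream."""
    U, s, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
    Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :]          # keep the k largest singular triplets
    key = lts_keystream(x0, r, Uk.size + Vk.size)
    Uc = Uk + key[:Uk.size].reshape(Uk.shape)        # simple additive masking (illustrative)
    Vc = Vk + key[Uk.size:].reshape(Vk.shape)
    return Uc, sk, Vc

def unmask_and_reconstruct(Uc, sk, Vc, x0=0.37, r=3.77):
    key = lts_keystream(x0, r, Uc.size + Vc.size)
    Uk = Uc - key[:Uc.size].reshape(Uc.shape)
    Vk = Vc - key[Uc.size:].reshape(Vc.shape)
    return Uk @ np.diag(sk) @ Vk
```

For a 256 x 256 image and k = 32, the stored factors take roughly 2 * 256 * 32 + 32 values instead of 65,536, which is where the compression comes from; the singular values are left unmasked here purely for illustration.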
Citations
Journal ArticleDOI
TL;DR: The various approaches to joint encryption and compression are reviewed, assessing both their merits and their limitations and offering a comparison from different technical perspectives.
Abstract: As digital images are consistently generated and transmitted online, the unauthorized utilization of these images is an increasing concern that has a significant impact on both security and privacy issues; additionally, the representation of digital images requires a large amount of data. In recent years, an image compression scheme has been widely considered; such a scheme saves on hardware storage space and lowers both the transmission time and bandwidth demand for various potential applications. In this article, we review the various approaches taken to consider joint encryption and compression, assessing both their merits and their limitations. In addition to the survey, we also briefly introduce the most interesting and most often utilized applications of image encryption and evaluation metrics, providing an overview of the various kinds of image encryption schemes available. The contribution made by these approaches is then summarized and compared, offering a consideration of the different technical perspectives. Lastly, we highlight the recent challenges and some potential research directions that could fill the gaps in these domains for both researchers and developers.

23 citations

Journal ArticleDOI
TL;DR: A three-tier method for the automated detection and recognition of bridge defects is proposed; it outperformed other prediction models in terms of overall accuracy, F-measure, Kappa coefficient, balanced accuracy, Matthews correlation coefficient, and area under the curve.
Abstract: Existing bridges are aging and deteriorating, raising concerns for public safety and the preservation of these valuable assets. Furthermore, the transportation networks that manage many bridges fac...

14 citations


Cites background from "SVD-based image compression, encryp..."

  • ...Additionally, it is characterized by its low computational complexity (62, 63)....

    [...]

Journal ArticleDOI
01 Oct 2022-Sensors
TL;DR: This tutorial discusses machine learning applications to propose robust authentication protocols that are trained based on hidden concepts in biometric and physical layer data and can be more reliable than traditional methods.
Abstract: Telehealth systems have evolved into more prevalent services that can serve people in remote locations and at their homes via smart devices and 5G systems. Protecting the privacy and security of users is crucial in such online systems. Although there are many protocols to provide security through strong authentication systems, sophisticated IoT attacks are becoming more prevalent. Using machine learning to handle biometric information or physical layer features is key to addressing authentication problems for human and IoT devices, respectively. This tutorial discusses machine learning applications to propose robust authentication protocols. Since machine learning methods are trained based on hidden concepts in biometric and physical layer data, these dynamic authentication models can be more reliable than traditional methods. The main advantage of these methods is that the behavioral traits of humans and devices are tough to counterfeit. Furthermore, machine learning facilitates continuous and context-aware authentication.

6 citations

Journal ArticleDOI
TL;DR: A one-factor cancellable palmprint biometric recognition scheme based on the Orthogonal Index of Maximum (OIOM) hash and Minimum Signature Hash (MSH) is proposed to generate a pseudonymous identifier; it provides strong security while maintaining palmprint recognition performance.
Abstract: The traditional cancellable biometrics scheme needs another key or token to generate the revocable template, and it usually suffers from the stolen-token problem. To solve this problem, a one-factor cancellable palmprint biometric recognition scheme based on the Orthogonal Index of Maximum (OIOM) hash and Minimum Signature Hash (MSH) is proposed to generate a pseudonymous identifier. First, to improve efficiency and effectiveness, a parallel structure is designed to obtain orthogonal Gaussian random projection (GRP) matrices, which are employed to generate the OIOM hash code. Second, a random binary string is XORed with the binary OIOM hash code to construct the helper data, which is stored in the database. Meanwhile, this string is hashed by MSH to get the final pseudonymous identifier. Lastly, during the matching stage, another pseudonymous identifier is generated from the helper data and a palmprint image for recognition. This implies that just one factor, the user's palmprint, is needed in the matching stage, and therefore the user's privacy is preserved. To evaluate the proposed scheme, the PolyU database and the touchless TJU database are used in experiments. The non-invertibility, renewability, unlinkability, and resistance to several security attacks of the proposed scheme are analyzed. The experimental results and analysis show that the proposed scheme has strong security while maintaining palmprint recognition performance.
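A simplified illustration of the enrollment and matching flow described above (binary hash code XORed with a random string to form the helper data, and the string hashed to obtain the pseudonymous identifier). SHA-256 stands in for the MSH step, and this toy version assumes the fresh hash matches exactly, whereas the actual scheme must tolerate biometric noise:

```python
import os, hashlib
import numpy as np

def enroll(hash_code_bits: np.ndarray):
    """hash_code_bits: binary OIOM-style palmprint hash (array of 0/1 values).
    Returns (helper_data, pseudonymous_id); SHA-256 stands in for the MSH step."""
    bits = hash_code_bits.astype(np.uint8)
    r = np.frombuffer(os.urandom(bits.size), dtype=np.uint8) & 1   # random binary string
    helper = bits ^ r                                              # stored in the database
    pid = hashlib.sha256(r.tobytes()).hexdigest()                  # pseudonymous identifier
    return helper, pid

def authenticate(helper: np.ndarray, fresh_hash_bits: np.ndarray, enrolled_pid: str) -> bool:
    """Recover the random string from a fresh palmprint hash and the stored helper data,
    then re-derive the identifier; succeeds only if the fresh hash matches the enrolled one."""
    r_candidate = (helper ^ fresh_hash_bits.astype(np.uint8)).astype(np.uint8)
    return hashlib.sha256(r_candidate.tobytes()).hexdigest() == enrolled_pid
```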

6 citations


Cites background or methods from "SVD-based image compression, encryp..."

  • ...Firstly, based on the Logistic-Tent-Sine composite chaotic system [18], to improve the efficiency and effectiveness, a parallel structure is designed to generate orthogonal Gaussian random projection (GRP) matrices....

    [...]

  • ...LTSS is one of the typical composite chaotic systems [18]....

    [...]
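The snippets above describe deriving orthogonal Gaussian random projection (GRP) matrices from the logistic-tent-sine composite system [18]. A rough sketch of one plausible construction, using a plain logistic map as a stand-in keystream, a Box-Muller transform to obtain Gaussian entries, and QR factorization for orthogonalization; the cited paper's parallel structure and exact map are not reproduced here:

```python
import numpy as np

def chaotic_uniform(x0, r, n):
    """Logistic-map-driven sequence in (0, 1), used as a stand-in for the LTSS keystream."""
    x, out = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

def orthogonal_grp_matrix(dim, x0, r=3.99):
    """Gaussian random projection matrix seeded by a chaotic sequence, then
    orthogonalized with QR (one plausible reading of the cited construction)."""
    u = chaotic_uniform(x0, r, 2 * dim * dim).reshape(2, dim, dim)
    # Box-Muller transform: two uniform arrays -> one standard-normal array
    g = np.sqrt(-2.0 * np.log(u[0] + 1e-12)) * np.cos(2.0 * np.pi * u[1])
    q, _ = np.linalg.qr(g)
    return q    # columns are orthonormal
```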

Journal ArticleDOI
22 May 2021
TL;DR: Overall, the Coiflets, Haar wavelet, and SVD compression methods used for JPG images can reduce file size and preserve image information and quality.
Abstract: The problem with images lies in the amount of storage space they require; to use as little memory as possible, image compression is needed. Image compression is a technique for representing an image with reduced quality while still retaining the information inside it. This study compares the Coiflets, Haar wavelet, and SVD compression methods on JPG images. The comparison was performed by calculating the compression ratio (CR), space saving (SS), mean square error (MSE), root mean square error (RMSE), and peak signal-to-noise ratio (PSNR). The results show that the SVD method has the highest compression ratio of 3.25, while for space saving (SS) the Coiflets method gives the best performance with a value of 73. In terms of MSE and RMSE, the Coiflets method is best, with the smallest average values among all methods (0.02395 and 0.111383, respectively), and thus provides the best performance in maintaining compression quality. Based on PSNR, the Coiflets method also gives the best image quality, with the highest average PSNR of 63.02 dB. Overall, the Coiflets, Haar wavelet, and SVD compression methods used for JPG images can reduce file size while preserving image information and quality.
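A short sketch of the metrics used in this comparison, with the common definitions for 8-bit images (the study's exact conventions, e.g. whether space saving is reported as a percentage, are an assumption here):

```python
import numpy as np

def compression_metrics(original, reconstructed, original_bytes, compressed_bytes):
    """CR, space saving, MSE, RMSE, and PSNR as commonly defined for 8-bit images."""
    original = original.astype(float)
    reconstructed = reconstructed.astype(float)
    mse = np.mean((original - reconstructed) ** 2)
    rmse = np.sqrt(mse)
    psnr = float('inf') if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
    cr = original_bytes / compressed_bytes                    # e.g. 3.25 means 3.25:1
    ss = 100.0 * (1.0 - compressed_bytes / original_bytes)    # space saving in percent
    return {'CR': cr, 'SS': ss, 'MSE': mse, 'RMSE': rmse, 'PSNR': psnr}
```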

5 citations


Cites background from "SVD-based image compression, encryp..."

  • ...Each matrix M in the SVD, which is n × n in size, can be broken down into three parts as in (1) [19]:...

    [...]
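The equation referred to as (1) is elided in the snippet above; it presumably denotes the standard SVD factorization, whose truncation to the k largest singular values yields the rank-k approximation used for compression:

```latex
M = U \Sigma V^{T} = \sum_{i=1}^{n} \sigma_i \, u_i v_i^{T},
\qquad
M_k = \sum_{i=1}^{k} \sigma_i \, u_i v_i^{T}, \quad k \ll n
```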

References
Journal ArticleDOI
TL;DR: A novel feature similarity (FSIM) index for full reference IQA is proposed based on the fact that human visual system (HVS) understands an image mainly according to its low-level features.
Abstract: Image quality assessment (IQA) aims to use computational models to measure the image quality consistently with subjective evaluations. The well-known structural similarity index brings IQA from pixel- to structure-based stage. In this paper, a novel feature similarity (FSIM) index for full reference IQA is proposed based on the fact that human visual system (HVS) understands an image mainly according to its low-level features. Specifically, the phase congruency (PC), which is a dimensionless measure of the significance of a local structure, is used as the primary feature in FSIM. Considering that PC is contrast invariant while the contrast information does affect HVS' perception of image quality, the image gradient magnitude (GM) is employed as the secondary feature in FSIM. PC and GM play complementary roles in characterizing the image local quality. After obtaining the local quality map, we use PC again as a weighting function to derive a single quality score. Extensive experiments performed on six benchmark IQA databases demonstrate that FSIM can achieve much higher consistency with the subjective evaluations than state-of-the-art IQA metrics.
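For reference, the FSIM score described in this abstract combines phase-congruency and gradient-magnitude similarity maps and then weights the local quality by phase congruency; with the exponents on both terms set to one, it takes the following form (T1 and T2 are small stabilizing constants):

```latex
S_{PC}(x) = \frac{2\,PC_1(x)\,PC_2(x) + T_1}{PC_1^2(x) + PC_2^2(x) + T_1},\qquad
S_G(x) = \frac{2\,G_1(x)\,G_2(x) + T_2}{G_1^2(x) + G_2^2(x) + T_2},
\qquad
\mathrm{FSIM} = \frac{\sum_{x \in \Omega} S_{PC}(x)\,S_G(x)\,PC_m(x)}{\sum_{x \in \Omega} PC_m(x)},
\quad PC_m(x) = \max\bigl(PC_1(x), PC_2(x)\bigr)
```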

4,028 citations

Proceedings ArticleDOI
14 Mar 2010
TL;DR: This paper utilize and uniquely combine the public key based homomorphic authenticator with random masking to achieve the privacy-preserving public cloud data auditing system, which meets all above requirements.
Abstract: Cloud Computing is the long dreamed vision of computing as a utility, where users can remotely store their data into the cloud so as to enjoy the on-demand high quality applications and services from a shared pool of configurable computing resources. By data outsourcing, users can be relieved from the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the possibly large size of outsourced data makes the data integrity protection in Cloud Computing a very challenging and potentially formidable task, especially for users with constrained computing resources and capabilities. Thus, enabling public auditability for cloud data storage security is of critical importance so that users can resort to an external audit party to check the integrity of outsourced data when needed. To securely introduce an effective third party auditor (TPA), the following two fundamental requirements have to be met: 1) TPA should be able to efficiently audit the cloud data storage without demanding the local copy of data, and introduce no additional on-line burden to the cloud user; 2) The third party auditing process should bring in no new vulnerabilities towards user data privacy. In this paper, we utilize and uniquely combine the public key based homomorphic authenticator with random masking to achieve the privacy-preserving public cloud data auditing system, which meets all above requirements. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signature to extend our main result into a multi-user setting, where TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis shows the proposed schemes are provably secure and highly efficient.

1,408 citations

Proceedings ArticleDOI
21 Oct 2011
TL;DR: A proof-of-concept implementation of the recent somewhat homomorphic encryption scheme of Brakerski and Vaikuntanathan, whose security relies on the "ring learning with errors" (Ring LWE) problem, and a number of application-specific optimizations to the encryption scheme, including the ability to convert between different message encodings in a ciphertext.
Abstract: The prospect of outsourcing an increasing amount of data storage and management to cloud services raises many new privacy concerns for individuals and businesses alike. The privacy concerns can be satisfactorily addressed if users encrypt the data they send to the cloud. If the encryption scheme is homomorphic, the cloud can still perform meaningful computations on the data, even though it is encrypted. In fact, we now know a number of constructions of fully homomorphic encryption schemes that allow arbitrary computation on encrypted data. In the last two years, solutions for fully homomorphic encryption have been proposed and improved upon, but it is hard to ignore the elephant in the room, namely efficiency -- can homomorphic encryption ever be efficient enough to be practical? Certainly, it seems that all known fully homomorphic encryption schemes have a long way to go before they can be used in practice. Given this state of affairs, our contribution is two-fold. First, we exhibit a number of real-world applications, in the medical, financial, and the advertising domains, which require only that the encryption scheme is "somewhat" homomorphic. Somewhat homomorphic encryption schemes, which support a limited number of homomorphic operations, can be much faster, and more compact than fully homomorphic encryption schemes. Secondly, we show a proof-of-concept implementation of the recent somewhat homomorphic encryption scheme of Brakerski and Vaikuntanathan, whose security relies on the "ring learning with errors" (Ring LWE) problem. The scheme is very efficient, and has reasonably short ciphertexts. Our unoptimized implementation in magma enjoys comparable efficiency to even optimized pairing-based schemes with the same level of security and homomorphic capacity. We also show a number of application-specific optimizations to the encryption scheme, most notably the ability to convert between different message encodings in a ciphertext.
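To make the notion of computing on encrypted data concrete, here is a toy example using textbook Paillier, an additively homomorphic scheme, and not the Ring-LWE scheme implemented in the paper; the small fixed primes make it insecure and are for demonstration only (requires Python 3.9+ for math.lcm and modular inverse via pow):

```python
import math, secrets

def paillier_keygen(p=1000003, q=1000033):
    """Textbook Paillier keypair with small fixed primes (demo only, not secure)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)   # inverse of L(g^lam mod n^2) mod n
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = secrets.randbelow(n - 1) + 1                 # random blinding value
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

# Homomorphic addition: multiplying ciphertexts adds the underlying plaintexts.
pk, sk = paillier_keygen()
c = encrypt(pk, 20) * encrypt(pk, 22) % (pk[0] ** 2)
assert decrypt(pk, sk, c) == 42
```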

1,053 citations

Journal ArticleDOI
TL;DR: Simulations and performance evaluations show that the proposed system is able to produce many 1D chaotic maps with larger chaotic ranges and better chaotic behaviors compared with their seed maps.

694 citations

Journal ArticleDOI
TL;DR: A unique watermark is directly embedded into the encrypted images by the cloud server before the images are sent to the query user, so that when an image copy is found, the unlawful query user who distributed it can be traced by watermark extraction.
Abstract: With the increasing importance of images in people's daily life, content-based image retrieval (CBIR) has been widely studied. Compared with text documents, images consume much more storage space. Hence, their maintenance is considered a typical example of cloud storage outsourcing. For privacy-preserving purposes, sensitive images, such as medical and personal images, need to be encrypted before outsourcing, which makes CBIR technologies in the plaintext domain unusable. In this paper, we propose a scheme that supports CBIR over encrypted images without leaking the sensitive information to the cloud server. First, feature vectors are extracted to represent the corresponding images. After that, pre-filter tables are constructed by locality-sensitive hashing to increase search efficiency. Moreover, the feature vectors are protected by the secure kNN algorithm, and the image pixels are encrypted by a standard stream cipher. In addition, considering the case that authorized query users may illegally copy and distribute the retrieved images to someone unauthorized, we propose a watermark-based protocol to deter such illegal distribution. In our watermark-based protocol, a unique watermark is directly embedded into the encrypted images by the cloud server before the images are sent to the query user. Hence, when an image copy is found, the unlawful query user who distributed the image can be traced by watermark extraction. The security analysis and the experiments show the security and efficiency of the proposed scheme.
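A small illustration of the pre-filter idea mentioned above, using random-hyperplane LSH tables built over feature vectors; the specific LSH family, table parameters, and the secure kNN comparison stage of the actual scheme are not specified here and are assumptions:

```python
import numpy as np

def build_prefilter_tables(features, num_tables=4, num_bits=16, dim=128, seed=0):
    """Random-hyperplane LSH pre-filter tables. features: {image_id: feature_vector}."""
    rng = np.random.default_rng(seed)
    planes = [rng.standard_normal((num_bits, dim)) for _ in range(num_tables)]
    tables = [dict() for _ in range(num_tables)]
    for img_id, vec in features.items():
        for t, P in enumerate(planes):
            key = tuple((P @ vec > 0).astype(int))   # sign pattern = bucket key
            tables[t].setdefault(key, []).append(img_id)
    return planes, tables

def candidate_ids(query_vec, planes, tables):
    """Union of buckets the query falls into; only these candidates would go on to
    the protected (secure kNN) distance comparison stage."""
    cands = set()
    for P, table in zip(planes, tables):
        key = tuple((P @ query_vec > 0).astype(int))
        cands.update(table.get(key, []))
    return cands
```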

563 citations