scispace - formally typeset
Author

Harikesh Pandey

Bio: Harikesh Pandey is an academic researcher. The author has contributed to research in topics: Wireless network & Image compression. The author has an h-index of 1, and has co-authored 3 publications receiving 70 citations.

Papers
Journal Article
TL;DR: This paper addresses this open problem with a novel scheme based on techniques including polynomial-based authentication tags and homomorphic linear authenticators, and outperforms existing POR and PDP schemes while providing the additional functionality of deduplication.
Abstract: Data integrity and storage efficiency are two essential requirements for cloud storage. Proof of Retrievability (POR) and Provable Data Possession (PDP) techniques assure data integrity for cloud storage. Proof of Ownership (POW) improves storage efficiency by securely removing unnecessarily duplicated data on the storage server. However, a trivial combination of the two techniques, in order to achieve both data integrity and storage efficiency, results in non-trivial duplication of metadata (i.e., authentication tags), which contradicts the objectives of POW. Recent attempts at this problem introduce significant computational and communication costs and have also been shown to be insecure. A new solution is needed to support efficient and secure data-integrity auditing with storage deduplication for cloud storage. In this paper, we solve this open problem with a novel scheme based on techniques including polynomial-based authentication tags and homomorphic linear authenticators. Our scheme allows deduplication of both files and their corresponding authentication tags. Data-integrity auditing and storage deduplication are achieved simultaneously. Our proposed scheme also features constant real-time communication and computational cost on the client side. Public auditing and batch auditing are both supported. Hence, our proposed scheme outperforms existing POR and PDP schemes while providing the additional functionality of deduplication. We prove the security of our proposed scheme based on the Computational Diffie-Hellman problem, the Static Diffie-Hellman problem, and the t-Strong Diffie-Hellman problem. Numerical analysis and experimental results on Amazon AWS show that our scheme is efficient and scalable.
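The homomorphic linear authenticators at the heart of such auditing schemes can be illustrated with a small sketch. This is a minimal illustration, not the paper's construction: it assumes a prime field, a PRF built from SHA-256, and per-block tags sigma_i = alpha * m_i + r_i. The key property is that the prover can aggregate challenged blocks and tags into a constant-size proof that the verifier checks without holding the file.

```python
import hashlib
import secrets

# Illustrative prime field; a real scheme would use a group fitting its
# security proof (e.g. one where the Diffie-Hellman assumptions hold).
P = (1 << 127) - 1

def prf(key: bytes, i: int) -> int:
    """Pseudorandom value r_i derived from the secret key and block index."""
    return int.from_bytes(hashlib.sha256(key + i.to_bytes(8, "big")).digest(), "big") % P

def tag_blocks(alpha: int, key: bytes, blocks: list[int]) -> list[int]:
    """Tag each block: sigma_i = alpha * m_i + r_i (mod p)."""
    return [(alpha * m + prf(key, i)) % P for i, m in enumerate(blocks)]

def prove(blocks: list[int], tags: list[int], challenge: list[tuple[int, int]]):
    """Prover aggregates challenged blocks and tags into a constant-size proof."""
    mu = sum(nu * blocks[i] for i, nu in challenge) % P
    sigma = sum(nu * tags[i] for i, nu in challenge) % P
    return mu, sigma

def verify(alpha: int, key: bytes, challenge, mu: int, sigma: int) -> bool:
    """Verifier checks sigma == alpha*mu + sum(nu_i * r_i) without the blocks."""
    expected = (alpha * mu + sum(nu * prf(key, i) for i, nu in challenge)) % P
    return sigma == expected
```

Because the tags are linear in the block values, any linear combination of valid tags verifies against the same combination of blocks, which is what keeps the proof size constant regardless of file length.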

72 citations

Journal Article
TL;DR: This research work presents a technique for image compression using Discrete Cosine Transform and Fuzzy Logic techniques, which shows improved performance in both compression ratio and image perceptibility.
Abstract: Today there have been big advancements in digital technology that have led to the development of various easily usable devices and methods, especially in the fields of communications and long-distance data transfer. The transmission of data in the form of documents, images, voice, etc. is now reachable to all parts of society, and the services are affordable to a larger number of people. An important aspect is data compression and, in particular, image compression, as images form a large part of the data exchanged over the internet through social networking and messaging sites and apps all over the world. Among all the various kinds of data, images and videos constitute the bulkiest data. Thus, the need for compressing image and video files is an important aspect of data communication. In this research work we present a technique for image compression using Discrete Cosine Transform and Fuzzy Logic techniques. The algorithm used in this paper is tested on several images, and the results are compared with other techniques. Our method shows improved performance in both compression ratio and image perceptibility.
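As a rough illustration of the DCT stage only (the paper's fuzzy-logic component is not reproduced here), the sketch below computes a JPEG-style 8x8 two-dimensional DCT in pure Python and discards high-frequency coefficients. The coefficient-ordering heuristic and all names are illustrative assumptions, not the paper's algorithm.

```python
import math

N = 8  # JPEG-style 8x8 blocks

def _c(u: int) -> float:
    """DCT-II normalization constant, making the transform orthonormal."""
    return math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)

def dct2(block):
    """2-D DCT-II of an 8x8 block (list of lists of pixel values)."""
    return [[_c(u) * _c(v) * sum(
        block[x][y]
        * math.cos((2 * x + 1) * u * math.pi / (2 * N))
        * math.cos((2 * y + 1) * v * math.pi / (2 * N))
        for x in range(N) for y in range(N))
        for v in range(N)] for u in range(N)]

def idct2(coeffs):
    """Inverse 2-D DCT, reconstructing pixel values from coefficients."""
    return [[sum(
        _c(u) * _c(v) * coeffs[u][v]
        * math.cos((2 * x + 1) * u * math.pi / (2 * N))
        * math.cos((2 * y + 1) * v * math.pi / (2 * N))
        for u in range(N) for v in range(N))
        for y in range(N)] for x in range(N)]

def compress(block, keep=16):
    """Keep only the `keep` lowest-frequency coefficients (zig-zag order
    approximated here by u+v ordering); zero out the rest."""
    coeffs = dct2(block)
    order = sorted(((u, v) for u in range(N) for v in range(N)),
                   key=lambda p: (p[0] + p[1], p[0]))
    for u, v in order[keep:]:
        coeffs[u][v] = 0.0
    return coeffs
```

Because most of a smooth image block's energy sits in the low-frequency coefficients, discarding the rest loses little perceptible detail; a fuzzy-logic stage, as described in the abstract, would adapt how aggressively coefficients are dropped.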

1 citation

Journal Article
TL;DR: In this research work various energy-efficient schemes applied in WSNs have been studied; a clustering-based approach has been examined and a modified protocol, based on selection probability, has been implemented.
Abstract: Wireless Sensor Networks (WSNs) are being used extensively for monitoring and surveillance in several fields such as military areas, agricultural fields, forests, nuclear reactors, etc. A Wireless Sensor Network generally consists of a large number of small and low-cost sensor nodes powered by small non-rechargeable batteries and equipped with various sensing devices. A node is expected to become active suddenly, gathering the required data for some time when something is detected, and then to remain largely inactive for long periods. So, efficient power-saving schemes and corresponding algorithms must be designed and developed to provide reasonable energy consumption and to improve the network lifetime of WSNs. The cluster-based technique is one of the good approaches to reduce energy consumption in wireless sensor networks. The lifetime of wireless sensor networks is extended by using uniform cluster locations and balancing the network load among the clusters. In this research work various energy-efficient schemes applied in WSNs have been studied. The clustering-based approach has been studied and a modified protocol, based on selection probability, has been implemented. A sensor transmits only when the threshold level for this selection is reached. It selects a node as a cluster head if its residual energy is more than the system average energy and it had a lower energy-consumption rate in the previous round. The goals of this scheme are to increase the stability period of the network and to minimize the loss of sensed data.

Cited by
Posted Content
TL;DR: This paper defines and explores proofs of retrievability (PORs): a POR scheme enables an archive or back-up service to produce a concise proof that a user can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety.
Abstract: In this paper, we define and explore proofs of retrievability (PORs). A POR scheme enables an archive or back-up service (prover) to produce a concise proof that a user (verifier) can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety. A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a large file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of F. In addition to proposing new, practical POR constructions, we explore implementation considerations and optimizations that bear on previously explored, related schemes. In a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition whose formulation is another contribution of our work. We view PORs as an important tool for semi-trusted online archives. Existing cryptographic techniques help users ensure the privacy and integrity of files they retrieve. It is also natural, however, for users to want to verify that archives do not delete or modify files prior to retrieval. The goal of a POR is to accomplish these checks without users having to download the files themselves. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound.
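A minimal spot-check flavour of a POR can be sketched with per-block MACs: the verifier keeps only a secret key, the archive stores the blocks together with index-bound tags, and a random challenge catches deleted or modified blocks with high probability. This HMAC-based sketch is far simpler than the paper's constructions (no error-correcting encoding, no sentinels, no constant-size aggregated proofs); names are illustrative.

```python
import hashlib
import hmac

def tag_file(key: bytes, blocks: list) -> list:
    """Archive stores each block alongside an HMAC bound to its index,
    so blocks cannot be swapped or substituted without detection."""
    return [hmac.new(key, i.to_bytes(8, "big") + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]

def prove(blocks: list, tags: list, challenged: list):
    """Prover answers a spot-check with the requested blocks and their tags."""
    return [(i, blocks[i], tags[i]) for i in challenged]

def verify(key: bytes, response) -> bool:
    """Verifier keeps only the key; a missing or altered block fails the check."""
    return all(
        hmac.compare_digest(
            t, hmac.new(key, i.to_bytes(8, "big") + b, hashlib.sha256).digest())
        for i, b, t in response)
```

An archive that has discarded even a small fraction of the blocks fails a random challenge with probability growing quickly in the number of challenged indices, which is the intuition behind POR spot-checking.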

1,783 citations

Journal ArticleDOI
TL;DR: This paper makes the first attempt to formally address the problem of authorized data deduplication, and shows that the proposed authorized duplicate check scheme incurs minimal overhead compared to normal operations.
Abstract: Data deduplication is one of the important data compression techniques for eliminating duplicate copies of repeating data, and has been widely used in cloud storage to reduce the amount of storage space and save bandwidth. To protect the confidentiality of sensitive data while supporting deduplication, the convergent encryption technique has been proposed to encrypt the data before outsourcing. To better protect data security, this paper makes the first attempt to formally address the problem of authorized data deduplication. Different from traditional deduplication systems, the differential privileges of users are further considered in the duplicate check, in addition to the data itself. We also present several new deduplication constructions supporting authorized duplicate check in a hybrid cloud architecture. Security analysis demonstrates that our scheme is secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement a prototype of our proposed authorized duplicate check scheme and conduct testbed experiments using our prototype. We show that our proposed authorized duplicate check scheme incurs minimal overhead compared to normal operations.
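The convergent encryption technique mentioned above derives the encryption key from the plaintext itself, so identical files encrypt to identical ciphertexts and the server can deduplicate them without seeing the content. The sketch below is illustrative only: it uses a SHA-256 counter-mode keystream as a stand-in for a real block cipher such as AES-CTR, and ignores the authorization layer that is the paper's actual contribution.

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    """Counter-mode keystream from SHA-256 (stand-in for AES-CTR)."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def convergent_encrypt(data: bytes):
    """Convergent encryption: the key is the hash of the plaintext, so equal
    plaintexts produce equal ciphertexts and a deterministic dedup tag."""
    key = hashlib.sha256(data).digest()
    cipher = bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))
    dedup_tag = hashlib.sha256(cipher).hexdigest()  # server's duplicate-check handle
    return key, cipher, dedup_tag

def convergent_decrypt(key: bytes, cipher: bytes) -> bytes:
    """XOR with the same keystream recovers the plaintext."""
    return bytes(a ^ b for a, b in zip(cipher, _keystream(key, len(cipher))))
```

The determinism that enables deduplication is also convergent encryption's known weakness: anyone who can guess a plaintext can confirm the guess, which is one motivation for the authorized duplicate check studied in the paper.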

394 citations

Journal ArticleDOI
TL;DR: This survey addresses the issue of how blockchain technology fits into currently deployed cloud solutions and enables the reengineering of the cloud datacenter, and investigates recent efforts in the technical fusion of blockchain and clouds.
Abstract: Blockchain technology has been deemed an ideal choice for strengthening existing computing systems in varied manners. As one of the network-enabled technologies, cloud computing has been broadly adopted in industry through numerous cloud service models. Fusing blockchain technology with existing cloud systems has great potential for both functionality/performance enhancement and security/privacy improvement. The question remains how blockchain technology fits into currently deployed cloud solutions and enables the reengineering of the cloud datacenter. This survey addresses this issue and investigates recent efforts in the technical fusion of blockchain and clouds. Three technical dimensions are roughly covered in this work. First, we consider the service model and review an emerging cloud-relevant blockchain service model, Blockchain-as-a-Service (BaaS); second, security is considered a key technical dimension in this work, and both access control and searchable encryption schemes are assessed; finally, we examine the performance of the cloud datacenter with the support/participation of blockchain from hardware and software perspectives. The main findings of this survey will serve as theoretical support for future blockchain-enabled reengineering of cloud datacenters.

190 citations

Journal ArticleDOI
TL;DR: This paper proposes a scheme to deduplicate encrypted data stored in the cloud based on ownership challenge and proxy re-encryption, integrating cloud data deduplication with access control, and evaluates its performance through extensive analysis and computer simulations.
Abstract: Cloud computing offers a new way of service provision by re-arranging various resources over the Internet. The most important and popular cloud service is data storage. In order to preserve the privacy of data holders, data are often stored in cloud in an encrypted form. However, encrypted data introduce new challenges for cloud data deduplication, which becomes crucial for big data storage and processing in cloud. Traditional deduplication schemes cannot work on encrypted data. Existing solutions of encrypted data deduplication suffer from security weakness. They cannot flexibly support data access control and revocation. Therefore, few of them can be readily deployed in practice. In this paper, we propose a scheme to deduplicate encrypted data stored in cloud based on ownership challenge and proxy re-encryption. It integrates cloud data deduplication with access control. We evaluate its performance based on extensive analysis and computer simulations. The results show the superior efficiency and effectiveness of the scheme for potential practical deployment, especially for big data deduplication in cloud storage.
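The "ownership challenge" idea — forcing a client to prove it actually holds a file before the server deduplicates against an existing copy — can be sketched as a nonce-bound spot check over random byte positions. This is an illustrative stand-in, not the paper's protocol (which additionally uses proxy re-encryption for access control); all names are hypothetical.

```python
import hashlib
import hmac
import secrets

def ownership_challenge(file_len: int, k: int = 3):
    """Server picks k random byte offsets of the stored file, plus a fresh
    nonce so responses cannot be replayed."""
    return [secrets.randbelow(file_len) for _ in range(k)], secrets.token_bytes(16)

def ownership_response(data: bytes, offsets: list, nonce: bytes) -> bytes:
    """Claimant hashes the challenged bytes together with the nonce; knowing
    only the file's hash (e.g. a dedup tag) is not enough to answer."""
    h = hashlib.sha256(nonce)
    for off in offsets:
        h.update(data[off:off + 1])
    return h.digest()

def ownership_verify(stored: bytes, offsets: list, nonce: bytes,
                     response: bytes) -> bool:
    """Server recomputes the response over its own copy and compares."""
    return hmac.compare_digest(ownership_response(stored, offsets, nonce), response)
```

Without such a check, a client could claim ownership of any file whose short hash it has learned, and the deduplicating server would grant it access to the full content.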

139 citations

Journal ArticleDOI
TL;DR: This work proposes two secure systems, namely SecCloud and SecCloud+; SecCloud+ is motivated by the fact that customers always want to encrypt their data before uploading, and enables integrity auditing and secure deduplication on encrypted data.
Abstract: As cloud computing technology has developed over the last decade, outsourcing data to cloud services for storage has become an attractive trend, which spares the effort of heavy data maintenance and management. Nevertheless, since outsourced cloud storage is not fully trustworthy, it raises security concerns about how to realize data deduplication in the cloud while achieving integrity auditing. In this work, we study the problem of integrity auditing and secure deduplication on cloud data. Specifically, aiming at achieving both data integrity and deduplication in the cloud, we propose two secure systems, namely SecCloud and SecCloud+. SecCloud introduces an auditing entity maintaining a MapReduce cloud, which helps clients generate data tags before uploading as well as audit the integrity of data stored in the cloud. Compared with previous work, the computation by the user in SecCloud is greatly reduced during the file uploading and auditing phases. SecCloud+ is motivated by the fact that customers always want to encrypt their data before uploading, and enables integrity auditing and secure deduplication on encrypted data.

134 citations