
Data Corruption

About: Data Corruption is a research topic. Over its lifetime, 435 publications have been published within this topic, receiving 6,784 citations.


Papers
Book ChapterDOI
01 Jan 2021
TL;DR: In this article, the authors considered different user data protection policies aimed at making the data resilient to co-residence attacks, including data partition with and without replication of the parts, and attack detection through an early-warning mechanism.
Abstract: Virtualization technology, particularly the virtual machines (VMs) used in cloud computing systems, has raised unique security and reliability risks for cloud users. This chapter focuses on resilience to one such risk, co-residence attacks, in which a user's information in one VM can be accessed, stolen, or corrupted through side channels by a malicious attacker's VM co-residing on the same physical server. Both users' and attackers' VMs are distributed among cloud servers at random. We consider different user data protection policies aimed at making the data resilient to co-residence attacks, including data partition with and without replication of the parts, and attack detection through an early-warning mechanism. Probabilistic models are proposed to derive the overall probabilities of an attacker's success in data theft and data corruption. Based on these models, optimization problems are formulated and solved to obtain the data partition/replication policy that balances data security, data reliability, and user overheads, yielding the optimal data protection policy for data resilience. Possible user uncertainty about the number of attacker VMs is taken into account. Numerical examples demonstrate the influence of different constraints on the optimal policy.
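To make the modeling idea concrete, here is a small Monte Carlo sketch of the two success probabilities under a simplified version of the setting: parts stored on distinct servers chosen uniformly at random, no replication, and a known number of attacker VMs. The parameter names and the uniform-placement assumption are illustrative only, not the chapter's analytical model.

```python
import random

def simulate(num_servers=100, num_parts=3, attacker_vms=10, trials=100_000):
    """Monte Carlo estimate of co-residence attack outcomes.

    The data is split into num_parts parts, each stored on a distinct
    server chosen uniformly at random (no replication); the attacker's
    VMs land on uniformly random servers. Full theft requires reaching
    every part, corruption requires reaching any part.
    """
    servers = range(num_servers)
    theft = corruption = 0
    for _ in range(trials):
        part_servers = set(random.sample(servers, num_parts))
        attacker_servers = {random.choice(servers) for _ in range(attacker_vms)}
        reached = part_servers & attacker_servers
        if reached:
            corruption += 1
            if reached == part_servers:
                theft += 1
    return theft / trials, corruption / trials

p_theft, p_corrupt = simulate()
print(f"P(full data theft) ~ {p_theft:.4f}")
print(f"P(data corruption) ~ {p_corrupt:.4f}")
```

The simulation reflects the trade-off the chapter optimizes: more parts make full theft harder (the attacker must co-reside with every part) but corruption easier (reaching any one part suffices), which is why the partition and replication policy must balance the two against the user's overheads.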

3 citations

01 Jan 2013
TL;DR: Precomputing tokens for large volumes of data improves the performance of error detection in storage services; homomorphic precomputation is used in this paper to identify data corruption.
Abstract: Cloud computing delivers infrastructure, platform, and software as services made available in a pay-as-you-go model. Cloud storage is a model of networked online storage in which data is stored in virtualized storage pools that are generally hosted by third parties. Users interact with the cloud servers through Cloud Storage Providers (CSPs) to access or retrieve their data. Because users cannot continuously monitor their data online, they entrust the auditing task to an optional Third Party Auditor (TPA). The main aim is to ensure storage correctness across multiple servers and to localize data errors. Homomorphic precomputation is used in this paper to identify data corruption: the user precomputes tokens for the data file, and the server computes signatures over specified blocks; a signature that does not match its precomputed token indicates data corruption. A Byzantine fault-tolerant algorithm (a data error localization algorithm) is then used to identify on which server the data was corrupted, i.e., which server is misbehaving. Once the error is detected, the Reed-Solomon algorithm is used to recover the corrupted data. Precomputing tokens for large volumes of data improves the performance of error detection in storage services.
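A minimal sketch of the precomputed-token idea, substituting per-block HMAC-SHA256 for the paper's homomorphic construction; the block size, key handling, and function names below are assumptions for illustration.

```python
import hashlib
import hmac

BLOCK_SIZE = 4096  # illustrative block size

def precompute_tokens(key: bytes, data: bytes) -> list[bytes]:
    """User side: compute one token (MAC) per block before upload."""
    return [hmac.new(key, data[i:i + BLOCK_SIZE], hashlib.sha256).digest()
            for i in range(0, len(data), BLOCK_SIZE)]

def verify_blocks(key: bytes, stored: bytes, tokens: list[bytes]) -> list[int]:
    """Audit side: recompute the signature of each stored block and
    return the indices whose signature no longer matches its token,
    i.e. localize errors down to the block level."""
    corrupted = []
    for idx, token in enumerate(tokens):
        block = stored[idx * BLOCK_SIZE:(idx + 1) * BLOCK_SIZE]
        sig = hmac.new(key, block, hashlib.sha256).digest()
        if not hmac.compare_digest(sig, token):
            corrupted.append(idx)
    return corrupted

key = b"user-secret-key"
original = b"A" * 10_000
tokens = precompute_tokens(key, original)
tampered = original[:5000] + b"X" + original[5001:]  # flip one byte in block 1
print(verify_blocks(key, tampered, tokens))           # -> [1]
```

In the paper's setting, mismatching blocks are then mapped to the servers storing them to identify the misbehaving server, and the corrupted blocks are rebuilt with Reed-Solomon coding; the sketch stops at block-level detection.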

2 citations

Proceedings ArticleDOI
22 Mar 2021
TL;DR: In this article, the authors propose an efficient and robust data integrity verification scheme for large-scale data transfer between computing systems with high-performance storage devices, where the order of I/O operations is controlled to ensure the robustness of the integrity verification.
Abstract: Most of the data generated on high-performance computing systems is transferred to storage on remote systems for purposes such as backup. To detect data corruption caused by network or storage failures during transfer, the receiving system verifies data integrity by comparing checksums of the data. However, existing end-to-end integrity verification techniques do not sufficiently account for the internal operation of the storage device. In this paper, we propose an efficient and robust data integrity verification scheme for large-scale data transfer between computing systems with high-performance storage devices. To make the integrity verification robust, we control the order of I/O operations. In addition, we parallelize checksum computation and overlap it with I/O operations to make the verification efficient.
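The scheme's control over I/O ordering is not modeled here, but the overlap of checksum computation with I/O can be sketched with a thread pool. Chunk size, hash choice, and names below are assumptions; in CPython, hashlib releases the GIL on large buffers, so the hashing threads genuinely run alongside the next read.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 8 * 1024 * 1024  # 8 MiB per read; the right size depends on the device

def chunk_checksums(path: str, workers: int = 4) -> list[str]:
    """Read the file sequentially, but hash each chunk on a worker
    thread so checksum computation overlaps with the next read.
    Per-chunk digests also localize which chunk was corrupted."""
    futures = []
    with open(path, "rb") as f, ThreadPoolExecutor(max_workers=workers) as pool:
        while buf := f.read(CHUNK):
            futures.append(pool.submit(lambda b: hashlib.sha256(b).hexdigest(), buf))
    return [fut.result() for fut in futures]
```

Sender and receiver each compute such a digest list; a mismatch at index i localizes the corrupted chunk instead of forcing a whole-file retransfer.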

2 citations

Patent
17 Nov 2008
TL;DR: A duplicate address discovery process detects duplicate MAC addresses or duplicate unique port identifiers within the network, alerts attached devices of the duplicates, and takes action to avoid data corruption that might be caused by such duplicate addresses.
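A toy sketch of the bookkeeping at the heart of such a discovery process: collect the addresses advertised on the network and flag any MAC claimed by more than one device, so attached devices can be alerted before the duplicate causes corruption. The names are illustrative; the patented discovery protocol itself runs inside the network fabric rather than on a host.

```python
from collections import defaultdict

def find_duplicate_macs(endpoints):
    """endpoints: iterable of (device_name, mac_address) pairs.
    Returns {mac: [devices...]} for every MAC claimed by more than
    one device -- the condition the discovery process alerts on."""
    seen = defaultdict(list)
    for device, mac in endpoints:
        seen[mac.lower()].append(device)
    return {mac: devs for mac, devs in seen.items() if len(devs) > 1}

fabric = [("host-a", "00:1B:44:11:3A:B7"),
          ("host-b", "00:1B:44:11:3A:B8"),
          ("host-c", "00:1b:44:11:3a:b7")]  # duplicates host-a's MAC
print(find_duplicate_macs(fabric))
# {'00:1b:44:11:3a:b7': ['host-a', 'host-c']}
```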

2 citations


Network Information
Related Topics (5)
Network packet: 159.7K papers, 2.2M citations, 82% related
Software: 130.5K papers, 2M citations, 81% related
Wireless sensor network: 142K papers, 2.4M citations, 78% related
Wireless network: 122.5K papers, 2.1M citations, 77% related
Cluster analysis: 146.5K papers, 2.9M citations, 76% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    1
2021    21
2020    25
2019    27
2018    27
2017    27