Topic

Data Corruption

About: Data Corruption is a research topic. Over its lifetime, 435 publications have been published within this topic, receiving 6,784 citations.


Papers
Proceedings ArticleDOI
09 Mar 2015
TL;DR: A comprehensive study of 138 real-world data corruption incidents reported in Hadoop bug repositories finds that the impact of data corruption is not limited to data integrity, and that existing data corruption detection schemes are quite insufficient.
Abstract: Big data processing is one of the killer applications for cloud systems. MapReduce systems such as Hadoop are the most popular big data processing platforms used in the cloud system. Data corruption is one of the most critical problems in cloud data processing, which not only has a serious impact on the integrity of individual application results but also affects the performance and availability of the whole data processing system. In this paper, we present a comprehensive study on 138 real-world data corruption incidents reported in Hadoop bug repositories. We characterize those data corruption problems in four aspects: 1) what impact can data corruption have on the application and system? 2) how is data corruption detected? 3) what are the causes of the data corruption? and 4) what problems can occur while attempting to handle data corruption? Our study has made the following findings: 1) the impact of data corruption is not limited to data integrity; 2) existing data corruption detection schemes are quite insufficient: only 25% of data corruption problems are correctly reported, 42% are silent data corruption without any error message, and 21% receive imprecise error reports; we also found that the detection system raised 12% false alarms; 3) there are various causes of data corruption such as improper runtime checking, race conditions, inconsistent block states, improper network failure handling, and improper node crash handling; and 4) existing data corruption handling mechanisms (i.e., data replication, replica deletion, simple re-execution) make frequent mistakes, including replicating corrupted data blocks, deleting uncorrupted data blocks, or causing undesirable resource hogging.

17 citations
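The study above reports that 42% of the corruption incidents were silent, with no error message at all. A standard mitigation for silent corruption (and the general idea behind HDFS block checksums) is to store a digest next to each data block and verify it on every read. The Python sketch below only illustrates that idea; it is not code from the paper, and the file layout and function names are hypothetical.

```python
import hashlib
import os

def write_block(path: str, data: bytes) -> None:
    """Store a data block together with a SHA-256 digest in a sidecar file."""
    with open(path, "wb") as f:
        f.write(data)
    with open(path + ".sha256", "w") as f:
        f.write(hashlib.sha256(data).hexdigest())

def read_block(path: str) -> bytes:
    """Read a block and fail loudly if its content no longer matches the digest."""
    with open(path, "rb") as f:
        data = f.read()
    with open(path + ".sha256") as f:
        expected = f.read().strip()
    if hashlib.sha256(data).hexdigest() != expected:
        raise IOError(f"silent corruption detected in {path}")
    return data

if __name__ == "__main__":
    write_block("block_0001.bin", os.urandom(1024))
    read_block("block_0001.bin")  # raises only if the block was altered on disk
```

A check like this turns silent corruption into a reported error, but, as the study notes, the harder problems come afterwards: replication and re-execution can themselves propagate corrupted blocks or delete healthy ones.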

Patent
18 Jul 2012
TL;DR: A data backup method for the cloud environment is proposed, based on broadband internet and high-capacity storage space; it provides automatic and safe data protection and backs up users' important data in real time to cloud storage.
Abstract: The invention provides a data backup method for the cloud environment. The method is a backup service based on broadband internet and high-capacity storage space. In short, it assembles a large number of storage devices of various types across the network, through functions such as clustering, grid technology, and distributed file systems, and uses application software to make them work cooperatively and jointly provide data storage backup and business access services to the outside. The method provides automatic and safe data protection, automatically backing up users' important data in real time to cloud storage; once data corruption happens on a local computer, mobile phone, or other terminal, the data can be recovered with one key.

16 citations
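As a rough sketch of the behaviour this patent describes (automatic backup of important files to remote storage, with recovery when a local copy is corrupted), the Python code below copies files to a backup location, records their digests in a manifest, and restores any file whose current content no longer matches. All directory names, function names, and the manifest format are illustrative assumptions, not details from the patent.

```python
import hashlib
import json
import shutil
from pathlib import Path

def _digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup(local_dir: Path, cloud_dir: Path) -> None:
    """Copy every file to the backup location and record its digest in a manifest."""
    cloud_dir.mkdir(parents=True, exist_ok=True)
    manifest = {}
    for src in local_dir.rglob("*"):
        if src.is_file():
            dst = cloud_dir / src.relative_to(local_dir)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            manifest[str(src.relative_to(local_dir))] = _digest(src)
    (cloud_dir / "manifest.json").write_text(json.dumps(manifest, indent=2))

def restore(local_dir: Path, cloud_dir: Path) -> list:
    """Restore every file whose local copy is missing or no longer matches its digest."""
    manifest = json.loads((cloud_dir / "manifest.json").read_text())
    restored = []
    for rel, expected in manifest.items():
        local = local_dir / rel
        if not local.exists() or _digest(local) != expected:
            local.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(cloud_dir / rel, local)
            restored.append(rel)
    return restored
```

A real service would run backups continuously and store the copies remotely rather than in a local directory, but the verify-and-restore loop captures the one-key recovery idea the abstract describes.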

Journal ArticleDOI
TL;DR: It is shown that the time lost on such mistakes is enormous and dramatically affects results; therefore, mistakes should be mitigated in any way possible, and one of the goals is to expedite the use of integrity control methods.
Abstract: Gene expression microarrays are a relatively new technology, dating back just a few years, yet they have already become a very widely used tool in biology and have evolved to a wide range of applications well beyond their original design intent. However, while the use of microarrays has expanded, and the issues of performance optimization have been intensively studied, the fundamental issue of data integrity management has largely been ignored. Now that performance has improved so greatly, the shortcomings of data integrity control methods constitute a greater share of the stumbling blocks for investigators. Microarray data are cumbersome, and the rule up to this point has mostly been one of hands-on transformations, leading to human errors which often have dramatic consequences. We show in this review that the time lost on such mistakes is enormous and dramatically affects results; therefore, mistakes should be mitigated in any way possible. We outline the scope of the data integrity issue and survey some of the most common and dangerous data transformations and their shortcomings. To illustrate, we review some case studies. We then look at the work done by the research community on this issue (which admittedly is meager up to this point). Some data integrity issues are always going to be difficult, while others will become easier; one of our goals is to expedite the use of integrity control methods. Finally, we present some preliminary guidelines and some specific approaches that we believe should be the focus of future research.

16 citations
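The review above argues for integrity control around hands-on data transformations. As a concrete, entirely illustrative example of the kind of automated check it advocates, the Python sketch below validates a tab-delimited expression matrix before analysis; the column layout and the particular checks are assumptions, not prescriptions from the paper.

```python
import csv

def validate_expression_matrix(path: str, expected_genes: int, expected_samples: int) -> None:
    """Basic sanity checks on a tab-delimited expression matrix before analysis."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f, delimiter="\t"))
    header, body = rows[0], rows[1:]
    if len(header) - 1 != expected_samples:
        raise ValueError(f"expected {expected_samples} sample columns, found {len(header) - 1}")
    if len(body) != expected_genes:
        raise ValueError(f"expected {expected_genes} gene rows, found {len(body)}")
    ids = [row[0] for row in body]
    if len(set(ids)) != len(ids):
        raise ValueError("duplicate gene identifiers: a merge or sort step may have gone wrong")
    for row in body:
        for cell in row[1:]:
            float(cell)  # raises ValueError if a value was mangled into non-numeric text
```

Checks like these do not make transformations safe on their own, but they catch the truncation, row-shift, and identifier-mangling errors that manual edits tend to introduce.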

Patent
29 Aug 2007
TL;DR: A hierarchical rollback method is proposed to restore data to a previous point in the case of a data modification failure, in order to prevent incorrect linking and data corruption.
Abstract: An apparatus, system, computer program product and method are disclosed for the hierarchical rollback of business objects on a datastore. The hierarchical rollback method utilizes a non-linear process designed to restore data to a previous point in the case of a data modification failure in order to prevent incorrect linking and data corruption. The hierarchical rollback methods are generated by retrieving existing data and creating commands in an order that will prevent orphan data in a datastore.

16 citations
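The patent above centres on generating rollback commands in an order that leaves no orphan data behind. A minimal, hypothetical illustration of that ordering idea in Python is a post-order walk over the object hierarchy, so children are always handled before their parents; the example hierarchy and names are ours, not the patent's.

```python
from typing import Dict, List

def rollback_order(children: Dict[str, List[str]], root: str) -> List[str]:
    """Return object IDs in an order that handles children before their parents,
    so an interrupted rollback never leaves orphaned child records behind."""
    order: List[str] = []

    def visit(node: str) -> None:
        for child in children.get(node, []):
            visit(child)
        order.append(node)

    visit(root)
    return order

if __name__ == "__main__":
    # Hypothetical business-object hierarchy: an order with line items and a shipment.
    hierarchy = {
        "order": ["line_item_1", "line_item_2", "shipment"],
        "shipment": ["tracking_record"],
    }
    print(rollback_order(hierarchy, "order"))
    # -> ['line_item_1', 'line_item_2', 'tracking_record', 'shipment', 'order']
```

The patent's method is described as non-linear and driven by data retrieved from the datastore; this sketch only shows why processing leaves before their parents avoids dangling references.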

Proceedings ArticleDOI
13 Nov 2009
TL;DR: A novel file format that employs range encoding to provide a high degree of data compression, a three-tiered 128-bit encryption system for patient information and data security, and a 32-bit cyclic redundancy check to verify the integrity of compressed data blocks is presented.
Abstract: Continuous, long-term (up to 10 days) electrophysiological monitoring using hybrid intracranial electrodes is an emerging tool for presurgical epilepsy evaluation and fundamental investigations of seizure generation. Detection of high-frequency oscillations and microseizures could provide valuable insights into causes and therapies for the treatment of epilepsy, but requires high spatial and temporal resolution. Our group is currently using hybrid arrays composed of up to 320 micro- and clinical macroelectrode arrays sampled at 32 kHz per channel with 18 bits of A/D resolution. Such recordings produce approximately 3 terabytes of data per day. Existing file formats have limited data compression capabilities and do not offer mechanisms for protecting patient identifying information or detecting data corruption during transmission or storage. We present a novel file format that employs range encoding to provide a high degree of data compression, a three-tiered 128-bit encryption system for patient information and data security, and a 32-bit cyclic redundancy check to verify the integrity of compressed data blocks. Open-source software to read, write, and process these files is provided.

16 citations
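The file format above verifies each compressed block with a 32-bit CRC. The Python sketch below shows that check in isolation, using zlib compression as a stand-in for the paper's range encoder and omitting the encryption layer; the block layout (compressed payload followed by a 4-byte CRC) is an assumption for illustration, not the published format.

```python
import zlib

def pack_block(raw: bytes) -> bytes:
    """Compress a block and append the CRC-32 of the compressed payload."""
    compressed = zlib.compress(raw, 9)
    crc = zlib.crc32(compressed)  # unsigned 32-bit value in Python 3
    return compressed + crc.to_bytes(4, "little")

def unpack_block(packed: bytes) -> bytes:
    """Verify the trailing CRC before decompressing; raise on any mismatch."""
    compressed, stored = packed[:-4], int.from_bytes(packed[-4:], "little")
    if zlib.crc32(compressed) != stored:
        raise ValueError("CRC mismatch: compressed block corrupted in transit or storage")
    return zlib.decompress(compressed)

if __name__ == "__main__":
    block = bytes(range(256)) * 1024
    assert unpack_block(pack_block(block)) == block
```

Checking the CRC before decompression means a corrupted block is rejected outright instead of silently producing garbled samples downstream.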


Network Information
Related Topics (5)
Network packet: 159.7K papers, 2.2M citations, 82% related
Software: 130.5K papers, 2M citations, 81% related
Wireless sensor network: 142K papers, 2.4M citations, 78% related
Wireless network: 122.5K papers, 2.1M citations, 77% related
Cluster analysis: 146.5K papers, 2.9M citations, 76% related
Performance
Metrics
No. of papers in the topic in previous years

Year    Papers
2022    1
2021    21
2020    25
2019    27
2018    27
2017    27