
Showing papers on "Data Corruption" published in 2011


Proceedings ArticleDOI
05 Mar 2011
TL;DR: Flikker exposes and leverages an interesting trade-off between energy consumption and hardware correctness; many applications are naturally tolerant to errors in non-critical data, and in the vast majority of cases the errors have little or no impact on the application's final outcome.
Abstract: Energy has become a first-class design constraint in computer systems. Memory is a significant contributor to total system power. This paper introduces Flikker, an application-level technique to reduce refresh power in DRAM memories. Flikker enables developers to specify critical and non-critical data in programs, and the runtime system allocates this data in separate parts of memory. The portion of memory containing critical data is refreshed at the regular refresh rate, while the portion containing non-critical data is refreshed at substantially lower rates. This partitioning saves energy at the cost of a modest increase in data corruption in the non-critical data. Flikker thus exposes and leverages an interesting trade-off between energy consumption and hardware correctness. We show that many applications are naturally tolerant to errors in the non-critical data, and that in the vast majority of cases, the errors have little or no impact on the application's final outcome. We also find that Flikker can save 20-25% of the power consumed by the memory sub-system in a mobile device, with negligible impact on application performance. Flikker is implemented almost entirely in software, and requires only modest changes to the hardware.

457 citations
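The Flikker abstract describes a critical/non-critical data partitioning, but this listing shows no interface. A minimal Python sketch of the idea, with entirely hypothetical names (Flikker's real mechanism is an allocator plus DRAM refresh control, not a Python class):

```python
import random

# Minimal sketch of Flikker's idea, not its real API: data tagged
# non-critical lives in a memory region refreshed less often, so it may
# accumulate occasional bit flips that the application tolerates.

class PartitionedMemory:
    def __init__(self, noncritical_flip_prob=1e-6):
        self.critical = {}      # refreshed at the regular rate: no errors
        self.noncritical = {}   # refreshed at a lower rate: rare bit flips
        self.flip_prob = noncritical_flip_prob

    def alloc(self, name, data: bytearray, critical: bool):
        (self.critical if critical else self.noncritical)[name] = data

    def read(self, name):
        if name in self.critical:
            return self.critical[name]
        data = self.noncritical[name]
        # Simulate the modest corruption rate of a low-refresh region.
        for i in range(len(data)):
            if random.random() < self.flip_prob:
                data[i] ^= 1 << random.randrange(8)
        return data

mem = PartitionedMemory()
mem.alloc("file_table", bytearray(b"critical metadata"), critical=True)
mem.alloc("decoded_frame", bytearray(1 << 16), critical=False)  # error-tolerant
```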


Journal ArticleDOI
TL;DR: In this article, a cloud computing system is intended to improve and automate operations through a single point of control; this goal is accomplished through the elimination of duplicate entry and the contribution of data integrity, detailed drill-down, simple training, manageable support, minimal IT maintenance, easy upgrades, and reduced costs.
Abstract: A cloud computing system is intended to improve and automate operations through a single point of control. By using a single point of control, this goal is accomplished through the elimination of duplicate entry and the contribution of data integrity, detailed drill-down, simple training, manageable support, minimal IT maintenance, easy upgrades, and reduced costs. Overall, the advantages of cloud computing fulfill the original intentions of business, as it allows process manufacturers to manage their business as simply and efficiently as possible. Enterprise Resource Planning (ERP) software is designed to improve and automate business process operations. However, there are many unnecessary administrative and procedural costs and delays often associated with this practice; examples include duplicate data entry, data corruption, increased training, complicated supplier relations, greater IT support, and software incompatibilities. The purpose of this system is a single point of control, duplicate entry elimination, data integrity, detail drill-down, basic training, manageable support, security, minimal IT maintenance, easy upgrades, and reduced costs.

20 citations


Journal ArticleDOI
TL;DR: The proposed technique shows that it is possible to substantially improve the mean time to failure (MTTF) of the memory at the cost of increasing the access time for writing operations.
Abstract: Memory reliability is an important issue. The continuous scaling of transistor technology enables the use of larger memories, making soft errors more likely to occur. To ensure that those errors do not cause data corruption, error-correcting codes (ECC) are commonly used. Single-error-correction double-error-detection (SEC-DED) codes are typically implemented in each memory word, so that a single error in a word can be corrected and two errors can be detected. In this paper, a technique to improve the reliability of memories that use SEC-DED is studied. The proposed technique shows that it is possible to substantially improve the mean time to failure (MTTF) of the memory at the cost of increasing the access time for writing operations.

15 citations
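The abstract does not spell out a code construction; the standard way to obtain SEC-DED behavior is an extended Hamming code. A minimal sketch of Hamming(8,4), assuming bit-list inputs:

```python
# Extended Hamming(8,4): 4 data bits, 3 Hamming parity bits, plus one
# overall parity bit. Single errors are corrected; double errors are
# detected but not corrected (the standard SEC-DED behavior).

def encode(d):                      # d: list of 4 data bits
    c = [0] * 8                     # c[1..7] is Hamming(7,4); c[0] overall parity
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    c[0] = c[1] ^ c[2] ^ c[3] ^ c[4] ^ c[5] ^ c[6] ^ c[7]
    return c

def decode(c):
    syndrome = 0
    for i in range(1, 8):
        if c[i]:
            syndrome ^= i           # XOR of positions holding a 1 bit
    overall = 0
    for bit in c:
        overall ^= bit              # 0 if total parity still holds
    if syndrome and overall:        # single error: correct it
        c[syndrome] ^= 1
        return [c[3], c[5], c[6], c[7]], "corrected"
    if syndrome and not overall:    # two errors: detect, cannot correct
        return None, "double error detected"
    if not syndrome and overall:    # error hit the overall parity bit only
        return [c[3], c[5], c[6], c[7]], "parity bit corrected"
    return [c[3], c[5], c[6], c[7]], "ok"

word = encode([1, 0, 1, 1])
word[6] ^= 1                        # inject a single bit error
print(decode(word))                 # -> ([1, 0, 1, 1], 'corrected')
```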


Patent
Robert L. Horn1
21 Nov 2011
TL;DR: In this paper, a disk drive is disclosed that varies its data redundancy policy for caching data in non-volatile solid-state memory as the memory degrades; the redundant data can be used to recover data stored in the non-volatile memory in the event of data corruption.
Abstract: A disk drive is disclosed that varies its data redundancy policy for caching data in non-volatile solid-state memory as the memory degrades. As the non-volatile memory degrades, the redundancy of data stored in the non-volatile memory can be increased to counteract the effects of such degradation. Redundant data can be used to recover data stored in the non-volatile memory in case of a data corruption. Performance improvements and reduced costs of disk drives can thereby be attained.

11 citations
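The patent summary gives no concrete redundancy policy; the toy sketch below only illustrates the general idea of scaling redundancy with flash wear. The thresholds, and the use of program/erase cycles as the wear proxy, are assumptions:

```python
# Toy sketch (not the patented policy): as the flash wears out, store
# more redundant copies of cached data so corrupted reads can be recovered.

def replicas_for(pe_cycles, rated_cycles=3000):
    wear = pe_cycles / rated_cycles    # fraction of rated lifetime used
    if wear < 0.5:
        return 1                       # fresh flash: single copy
    if wear < 0.8:
        return 2                       # aging: mirror the data
    return 3                           # near end of life: triple store

def write_cached(cache, key, value, pe_cycles):
    cache[key] = [bytes(value)] * replicas_for(pe_cycles)

def read_cached(cache, key, checksum_ok):
    # Return the first copy that passes an integrity check, if any.
    for copy in cache[key]:
        if checksum_ok(copy):
            return copy
    raise IOError("all redundant copies corrupted")
```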


Patent
30 Jun 2011
TL;DR: In this paper, an array controller and a frame buffer are configured to read/write data to/from a drive array in response to one or more input/output requests, and the frame buffer may be implemented within the array controller.
Abstract: An apparatus comprising an array controller and a frame buffer. The array controller may be configured to read/write data to/from a drive array in response to one or more input/output requests. The frame buffer may be implemented within the array controller and may be configured to perform (i) a first data integrity check to determine a first type of data error and (ii) a second data integrity check to determine a second type of data error. The frame buffer may log occurrences of the first type of error and the second type of error in a field transmitted with the data. The field may be used to determine a source of possible corruption of the data.

9 citations
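As an illustration of the two-checks-plus-log-field idea in the abstract above (the specific checks and flag layout below are assumptions, not the patented design):

```python
import zlib

ERR_CRC  = 0x1  # first check: payload bits damaged in transit or at rest
ERR_ADDR = 0x2  # second check: block arrived tagged with the wrong address

def check_frame(payload, stored_crc, lba, expected_lba, field=0):
    """Run both integrity checks and log their outcomes as flag bits."""
    if zlib.crc32(payload) != stored_crc:
        field |= ERR_CRC
    if lba != expected_lba:
        field |= ERR_ADDR
    return field  # carried with the data; nonzero helps localize corruption
```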


Proceedings ArticleDOI
23 Mar 2011
TL;DR: A novel fault-tolerant model of AES, based on the Hamming error-correction code, is presented; it can identify errors and encrypt color images, so that data corruption due to Single Event Upsets can be avoided and performance is increased.
Abstract: This paper applies the Advanced Encryption Standard (AES) to Earth-observation small satellites. Commercial security for Earth-observation satellite images is needed to protect the valuable data transmitted from the satellite. In November 2001, NIST published Rijndael as the algorithm for AES; it provides a high level of security by utilizing the 128-bit AES encryption algorithm to encrypt and authenticate the data, and the immunity of the encryption is taken into account during the encryption process. Five modes of AES have been used to secure satellite data: ECB, CBC, OFB, CFB, and CTR. All of these techniques except OFB lead to faults during data transmission to the ground because of noisy channels, due to Single Event Upsets (SEUs). To avoid data corruption due to SEUs, a novel fault-tolerant model of AES based on the Hamming error-correction code is presented. This reduces data corruption and increases performance: errors can be identified, and the image can be encrypted in color.

7 citations
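The abstract's point about mode choice on a noisy channel can be reproduced with a short experiment using the pyca/cryptography package. This illustrates mode error propagation only, not the paper's satellite implementation or its Hamming-protected AES; ECB is omitted because it takes no IV:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(16), os.urandom(16)
plaintext = bytes(64)  # four 16-byte blocks

for name, mode in [("CBC", modes.CBC(iv)), ("CFB", modes.CFB(iv)),
                   ("OFB", modes.OFB(iv)), ("CTR", modes.CTR(iv))]:
    enc = Cipher(algorithms.AES(key), mode).encryptor()
    ct = bytearray(enc.update(plaintext) + enc.finalize())
    ct[20] ^= 0x01  # simulate a Single Event Upset in the second block
    dec = Cipher(algorithms.AES(key), mode).decryptor()
    recovered = dec.update(bytes(ct)) + dec.finalize()
    damaged = sum(a != b for a, b in zip(recovered, plaintext))
    print(f"{name}: {damaged} corrupted plaintext byte(s)")

# CBC and CFB smear the flipped bit across a whole 16-byte block (plus one
# affected byte in the adjacent block); OFB and CTR corrupt only one byte.
```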


Proceedings ArticleDOI
11 Apr 2011
TL;DR: The Proactive Checking Framework (PCF), a new framework that enables a database system to deal with data corruption automatically and proactively, is described, and a challenging research agenda to address the problem is outlined.
Abstract: The danger of production or backup data becoming corrupted is a problem that database administrators dread. This position paper aims to bring this problem to the attention of the database research community, which, surprisingly, has by and large overlooked it. We begin by pointing out the causes and consequences of data corruption. We then describe the Proactive Checking Framework (PCF), a new framework that enables a database system to deal with data corruption automatically and proactively. We use a prototype implementation of PCF to give deeper insights into the overall problem and to outline a challenging research agenda to address it.

6 citations


Patent
04 Jan 2011
TL;DR: In this paper, the authors propose a system and method that provisions at least two receivers in a topology that allows each receiver to acquire wireless communication signals through diverse antenna fields.
Abstract: The present application includes a system and method that provisions at least two (2) receivers in a topology that allows each receiver to acquire wireless communication signals through diverse antenna fields. Each receiver acquires the signal, then demodulates, decodes, and sends the data to the data terminal component. The data terminal component resolves packet-alignment issues and selects the best data. This improves system reliability and reduces susceptibility to data corruption or data loss due to signal fading that might occur on a single antenna field. Provisioning a wireless system in this manner reduces the likelihood that the same fading phenomena, resulting from multipath and/or shadowing effects, will impair signal reception and cause data dropout or loss.

6 citations
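A sketch of the selection step described above, assuming per-packet sequence numbers and CRCs (the patent text here does not specify the mechanism):

```python
import zlib

def select_best(stream_a, stream_b):
    """Each stream: a list of (seq_no, payload, crc32) tuples from one receiver."""
    merged = {}
    for seq, payload, crc in stream_a + stream_b:
        if seq not in merged and zlib.crc32(payload) == crc:
            merged[seq] = payload  # keep the first copy that passes its CRC
    return [merged[seq] for seq in sorted(merged)]

# A packet faded on one antenna field is recovered from the other receiver:
good = (0, b"hello", zlib.crc32(b"hello"))
bad = (0, b"hellx", zlib.crc32(b"hello"))   # corrupted in flight
print(select_best([bad], [good]))           # -> [b'hello']
```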


Proceedings ArticleDOI
12 Jun 2011
TL;DR: The Amulet system is introduced, which is the first system that gives administrators a declarative language to specify their objectives regarding the detection and repair of data corruption, and contains optimization and execution algorithms to ensure that the administrator's objectives are met robustly and with least cost.
Abstract: Occasional corruption of stored data is an unfortunate byproduct of the complexity of modern systems. Hardware errors, software bugs, and mistakes by human administrators can corrupt important sources of data. The dominant practice to deal with data corruption today involves administrators writing ad hoc scripts that run data-integrity tests at the application, database, file-system, and storage levels. This manual approach is tedious, error-prone, and provides no understanding of the potential system unavailability and data loss if a corruption were to occur. We introduce the Amulet system that addresses the problem of verifying the correctness of stored data proactively and continuously. To our knowledge, Amulet is the first system that: (i) gives administrators a declarative language to specify their objectives regarding the detection and repair of data corruption; (ii) contains optimization and execution algorithms to ensure that the administrator's objectives are met robustly and with least cost, e.g., using pay-as-you-go cloud resources; and (iii) provides timely notification when corruption is detected, allowing proactive repair of corruption before it impacts users and applications. We describe the implementation and a comprehensive evaluation of Amulet for a database software stack deployed on an infrastructure-as-a-service cloud provider.

5 citations
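Amulet's actual declarative language is not reproduced in this listing; the snippet below is a hypothetical rendering, as plain Python data, of the kinds of objectives the abstract describes (all field names and values are invented):

```python
# Hypothetical rendering of detection-and-repair objectives of the sort the
# Amulet abstract describes: what to check, how often, and how to repair.

objectives = [
    {
        "target":   "orders_db",          # hypothetical data source name
        "checks":   ["page_checksum", "foreign_key_integrity"],
        "schedule": "every 6 hours",
        "max_data_loss": "15 minutes",    # tolerated loss if corruption hits
        "max_downtime":  "5 minutes",
        "repair":   "restore_from_backup_then_replay_log",
        "notify":   ["dba-oncall@example.com"],
    },
]
```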


Patent
20 Jun 2011
TL;DR: In this paper, a source application reads a body of data in data block sized units and calculates a checksum value for each data block before sending the data block, the calculated checksum values and the identifier.
Abstract: A source application reads a body of data in data-block-sized units and calculates a checksum value for each data block before sending the data block, the calculated checksum value, and an identifier. Upon receipt, a destination application independently calculates a checksum value for each received data block and compares the two checksums. Non-matching checksums indicate a network-induced error in the data block. Identifiers for the erroneous data blocks are transmitted to the source application after all of the data blocks have been initially transmitted. The source application thereafter resends only those data blocks identified. The destination application repeats the process of comparing checksums and transmitting identifiers to the source application until all of the data blocks of the body of data have been correctly received, and then uses the data blocks to recreate the body of data.

4 citations
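The patented flow maps onto a short retransmission loop. A sketch using CRC-32, with the block size and channel model assumed:

```python
import os
import random
import zlib

BLOCK = 4096  # assumed block size; the patent does not fix one

def send_blocks(body):
    """Source side: split the body and checksum each block."""
    blocks = [body[i:i + BLOCK] for i in range(0, len(body), BLOCK)]
    return [(ident, blk, zlib.crc32(blk)) for ident, blk in enumerate(blocks)]

def receive(transmissions, received):
    """Destination side: keep good blocks, collect identifiers of bad ones."""
    bad = []
    for ident, blk, crc in transmissions:
        if zlib.crc32(blk) == crc:
            received[ident] = blk
        else:
            bad.append(ident)  # network-induced error: ask for a resend
    return bad

def transfer(body, channel):
    sent = send_blocks(body)
    received, pending = {}, sent
    while pending:  # repeat until every block has arrived intact
        bad = receive([channel(t) for t in pending], received)
        pending = [t for t in sent if t[0] in bad]
    return b"".join(received[i] for i in sorted(received))  # recreate body

def noisy(t):  # stand-in channel that corrupts ~20% of blocks in transit
    ident, blk, crc = t
    if random.random() < 0.2:
        blk = bytes([blk[0] ^ 0xFF]) + blk[1:]
    return ident, blk, crc

body = os.urandom(20000)
assert transfer(body, noisy) == body
```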


Book ChapterDOI
23 Oct 2011
TL;DR: This paper formulates unifying algorithm, data, and security models that make it possible to evaluate and prove the security guarantees provided by direct forensic encoding constructions built from these techniques, and from suitable combinations of them, for both data at rest and data in transit.
Abstract: Data forensics needs techniques that gather digital evidence of data corruption. While techniques like error-correcting codes, disjunct matrices, and cryptographic hashing are frequently studied and used in practical applications, very few research efforts have rigorously evaluated and combined the benefits of these techniques for data forensics purposes. In this paper we formulate unifying algorithm, data, and security models that make it possible to evaluate and prove the security guarantees provided by direct forensic encoding constructions built from these techniques and from suitable combinations of them. We rigorously clarify the different security guarantees provided by using these techniques (alone or in some standard or novel combinations) for both data at rest and data in transit. Our most novel construction provides a forensic encoding scheme that makes it possible to detect whether any errors were introduced by corrupted data senders, does not allow data intruders to detect whether the data was encoded or not, and requires no data expansion in a large-min-entropy data model, as is typical for multimedia data.
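As a concrete instance of the simplest ingredient the chapter builds on, cryptographic hashing (this is a standard per-block manifest, not the chapter's novel forensic encoding):

```python
import hashlib

BLOCK = 4096  # assumed block size

def manifest(data):
    """Per-block SHA-256 digests, recorded while the data is known-good."""
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def corrupted_blocks(data, saved):
    """Indices of blocks whose current digest no longer matches the manifest."""
    return [i for i, (new, old) in enumerate(zip(manifest(data), saved))
            if new != old]
```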

Patent
17 May 2011
TL;DR: In this paper, a memory device recognizes that data corruption is present in a block and, rather than skipping the block and continuing write operations in a different, uncorrupted block, continues to write data into the corrupted block.
Abstract: A memory device recognizes that data corruption is present in a block. In response, rather than skip the block and continue write operations into a different uncorrupted block, the memory device continues to write data into the corrupted block. The memory device may write data on the basis of logical groups. The logical groups may be smaller than a block and larger than a page, but other sizes are also possible. In response to write corruption in the block (e.g., from power loss during a write operation), the memory device may skip certain parts of the block and continue writing into the block. For example, the memory device may skip the remainder of the page range in which the logical group was going to be written when data corruption occurred, and instead write that logical group into the block from the start of the next logical group unit, the next available page, or any other boundary.

01 Jan 2011
TL;DR: In this article, the authors use static program analysis to understand and make error handling in large systems more reliable, and apply their analyses to numerous Linux file systems and drivers, finding hundreds of confirmed error handling bugs that could lead to serious problems such as system crashes, silent data loss and corruption.
Abstract: Run-time errors are unavoidable whenever software interacts with the physical world. Unchecked errors are especially pernicious in operating system file management code. Transient or permanent hardware failures are inevitable, and error-management bugs at the file system layer can cause silent, unrecoverable data corruption. Furthermore, even when developers have the best of intentions, inaccurate documentation can mislead programmers and cause software to fail in unexpected ways. We use static program analysis to understand and make error handling in large systems more reliable. We apply our analyses to numerous Linux file systems and drivers, finding hundreds of confirmed error-handling bugs that could lead to serious problems such as system crashes, silent data loss and corruption.

Journal ArticleDOI
01 Aug 2011
TL;DR: The Amulet system is developed that can verify the correctness of stored data proactively and continuously and provide timely notification when corruption is detected, allowing proactive repair of corruption before it impacts users and applications.
Abstract: Occasional corruption of stored data is an unfortunate byproduct of the complexity of modern systems. Hardware errors, software bugs, and mistakes by human administrators can corrupt important sources of data. The dominant practice to deal with data corruption today involves administrators writing ad hoc scripts that run data-integrity tests at the application, database, file-system, and storage levels. This manual approach, apart from being tedious and error-prone, provides no understanding of the potential system unavailability and data loss if a corruption were to occur. We have developed the Amulet system that can verify the correctness of stored data proactively and continuously. This demonstration focuses on the uses of Amulet and its technical innovations: (i) a declarative language for administrators to specify their objectives regarding the detection and repair of data corruption; (ii) optimization and execution algorithms to meet the administrator's objectives robustly and with least cost using pay-as-you-go cloud resources; and (iii) timely notification when corruption is detected, allowing proactive repair of corruption before it impacts users and applications.

Journal ArticleDOI
23 Dec 2011
TL;DR: A Consistency Service has been developed as part of the DQ2 Distributed Data Management system that automatically corrects the errors reported and informs the users in case of irrecoverable file loss.
Abstract: With the continuously increasing volume of data produced by ATLAS and stored on the WLCG sites, the probability of data corruption or data loss due to software and hardware failures is increasing. In order to ensure the consistency of all data produced by ATLAS, a Consistency Service has been developed as part of the DQ2 Distributed Data Management system. This service is fed by the different ATLAS tools, i.e. the analysis tools, production tools, and DQ2 site services, or by site administrators who report corrupted or lost files. It automatically corrects the errors reported and informs the users in case of irrecoverable file loss.

Patent
10 Feb 2011
TL;DR: The first switching section switches from the selected receiver buffer to another receiver buffer when the number of reception errors that occur while receiving a first predetermined amount of data exceeds a preset allowable number of reception errors.
Abstract: PROBLEM TO BE SOLVED: To prevent loss of data and communication errors such as data corruption even when a cable with a large attenuation factor is used, and to prevent degradation of transmission speed. SOLUTION: The data communication equipment has: a connection section to which a cable is connected and which includes an input/output terminal; a plurality of receiver buffers that output received data; a controller that is connected to one of the receiver buffers and tracks reception errors in the received data; and a first switching section that switches the receiver buffer in use upon receiving an instruction from the controller. Each receiver buffer has different thresholds for determining whether a signal is High and/or Low. The controller makes the first switching section switch from the selected receiver buffer to another receiver buffer when the number of reception errors that occur during reception of a first predetermined amount of data exceeds a preset allowable number of reception errors.
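A sketch of the switching rule described above (class and parameter names are hypothetical; in the patent each buffer has different High/Low thresholds in hardware, which this software sketch only mimics by cycling between buffers):

```python
# Count reception errors over a fixed amount of received data and switch to
# another receiver buffer when the count exceeds the allowable number.

class BufferSwitchController:
    def __init__(self, num_buffers, window_bytes, allowed_errors):
        self.active = 0                     # index of the buffer in use
        self.num_buffers = num_buffers
        self.window = window_bytes          # "first predetermined amount"
        self.allowed = allowed_errors
        self.seen = self.errors = 0

    def on_receive(self, nbytes, had_error):
        self.seen += nbytes
        self.errors += int(had_error)
        if self.errors > self.allowed:      # too many errors: switch buffer
            self.active = (self.active + 1) % self.num_buffers
            self.seen = self.errors = 0
        elif self.seen >= self.window:      # window passed cleanly: reset
            self.seen = self.errors = 0
```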

Patent
27 Apr 2011
TL;DR: A data protection method for external storage equipment is proposed: the equipment detects whether a hand approaches its periphery and, if so, closes data read-write operations before the equipment is unplugged.
Abstract: The invention applies to data storage equipment, and provides external storage equipment and a data protection method for it. The method comprises the following steps: (a) detecting whether a hand enters the periphery of the external storage equipment and, if so, proceeding to step (b); and (b) the external storage equipment closing data read-write operations. Under this scheme, the method detects in real time a user's preparatory motion to unplug the external storage equipment, and the equipment automatically closes read-write operations before it is unplugged. The method thus automatically protects the data on the external storage equipment and reduces the possibility of stored-data corruption, while imposing fewer limitations on the user's operation and making the device more user-friendly and intelligent.