
Showing papers on "Data Corruption published in 2014"


Proceedings ArticleDOI
19 May 2014
TL;DR: This paper proposes F-SEFI, a Fine-grained Soft Error Fault Injector, as a tool for profiling software robustness against soft errors, and demonstrates use cases of F-SEFI on several benchmark applications to show how data corruption can propagate to incorrect results.
Abstract: As the high performance computing (HPC) community continues to push towards exascale computing, resilience remains a serious challenge. With the expected decrease of both feature size and operating voltage, we expect a significant increase in hardware soft errors. HPC applications of today are only affected by soft errors to a small degree but we expect that this will become a more serious issue as HPC systems grow. We propose F-SEFI, a Fine-grained Soft Error Fault Injector, as a tool for profiling software robustness against soft errors. In this paper we utilize soft error injection to mimic the impact of errors on logic circuit behavior. Leveraging the open source virtual machine hypervisor QEMU, F-SEFI enables users to modify emulated machine instructions to introduce soft errors. F-SEFI can control what application, which sub-function, when and how to inject soft errors with different granularities, without interference to other applications that share the same environment. F-SEFI does this without requiring revisions to the application source code, compilers or operating systems. We discuss the design constraints for F-SEFI and the specifics of our implementation. We demonstrate use cases of F-SEFI on several benchmark applications to show how data corruption can propagate to incorrect results.
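F-SEFI itself injects faults at the level of emulated machine instructions inside QEMU; the sketch below is only a minimal, application-level illustration of the kind of single-bit corruption such an injector introduces into a floating-point operand (the target computation and injection point are hypothetical).

```python
import random
import struct

def flip_one_bit(value, bit=None):
    """Flip one bit in the IEEE-754 double representation of `value`."""
    raw = struct.unpack("<Q", struct.pack("<d", value))[0]
    if bit is None:
        bit = random.randrange(64)              # random bit position
    return struct.unpack("<d", struct.pack("<Q", raw ^ (1 << bit)))[0]

# Hypothetical target: corrupt one operand mid-way through a reduction,
# then observe whether the corruption propagates to the final result.
xs = [0.1 * k for k in range(1000)]
acc = 0.0
for i, x in enumerate(xs):
    if i == 500:                                # single injection point
        x = flip_one_bit(x)
    acc += x
print("result after one injected bit flip:", acc)
```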

67 citations


Proceedings ArticleDOI
17 Feb 2014
TL;DR: ViewBox is presented, an integrated synchronization service and local file system that protects against data corruption and inconsistency; it detects and recovers from both while incurring minimal overhead.
Abstract: Cloud-based file synchronization services have become enormously popular in recent years, both for their ability to synchronize files across multiple clients and for the automatic cloud backups they provide. However, despite the excellent reliability that the cloud back-end provides, the loose coupling of these services and the local file system makes synchronized data more vulnerable than users might believe. Local corruption may be propagated to the cloud, polluting all copies on other devices, and a crash or untimely shutdown may lead to inconsistency between a local file and its cloud copy. Even without these failures, these services cannot provide causal consistency. To address these problems, we present ViewBox, an integrated synchronization service and local file system that provides freedom from data corruption and inconsistency. ViewBox detects these problems using ext4-cksum, a modified version of ext4, and recovers from them using a user-level daemon, cloud helper, to fetch correct data from the cloud. To provide a stable basis for recovery, ViewBox employs the view manager on top of ext4-cksum. The view manager creates and exposes views, consistent in-memory snapshots of the file system, which the synchronization client then uploads. Our experiments show that ViewBox detects and recovers from both corruption and inconsistency, while incurring minimal overhead.
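ext4-cksum adds checksums inside the file system itself; the following is only a minimal user-level sketch of the underlying idea, detecting corrupted blocks by comparing per-block checksums against previously stored values before a file would be synchronized (the block size, function names, and choice of CRC32 are assumptions).

```python
import zlib

BLOCK_SIZE = 4096   # assumed block size

def block_checksums(path):
    """CRC32 per 4 KiB block of a file (a stand-in for the per-block
    checksums a checksumming file system keeps alongside the data)."""
    sums = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            sums.append(zlib.crc32(chunk))
    return sums

def corrupted_blocks(path, stored_sums):
    """Block indices whose current checksum no longer matches the stored
    one; such blocks should be re-fetched from the cloud copy, not uploaded."""
    current = block_checksums(path)
    return [i for i, (a, b) in enumerate(zip(current, stored_sums)) if a != b]
```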

48 citations


Proceedings ArticleDOI
06 Feb 2014
TL;DR: A novel technique to detect silent data corruption based on data monitoring is proposed and it is shown that this technique can detect up to 50% of injected errors while incurring only negligible overhead.
Abstract: Parallel programming has become one of the best ways to express scientific models that simulate a wide range of natural phenomena. These complex parallel codes are deployed and executed on large-scale parallel computers, making them important tools for scientific discovery. As supercomputers get faster and larger, the increasing number of components is leading to higher failure rates. In particular, the miniaturization of electronic components is expected to lead to a dramatic rise in soft errors and data corruption. Moreover, soft errors can corrupt data silently and generate large inaccuracies or wrong results at the end of the computation. In this paper we propose a novel technique to detect silent data corruption based on data monitoring. Using this technique, an application can learn the normal dynamics of its datasets, allowing it to quickly spot anomalies. We evaluate our technique with synthetic benchmarks and we show that our technique can detect up to 50% of injected errors while incurring only negligible overhead.
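The paper learns the normal dynamics of application datasets; as a rough, hedged illustration of that idea, the sketch below flags points whose change between consecutive timesteps is far outside the typical point-wise change (the threshold rule is an assumption, not the paper's detector).

```python
import numpy as np

def detect_sdc(prev_step, curr_step, k=5.0):
    """Flag points whose change between timesteps is far outside the
    typical point-wise change (a crude stand-in for learned dynamics)."""
    delta = np.abs(curr_step - prev_step)
    tol = delta.mean() + k * delta.std()     # k is an assumed sensitivity knob
    return np.where(delta > tol)[0]          # indices of suspect points

# Toy example: a smooth field with one silently corrupted value.
prev = np.sin(np.linspace(0, 3.14, 1000))
curr = prev + 1e-3
curr[417] += 0.5                             # injected bit-flip-like error
print(detect_sdc(prev, curr))                # -> [417]
```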

45 citations


Journal ArticleDOI
TL;DR: In this article, the impact of supervisory control and data acquisition (SCADA) data corruption on real-time locational marginal price (LMP) in electricity markets is examined.
Abstract: This paper examines the impact of supervisory control and data acquisition (SCADA) data corruption on real-time locational marginal price (LMP) in electricity markets. We present an analytical framework to quantify LMP sensitivity with respect to changes in sensor data. This framework consists of a unified LMP sensitivity matrix subject to sensor data corruption. This sensitivity matrix reflects a coupling among the sensor data, an estimation of the power system states, and the real-time LMP. The proposed framework offers system operators an online tool to: 1) quantify the impact of corrupted data at any sensor on LMP variations at any bus; 2) identify buses with LMPs highly sensitive to data corruption; and 3) find sensors that impact LMP changes significantly and influentially. It also allows system operators to evaluate the impact of SCADA data accuracy on real-time LMP. The results of the proposed sensitivity based analysis are illustrated and verified with IEEE 14-bus and 118-bus systems with both Ex-ante and Ex-post real-time pricing models.
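As a hedged sketch of how such a sensitivity matrix can be read (the notation below is ours, not the paper's): with z the SCADA measurements, x-hat the estimated state, and lambda the vector of real-time LMPs, the sensitivity composes the state-estimation gain with the LMPs' dependence on the estimated state,

```latex
% Notation assumed for illustration (not the paper's):
%   z       : vector of SCADA sensor measurements
%   \hat{x} : estimated power-system state
%   \lambda : vector of real-time LMPs
\Delta\lambda \;\approx\;
   \frac{\partial \lambda}{\partial \hat{x}}\,
   \frac{\partial \hat{x}}{\partial z}\,
   \Delta z
   \;=\; \Lambda\, \Delta z
```

so a large entry of the unified sensitivity matrix Λ in row i and column j marks a bus i whose LMP is highly sensitive to corruption of sensor j, matching uses 2) and 3) listed in the abstract.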

32 citations


Patent
04 Jun 2014
TL;DR: In this paper, the last resort zone of the NVM media is associated with a higher risk of data loss or data corruption than other portions of the media and is reserved as unavailable for storing data.
Abstract: A data storage device (DSD) includes a non-volatile memory (NVM) media for storing data. A last resort zone of the NVM media is associated with a higher risk of data loss or data corruption than other portions of the NVM media and is reserved as unavailable for storing data. It is determined whether a current data storage capacity and/or an environmental condition for the NVM media has reached a threshold. The last resort zone is set as available for storing data if it is determined that the threshold has been reached and data is written in the last resort zone.

16 citations


Proceedings ArticleDOI
Keun Soo Yim1
19 May 2014
TL;DR: It is experimentally shown that a single-bit error in non-control data can change the final total energy of a large-scale N-body program with ~2.1% probability, and that the corrupted total energy values have certain biases that can be used to reduce the expected number of re-executions.
Abstract: In N-body programs, trajectories of simulated particles have chaotic patterns if errors are in the initial conditions or occur during some computation steps. It was believed that the global properties (e.g., total energy) of simulated particles are unlikely to be affected by a small number of such errors. In this paper, we present a quantitative analysis of the impact of transient faults in GPU devices on a global property of simulated particles. We experimentally show that a single-bit error in non-control data can change the final total energy of a large-scale N-body program with ~2.1% probability. We also find that the corrupted total energy values have certain biases (e.g., the values are not a normal distribution), which can be used to reduce the expected number of re-executions. In this paper, we also present a data error detection technique for N-body programs by utilizing two types of properties that hold in simulated physical models. The presented technique and an existing redundancy-based technique together cover many data errors (e.g., >97.5%) with a small performance overhead (e.g., 2.3%).
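One of the physical properties such a detector can exploit is conservation of total energy; the sketch below is a minimal, hedged illustration of that check (the gravitational constant, softening, and tolerance are assumed values, not the paper's).

```python
import numpy as np

G, EPS = 1.0, 1e-3                       # assumed units and softening

def total_energy(pos, vel, mass):
    """Kinetic plus pairwise gravitational potential energy."""
    kinetic = 0.5 * np.sum(mass * np.sum(vel**2, axis=1))
    potential = 0.0
    for i in range(len(mass)):
        r = np.linalg.norm(pos[i+1:] - pos[i], axis=1) + EPS
        potential -= G * np.sum(mass[i] * mass[i+1:] / r)
    return kinetic + potential

def energy_check(e_ref, pos, vel, mass, rel_tol=1e-5):
    """Flag a step whose total energy drifts beyond the expected tolerance,
    hinting that a data error may have corrupted particle state."""
    e = total_energy(pos, vel, mass)
    return abs(e - e_ref) > rel_tol * abs(e_ref)
```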

15 citations


Proceedings ArticleDOI
17 Feb 2014
TL;DR: This work shows that a runtime checker must enforce the atomicity and durability properties of the file system on every write, in addition to checking transactions at commit time, to provide the strong guarantee that every block write will maintain file system consistency.
Abstract: Data corruption is the most common consequence of file-system bugs, as shown by a recent study. When such corruption occurs, the file system's offline check and recovery tools need to be used, but they are error prone and cause significant downtime. Previous work has shown that a runtime checker for the Ext3 journaling file system can verify that metadata updates within a transaction are mutually consistent, helping detect corruption in metadata blocks at commit time. However, corruption can still be caused when a bug in the file system's transactional mechanism loses, misdirects, or corrupts writes. We show that a runtime checker needs to enforce the atomicity and durability properties of the file system on every write, in addition to checking transactions at commit time, to provide the strong guarantee that every block write will maintain file system consistency. In this paper, we identify the invariants that need to be enforced on journaling and shadow paging file systems to preserve the integrity of committed transactions. We also describe the key properties that make it feasible to check these invariants for a file system. Based on this characterization, we have implemented runtime checkers for a modified version of the Ext3 file system and for the Btrfs file system. Our evaluation shows that both checkers detect data corruption effectively, and they can be used during normal operation with low overhead.
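As a heavily simplified, hedged illustration of a write-time invariant of the kind described (a toy model, not the authors' checker): during a transaction, journaled metadata may only be written to the journal area, and checkpoint writes may only target blocks that the committed transaction actually declared, which catches lost or misdirected writes before they reach the disk.

```python
class WriteChecker:
    """Toy write-time checker for a journaling file system: journaled
    metadata updates may only reach the journal area until commit, and
    checkpoint writes may only target blocks that the committed transaction
    declared (this catches misdirected writes)."""

    def __init__(self, journal_area):
        self.journal_area = set(journal_area)   # block numbers of the journal
        self.committed_targets = set()          # final locations of committed updates

    def commit(self, declared_targets):
        self.committed_targets = set(declared_targets)

    def check_write(self, block_no, phase):
        if phase == "transaction" and block_no not in self.journal_area:
            raise RuntimeError(f"in-place write of uncommitted metadata: block {block_no}")
        if phase == "checkpoint" and block_no not in self.committed_targets:
            raise RuntimeError(f"misdirected checkpoint write: block {block_no}")
```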

10 citations


Proceedings ArticleDOI
09 Jan 2014
TL;DR: The performance of a failure predictor used to forecast failures in a web-serving system subject to successive updates is studied, and the results suggest that re-training is indeed necessary.
Abstract: Failure prediction is a promising technique to improve dependability of computer systems, in particular when it is important to foresee incoming failures and take corrective actions to avoid downtime or data corruption. Failure prediction is especially adequate in long running systems where internal errors accumulate and eventually lead to failures. The problem is that such systems do evolve. The workload and even the system itself changes over time, and this may affect the performance of the failure predictor. However, training failure prediction algorithms is a complex and time-consuming task and should be performed only when needed. Thus, it is important to understand if a system change affects prediction performance, to avoid running the target system with an ineffective predictor and prevent unnecessary retraining efforts. In this work we study the performance of a failure predictor when used to forecast failures in a web-serving system subject to successive updates. We observe and analyze the variation of performance in terms of ROC-AUC using fault injection and virtualization for the generation of the data needed for the assessment. Our results suggest that re-training is indeed necessary.
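A minimal sketch of the kind of re-evaluation the study performs, computing ROC-AUC for the deployed predictor on data collected after a system update (the retraining threshold is an assumed operational choice, not from the paper):

```python
from sklearn.metrics import roc_auc_score

def needs_retraining(y_true, failure_scores, auc_floor=0.80):
    """Re-evaluate the deployed predictor on data collected after a system
    update; trigger retraining when discrimination drops below a floor.
    `auc_floor` is an assumed operational threshold."""
    auc = roc_auc_score(y_true, failure_scores)
    return auc < auc_floor, auc

# y_true: 1 = a failure occurred in the prediction window, 0 = it did not
# failure_scores: the predictor's failure probability for the same windows
retrain, auc = needs_retraining([0, 0, 1, 1, 0, 1],
                                [0.2, 0.4, 0.35, 0.8, 0.1, 0.9])
print(f"AUC after update: {auc:.2f}, retrain: {retrain}")
```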

10 citations


Patent
23 Oct 2014
TL;DR: In this paper, a serialization and deserialization module for an elevator installation can be configured to cross check various data inputs and outputs to identify data corruption, component failures, or inconsistencies in data.
Abstract: Safety related information for an elevator installation can be transmitted via a serial connection using serialization and deserialization modules. These serialization and deserialization modules can comprise redundant components, such as processors and interfaces, and can be configured to cross check various data inputs and outputs to identify data corruption, component failures, or inconsistencies in data.

5 citations


Patent
21 May 2014
TL;DR: In this paper, the complement data associated with each received write command is identified and stored in a non-volatile cache while that data is overwritten via execution of the write command, mitigating the risk of data corruption from unexpected power loss.
Abstract: The disclosed systems include features to mitigate a risk of data corruption attributable to unexpected power loss events. In particular, the disclosed system identifies and retrieves complement data associated with each received write command and stores the complement data in a non-volatile cache while the complement data is overwritten via execution of the write command.

3 citations


21 Feb 2014
TL;DR: In this article, the impact of data integrity/quality in the supervisory control and data acquisition (SCADA) system on real-time locational marginal price (LMP) in electricity market operations is examined.
Abstract: This talk examines the impact of data integrity/quality in the supervisory control and data acquisition (SCADA) system on real-time locational marginal price (LMP) in electricity market operations. Measurement noise and/or manipulated sensor errors in a SCADA system may mislead system operators about real-time conditions in a power system, which, in turn, may impact the price signals in real-time power markets. This research serves as a first step to analytically investigate the impact of bad/malicious data on electric power market operations. The first part of this talk studies from a market participant’s perspective a new class of malicious data attacks on state estimation, which subsequently influences the result of the newly emerging look-ahead dispatch models in the real-time power market. We propose a novel attack strategy, named ramp-induced data (RID) attack, with which the attacker can manipulate the limits of ramp constraints of generators in look-ahead dispatch, leading to financial profits while being undetected by the existing bad data detection algorithm embedded in the state estimator. In the second part, we investigate from a system operator’s perspective the sensitivity of locational marginal price (LMP) with respect to data corruption-induced state estimation error in real-time power market. We present an analytical framework to quantify real-time LMP sensitivity subject to continuous and discrete data corruption via state estimation. The proposed framework offers system operators an online tool to identify economically sensitive buses and transmission lines to data corruption as well as find sensors that impact LMP changes significantly and influentially.

Patent
Tan An, Dianming Hu, Jun Liu, Wenjun Yang, Dai Tan 
23 Apr 2014
TL;DR: In this article, a method and a device for data storage are described: the processing device determines one or more candidate storage scheme information items according to data storage requests, and then determines the corresponding optimized storage scheme according to performance index information for those candidate schemes.
Abstract: The invention aims to provide a method and a device for data storage. The processing device determines one or more candidate storage scheme information items according to data storage requests, and determines the corresponding optimized storage scheme information according to performance index information for those candidate schemes, so as to process the data storage requests. Compared with the prior art, data is distributed to appropriate storage media according to its degree of importance and the service quality of the different hardware media, so that the probabilities of data corruption and loss can be reduced. Meanwhile, the service quality of each medium is updated in real time according to the practical situation, and potential faults can be found earlier by the device, so that damage caused by imminent hardware trouble can be prevented. A data grading topology that is aware of storage medium service quality can therefore be realized, the influence of differing medium service quality on the performance and reliability of the storage system can be eliminated, and a strategy for controlling storage cost can be implemented.

Patent
James A. Taylor1, Tim K. Emami1
04 Mar 2014
TL;DR: In this article, a data storage system stores data in one or more storage locations of a storage drive and generates context information that identifies the data associated with each of the storage locations.
Abstract: Examples described herein include a system for storing data. The data storage system stores data in one or more storage locations of a storage drive and generates context information that identifies the data associated with each of the one or more storage locations. The context information is stored in a data buffer, and may include at least one of: an index node, a file block number, or a generation count. Further, the data buffer may be a FIFO circular buffer. The data storage system then uses the context information in the data buffer to verify the data stored in the one or more storage locations during an idle time of the storage drive.

01 Jan 2014
TL;DR: This paper introduces a way to authenticate data transmitted over a network using the Cyclic Redundancy Check (CRC) error detection technique, which works on the concept of binary division.
Abstract: This paper introduces a way to authenticate the data transmitted over the network using the Cyclic Redundancy Check (CRC) error detection technique, which works on the concept of binary division. A network must be capable of transmitting data from one end to the other with accuracy, but transmission errors are a common fact of data communication: it is not guaranteed that the data received at the receiver end is identical to the data transmitted by the sender. There are a number of causes of data corruption, such as thermal noise, impulse noise, etc. For reliable communication, the system must be equipped with error detection and error correction techniques. A number of error control techniques have therefore been introduced, and one of them is CRC, used to achieve accuracy in data communication.
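A minimal sketch of the binary-division idea behind CRC (the 4-bit generator polynomial here is chosen purely for illustration; real links use standardized 16- or 32-bit polynomials):

```python
def crc_remainder(bits, poly):
    """Modulo-2 (XOR) long division of `bits` (a '0'/'1' string) by the
    generator `poly`; the remainder is the CRC appended to the frame."""
    n = len(poly) - 1
    dividend = list(bits + "0" * n)             # append n zero bits
    for i in range(len(bits)):
        if dividend[i] == "1":
            for j, p in enumerate(poly):
                dividend[i + j] = str(int(dividend[i + j]) ^ int(p))
    return "".join(dividend[-n:])

data, poly = "11010011101100", "1011"           # example frame and generator
crc = crc_remainder(data, poly)                 # -> "100"
sent = data + crc
# Receiver side: dividing the received codeword the same way leaves an
# all-zero remainder exactly when no detectable corruption occurred.
assert set(crc_remainder(sent, poly)) == {"0"}
```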

Journal ArticleDOI
TL;DR: By taking into account a more realistic correlation between bits, this work will contribute to the understanding of the soft error or the corruption of data stored in nano-scale devices.
Abstract: The corruption process of a binary nano-bit model resulting from an interaction with N stochastically-independent Brownian agents (BAs) is studied with the help of Monte-Carlo simulations and analytic continuum theory to investigate the data corruption process through the measurement of the spatial two-point correlation and the autocorrelation of bit corruption at the origin. By taking into account a more realistic correlation between bits, this work will contribute to the understanding of the soft error or the corruption of data stored in nano-scale devices.

Patent
24 Apr 2014
TL;DR: In this patent, the goal is to reduce the necessary storage capacity of a non-volatile storage part and the processing load for data recovery, while securing restoration performance against data corruption, in an electronic control device that saves data from RAM to the non-volatile storage part and restores the data from that storage part back to RAM.
Abstract: PROBLEM TO BE SOLVED: To reduce the necessary storage capacity of a non-volatile storage part and the processing load for data recovery, while securing restoration performance against data corruption, in an electronic control device that saves data from a RAM to the non-volatile storage part and restores the data from the storage part to the RAM. SOLUTION: In an ECU 11, when a data saving execution timing arrives during an operation period, a CPU 31 writes a data group held in a RAM 35, together with a checksum calculated over the data group, into each of two blocks in a non-volatile memory 37. Then, when power supply to the ECU 11 is started and the ECU boots, the CPU 31 identifies, among the two blocks written in the previous operation period, the block in which the checksum recomputed over the stored data group matches the stored checksum, and copies the data group stored in that block back to the RAM for restoration.
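A minimal sketch of the save/restore cycle described above, assuming CRC32 as the checksum and a simple two-slot layout (both are assumptions; the patent does not specify them):

```python
import zlib

def save(nvm_blocks, data: bytes):
    """Write the data group and its checksum into both non-volatile blocks."""
    record = {"data": data, "checksum": zlib.crc32(data)}
    nvm_blocks[0] = dict(record)
    nvm_blocks[1] = dict(record)

def restore(nvm_blocks):
    """At boot, restore from a block whose stored checksum still matches."""
    for block in nvm_blocks:
        if block and zlib.crc32(block["data"]) == block["checksum"]:
            return block["data"]
    return None   # both copies corrupted: fall back to defaults

nvm = [None, None]
save(nvm, b"calibration values")
nvm[0]["data"] = b"corrupted!!"           # simulate corruption of one block
assert restore(nvm) == b"calibration values"
```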

ReportDOI
16 Jan 2014
TL;DR: This work proposes a technique that leverages certain properties of high-performance computing applications in order to detect silent errors at the application level, and proposes multiple corruption detectors that can detect the majority of the corruptions, while incurring negligible overhead.
Abstract: Supercomputers allow scientists to study natural phenomena by means of computer simulations. Next-generation machines are expected to have more components and, at the same time, consume several times less energy per operation. These trends are pushing supercomputer construction to the limits of miniaturization and energy-saving strategies. Consequently, the number of soft errors is expected to increase dramatically in the coming years. While mechanisms are in place to correct or at least detect some soft errors, a significant percentage of those errors pass unnoticed by the hardware. Such silent errors are extremely damaging because they can make applications silently produce wrong results. In this work we propose a technique that leverages certain properties of high-performance computing applications in order to detect silent errors at the application level. Our technique detects corruption solely based on the behavior of the application datasets and is completely application-agnostic. We propose multiple corruption detectors, and we couple them to work together in a fashion transparent to the user. We demonstrate that this strategy can detect the majority of the corruptions, while incurring negligible overhead. We show that with the help of these detectors, applications can have up to 80% of coverage against data corruption.

Journal ArticleDOI
TL;DR: In this article, a case study at the Denpasar District Court is used to describe and analyze in depth the effectiveness of additional punishment, including the return of financial losses caused by corruption.
Abstract: Research on the effectiveness and application of criminal sanctions and additional punishment to return financial losses caused by corruption (a case study at the Denpasar District Court) aims to describe and analyze in depth the effectiveness of additional punishment, including the return of financial losses caused by corruption. In addition, this study aims to determine and assess the constraints in the implementation of court decisions related to the return of financial losses. The question addressed is whether the application of additional criminal sanctions and punishment, including the return of state losses, can be effective pursuant to the provisions of Article 18 of Law No. 31 of 1999 on the Eradication of Corruption jo. Law No. 20 of 2001 on the Amendment of Law No. 31 of 1999 on the Eradication of Corruption. The method used in this research is empirical juridical legal research of a descriptive nature, using primary and secondary data sources gathered through document study and interview techniques, as well as articles related to the issues. Based on the research, it can be seen that the additional sanctions and punishment, including the return of state losses, have been applied but have not yet been effective in recovering state losses due to corruption or in reducing the amount of corruption occurring in the Denpasar District Court's jurisdiction. This is based on data on corruption cases, which increased from 20 cases in 2012 to 25 cases in 2013. Returns of state losses in 2012-2013 amounted to Rp.871.273.192, which relates to corruption cases from 2010-2011, while for the corruption cases of 2012-2013 no return of state losses has been recorded so far. The constraints in the implementation of court decisions related to the return of state losses are that convicts' assets and property have already been transferred, multiple population administration records, and the length of the judicial process before a verdict becomes binding and execution can be carried out.

Journal ArticleDOI
TL;DR: The jSRML metalanguage is demonstrated, which provides a way to define more comprehensive and non-obtrusive validation rules for forms, and a system called jSRMLTool is created which can perform hybrid validation methods as well as propose jSRML validation rules using machine learning.
Abstract: Over the years the Internet has spread to most areas of our lives, ranging from reading news, ordering food, streaming music, and playing games all the way to handling our finances online. With this rapid expansion came an increased need to ensure that the data being transmitted is valid. Validity is important not just to avoid data corruption but also to prevent possible security breaches. Whenever a user wants to interact with a website where information needs to be shared, they usually fill out forms and submit them for server-side processing. Web forms are very prone to input errors, external exploits like SQL injection attacks, automated bot submissions, and several other security circumvention attempts. We will demonstrate our jSRML metalanguage, which provides a way to define more comprehensive and non-obtrusive validation rules for forms. We used jQuery to allow asynchronous AJAX validation without posting the page, to provide a seamless experience for the user. Our approach also allows rules to be defined to correct mistakes in user input aside from performing validation, making it a valuable asset in the space of form validation. We have created a system called jSRMLTool which can perform hybrid validation methods as well as propose jSRML validation rules using machine learning.

Journal ArticleDOI
TL;DR: An improved RDC scheme is proposed in which the communication overhead can be reduced by downloading only a part of the parity data for update while simultaneously ensuring the integrity of the data.
Abstract: A client stores data in the cloud and uses remote data checking (RDC) schemes to check the integrity of the data. The client can detect the corruption of the data using RDC schemes. Recently, robust RDC schemes have integrated forward error-correcting codes (FECs) to ensure the integrity of data while enabling dynamic update operations. Thus, minor data corruption can be recovered by FECs, whereas major data corruption can be detected by spot-checking techniques. However, this requires high communication overhead for dynamic update, because a small update may require the client to download an entire file. The Variable Length Constraint Group (VLCG) scheme overcomes this disadvantage by downloading the RS-encoded parity data for update instead of the entire file. Despite this, it needs to download all the parity data for any minor update. In this paper, we propose an improved RDC scheme in which the communication overhead can be reduced by downloading only a part of the parity data for update while simultaneously ensuring the integrity of the data. Efficiency and security analysis show that the proposed scheme enhances efficiency without any security degradation.
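As a hedged illustration of why only part of the parity needs to be downloaded for a small update, the sketch below uses plain XOR parity as a simplified stand-in for the RS-encoded parity of the actual scheme: updating one block of a constraint group only requires patching the single affected parity word with the old and new block contents.

```python
def update_parity_for_block(old_block: bytes, new_block: bytes, parity_word: bytes) -> bytes:
    """Patch one parity word instead of re-encoding the whole group:
    parity' = parity XOR old XOR new. (XOR parity stands in for the RS
    parity of the actual scheme; only this one word must be downloaded.)"""
    return bytes(p ^ o ^ n for p, o, n in zip(parity_word, old_block, new_block))

# Client-side update of block 2 of a small constraint group:
group = [b"\x01\x02", b"\x10\x20", b"\x0a\x0b"]
parity = bytes(a ^ b ^ c for a, b, c in zip(*group))
new_block = b"\xff\x0b"
parity = update_parity_for_block(group[2], new_block, parity)
group[2] = new_block
assert parity == bytes(a ^ b ^ c for a, b, c in zip(*group))
```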

Journal ArticleDOI
TL;DR: SysProp is a successful demonstration of the concept that UNIX daemons can be remotely executed and controlled over the Web and might be exploited to build many system administrative applications.
Abstract: From the inception of computer-based computing, preventing data loss or data corruption has been considered one of the difficult challenges. In the early days, data reliability was increased by replicating data across multiple disks, which were attached to the same system and, later, located inside the same network. Later, to avoid the potential risk of a single point of failure, the replicated data storage was separated from the network in which the data originated. Thus, following the concept of peer-to-peer (P2P) networking, P2P storage systems were designed, where data is replicated inside multiple remote peers' redundant storage. With the advent of Cloud computing, a similar but more reliable Cloud-based storage system has been developed. Note that Cloud storage is expensive for small and medium enterprises. Moreover, users are often reluctant to store their sensitive data inside a third party's network that they do not own or control. In this paper, we design, develop and deploy a storage system that we named SysProp. Two widely used tools—Web applications and UNIX daemons—have been incorporated in the development of SysProp. Our goal is to congregate the benefits of different storage systems (e.g., networked, P2P and Cloud storage) in a single application. SysProp provides a remotely accessible, Web-based interface where users have full control over their data, and data is transferred in encrypted form. Moreover, for data backup, a powerful UNIX tool, rsync, is used, which synchronizes data by transferring only the updated portion. Finally, SysProp is a successful demonstration of the concept that UNIX daemons can be remotely executed and controlled over the Web. Hence, this concept might be exploited to build many system administrative applications. Index Terms—Storage system, data backup, synchronization, Web service, remote system admin.
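SysProp's backup path relies on rsync's delta transfer; a minimal sketch of how a backup daemon might invoke it is shown below (the host, paths, and SSH user are placeholders; only standard archive/compress/delete flags are used).

```python
import subprocess

def sync_backup(src="/home/user/data/",
                dest="backup@peer.example.com:/srv/sysprop/user/"):
    """Delta-transfer only the changed portions of files to remote storage
    over SSH; -a preserves metadata, -z compresses in transit, --delete
    removes files on the remote side that no longer exist locally."""
    subprocess.run(["rsync", "-az", "--delete", "-e", "ssh", src, dest],
                   check=True)
```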

Patent
Yan Li1
24 Jan 2014
TL;DR: In this paper, a system and methods for programming a set of data onto non-volatile memory elements, maintaining copies of the data pages to be programmed, as well as surrounding data pages, internally or externally to the memory circuit, verifying programming correctness after programming, and upon discovering programming error, recovering the safe copies of corrupted data to be reprogrammed in alternative nonvolatile memories elements.
Abstract: A system and methods for programming a set of data onto non-volatile memory elements, maintaining copies of the data pages to be programmed, as well as surrounding data pages, internally or externally to the memory circuit, verifying programming correctness after programming, and upon discovering programming error, recovering the safe copies of the corrupted data to be reprogrammed in alternative non-volatile memory elements. Additionally, a system and methods for programming one or more sets of data across multiple die of a non-volatile memory system, combining data pages across the multiple die by means such as the XOR operation prior to programming the one or more sets of data, employing various methods to determine the correctness of programming, and upon identifying data corruption, recovering safe copies of data pages by means such as XOR operation to reprogram the pages in an alternate location on the non-volatile memory system.
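A minimal sketch of the XOR-combination idea: pages programmed across several die are XORed into a combination page, so a page that fails post-programming verification can be rebuilt from the surviving pages and that combination (the page contents here are made up).

```python
def xor_pages(pages):
    """Bitwise XOR of equal-length pages (the cross-die combination)."""
    out = bytearray(len(pages[0]))
    for page in pages:
        for i, byte in enumerate(page):
            out[i] ^= byte
    return bytes(out)

# Pages programmed across four die, plus their XOR combination kept aside.
pages = [bytes([d] * 16) for d in (0x11, 0x22, 0x33, 0x44)]
combined = xor_pages(pages)

# Die 2's page fails post-programming verification; rebuild it from the rest.
recovered = xor_pages([combined] + [p for i, p in enumerate(pages) if i != 2])
assert recovered == pages[2]
```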

Journal ArticleDOI
TL;DR: This paper proposes a scheme of scheduled database tasks for the Linux environment that periodically exports the database and sends the export automatically to a secure backup mailbox, thereby completing regular backups of the database.
Abstract: With the development of Internet technology and the increasing popularity of the Internet, the amount of data held by enterprises has increased sharply. How to avoid unexpected data corruption and improve a system's data security and data recovery capabilities has long been a focus of attention for users and enterprises. Regular database backups are the simplest and most effective guarantee that data can be restored, and an effective measure for database administrators to manage data. This paper proposes a scheme of scheduled database tasks based on the Linux environment. The scheme periodically exports the database, sends the export automatically to a secure backup mailbox, and thereby completes regular backups of the database.
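A minimal sketch of the kind of job described, meant to be launched from cron (for example `0 2 * * * /usr/local/bin/db_backup.py`); the database name, credentials, and mail server are placeholders, and mysqldump stands in for whatever export tool the deployment actually uses.

```python
#!/usr/bin/env python3
"""Nightly database export mailed to a backup mailbox (run from cron).
All names below are placeholders for illustration only."""
import datetime
import smtplib
import subprocess
from email.message import EmailMessage

# Export the database to a dump held in memory.
dump = subprocess.run(
    ["mysqldump", "--single-transaction", "-u", "backup_user", "example_db"],
    capture_output=True, check=True).stdout

# Attach the dump to an e-mail addressed to the backup mailbox.
msg = EmailMessage()
msg["Subject"] = f"example_db backup {datetime.date.today()}"
msg["From"], msg["To"] = "backup@example.com", "backup-mailbox@example.com"
msg.set_content("Automated nightly backup attached.")
msg.add_attachment(dump, maintype="application", subtype="sql",
                   filename=f"example_db-{datetime.date.today()}.sql")

with smtplib.SMTP("smtp.example.com") as smtp:
    smtp.send_message(msg)
```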

Patent
05 Jun 2014
TL;DR: In this paper, the authors propose a data read-write method to cope with data corruption by delay destruction in flash memory, where a CPU 11 reads data from each page included in a page ring to RAM 12, detects an error of the data by using an error detection code included in the data on the RAM 12 and searches and identifies the latest written page and the oldest written page among each of pages included in page ring R by using a flag included in data in which the error is not detected.
Abstract: PROBLEM TO BE SOLVED: To provide an IC card, a data read-write method and a data read-write program which are capable of coping with data corruption caused by delay destruction in flash memory. SOLUTION: A CPU 11 reads data from each page included in a page ring to RAM 12, detects errors in the data by using an error detection code included in the data on the RAM 12, searches for and identifies the latest written page and the oldest written page among the pages included in the page ring R by using a flag included in the data in which no error is detected, reads the data from the identified latest page to the RAM 12, updates the data on the RAM 12, and writes the updated data to the identified oldest page.
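A minimal model of the page-ring update cycle described above, assuming a simple record layout with a sequence flag and a CRC32 error-detection code (both the layout and the checksum choice are assumptions):

```python
import zlib

def valid(page):
    """Page record layout (assumed): {'seq': int, 'data': bytes, 'edc': int}."""
    return page is not None and zlib.crc32(page["data"]) == page["edc"]

def read_latest(ring):
    """Newest page whose error-detection code still matches."""
    good = [p for p in ring if valid(p)]
    return max(good, key=lambda p: p["seq"]) if good else None

def write_update(ring, new_data):
    """Overwrite the oldest (or a corrupted) slot, so a failed write only
    ever costs the stalest copy in the ring."""
    latest = read_latest(ring)
    seq = (latest["seq"] + 1) if latest else 0
    slot = min(range(len(ring)),
               key=lambda i: ring[i]["seq"] if valid(ring[i]) else -1)
    ring[slot] = {"seq": seq, "data": new_data, "edc": zlib.crc32(new_data)}

ring = [None] * 4                      # four flash pages in the ring
write_update(ring, b"balance=100")
write_update(ring, b"balance=90")
assert read_latest(ring)["data"] == b"balance=90"
```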

01 Jan 2014
TL;DR: In this paper, the authors propose a hybrid double-layered security strategy for sensed data, in which the first layer of security is applied by appending a keyed message authentication code (HMAC) to the sensed data using the Secure Hash Algorithm (SHA-2/512), a robust algorithm for ensuring message security throughout the network.
Abstract: A Wireless Sensor Network (WSN) is a collection of sensors that are heterogeneous in nature. Data sensed from the environment traverse the network until they reach the sink. The central problem for a Wireless Sensor Network is data integrity throughout the network: if the data is corrupted, a considerable amount of energy is wasted each time the data is forwarded to the next node, and the most critical data corruption attacks are carried out by compromised nodes. Various strategies have been introduced to identify corrupted data and compromised nodes. This paper presents a hybrid double-layered security strategy for sensed data. The first layer of security is applied by appending a keyed message authentication code (HMAC) to the sensed data using the Secure Hash Algorithm (SHA-2/512), a robust algorithm for ensuring message security throughout the network. The second layer of security is implemented by a modified form of the ConstrAined Random Perturbation based pairwise keY (CARPY+) mechanism. In the CARPY+ mechanism, a guaranteed key exchange between sender and receiver proves the sender node's identity; any failure when comparing the key extracted from a received message identifies the sender node as malicious. The proposed methodology improves network performance by avoiding data corruption at the network layer and, at the same time, identifying compromised nodes.
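A minimal sketch of the first security layer using Python's standard hmac module; the shared key and reading format are placeholders (in the actual scheme the pairwise key material comes from the CARPY+ mechanism).

```python
import hashlib
import hmac

SHARED_KEY = b"pairwise-key-between-sensor-and-sink"   # placeholder key

def protect(reading: bytes) -> bytes:
    """Append an HMAC-SHA-512 tag to the sensed reading before forwarding."""
    tag = hmac.new(SHARED_KEY, reading, hashlib.sha512).digest()
    return reading + tag

def verify(message: bytes):
    """Split off the 64-byte tag and check it; a mismatch marks the data as
    corrupted and the forwarding node as suspect."""
    reading, tag = message[:-64], message[-64:]
    expected = hmac.new(SHARED_KEY, reading, hashlib.sha512).digest()
    return reading if hmac.compare_digest(tag, expected) else None

assert verify(protect(b"temp=23.4")) == b"temp=23.4"
```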

Patent
24 Nov 2014
TL;DR: In this paper, the authors discuss the use of frame matching in conjunction with erasure windowing to overcome data corruption in a set of data to allow recovery of the set of corrupted data.
Abstract: The disclosure is related to systems and methods of data recovery using frame matching and erasure windowing. Aspects involve using frame matching in conjunction with erasure windowing to overcome data corruption in a set of data to allow recovery of the set of data. When a synchronization mark indicating the position of a set of data in a superset of data is corrupted, frame matching in conjunction with erasure windowing are used to enable recovery of the set of data by applying one or more frame windows and one or more erasure windows to data including the set of data to recover the set of data.