Showing papers by "Shlomi Dolev published in 2012"


Journal ArticleDOI
TL;DR: The imbalance problem is investigated, referring to several real-life scenarios in which malicious files are expected to be about 10% of the total inspected files, and a chronological evaluation shows a clear trend in which performance improves as the training set is kept more up to date.
Abstract: In previous studies, classification algorithms were employed successfully for the detection of unknown malicious code. Most of these studies extracted features based on byte n-gram patterns in order to represent the inspected files. In this study we represent the inspected files using OpCode n-gram patterns, which are extracted from the files after disassembly. The OpCode n-gram patterns are used as features for the classification process, whose main goal is to detect unknown malware within a set of suspected files; the detected malware can later be included in antivirus software as signatures. A rigorous evaluation was performed using a test collection comprising more than 30,000 files, in which various settings of OpCode n-gram patterns of various sizes and eight types of classifiers were evaluated. A typical problem of this domain is the imbalance problem, in which the distribution of the classes in real life varies. We investigated the imbalance problem, referring to several real-life scenarios in which malicious files are expected to be about 10% of the total inspected files. Lastly, we present a chronological evaluation in which the frequent need for updating the training set was evaluated. Evaluation results indicate that the evaluated methodology achieves a level of accuracy higher than 96% (with TPR above 0.95 and FPR approximately 0.1), which slightly improves the results of previous studies that use byte n-gram representation. The chronological evaluation showed a clear trend in which performance improves as the training set is kept more up to date.

263 citations
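
As a rough illustration of the pipeline described in the abstract above, the sketch below builds OpCode n-gram features from already-disassembled opcode sequences and trains a classifier on them. It is a minimal sketch, not the paper's implementation: the toy opcode sequences, the bigram setting and the random-forest classifier are assumptions made for the example (the study evaluates several n-gram sizes and eight classifier types).

```python
# Illustrative sketch of OpCode n-gram classification (not the paper's exact pipeline).
# Assumes each inspected file has already been disassembled into a sequence of opcodes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical data: opcode sequences (space-separated) and labels (1 = malicious).
opcode_docs = [
    "push mov call pop ret",
    "xor mov jmp call call ret",
    "push push call add ret",
    "nop nop jmp xor xor call",
]
labels = [0, 1, 0, 1]

# 2-gram OpCode patterns as features, mirroring the OpCode n-gram representation.
vectorizer = CountVectorizer(analyzer="word", ngram_range=(2, 2), token_pattern=r"\S+")
X = vectorizer.fit_transform(opcode_docs)

X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.5, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), zero_division=0))
```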


Proceedings ArticleDOI
03 Sep 2012
TL;DR: Two complementary heuristics to speed up exact computation of shortest-path betweenness centrality are proposed and evaluated, and can be used to further speed up betweenness estimation algorithms as well.
Abstract: We propose and evaluate two complementary heuristics to speed up exact computation of shortest-path betweenness centrality. Both heuristics are relatively simple adaptations of the standard algorithm for betweenness centrality. Consequently, they generalize the computation of edge betweenness and most other variants, and can be used to further speed up betweenness estimation algorithms, as well. In the first heuristic, structurally equivalent vertices are contracted based on the observation that they have the same centrality and also contribute equally to the centrality of others. In the second heuristic, we first apply a linear-time betweenness algorithm on the block-cutpoint tree and then compute the remaining contributions separately in each biconnected component. Experiments on a variety of large graphs illustrate the efficiency and complementarity of our heuristics.

34 citations
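
The first heuristic relies on finding structurally equivalent vertices (vertices u, v with N(u)\{v} = N(v)\{u}), which have equal betweenness and can be contracted. Below is a minimal sketch of that equivalence test on a toy adjacency-set graph; the contraction bookkeeping and the centrality computation itself are omitted, and the example graph is made up.

```python
# Detect structurally equivalent vertex pairs: u and v are equivalent when their
# neighborhoods coincide once we ignore each other. Such vertices have the same
# betweenness and contribute equally to others, so they can be contracted first.
from itertools import combinations

graph = {  # undirected adjacency sets (toy example)
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d", "e"},
    "d": {"c"},
    "e": {"c"},
}

pairs = [
    (u, v)
    for u, v in combinations(graph, 2)
    if graph[u] - {v} == graph[v] - {u}
]
print("structurally equivalent pairs:", pairs)   # [('a', 'b'), ('d', 'e')]
```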


Book ChapterDOI
01 Oct 2012
TL;DR: In this paper, a self-stabilizing end-to-end algorithm that can be applied to networks of bounded capacity that omit, duplicate and reorder packets is presented.
Abstract: End-to-end communication over the network layer (or data link in overlay networks) is one of the most important communication tasks in every communication network, including legacy communication networks as well as mobile ad hoc networks, peer-to-peer networks and mesh networks. We study end-to-end algorithms that exchange packets to deliver (high-level) messages in FIFO order without omissions or duplications. We present a self-stabilizing end-to-end algorithm that can be applied to networks of bounded capacity that omit, duplicate and reorder packets. The algorithm is network topology independent, and hence suitable for ever-changing dynamic networks with any churn rate.

33 citations


Journal ArticleDOI
01 Sep 2012
TL;DR: This work investigates an extension of the k-secret sharing scheme, in which the secret shares are changed on the fly, independently and without (internal) communication, as a reaction to a global external trigger.
Abstract: Secret sharing is a fundamental cryptographic task. Motivated by the virtual automata abstraction and swarm computing, we investigate an extension of the k-secret sharing scheme, in which the secret shares are changed on the fly, independently and without (internal) communication, as a reaction to a global external trigger. The changes are made while maintaining the requirement that k or more secret shares may reconstruct the secret, while k-1 or fewer cannot do so. The application considered is a swarm of mobile processes, each maintaining a share of the secret which may change according to common outside inputs, e.g., inputs received by sensors attached to the process. The proposed schemes support addition and removal of processes from the swarm, as well as corruption of a small portion of the processes in the swarm.

18 citations
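
A toy sketch of the general idea of shares that change on the fly without internal communication: standard Shamir (k, n) sharing in which every process, upon a common external trigger, derives the same update polynomial from the trigger and adds its evaluation to its own share. The hash-seeded derivation and the field size are assumptions for illustration, not the paper's scheme.

```python
# Toy sketch: Shamir (k, n) secret sharing where each share is updated locally,
# without communication, when a common external trigger arrives. Illustrative only.
import hashlib, random

P = 2**61 - 1  # prime field

def share(secret, k, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return {i: sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)}

def reconstruct(points, k):
    pts = list(points.items())[:k]
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num = den = 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def trigger_polynomial(trigger, k):
    # All parties derive the SAME polynomial deterministically from the trigger,
    # so the shares stay consistent without any internal communication.
    seed = int.from_bytes(hashlib.sha256(trigger.encode()).digest(), "big")
    rng = random.Random(seed)
    return [rng.randrange(P) for _ in range(k)]  # coeffs[0] is the secret offset

def update_share(x, y, trigger, k):
    coeffs = trigger_polynomial(trigger, k)
    return (y + sum(c * pow(x, j, P) for j, c in enumerate(coeffs))) % P

k, n, secret = 3, 5, 42
shares = share(secret, k, n)
shares = {x: update_share(x, y, "sensor-event-17", k) for x, y in shares.items()}
offset = trigger_polynomial("sensor-event-17", k)[0]
assert reconstruct(shares, k) == (secret + offset) % P   # secret shifted by the trigger
```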


Book ChapterDOI
18 Dec 2012
TL;DR: This work proposes a crash safe and pseudo-stabilizing algorithm for implementing an atomic memory abstraction in a message passing system that preserves the same properties as ABD when there are no transient faults, namely the linearizability of operations.
Abstract: We propose a crash-safe and pseudo-stabilizing algorithm for implementing an atomic memory abstraction in a message passing system. Our algorithm is particularly appealing for multi-core architectures where both processors and memory contents (including stale messages in transit) are prone to errors and faults. Our algorithm extends the classical fault-tolerant implementation of atomic memory that was originally proposed by Attiya, Bar-Noy, and Dolev (ABD) to a stabilizing setting where memory can be initially corrupted in an arbitrary manner. The original ABD algorithm provides no guarantees when started in such a corrupted configuration. Interestingly, our scheme preserves the same properties as ABD when there are no transient faults, namely the linearizability of operations. When started in an arbitrarily corrupted initial configuration, we still guarantee eventual yet suffix-closed linearizability.

11 citations
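
For orientation, here is a minimal, single-process simulation of the (non-stabilizing) ABD-style quorum logic that the paper extends: replicas hold (tag, value) pairs, writes and reads each contact a majority, and reads write back the freshest pair. The replica count and data layout are illustrative.

```python
# Minimal, non-stabilizing sketch of ABD-style quorum read/write over simulated
# replicas; illustrates the (tag, value) majority logic the paper builds on.
import random

N = 5                      # replicas
MAJORITY = N // 2 + 1
replicas = [{"tag": 0, "value": None} for _ in range(N)]

def sample_quorum():
    return random.sample(range(N), MAJORITY)

def write(value):
    # Phase 1: query a majority for the highest tag seen so far.
    max_tag = max(replicas[i]["tag"] for i in sample_quorum())
    # Phase 2: store the value with a strictly larger tag at a majority.
    new_tag = max_tag + 1
    for i in sample_quorum():
        if new_tag > replicas[i]["tag"]:
            replicas[i] = {"tag": new_tag, "value": value}

def read():
    # Phase 1: collect (tag, value) from a majority and keep the maximum tag.
    best = max((replicas[i] for i in sample_quorum()), key=lambda r: r["tag"])
    # Phase 2 (write-back): propagate the chosen pair so later reads are not older.
    for i in sample_quorum():
        if best["tag"] > replicas[i]["tag"]:
            replicas[i] = dict(best)
    return best["value"]

write("x1")
write("x2")
print(read())   # "x2"
```

Because any two majorities intersect, a read always observes the latest completed write; the paper's contribution is preserving this behaviour when replicas and in-transit messages may start out arbitrarily corrupted.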


Journal ArticleDOI
TL;DR: Hardware and software components that enable the creation of a self-stabilizing OS/VMM on top of an off-the-shelf, non-self-stabilizing processor are suggested.
Abstract: In this work, we suggest hardware and software components that enable the creation of a self-stabilizing OS/VMM on top of an off-the-shelf, non-self-stabilizing processor. A simple "watchdog" hardware called a periodic reset monitor (PRM) provides a basic solution. The solution is extended to stabilization enabling hardware (SEH), which removes any real-time requirement from the OS/VMM. A stabilization enabling system that extends the SEH with software components provides the user (an OS/VMM designer) with a self-stabilizing processor abstraction. The method uses only a modest addition of hardware, which is external to the microprocessor. We demonstrate our approach on the XScale core by Intel. Moreover, we suggest methods for the adaptation of existing system code (e.g., code for operating systems) to be self-stabilizing. One method allows capturing and enforcing the configuration used by the program, thus reducing the work of the self-stabilizing algorithm designer to considering only the dynamic (non-configurational) parts of the state. Another method is suggested for ensuring that, eventually, addresses of branch commands are examined using a sanity-check segment. This method is then used to ensure that a sanity check is performed before critical operations. One application of the latter method is enforcing a full separation of components in the system.

6 citations


Patent
22 Aug 2012
TL;DR: In this paper, the authors proposed a method for broadcast encryption that allows a broadcaster to send encrypted data to a set of users such that only a subset of authorized users can decrypt said data.
Abstract: The invention is a method for broadcast encryption that allows a broadcaster to send encrypted data to a set of users such that only a subset of authorized users can decrypt said data. The method comprises modifications to the four stages of the basic Cipher-text Policy Attribute-Based Encryption techniques. The method can be adapted to transform any Attribute-Based Encryption scheme that supports only temporary revocation into a scheme that supports the permanent revocation of users.

6 citations


Journal ArticleDOI
TL;DR: This paper surveys structures for representing Hamiltonian cycles, the use of these structures in heuristic optimization techniques, and efficient mapping of these structures along with respective operators to a newly proposed electrooptical vector by matrix multiplication (VMM) architecture.
Abstract: A new state space representation for a class of combinatorial optimization problems, related to minimal Hamiltonian cycles, enables efficient implementation of exhaustive search for the minimal cycle in optimization problems with a relatively small number of vertices and heuristic search for problems with large number of vertices. This paper surveys structures for representing Hamiltonian cycles, the use of these structures in heuristic optimization techniques, and efficient mapping of these structures along with respective operators to a newly proposed electrooptical vector by matrix multiplication (VMM) architecture. Record keeping mechanisms are used to improve solution quality and execution time of these heuristics using the VMM. Finally, the utility of a low-power VMM based implementation is evaluated.

6 citations
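
A minimal example of the exhaustive search mentioned in the abstract, run in plain software rather than on the proposed VMM architecture; the distance matrix is toy data.

```python
# Exhaustive search for the minimal Hamiltonian cycle on a small instance,
# the kind of search the paper maps onto the electro-optical VMM. Illustrative only.
from itertools import permutations

# Symmetric distance matrix for 5 cities (toy data).
D = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]
n = len(D)

best_cost, best_cycle = float("inf"), None
for perm in permutations(range(1, n)):          # fix city 0 to avoid counting rotations
    cycle = (0,) + perm
    cost = sum(D[cycle[i]][cycle[(i + 1) % n]] for i in range(n))
    if cost < best_cost:
        best_cost, best_cycle = cost, cycle

print(best_cycle, best_cost)
```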


Book ChapterDOI
13 Sep 2012
TL;DR: Two solutions are presented, one that extends the Welch-Berlekamp technique and copes with discrete noise and Byzantine data, and the other based on Arora and Khot techniques, extending them in the case of multidimensional noisy and Byzantine data.
Abstract: Given a large set of measurement sensor data, in order to identify a simple function that captures the essence of the data gathered by the sensors, we suggest representing the data by (spatial) functions, in particular by polynomials. Given a (sampled) set of values, we interpolate the datapoints to define a polynomial that would represent the data. The interpolation is challenging, since in practice the data can be noisy and even Byzantine, where the Byzantine data represents an adversarial value that is not limited to being close to the correct measured data. We present two solutions, one that extends the Welch-Berlekamp technique in the case of multidimensional data, and copes with discrete noise and Byzantine data, and the other based on Arora and Khot techniques, extending them in the case of multidimensional noisy and Byzantine data.

4 citations


Journal Article
TL;DR: The main motivation of this work is to study the average-case hardness of problems that belong to high complexity classes, particularly those that have a large set of hard instances.
Abstract: The main motivation of this work is to study the average-case hardness of problems that belong to high complexity classes. In more detail, we are interested in provably hard problems that have a large set of hard instances. Moreover, we consider efficient generators of hard instances of these problems. Our investigation has possible applications in cryptography. As a first step, we consider computational problems from the NEXP class.

4 citations


Book ChapterDOI
28 Nov 2012
TL;DR: The nested Merkle puzzles scheme copes with the δ-sampling attack, in which the adversary chooses to solve δ puzzles in each iteration of the key establishment protocol, decrypting the actual current communication only when it is lucky enough to choose the same puzzles the receiver chooses.
Abstract: We propose a new private key establishment protocol based on Merkle's puzzles scheme. This protocol is designed to provide the honest parties the ability to securely and continuously communicate over an unprotected channel. To achieve continuous security over unbounded communication sessions, we propose a nested Merkle's puzzles approach in which the honest parties repeatedly establish new keys and use previous keys to encrypt the puzzles of the current key establishment incarnation. We provide an implementation of the idea in the random oracle model and analyze its security. In addition, we implement the protocol in the standard cryptographic model, basing its security on the lattice shortest vector problem. The iterative nested scheme we propose enlarges the probability that the set of randomly chosen puzzles will contain hard puzzles, compared with the probability that a single randomly chosen set consists of hard puzzles. Our nested Merkle puzzles scheme copes with the δ-sampling attack, in which the adversary chooses to solve δ puzzles in each iteration of the key establishment protocol, decrypting the actual current communication only when it is lucky enough to choose the same puzzles the receiver chooses. We analyze the security of our schemes in the presence of such an attack.
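
For context, here is a toy version of the underlying (non-nested) Merkle's puzzles exchange; the puzzle count, the 16-bit puzzle key space and the hash-based "encryption" are illustrative assumptions, not the paper's construction.

```python
# Toy, non-nested Merkle's puzzles in the spirit of the protocol the paper extends.
# Puzzle difficulty and sizes here are tiny and purely illustrative.
import hashlib, os, random

PUZZLE_BITS = 16          # brute-forceable puzzle key space
N_PUZZLES = 256

def kdf(seed: bytes) -> bytes:
    return hashlib.sha256(seed).digest()

# Alice prepares puzzles: each hides (puzzle_id, session_key) under a weak key.
puzzles, table = [], {}
for pid in range(N_PUZZLES):
    session_key = os.urandom(16)
    weak_key = random.getrandbits(PUZZLE_BITS).to_bytes(2, "big")
    pad = kdf(b"puzzle" + weak_key)
    payload = pid.to_bytes(2, "big") + session_key
    ciphertext = bytes(a ^ b for a, b in zip(payload, pad[:len(payload)]))
    check = hashlib.sha256(payload).digest()[:4]      # lets Bob recognise success
    puzzles.append((ciphertext, check))
    table[pid] = session_key
random.shuffle(puzzles)

# Bob picks one puzzle and solves it by brute force over the weak key space.
ciphertext, check = random.choice(puzzles)
for guess in range(2 ** PUZZLE_BITS):
    pad = kdf(b"puzzle" + guess.to_bytes(2, "big"))
    payload = bytes(a ^ b for a, b in zip(ciphertext, pad[:len(ciphertext)]))
    if hashlib.sha256(payload).digest()[:4] == check:
        pid, key = int.from_bytes(payload[:2], "big"), payload[2:]
        break

# Bob announces pid in the clear; both sides now share the same session key,
# while an eavesdropper must, on average, solve half of the puzzles.
assert table[pid] == key
```

In the nested variant described above, this exchange is repeated, with keys from earlier incarnations used to encrypt the puzzles of the current one.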

Book ChapterDOI
01 Jan 2012
TL;DR: This work proposes an efficient collaborative monitoring scheme that harnesses the collective resources of many mobile devices, generating a "vaccination"-like effect in the network, and suggests a new local information flooding algorithm called Time-to-Live Probabilistic Propagation (TPP), shown to outperform existing state-of-the-art information propagation algorithms.
Abstract: Complex network and complex systems research has been proven to have great implications in practice in many scopes, including social networks, biology, disease propagation, and information security. One can use complex network theory to optimize resource locations and optimize actions. Randomly constructed graphs and probabilistic arguments lead to important conclusions with potentially great social and financial influence. Security in online social networks has recently become a major issue for network designers and operators. Being "open" in their nature and offering users the ability to compose and share information, such networks may involuntarily be used as an infection platform by viruses and other kinds of malicious software. This is especially true for mobile social networks, which allow their users to download millions of applications created by various individual programmers, some of which may be malicious or flawed. In order to detect that an application is malicious, monitoring its operation in a real environment for a significant period of time is often required. As the computation and power resources of mobile devices are very limited, a single device can monitor only a limited number of potentially malicious applications locally. In this work, we propose an efficient collaborative monitoring scheme that harnesses the collective resources of many mobile devices, generating a "vaccination"-like effect in the network. We suggest a new local information flooding algorithm called Time-to-Live Probabilistic Propagation (TPP). The algorithm is implemented in any mobile device, periodically monitors one or more applications and reports its conclusions to a small number of other mobile devices, which then propagate this information onward, where each message has a predefined "Time-to-Live" (TTL) counter. The algorithm is analyzed, and is shown to outperform the existing state-of-the-art information propagation algorithms, in terms of convergence time as well as network overhead. We then show both analytically and experimentally that implementing the proposed algorithm significantly reduces the number of infected mobile devices. Finally, we analytically prove that the algorithm is tolerant to the presence of adversarial agents that inject false information into the system.
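
Below is a toy simulation in the spirit of the TPP idea: an alert is forwarded to a few contacts with a fixed probability until its TTL expires. All parameters and the random contact graph are assumptions for illustration and do not reproduce the paper's analysis.

```python
# Toy simulation of TTL-limited probabilistic flooding (in the spirit of TPP).
import random

N_DEVICES = 1000
NEIGHBOURS = 4          # contacts notified per forwarding step
FORWARD_PROB = 0.7
TTL = 5

contacts = {d: random.sample(range(N_DEVICES), NEIGHBOURS) for d in range(N_DEVICES)}

informed = {0}                   # device 0 detects the malicious application
frontier = [(0, TTL)]
messages_sent = 0

while frontier:
    device, ttl = frontier.pop()
    if ttl == 0:
        continue                 # the message's Time-to-Live has expired
    for peer in contacts[device]:
        messages_sent += 1
        if peer not in informed and random.random() < FORWARD_PROB:
            informed.add(peer)
            frontier.append((peer, ttl - 1))

print(f"informed {len(informed)}/{N_DEVICES} devices with {messages_sent} messages")
```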

Book ChapterDOI
16 Oct 2012
TL;DR: A distributed computation setting in which a party has a finite state automaton (FSA) with m states, which accepts an (a priori unbounded) stream of inputs x1, x2,... received from an external source is considered.
Abstract: We consider a distributed computation setting in which a party, whom we refer to as the dealer, has a finite state automaton (FSA) $\mathcal{A}$ with m states, which accepts an (a priori unbounded) stream of inputs x1, x2,... received from an external source. The dealer delegates the computation to agents A1,...,An, by furnishing them with an implementation of $\mathcal{A}$. The input stream x1, x2,... is delivered to all agents in a synchronized manner during the online input-processing phase. Finally, given a signal from the dealer, the agents terminate the execution and submit their internal state to the dealer, who computes the state of $\mathcal{A}$ and returns it as output.

Posted Content
TL;DR: Two solutions are presented, one that extends the Welch-Berlekamp technique and copes with discrete noise and Byzantine data, and the other based on Arora and Khot techniques, extending them in the case of multidimensional noisy and Byzantine data.
Abstract: Given a large set of measurement sensor data, in order to identify a simple function that captures the essence of the data gathered by the sensors, we suggest representing the data by (spatial) functions, in particular by polynomials. Given a (sampled) set of values, we interpolate the datapoints to define a polynomial that would represent the data. The interpolation is challenging, since in practice the data can be noisy and even Byzantine, where the Byzantine data represents an adversarial value that is not limited to being close to the correct measured data. We present two solutions, one that extends the Welch-Berlekamp technique in the case of multidimensional data, and copes with discrete noise and Byzantine data, and the other based on Arora and Khot techniques, extending them in the case of multidimensional noisy and Byzantine data.

Journal ArticleDOI
TL;DR: This work uses zero-knowledge proof techniques to repeatedly identify U by providing a proof that U has evidence EVID, without revealing EVID, therefore avoiding identity theft.
Abstract: We present schemes for providing anonymous transactions while privacy and anonymity are preserved, providing users with anonymous authentication in distributed networks such as the Internet. We first present a practical scheme for anonymous transactions in which the transaction resolution is assisted by a Trusted Authority. This practical scheme is extended to a theoretical scheme where a Trusted Authority is not involved in the transaction resolution. Both schemes assume that all the players interact over anonymous secure channels. Given an authority that generates hard-to-produce evidence EVID (e.g., a problem instance with or without a solution) for each player, the identity of a user U is defined by the ability to prove possession of the aforementioned evidence. We use zero-knowledge proof techniques to repeatedly identify U by providing a proof that U has evidence EVID, without revealing EVID, therefore avoiding identity theft. In both schemes the authority provides each user with a unique random string. A player U may produce a unique user name and password for each other player S using a one-way function over the random string and the IP address of S. The player does not have to maintain any information in order to reproduce the user name and password used for accessing a player S. Moreover, the player U may execute transactions with a group of players SU in two phases; in the first phase the player interacts with each server without revealing information concerning its identity and without enabling linkability among the servers in SU. In the second phase the player allows linkability and therefore transaction commitment with all servers in SU, while preserving anonymity (for future transactions).
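
A small sketch of the credential-derivation idea from the abstract: a per-server user name and password computed from the user's secret random string and the server's IP with a keyed one-way function, so they can be reproduced on demand without storing anything per server. The HMAC construction and the field widths are illustrative choices, not the paper's exact scheme.

```python
# Derive a distinct, reproducible user name and password per server from the
# user's secret random string and the server's IP address. Illustrative only.
import hashlib, hmac, os

user_secret = os.urandom(32)        # the unique random string issued by the authority

def credentials_for(server_ip: str):
    digest = hmac.new(user_secret, server_ip.encode(), hashlib.sha256).hexdigest()
    username, password = "u" + digest[:16], digest[16:48]
    return username, password

# The same server IP always yields the same credentials; different servers
# receive unrelated-looking, unlinkable pairs.
print(credentials_for("203.0.113.7"))
print(credentials_for("203.0.113.7"))   # identical to the line above
print(credentials_for("198.51.100.2"))  # different pair
```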

Posted Content
TL;DR: This work proposes the first deterministic, cryptographic-assumptions-free, self-stabilizing, Byzantine-resilient algorithms for network topology discovery and end-to-end message delivery and considers the task of r-neighborhood discovery for the case in which r and the degree of nodes are bounded by constants.
Abstract: Traditional Byzantine resilient algorithms use 2f+1 vertex-disjoint paths to ensure message delivery in the presence of up to f Byzantine nodes. The question of how these paths are identified is related to the fundamental problem of topology discovery. Distributed algorithms for topology discovery cope with a never-ending task, dealing with frequent changes in the network topology and unpredictable transient faults. Therefore, algorithms for topology discovery should be self-stabilizing to ensure convergence of the topology information following any such unpredictable sequence of events. We present the first such algorithm that can cope with Byzantine nodes. Starting in an arbitrary global state, and in the presence of f Byzantine nodes, each node is eventually aware of all the other non-Byzantine nodes and their connecting communication links. Using the topology information, nodes can, for example, route messages across the network and deliver messages from one end user to another. We present the first deterministic, cryptographic-assumptions-free, self-stabilizing, Byzantine-resilient algorithms for network topology discovery and end-to-end message delivery. We also consider the task of r-neighborhood discovery for the case in which r and the degree of nodes are bounded by constants. The use of r-neighborhood discovery facilitates polynomial time, communication and space solutions for the above tasks. The obtained algorithms can be used to authenticate parties, in particular during the establishment of private secrets, thus forming public key schemes that are resistant to man-in-the-middle attacks by the compromised Byzantine nodes. A polynomial and efficient end-to-end algorithm that is based on the established private secrets can be employed in between periodical re-establishments of the secrets.

Book ChapterDOI
19 Jul 2012
TL;DR: An optical architecture for energy efficient asynchronous automata, based on the most basic reversible and energy efficient gate, the Fredkin gate, is suggested and a circuit for asynchronous cascading between two automata is proposed.
Abstract: An optical architecture for energy-efficient asynchronous automata is suggested. We use a logical paradigm called "Directed Logic", based on the most basic reversible and energy-efficient gate, the Fredkin gate. Directed Logic circuits for basic Boolean gates such as NOT, OR/NOR and AND/NAND are used. These circuits are then employed for an optical energy-efficient automaton. A D latch is then used to define the automaton operation cycle. A set-reset latch is used as part of a handshake protocol, yielding an optical energy-efficient automaton that operates internally in an asynchronous fashion. Lastly, we propose a circuit for asynchronous cascading between two automata.
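
Since the construction starts from the Fredkin (controlled-swap) gate, here is a small truth-table sketch of how NOT, AND and OR fall out of it; the optical "directed logic" realization itself is, of course, not modelled.

```python
# Fredkin (controlled-swap) gate and Boolean gates built from it. Pure-Python
# illustration of the logical building block used in the paper's construction.
def fredkin(c, a, b):
    """Output (c, a, b) if c == 0, else (c, b, a): reversible, conserves the 1s."""
    return (c, b, a) if c else (c, a, b)

def NOT(x):          # fredkin(x, 0, 1): third output is 1 - x
    return fredkin(x, 0, 1)[2]

def AND(x, y):       # fredkin(x, y, 0): third output is x AND y
    return fredkin(x, y, 0)[2]

def OR(x, y):        # fredkin(x, 1, y): third output is x OR y
    return fredkin(x, 1, y)[2]

for x in (0, 1):
    for y in (0, 1):
        assert AND(x, y) == (x & y) and OR(x, y) == (x | y)
    assert NOT(x) == 1 - x
```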

Book ChapterDOI
19 Jul 2012
TL;DR: It is shown that using compressive sensing can lead to a reduction in the amount of stored data without significantly affecting the utility of this data for image recognition and image compression.
Abstract: In this paper we explore the utility of compressive sensing for object signature generation in the optical domain. We use laser scanning in the data acquisition stage to obtain a small (sub-Nyquist) number of points of an object's boundary. This can be used to construct the signature, thereby enabling object identification, reconstruction, and image data compression. We refer to this framework as compressive scanning of objects' signatures. The main contributions of the paper are the following: 1) we use this framework to replace parts of the digital processing with optical processing, 2) the use of compressive scanning reduces the amount of laser data obtained while maintaining high reconstruction accuracy, and 3) we show that using compressive sensing can lead to a reduction in the amount of stored data without significantly affecting the utility of this data for image recognition and image compression.
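
A minimal numerical sketch of the compressive-sensing step: a sparse signature vector is recovered from far fewer random measurements than its length using orthogonal matching pursuit. The sizes, the Gaussian sensing matrix and the synthetic signal are assumptions made for illustration; the paper acquires its measurements optically via laser scanning.

```python
# Compressive sensing toy example: recover a sparse "signature" from m << n
# random linear measurements via orthogonal matching pursuit (OMP).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                  # signal length, measurements, sparsity

x = np.zeros(n)                       # sparse signature (e.g., boundary coefficients)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = Phi @ x                                      # sub-Nyquist measurements

def omp(Phi, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```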

Proceedings ArticleDOI
TL;DR: The first public key broadcast encryption scheme that supports permanent revocation of users is presented, which improves on the original scheme in a poster presentation by a factor of O(log n) in all major performance measures.
Abstract: We propose a new and efficient scheme for broadcast encryption. A broadcast encryption system allows a broadcaster to send an encrypted message to a dynamically chosen subset RS, |RS|=n, of a given set of users, such that only users in this subset can decrypt the message. An important component of broadcast encryption schemes is revocation of users by the broadcaster, thereby updating the subset RS. Revocation may be either temporary, for a specific ciphertext, or permanent. In the existing public key schemes which support temporary revocation of users, the broadcaster is required to keep track of the revoked users. We present the first public key broadcast encryption scheme that supports permanent revocation of users. Unlike previous schemes, the broadcaster in our scheme need not keep track of the revoked users (saving memory and computation power). Our scheme is fully collusion-resistant: even if all the revoked users collude, they cannot decrypt messages without receiving new keys from the broadcaster. The procedure is based on Cipher-text Policy Attribute-Based Encryption (CP-ABE). The overhead of revocation in our system is constant in all major performance measures, including the length of private and public keys, users' storage space, and the computational complexity of encryption and decryption. The scheme we construct improves on our original scheme in a poster presentation [7] by a factor of O(log n) in all major performance measures.

Journal ArticleDOI
TL;DR: Computer architectures that are based on optics offer several interesting features, including high density of information represented in 3D (holography) using free space or crystals and high spatial parallelism.
Abstract: Optical computers use photons rather than electrons to represent and modify information. Computer architectures that are based on optics offer several interesting features:
• Current architectures use energy to move electrons, while photons move by nature. Thus, an optical information processing device consisting of passive components may not use energy. Transport of information is accomplished at high speed, with exceptionally good efficiency and without crosstalk.
• Transmission of information over long distances is done using light transmitted in fiber optics. Thus, there exists an inherent bottleneck of light-to-electronics conversion when information processing is done by electrons.
• High spatial parallelism.
• High density of information represented in 3D (holography) using free space or crystals.
• Some specific information tasks are already naturally solved by optical devices.

Book ChapterDOI
01 Oct 2012
TL;DR: The notion of digital arbitration is introduced which enables resolving disputes between servers and users with the aid of arbitrators in a social network that facilitate communication or business transactions.
Abstract: We introduce the notion of digital arbitration which enables resolving disputes between servers and users with the aid of arbitrators. Arbitrators are semi-trusted entities in a social network that facilitate communication or business transactions. The communicating parties, users and servers, agree before a communication transaction on a set of arbitrators that they trust (reputation systems may support their choice). Then, the arbitrators receive digital goods, e.g. a deposit, and a terms of use agreement between participants such that the goods of a participant are returned if and only if the participant acts according to the agreement.

Posted Content
TL;DR: Relying on the existence of one-way functions, this work shows how to process unbounded inputs (polynomial in the security parameter) at a cost linear in m, the number of FSA states, using in particular a novel share re-randomization technique which might be of independent interest.
Abstract: In the problem of swarm computing, n agents wish to securely and distributively perform a computation on common inputs, in such a way that even if the entire memory contents of some of them are exposed, no information is revealed about the state of the computation. Recently, Dolev, Garay, Gilboa and Kolesnikov [ICS 2011] considered this problem in the setting of information-theoretic security, showing how to perform such computations on input streams of unbounded length. The cost of their solution, however, is exponential in the size of the Finite State Automaton (FSA) computing the function. In this work we are interested in efficient computation in the above model, at the expense of minimal additional assumptions. Relying on the existence of one-way functions, we show how to process a priori unbounded inputs (but of course, polynomial in the security parameter) at a cost linear in m, the number of FSA states. In particular, our algorithms achieve the following: In the case of (n,n)-reconstruction (i.e., in which all n agents participate in reconstruction of the distributed computation) and at most n-1 agents are corrupted, the agent storage, the time required to process each input symbol and the time complexity for reconstruction are all O(mn). In the case of (t+1,n)-reconstruction (where only t+1 agents take part in the reconstruction) and at most t agents are corrupted, the agents' storage and time required to process each input symbol are O(m n 1

Posted Content
01 Jan 2012
TL;DR: Two solutions are presented, one that extends the Welch-Berlekamp technique and copes with discrete noise and Byzantine data, and the other based on Arora and Khot techniques, extending them in the case of multidimensional noisy and Byzantine data.
Abstract: Given a large set of measurement sensor data, in order to identify a simple function that captures the essence of the data gathered by the sensors, we suggest representing the data by (spatial) functions, in particular by polynomials. Given a (sampled) set of values, we interpolate the datapoints to define a polynomial that would represent the data. The interpolation is challenging, since in practice the data can be noisy and even Byzantine, where the Byzantine data represents an adversarial value that is not limited to being close to the correct measured data. We present two solutions, one that extends the Welch-Berlekamp technique in the case of multidimensional data, and copes with discrete noise and Byzantine data, and the other based on Arora and Khot techniques, extending them in the case of multidimensional noisy and Byzantine data.

Book ChapterDOI
19 Jul 2012
TL;DR: The architecture uses the Vector-Matrix-Multiplier as the basic device for performing calculations; these devices may in turn provide the building blocks for optically controlled devices.
Abstract: We present a design concept for a nano optical architecture for a finite state machine. The architecture uses the Vector-Matrix-Multiplier as the basic device used to perform calculations. We provide schematics of such a device. These devices may in turn provide the building blocks for optically controlled devices.
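
A toy software analogue of the design: the machine state is a one-hot vector and each input symbol selects a 0/1 transition matrix, so one state transition is exactly one vector-by-matrix multiplication. The example FSM is made up for illustration and the optical realization is not modelled.

```python
# Finite state machine driven by vector-matrix multiplication: one-hot state
# vector, one 0/1 transition matrix per input symbol, one VMM step per symbol.
import numpy as np

# Toy FSM over {a, b} with states {0, 1, 2}: 'a' advances the state, 'b' resets it.
n_states = 3
M = {
    "a": np.zeros((n_states, n_states), dtype=int),
    "b": np.zeros((n_states, n_states), dtype=int),
}
for s in range(n_states):
    M["a"][min(s + 1, n_states - 1), s] = 1   # column s -> row of the next state
    M["b"][0, s] = 1

state = np.array([1, 0, 0])                   # one-hot encoding of state 0
for symbol in "aababaa":
    state = M[symbol] @ state                 # one vector-matrix multiplication
print("final state:", int(np.argmax(state))) # 2
```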

Proceedings ArticleDOI
11 Dec 2012
TL;DR: A new method for fault-tolerant computing is proposed in which, for a given error rate, the Hamming distance between correct inputs and faulty inputs, as well as the Hamming distance between a correct result and a faulty result, is preserved throughout processing, thereby enabling correction of a bounded number of transient faults per computation cycle.
Abstract: The traditional approach to fault-tolerant computing involves replicating computation units and applying a majority vote operation on individual result bits. This approach, however, has several limitations; the most severe is the resource requirement. This paper presents a new method for fault-tolerant computing in which, for a given error rate, the Hamming distance between correct inputs and faulty inputs, as well as the Hamming distance between a correct result and a faulty result, is preserved throughout processing, thereby enabling correction of a bounded number of transient faults per computation cycle. The new method is compared and contrasted with current protection methods, and its cost/performance is analyzed.
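
For contrast, the traditional approach the abstract refers to, sketched in a few lines: triplicate the computation and take a bitwise majority vote over the result words. This is the baseline being compared against, not the paper's Hamming-distance-preserving method; the computation unit below is hypothetical.

```python
# Triple modular redundancy with a bitwise majority vote (the traditional baseline).
def majority3(a: int, b: int, c: int) -> int:
    # An output bit is 1 iff at least two of the three replicas agree on 1.
    return (a & b) | (a & c) | (b & c)

def compute(x):           # hypothetical computation unit
    return (x * 7 + 3) & 0xFF

x = 0b1011_0010
results = [compute(x), compute(x), compute(x) ^ 0b0000_0100]  # one replica is faulty
print(bin(majority3(*results)))        # the single-bit fault is voted out
print(bin(compute(x)))                 # matches the correct result
```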