Journal ArticleDOI

Error detecting and error correcting codes

01 Apr 1950-Bell System Technical Journal (Alcatel-Lucent)-Vol. 29, Iss: 2, pp 147-160
TL;DR: The author was led to the study given in this paper from a consideration of large scale computing machines in which a large number of operations must be performed without a single error in the end result.
Abstract: The author was led to the study given in this paper from a consideration of large scale computing machines in which a large number of operations must be performed without a single error in the end result. This problem of “doing things right” on a large scale is not essentially new; in a telephone central office, for example, a very large number of operations are performed while the errors leading to wrong numbers are kept well under control, though they have not been completely eliminated. This has been achieved, in part, through the use of self-checking circuits. The occasional failure that escapes routine checking is still detected by the customer and will, if it persists, result in customer complaint, while if it is transient it will produce only occasional wrong numbers. At the same time the rest of the central office functions satisfactorily. In a digital computer, on the other hand, a single failure usually means the complete failure, in the sense that if it is detected no more computing can be done until the failure is located and corrected, while if it escapes detection then it invalidates all subsequent operations of the machine. Put in other words, in a telephone central office there are a number of parallel paths which are more or less independent of each other; in a digital machine there is usually a single long path which passes through the same piece of equipment many, many times before the answer is obtained.
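The paper goes on to construct the single-error-correcting codes now known as Hamming codes. A minimal sketch of the classic (7,4) instance of that construction follows; the function names and bit layout are illustrative choices, not the paper's notation:

```python
# Sketch of a (7,4) single-error-correcting code in the spirit of the
# paper: bit positions 1..7, with parity bits at positions 1, 2 and 4.

def encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # checks positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # checks positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # checks positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def decode(c):
    """Correct at most one flipped bit, then return the 4 data bits."""
    # The failed parity checks, read as a binary number, point at the
    # position of the erroneous bit (0 means no error detected).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4
    if syndrome:
        c = list(c)
        c[syndrome - 1] ^= 1   # flip the bit the syndrome points at
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
sent = encode(word)
sent[4] ^= 1                   # corrupt one bit in transit
assert decode(sent) == word    # the single error is corrected
```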


Citations
Journal ArticleDOI
01 Jun 1988
TL;DR: Five levels of RAID are introduced, with their relative cost/performance, and RAID is compared to an IBM 3380 and a Fujitsu Super Eagle.
Abstract: Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle.
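The redundancy behind several of the RAID levels is plain XOR parity: one extra disk's worth of parity lets any single failed disk be rebuilt from the survivors. A toy illustration of that idea (the disk contents and names are invented for the example; this is not code from the paper):

```python
from functools import reduce

# Toy RAID-style parity: N data "disks" plus one parity "disk".
# Each disk here is just a list of byte values.
data_disks = [
    [0x12, 0x34, 0x56],
    [0xAB, 0xCD, 0xEF],
    [0x01, 0x02, 0x03],
]

# Parity disk: bytewise XOR across all data disks.
parity = [reduce(lambda a, b: a ^ b, col) for col in zip(*data_disks)]

# Simulate losing disk 1, then rebuild it: XORing the surviving data
# disks with the parity disk recovers the missing bytes.
survivors = [d for i, d in enumerate(data_disks) if i != 1] + [parity]
rebuilt = [reduce(lambda a, b: a ^ b, col) for col in zip(*survivors)]
assert rebuilt == data_disks[1]
```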

3,015 citations

Journal ArticleDOI
TL;DR: A mapping of m symbols into 2^n symbols will be shown to be (2^n − m)/2 or (2^n − m − 1)/2 symbol correcting, depending on whether m is even or odd.
Abstract: The code maps the m-tuple (a_0, a_1, …, a_{m−1}) into the 2^n-tuple (P(0), P(α), P(α²), …, P(1)); this m-tuple might be some encoded message and the corresponding 2^n-tuple is to be transmitted. This mapping of m symbols into 2^n symbols will be shown to be (2^n − m)/2 or (2^n − m − 1)/2 symbol correcting, depending on whether m is even or odd. A natural correspondence is established between the field elements of K and certain binary sequences of length n. Under this correspondence, code E may be regarded as a mapping of binary sequences of mn bits into binary sequences of n·2^n bits. Thus code E can be interpreted to be a systematic multiple-error-correcting code of binary sequences.
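A minimal sketch of the evaluation-style encoding the abstract describes, here over GF(2^4) with the primitive polynomial x^4 + x + 1 (the field size, modulus and message are my choices for illustration, not fixed by the paper):

```python
# Arithmetic in GF(2^4): elements are 4-bit integers, modulus x^4 + x + 1.
MOD = 0b10011

def gf_mul(a, b):
    """Carry-less multiply, reducing modulo the field polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:        # degree reached 4: reduce
            a ^= MOD
        b >>= 1
    return r

def poly_eval(coeffs, x):
    """Evaluate P(y) = a_0 + a_1*y + ... over GF(2^4) by Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = gf_mul(acc, x) ^ c   # addition in GF(2^n) is XOR
    return acc

# Encode an m-symbol message as the evaluations of P at every field
# element -- the codeword the abstract calls the 2^n-tuple.
message = [3, 0, 7]                        # m = 3 symbols a_0, a_1, a_2
codeword = [poly_eval(message, x) for x in range(16)]
print(codeword)                            # 16 symbols; any m determine P
```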

2,931 citations

Book
05 Jun 2007
TL;DR: The second edition of Ontology Matching has been thoroughly revised and updated to reflect the most recent advances in this quickly developing area, which resulted in more than 150 pages of new content.
Abstract: Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, and artificial intelligence. The second edition of Ontology Matching has been thoroughly revised and updated to reflect the most recent advances in this quickly developing area, which resulted in more than 150 pages of new content. In particular, the book includes a new chapter dedicated to the methodology for performing ontology matching. It also covers emerging topics, such as data interlinking, ontology partitioning and pruning, context-based matching, matcher tuning, alignment debugging, and user involvement in matching, to mention a few. More than 100 state-of-the-art matching systems and frameworks were reviewed. With Ontology Matching, researchers and practitioners will find a reference book that presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can be equally applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a systematic and detailed account of matching techniques and matching systems from theoretical, practical and application perspectives.

2,579 citations


Cites background from "Error detecting and error correcting codes"

  • ...A more immediate way of comparing two strings is the Hamming distance which counts the number of positions in which the two strings differ (Hamming 1950)....

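The Hamming distance mentioned in the citing text is simple to compute; a one-function sketch (the implementation is mine, not from either source):

```python
def hamming_distance(s: str, t: str) -> int:
    """Count the positions at which two equal-length strings differ."""
    if len(s) != len(t):
        raise ValueError("Hamming distance needs equal-length strings")
    return sum(a != b for a, b in zip(s, t))

assert hamming_distance("karolin", "kathrin") == 3
```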

Journal ArticleDOI
TL;DR: Expander graphs were first defined by Bassalygo and Pinsker, and their existence was first proved by Pinsker in the early 1970s; this survey traces their uses across mathematics and computer science.
Abstract: A major consideration we had in writing this survey was to make it accessible to mathematicians as well as to computer scientists, since expander graphs, the protagonists of our story, come up in numerous and often surprising contexts in both fields. But, perhaps, we should start with a few words about graphs in general. They are, of course, one of the prime objects of study in Discrete Mathematics. However, graphs are among the most ubiquitous models of both natural and human-made structures. In the natural and social sciences they model relations among species, societies, companies, etc. In computer science, they represent networks of communication, data organization, computational devices as well as the flow of computation, and more. In mathematics, Cayley graphs are useful in Group Theory. Graphs carry a natural metric and are therefore useful in Geometry, and though they are "just" one-dimensional complexes, they are useful in certain parts of Topology, e.g. Knot Theory. In statistical physics, graphs can represent local connections between interacting parts of a system, as well as the dynamics of a physical process on such systems. The study of these models calls, then, for the comprehension of the significant structural properties of the relevant graphs. But are there nontrivial structural properties which are universally important? Expansion of a graph requires that it is simultaneously sparse and highly connected. Expander graphs were first defined by Bassalygo and Pinsker, and their existence first proved by Pinsker in the early '70s. The property of being an expander seems significant in many of these mathematical, computational and physical contexts. It is not surprising that expanders are useful in the design and analysis of communication networks. What is less obvious is that expanders have surprising utility in other computational settings such as in the theory of error correcting codes and the theory of pseudorandomness. In mathematics, we will encounter e.g. their role in the study of metric embeddings, and in particular in work around the Baum-Connes Conjecture. Expansion is closely related to the convergence rates of Markov Chains, and so they play a key role in the study of Monte-Carlo algorithms in statistical mechanics and in a host of practical computational applications. The list of such interesting and fruitful connections goes on and on, with so many applications we will not even
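To make the "simultaneously sparse and highly connected" requirement concrete, here is a brute-force computation of the edge expansion h(G) = min over S with |S| ≤ n/2 of |∂S|/|S| on small graphs (the graphs and the enumeration approach are illustrative choices, not from the survey; real expander constructions are explicit and far too large for enumeration):

```python
from itertools import combinations

def edge_expansion(n, edges):
    """Brute-force h(G): minimize |boundary(S)| / |S| over |S| <= n/2."""
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            S = set(S)
            boundary = sum(1 for u, v in edges if (u in S) != (v in S))
            best = min(best, boundary / len(S))
    return best

# A cycle is sparse but a poor expander: cutting out a contiguous arc of
# n/2 vertices crosses only 2 edges, so h(G) -> 0 as n grows.
cycle = [(i, (i + 1) % 8) for i in range(8)]
print(edge_expansion(8, cycle))     # 2 edges leaving a 4-vertex arc: 0.5

# The complete graph is the (dense) extreme of good expansion.
complete = list(combinations(range(8), 2))
print(edge_expansion(8, complete))  # 4.0
```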

2,037 citations