Institution

IBM

Company
Armonk, New York, United States

About: IBM is a company based in Armonk, New York, United States. It is known for research contributions in the topics Layer (electronics) and Cache. The organization has 134,567 authors who have published 253,905 publications receiving 7,458,795 citations. The organization is also known as International Business Machines Corporation and Big Blue.


Papers
Journal Article
TL;DR: A related but simpler EPR scheme is described and proved secure against more general attacks, including substitution of a fake EPR source, and is shown to be equivalent to the original 1984 key distribution scheme of Bennett and Brassard, which uses single particles instead of EPR pairs.
Abstract: Ekert has described a cryptographic scheme in which Einstein-Podolsky-Rosen (EPR) pairs of particles are used to generate identical random numbers in remote places, while Bell's theorem certifies that the particles have not been measured in transit by an eavesdropper. We describe a related but simpler EPR scheme and, without invoking Bell's theorem, prove it secure against more general attacks, including substitution of a fake EPR source. Finally we show our scheme is equivalent to the original 1984 key distribution scheme of Bennett and Brassard, which uses single particles instead of EPR pairs.

2,050 citations
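
The scheme summarized above reduces to BB84-style key distribution: random bits are encoded in randomly chosen bases, and both parties keep only the positions where their bases agree. Below is a minimal Python sketch of that sifting step, invented for illustration rather than taken from the paper; it assumes a noiseless channel with no eavesdropper, and the function and variable names are hypothetical.

```python
import random

def bb84_sift(n_bits: int, seed: int = 0):
    """Simulate the sifting step of BB84-style key distribution (no eavesdropper).

    Alice encodes random bits in randomly chosen bases ('+' or 'x'); Bob measures
    each particle in his own random basis.  When the bases differ, Bob's outcome
    is random; when they match, he recovers Alice's bit.  Both parties keep only
    the positions where the bases agree.
    """
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice("+x") for _ in range(n_bits)]
    bob_bases   = [rng.choice("+x") for _ in range(n_bits)]

    bob_bits = [
        a_bit if a_basis == b_basis else rng.randint(0, 1)
        for a_bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)
    ]

    # Public discussion: compare bases (not bits) and keep matching positions.
    key_alice = [b for b, a, bb in zip(alice_bits, alice_bases, bob_bases) if a == bb]
    key_bob   = [b for b, a, bb in zip(bob_bits,  alice_bases, bob_bases) if a == bb]
    return key_alice, key_bob

if __name__ == "__main__":
    ka, kb = bb84_sift(32)
    assert ka == kb   # identical sifted keys in the noiseless, eavesdropper-free case
    print(len(ka), "sifted key bits:", ka)
```

In the full protocol a random subset of the sifted bits would additionally be compared in public to estimate the error rate and so detect an eavesdropper.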

Irina Rish
01 Jan 2001
TL;DR: This work analyzes the impact of distribution entropy on classification error, showing that low-entropy feature distributions yield good performance of naive Bayes, and demonstrates that naive Bayes works well for certain nearly functional feature dependencies.
Abstract: The naive Bayes classifier greatly simplifies learning by assuming that features are independent given the class. Although independence is generally a poor assumption, in practice naive Bayes often competes well with more sophisticated classifiers. Our broad goal is to understand the data characteristics which affect the performance of naive Bayes. Our approach uses Monte Carlo simulations that allow a systematic study of classification accuracy for several classes of randomly generated problems. We analyze the impact of the distribution entropy on the classification error, showing that low-entropy feature distributions yield good performance of naive Bayes. We also demonstrate that naive Bayes works well for certain nearly functional feature dependencies, thus reaching its best performance in two opposite cases: completely independent features (as expected) and functionally dependent features (which is surprising). Another surprising result is that the accuracy of naive Bayes is not directly correlated with the degree of feature dependencies measured as the class-conditional mutual information between the features. Instead, a better predictor of naive Bayes accuracy is the amount of information about the class that is lost because of the independence assumption.

2,046 citations
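
For readers who want the independence assumption analyzed above written out, here is a minimal categorical naive Bayes with Laplace smoothing; the class name, toy data, and labels are invented for illustration and are unrelated to the paper's Monte Carlo setup.

```python
from collections import Counter, defaultdict
import math

class CategoricalNaiveBayes:
    """Minimal naive Bayes for discrete features:
    P(c | x) is proportional to P(c) * prod_i P(x_i | c)."""

    def fit(self, X, y, alpha=1.0):
        self.alpha = alpha
        self.classes = sorted(set(y))
        self.class_counts = Counter(y)
        self.n = len(y)
        # feature_counts[c][i][value] = how often feature i took `value` in class c
        self.feature_counts = {c: defaultdict(Counter) for c in self.classes}
        self.feature_values = defaultdict(set)
        for xs, c in zip(X, y):
            for i, v in enumerate(xs):
                self.feature_counts[c][i][v] += 1
                self.feature_values[i].add(v)
        return self

    def predict(self, xs):
        def log_posterior(c):
            lp = math.log(self.class_counts[c] / self.n)
            for i, v in enumerate(xs):
                num = self.feature_counts[c][i][v] + self.alpha
                den = self.class_counts[c] + self.alpha * len(self.feature_values[i])
                # Independence assumption: the likelihood factorizes given the class.
                lp += math.log(num / den)
            return lp
        return max(self.classes, key=log_posterior)

# Toy usage: the second feature is a copy of the first (functionally dependent).
X = [(0, 0), (1, 1), (0, 0), (1, 1), (0, 0), (1, 1)]
y = ["a", "b", "a", "b", "a", "b"]
clf = CategoricalNaiveBayes().fit(X, y)
print(clf.predict((1, 1)))   # -> "b"
```

The toy data deliberately makes one feature a copy of the other, i.e. a functionally dependent pair, which is one of the regimes in which the paper reports naive Bayes performing well despite the violated independence assumption.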

Journal Article
S. Katz
TL;DR: The model offers, via a nonlinear recursive procedure, a computation- and space-efficient solution to the problem of estimating probabilities from sparse data, and compares favorably to other proposed methods.
Abstract: The description of a novel type of m-gram language model is given. The model offers, via a nonlinear recursive procedure, a computation- and space-efficient solution to the problem of estimating probabilities from sparse data. This solution compares favorably to other proposed methods. While the method has been developed for and successfully implemented in the IBM Real Time Speech Recognizers, its generality makes it applicable in other areas where the problem of estimating probabilities from sparse data arises.

2,038 citations
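
The back-off idea behind the model can be sketched in a few lines: observed n-grams get a discounted estimate, and the probability mass freed by the discount is redistributed over unseen continuations via a lower-order model. The bigram sketch below uses a fixed absolute discount purely for illustration; Katz's actual model uses Good-Turing discounting with count-dependent discount ratios, and the function names here are hypothetical.

```python
from collections import Counter

def backoff_bigram_model(tokens, discount=0.5):
    """Simplified Katz-style back-off: seen bigrams get a discounted ML estimate;
    unseen bigrams back off to the unigram distribution, scaled so that each
    conditional distribution still sums to one."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    total = sum(unigrams.values())

    def p_unigram(w):
        return unigrams[w] / total

    def p(w, prev):
        followers = {b: c for (a, b), c in bigrams.items() if a == prev}
        if w in followers:
            # Discounted maximum-likelihood estimate for a seen bigram.
            return (followers[w] - discount) / unigrams[prev]
        # Mass freed by discounting, redistributed via the unigram model.
        reserved = discount * len(followers) / unigrams[prev]
        unseen_mass = sum(p_unigram(v) for v in unigrams if v not in followers)
        return reserved * p_unigram(w) / unseen_mass

    return p

p = backoff_bigram_model("the cat sat on the mat the cat ran".split())
print(p("sat", "cat"), p("ran", "cat"), p("mat", "cat"))
```

Each conditional distribution still sums to one because exactly the reserved mass is spread over the unseen words in proportion to their unigram probabilities.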

Journal Article
TL;DR: Expander graphs were first defined by Bassalygo and Pinsker, and their existence was first proved by Pinsker in the early 1970s, as discussed by the authors.
Abstract: A major consideration we had in writing this survey was to make it accessible to mathematicians as well as to computer scientists, since expander graphs, the protagonists of our story, come up in numerous and often surprising contexts in both fields. But, perhaps, we should start with a few words about graphs in general. They are, of course, one of the prime objects of study in Discrete Mathematics. However, graphs are among the most ubiquitous models of both natural and human-made structures. In the natural and social sciences they model relations among species, societies, companies, etc. In computer science, they represent networks of communication, data organization, computational devices as well as the flow of computation, and more. In mathematics, Cayley graphs are useful in Group Theory. Graphs carry a natural metric and are therefore useful in Geometry, and though they are “just” one-dimensional complexes, they are useful in certain parts of Topology, e.g. Knot Theory. In statistical physics, graphs can represent local connections between interacting parts of a system, as well as the dynamics of a physical process on such systems. The study of these models calls, then, for the comprehension of the significant structural properties of the relevant graphs. But are there nontrivial structural properties which are universally important? Expansion of a graph requires that it is simultaneously sparse and highly connected. Expander graphs were first defined by Bassalygo and Pinsker, and their existence first proved by Pinsker in the early ’70s. The property of being an expander seems significant in many of these mathematical, computational and physical contexts. It is not surprising that expanders are useful in the design and analysis of communication networks. What is less obvious is that expanders have surprising utility in other computational settings such as in the theory of error correcting codes and the theory of pseudorandomness. In mathematics, we will encounter e.g. their role in the study of metric embeddings, and in particular in work around the Baum-Connes Conjecture. Expansion is closely related to the convergence rates of Markov Chains, and so they play a key role in the study of Monte-Carlo algorithms in statistical mechanics and in a host of practical computational applications. The list of such interesting and fruitful connections goes on and on, with so many applications we will not even …

2,037 citations
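
Expansion has a convenient numerical proxy: by the discrete Cheeger inequality, a graph whose normalized adjacency matrix has second eigenvalue well below 1 has good edge expansion. The NumPy sketch below is an illustration invented for this listing rather than anything from the survey; it contrasts a plain cycle (sparse but badly connected) with a cycle augmented by a random perfect matching, a standard way to obtain a sparse, well-connected graph.

```python
import numpy as np

def second_eigenvalue(adj):
    """Second-largest eigenvalue of the normalized adjacency D^{-1/2} A D^{-1/2};
    the further below 1 it sits, the better the graph expands (Cheeger)."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    return np.sort(np.linalg.eigvalsh(d_inv_sqrt @ adj @ d_inv_sqrt))[-2]

def cycle_graph(n):
    """Adjacency matrix of the n-cycle: sparse but poorly connected."""
    adj = np.zeros((n, n))
    for i in range(n):
        adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
    return adj

def cycle_plus_random_matching(n, seed=0):
    """Cycle plus a uniformly random perfect matching (n even): still sparse,
    but with high probability far better connected than the bare cycle."""
    rng = np.random.default_rng(seed)
    adj = cycle_graph(n)
    perm = rng.permutation(n)
    for a, b in zip(perm[0::2], perm[1::2]):
        adj[a, b] = adj[b, a] = 1
    return adj

n = 64
print("cycle:            lambda_2 =", round(float(second_eigenvalue(cycle_graph(n))), 3))
print("cycle + matching: lambda_2 =", round(float(second_eigenvalue(cycle_plus_random_matching(n))), 3))
```

On a typical run the cycle's second eigenvalue sits very close to 1 while the augmented graph's drops noticeably, which is exactly the sparse-yet-highly-connected behaviour the survey calls expansion.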

Journal Article
TL;DR: It is argued that a transaction needs to lock a logical rather than a physical subset of the database, and an implementation of predicate locks which satisfies the consistency condition is suggested.
Abstract: In database systems, users access shared data under the assumption that the data satisfies certain consistency constraints. This paper defines the concepts of transaction, consistency and schedule and shows that consistency requires that a transaction cannot request new locks after releasing a lock. Then it is argued that a transaction needs to lock a logical rather than a physical subset of the database. These subsets may be specified by predicates. An implementation of predicate locks which satisfies the consistency condition is suggested.

2,031 citations
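
A predicate lock guards the logical subset of records satisfying a predicate, so two lock requests conflict when some record could satisfy both predicates and at least one request is for writing. The Python sketch below illustrates this for simple range predicates over one numeric attribute; the class and function names are hypothetical, and deciding joint satisfiability for arbitrary predicates is hard in general, which is why practical schemes restrict the predicate language.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IntervalPredicate:
    """Predicate 'lo <= attr <= hi' over a single numeric attribute."""
    lo: float
    hi: float

    def overlaps(self, other: "IntervalPredicate") -> bool:
        # Two range predicates can be satisfied by the same record
        # exactly when their intervals intersect.
        return self.lo <= other.hi and other.lo <= self.hi

@dataclass(frozen=True)
class PredicateLock:
    predicate: IntervalPredicate
    exclusive: bool   # True for writes, False for reads

def conflicts(a: PredicateLock, b: PredicateLock) -> bool:
    """Locks conflict if some record could satisfy both predicates
    and at least one lock is exclusive (read/write or write/write)."""
    return (a.exclusive or b.exclusive) and a.predicate.overlaps(b.predicate)

# Usage: a reader scanning values 0-100 conflicts with a writer updating 50-60,
# but not with a writer touching 200-300.
reader = PredicateLock(IntervalPredicate(0, 100), exclusive=False)
print(conflicts(reader, PredicateLock(IntervalPredicate(50, 60), exclusive=True)))    # True
print(conflicts(reader, PredicateLock(IntervalPredicate(200, 300), exclusive=True)))  # False
```

Because the lock covers a logical subset rather than particular physical records, it also blocks insertions of new records that would satisfy the predicate, which is the property the paper argues physical locking cannot provide.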


Authors

Showing all 134,658 results

Name                      H-index   Papers   Citations
Zhong Lin Wang            245       2,529    259,003
Anil K. Jain              183       1,016    192,151
Hyun-Chul Kim             176       4,076    183,227
Rodney S. Ruoff           164       666      194,902
Tobin J. Marks            159       1,621    111,604
Jean M. J. Fréchet        154       726      90,295
Albert-László Barabási    152       438      200,119
György Buzsáki            150       446      96,433
Stanislas Dehaene         149       456      86,539
Philip S. Yu              148       1,914    107,374
James M. Tour             143       859      91,364
Thomas P. Russell         141       1,012    80,055
Naomi J. Halas            140       435      82,040
Steven G. Louie           137       777      88,794
Daphne Koller             135       367      71,073
Network Information
Related Institutions (5)
Carnegie Mellon University: 104.3K papers, 5.9M citations, 93% related
Georgia Institute of Technology: 119K papers, 4.6M citations, 92% related
Bell Labs: 59.8K papers, 3.1M citations, 90% related
Microsoft: 86.9K papers, 4.1M citations, 89% related
Massachusetts Institute of Technology: 268K papers, 18.2M citations, 88% related

Performance Metrics
No. of papers from the Institution in previous years

Year   Papers
2023   30
2022   137
2021   3,163
2020   6,336
2019   6,427
2018   6,278