Institution

IBM

Company · Armonk, New York, United States
About: IBM is a company based in Armonk, New York, United States. It is known for research contributions in the topics Layer (electronics) and Cache. The organization has 134567 authors who have published 253905 publications receiving 7458795 citations. The organization is also known as International Business Machines Corporation and Big Blue.


Papers
Journal ArticleDOI
TL;DR: The SAMANN network offers the generalization ability of projecting new data, which is not present in the original Sammon's projection algorithm; the NDA method and NP-SOM network provide new powerful approaches for visualizing high dimensional data.
Abstract: Classical feature extraction and data projection methods have been well studied in the pattern recognition and exploratory data analysis literature. We propose a number of networks and learning algorithms which provide new or alternative tools for feature extraction and data projection. These networks include a network (SAMANN) for J.W. Sammon's (1969) nonlinear projection, a linear discriminant analysis (LDA) network, a nonlinear discriminant analysis (NDA) network, and a network for nonlinear projection (NP-SOM) based on Kohonen's self-organizing map. A common attribute of these networks is that they all employ adaptive learning algorithms, making them suitable in environments where the distribution of patterns in feature space changes over time. The availability of these networks also facilitates hardware implementation of well-known classical feature extraction and projection approaches. Moreover, the SAMANN network offers the generalization ability of projecting new data, which is not present in the original Sammon's projection algorithm; the NDA method and NP-SOM network provide new powerful approaches for visualizing high dimensional data. We evaluate five representative neural networks for feature extraction and data projection based on a visual judgement of the two-dimensional projection maps and three quantitative criteria on eight data sets with various properties.

695 citations
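The SAMANN network in this paper builds on Sammon's (1969) nonlinear projection, which seeks a low-dimensional layout whose pairwise distances match those of the original data. As a rough illustration only, not the authors' networks: the sketch below minimizes Sammon's stress by plain gradient descent on free 2-D coordinates, using made-up random data. The dataset, learning rate, and iteration count are assumptions for illustration.

```python
import numpy as np

def pairwise(X):
    """Euclidean distance matrix for the rows of X."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def sammon_stress(D, d):
    """Sammon's stress: distance mismatches weighted by 1/D_ij."""
    mask = D > 0
    return ((D[mask] - d[mask]) ** 2 / D[mask]).sum() / D[mask].sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))           # toy high-dimensional data
Y = rng.normal(size=(20, 2)) * 0.1     # random 2-D starting layout
D = pairwise(X)
c = D[D > 0].sum()

stress0 = sammon_stress(D, pairwise(Y))
for _ in range(300):                   # gradient descent on the stress
    d = pairwise(Y) + np.eye(len(Y))   # pad diagonals to avoid 0/0
    Dm = D + np.eye(len(D))
    W = (D - d) / (Dm * d)
    np.fill_diagonal(W, 0.0)
    grad = -(2.0 / c) * (W[:, :, None] * (Y[:, None, :] - Y[None, :, :])).sum(axis=1)
    Y -= 0.3 * grad
stress1 = sammon_stress(D, pairwise(Y))
```

The generalization advantage the abstract mentions comes from replacing the free coordinates Y with the output of a trained network, so unseen points can be projected without re-running the optimization.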

Journal ArticleDOI
TL;DR: It is shown that a large data set of more than 100 devices can be consistently accounted for by a model that relates the on-current of a CNFET to a tunneling barrier whose height is determined by the nanotube diameter and the nature of the source/drain metal contacts.
Abstract: Single-wall carbon nanotube field-effect transistors (CNFETs) have been shown to behave as Schottky barrier (SB) devices. It is not clear, however, what factors control the SB size. Here we present the first statistical analysis of this issue. We show that a large data set of more than 100 devices can be consistently accounted for by a model that relates the on-current of a CNFET to a tunneling barrier whose height is determined by the nanotube diameter and the nature of the source/drain metal contacts. Our study permits identification of the desired combination of tube diameter and type of metal that provides the optimum performance of a CNFET.

695 citations

Journal Article
Shai Fine, Katya Scheinberg
TL;DR: This work shows that for a low rank kernel matrix it is possible to design a better interior point method (IPM) in terms of storage requirements as well as computational complexity and derives an upper bound on the change in the objective function value based on the approximation error and the number of active constraints (support vectors).
Abstract: SVM training is a convex optimization problem which scales with the training set size rather than the feature space dimension. While this is usually considered to be a desired quality, in large scale problems it may cause training to be impractical. The common techniques to handle this difficulty basically build a solution by solving a sequence of small scale subproblems. Our current effort is concentrated on the rank of the kernel matrix as a source for further enhancement of the training procedure. We first show that for a low rank kernel matrix it is possible to design a better interior point method (IPM) in terms of storage requirements as well as computational complexity. We then suggest an efficient use of a known factorization technique to approximate a given kernel matrix by a low rank matrix, which in turn will be used to feed the optimizer. Finally, we derive an upper bound on the change in the objective function value based on the approximation error and the number of active constraints (support vectors). This bound is general in the sense that it holds regardless of the approximation method.

695 citations
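The factorization step this abstract alludes to can be pictured with a pivoted incomplete Cholesky decomposition, which builds K ≈ G Gᵀ one column at a time and stops once the residual diagonal is negligible. A minimal numpy sketch on made-up clustered data (the dataset, kernel width, and rank cap are assumptions for illustration, not the paper's experiments):

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    """Gaussian (RBF) kernel matrix for the rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def incomplete_cholesky(K, tol=1e-8, max_rank=None):
    """Pivoted incomplete Cholesky: returns G with K ~ G @ G.T."""
    n = K.shape[0]
    r = max_rank or n
    G = np.zeros((n, r))
    d = np.diag(K).astype(float)      # residual diagonal of K - G G^T
    for k in range(r):
        i = int(np.argmax(d))         # pivot on the largest residual
        if d[i] < tol:
            return G[:, :k]           # remaining residual is negligible
        G[:, k] = (K[:, i] - G @ G[i]) / np.sqrt(d[i])
        d -= G[:, k] ** 2
    return G

rng = np.random.default_rng(1)
# three tight clusters -> the kernel matrix is numerically low rank
centers = rng.normal(size=(3, 2)) * 5.0
X = np.vstack([c + 0.01 * rng.normal(size=(10, 2)) for c in centers])
K = rbf_kernel(X)

G = incomplete_cholesky(K, max_rank=10)
rel_err = np.linalg.norm(K - G @ G.T) / np.linalg.norm(K)
```

With K ≈ G Gᵀ of rank r, the Newton systems inside an interior point method can be solved via the Sherman-Morrison-Woodbury identity in roughly O(nr²) time and O(nr) storage instead of O(n³) and O(n²), which is the kind of saving the abstract claims.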

Patent
Stephane H. Maes, Chalapathy Neti
31 Jan 2002
TL;DR: A method is presented for performing focus detection, ambiguity resolution and mood classification in accordance with multi-modal input data, in varying operating conditions, in order to provide an effective conversational computing environment (418, 422) for one or more users (812).
Abstract: A method is provided for performing focus detection, ambiguity resolution and mood classification (815) in accordance with multi-modal input data, in varying operating conditions, in order to provide an effective conversational computing environment (418, 422) for one or more users (812).

694 citations

Journal ArticleDOI
Jerry B. Torrance, Y. Tokura, A. I. Nazzal, A. Bezinge, T. C. Huang, Stuart S. P. Parkin
TL;DR: It is found that T_c is constant at ≈36 K from p = 0.15 to 0.24, where it begins to decrease, and beyond p ≈ 0.32, superconductivity disappears, even though the samples are more conducting.
Abstract: Samples of La_{2-x}Sr_x CuO_{4-δ} have previously shown a maximum concentration of p = 0.15 holes per [CuO_2] unit, because increasing x > 0.15 normally induces compensating oxygen vacancies. Annealing samples in 100 bars of oxygen pressure fills the oxygen vacancies and greatly increases the range of accessible hole concentrations, up to p = 0.40 (or effectively Cu^{+2.40}). We find that T_c is constant at ≈36 K from p = 0.15 to 0.24, where it begins to decrease. Beyond p ≈ 0.32, superconductivity disappears, even though the samples are more conducting.

694 citations


Authors

Showing all 134658 results

Name | H-index | Papers | Citations
Zhong Lin Wang | 245 | 2529 | 259003
Anil K. Jain | 183 | 1016 | 192151
Hyun-Chul Kim | 176 | 4076 | 183227
Rodney S. Ruoff | 164 | 666 | 194902
Tobin J. Marks | 159 | 1621 | 111604
Jean M. J. Fréchet | 154 | 726 | 90295
Albert-László Barabási | 152 | 438 | 200119
György Buzsáki | 150 | 446 | 96433
Stanislas Dehaene | 149 | 456 | 86539
Philip S. Yu | 148 | 1914 | 107374
James M. Tour | 143 | 859 | 91364
Thomas P. Russell | 141 | 1012 | 80055
Naomi J. Halas | 140 | 435 | 82040
Steven G. Louie | 137 | 777 | 88794
Daphne Koller | 135 | 367 | 71073
Network Information
Related Institutions (5)
Carnegie Mellon University: 104.3K papers, 5.9M citations (93% related)
Georgia Institute of Technology: 119K papers, 4.6M citations (92% related)
Bell Labs: 59.8K papers, 3.1M citations (90% related)
Microsoft: 86.9K papers, 4.1M citations (89% related)
Massachusetts Institute of Technology: 268K papers, 18.2M citations (88% related)

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 30
2022 | 137
2021 | 3,163
2020 | 6,336
2019 | 6,427
2018 | 6,278