Institution
IBM
Company • Armonk, New York, United States
About: IBM is a company headquartered in Armonk, New York, United States. It is known for its research contributions in topics such as Layer (electronics) and Cache. The organization has 134567 authors who have published 253905 publications receiving 7458795 citations. The organization is also known as International Business Machines Corporation and Big Blue.
(Chart: papers published on a yearly basis)
Papers
TL;DR: The SAMANN network offers the generalization ability of projecting new data, which is not present in the original Sammon's projection algorithm; the NDA method and NP-SOM network provide new powerful approaches for visualizing high dimensional data.
Abstract: Classical feature extraction and data projection methods have been well studied in the pattern recognition and exploratory data analysis literature. We propose a number of networks and learning algorithms which provide new or alternative tools for feature extraction and data projection. These networks include a network (SAMANN) for J.W. Sammon's (1969) nonlinear projection, a linear discriminant analysis (LDA) network, a nonlinear discriminant analysis (NDA) network, and a network for nonlinear projection (NP-SOM) based on Kohonen's self-organizing map. A common attribute of these networks is that they all employ adaptive learning algorithms, making them suitable in environments where the distribution of patterns in feature space changes over time. The availability of these networks also facilitates hardware implementation of well-known classical feature extraction and projection approaches. Moreover, the SAMANN network offers the generalization ability of projecting new data, which is not present in the original Sammon's projection algorithm; the NDA method and NP-SOM network provide new powerful approaches for visualizing high-dimensional data. We evaluate five representative neural networks for feature extraction and data projection based on a visual judgement of the two-dimensional projection maps and three quantitative criteria on eight data sets with various properties.
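The mapping that the SAMANN network learns is Sammon's nonlinear projection, which minimizes a normalized "stress" between input-space and map-space pairwise distances. Below is a minimal numpy sketch of the original (non-network) gradient-descent version of that projection; the function names, the PCA initialization, and the step size are our own illustrative choices, not taken from the paper (whose contribution is to realize the same mapping with a trained network, gaining the ability to project new points).

```python
import numpy as np

def sammon(X, n_iter=300, lr=0.5):
    """Sammon (1969) nonlinear projection of X (n x d) to 2-D by plain
    gradient descent on the stress E = (1/c) * sum_{i<j} (D_ij - d_ij)^2 / D_ij,
    where D are input-space and d are map-space distances, c = sum_{i<j} D_ij.
    Returns the 2-D embedding and the stress history."""
    X = X - X.mean(axis=0)
    n = len(X)
    # input-space pairwise distances and the stress normalizer c
    D = np.sqrt(((X[:, None] - X[None, :]) ** 2).sum(-1))
    D[D < 1e-9] = 1e-9                      # guard against coincident points
    iu = np.triu_indices(n, 1)
    c = D[iu].sum()
    np.fill_diagonal(D, 1.0)                # dummy value; diagonal is masked below
    # initialize the map with the top-2 PCA projection (a common choice)
    Y = X @ np.linalg.svd(X, full_matrices=False)[2][:2].T
    history = []
    for _ in range(n_iter):
        diff = Y[:, None] - Y[None, :]      # (n, n, 2) pairwise differences
        d = np.sqrt((diff ** 2).sum(-1))
        np.fill_diagonal(d, 1.0)            # avoid 0/0 on the diagonal
        history.append(((D[iu] - d[iu]) ** 2 / D[iu]).sum() / c)
        # gradient of the stress w.r.t. each map point Y_i
        coef = (d - D) / (D * d)
        np.fill_diagonal(coef, 0.0)
        Y -= lr * (2.0 / c) * (coef[..., None] * diff).sum(axis=1)
    return Y, history
```

Because the stress is defined only on the training set's distance matrix, this classical form has no way to place a new point, which is exactly the limitation the abstract says SAMANN removes.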
695 citations
TL;DR: It is shown that a large data set of more than 100 devices can be consistently accounted for by a model that relates the on-current of a CNFET to a tunneling barrier whose height is determined by the nanotube diameter and the nature of the source/drain metal contacts.
Abstract: Single-wall carbon nanotube field-effect transistors (CNFETs) have been shown to behave as Schottky barrier (SB) devices. It is not clear, however, what factors control the SB size. Here we present the first statistical analysis of this issue. We show that a large data set of more than 100 devices can be consistently accounted for by a model that relates the on-current of a CNFET to a tunneling barrier whose height is determined by the nanotube diameter and the nature of the source/drain metal contacts. Our study permits identification of the desired combination of tube diameter and type of metal that provides the optimum performance of a CNFET.
695 citations
TL;DR: This work shows that for a low rank kernel matrix it is possible to design a better interior point method (IPM) in terms of storage requirements as well as computational complexity and derives an upper bound on the change in the objective function value based on the approximation error and the number of active constraints (support vectors).
Abstract: SVM training is a convex optimization problem which scales with the training set size rather than the feature space dimension. While this is usually considered to be a desired quality, in large scale problems it may cause training to be impractical. The common techniques to handle this difficulty basically build a solution by solving a sequence of small scale subproblems. Our current effort is concentrated on the rank of the kernel matrix as a source for further enhancement of the training procedure. We first show that for a low rank kernel matrix it is possible to design a better interior point method (IPM) in terms of storage requirements as well as computational complexity. We then suggest an efficient use of a known factorization technique to approximate a given kernel matrix by a low rank matrix, which in turn will be used to feed the optimizer. Finally, we derive an upper bound on the change in the objective function value based on the approximation error and the number of active constraints (support vectors). This bound is general in the sense that it holds regardless of the approximation method.
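The abstract does not name its "known factorization technique"; a pivoted incomplete Cholesky factorization is one standard way to build a low-rank factor G with K ≈ GGᵀ while touching only a few columns of the kernel matrix. The sketch below is our own illustration under that assumption (function names and the RBF test kernel are ours, not the paper's):

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    """Gaussian RBF kernel matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def pivoted_cholesky(K, max_rank, tol=1e-10):
    """Incomplete Cholesky with diagonal pivoting: returns G of shape
    (n, k), k <= max_rank, with K approximately equal to G @ G.T.
    Each step needs only one column of K, and it stops early once the
    residual diagonal is numerically exhausted."""
    n = K.shape[0]
    G = np.zeros((n, max_rank))
    d = np.diag(K).copy()              # residual diagonal of K - G @ G.T
    for j in range(max_rank):
        p = int(np.argmax(d))          # pivot on the largest residual entry
        if d[p] < tol:                 # K is (numerically) rank j: done early
            return G[:, :j]
        G[:, j] = (K[:, p] - G[:, :j] @ G[p, :j]) / np.sqrt(d[p])
        d -= G[:, j] ** 2
    return G
```

The factor G (n x k) can then stand in for K inside the optimizer: using a Sherman-Morrison-Woodbury-style identity in the IPM's normal equations, each iteration costs roughly O(nk^2) rather than O(n^3), which is the storage and complexity gain the abstract describes for low-rank kernels.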
695 citations
TL;DR: A method is presented for performing focus detection, ambiguity resolution and mood classification in accordance with multi-modal input data, under varying operating conditions, in order to provide an effective conversational computing environment for one or more users.
Abstract: A method is provided for performing focus detection, ambiguity resolution and mood classification (815) in accordance with multi-modal input data, in varying operating conditions, in order to provide an effective conversational computing environment (418, 422) for one or more users (812).
694 citations
TL;DR: It is found that $T_c$ is constant at $\simeq 36$ K from $p = 0.15$ to $0.24$, where it begins to decrease; beyond $p \simeq 0.32$, superconductivity disappears, even though the samples are more conducting.
Abstract: Samples of $\mathrm{La}_{2-x}\mathrm{Sr}_x\mathrm{CuO}_{4-\delta}$ have previously shown a maximum concentration of $p = 0.15$ holes per [$\mathrm{CuO}_2$] unit, because increasing $x > 0.15$ normally induces compensating oxygen vacancies. Annealing samples in 100 bars of oxygen pressure fills the oxygen vacancies and greatly increases the range of accessible hole concentrations, up to $p = 0.40$ (or effectively $\mathrm{Cu}^{+2.40}$). We find that $T_c$ is constant at $\simeq 36$ K from $p = 0.15$ to $0.24$, where it begins to decrease. Beyond $p \simeq 0.32$, superconductivity disappears, even though the samples are more conducting.
694 citations
Authors
Showing all 134658 results
Name | H-index | Papers | Citations
---|---|---|---
Zhong Lin Wang | 245 | 2529 | 259003 |
Anil K. Jain | 183 | 1016 | 192151 |
Hyun-Chul Kim | 176 | 4076 | 183227 |
Rodney S. Ruoff | 164 | 666 | 194902 |
Tobin J. Marks | 159 | 1621 | 111604 |
Jean M. J. Fréchet | 154 | 726 | 90295 |
Albert-László Barabási | 152 | 438 | 200119 |
György Buzsáki | 150 | 446 | 96433 |
Stanislas Dehaene | 149 | 456 | 86539 |
Philip S. Yu | 148 | 1914 | 107374 |
James M. Tour | 143 | 859 | 91364 |
Thomas P. Russell | 141 | 1012 | 80055 |
Naomi J. Halas | 140 | 435 | 82040 |
Steven G. Louie | 137 | 777 | 88794 |
Daphne Koller | 135 | 367 | 71073 |