Institution
Technion – Israel Institute of Technology
Education • Haifa, Israel
About: Technion – Israel Institute of Technology is an education organization based in Haifa, Israel. It is known for its research contributions in the topics of Population & Nonlinear system. The organization has 31714 authors who have published 79377 publications receiving 2603976 citations. The organization is also known as: Technion Israel Institute of Technology & Ṭekhniyon, Makhon ṭekhnologi le-Yiśraʼel.
Papers published on a yearly basis
Papers
TL;DR: These findings show that human ES cells have great potential to become an unlimited cell source for neurons in culture, and may then be used in transplantation therapies for neural pathologies.
466 citations
TL;DR: In this paper, the authors describe the results of the experimental investigation and modeling of multiple jets during the electrospinning of polymer solutions, and demonstrate how the external electric fields and mutual electric interaction of multiple charged jets influence their path and evolution during electro-spinning.
466 citations
TL;DR: In this paper, an experimental study is presented to evaluate the effectiveness of micro-surface structure, produced by laser texturing, to improve tribological properties of reciprocating automotive components, including piston rings and cylinder linings.
Abstract: An experimental study is presented to evaluate the effectiveness of micro-surface structure, produced by laser texturing, to improve tribological properties of reciprocating automotive components. The test rig and test specimens are described and some test results are presented. Good correlation is found with theoretical prediction of friction reduction on a simple, yet representative, test specimen. Potential benefit of the laser surface texturing under conditions of lubricant starvation is also presented. Finally, friction reduction with actual production piston rings and cylinder liner segments is demonstrated. Presented at the 57th Annual Meeting in Houston, Texas May 19–23, 2002
465 citations
13 Oct 2003
TL;DR: This work exploits a recently proposed approximation technique, locality-sensitive hashing (LSH), to reduce the computational complexity of adaptive mean shift; in the LSH implementation, the optimal parameters of the data structure are determined by a pilot learning procedure, and the partitions are data driven.
Abstract: Feature space analysis is the main module in many computer vision tasks. The most popular technique, k-means clustering, however, has two inherent limitations: the clusters are constrained to be spherically symmetric and their number has to be known a priori. In nonparametric clustering methods, like the one based on mean shift, these limitations are eliminated but the amount of computation becomes prohibitively large as the dimension of the space increases. We exploit a recently proposed approximation technique, locality-sensitive hashing (LSH), to reduce the computational complexity of adaptive mean shift. In our implementation of LSH the optimal parameters of the data structure are determined by a pilot learning procedure, and the partitions are data driven. As an application, the performance of mode and k-means based textons are compared in a texture classification study.
465 citations
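The abstract above describes restricting mean shift's neighborhood queries to LSH buckets instead of scanning the full dataset. A minimal sketch of that idea in Python, not the authors' implementation: the dataset, bandwidth, and the sign-of-random-projection hash used here are illustrative stand-ins for the paper's pilot-tuned data structure.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy dataset: two well-separated 2-D clusters (illustrative, not the paper's data).
X = np.vstack([rng.normal(1.0, 0.1, (50, 2)),
               rng.normal(4.0, 0.1, (50, 2))])

# Simplified LSH: bucket points by the sign pattern of a few random projections.
K = 4
planes = rng.normal(size=(K, 2))

def hash_key(p):
    return tuple((planes @ p > 0).astype(int))

buckets = {}
for x in X:
    buckets.setdefault(hash_key(x), []).append(x)

def mean_shift(p, bandwidth=0.5, iters=20):
    """Mean-shift iterations whose neighbor search is restricted to the
    point's LSH bucket instead of scanning all of X."""
    for _ in range(iters):
        cand = np.array(buckets.get(hash_key(p), [p]))
        d = np.linalg.norm(cand - p, axis=1)
        near = cand[d < bandwidth]
        if len(near) == 0:
            break
        new_p = near.mean(axis=0)
        if np.linalg.norm(new_p - p) < 1e-6:
            break
        p = new_p
    return p

m0 = mean_shift(X[0])   # climbs toward the first cluster's mode, near (1, 1)
m1 = mean_shift(X[60])  # climbs toward the second cluster's mode, near (4, 4)
```

The paper's actual contribution is choosing the LSH parameters (number of projections and partitions) by a pilot learning procedure; the fixed `K = 4` here is only a placeholder for that tuning step.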
••
01 Apr 1997
TL;DR: It is constructively proved that the NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks and thus Turing machines, raising the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent and what restrictions on feedback limit computational power.
Abstract: Recently, fully connected recurrent neural networks have been proven to be computationally rich: at least as powerful as Turing machines. This work focuses on another network which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models), and are therefore called NARX networks. As opposed to other recurrent networks, NARX networks have a limited feedback which comes only from the output neuron rather than from hidden states. They are formalized by y(t) = Ψ(u(t−n_u), ..., u(t−1), u(t), y(t−n_y), ..., y(t−1)), where u(t) and y(t) represent the input and output of the network at time t, n_u and n_y are the input and output orders, and the function Ψ is the mapping performed by a Multilayer Perceptron. We constructively prove that NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks, and thus Turing machines. We conclude that in theory one can use NARX models rather than conventional recurrent networks without any computational loss, even though their feedback is limited. Furthermore, these results raise the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent, and what restrictions on feedback limit computational power.
462 citations
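The recurrence y(t) = Ψ(u(t−n_u), ..., u(t), y(t−n_y), ..., y(t−1)) from the abstract above can be sketched directly. This is a minimal illustration with random, untrained weights; the orders, hidden size, and tanh activation are assumptions, and Ψ is the one-hidden-layer perceptron the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_u, n_y, hidden = 2, 2, 8          # input order, output order, hidden units (illustrative)
W1 = rng.normal(size=(hidden, n_u + 1 + n_y))  # +1 column for the current input u(t)
b1 = np.zeros(hidden)
W2 = rng.normal(size=hidden)
b2 = 0.0

def psi(x):
    """Psi: a one-hidden-layer MLP with tanh activation (random, untrained weights)."""
    return float(W2 @ np.tanh(W1 @ x + b1) + b2)

def narx_run(u):
    """Run the NARX recurrence over an input sequence u.
    Feedback comes only from past outputs y, never from hidden states."""
    y = [0.0] * n_y                  # zero-initialized output history
    u = [0.0] * n_u + list(u)        # zero-padded input history
    for t in range(n_u, len(u)):
        window = np.array(u[t - n_u : t + 1] + y[-n_y:])
        y.append(psi(window))
    return y[n_y:]

out = narx_run([0.5, -0.2, 1.0, 0.3])
print(len(out))  # prints 4: one output per input sample
```

Note how the only state carried across time steps is the output history `y[-n_y:]`; this is the "limited feedback" that the paper proves costs nothing in computational power.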
Authors
Showing all 31937 results
| Name | H-index | Papers | Citations |
|---|---|---|---|
| Robert Langer | 281 | 2324 | 326306 |
| Nicholas G. Martin | 192 | 1770 | 161952 |
| Tobin J. Marks | 159 | 1621 | 111604 |
| Grant W. Montgomery | 157 | 926 | 108118 |
| David Eisenberg | 156 | 697 | 112460 |
| David J. Mooney | 156 | 695 | 94172 |
| Dirk Inzé | 149 | 647 | 74468 |
| Jerrold M. Olefsky | 143 | 595 | 77356 |
| Joseph J.Y. Sung | 142 | 1240 | 92035 |
| Deborah Estrin | 135 | 562 | 106177 |
| Bruce Yabsley | 133 | 1191 | 84889 |
| Jerry W. Shay | 133 | 639 | 74774 |
| Richard N. Bergman | 130 | 477 | 91718 |
| Shlomit Tarem | 129 | 1306 | 86919 |
| Allen Mincer | 129 | 1040 | 80059 |