Institution
Technion – Israel Institute of Technology
Education•Haifa, Israel•
About: Technion – Israel Institute of Technology is an educational institution based in Haifa, Israel. It is known for its research contributions in the topics of Population and Upper and lower bounds. The organization has 31714 authors who have published 79377 publications, receiving 2603976 citations. The organization is also known as: Technion Israel Institute of Technology & Ṭekhniyon, Makhon ṭekhnologi le-Yiśraʼel.
Topics: Population, Upper and lower bounds, Nonlinear system, Decoding methods, Large Hadron Collider
Papers published on a yearly basis
Papers
TL;DR: In this article, a real-world dataset of positive severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) test results after inoculation with the BNT162b2 messenger RNA vaccine was analyzed, and the authors found that the viral load was substantially reduced for infections occurring 12-37 days after the first dose of vaccine.
Abstract: Beyond their substantial protection of individual vaccinees, coronavirus disease 2019 (COVID-19) vaccines might reduce viral load in breakthrough infection and thereby further suppress onward transmission. In this analysis of a real-world dataset of positive severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) test results after inoculation with the BNT162b2 messenger RNA vaccine, we found that the viral load was substantially reduced for infections occurring 12–37 d after the first dose of vaccine. These reduced viral loads hint at a potentially lower infectiousness, further contributing to vaccine effect on virus spread. Breakthrough infections of SARS-CoV-2 occurring 12 or more days after the first dose of the BNT162b2 mRNA vaccine were associated with lower viral loads than those found in unvaccinated individuals, suggesting that the vaccine might reduce infectiousness.
373 citations
01 May 2017
TL;DR: In this paper, a "random walk on a random landscape" statistical model, known to exhibit similar "ultra-slow" diffusion behavior, is proposed to explain the "generalization gap" observed when training deep learning models with large batch sizes.
Abstract: Background: Deep learning models are typically trained using stochastic gradient descent or one of its variants. These methods update the weights using their gradient, estimated from a small fraction of the training data. It has been observed that when using large batch sizes there is a persistent degradation in generalization performance - known as the "generalization gap" phenomenon. Identifying the origin of this gap and closing it had remained an open problem. Contributions: We examine the initial high learning rate training phase. We find that the weight distance from its initialization grows logarithmically with the number of weight updates. We therefore propose a "random walk on a random landscape" statistical model which is known to exhibit similar "ultra-slow" diffusion behavior. Following this hypothesis we conducted experiments to show empirically that the "generalization gap" stems from the relatively small number of updates rather than the batch size, and can be completely eliminated by adapting the training regime used. We further investigate different techniques to train models in the large-batch regime and present a novel algorithm named "Ghost Batch Normalization" which enables significant decrease in the generalization gap without increasing the number of updates. To validate our findings we conduct several additional experiments on MNIST, CIFAR-10, CIFAR-100 and ImageNet. Finally, we reassess common practices and beliefs concerning training of deep models and suggest they may not be optimal to achieve good generalization.
373 citations
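The "Ghost Batch Normalization" idea named in the abstract — computing batch-norm statistics over small virtual sub-batches inside one large batch — can be sketched in a few lines. This is a minimal numpy illustration of the statistics trick only; the function name, shapes, and parameters are illustrative assumptions, not the paper's code, and it omits the learnable scale/shift of a full BN layer.

```python
import numpy as np

def ghost_batch_norm(x, ghost_size, eps=1e-5):
    """Normalize each 'ghost' sub-batch with its own mean and variance.

    x: (batch, features) activations; batch must be a multiple of ghost_size.
    Using per-ghost-batch statistics mimics small-batch BN noise while the
    optimizer still steps on the full large batch.
    """
    batch, features = x.shape
    assert batch % ghost_size == 0, "batch must be a multiple of ghost_size"
    # Split the large batch into independent ghost batches.
    chunks = x.reshape(batch // ghost_size, ghost_size, features)
    mean = chunks.mean(axis=1, keepdims=True)   # per-ghost-batch mean
    var = chunks.var(axis=1, keepdims=True)     # per-ghost-batch variance
    normed = (chunks - mean) / np.sqrt(var + eps)
    return normed.reshape(batch, features)

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=3.0, size=(256, 8))
y = ghost_batch_norm(x, ghost_size=32)
# Each 32-sample ghost batch now has ~zero mean and ~unit variance per feature.
```

The point of the design is that each group of 32 samples is normalized as if it were its own small batch, which the paper argues recovers the regularization benefit of small-batch training without reducing the number of samples per update.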
TL;DR: In this article, the authors apply the theoretical concepts of local isotropy to explain the behavior of liquid in liquid dispersions, subjected to turbulent agitation, and compare the influence of turbulence on both breakup and coalescence of individual droplets.
Abstract: The present paper is concerned with the conditions of flow in tanks containing stirred fluids. An attempt is made to apply the theoretical concepts of local isotropy to explain the behaviour of liquid-in-liquid dispersions subjected to turbulent agitation. Relations describing quantitatively the influence of turbulence on both break-up and coalescence of individual droplets are derived and compared with experimental evidence. A special type of dispersion is described in which droplet size is controlled by the prevention of coalescence due to turbulence. The dependence of droplet size on energy dissipation per unit mass, as predicted by the theory of local isotropy, is put to an experimental test using geometrically similar vessels of different sizes. Though the results are not entirely conclusive, experimental evidence suggests that the hypothesis of locally isotropic flow may be applicable to the flow conditions described in the paper, and that statistical theories of turbulence can be of practical value in estimating droplet sizes in agitated dispersions.
373 citations
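The local-isotropy prediction mentioned in the abstract — that droplet size depends on the energy dissipation per unit mass — is usually written as the Kolmogorov–Hinze scaling, d_max ∝ (σ/ρ)^(3/5) · ε^(-2/5). The snippet below illustrates that scaling numerically; the constant C and the property values are illustrative assumptions, not values taken from this paper.

```python
def d_max(sigma, rho_c, epsilon, C=0.725):
    """Maximum stable droplet diameter from local-isotropy theory:
        d_max = C * (sigma / rho_c)**(3/5) * epsilon**(-2/5)
    sigma:   interfacial tension [N/m]
    rho_c:   continuous-phase density [kg/m^3]
    epsilon: energy dissipation per unit mass [W/kg]
    C:       empirical constant (0.725 is Hinze's fitted value; assumed here)
    """
    return C * (sigma / rho_c) ** 0.6 * epsilon ** -0.4

# Doubling the power input per unit mass shrinks the largest stable droplets
# by a factor of 2**-0.4, i.e. roughly 24%.
d1 = d_max(sigma=0.03, rho_c=1000.0, epsilon=1.0)
d2 = d_max(sigma=0.03, rho_c=1000.0, epsilon=2.0)
ratio = d2 / d1
```

This makes concrete why geometrically similar vessels of different sizes are a natural test: if ε per unit mass is matched, the theory predicts the same droplet sizes regardless of vessel scale.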
TL;DR: An elastic-plastic model for contacting rough surfaces, based on an accurate Finite Element Analysis (FEA) of an elastic-plastic single asperity contact, is presented in this paper.
Abstract: An elastic-plastic model for contacting rough surfaces that is based on accurate Finite Element Analysis (FEA) of an elastic-plastic single asperity contact is presented. The plasticity index π is ...
373 citations
20 Jan 1997
TL;DR: A new optimized standard implementation of DES on 64-bit processors is described, about twice as fast as the fastest previously known standard DES implementation on the same processor.
Abstract: In this paper we describe a fast new DES implementation. This implementation is about five times faster than the fastest known DES implementation on a (64-bit) Alpha computer, and about three times faster than our new optimized DES implementation on 64-bit computers. It uses a non-standard representation and views the processor as a SIMD computer, i.e., as 64 parallel one-bit processors computing the same instruction. We also discuss the application of this implementation to other ciphers. We additionally describe a new optimized standard implementation of DES on 64-bit processors, which is about twice as fast as the fastest previously known standard DES implementation on the same processor. Our implementations can also be used for fast exhaustive key search in software, which can find a key in only a few days or a few weeks on existing parallel computers and computer networks.
372 citations
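The SIMD trick in this abstract — treating one 64-bit word as 64 one-bit processors — is the bitslice representation. The toy sketch below evaluates a single XOR gate for 64 independent inputs with one bitwise operation; it illustrates only the representation, not the actual DES gate network, and all function names are illustrative.

```python
import random

def pack_bits(bits):
    """Pack a list of 64 one-bit values into a single integer word,
    one input per bit position (the bitslice layout)."""
    word = 0
    for pos, b in enumerate(bits):
        if b:
            word |= 1 << pos
    return word

random.seed(0)
# 64 independent one-bit input pairs (a, b).
pairs = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(64)]

a_word = pack_bits([a for a, _ in pairs])
b_word = pack_bits([b for _, b in pairs])

# One bitwise instruction computes the gate for all 64 inputs at once.
xor_word = a_word ^ b_word

# Unpack to recover the 64 individual results.
results = [(xor_word >> i) & 1 for i in range(64)]
```

In the real implementation every gate of the DES circuit is expressed this way, so a full 64-bit AND/OR/XOR instruction stream encrypts 64 plaintexts in parallel; this is also why the approach suits exhaustive key search, where many independent keys can be tried per instruction.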
Authors
Showing all 31937 results
Name | H-index | Papers | Citations |
---|---|---|---|
Robert Langer | 281 | 2324 | 326306 |
Nicholas G. Martin | 192 | 1770 | 161952 |
Tobin J. Marks | 159 | 1621 | 111604 |
Grant W. Montgomery | 157 | 926 | 108118 |
David Eisenberg | 156 | 697 | 112460 |
David J. Mooney | 156 | 695 | 94172 |
Dirk Inzé | 149 | 647 | 74468 |
Jerrold M. Olefsky | 143 | 595 | 77356 |
Joseph J.Y. Sung | 142 | 1240 | 92035 |
Deborah Estrin | 135 | 562 | 106177 |
Bruce Yabsley | 133 | 1191 | 84889 |
Jerry W. Shay | 133 | 639 | 74774 |
Richard N. Bergman | 130 | 477 | 91718 |
Shlomit Tarem | 129 | 1306 | 86919 |
Allen Mincer | 129 | 1040 | 80059 |