Yoav Levine

Researcher at Hebrew University of Jerusalem

Publications - 19
Citations - 849

Yoav Levine is an academic researcher from the Hebrew University of Jerusalem. The author has contributed to research in topics including deep learning and network architecture. The author has an h-index of 10 and has co-authored 19 publications receiving 540 citations. Previous affiliations of Yoav Levine include the Weizmann Institute of Science.

Papers
Journal Article

Quantum Entanglement in Deep Learning Architectures.

TL;DR: The results show that contemporary deep learning architectures, in the form of deep convolutional and recurrent networks, can efficiently represent highly entangled quantum systems and can support volume-law entanglement scaling, polynomially more efficiently than the presently employed restricted Boltzmann machines (RBMs).
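
To make the "volume-law entanglement scaling" claim concrete, here is a minimal, self-contained Python sketch (our own illustration, not from the paper) that computes the bipartite von Neumann entanglement entropy of a pure state via a Schmidt (SVD) decomposition. A Haar-random state comes close to the maximal, volume-law value of (n/2)·ln 2 for a half-system cut, while a product state gives zero:

```python
import numpy as np

def entanglement_entropy(psi, n_qubits, n_left):
    """Bipartite von Neumann entanglement entropy (in nats) of a pure state.

    psi: normalized state vector of shape (2**n_qubits,).
    n_left: number of qubits in the left subsystem.
    """
    # Reshape the state into a (left, right) matrix and take its SVD;
    # the squared singular values are the Schmidt coefficients.
    mat = psi.reshape(2**n_left, 2**(n_qubits - n_left))
    schmidt = np.linalg.svd(mat, compute_uv=False) ** 2
    schmidt = schmidt[schmidt > 1e-12]  # drop numerical zeros
    return -np.sum(schmidt * np.log(schmidt))

rng = np.random.default_rng(0)
n = 10
# A Haar-random state is near-maximally (volume-law) entangled ...
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)
# ... while a product state has zero entanglement entropy.
prod = np.ones(2**n) / np.sqrt(2**n)
print(entanglement_entropy(psi, n, n // 2))   # close to (n/2) * ln 2
print(entanglement_entropy(prod, n, n // 2))  # ~ 0
```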
Journal Article

Deep Autoregressive Models for the Efficient Variational Simulation of Many-Body Quantum Systems

TL;DR: This work proposes a specialized neural-network architecture that supports efficient and exact sampling, completely circumventing the need for Markov-chain sampling, and demonstrates the ability to obtain accurate results on larger system sizes than those currently accessible to neural-network quantum states.
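
A minimal sketch of the autoregressive idea the TL;DR describes, assuming nothing about the paper's actual network: factor the distribution as p(s) = ∏_i p(s_i | s_<i), so that configurations are drawn in one ancestral pass and the exact, normalized log-probability comes for free, with no Markov chain. The toy logistic conditional with random weights below is a stand-in for the paper's deep model:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8                               # number of spins
W = rng.normal(size=(N, N)) * 0.3   # toy weights; a stand-in for a deep network
b = rng.normal(size=N) * 0.1

def cond_prob_up(s, i):
    """Toy conditional p(s_i = 1 | s_<i): logistic in the previous spins."""
    z = b[i] + W[i, :i] @ s[:i]
    return 1.0 / (1.0 + np.exp(-z))

def sample():
    """Draw one configuration in a single ancestral pass (no Markov chain)."""
    s = np.zeros(N)
    log_p = 0.0
    for i in range(N):
        p_up = cond_prob_up(s, i)
        s[i] = 1.0 if rng.random() < p_up else 0.0
        log_p += np.log(p_up if s[i] == 1.0 else 1.0 - p_up)
    return s, log_p   # exact, normalized log-probability for free

s, log_p = sample()
print(s, log_p)
```

Because every conditional is normalized by construction, the sampled configurations are exact and independent, which is what lets such models sidestep the autocorrelation and equilibration issues of Markov-chain sampling.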
Proceedings Article

SenseBERT: Driving Some Sense into BERT

TL;DR: The authors propose a method to employ weak supervision directly at the word sense level, achieving state-of-the-art results on the SemEval Word Sense Disambiguation task.
Posted Content

Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design

TL;DR: This work establishes a fundamental connection between the fields of quantum physics and deep learning, and shows an equivalence between the function realized by a deep convolutional arithmetic circuit (ConvAC) and a quantum many-body wave function, an equivalence that rests on their common underlying tensorial structure.
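
A hedged numpy illustration of the tensorial correspondence the TL;DR mentions (our own toy construction and naming, not the paper's notation): a shallow ConvAC with product pooling computes the same number as contracting a rank-R CP coefficient tensor against the rank-1 tensor of the input representations, which is exactly the form of an unnormalized many-body wave-function amplitude:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, R = 4, 3, 5   # inputs, representation size, CP rank (illustrative)

# Representation layer: each raw input is mapped to an M-dim feature vector.
F = rng.normal(size=(N, M))    # f(x_1), ..., f(x_N) stacked as rows
Wt = rng.normal(size=(R, M))   # 1x1 conv weights
a = rng.normal(size=R)         # output weights

# Shallow ConvAC: h = sum_r a_r * prod_i <w_r, f(x_i)>  (product pooling).
h_network = np.sum(a * np.prod(F @ Wt.T, axis=0))

# The same number as a tensor contraction: the rank-R CP coefficient tensor
# A[d1..dN] = sum_r a_r * w_r[d1] * ... * w_r[dN], contracted against the
# rank-1 tensor f(x_1) (x) ... (x) f(x_N) -- the wave-function-style form.
A = np.zeros((M,) * N)
for r in range(R):
    T = a[r]
    for i in range(N):
        T = np.multiply.outer(T, Wt[r])
    A += T
rank1 = F[0]
for i in range(1, N):
    rank1 = np.multiply.outer(rank1, F[i])
h_tensor = np.sum(A * rank1)

print(np.allclose(h_network, h_tensor))  # True: same function, two views
```

In this view, the network's depth and pooling geometry determine how the coefficient tensor factorizes, which is what lets entanglement measures from many-body physics quantify the expressiveness of the architecture.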
Posted Content

SenseBERT: Driving Some Sense into BERT

TL;DR: This paper proposes a method to employ weak supervision directly at the word sense level: the model is pre-trained to predict not only the masked words but also their WordNet supersenses, yielding a lexical-semantic level language model without the use of human annotation.
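
As a rough sketch of the dual objective described above, with made-up dimensions and a hypothetical set of WordNet-allowed supersenses: a masked-word head and a supersense head share the same contextual vector, and the weak supervision enters as a soft target spread over the allowed supersenses (a simplification of the paper's allowed-senses loss):

```python
import numpy as np

rng = np.random.default_rng(3)
H, V, S = 16, 100, 45   # hidden size, toy vocab, supersense count (45 in WordNet)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Contextual vector for one masked position, from a (stand-in) encoder.
h = rng.normal(size=H)
W_word = rng.normal(size=(V, H)) * 0.1   # masked-word prediction head
W_sense = rng.normal(size=(S, H)) * 0.1  # supersense prediction head

# Standard masked-LM loss on the true word id.
true_word = 17
p_word = softmax(W_word @ h)
loss_word = -np.log(p_word[true_word])

# Weak supervision: the sense is unknown, only the set of supersenses
# WordNet allows for the word; spread the target uniformly over that set.
allowed = np.array([3, 8, 20])           # hypothetical allowed supersenses
soft_target = np.zeros(S)
soft_target[allowed] = 1.0 / len(allowed)
p_sense = softmax(W_sense @ h)
loss_sense = -np.sum(soft_target * np.log(p_sense))

loss = loss_word + loss_sense            # joint pre-training objective
print(loss_word, loss_sense, loss)
```

The point of the second term is that no human sense annotation is needed: the allowed-supersense sets come directly from WordNet, so the signal is weak but free.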