Author

Amir Hesam Salavati

Bio: Amir Hesam Salavati is an academic researcher from École Polytechnique Fédérale de Lausanne. The author has contributed to research on topics including content-addressable memory and artificial neural networks. The author has an h-index of 9 and has co-authored 27 publications receiving 226 citations. Previous affiliations of Amir Hesam Salavati include École Normale Supérieure and Sharif University of Technology.

Papers
Journal ArticleDOI
TL;DR: In this paper, effective masses for a large variety of perovskites of the form ABX3 differing in chemical composition (A= Na, Li, Cs; B = Pb, Sn; X= Cl, Br, I) and crystal structure were calculated.
Abstract: Effective masses are calculated for a large variety of perovskites of the form ABX3 differing in chemical composition (A = Na, Li, Cs; B = Pb, Sn; X = Cl, Br, I) and crystal structure. In addition, the effects of some defects and dopants are assessed. We show that the effective masses are highly correlated with the energies of the valence-band maximum, conduction-band minimum, and band gap. Using the k·p theory for the bottom of the conduction band and a tight-binding model for the top of the valence band, this trend can be rationalized in terms of the orbital overlap between halide and metal (B cation). Most of the compounds studied in this work are good charge-carrier transporters, where the effective masses of the Pb compounds (0 < mh* < me* < 1) are systematically larger than those of the Sn-based compounds (0 < mh* ≈ me* < 0.5). The effective masses show anisotropies depending on the crystal symmetry of the perovskite, whether orthorhombic, tetragonal, or cubic, with the highest anisotropy for the tetragonal structure.
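
The k·p picture above ties the effective mass to the curvature of the band edge, m* = ħ² (d²E/dk²)⁻¹. As a rough numerical illustration (the band parameters below are made up for the sketch, not taken from the paper), one can extract m* from a sampled parabolic band:

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # free-electron mass, kg

# Hypothetical parabolic band with m* = 0.3 m_e (illustrative value,
# not from the paper), sampled on a k-grid around the band edge.
m_true = 0.3
k = np.linspace(-1e9, 1e9, 201)             # wavevector, 1/m (k[100] = 0)
E = (HBAR * k) ** 2 / (2 * m_true * M_E)    # band energy, J

# m* = hbar^2 / (d^2 E / d k^2); curvature via a central second difference
dk = k[1] - k[0]
curvature = (E[101] - 2 * E[100] + E[99]) / dk ** 2
m_star = HBAR ** 2 / curvature / M_E

print(f"extracted effective mass: {m_star:.3f} m_e")
```

For a real band structure the same finite-difference estimate would be applied to computed eigenvalues near the band extremum, direction by direction, which is also how the anisotropies mentioned above show up.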

36 citations

Proceedings Article
16 Jun 2013
TL;DR: This work devises an iterative algorithm that learns the redundancy among the patterns of a neural associative memory network, and shows that by considering the inherent redundancy in the memorized patterns, one can obtain all the mentioned properties at once.
Abstract: The task of a neural associative memory is to retrieve a set of previously memorized patterns from their noisy versions by using a network of neurons. Hence, an ideal network should be able to 1) gradually learn a set of patterns, 2) retrieve the correct pattern from noisy queries and 3) maximize the number of memorized patterns while maintaining the reliability in responding to queries. We show that by considering the inherent redundancy in the memorized patterns, one can obtain all the mentioned properties at once. This is in sharp contrast with previous work that could only improve one or two aspects at the expense of the others. More specifically, we devise an iterative algorithm that learns the redundancy among the patterns. The resulting network has a retrieval capacity that is exponential in the size of the network. Lastly, by considering the local structures of the network, the asymptotic error correction performance can be made linear in the size of the network.
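
A minimal sketch of the redundancy idea (a toy construction of my own, not the paper's algorithm): if every memorized pattern satisfies a set of linear constraints, the constraint violations (a "syndrome") can both detect and undo a corrupted coordinate. The subspace basis G and the single-error noise model below are illustrative assumptions.

```python
import numpy as np

# Memorized patterns lie in a 3-dim subspace of R^6 spanned by G
# (an illustrative choice: x4 = x1+x2, x5 = x2+x3, x6 = x1+x3).
G = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)

# Rows of W span the orthogonal complement, so W @ (G @ a) = 0 for any a.
W = np.linalg.svd(G.T)[2][3:]

x = G @ np.array([2.0, -1.0, 3.0])   # one memorized pattern
noisy = x.copy()
noisy[4] += 1.5                      # corrupt a single coordinate

# Recall: the syndrome W @ noisy is zero iff the pattern is clean.
# Try correcting each coordinate; keep the one that zeroes the syndrome.
recalled = noisy
for i in range(6):
    e = (W[:, i] @ (W @ noisy)) / (W[:, i] @ W[:, i])
    trial = noisy.copy()
    trial[i] -= e
    if np.allclose(W @ trial, 0):
        recalled = trial
        break

print(np.allclose(recalled, x))
```

The paper's contribution is, in effect, learning such annihilating constraints from the patterns themselves and running the correction with local, neurally plausible updates; the exhaustive single-error search here is only the simplest stand-in.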

28 citations

Proceedings ArticleDOI
01 Jul 2012
TL;DR: This work shows that by forcing natural constraints on the set of learning patterns, the retrieval capacity of the neural network can be drastically improved.
Abstract: The problem of neural network association is to retrieve a previously memorized pattern from its noisy version using a network of neurons. An ideal neural network should include three components simultaneously: a learning algorithm, a large pattern retrieval capacity and resilience against noise. Prior works in this area usually improve one or two aspects at the cost of the third. Our work takes a step forward in closing this gap. More specifically, we show that by forcing natural constraints on the set of learning patterns, we can drastically improve the retrieval capacity of our neural network. Moreover, we devise a learning algorithm whose role is to learn those patterns satisfying the above mentioned constraints. Finally we show that our neural network can cope with a fair amount of noise.

24 citations

Journal ArticleDOI
TL;DR: Analytical methods and simulations show that the proposed methods can tolerate a fair amount of errors in the input while being able to memorize an exponentially large number of patterns.
Abstract: We consider the problem of neural association for a network of nonbinary neurons. Here, the task is to first memorize a set of patterns using a network of neurons whose states assume values from a finite number of integer levels. Later, the same network should be able to recall the previously memorized patterns from their noisy versions. Prior work in this area considers storing a finite number of purely random patterns and has shown that the pattern retrieval capacity (the maximum number of patterns that can be memorized) scales only linearly with the number of neurons in the network. In our formulation of the problem, we concentrate on exploiting redundancy and internal structure of the patterns to improve the pattern retrieval capacity. Our first result shows that if the given patterns have a suitable linear-algebraic structure, i.e., comprise a subspace of the set of all possible patterns, then the pattern retrieval capacity is exponential in terms of the number of neurons. The second result extends the previous finding to cases where the patterns have weak minor components, i.e., the smallest eigenvalues of the correlation matrix tend toward zero. We will use these minor components (or the basis vectors of the pattern null space) to increase both the pattern retrieval capacity and error correction capabilities. An iterative algorithm is proposed for the learning phase, and two simple algorithms are presented for the recall phase. Using analytical methods and simulations, we show that the proposed methods can tolerate a fair amount of errors in the input while being able to memorize an exponentially large number of patterns.
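
To see concretely why the linear-versus-exponential distinction matters, compare a classical ~n/log n capacity with the number of patterns in a k-dimensional subspace over Q integer levels (the numbers below are purely illustrative, not the paper's constants):

```python
import math

n = 200          # neurons
Q = 8            # integer state levels per neuron
k = n // 2       # dimension of the pattern subspace (assumed)

# Classical random-pattern capacity scales roughly like n / log n.
linear_capacity = n / (2 * math.log(n))

# A k-dim subspace of patterns over Q levels contains Q^k patterns.
subspace_patterns = Q ** k

print(f"linear-capacity estimate: about {linear_capacity:.0f} patterns")
print(f"subspace patterns: Q^k = {Q}^{k} ≈ 10^{int(k * math.log10(Q))}")
```

Of course the Q^k patterns are no longer arbitrary: the exponential count is bought by restricting storage to structured (subspace) patterns, which is exactly the trade-off the abstract describes.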

19 citations

Proceedings ArticleDOI
01 Dec 2011
TL;DR: Two simple neural update algorithms are presented, and it is shown that the proposed mechanisms result in a pattern retrieval capacity that is exponential in terms of the network size.
Abstract: We consider the problem of neural association for a network of non-binary neurons. Here, the task is to recall a previously memorized pattern from its noisy version using a network of neurons whose states assume values from a finite number of non-negative integer levels. Prior works in this area consider storing a finite number of purely random patterns, and have shown that the pattern retrieval capacities (maximum number of patterns that can be memorized) scale only linearly with the number of neurons in the network. In our formulation of the problem, we consider storing patterns from a suitably chosen set of patterns that are obtained by enforcing a set of simple constraints on the coordinates (such as those enforced in graph-based codes). Such patterns may be generated from purely random information symbols by simple neural operations. Two simple neural update algorithms are presented, and it is shown that our proposed mechanisms result in a pattern retrieval capacity that is exponential in terms of the network size. Furthermore, using analytical results and simulations, we show that the suggested methods can tolerate a fair amount of errors in the input.

19 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, the authors describe the main ideas, recent developments and progress in a broad spectrum of research investigating ML and AI in the quantum domain, and discuss the fundamental issue of quantum generalizations of learning and AI concepts.
Abstract: Quantum information technologies, on the one hand, and intelligent learning systems, on the other, are both emergent technologies that are likely to have a transformative impact on our society in the future. The respective underlying fields of basic research (quantum information on the one hand, machine learning (ML) and artificial intelligence (AI) on the other) have their own specific questions and challenges, which have hitherto been investigated largely independently. However, in a growing body of recent work, researchers have been probing the question of the extent to which these fields can indeed learn and benefit from each other. Quantum ML explores the interaction between quantum computing and ML, investigating how results and techniques from one field can be used to solve the problems of the other. Recently we have witnessed significant breakthroughs in both directions of influence. For instance, quantum computing is finding a vital application in providing speed-ups for ML problems, critical in our 'big data' world. Conversely, ML already permeates many cutting-edge technologies and may become instrumental in advanced quantum technologies. Aside from quantum speed-up in data analysis, or classical ML optimization used in quantum experiments, quantum enhancements have also been (theoretically) demonstrated for interactive learning tasks, highlighting the potential of quantum-enhanced learning agents. Finally, works exploring the use of AI for the very design of quantum experiments, and for performing parts of genuine research autonomously, have reported their first successes. Beyond the topics of mutual enhancement, exploring what ML/AI can do for quantum physics and vice versa, researchers have also broached the fundamental issue of quantum generalizations of learning and AI concepts. This deals with questions of the very meaning of learning and intelligence in a world that is fully described by quantum mechanics.
In this review, we describe the main ideas, recent developments and progress in a broad spectrum of research investigating ML and AI in the quantum domain.

684 citations

Book
07 Jun 2018
TL;DR: The recent developments that establish the fundamental limits for community detection in the stochastic block model are surveyed, both with respect to information-theoretic and computational thresholds, and for various recovery requirements such as exact, partial and weak recovery.
Abstract: The stochastic block model (SBM) is a random graph model with planted clusters. It is widely employed as a canonical model to study clustering and community detection, and provides generally a fertile ground to study the statistical and computational tradeoffs that arise in network and data sciences. This note surveys the recent developments that establish the fundamental limits for community detection in the SBM, both with respect to information-theoretic and computational thresholds, and for various recovery requirements such as exact, partial and weak recovery (a.k.a., detection). The main results discussed are the phase transitions for exact recovery at the Chernoff-Hellinger threshold, the phase transition for weak recovery at the Kesten-Stigum threshold, the optimal distortion-SNR tradeoff for partial recovery, the learning of the SBM parameters and the gap between information-theoretic and computational thresholds. The note also covers some of the algorithms developed in the quest of achieving the limits, in particular two-round algorithms via graph-splitting, semi-definite programming, linearized belief propagation, classical and nonbacktracking spectral methods. A few open problems are also discussed.

627 citations

Journal ArticleDOI
TL;DR: An efficient algorithm based on a semidefinite programming relaxation of ML is proposed, which is proved to succeed in recovering the communities close to the threshold, while numerical experiments suggest that it may achieve the threshold.
Abstract: The stochastic block model with two communities, or equivalently the planted bisection model, is a popular model of random graph exhibiting a cluster behavior. In the symmetric case, the graph has two equally sized clusters and vertices connect with probability $p$ within clusters and $q$ across clusters. In the past two decades, a large body of literature in statistics and computer science has focused on providing lower bounds on the scaling of $|p-q|$ to ensure exact recovery. In this paper, we identify a sharp threshold phenomenon for exact recovery: if $\alpha =pn/\log (n)$ and $\beta =qn/\log (n)$ are constant (with $\alpha >\beta $ ), recovering the communities with high probability is possible if $({\alpha +\beta }/{2}) - \sqrt {\alpha \beta }>1$ and is impossible if $({\alpha +\beta }/{2}) - \sqrt {\alpha \beta }<1$. In particular, this improves the existing bounds. This also sets a new line of sight for efficient clustering algorithms. While maximum likelihood (ML) achieves the optimal threshold (by definition), it is in the worst case NP-hard. This paper proposes an efficient algorithm based on a semidefinite programming relaxation of ML, which is proved to succeed in recovering the communities close to the threshold, while numerical experiments suggest that it may achieve the threshold. An efficient algorithm that succeeds all the way down to the threshold is also obtained using a partial recovery algorithm combined with a local improvement procedure.
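
The threshold condition from the abstract is easy to evaluate directly; the example parameter values below are arbitrary, chosen only to land on each side of the boundary:

```python
import math

def exact_recovery_possible(alpha: float, beta: float) -> bool:
    """Exact-recovery condition for the symmetric two-community SBM
    with alpha = p*n/log(n) and beta = q*n/log(n), alpha > beta:
    recovery is possible iff (alpha+beta)/2 - sqrt(alpha*beta) > 1."""
    return (alpha + beta) / 2 - math.sqrt(alpha * beta) > 1

print(exact_recovery_possible(5.0, 1.0))   # (5+1)/2 - sqrt(5) ≈ 0.76 -> False
print(exact_recovery_possible(9.0, 1.0))   # (9+1)/2 - sqrt(9) = 2.0  -> True
```

Note the quantity $(\alpha+\beta)/2 - \sqrt{\alpha\beta}$ is the squared difference $(\sqrt{\alpha}-\sqrt{\beta})^2/2$, so the condition measures how separated the within- and across-cluster connection rates are on a square-root scale.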

474 citations

Journal Article
TL;DR: In this article, the authors analyzed the electronic structure and optical properties of perovskite solar cells based on CH3NH3PbI3 with the quasiparticle self-consistent GW approximation.
Abstract: The performance of organometallic perovskite solar cells has rapidly surpassed those of both traditional dye-sensitized and organic photovoltaics, e.g. solar cells based on CH3NH3PbI3 have recently reached 18% conversion efficiency. We analyze its electronic structure and optical properties within the quasiparticle self-consistent GW approximation (QSGW). Quasiparticle self-consistency is essential for an accurate description of the band structure: bandgaps are much larger than what is predicted by the local density approximation (LDA) or by GW based on the LDA. Several characteristics combine to make the electronic structure of this material unusual. First, there is a strong driving force for ferroelectricity, as a consequence of the polar organic moiety CH3NH3. The moiety is only weakly coupled to the PbI3 cage; thus it can rotate, giving rise to ferroelectric domains. This in turn will result in internal junctions that may aid separation of photoexcited electron and hole pairs, and may contribute to the current-voltage hysteresis found in perovskite solar cells. Second, spin-orbit coupling modifies both the valence-band and conduction-band dispersions in a very unusual manner: both get split at the R point into two nearby extrema. This can be interpreted in terms of a large Dresselhaus term, which vanishes at R but varies linearly in k for small excursions about R. Conduction bands (Pb 6p character) and valence bands (I 5p) are affected differently; moreover, the splittings vary with the orientation of the moiety. We will show how the splittings, and their dependence on the orientation of the moiety through the ferroelectric effect, have important consequences for both electronic transport and the optical properties of this material.
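
The splitting described above, vanishing at R and linear in k nearby, can be sketched with the generic textbook form of a Dresselhaus/Rashba-type dispersion (this is the standard model form, not the paper's fitted band structure):

$$E_\pm(\mathbf{k}) = E_R + \frac{\hbar^2 k^2}{2m^*} \pm \alpha_D\,|\mathbf{k}|,$$

where $\mathbf{k}$ is measured from R and $\alpha_D$ is the Dresselhaus coefficient. Minimizing the lower branch shows why "two extrema nearby" appear: the band edge shifts off R to $|\mathbf{k}_0| = m^*\alpha_D/\hbar^2$, by an energy $\alpha_D^2 m^*/(2\hbar^2)$ below $E_R$, with both $m^*$ and $\alpha_D$ differing between the Pb 6p conduction bands and I 5p valence bands.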

418 citations