Open Access · Journal Article · DOI

Unsupervised machine learning account of magnetic transitions in the Hubbard model

Kelvin Ch'ng, +2 more
- 16 Jan 2018
- Vol. 97, Iss. 1, p. 013306
TLDR
In this paper, the authors employ several unsupervised machine learning techniques, including autoencoders, random trees embedding, and t-distributed stochastic neighbor embedding (t-SNE), to reduce the dimensionality of raw spin configurations generated through Monte Carlo simulations of small clusters for the Ising and Fermi-Hubbard models at finite temperatures.
Abstract
We employ several unsupervised machine learning techniques, including autoencoders, random trees embedding, and t-distributed stochastic neighbor embedding (t-SNE), to reduce the dimensionality of, and thereby classify, raw (auxiliary) spin configurations generated through Monte Carlo simulations of small clusters for the Ising and Fermi-Hubbard models at finite temperatures. Results from a convolutional autoencoder for the three-dimensional Ising model reproduce the magnetization and the susceptibility as functions of temperature with a high degree of accuracy. Quantum fluctuations distort this picture and prevent us from making such connections between the output of the autoencoder and physical observables for the Hubbard model. However, we are able to define an indicator based on the output of the t-SNE algorithm that shows near-perfect agreement with the antiferromagnetic structure factor of the model in two and three spatial dimensions in the weak-coupling regime. t-SNE also predicts a transition to the canted antiferromagnetic phase for the three-dimensional model when a strong magnetic field is present. We show that these techniques cannot be expected to work away from half filling, where the "sign problem" in quantum Monte Carlo simulations is present.
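The raw inputs the abstract refers to come from standard Markov-chain Monte Carlo sampling. A minimal sketch, assuming the two-dimensional Ising model with coupling J = 1 and single-spin-flip Metropolis updates (the paper itself treats the three-dimensional Ising and Hubbard models, whose simulations are substantially heavier):

```python
import math
import random

def metropolis_ising_2d(L, T, sweeps, seed=0):
    """Metropolis single-spin-flip sampling of the 2D Ising model (J = 1).

    Starts from the fully ordered configuration and returns the average
    |magnetization| per site over the second half of the sweeps.
    """
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]  # ordered start
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
                  spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * spins[i][j] * nn  # energy cost of flipping (i, j)
            if dE <= 0.0 or rng.random() < math.exp(-dE / T):
                spins[i][j] = -spins[i][j]
        if sweep >= sweeps // 2:  # discard the first half as equilibration
            m = sum(map(sum, spins)) / (L * L)
            mags.append(abs(m))
    return sum(mags) / len(mags)

# Below the 2D critical temperature (~2.27) the lattice stays ordered;
# well above it, the ordered start melts within a few sweeps.
m_cold = metropolis_ising_2d(L=8, T=1.5, sweeps=400)
m_hot = metropolis_ising_2d(L=8, T=4.0, sweeps=400)
```

In the workflow the abstract describes, the sampled spin configurations themselves (not just scalar observables like the magnetization) are the high-dimensional data fed to the dimensionality-reduction algorithms.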


Citations
Journal ArticleDOI

Machine learning & artificial intelligence in the quantum domain: a review of recent progress.

TL;DR: In this article, the authors describe the main ideas, recent developments and progress in a broad spectrum of research investigating ML and AI in the quantum domain, and discuss the fundamental issue of quantum generalizations of learning and AI concepts.
Journal ArticleDOI

A high-bias, low-variance introduction to Machine Learning for physicists

TL;DR: The review begins by covering fundamental concepts in ML and modern statistics such as the bias-variance tradeoff, overfitting, regularization, generalization, and gradient descent before moving on to more advanced topics in both supervised and unsupervised learning.
Journal ArticleDOI

Machine learning for quantum matter

TL;DR: Quantum matter, the research field studying phases of matter whose properties are intrinsically quantum mechanical, draws from areas as diverse as hard condensed matter physics and materials science, among others.
References
Journal Article

Visualizing Data using t-SNE

TL;DR: A new technique called t-SNE that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map, a variation of Stochastic Neighbor Embedding that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
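As a hypothetical illustration of this dimensionality-reduction step, scikit-learn's `TSNE` can embed synthetic "spin configurations" into a two-dimensional map; the mock data below is an assumption for the sketch, not the authors' dataset:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Mock snapshots: 20 mostly-ordered and 20 fully random 64-spin samples,
# standing in for low- and high-temperature Monte Carlo configurations.
ordered = np.where(rng.random((20, 64)) < 0.05, -1.0, 1.0)
disordered = rng.choice([-1.0, 1.0], size=(20, 64))
X = np.vstack([ordered, disordered])

# Each 64-dimensional configuration is mapped to a point in the plane.
emb = TSNE(n_components=2, perplexity=10.0, init="pca",
           random_state=0).fit_transform(X)
print(emb.shape)  # (40, 2)
```

Clustering in the resulting 2D map is what allows configurations from different phases to be told apart without labels.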
Journal ArticleDOI

Reducing the Dimensionality of Data with Neural Networks

TL;DR: Describes an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal component analysis as a tool for reducing the dimensionality of data.
Journal ArticleDOI

Monte Carlo Sampling Methods Using Markov Chains and Their Applications

TL;DR: A generalization of the sampling method introduced by Metropolis et al. is presented, along with an exposition of the relevant theory, techniques of application, and the methods and difficulties of assessing the error in Monte Carlo estimates.
Journal ArticleDOI

Extremely randomized trees

TL;DR: A new tree-based ensemble method for supervised classification and regression problems that strongly randomizes both attribute and cut-point choice when splitting a tree node and, in the extreme, builds totally randomized trees whose structures are independent of the output values of the learning sample.
Journal ArticleDOI

Electron correlations in narrow energy bands

TL;DR: Applies the Hartree-Fock approximation to the correlation problem for the d and f bands within a simple, approximate model for the interaction of electrons in narrow energy bands.