Open access - Journal Article - DOI: 10.1021/ACS.JPCLETT.1C00079

Convolutional Neural Networks for Long Time Dissipative Quantum Dynamics.

05 Mar 2021 - Journal of Physical Chemistry Letters (American Chemical Society), Vol. 12, Iss. 9, pp. 2476-2483
Abstract: Exact numerical simulations of the dynamics of open quantum systems often require immense computational resources. We demonstrate that a deep artificial neural network composed of convolutional layers is a powerful tool for predicting long-time dynamics of open quantum systems, provided the preceding short-time evolution of the system is known. The neural network model developed in this work simulates long-time dynamics efficiently and accurately across different dynamical regimes, from weakly damped coherent motion to incoherent relaxation. The model was trained on a data set relevant to photosynthetic excitation energy transfer and can be deployed to study long-lasting quantum coherence phenomena observed in light-harvesting complexes. Furthermore, our model performs well for initial conditions different from those used in training. Our approach reduces the computational resources required for long-time simulations and holds promise for becoming a valuable tool in the study of open quantum systems.
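
To make the input-output setup concrete, here is a minimal sketch of such a model in PyTorch. The window length, channel counts, and kernel sizes are placeholder assumptions for illustration, not the architecture reported in the paper; the idea is only that a 1D CNN consumes a short-time window of reduced-density-matrix elements, predicts the next time step, and is rolled out autoregressively to reach long times.

```python
import torch
import torch.nn as nn

N_FEATURES = 4   # e.g. real/imag parts of a 2x2 reduced density matrix
WINDOW = 200     # assumed number of known short-time steps

class DynamicsCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_FEATURES, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, N_FEATURES),     # the predicted next time step
        )

    def forward(self, x):                  # x: (batch, N_FEATURES, WINDOW)
        return self.net(x)

@torch.no_grad()
def roll_out(model, short_history, n_future):
    """Extend a known short-time trajectory by repeatedly feeding the model
    its own predictions (short_history: (1, N_FEATURES, WINDOW))."""
    traj = short_history.clone()
    for _ in range(n_future):
        next_step = model(traj[:, :, -WINDOW:])           # (1, N_FEATURES)
        traj = torch.cat([traj, next_step[:, :, None]], dim=-1)
    return traj
```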


Topics: Quantum dynamics (62%), Convolutional neural network (52%), Dissipative system (52%)
Citations

7 results found


Journal Article - DOI: 10.1038/S41570-021-00278-1
Pavlo O. Dral, Mario Barbatti (2 institutions)
20 May 2021
Abstract: Theoretical simulations of electronic excitations and associated processes in molecules are indispensable for fundamental research and technological innovations. However, such simulations are notoriously challenging to perform with quantum mechanical methods. Advances in machine learning open many new avenues for assisting molecular excited-state simulations. In this Review, we track such progress, assess the current state of the art and highlight the critical issues to solve in the future. We survey a broad range of machine learning applications in excited-state research, which include the prediction of molecular properties, improvements to quantum mechanical methods for calculating excited-state properties and the search for new materials. Machine learning approaches can help us understand hidden factors that influence photo-processes, leading to better control of such processes and new rules for the design of materials for optoelectronic applications. Machine learning is starting to reshape our approaches to excited-state simulations by accelerating and improving, or even completely bypassing, traditional theoretical methods. It holds great promise for taking optoelectronic materials design to a new level.


12 Citations


Open access - Journal Article - DOI: 10.1021/ACS.JPCLETT.1C02672
Abstract: A recurrent neural network with long short-term memory cells (LSTM-NN) is employed to simulate the long-time dynamics of open quantum systems. The bootstrap method is applied in the LSTM-NN construction and prediction, which provides a Monte Carlo estimate of a forecasting confidence interval. Within this approach, a large number of LSTM-NNs are constructed by resampling time-series sequences obtained from the early-stage quantum evolution given by the numerically exact multilayer multiconfigurational time-dependent Hartree (ML-MCTDH) method. The resulting LSTM-NN ensemble is used for reliable propagation of the long-time quantum dynamics, and the simulated result is highly consistent with the exact evolution. The forecasting uncertainty, which partially reflects the reliability of the LSTM-NN prediction, is also given. This demonstrates that the bootstrap-based LSTM-NN approach is a practical and powerful tool for propagating the long-time quantum dynamics of open systems with high accuracy and low computational cost.
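
A hedged sketch of the bootstrap-ensemble idea follows, in PyTorch. The model size, ensemble size, and training loop are illustrative assumptions, not details from the paper; the point is that resampling the training sequences yields an ensemble whose spread of forecasts serves as a Monte Carlo confidence interval.

```python
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # next-step prediction

def train_one(seqs, epochs=200, lr=1e-3):
    """Fit one forecaster: the last step of each sequence is the target."""
    X, y = seqs[:, :-1, :], seqs[:, -1, :]
    model = LSTMForecaster(seqs.shape[-1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(model(X), y).backward()
        opt.step()
    return model

def bootstrap_ensemble(seqs, n_models=20):
    """Train one model per bootstrap resample of the training sequences."""
    return [train_one(seqs[torch.randint(len(seqs), (len(seqs),))])
            for _ in range(n_models)]

def forecast_with_ci(models, history, q=(0.025, 0.975)):
    """Ensemble mean and quantile band for the next time step."""
    with torch.no_grad():
        preds = torch.stack([m(history) for m in models])  # (n_models, 1, feat)
    return preds.mean(0), preds.quantile(torch.tensor(q), dim=0)
```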


Topics: Quantum dynamics (60%), Recurrent neural network (55%), Quantum evolution (51%)

1 Citation


Open access - Journal Article - DOI: 10.1088/1367-2630/AC3261
Arif Ullah, Pavlo O. Dral (1 institution)
Abstract: The forecasting ability of machine learning (ML) makes ML a promising tool for predicting the long-time quantum dissipative dynamics of open systems. In this Article, we employ a nonparametric machine learning algorithm (kernel ridge regression, as a representative of kernel methods) to study the quantum dissipative dynamics of the widely used spin-boson model. Our ML model takes short-time dynamics as input and is used for fast propagation of the long-time dynamics, greatly reducing the computational effort compared with traditional approaches. The presented results show that the ML model performs well in both the symmetric and asymmetric spin-boson models. Our approach is not limited to the spin-boson model and can be extended to complex systems.
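
The recursive propagation scheme can be sketched as follows with scikit-learn's KernelRidge. The window size and kernel hyperparameters are placeholders, not values from the Article: the model learns to map a window of past observable values (e.g. a population difference) to the next value and is then iterated on its own output.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

WINDOW = 50  # assumed number of past steps used as the input descriptor

def fit_propagator(short_time_traj):
    """short_time_traj: 1D array of an observable, e.g. <sigma_z>(t),
    from an exact short-time calculation."""
    X = np.stack([short_time_traj[i:i + WINDOW]
                  for i in range(len(short_time_traj) - WINDOW)])
    y = short_time_traj[WINDOW:]
    return KernelRidge(kernel="rbf", alpha=1e-8, gamma=1.0).fit(X, y)

def propagate(model, short_time_traj, n_future):
    """Append model predictions one step at a time to reach long times."""
    traj = list(short_time_traj)
    for _ in range(n_future):
        traj.append(model.predict(np.array(traj[-WINDOW:])[None, :])[0])
    return np.array(traj)
```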


Topics: Kernel method (61%), Complex system (51%)

Open access - Posted Content
Abstract: The time evolution or equations of motion of many systems are described by a set of first-order ordinary differential equations, and for a variety of physical observables the long-time limit or steady-state solution is desired. In the case of open quantum systems, the time evolution of the reduced density matrix is described by the Liouvillian. For inverse design or optimal control of such systems, the common approaches are based on brute-force search strategies. Here, we present a novel methodology, based on automatic differentiation, capable of differentiating the steady-state solution with respect to any parameter of the Liouvillian. Our approach has a low memory cost and is agnostic to the exact algorithm used to compute the steady state. We illustrate the advantage of this method by inverse designing the parameters of a quantum heat-transfer device that maximizes the heat current and the rectification coefficient. We also optimize the parameters of various Lindblad operators used in the simulation of energy transfer under natural incoherent light.
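
As a toy illustration of the underlying machinery, the JAX sketch below builds a Lindblad superoperator for a driven two-level system (a stand-in model of our choosing, not the heat-transfer device from the paper), solves for the steady state with the trace constraint folded into a linear system, and differentiates a steady-state observable with respect to a decay rate via jax.grad.

```python
import jax
import jax.numpy as jnp

sz = jnp.array([[1.0, 0.0], [0.0, -1.0]], dtype=jnp.complex64)
sx = jnp.array([[0.0, 1.0], [1.0, 0.0]], dtype=jnp.complex64)
sm = jnp.array([[0.0, 0.0], [1.0, 0.0]], dtype=jnp.complex64)  # lowering op
I2 = jnp.eye(2, dtype=jnp.complex64)

def liouvillian(gamma):
    H = 0.5 * sz + 0.3 * sx             # toy driven two-level Hamiltonian
    # Row-major vectorization: vec(A @ rho @ B) = kron(A, B.T) @ vec(rho)
    comm = -1j * (jnp.kron(H, I2) - jnp.kron(I2, H.T))
    LdL = sm.conj().T @ sm
    diss = gamma * (jnp.kron(sm, sm.conj())
                    - 0.5 * (jnp.kron(LdL, I2) + jnp.kron(I2, LdL.T)))
    return comm + diss

def excited_population(gamma):
    L = liouvillian(gamma)
    # Impose tr(rho) = 1 by replacing the last (redundant) row of L rho = 0.
    trace_row = I2.reshape(1, -1)       # picks out rho_00 + rho_11
    A = jnp.concatenate([L[:-1], trace_row], axis=0)
    b = jnp.zeros(4, dtype=jnp.complex64).at[-1].set(1.0)
    rho = jnp.linalg.solve(A, b).reshape(2, 2)
    return jnp.real(rho[0, 0])          # steady-state excited population

print(excited_population(0.1))
print(jax.grad(excited_population)(0.1))   # d(population)/d(gamma)
```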


Topics: Steady state (electronics) (57%), Ordinary differential equation (52%), Optimal control (51%)

Open access - Posted Content
Nikolai D. Klimkin (1 institution)
Abstract: The simulation of driven dissipative quantum dynamics is often prohibitively computation-intensive, especially when it must be repeated for various shapes of the driving field. We engineer a new feature space for representing the field and demonstrate that a deep neural network can be trained to emulate these dynamics by mapping this representation directly to the target observables. We demonstrate that with this approach the system response can be retrieved many orders of magnitude faster. We verify the validity of our approach on the example of a finite transverse-field Ising model coupled to a Markovian environment and irradiated with few-cycle magnetic pulses. We show that our approach is sufficiently generalizable and robust to reproduce responses to pulses outside the training set.


Topics: Quantum dynamics (56%), Dissipative system (53%)

References

82 results found


Journal Article - DOI: 10.1162/NECO.1997.9.8.1735
Sepp Hochreiter, Jürgen Schmidhuber (2 institutions)
01 Nov 1997 - Neural Computation
Abstract: Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
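
For orientation, a single LSTM step in its now-standard form looks as follows in numpy. Note that the 1997 paper predates the forget gate, so this is the common later variant rather than an exact reproduction; the key mechanism is the same: multiplicative gates control access to a cell state that is updated additively, which is what keeps error flow constant over long lags.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W, U, b stack the four gates row-wise:
    [input gate | forget gate | output gate | candidate]."""
    z = W @ x + U @ h + b                 # all gate pre-activations at once
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c_new = f * c + i * np.tanh(g)        # additive update: the error carousel
    h_new = o * np.tanh(c_new)            # gated exposure of the cell state
    return h_new, c_new

# Example: input size 3, hidden size 8, zero-initialized states.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 8
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
```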


49,735 Citations


Journal Article - DOI: 10.1109/5.726791
Yann LeCun, Léon Bottou, Yoshua Bengio, +3 more (5 institutions)
01 Jan 1998
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.


Topics: Neocognitron (64%), Intelligent character recognition (64%), Artificial neural network (60%)

34,930 Citations


Journal Article - DOI: 10.1038/NATURE14539
Yann LeCun, Yoshua Bengio, Geoffrey E. Hinton (5 institutions)
28 May 2015 - Nature
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.


33,931 Citations


Open access - Proceedings Article - DOI: 10.3115/V1/D14-1181
Yoon Kim (1 institution)
25 Aug 2014
Abstract: We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.


7,176 Citations


Open access - Journal Article
James Bergstra, Yoshua Bengio (1 institution)
Abstract: Grid search and manual search are the most widely used strategies for hyper-parameter optimization. This paper shows empirically and theoretically that randomly chosen trials are more efficient for hyper-parameter optimization than trials on a grid. Empirical evidence comes from a comparison with a large previous study that used grid search and manual search to configure neural networks and deep belief networks. Compared with neural networks configured by a pure grid search, we find that random search over the same domain is able to find models that are as good or better within a small fraction of the computation time. Granting random search the same computational budget, random search finds better models by effectively searching a larger, less promising configuration space. Compared with deep belief networks configured by a thoughtful combination of manual search and grid search, purely random search over the same 32-dimensional configuration space found statistically equal performance on four of seven data sets, and superior performance on one of seven. A Gaussian process analysis of the function from hyper-parameters to validation set performance reveals that for most data sets only a few of the hyper-parameters really matter, but that different hyper-parameters are important on different data sets. This phenomenon makes grid search a poor choice for configuring algorithms for new data sets. Our analysis casts some light on why recent "High Throughput" methods achieve surprising success--they appear to search through a large number of hyper-parameters because most hyper-parameters do not matter much. We anticipate that growing interest in large hierarchical models will place an increasing burden on techniques for hyper-parameter optimization; this work shows that random search is a natural baseline against which to judge progress in the development of adaptive (sequential) hyper-parameter optimization algorithms.
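
A toy comparison makes the contrast concrete. The objective function and hyperparameter ranges below are invented for illustration; the point is that random search spends the same trial budget on independent draws from the full space instead of walking a fixed grid.

```python
import numpy as np

rng = np.random.default_rng(0)

def validation_score(lr, reg):   # stand-in for training and evaluating a model
    return -(np.log10(lr) + 3) ** 2 - 0.1 * (np.log10(reg) + 2) ** 2

# Grid search: 4 x 4 = 16 trials on fixed points.
grid = [(lr, reg) for lr in 10.0 ** np.linspace(-5, -1, 4)
                  for reg in 10.0 ** np.linspace(-4, 0, 4)]

# Random search: the same 16-trial budget, each coordinate drawn log-uniformly.
rand = [(10.0 ** rng.uniform(-5, -1), 10.0 ** rng.uniform(-4, 0))
        for _ in range(16)]

for name, trials in [("grid", grid), ("random", rand)]:
    print(name, max(validation_score(lr, reg) for lr, reg in trials))
```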


Topics: Beam search (70%), Best-first search (66%), Random search (65%)

5,426 Citations


Performance Metrics
No. of citations received by the paper in previous years

Year    Citations
2021    7