Author

William G. Macready

Bio: William G. Macready is an academic researcher from D-Wave Systems. The author has contributed to research in topics including quantum computing and optimization problems. The author has an h-index of 34 and has co-authored 91 publications receiving 13,024 citations. Previous affiliations of William G. Macready include IBM and the Santa Fe Institute.


Papers
Journal Article
TL;DR: A quantum algorithm for computing generalized Ramsey numbers is presented by reformulating the computation as a combinatorial optimization problem solved using adiabatic quantum optimization, and the Ramsey numbers r(T_m, T_n) for trees are determined.
Abstract: Ramsey theory is an active research area in combinatorics whose central theme is the emergence of order in large disordered structures, with Ramsey numbers marking the threshold at which this order first appears. For generalized Ramsey numbers r(G, H), the emergent order is characterized by graphs G and H. In this paper we: (i) present a quantum algorithm for computing generalized Ramsey numbers by reformulating the computation as a combinatorial optimization problem which is solved using adiabatic quantum optimization; and (ii) determine the Ramsey numbers $r(\mathscr{T}_{m},\mathscr{T}_{n})$ for trees of order $m, n = 6, 7, 8$, most of which were previously unknown.
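The reformulation can be illustrated classically. Below is a minimal brute-force sketch, not the paper's adiabatic quantum algorithm, and using the classical clique Ramsey number r(3,3) rather than tree Ramsey numbers for brevity: the objective counts monochromatic triangles in a two-coloring of the complete graph, and the Ramsey number is the smallest N at which the minimum of this objective over all colorings becomes positive.

```python
# Minimal classical sketch (not the paper's adiabatic algorithm): a Ramsey-number
# computation recast as minimizing a counting objective over edge two-colorings.
# The objective counts monochromatic triangles; r(3,3) is the smallest N for which
# the minimum over all colorings of K_N is positive.
from itertools import combinations, product

def num_monochromatic_triangles(n, coloring):
    """coloring maps each edge (i, j), i < j, to 0 (red) or 1 (blue)."""
    count = 0
    for tri in combinations(range(n), 3):
        edge_colors = [coloring[e] for e in combinations(tri, 2)]
        if len(set(edge_colors)) == 1:      # all three edges share a color
            count += 1
    return count

def min_objective(n):
    """Brute-force minimum of the counting objective over all 2-colorings of K_n."""
    edges = list(combinations(range(n), 2))
    best = float("inf")
    for bits in product([0, 1], repeat=len(edges)):
        coloring = dict(zip(edges, bits))
        best = min(best, num_monochromatic_triangles(n, coloring))
        if best == 0:                       # a coloring with no monochromatic triangle exists
            break
    return best

# r(3, 3) = 6: some coloring of K_5 avoids monochromatic triangles, none of K_6 does.
print(min_objective(5))   # 0
print(min_objective(6))   # >= 1
```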

3 citations

Patent
07 Sep 2017
TL;DR: A hybrid computer comprising a quantum processor can be operated to perform a scalable comparison of high-entropy samplers, which can include comparing the entropy and KL divergence of post-processed samples.
Abstract: A hybrid computer comprising a quantum processor can be operated to perform a scalable comparison of high-entropy samplers. Performing a scalable comparison of high-entropy samplers can include comparing entropy and KL divergence of post-processed samplers. A hybrid computer comprising a quantum processor generates samples for machine learning. The quantum processor is trained by matching data statistics to statistics of the quantum processor. The quantum processor is tuned to match moments of the data.
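As an illustration of the comparison step, the following numpy sketch estimates the entropy of each sampler's empirical distribution and the KL divergence between them. The state space, smoothing, and sampler stand-ins are assumptions for illustration, not the patented procedure.

```python
# Illustrative sketch (not the patented procedure): comparing two samplers over a
# discrete state space via the entropy and KL divergence of their empirical distributions.
import numpy as np

def empirical_dist(samples, n_states, smoothing=1e-9):
    counts = np.bincount(samples, minlength=n_states).astype(float) + smoothing
    return counts / counts.sum()

def entropy(p):
    return -np.sum(p * np.log(p))

def kl_divergence(p, q):
    return np.sum(p * np.log(p / q))

rng = np.random.default_rng(0)
n_states = 16
# Stand-ins for post-processed samples from two samplers (e.g., hardware vs. reference).
sampler_a = rng.integers(0, n_states, size=10_000)
probs_b = np.linspace(1, 2, n_states)
sampler_b = rng.choice(n_states, size=10_000, p=probs_b / probs_b.sum())

p = empirical_dist(sampler_a, n_states)
q = empirical_dist(sampler_b, n_states)
print(f"entropy(A) = {entropy(p):.3f}, entropy(B) = {entropy(q):.3f}")
print(f"KL(A || B) = {kl_divergence(p, q):.4f}")
```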

3 citations

Patent
05 Jan 2017
TL;DR: Operation of a sampling device may be summarized as updating a set of samples to include a sample drawn from an available probability distribution and returning the set of samples, which can then be used to create a desirable probability distribution for use in computing values in computational techniques including Importance Sampling and Markov chain Monte Carlo systems.
Abstract: The systems, devices, articles, and methods generally relate to sampling from an available probability distribution. The samples may be used to create a desirable probability distribution, for instance for use in computing values used in computational techniques including: Importance Sampling and Markov chain Monte Carlo systems. An analog processor may operate as a sample generator, for example by: programming the analog processor with a configuration of the number of programmable parameters for the analog processor, which corresponds to a probability distribution over qubits of the analog processor, evolving the analog processor, and reading out states for the qubits. The states for the qubits in the plurality of qubits correspond to a sample from the probability distribution. Operation of the sampling device may be summarized as including updating a set of samples to include the sample from the probability distribution, and returning the set of samples.
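A minimal sketch of the importance-sampling use case mentioned above, assuming ordinary numpy/scipy Gaussians as stand-ins for processor samples: samples drawn from an available proposal distribution are reweighted to estimate an expectation under a desired target distribution.

```python
# Minimal importance-sampling sketch: samples from an available distribution q are
# reweighted to estimate an expectation under a desired distribution p. The Gaussians
# below are illustrative stand-ins for samples read out from an analog processor.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Available (proposal) distribution q and desired (target) distribution p.
q_mean, q_std = 0.0, 2.0
p_mean, p_std = 1.0, 1.0

samples = rng.normal(q_mean, q_std, size=100_000)            # samples from q
weights = norm.pdf(samples, p_mean, p_std) / norm.pdf(samples, q_mean, q_std)

# Self-normalized estimate of E_p[x^2]; exact value is p_std**2 + p_mean**2 = 2.
estimate = np.sum(weights * samples**2) / np.sum(weights)
print(f"importance-sampling estimate of E_p[x^2]: {estimate:.3f}  (exact: 2.0)")
```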

2 citations

Posted Content
TL;DR: An efficient method to train undirected posteriors is developed by showing that the gradient of the training objective with respect to the parameters of the undirected posterior can be computed by backpropagation through Markov chain Monte Carlo updates.
Abstract: The representation of the posterior is a critical aspect of effective variational autoencoders (VAEs). Poor choices for the posterior have a detrimental impact on the generative performance of VAEs due to the mismatch with the true posterior. We extend the class of posterior models that may be learned by using undirected graphical models. We develop an efficient method to train undirected posteriors by showing that the gradient of the training objective with respect to the parameters of the undirected posterior can be computed by backpropagation through Markov chain Monte Carlo updates. We apply these gradient estimators for training discrete VAEs with Boltzmann machine posteriors and demonstrate that undirected models outperform previous results obtained using directed graphical models as posteriors.
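A toy sketch of the key idea, under strong simplifying assumptions: a continuous 1-D Gaussian target with Langevin updates rather than the paper's Boltzmann-machine posterior. When the MCMC updates are written with reparameterized noise, the gradient of an objective with respect to a distribution parameter can be accumulated through the chain of updates by the chain rule.

```python
# Toy sketch (not the paper's Boltzmann-machine estimator): when MCMC updates are
# written with fixed reparameterized noise, d(objective)/d(parameter) can be pushed
# through the chain of updates. Here: Langevin steps targeting N(theta, 1), objective
# E[x_T]; as the chain mixes, E[x_T] -> theta and dE[x_T]/dtheta -> 1.
import numpy as np

def chain_and_gradient(theta, n_steps=200, step=0.1, n_chains=5_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_chains)          # initial state (independent of theta)
    dx_dtheta = np.zeros(n_chains)         # derivative of the state w.r.t. theta
    for _ in range(n_steps):
        noise = rng.normal(size=n_chains)
        # Langevin update: grad log p(x) = theta - x for the target N(theta, 1).
        x = x + 0.5 * step * (theta - x) + np.sqrt(step) * noise
        # Chain rule through the update, with the noise held fixed.
        dx_dtheta = dx_dtheta * (1 - 0.5 * step) + 0.5 * step
    return x.mean(), dx_dtheta.mean()

value, grad = chain_and_gradient(theta=2.0)
print(f"E[x_T] ~ {value:.3f} (expect ~2.0), dE[x_T]/dtheta ~ {grad:.3f} (expect ~1.0)")
```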

2 citations

Patent
12 Dec 2018
TL;DR: A collaborative filtering system based on variational autoencoders (VAEs) is presented, with example applications to search and anomaly detection; the row-wise VAE models the output of the corresponding column-based VAE as a set of parameters and uses these parameters in decoding.
Abstract: Collaborative filtering systems based on variational autoencoders (VAEs) are provided. VAEs may be trained on row-wise data without necessarily training a paired VAE on column-wise data (or vice-versa), and may optionally be trained via minibatches. The row-wise VAE models the output of the corresponding column-based VAE as a set of parameters and uses these parameters in decoding. In some implementations, a paired VAE is provided which receives column-wise data and models row-wise parameters; each of the paired VAEs may bind their learned column- or row-wise parameters to the output of the corresponding VAE. The paired VAEs may optionally be trained via minibatches. Unobserved data may be explicitly modelled. Methods for performing inference with such VAE-based collaborative filtering systems are also disclosed, as are example applications to search and anomaly detection.
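A compact PyTorch sketch of one ingredient only: a row-wise VAE over a masked user-item ratings matrix. The pairing with a column-wise VAE and the parameter binding described above are not implemented, and the class name, layer sizes, and loss weighting are hypothetical.

```python
# Hypothetical minimal sketch of a row-wise VAE on a ratings matrix with a mask for
# unobserved entries. Not the patented paired-VAE system; illustration only.
import torch
import torch.nn as nn

class RowVAE(nn.Module):
    def __init__(self, n_items, n_latent=8):
        super().__init__()
        self.enc = nn.Linear(n_items, 2 * n_latent)   # outputs mean and log-variance
        self.dec = nn.Linear(n_latent, n_items)

    def forward(self, ratings, mask):
        # Encode each user's row; unobserved entries are zeroed by the mask.
        mu, logvar = self.enc(ratings * mask).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        recon = self.dec(z)
        # Reconstruction loss restricted to observed entries, plus KL to a unit Gaussian.
        recon_loss = ((recon - ratings) ** 2 * mask).sum()
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum()
        return recon_loss + kl

torch.manual_seed(0)
ratings = torch.rand(4, 10)                    # 4 users x 10 items
mask = (torch.rand(4, 10) > 0.5).float()       # observed-entry mask
model = RowVAE(n_items=10)
loss = model(ratings, mask)
loss.backward()                                # gradients for one training step
```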

2 citations


Cited by
Journal Article
01 Oct 2001
TL;DR: Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the splitting and to measure variable importance, and the ideas are also applicable to regression.
Abstract: Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to AdaBoost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International Conference, pp. 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
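A brief scikit-learn sketch of the internal estimates described above, using out-of-bag scoring as the built-in generalization estimate and the forest's per-feature importances; the dataset and hyperparameters are illustrative.

```python
# Brief scikit-learn sketch of the internal estimates the abstract describes:
# out-of-bag accuracy as a built-in generalization estimate, plus per-feature importances.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
forest = RandomForestClassifier(
    n_estimators=200,
    max_features="sqrt",   # random feature selection at each split
    oob_score=True,        # internal (out-of-bag) estimate of generalization accuracy
    random_state=0,
)
forest.fit(X, y)

print(f"out-of-bag accuracy: {forest.oob_score_:.3f}")
top = sorted(enumerate(forest.feature_importances_), key=lambda t: -t[1])[:5]
print("top-5 feature importances:", [(i, round(v, 3)) for i, v in top])
```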

79,257 citations

Book
18 Nov 2016
TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; this book introduces its mathematical background, core techniques, and applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Book
01 Nov 2008
TL;DR: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization, responding to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems.
Abstract: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization. It responds to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems. For this new edition the book has been thoroughly updated throughout. There are new chapters on nonlinear interior methods and derivative-free methods for optimization, both of which are used widely in practice and the focus of much current research. Because of the emphasis on practical methods, as well as the extensive illustrations and exercises, the book is accessible to a wide audience. It can be used as a graduate text in engineering, operations research, mathematics, computer science, and business. It also serves as a handbook for researchers and practitioners in the field. The authors have strived to produce a text that is pleasant to read, informative, and rigorous - one that reveals both the beautiful nature of the discipline and its practical side.

17,420 citations

Journal Article
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Journal Article
TL;DR: A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving and a number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class.
Abstract: A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to information-theoretic aspects of optimization and benchmark measures of performance are also presented. Other issues addressed include time-varying optimization problems and a priori "head-to-head" minimax distinctions between optimization algorithms, distinctions that result despite the NFL theorems' enforcing of a type of uniformity over all algorithms.
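A tiny enumeration illustrates the NFL statement in its simplest special case: averaged over all objective functions on a small finite domain, two different fixed (non-repeating) search orders achieve identical average performance after any number of evaluations. The domain, value set, and performance measure below are arbitrary choices for illustration.

```python
# Tiny illustration of the "no free lunch" statement: averaged over *all* objective
# functions f: X -> Y on a small finite domain, any two non-repeating search orders
# perform identically (here: average best value found after each number of evaluations).
from itertools import product

domain = [0, 1, 2, 3]            # |X| = 4 search points
values = [0, 1]                  # |Y| = 2 possible objective values

def avg_best_after_k(order):
    """Average, over all |Y|^|X| functions, of the best value seen after k evaluations."""
    totals = [0.0] * len(order)
    fns = list(product(values, repeat=len(domain)))
    for f in fns:                          # f is a tuple: f[x] is the value at point x
        best = -float("inf")
        for k, x in enumerate(order):
            best = max(best, f[x])
            totals[k] += best
    return [t / len(fns) for t in totals]

print(avg_best_after_k([0, 1, 2, 3]))   # fixed left-to-right sweep
print(avg_best_after_k([3, 1, 0, 2]))   # a different fixed ordering: identical averages
```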

10,771 citations