Author

William G. Macready

Bio: William G. Macready is an academic researcher from D-Wave Systems. The author has contributed to research in topics: Quantum computer & Optimization problem. The author has an h-index of 34 and has co-authored 91 publications receiving 13,024 citations. Previous affiliations of William G. Macready include IBM & Santa Fe Institute.


Papers
Patent
05 Sep 2007
TL;DR: In this patent, a discrete optimization problem is solved using an analog optimization device such as a quantum processor: the objective function is converted into a first set of inputs, at least one constraint into a second set of inputs, and a third set of inputs is indicative of at least one penalty coefficient.
Abstract: Discrete optimization problems are solved using an analog optimization device such as a quantum processor. Problems are solved using an objective function and at least one constraint corresponding to the discrete optimization problem. The objective function is converted into a first set of inputs and the at least one constraint is converted into a second set of inputs for the analog optimization device. A third set of inputs is generated which is indicative of at least one penalty coefficient. A final state of the analog optimization device corresponds to at least a portion of the solution to the discrete optimization problem.
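
A minimal sketch of the penalty-method idea described above, under assumptions of my own (a toy objective and a single equality constraint, not the patent's procedure): the constraint is folded into the objective with a penalty coefficient, and the resulting unconstrained binary problem is minimized by exhaustive search.

```python
# Illustrative sketch only: penalty-method conversion of a constrained
# binary optimization problem into a single unconstrained objective.
import itertools

def objective(x):
    # Toy objective over three binary variables (an assumption for illustration).
    return x[0] + 2 * x[1] - 3 * x[2]

def constraint_violation(x):
    # Toy equality constraint x0 + x1 + x2 == 2, as a squared violation.
    return (x[0] + x[1] + x[2] - 2) ** 2

def penalized(x, penalty_coefficient):
    # The penalty coefficient plays the role of the "third set of inputs":
    # chosen large enough, it forces the unconstrained minimizer to be feasible.
    return objective(x) + penalty_coefficient * constraint_violation(x)

best = min(itertools.product([0, 1], repeat=3),
           key=lambda x: penalized(x, penalty_coefficient=10.0))
print(best, penalized(best, penalty_coefficient=10.0))  # -> (1, 0, 1) with value -2.0
```

Here the penalty weight 10.0 is large enough that the unconstrained minimizer also satisfies the constraint.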

36 citations

Patent
16 Jun 2010
TL;DR: In this article, the authors propose a quantum processor with multiple sets of qubits coupled to respective annealing signal lines, such that the dynamic evolution of each set of qubits is controlled independently from the dynamic evolutions of the other sets of qubits.
Abstract: Solving computational problems may include generating a logic circuit representation of the computational problem, encoding the logic circuit representation as a discrete optimization problem, and solving the discrete optimization problem using a quantum processor. Output(s) of the logic circuit representation may be clamped such that the solving involves effectively executing the logic circuit representation in reverse to determine input(s) that corresponds to the clamped output(s). The representation may be of a Boolean logic circuit. The discrete optimization problem may be composed of a set of miniature optimization problems, where each miniature optimization problem encodes a respective logic gate from the logic circuit representation. A quantum processor may include multiple sets of qubits, each set coupled to respective annealing signal lines such that dynamic evolution of each set of qubits is controlled independently from the dynamic evolutions of the other sets of qubits.
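
As an illustration of the "miniature optimization problem per gate" idea, the sketch below uses a common QUBO penalty for a single AND gate; this particular encoding is a standard construction and is not claimed to be the patent's. Clamping the output and minimizing over the inputs corresponds to running the gate in reverse, as described above.

```python
# Illustrative sketch: a per-gate QUBO penalty for z = x AND y.
# The penalty is 0 exactly on consistent (x, y, z) assignments and
# positive otherwise; a full circuit would sum one such term per gate.
import itertools

def and_gate_penalty(x, y, z):
    return x * y - 2 * (x + y) * z + 3 * z

# Verify: zero penalty iff z == x AND y.
for x, y, z in itertools.product([0, 1], repeat=3):
    print(x, y, z, and_gate_penalty(x, y, z), "ok" if z == (x & y) else "")

# "Clamp" the output to z = 1 and minimize over the inputs: the minimizer is
# the input assignment that produces that output (here, x = y = 1).
best_inputs = min(itertools.product([0, 1], repeat=2),
                  key=lambda xy: and_gate_penalty(xy[0], xy[1], 1))
print("inputs consistent with z = 1:", best_inputs)
```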

34 citations

Proceedings Article
01 Mar 2000
TL;DR: The proposed measure of dissimilarity between scales is the amount of extra information on one scale beyond that which exists on a different scale; this added information is perhaps most naturally determined using a maximum entropy inference of the distribution of patterns at the second scale.
Abstract: For systems usually characterized as "complex/living/intelligent" very often the spatio-temporal patterns exhibited on different scales differ markedly from one another. For example the biomass distribution of a human body "looks very different" depending on the spatial scale at which one examines that biomass. Conversely, the density patterns at different scales in "dead/simple" systems (e.g., gases, mountains, crystals) do not vary significantly from one another. Accordingly, the degrees of self-dissimilarity between various scales constitute a complexity "signature" of the system. Such signatures can be empirically measured for many real-world data sets concerning spatio-temporal densities, be they mass densities, species densities, or symbol densities. This allows us to compare the complexity signatures of wholly different kinds of systems (e.g., systems involving information density in a digital computer, vs. species densities in a rainforest, vs. capital density in an economy, etc.). Such signatures can also be clustered, to provide an empirically determined taxonomy of "kinds of systems" that share organizational traits. The precise measure of dissimilarity between scales that we propose is the amount of extra information on one scale beyond that which exists on a different scale. This "added information" is perhaps most naturally determined using a maximum entropy inference of the distribution of patterns at the second scale, based on the provided distribution at the first scale. We briefly discuss using our measure with other inference mechanisms (e.g., Kolmogorov complexity-based inference).
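
One hedged way to operationalize the "extra information" measure for a one-dimensional symbol sequence (an illustration only, not the paper's exact estimator): treat the single-symbol distribution as the coarser scale, take the maximum-entropy distribution over length-2 blocks consistent with those marginals (which is just the product of marginals), and measure the extra bits carried by the observed block distribution as a Kullback-Leibler divergence.

```python
# Hedged sketch of a self-dissimilarity-style "extra information" estimate
# between two scales of a symbol sequence (illustrative, not the paper's method).
from collections import Counter
from math import log2

def extra_information(sequence):
    # Coarse scale: distribution over single symbols.
    n = len(sequence)
    p_sym = {s: c / n for s, c in Counter(sequence).items()}

    # Fine scale: observed distribution over length-2 blocks.
    blocks = Counter(zip(sequence, sequence[1:]))
    m = sum(blocks.values())
    p_block = {b: c / m for b, c in blocks.items()}

    # Maximum-entropy block distribution consistent with the symbol marginals:
    # the product of marginals.
    p_maxent = {(a, b): p_sym[a] * p_sym[b] for a in p_sym for b in p_sym}

    # KL(p_block || p_maxent): bits per block beyond what the coarse scale implies.
    return sum(p * log2(p / p_maxent[b]) for b, p in p_block.items())

print(extra_information("abababababab"))  # strongly patterned: about 1 bit per block
print(extra_information("aabbabbaabab"))  # closer to random: much smaller
```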

31 citations

Patent
10 Jul 2006
TL;DR: In this patent, the factoring may be accomplished by creating a factor graph, mapping the factor graph onto an analog processor, initializing the analog processor to an initial state, evolving the analog processor to a final state, and receiving an output from the analog processor, the output comprising a set of factors of the number.
Abstract: Systems, methods and apparatus for factoring numbers are provided. The factoring may be accomplished by creating a factor graph, mapping the factor graph onto an analog processor, initializing the analog processor to an initial state, evolving the analog processor to a final state, and receiving an output from the analog processor, the output comprising a set of factors of the number.
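
The patent maps a factor graph onto an analog processor; the sketch below only illustrates the underlying optimization framing (an assumption of this note, not the patent's construction), posing factoring as minimization of (N - p*q)^2 over binary-encoded candidate factors and solving a toy instance by brute force.

```python
# Illustrative sketch: factoring posed as a discrete optimization problem by
# minimizing (n - p*q)**2 over binary-encoded candidate factors p and q.
# Brute force stands in for the analog processor described in the patent.
import itertools

def factor_by_optimization(n, bits=4):
    def decode(b):
        return sum(bit << i for i, bit in enumerate(b))

    best = None
    for pb in itertools.product([0, 1], repeat=bits):
        for qb in itertools.product([0, 1], repeat=bits):
            p, q = decode(pb), decode(qb)
            if p < 2 or q < 2:
                continue  # exclude the trivial factors 0 and 1
            cost = (n - p * q) ** 2
            if best is None or cost < best[0]:
                best = (cost, p, q)
    return best  # cost 0 means p * q == n exactly

print(factor_by_optimization(35))  # -> (0, 5, 7)
```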

30 citations

Book ChapterDOI
27 Sep 2017
TL;DR: In the last decade, QA systems as implemented by D-Wave have scaled with Moore-like growth; current architectures provide 2048 sparsely-connected qubits, and continued exponential growth is anticipated.
Abstract: Quantum annealers (QA) are specialized quantum computers that minimize objective functions over discrete variables by physically exploiting quantum effects. Current QA platforms allow for the optimization of quadratic objectives defined over binary variables, that is, they solve quadratic unconstrained binary optimization (QUBO) problems. In the last decade, QA systems as implemented by D-Wave have scaled with Moore-like growth. Current architectures provide 2048 sparsely-connected qubits, and continued exponential growth is anticipated.
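
For concreteness, a minimal QUBO instance and an exhaustive-search solver (the Q matrix below is an arbitrary illustration; an annealer samples low-energy states of the same objective physically rather than enumerating them):

```python
# Minimal QUBO sketch: minimize sum over (i, j) of Q[i, j] * x[i] * x[j]
# for binary x. The coefficients are an arbitrary example, not from the chapter.
import itertools

# Upper-triangular Q as a dict; diagonal entries act as linear terms.
Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): 2.0, (0, 1): 2.0, (1, 2): -1.0}
num_vars = 3

def energy(x):
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

best = min(itertools.product([0, 1], repeat=num_vars), key=energy)
print(best, energy(best))  # a minimum-energy assignment and its energy
```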

30 citations


Cited by
Journal ArticleDOI
01 Oct 2001
TL;DR: Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in splitting, and the same ideas are also applicable to regression.
Abstract: Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to AdaBoost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International Conference, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
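
A short sketch of the two ideas highlighted above, random feature selection at each split and internal (out-of-bag) error estimates, using scikit-learn as an assumed stand-in rather than anything from the paper itself:

```python
# Sketch assuming scikit-learn is available; the dataset and settings are arbitrary.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

forest = RandomForestClassifier(
    n_estimators=200,
    max_features="sqrt",  # random subset of features considered at each split
    oob_score=True,       # internal (out-of-bag) estimate of generalization error
    random_state=0,
).fit(X, y)

print("out-of-bag accuracy:", forest.oob_score_)
print("three largest feature importances:", sorted(forest.feature_importances_)[-3:])
```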

79,257 citations

Book
18 Nov 2016
TL;DR: Deep learning, as described in this book, is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Book
01 Nov 2008
TL;DR: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization, responding to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems.
Abstract: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization. It responds to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems. For this new edition the book has been thoroughly updated throughout. There are new chapters on nonlinear interior methods and derivative-free methods for optimization, both of which are used widely in practice and the focus of much current research. Because of the emphasis on practical methods, as well as the extensive illustrations and exercises, the book is accessible to a wide audience. It can be used as a graduate text in engineering, operations research, mathematics, computer science, and business. It also serves as a handbook for researchers and practitioners in the field. The authors have strived to produce a text that is pleasant to read, informative, and rigorous - one that reveals both the beautiful nature of the discipline and its practical side.

17,420 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Journal ArticleDOI
TL;DR: A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving and a number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class.
Abstract: A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to information-theoretic aspects of optimization and benchmark measures of performance are also presented. Other issues addressed include time-varying optimization problems and a priori "head-to-head" minimax distinctions between optimization algorithms, distinctions that result despite the NFL theorems' enforcing of a type of uniformity over all algorithms.
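
The central result can be written compactly in the notation of the paper, where $d_m^y$ denotes the sequence of cost values produced by $m$ distinct evaluations of an objective function $f$ under an algorithm $a$:

```latex
\[
  \sum_{f} P\!\left(d_m^{y} \mid f, m, a_1\right)
  \;=\;
  \sum_{f} P\!\left(d_m^{y} \mid f, m, a_2\right)
  \qquad\text{for any pair of algorithms } a_1, a_2.
\]
```

Summed uniformly over all objective functions, every algorithm induces the same distribution of cost sequences, which is the precise sense in which elevated performance on one class of problems must be offset on another.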

10,771 citations