
Can quantum probability provide a new direction for cognitive modeling?

01 Jun 2013-Behavioral and Brain Sciences (Behav Brain Sci)-Vol. 36, Iss: 3, pp 255-274
TL;DR: The thesis is that quantum probability theory provides a more accurate and powerful account of certain cognitive processes than classical probability theory, and this work discusses ways in which QP and CP theories converge.
Abstract: Classical (Bayesian) probability (CP) theory has led to an influential research tradition for modeling cognitive processes. Cognitive scientists have been trained to work with CP principles for so long that it is hard even to imagine alternative ways to formalize probabilities. However, in physics, quantum probability (QP) theory has been the dominant probabilistic approach for nearly 100 years. Could QP theory provide us with any advantages in cognitive modeling as well? Note first that both CP and QP theory share the fundamental assumption that it is possible to model cognition on the basis of formal, probabilistic principles. But why consider a QP approach? The answers are that (1) there are many well-established empirical findings (e.g., from the influential Tversky, Kahneman research tradition) that are hard to reconcile with CP principles; and (2) these same findings have natural and straightforward explanations with quantum principles. In QP theory, probabilistic assessment is often strongly context- and order- dependent, individual states can be superposition states (that are impossible to associate with specific values), and composite systems can be entangled (they cannot be decomposed into their subsystems). All these characteristics appear perplexing from a classical perspective. However, our thesis is that they provide a more accurate and powerful account of certain cognitive processes. We first introduce QP theory and illustrate its application with psychological examples. We then review empirical findings that motivate the use of quantum theory in cognitive theory, but also discuss ways in which QP and CP theories converge. Finally, we consider the implications of a QP theory approach to cognition for human rationality.

Summary (11 min read)


1.1. Why move toward quantum probability theory?

  • In this article the authors evaluate the potential of quantum probability (QP) theory for modeling cognitive processes.
  • Rather, the authors are interested in QP theory as a mathematical framework for cognitive modeling.
  • Superposition, entanglement, incompatibility, and interference are all related aspects of QP theory, which endow it with a unique character.
  • He served as chief editor of Journal of Mathematical Psychology from 2005 through 2010 and he is currently an associate editor of Psychological Review.
  • Instead, QP defines conjunction between incompatible questions in a sequential way, such as “A and then B.” Crucially, the outcome of question A can affect the consideration of question B, so that interference and order effects can arise.

1.2. Why move away from existing formalisms?

  • By now, the authors hope they have convinced readers that QP theory has certain unique properties, whose potential for cognitive modeling appears, at the very least, intriguing.
  • Many of these findings relate to order/context effects, violations of the law of total probability (which is fundamental to Bayesian modeling), and failures of compositionality.
  • On the one hand, there was the strong intuition from classical models (e.g., Newtonian physics, classical electromagnetism).
  • This is exactly what makes CP (and QP) models appealing to many theorists and why, as noted, in seeking to understand the unique features of QP theory, it is most natural to compare it with CP theory.
  • Note that the authors do not develop an argument that CP theory is unsuitable for cognitive modeling; it clearly is suitable in many cases.

2.1. The outcome space

  • First, a sample space is defined, in which specific outcomes about a question are subsets of this sample space.
  • Also, more general emotions, such as happiness, would be represented by subspaces of higher dimensionality.
  • To determine the probability of the answer happy, the authors need to project the state represented by |Ψ〉 onto the subspace for “happy” spanned by the vector |happy〉 (a small numerical sketch of this projection rule follows this list).
  • An important feature of QP theory is the distinction between superposition and basis states.
  • Therefore, a decision, which causes a person to resolve the indefinite state regarding a question into a definite state, is not a simple read-out from a pre-existing definite state; instead, it is constructed from the current context and question (Aerts & Aerts 1995).
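The projection rule summarized above can be written out in a few lines. The following is a minimal sketch, assuming a two-dimensional real vector space spanned by |happy〉 and |unhappy〉 and illustrative amplitudes a = 0.8 and b = 0.6; none of these numbers come from the target article.

```python
# Minimal sketch of the QP projection rule (illustrative values only).
import numpy as np

happy = np.array([1.0, 0.0])      # basis vector |happy>
unhappy = np.array([0.0, 1.0])    # basis vector |unhappy>

# Superposition state |Psi> = a|happy> + b|unhappy>, unit length by construction.
a, b = 0.8, 0.6
psi = a * happy + b * unhappy

P_happy = np.outer(happy, happy)  # projector onto the "happy" ray
prob_happy = np.linalg.norm(P_happy @ psi) ** 2
print(prob_happy)                 # 0.64 = |a|^2
```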

2.2. Compatibility

  • Suppose that the authors are interested in two questions, whether the person is happy or not, and also whether the person is employed or not.
  • Psychologically, incompatibility between questions means that a cognitive agent cannot formulate a single thought for combinations of the corresponding outcomes.
  • Conversely, certainty about employment aligns the state vector with the subspace for employed, which makes the person somewhat uncertain about her happiness (perhaps her job is sometimes stressful).
  • Order and context dependence of probability assessments (and, relatedly, the failure of commutativity in conjunction) are some of the most distinctive and powerful features of QP theory (see the sketch following this list).
  • Such intuitions can be readily realized in a QP framework through tensor product representations.
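Incompatibility can be sketched in the same two-dimensional setting by rotating the happiness basis relative to the employment basis; the 30-degree angle below is an invented illustration, not a value from the article.

```python
# Sketch of incompatible questions: certainty about employment leaves
# happiness uncertain because the two bases do not coincide.
import numpy as np

theta = np.pi / 6                                  # assumed angle between bases
employed = np.array([1.0, 0.0])
happy = np.array([np.cos(theta), np.sin(theta)])   # |happy> rotated by theta

P_happy = np.outer(happy, happy)

psi = employed                                     # state after deciding "employed"
prob_happy = np.linalg.norm(P_happy @ psi) ** 2
print(prob_happy)   # cos^2(theta) ~ 0.75: neither 0 nor 1, so happiness stays uncertain
```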

2.3. Time evolution

  • So far, the authors have seen static QP models, whereby they assess the probability for various outcomes for a state at a single point in time.
  • It is important to recall that the state vector is a superposition of components along different basis vectors.
  • In CP theory, the time-evolved state directly gives the probabilities for the possible outcomes.
  • Otherwise, QP theory can produce violations of the law of total probability.
  • Suppose that the hypothetical person knows she will find out whether she will be employed or not, before having the inner reflection about happiness (perhaps she plans to think about her happiness after a professional review).

3. The empirical case for QP theory in psychology

  • The authors explore whether the main characteristics of QP theory (order/context effects, interference, superposition, entanglement) provide us with any advantage in understanding psychological processes.
  • Many of these situations concern Kahneman and Tversky’s hugely influential research program on heuristics and biases (Kahneman et al. 1982).
  • The authors’ strategy is to first discuss how the empirical finding in question is inconsistent with CP theory axioms.
  • This is not to say that some model broadly based on classical principles cannot be formulated.
  • Such illustrations will be simplifications of the corresponding quantum models.

3.1. Conjunction fallacy

  • In a famous demonstration, Tversky and Kahneman (1983) presented participants with a story about a hypothetical person, Linda, who sounded very much like a feminist.
  • The important comparison concerned the statements “Linda is a bank teller” (extremely unlikely given Linda’s description) and “Linda is a bank teller and a feminist.”
  • The state vector could not be placed in between the bank teller and feminist subspaces, as this would mean that it has a high projection to both the bank teller and the feminist outcomes (only the latter is true).
  • Psychologically, the QP model explains the conjunction fallacy in terms of the context dependence of probability assessment (a numerical sketch follows this list).
  • Also, the QP model is compatible with the representativeness and availability heuristics.
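A hedged numerical sketch of this sequential-projection account of the Linda problem is given below; the rays and angles are invented for illustration and are not fitted values from the article.

```python
# Illustrative sketch of the conjunction fallacy as sequential projection.
import numpy as np

def ray(deg):
    a = np.radians(deg)
    return np.array([np.cos(a), np.sin(a)])

def proj(v):
    return np.outer(v, v)

psi = ray(85)            # Linda's description: close to "feminist"
feminist = ray(80)
bank_teller = ray(0)     # nearly orthogonal to Linda's description

p_bank = np.linalg.norm(proj(bank_teller) @ psi) ** 2
# Sequential conjunction "feminist and then bank teller":
p_fem_then_bank = np.linalg.norm(proj(bank_teller) @ proj(feminist) @ psi) ** 2
print(p_bank, p_fem_then_bank)   # ~0.008 vs ~0.030: the "conjunction" is larger
```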

3.2. Failures of commutativity in decision making

  • The authors next consider failures of commutativity in decision making, whereby asking the same two questions in different orders can lead to changes in response (Feldman & Lynch 1988; Schuman & Presser 1981; Tourangeau et al.).
  • QP theory can accommodate order effects in Gallup polls, in a way analogous to how the conjunction fallacy is explained.
  • The two sets of basis vectors are not entirely orthogonal; the authors assume that if a person considers Clinton honest, then that person is a little more likely to consider Gore honest as well.
  • It can be seen that the direct projection is smaller than the projection via the |Gore yes〉 vector (a numerical sketch of this order effect follows this list).
  • Trueblood and Busemeyer (2011) proposed a QP model for two such situations, a jury decision-making task (McKenzie et al. 2002) and a medical inference one (Bergus et al. 1998).
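A small sketch of the question-order effect follows, assuming nearby but non-identical “yes” rays for the Clinton and Gore questions; the angles are arbitrary illustrative choices rather than estimates from the Gallup data.

```python
# Illustrative sketch of order effects: sequential projections do not commute.
import numpy as np

def ray(deg):
    a = np.radians(deg)
    return np.array([np.cos(a), np.sin(a)])

def proj(v):
    return np.outer(v, v)

clinton_yes = ray(0)
gore_yes = ray(20)          # close to, but distinct from, the Clinton-yes ray
psi = ray(45)               # relatively uncommitted initial state

p_clinton_then_gore = np.linalg.norm(proj(gore_yes) @ proj(clinton_yes) @ psi) ** 2
p_gore_then_clinton = np.linalg.norm(proj(clinton_yes) @ proj(gore_yes) @ psi) ** 2
print(p_clinton_then_gore, p_gore_then_clinton)   # the two question orders disagree
```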

3.3. Violations of the sure thing principle

  • The model Trueblood and Busemeyer (2011) developed is an example of a dynamic QP model, whereby the inference process requires evolution of the state vector.
  • When participants were told that the opponent was going to cooperate, they decided to defect; and when they were told that the opponent was defecting, they decided to defect as well.
  • Tversky and Shafir (1992) described such violations of the sure thing principle as failures of consequential reasoning.
  • The same unitary operator also embodied the idea of wishful thinking, rotating the state vector so that the amplitudes for the “cooperate–cooperate” and “defect–defect” combinations for participant and opponent actions increased.
  • Note that this quantum model is more complex than the ones considered previously.

3.4. Asymmetry in similarity

  • The authors have considered how the QP explanation for the conjunction fallacy can be seen as a formalization of the representativeness heuristic (Tversky & Kahneman 1983).
  • In some cases, the similarity of A to B would not be the same as the similarity of B to A. Tversky’s (1977) findings profoundly challenged the predominant approach to similarity, whereby objects are represented as points in a multidimensional space, and similarity is modeled as a function of distance.
  • Pothos and Busemeyer (2011) proposed that different concepts in a person’s experience correspond to subspaces of different dimensionality, so that concepts for which there is more extensive knowledge are naturally associated with subspaces of greater dimensionality (a numerical sketch follows this list).
  • This is set so that it is neutral with respect to the A and B subspaces (i.e., prior to the similarity comparison, a participant would not be thinking more about A than about B, or vice versa).
  • Tversky’s proposal was that symmetry is violated, because the authors have more extensive knowledge about China than about Korea, and, therefore, China has more distinctive features relative to Korea.
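A minimal sketch of the similarity-as-projection idea, assuming a three-dimensional knowledge space in which “China” is assigned a two-dimensional subspace and “Korea” a single ray; the vectors are invented for illustration.

```python
# Similarity modeled as the squared length of sequential projections
# from a neutral state; the higher-dimensional concept gets a bigger subspace.
import numpy as np

P_china = np.diag([1.0, 1.0, 0.0])               # 2-D subspace projector for "China"
korea = np.array([0.8, 0.0, 0.6])                # 1-D ray (unit length) for "Korea"
P_korea = np.outer(korea, korea)

psi = np.ones(3) / np.sqrt(3)                    # neutral initial state

sim_korea_to_china = np.linalg.norm(P_china @ P_korea @ psi) ** 2
sim_china_to_korea = np.linalg.norm(P_korea @ P_china @ psi) ** 2
print(sim_korea_to_china, sim_china_to_korea)    # ~0.42 vs ~0.21: asymmetric
```

In this toy example, projecting first onto the lower-dimensional concept yields the larger value, matching the direction of Tversky’s asymmetry.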

4.1. Can the psychological relevance of CP theory be saved?

  • It is always possible to augment a model with additional parameters or mechanisms to accommodate problematic results.
  • Moreover, deviations from CP predictions in judgment could be explained by introducing assumptions of how participants interpret the likelihood of statements in a particular hypothesis, over and above what is directly stated (e.g., Sher & McKenzie 2008).
  • Also, the introduction of post-hoc parameters will lead to models that are descriptive and limited in insight.
  • Therefore, when obtaining psychological evidence for a formal framework, the authors do not just support the particular principles under scrutiny.
  • There is a clear sense that if one wishes to pursue a formal, probabilistic approach for the Tversky, Kahneman type of findings, then CP theory is not the right choice, even if it is not actually possible to disprove the applicability of CP theory to such findings.

4.2. Heuristics vs. formal probabilistic modeling

  • The critique of CP theory by Tversky, Kahneman and collaborators can be interpreted in a more general way, as a statement that the attempt to model cognition with any axiomatic set of principles is misguided.
  • Many of these proposals sought to relate generic memory or similarity processes to performance in decision making (e.g., the availability and representativeness heuristics; Tversky & Kahneman 1983).
  • Other researchers have developed heuristics as individual computational rules.
  • Likewise, failures of consequential reasoning in prisoner’s dilemma (Tversky & Shafir 1992) can be formalized with quantum interference effects.
  • The contrast between heuristic and formal probabilistic approaches to cognition is a crucial one for psychology.

4.3. Is QP theory more complex than CP theory?

  • The authors have discussed the features of QP theory, which distinguish it from CP theory.
  • Dynamic QP models must obey the law of double stochasticity, while CP Markov models can violate this law (illustrated in the sketch after this list).
  • More generally, a fundamental constraint of QP theory concerns Gleason’s theorem, namely that probabilities have to be associated with subspaces via the equation Prob(A|Ψ) = ‖P_A|Ψ〉‖². Finding that Gleason’s theorem is psychologically implausible would rule out quantum models.
  • Even if at a broad level CP and QP theories are subject to analogous constraints, a critic may argue that it is still possible that QP models are more flexible (perhaps because of their form).
  • The models could still differ with respect to their complexity.
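The double-stochasticity constraint mentioned above can be checked directly: for any unitary operator U, the matrix of squared magnitudes |U_ij|² has both rows and columns summing to one, whereas a classical Markov transition matrix need only have columns summing to one. A minimal sketch with an arbitrary Hermitian generator, chosen purely for illustration:

```python
# Checking double stochasticity of |U_ij|^2 for a unitary U = exp(-iH).
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5],
              [0.5, -1.0]])     # arbitrary Hermitian "Hamiltonian"
U = expm(-1j * H)               # unitary evolution operator
T = np.abs(U) ** 2              # transition probabilities |U_ij|^2

print(T.sum(axis=0))            # columns sum to 1
print(T.sum(axis=1))            # rows also sum to 1 (double stochasticity)
```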

5. The rational mind

  • Beginning with Aristotle and up until recently, scholars have believed that humans are rational because they are capable of reasoning on the basis of logic.
  • Considerable evidence accumulated that naïve observers do not typically reason with classical logic (Wason 1960); therefore, classical logic could not be maintained as a theory of thinking.
  • Finally, optimality is a key aspect of Anderson’s (1990) rational analysis and concerns the accuracy of probabilistic inference.
  • Classical theory would assume that this story generates a sample space for all possible characteristic combinations for Linda, including unfamiliar ones such as feminist bank teller.
  • Note that the perspective dependence of probabilistic assessment in QP theory may seem to go against an intuition that “objective” probabilities are somehow more valid or correct.

6.1 Theoretical challenges

  • The results of Tversky, Kahneman, and colleagues (e.g., Tversky & Kahneman 1974) preclude a complete explanation of cognitive processes with CP theory.
  • The authors have suggested that QP theory is the appropriate framework to employ for cases in which CP theory fails.
  • In exploring such a proposal, the first step should be to identify the precise boundary conditions between the applicability and failure of CP principles in cognitive modeling.
  • The results of Tversky and Kahneman (1974) reveal situations in which this reliance breaks down.
  • There is a further, potentially relevant literature on quantum information theory (Nielsen & Chuang 2000), which concerns the processing advantages of probabilistic inference based on QP theory.

6.2. Empirical challenges

  • So far, the quantum program has involved employing quantum computational principles to explain certain, prominent empirical findings.
  • Rather, the authors discussed results that have presented ongoing challenges and have resisted explanation based on classical principles.
  • Trueblood and Busemeyer (2011) developed a model to accommodate order effects in the assessment of evidence in McKenzie et al.’s (2002) task.
  • The model successfully described data from both the original conditions and a series of relevant extensions.
  • Overall, understanding the quantum formalism to the extent that surprising, novel predictions for cognition can be generated is no simple task (in physics, this was a process that took several decades).

6.3. Implications for brain neurophysiology

  • An unresolved issue is how QP computations are implemented in the brain.
  • The authors have avoided a detailed discussion of this research area because, although exciting, it is still in its infancy.
  • The most controversial (Atmanspacher 2004; Litt et al. 2006) perspective is that the brain directly supports quantum computations.
  • For quantum computation to occur, a system must be isolated from the environment, as environmental interactions cause quantum superposition states to rapidly decohere into classical states.
  • Overall, in cognitive science it has been standard to initially focus on identifying the mathematical principles underlying cognition, and later address the issue of how the brain can support the postulated computations.

6.4. The future of QP theory in psychology

  • There is little doubt that extensive further work is essential before all aspects of QP theory can acquire psychological meaning.
  • But this does not imply that current QP models are not satisfactory.
  • The purpose of this article is to argue that researchers attracted to probabilistic cognitive models need not be restricted to classical theory.
  • Rather, quantum theory provides many theoretical and practical advantages, and its applicability to psychological explanation should be further considered.

Projectors (or projection operators)

  • For a one-dimensional subspace, corresponding, for example, to the |happy〉 ray, the projector is a simple outer product, P_happy = |happy〉〈happy|.
  • Given the above subspace for “happy,” the probability that a person is happy is given by ‖P_happy|Ψ〉‖² = ‖|happy〉〈happy|Ψ〉‖².
  • 〈happy|Ψ〉 is the standard dot product and |happy〉 is a unit length vector.
  • The single lines on the right-hand side denote the modulus of a complex number.
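A minimal numerical sketch of this projector algebra is shown below, with a complex amplitude so that the role of the modulus is explicit; all values are illustrative.

```python
# Projector as an outer product, and probability as a squared projection length.
import numpy as np

happy = np.array([1.0, 0.0], dtype=complex)
P_happy = np.outer(happy, happy.conj())          # P_happy = |happy><happy|

assert np.allclose(P_happy @ P_happy, P_happy)   # projectors are idempotent
assert np.allclose(P_happy.conj().T, P_happy)    # and Hermitian

a = 0.6 + 0.48j                                  # complex amplitude along |happy>
b = np.sqrt(1 - abs(a) ** 2)                     # remaining (real) amplitude
psi = np.array([a, b])

print(np.linalg.norm(P_happy @ psi) ** 2)        # |a|^2 = |<happy|psi>|^2 ~ 0.59
```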

Composite systems

  • Two subspaces can be combined into a composite space in two ways: one way is by forming a tensor product space (as in Figure 1b) and the other way is by forming a space from a direct sum.
  • First consider the formation of a tensor product space.
  • Suppose |happy〉, |∼happy〉 are two basis vectors that span the subspace H, representing the possibility of happiness, and suppose |employed〉, |∼employed〉 are two basis vectors that span the subspace E, representing the possibility of employment.
  • Then, the tensor product space equals the span of the four basis vectors formed by the tensor products {|happy〉⊗|employed〉, |happy〉⊗|∼employed〉, |∼happy〉⊗|employed〉, |∼happy〉⊗|∼employed〉}, as sketched numerically after this list.
  • Next consider the formation of a space by direct sum.
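Returning to the tensor product construction above, the following sketch uses np.kron in place of ⊗ for these small example vectors; the basis vectors are standard unit vectors chosen for illustration.

```python
# Building the composite (tensor product) space for happiness x employment.
import numpy as np

happy = np.array([1.0, 0.0])
not_happy = np.array([0.0, 1.0])
employed = np.array([1.0, 0.0])
not_employed = np.array([0.0, 1.0])

# The four product basis vectors span a 4-dimensional composite space.
basis = [np.kron(h, e) for h in (happy, not_happy)
                       for e in (employed, not_employed)]
for v in basis:
    print(v)

print(np.kron(happy, employed))   # the product state "happy and employed"
```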

Time dependence

  • The quantum state vector changes over time according to Schrödinger’s equation, d/dt |Ψ(t)〉 = −i·H·|Ψ(t)〉, where H is a Hermitian linear operator.
  • This is the QP theory equivalent of the Kolmogorov forward equation for Markov models in CP theory.
  • The two (obviously related) operators H and U(t) contain all the information about the dynamical aspects of a system.
  • Thus, the effect of U(t) on a state vector is to rotate it in a way that captures some dynamical aspect of the situation of interest.
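A minimal sketch of this evolution, assuming an arbitrary two-dimensional Hermitian operator H chosen for illustration: U(t) = exp(−iHt) rotates the state while the outcome probabilities keep summing to one.

```python
# Time evolution |psi(t)> = U(t)|psi(0)> with U(t) = exp(-iHt).
import numpy as np
from scipy.linalg import expm

H = np.array([[0.0, 1.0],
              [1.0, 0.0]])                    # Hermitian operator (illustrative)
psi0 = np.array([1.0, 0.0], dtype=complex)    # initial state

for t in (0.0, 0.5, 1.0):
    U = expm(-1j * H * t)                     # unitary rotation of the state
    psi_t = U @ psi0
    probs = np.abs(psi_t) ** 2
    print(t, probs, probs.sum())              # probabilities always sum to 1
```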

An example of how interference can arise in QP theory

  • Consider a situation whereby a person tries to assess whether she is happy or not, depending upon whether she is employed or not.
  • Note that so far the situation is identical to what the authors would have had if they were applying a CP theory Markov model.
  • This nonlinearity in QP theory can lead to interference terms that produce violations of the law of total probability (a numerical sketch follows this list).
  • Therefore, regardless of the outcome regarding employment, the evolved state will be a state that is not a superposition one.
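A hedged sketch of the interference effect follows. It uses a deliberately simplified “two-path amplitude” picture rather than the article’s full projector formulation, and the amplitudes and relative phase are invented for illustration.

```python
# Interference: adding amplitudes before squaring can violate total probability.
import numpy as np

# Amplitudes for reaching "happy" via the two paths (employed / not employed).
amp_via_employed = 0.5 * np.exp(1j * 0.0)
amp_via_unemployed = 0.5 * np.exp(1j * 2.5)   # relative phase drives interference

classical = abs(amp_via_employed) ** 2 + abs(amp_via_unemployed) ** 2
quantum = abs(amp_via_employed + amp_via_unemployed) ** 2
print(classical, quantum)   # 0.5 vs ~0.1: the law of total probability fails
```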

ACKNOWLEDGMENTS

  • Research relevant to this work has been supported by the MIUR grant “Problem solving and decision making: Logical, psychological and neuroscientific aspects within criminal justice” (PRIN, n.2010RP5RNM_006) and by Grant CR 409/1-1 from the Deutsche Forschungsgemeinschaft (DFG) as part of the priority program New Frameworks of Rationality (SPP 1516).

Institute for Frontier Areas of Psychology, D-79098 Freiburg, Germany; Collegium Helveticum, CH-8092 Zurich, Switzerland.

  • It was an old idea by Niels Bohr, one of the founding architects of quantum physics, that central features of quantum theory, such as complementarity, are also of pivotal significance beyond the domain of physics.
  • The proper framework for a logic of incompatible propositions is a partial Boolean lattice (Primas 2007), where locally Boolean sublattices are pasted together in a globally non-Boolean way – just like an algebra of generally non-commuting operations may contain a subset of commuting operations.
  • The authors use the notion of “quantum probability” for psychological and cognitive models and their predictions (cf. Gudder 1988; Redei & Summers 2007).
  • Whereas Kolmogorov probabilities refer to events for a single condition, quantum probabilities refer to the entire set of incompatible conditions, necessary for a comprehensive description of the experiment.

13083-886 SP, Brazil.

  • Pothos & Busemeyer’s (P&B’s) query about whether quantum probability can provide a foundation for cognitive modeling embodies so many underlying implications that the subject is far from exhausted.
  • Quantum superposition is commonly considered to be a mapping of two bit states into one.
  • In their target article, Pothos & Busemeyer (P&B) elegantly argue that there may be quantum principles – notably superposition and entanglement – at play in the context of human cognitive behavior.
  • From the point of view of the process of subsuming information, the material meaningfully incorporated within an individual’s cognitive structure is never lost, but a process called “forgetting” takes place in a much more spontaneous manner, because it is a continuation of the very process of associative subsumption by which one learns.
  • In support of this idea, Todd (1999) also advocated that the unit of information embedded in Brookes’ theory is a concept derived from Ausubel’s learning theory.

Pittsburgh, PA 15213.

  • Quantum probability (QP) theory provides an alternative account of empirical phenomena in decision making that classical probability (CP) theory cannot explain.
  • Here, the authors argue that cognitive architectures, a modeling approach with a long history in the cognitive sciences, may also address the outlined challenges.
  • Whereas formal probability theories such as QP and CP represent conjunctive statements as logical conjunctions, such statements are represented as independent instances in memory in a computational theory such as ACT-R.
  • When eschewing a formal probabilistic framework in favor of a computational account, apparent impossibilities simply dissolve in light of the cognitive processes used to actually produce the decisions.

NOTE

  • ACT-R code, publications, and models are available at http://act-r.psy.cmu.edu.
  • Attributes and associations to the options in the Linda problem.

United Kingdom.

  • The authors concentrate on two aspects of the article by Pothos & Busemeyer (P&B): the relationship between classical and quantum probability and quantum probability as a basis for rational decisions.
  • From a mathematical point of view, CP is embedded as a special case in the more general non-commutative (also referred to as “quantum”) probability theory.
  • It is worth noting that the decisions of nonhuman animals violate the principles of rational decision making (Houston et al. 2007b).
  • With these examples, the authors are trying to illustrate that the macroscopic world of decisions is more complex than traditional models of decision theory assume.

Montreal, QC H3A 1B1, Canada.

  • Pothos & Busemeyer (P&B) present the Dirac formalism of quantum probability (DQP) as a potential direction for cognitive modeling.
  • P&B do not show how the framework could be used to build predictive theories: all the examples listed are post hoc descriptive models.
  • Local and global hidden variable theories cannot be distinguished, and a classical explanation cannot be ruled out.
  • DQP is restricted to closed systems, but the mind is an open system, and, therefore, NQP yields a better modeling framework.

Kingdom.

  • Quantum probability models for choice in large worlds may be motivated pragmatically – there is no third theory – or metaphysically: statistical processing in the brain adapts to the true scale-relative structure of the universe.
  • The premise that the CP-based theory of decision –which is better characterized as a theory of incentive response – has been refuted by experimental evidence is questionable.
  • This would invite the question as to whether there is some general feature of the world that explains why both fundamental physical structure and fundamental cognitive structure follow QP rather than CP.
  • Many philosophers presume a kind of atomism that is inconsistent with quantum physics (Ladyman & Ross 2007).

Los Angeles, CA 90032.

  • Pothos & Busemeyer (P&B) argue that classical probability (CP) fails to describe human decision processes accurately and should be supplanted by quantum probability.
  • To use Baron’s (2004) terminology, CP may be useful as a prescriptive theory of behavior, but not as a descriptive theory.
  • The focus is on how the answers interplay: happiness might be more probable if a person is employed (because he or she has money) or less probable (because she hates her job).
  • The authors are too focused on mathematics at the expense of usefulness.

3G1, Canada.

  • Quantum probability (QP) theory can be seen as a type of vector symbolic architecture (VSA): mental states are vectors storing structured information and manipulated using algebraic operations.
  • This allows existing biologically realistic neural models to be adapted to provide a mechanistic explanation of the cognitive phenomena described in the target article by Pothos & Busemeyer (P&B).
  • If HAPPY is a particular 500-dimensional vector, and EMPLOYED is a different 500-dimensional vector, then HAPPY⊛EMPLOYED gives a new 500-dimensional vector (a tensor product would give a 250,000-dimensional vector); see the sketch after this list.
  • Resolving this ambiguity will be a key test of QP.
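A small sketch of the dimensionality contrast drawn here, using circular convolution implemented with FFTs (as in holographic reduced representations) versus the tensor product; the vector contents are random and purely illustrative.

```python
# Circular convolution keeps the dimensionality fixed; the tensor product squares it.
import numpy as np

rng = np.random.default_rng(0)
dim = 500
HAPPY = rng.normal(size=dim) / np.sqrt(dim)
EMPLOYED = rng.normal(size=dim) / np.sqrt(dim)

# Circular convolution via FFT: still a 500-dimensional vector.
bound = np.fft.irfft(np.fft.rfft(HAPPY) * np.fft.rfft(EMPLOYED), n=dim)
print(bound.shape)                      # (500,)

# Tensor (outer) product: a 250,000-dimensional object.
print(np.outer(HAPPY, EMPLOYED).size)   # 250000
```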

R1. Beyond classical probability (CP) theory: The potential of quantum theory in psychology

  • As the authors mentioned in their main article, quantum probability theory simply refers to the theory for assigning probabilities to events developed in quantum mechanics, without any of the physics (cf. Aerts, Broekaert, Gabora, & Sozzo [Aerts et al.]).
  • Moreover, a sense of probabilistic determinism can arise in quantum theory in a way analogous to that of classical theory: in quantum theory, if it is likely that thinking that Gore is honest makes Clinton likely to be honest too, then the subspaces for the corresponding outcomes are near to each other.
  • It will not always be the case that a person thinking that Gore is honest will also think that Clinton is honest, but, on average, this will be the case.
  • Contrary to what Lee & Vanpaemel suggest, the objective of quantum cognitive models is exactly to provide insight into those aspects of cognitive process for which classical explanation breaks down.
  • Gonzalez & Lebiere rightly point out that cognitive architectures, such as Adaptive Character of Thought – Rational (ACT-R), go some way toward addressing their criticisms of approaches based on individual heuristics.

R2. Misconceptions on limitations

  • Even given the broad description of the theory in the target article, the authors were impressed that some commentators were able to develop their own variations of quantum models.
  • It is true that the authors motivate properties of quantum theory, notably incompatibility, partly by appeal to the unrealistic demands from the principle of unicity.
  • Processing one question plausibly interferes with knowledge about another; the available empirical results strongly indicate this to be the case, at least in some cases (e.g., the conjunction fallacy; Busemeyer et al. 2011).
  • Equally, the specific characteristics of quantum evolution in quantum models do not always map well onto cognitive processes, and in some cases classical models appear more successful (Busemeyer et al. 2006).
  • Note first that Probability (Clinton honest) ∗ Probability (Gore honest |Clinton honest) is the probability of deciding that Clinton is honest and Gore is honest.

R3. Empirical and theoretical extensions

  • Aerts et al. point out that it is not just QP that is relevant.
  • Rather, there are many aspects of quantum theory that are potentially relevant to the modeling of cognition.
  • Their demonstrations could be extended in a way such that the question of interest can have an answer along a continuum, rather than a binary yes–no.
  • By contrast, if the emphasis is on ensembles (e.g., how the behavior of a whole group of people changes), then perhaps mixed states are more appropriate (Franceschetti & Gire).
  • The authors’ knowledge space would be populated with several possible subspaces, corresponding to questions that relate to their world knowledge.

R4. Empirical challenges

  • Whether researchers accept the quantum framework as a viable alternative to CP theory is partly an empirical issue.
  • If the authors were to adopt a representation analogous to the one for the Linda problem, they would need an initial state that exists within a subspace for pet fish, which corresponds to the concept of guppy.
  • This is one examination of one particular quantum model, but it is the only available evidence so far, and it does its small bit toward undermining the claim that quantum models are in general more flexible than matched classical ones.
  • The claim that “information with regard to each question can be evaluated without appeal to the other” was used to conclude that these questions cannot be incompatible.
  • Exactly how precise the specification of subspaces and the state vector needs to be will depend on exactly how precise the authors require the predictions to be.

R5. Neural basis

  • It is possible to utilize quantum theory to build models of neural activity.
  • The issue of quantum neural processing, as described previously, is distinct from that of the neural implementation of quantum cognitive models.
  • Noori & Spanagel argue that brain neurobiology is deterministic and, moreover, that it deterministically specifies behavior.
  • But in his commentary, Hameroff advances his ideas by providing a detailed discussion of how his orchestrated objective reduction (Orch OR) model can be extended to incorporate subspaces and projections for complex thoughts, as would be required in quantum cognitive models.
  • It is currently unclear whether such dimensionality increases present a problem in relation to neural implementation.

R6. Rationality

  • The question of whether an account of human rationality (or not) should emerge from quantum cognitive models partly relates to their intended explanatory level.
  • Aerts, D. (1986) A possible explanation for the probabilities of quantum mechanics.
  • Proceedings of the 32nd Annual Conference of the Cognitive Science Society, ed. S. Ohlsson & R. Catrambone, pp. 2188–93.
  • Trueblood, J. S. & Busemeyer, J. R. (2011) A comparison of the belief-adjustment model and the quantum inference model as explanations of order effects in human inference.
  • Tversky, A. & Kahneman, D. (1974) Judgment under uncertainty: Heuristics and biases.


Can quantum probability provide a
new direction for cognitive modeling?
Emmanuel M. Pothos
Department of Psychology, City University London, London EC1V 0HB,
United Kingdom
emmanuel.pothos.1@city.ac.uk
http://www.staff.city.ac.uk/sbbh932/
Jerome R. Busemeyer
Department of Psychological and Brain Sciences, Indiana University,
Bloomington, IN 47405
jbusemey@indiana.edu
http://mypage.iu.edu/jbusemey/home.html
Abstract: Classical (Bayesian) probability (CP) theory has led to an influential research tradition for modeling cognitive processes. Cognitive scientists have been trained to work with CP principles for so long that it is hard even to imagine alternative ways to formalize probabilities. However, in physics, quantum probability (QP) theory has been the dominant probabilistic approach for nearly 100 years. Could QP theory provide us with any advantages in cognitive modeling as well? Note first that both CP and QP theory share the fundamental assumption that it is possible to model cognition on the basis of formal, probabilistic principles. But why consider a QP approach? The answers are that (1) there are many well-established empirical findings (e.g., from the influential Tversky, Kahneman research tradition) that are hard to reconcile with CP principles; and (2) these same findings have natural and straightforward explanations with quantum principles. In QP theory, probabilistic assessment is often strongly context- and order-dependent, individual states can be superposition states (that are impossible to associate with specific values), and composite systems can be entangled (they cannot be decomposed into their subsystems). All these characteristics appear perplexing from a classical perspective. However, our thesis is that they provide a more accurate and powerful account of certain cognitive processes. We first introduce QP theory and illustrate its application with psychological examples. We then review empirical findings that motivate the use of quantum theory in cognitive theory, but also discuss ways in which QP and CP theories converge. Finally, we consider the implications of a QP theory approach to cognition for human rationality.
Keywords: category membership; classical probability theory; conjunction effect; decision making; disjunction effect; interference
effects; judgment; quantum probability theory; rationality; similarity ratings
1. Preliminary issues
1.1. Why move toward quantum probability theory?
In this article we evaluate the potential of quantum probability (QP) theory for modeling cognitive processes. What is the motivation for employing QP theory in cognitive modeling? Does the use of QP theory offer the promise of any unique insights or predictions regarding cognition? Also, what do quantum models imply regarding the nature of human rationality? In other words, is there anything to be gained by seeking to develop cognitive models based on QP theory? Especially over the last decade, there has been growing interest in such models, encompassing publications in major journals, special issues, dedicated workshops, and a comprehensive book (Busemeyer & Bruza 2012). Our strategy in this article is to briefly introduce QP theory, summarize progress with selected QP models, and motivate answers to the abovementioned questions. We note that this article is not about the application of quantum physics to brain physiology. This is a controversial issue (Hameroff 2007; Litt et al. 2006) about which we are agnostic. Rather, we are interested in QP theory as a mathematical framework for cognitive modeling. QP theory is potentially relevant in any behavioral situation that involves uncertainty. For example, Moore (2002) reported that the likelihood of a yes response to the questions "Is Gore honest?" and "Is Clinton honest?" depends on the relative order of the questions. We will subsequently discuss how QP principles can provide a simple and intuitive account for this and a range of other findings.
QP theory is a formal framework for assigning probabilities to events (Hughes 1989; Isham 1989). QP theory can be distinguished from quantum mechanics, the latter being a theory of physical phenomena. For the present purposes, it is sufficient to consider QP theory as the abstract foundation of quantum mechanics not specifically tied to physics (for more refined characterizations see, e.g., Aerts & Gabora 2005b; Atmanspacher et al. 2002; Khrennikov 2010; Redei & Summers 2007). The development of quantum theory has been the result of intense effort from some of the greatest scientists of all time, over a period of >30 years. The idea of quantum was first proposed by Planck in the early 1900s and advanced by Einstein. Contributions from Bohr, Born, Heisenberg, and Schrödinger all led to the eventual formalization of QP theory by von Neumann and Dirac in the 1930s. Part of the appeal of using QP theory in cognition relates to confidence in the robustness of its mathematics. Few other theoretical frameworks in any science have been scrutinized so intensely, led to such surprising predictions, and, also, changed human existence as much as QP theory (when applied to the physical world; quantum mechanics has enabled the development of, e.g., the transistor, and, therefore, the microchip and the laser).
QP theory is, in principle, applicable not just in physics, but in any science in which there is a need to formalize uncertainty. For example, researchers have been pursuing applications in areas as diverse as economics (Baaquie 2004) and information theory (e.g., Grover 1997; Nielsen & Chuang 2000). The idea of using quantum theory in psychology has existed for nearly 100 years: Bohr, one of the founding fathers of quantum theory, was known to believe that aspects of quantum theory could provide insight about cognitive process (Wang et al., in press). However, Bohr never made any attempt to provide a formal cognitive model based on QP theory, and such models have started appearing only fairly recently (Aerts & Aerts 1995; Aerts & Gabora 2005b; Atmanspacher et al. 2004; Blutner 2009; Bordley 1998; Bruza et al. 2009; Busemeyer et al. 2006b; Busemeyer et al. 2011; Conte et al. 2009; Khrennikov 2010; Lambert-Mogiliansky et al. 2009; Pothos & Busemeyer 2009; Yukalov & Sornette 2010). But what are the features of quantum theory that make it a promising framework for understanding cognition? It seems essential to address this question before expecting readers to invest the time for understanding the (relatively) new mathematics of QP theory.
Superposition, entanglement, incompatibility, and interference are all related aspects of QP theory, which endow it with a unique character. Consider a cognitive system, which concerns the cognitive representation of some information about the world (e.g., the story about the hypothetical Linda, used in Tversky and Kahneman's [1983] famous experiment; sect. 3.1 in this article). Questions posed to such systems ("Is Linda feminist?") can have different outcomes (e.g., "Yes, Linda is feminist"). Superposition has to do with the nature of uncertainty about question outcomes. The classical notion of uncertainty concerns our lack of knowledge about the state of the system that determines question outcomes. In QP theory, there is a deeper notion of uncertainty that arises when a cognitive system is in a superposition among different possible outcomes. Such a state is not consistent with any single possible outcome (that this is the case is not obvious; this remarkable property follows from the Kochen–Specker theorem). Rather, there is a potentiality (Isham 1989, p. 153) for different possible outcomes, and if the cognitive system evolves in time, so does the potentiality for each possibility. In quantum physics, superposition appears puzzling: what does it mean for a particle to have a potentiality for different positions, without it actually existing at any particular position? By contrast, in psychology, superposition appears an intuitive way to characterize the fuzziness (the conflict, ambiguity, and ambivalence) of everyday thought.
Entanglement concerns the compositionality of complex cognitive systems. QP theory allows the specification of entangled systems for which it is not possible to specify a joint probability distribution from the probability distributions of the constituent parts. In other words, in entangled composite systems, a change in one constituent part of the system necessitates changes in another part. This can lead to interdependencies among the constituent parts not possible in classical theory, and surprising predictions, especially when the parts are spatially or temporally separated.
In quantum theory, there is a fundamental distinction between compatible and incompatible questions for a cognitive system. Note that the terms compatible and incompatible have a specific, technical meaning in QP theory, which should not be confused with their lay use in language. If two questions, A and B, about a system are compatible, it is always possible to define the conjunction between A and B. In classical systems, it is assumed by default that all questions are compatible. Therefore, for example, the conjunctive question "are A and B true" always has a yes or no answer and the order between questions A and B in the conjunction does not matter. By contrast, in QP theory, if two questions A and B are incompatible, it is impossible to define a single question regarding their conjunction. This is because an answer to question A implies a superposition state regarding question B (e.g., if A is true at a time point, then B can be neither true nor false at the same time point). Instead, QP defines conjunction between incompatible questions in a sequential way, such as "A and then B." Crucially, the outcome of question A can affect the consideration of question B, so that interference and order effects can arise. This is a novel way to think of probability, and one that is key to some of the most puzzling predictions of quantum physics. For example, knowledge of the position of a particle imposes uncertainty on its momentum. However, incompatibility may make more sense when considering cognitive systems and, in fact, it was first introduced in psychology. The physicist Niels Bohr borrowed the notion of incompatibility from the work of William James. For example, answering one attitude question can interfere with answers to subsequent questions (if they are incompatible), so that their relative order becomes important. Human judgment and preference often display order and context effects, and we shall argue that in such cases quantum theory provides a natural explanation of cognitive process.

EMMANUEL POTHOS studied physics at Imperial College, during which time he obtained the Stanley Raimes Memorial prize in mathematics, and continued with a doctorate in experimental psychology at Oxford University. He has worked with a range of computational frameworks for cognitive modeling, including ones based on information theory, flexible representation spaces, Bayesian methods, and, more recently, quantum theory. He has authored approximately sixty journal articles on related topics, as well as on applications of cognitive methods to health and clinical psychology. Pothos is currently a senior lecturer in psychology at City University London.

JEROME BUSEMEYER received his PhD as a mathematical psychologist from University of South Carolina in 1980, and later he enjoyed a post-doctoral position at University of Illinois. For 14 years he was a faculty member at Purdue University. He moved on to Indiana University, where he is provost professor, in 1997. Busemeyer's research has been steadily funded by the National Science Foundation, National Institute of Mental Health, and National Institute on Drug Abuse, and in return he served on national grant review panels for these agencies. He has published over 100 articles in various cognitive and decision science journals, such as Psychological Review, as well as serving on their editorial boards. He served as chief editor of Journal of Mathematical Psychology from 2005 through 2010 and he is currently an associate editor of Psychological Review. From 2005 through 2007, Busemeyer served as the manager of the Cognition and Decision Program at the Air Force Office of Scientific Research. He became a fellow of the Society of Experimental Psychologists in 2006. His research includes mathematical models of learning and decision making, and he formulated a dynamic theory of human decision making called decision field theory. Currently, he is working on a new theory applying quantum probability to human judgment and decision making, and he published a new book on this topic with Cambridge University Press.

1.2. Why move away from existing formalisms?
By now, we hope we have convinced readers that QP theory has certain unique properties, whose potential for cognitive modeling appears, at the very least, intriguing. For many researchers, the inspiration for applying quantum theory in cognitive modeling has been the widespread interest in cognitive models based on CP theory (Anderson 1991; Griffiths et al. 2010; Oaksford & Chater 2007; Tenenbaum et al. 2011). Both CP and QP theories are formal probabilistic frameworks. They are founded on different axioms (the Kolmogorov and Dirac/von Neumann axioms, respectively) and, therefore, often produce divergent predictions regarding the assignment of probabilities to events. However, they share profound commonalities as well, such as the central objective of quantifying uncertainty, and similar mechanisms for manipulating probabilities. Regarding cognitive modeling, quantum and classical theorists share the fundamental assumption that human cognition is best understood within a formal probabilistic framework.
As Griffiths et al. (2010, p. 357) note, probabilistic models of cognition pursue "a top-down or function-first strategy, beginning with abstract principles that allow agents to solve problems posed by the world and then attempting to reduce these principles to psychological and neural processes." That is, the application of CP theory to cognition requires a scientist to create hypotheses regarding cognitive representations and inductive biases and, therefore, elucidate the fundamental questions of how and why a cognitive problem is successfully addressed. In terms of Marr's (1982) analysis, CP models are typically aimed at the computational and algorithmic levels, although perhaps it is more accurate to characterize them as "top down" or "function first" (as Griffiths et al. 2010, p. 357).
We can recognize the advantage of CP cognitive models in at least two ways. First, in a CP cognitive model, the principles that are invoked (the axioms of CP theory) work as a logical team and always deductively constrain each other. By contrast, alternative cognitive modeling approaches (e.g., based on heuristics) work alone and therefore are more likely to fall foul of arbitrariness problems, whereby it is possible to manipulate each principle in the model independently of other principles. Second, neuroscience methods and computational bottom-up approaches are typically unable to provide much insight into the fundamental why and how questions of cognitive process (Griffiths et al. 2010). Overall, there are compelling reasons for seeking to understand the mind with CP theory. The intention of QP cognitive models is aligned with that of CP models. Therefore, it makes sense to present QP theory side by side with CP theory, so that readers can appreciate their commonalities and differences.
A related key issue is this: if CP theory is so successful and elegant (at least, in cognitive applications), why seek an alternative? Moreover, part of the motivation for using CP theory in cognitive modeling is the strong intuition supporting many CP principles. For example, the probability of A and B is the same as the probability of B and A (Prob(A&B) = Prob(B&A)). How can it be possible that the probability of a conjunction depends upon the order of the constituents? Indeed, as Laplace (1816, cited in Perfors et al. 2011) said, "probability theory is nothing but common sense reduced to calculation." By contrast, QP theory is a paradigm notorious for its conceptual difficulties (in the 1960s, Feynman famously said "I think I can safely say that nobody understands quantum mechanics"). A classical theorist might argue that, when it comes to modeling psychological intuition, we should seek to apply a computational framework that is as intuitive as possible (CP theory) and avoid the one that can lead to puzzling and, superficially at least, counterintuitive predictions (QP theory).
Human judgment, however, often goes directly against CP principles. A large body of evidence has accumulated to this effect, mostly associated with the influential research program of Tversky and Kahneman (Kahneman et al. 1982; Tversky & Kahneman 1973; 1974; Tversky & Shafir 1992). Many of these findings relate to order/context effects, violations of the law of total probability (which is fundamental to Bayesian modeling), and failures of compositionality. Therefore, if we are to understand the intuition behind human judgment in such situations, we have to look for an alternative probabilistic framework. Quantum theory was originally developed so as to model analogous effects in the physical world and therefore, perhaps, it can offer insight into those aspects of human judgment that seem paradoxical from a classical perspective. This situation is entirely analogous to that faced by physicists early in the last century. On the one hand, there was the strong intuition from classical models (e.g., Newtonian physics, classical electromagnetism). On the other hand, there were compelling empirical findings that were resisting explanation on the basis of classical formalisms. Therefore, physicists had to turn to quantum theory, and so paved the way for some of the most impressive scientific achievements.
It is important to note that other cognitive theories embody order/context effects or interference effects or other quantum-like components. For example, a central aspect of the gestalt theory of perception concerns how the dynamic relationships among the parts of a distal layout together determine the conscious experience corresponding to the image. Query theory (Johnson et al. 2007) is a proposal for how value is constructed through a series of (internal) queries, and has been used to explain the endowment effect in economic choice. In query theory, value is constructed, rather than read off, and also different queries can interfere with each other, so that query order matters. In configural weight models (e.g., Birnbaum 2008) we also encounter the idea that, in evaluating gambles, the context of a particular probability-consequence branch (e.g., its rank order) will affect its weight. The theory also allows weight changes depending upon the observer perspective (e.g., buyer vs. seller). Anderson's (1971) integration theory is a family of models for how a person integrates information from several sources, and also incorporates a dependence on order. Fuzzy trace theory (Reyna 2008; Reyna & Brainerd 1995) is based on a distinction between verbatim and gist information, the latter corresponding to the general semantic qualities of an event. Gist information can be strongly context and observer dependent and this has led fuzzy trace theory to some surprising predictions (e.g., Brainerd et al. 2008). This brief overview shows that there is a diverse range of cognitive models that include a role for context or order, and a comprehensive comparison is not practical here. However, when comparisons have been made, the results favored quantum theory (e.g., averaging theory was shown to be inferior to a matched quantum model, Trueblood & Busemeyer 2011). In some other cases, we can view QP theory as a way to formalize previously informal conceptualizations (e.g., for query theory and the fuzzy trace theory).
Overall, there is a fair degree of flexibility in the particular specification of computational frameworks in cognitive modeling. In the case of CP and QP models, this flexibility is tempered by the requirement of adherence to the axioms in each theory: all specific models have to be consistent with these axioms. This is exactly what makes CP (and QP) models appealing to many theorists and why, as noted, in seeking to understand the unique features of QP theory, it is most natural to compare it with CP theory.

In sum, a central aspect of this article is the debate about whether psychologists should explore the utility of quantum theory in cognitive theory; or whether the existing formalisms are (mostly) adequate and a different paradigm is not necessary. Note that we do not develop an argument that CP theory is unsuitable for cognitive modeling; it clearly is suitable, in many cases. And, moreover, as will be discussed, CP and QP processes sometimes converge in their predictions. Rather, what is at stake is whether there are situations in which the distinctive features of QP theory provide a more accurate and elegant explanation for empirical data. In the next section we provide a brief consideration of the basic mechanisms in QP theory. Perhaps contrary to common expectation, the relevant mathematics is simple and mostly based on geometry and linear algebra. We next consider empirical results that appear puzzling from the perspective of CP theory, but can naturally be accommodated within QP models. Finally, we discuss the implications of QP theory for understanding rationality.
2. Basic assumptions in QP theory and psychological motivation
2.1. The outcome space
CP theory is a set-theoretic way to assign probabilities to the possible outcomes of a question. First, a sample space is defined, in which specific outcomes about a question are subsets of this sample space. Then, a probability measure is postulated, which assigns probabilities to disjoint outcomes in an additive manner (Kolmogorov 1933/1950). The formulation is different in QP theory, which is a geometric theory of assigning probabilities to outcomes (Isham 1989). A vector space (called a Hilbert space) is defined, in which possible outcomes are represented as subspaces of this vector space. Note that our use of the terms questions and outcomes is meant to imply the technical QP terms observables and propositions.
A vector space represents all possible outcomes for ques-
tions we could ask about a system of interest. For example,
consider a hypothetical person and the general question of
that persons emotional state. Then, one-dimensional sub-
spaces (called rays) in the vector space would correspond
to the most elementary emotions possible. The number
of unique elementary emotions and their relation to each
other determine the overall dimensionality of the vector
space. Also, more general emotions, such as happiness,
would be represented by subspaces of higher dimensional-
ity. In Figure 1a, we consider the question of whether a hypothetical person is happy or not.

Figure 1. An illustration of basic processes in QP theory. In Figure 1b, all vectors are co-planar, and the figure is a two-dimensional one. In Figure 1c, the three vectors "Happy, employed," "Happy, unemployed," and "Unhappy, employed" are all orthogonal to each other, so that the figure is a three-dimensional one. (The fourth dimension, "unhappy, unemployed," is not shown.)

However, because it is
hard to picture high multidimensional subspaces, for prac-
tical reasons we assume that the outcomes of the happiness
question are one-dimensional subspaces. Therefore, one
ray corresponds to the person definitely being happy and another one to that person definitely being unhappy.

Our initial knowledge of the hypothetical person is indicated by the state vector, a unit length vector, denoted as |Ψ⟩ (the bracket notation for a vector is called the Dirac notation). In psychological applications, it often refers to the state of mind, perhaps after reading some instructions for a psychological task. More formally, the state vector embodies all our current knowledge of the cognitive system under consideration. Using the simple vector space in Figure 1a, we can write |Ψ⟩ = a|happy⟩ + b|unhappy⟩. Any vector |Ψ⟩ can be expressed as a linear combination of the |happy⟩ and |unhappy⟩ vectors, so that these two vectors form a basis for the two-dimensional space we have employed. The a and b constants are called amplitudes and they reflect the components of the state vector along the different basis vectors.
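Note that, because |happy⟩ and |unhappy⟩ are orthogonal and |Ψ⟩ has unit length, the squared amplitudes already behave as an additive probability assignment: |a|² + |b|² = 1, so the probabilities for the two mutually exclusive outcomes, "happy" and "unhappy," sum to 1.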
To determine the probability of the answer "happy," we need to project the state represented by |Ψ⟩ onto the subspace for "happy," spanned by the vector |happy⟩. This is done using what is called a projector, which takes the vector |Ψ⟩ and lays it down on the subspace spanned by |happy⟩; this projector can be denoted as P_happy. The projection to the |happy⟩ subspace is denoted by P_happy|Ψ⟩ = a|happy⟩. (Here and elsewhere we will slightly elaborate on some of the basic definitions in the Appendix.) Then, the probability that the person is happy is equal to the squared length of the projection, ||P_happy|Ψ⟩||². That is, the probability that the person has a particular property depends upon the projection of |Ψ⟩ onto the subspace corresponding to the property. In our simple example, this probability reduces to ||P_happy|Ψ⟩||² = |a|², which is the squared magnitude of the amplitude of the state vector along the |happy⟩ basis vector. The idea that projection can be employed in psychology to model the match between representations has been explored before (Sloman 1993), and the QP cognitive program can be seen as a way to generalize these early ideas. Also, note that a remarkable mathematical result, Gleason's theorem, shows that the QP way of assigning probabilities to subspaces is unique (e.g., Isham 1989, p. 210). It is not possible to devise another scheme for assigning numbers to subspaces that satisfies the basic requirements for an additive probability measure (i.e., that the probabilities assigned to a set of mutually exclusive and exhaustive outcomes are individually between 0 and 1, and sum to 1).
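To make the projection rule concrete, the following minimal sketch (our illustration, not a model from the article) computes this probability numerically with NumPy. The basis vectors and the amplitudes a = 0.8 and b = 0.6 are hypothetical choices; the only substantive ingredient is the rule that the probability of "happy" is the squared length of the projected state.

import numpy as np

# Orthonormal basis vectors for the two mutually exclusive outcomes.
happy = np.array([1.0, 0.0])
unhappy = np.array([0.0, 1.0])

# Hypothetical amplitudes; the state vector must have unit length (|a|^2 + |b|^2 = 1).
a, b = 0.8, 0.6
psi = a * happy + b * unhappy          # |psi> = a|happy> + b|unhappy>

# Projector onto the one-dimensional "happy" subspace: P_happy = |happy><happy|.
P_happy = np.outer(happy, happy)

# Probability of "happy" = squared length of the projection of |psi>.
prob_happy = np.linalg.norm(P_happy @ psi) ** 2
print(prob_happy)                      # ~0.64, i.e., |a|^2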
An important feature of QP theory is the distinction between superposition and basis states. In the abovementioned example, after the person has decided that she is happy, then the state vector is |Ψ⟩ = |happy⟩; alternatively, if she decides that she is unhappy, then |Ψ⟩ = |unhappy⟩. These are called basis states, with respect to the question about happiness, because the answer is certain when the state vector |Ψ⟩ exactly coincides with one basis vector. Note that this explains why the subspaces corresponding to mutually exclusive outcomes (such as being happy and being unhappy) are at right angles to each other. If a person is definitely happy, i.e., |Ψ⟩ = |happy⟩, then we want a zero probability that the person is unhappy, which means a zero projection to the subspace for unhappy. This will only be the case if the happy, unhappy subspaces are orthogonal.
Before the decision, the state vector is a superposition of the two possibilities of happiness or unhappiness, so that |Ψ⟩ = a|happy⟩ + b|unhappy⟩. The concept of superposition differs from the CP concept of a mixed state. According to the latter, the person is either exactly happy or exactly unhappy, but we don't know which, and so we assign some probability to each possibility. However, in QP theory, when a state vector is expressed as |Ψ⟩ = a|happy⟩ + b|unhappy⟩, the person is neither happy nor unhappy. She is in an indefinite state regarding happiness, simultaneously entertaining both possibilities, but being uncommitted to either. In a superposition state, all we can talk about is the potential or tendency that the person will decide that she is happy or unhappy. Therefore, a decision, which causes a person to resolve the indefinite state regarding a question into a definite (basis) state, is not a simple read-out from a pre-existing definite state; instead, it is constructed from the current context and question (Aerts & Aerts 1995). Note that other researchers have suggested that the way of exploring the available premises can affect the eventual judgment, as much as the premises themselves, so that judgment is a constructive process (e.g., Johnson et al. 2007; Shafer & Tversky 1985). The interesting aspect of QP theory is that it fundamentally requires a constructive role for the process of disambiguating a superposition state (this relates to the Kochen–Specker theorem).
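The constructive role of a decision can be illustrated with the same toy setup (again our sketch, with hypothetical amplitudes, not taken from the article): resolving the superposition amounts to projecting the state onto the chosen outcome and renormalizing, after which the happiness question has a certain answer.

import numpy as np

happy = np.array([1.0, 0.0])
unhappy = np.array([0.0, 1.0])
psi = 0.8 * happy + 0.6 * unhappy              # superposition before the decision

P_happy = np.outer(happy, happy)
P_unhappy = np.outer(unhappy, unhappy)

# Suppose the person decides she is happy: project onto |happy> and renormalize.
projected = P_happy @ psi
psi_after = projected / np.linalg.norm(projected)
print(psi_after)                               # [1. 0.], i.e., the basis state |happy>

# From the new (basis) state, "happy" is certain and "unhappy" has zero probability,
# because the two subspaces are orthogonal.
print(np.linalg.norm(P_happy @ psi_after) ** 2)    # 1.0
print(np.linalg.norm(P_unhappy @ psi_after) ** 2)  # 0.0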
2.2. Compatibility
Suppose that we are interested in two questions, whether
the person is happy or not, and also whether the person
is employed or not. In this example, there are two out-
comes with respect to the question about happiness, and
two outcomes regarding employment. In CP theory, it is
always possible to specify a single joint probability distri-
bution over all four possible conjunctions of outcomes for
happiness and employment, in a particular situation. (Griffiths [2003] calls this the unicity principle, and it is fundamental in CP theory.) By contrast, in QP theory, there is
a key distinction between compatible and incompatible
questions. For compatible questions, one can specify a
joint probability function for all outcome combinations
and in such cases the predictions of CP and QP theories
converge (ignoring dynamics). For incompatible questions,
it is impossible to determine the outcomes of all questions
concurrently. Being certain about the outcome of one
question induces an indefinite state regarding the outcomes
of other, incompatible questions.
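The following sketch (our illustration; the rotation angle and amplitudes are arbitrary assumptions, not values from the article) shows what incompatibility looks like in the simple two-dimensional space: if the employment outcomes are represented in a basis rotated relative to the happiness basis, the two projectors do not commute, the probability of answering "happy" and then "employed" differs from the reverse order, and no single joint distribution over the four conjunctions reproduces both orders.

import numpy as np

theta = np.pi / 3                                    # hypothetical angle between the two bases
happy = np.array([1.0, 0.0])
employed = np.array([np.cos(theta), np.sin(theta)])  # "employed" ray, incompatible with |happy>

P_happy = np.outer(happy, happy)
P_employed = np.outer(employed, employed)

psi = np.array([0.8, 0.6])                           # initial state vector (unit length)

def sequential_prob(P_first, P_second, state):
    # Probability of answering "yes" to the first question and then "yes" to the
    # second: project in order and take the squared length of the result.
    return np.linalg.norm(P_second @ (P_first @ state)) ** 2

print(sequential_prob(P_happy, P_employed, psi))     # P(happy, then employed) ~ 0.16
print(sequential_prob(P_employed, P_happy, psi))     # P(employed, then happy) ~ 0.21 -- order matters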
This absolutely crucial property of incompatibility is one
of the characteristics of QP theory that differentiates it
from CP theory. Psychologically, incompatibility between
questions means that a cognitive agent cannot formulate
a single thought for combinations of the corresponding out-
comes. This is perhaps because that agent is not used to
thinking about these outcomes together, for example, as in
the case of asking whether Linda (Tversky & Kahneman
1983) can be both a bank teller and a feminist. Incompatible
questions need to be assessed one after the other. A heuristic
guide for whether some questions should be considered
compatible is whether clarifying one is expected to interfere
with the evaluation of the other. Psychologically, the