Author

Matthew A. Kelly

Bio: Matthew A. Kelly is an academic researcher from the Penn State College of Information Sciences and Technology. The author has contributed to research in topics: Language model & Cognition. The author has an h-index of 9, and has co-authored 33 publications receiving 202 citations. Previous affiliations of Matthew A. Kelly include Pennsylvania State University & Queen's University.

Papers
Journal ArticleDOI
TL;DR: A novel analysis of the mathematics of VSAs and a novel technique for representing data in HRRs are presented, showing that HRRs can successfully encode vectors of locally structured data if the vectors are shuffled.
Abstract: Vector Symbolic Architectures (VSAs) such as Holographic Reduced Representations (HRRs) are computational associative memories used by cognitive psychologists to model behavioural and neurological aspects of human memory. We present a novel analysis of the mathematics of VSAs and a novel technique for representing data in HRRs. Encoding and decoding in VSAs can be characterised by Latin squares. Successful encoding requires the structure of the data to be orthogonal to the structure of the Latin squares. However, HRRs can successfully encode vectors of locally structured data if the vectors are shuffled. Shuffling results are illustrated using images but are applicable to any nonrandom data. The ability to use locally structured vectors provides a technique for detailed modelling of stimuli in HRR models.

Keywords: holographic reduced representations, vector symbolic architectures, associative memory, Latin squares, permutation

First proposed by Longuet-Higgins (1968) and Gabor (1969), a holographic associative memory is a computational memory based on the mathematics of holography. Holographic associative memory has been of interest to cognitive psychologists for the following reasons: (i) associative memories are content-addressable, allowing items to be retrieved without search, in a manner similar to the fast, parallel retrieval of memories in the human mind; (ii) just as human memory can store complicated and recursive relations between ideas, holographic associative memories can compactly store associations between associations; (iii) holographic associative memories have what is called "lossy" storage, which is useful for modelling human forgetting. The mathematics of holography has long been suggested as the principle underlying the neural basis of memory (Pribram, 1969).
Cognitive models based on holographic associative memory, such as TODAM (Murdock, 1982), TODAM2 (Murdock, 1993), and CHARM (Metcalfe-Eich, 1982), can explain and predict a variety of human memory phenomena. Holographic Reduced Representations (HRRs; Plate, 1994) are a refinement of Gabor's holographic associative memory. HRRs have also been used to model how humans understand analogies (Plate, 2000b; Eliasmith & Thagard, 2001) and the meaning of words (BEAGLE; Jones & Mewhort, 2007), how humans encode strings of characters (Hannagan, Dupoux, & Christophe, 2011), and how humans perform simple memory and problem-solving tasks such as playing rock, paper, scissors (Rutledge-Taylor, 2010) and solving Raven's progressive matrices (Rasmussen & Eliasmith, 2011).

Research into HRRs and HRR-based models has been motivated by limitations in the ability of traditional connectionist models (i.e., nonrecurrent models with one or two layers of connections) to represent knowledge with complicated structure (Plate, 1995). In traditional connectionist models, an item is represented by a pattern of activation across a group of neurons. Mathematically, the pattern of activation is represented by a vector of numbers that stand for the activations of the neurons. Relationships between pairs of items are defined by the connection weights between groups of neurons. The connection weights between two groups of neurons can be represented as a matrix of numbers. Relationships between more than two items can be defined using more connections, represented as tensors of numbers (i.e., multidimensional arrays; for psychological theory see Humphreys, Bain, & Pike, 1989; for computational theory see Smolensky, 1990). Smolensky's tensor memories provide a powerful approach to representing and manipulating relationships between items that differs substantively from traditional connectionist models. In particular, tensor memories (and, in fact, HRRs) do not need to be trained.
However, as the number of items bound together into an association grows, the size of the tensor needed to represent the relationship grows exponentially. …
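The binding and unbinding operations behind HRRs can be sketched in a few lines of NumPy. This is a generic HRR demonstration (circular convolution to bind, circular correlation to unbind), not the paper's shuffling technique or code; the dimensionality and the N(0, 1/n) element distribution are the standard choices from Plate's HRR work.

```python
import numpy as np

def cconv(a, b):
    """Bind two vectors by circular convolution (computed via FFT)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    """Approximately unbind: circular correlation of a with b."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

rng = np.random.default_rng(0)
n = 1024
# Random vectors with elements drawn from N(0, 1/n), per Plate's HRR setup
cue = rng.normal(0, 1 / np.sqrt(n), n)
target = rng.normal(0, 1 / np.sqrt(n), n)

trace = cconv(cue, target)     # store the association in a single n-vector
retrieved = ccorr(cue, trace)  # probe the trace with the cue

# Retrieval is noisy but recognisable: cosine similarity with the target
# is high (around 1/sqrt(2) for random vectors), far above chance
cos = retrieved @ target / (np.linalg.norm(retrieved) * np.linalg.norm(target))
```

Note that the bound trace has the same dimensionality as its inputs; this fixed-size "reduced" representation is exactly what distinguishes HRRs from the exponentially growing tensors discussed next.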

35 citations

Journal ArticleDOI
TL;DR: It is shown, first, that orthogonal contrasts limit the computational load and, second, that when combined with Gill’s (2007) algorithm, the factorial permutation test is both practical and efficient.
Abstract: The permutation test follows directly from the procedure in a comparative experiment, does not depend on a known distribution for error, and is sometimes more sensitive to real effects than are the corresponding parametric tests. Despite its advantages, the permutation test is seldom (if ever) applied to factorial designs because of the computational load that they impose. We propose two methods to limit the computation load. We show, first, that orthogonal contrasts limit the computational load and, second, that when combined with Gill’s (2007) algorithm, the factorial permutation test is both practical and efficient. For within-subjects designs, the factorial permutation test is equivalent to an ANOVA when the latter’s assumptions have been met. For between-subjects designs, the factorial test is conservative. Code to execute the routines described in this article may be downloaded from http://brm.psychonomic-journals.org/content/supplemental.
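The logic of the permutation test can be illustrated with a minimal one-factor, two-group sketch. This is not the paper's factorial procedure (orthogonal contrasts plus Gill's 2007 algorithm); the group sizes, means, and permutation count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two independent groups; the permutation test asks how often a random
# relabelling of the pooled scores produces a mean difference at least
# as extreme as the one observed.
a = rng.normal(1.5, 1.0, 20)   # group with a (large) true effect
b = rng.normal(0.0, 1.0, 20)

observed = a.mean() - b.mean()
pooled = np.concatenate([a, b])

n_perm = 5000
count = 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    diff = perm[:20].mean() - perm[20:].mean()
    if abs(diff) >= abs(observed):
        count += 1

# Add-one correction keeps the estimate strictly positive
p_value = (count + 1) / (n_perm + 1)
```

No distributional assumption about the error enters anywhere: the p-value comes entirely from the relabellings, which is the property the abstract highlights. The computational-load problem arises because a full factorial design multiplies the number of relabellings to consider.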

24 citations

Journal ArticleDOI
TL;DR: An algorithm is presented to circumvent the problem when the smaller group has the larger variance and it is shown, by simulation, that the algorithm brings the error rate back to the nominal value without sacrificing the ability to detect true effects.
Abstract: When both the variance and the N are unequal in a two-group design, the probability of a Type I error shifts from the nominal 5% error rate. The probability is too liberal when the small cell has the larger variance and too conservative when the large cell has the larger variance. We present an algorithm to circumvent the problem when the smaller group has the larger variance and show, by simulation, that the algorithm brings the error rate back to the nominal value without sacrificing the ability to detect true effects.
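The Type I error inflation described here is easy to reproduce by simulation. The sketch below is not the paper's corrective algorithm; it only demonstrates the problem, using an ordinary pooled-variance t test with an assumed small cell (n = 5, SD = 3) and large cell (n = 45, SD = 1) under a true null hypothesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def pooled_t(x, y):
    """Classic two-sample t statistic with a pooled variance estimate."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(sp2 * (1 / nx + 1 / ny))

n_small, n_big = 5, 45
t_crit = 2.011          # two-tailed 5% critical value for df = 48
n_sims = 4000
rejections = 0
for _ in range(n_sims):
    # Both means are 0, so every rejection is a Type I error
    small = rng.normal(0, 3.0, n_small)  # small cell with the LARGE variance
    big = rng.normal(0, 1.0, n_big)      # large cell with the small variance
    if abs(pooled_t(small, big)) > t_crit:
        rejections += 1

type1_rate = rejections / n_sims  # lands well above the nominal .05
```

The inflation arises because the pooled estimate is dominated by the large, low-variance cell, so the standard error of the mean difference is badly underestimated; swapping the variances makes the test conservative instead.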

22 citations

Journal ArticleDOI
01 Jul 2014
TL;DR: It is argued that cognition needs to be understood at both the symbolic and sub-symbolic levels, and it is demonstrated that DSHM intrinsically operates at both of these levels of description.
Abstract: We describe the DSHM (Dynamically Structured Holographic Memory) model of human memory, which uses high-dimensional vectors to represent items in memory. The complexity and intelligence of human behavior can be attributed, in part, to our ability to utilize vast knowledge acquired over a lifetime of experience with our environment. Thus models of memory, particularly models that can scale up to lifetime learning, are critical to modeling human intelligence. DSHM is based on the BEAGLE model of language acquisition (Jones & Mewhort, 2007) and extends this type of model to general memory phenomena. We demonstrate that DSHM can model a wide variety of human memory effects. Specifically, we model the fan effect, the problem size effect (from math cognition), dynamic game playing (detecting sequential dependencies from memories of past moves), and time delay learning (using an instance-based approach). This work suggests that DSHM is suitable as a basis for learning both over the short term and over the lifetime of the agent, and as a basis for both procedural and declarative memory. We argue that cognition needs to be understood at both the symbolic and sub-symbolic levels, and demonstrate that DSHM intrinsically operates at both of these levels of description. In order to situate DSHM in a familiar context, we discuss the relationship between DSHM and ACT-R.

21 citations

Journal ArticleDOI
TL;DR: In this article, a metallographic method for preparing samples and evaluating techniques to quantitatively measure the structure of thermal barrier coatings (TBCs) for turbine applications is proposed.

20 citations


Cited by
01 Jan 2016
The Critique of Pure Reason

998 citations

01 Jan 1991
TL;DR: This chapter contains sections titled Connectionist Representation and Tensor Product Binding: Definition and Examples, and Tensor Product Representation: Properties.
Abstract: This chapter contains sections titled: 1 Introduction, 2 Connectionist Representation and Tensor Product Binding: Definition and Examples, 3 Tensor Product Representation: Properties, 4 Conclusion
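Tensor product binding, the technique this chapter defines, can be sketched with outer products in NumPy. The role/filler names and the dimensionality below are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256

def unit(v):
    """Normalize a vector to unit length."""
    return v / np.linalg.norm(v)

# Random unit vectors for two roles and two fillers
role_agent = unit(rng.normal(size=n))
role_patient = unit(rng.normal(size=n))
filler_dog = unit(rng.normal(size=n))
filler_cat = unit(rng.normal(size=n))

# Tensor product binding: each role/filler pair becomes an outer product,
# and the proposition is the sum of the pairs' tensors (an n x n matrix)
proposition = np.outer(filler_dog, role_agent) + np.outer(filler_cat, role_patient)

# Unbinding: contracting the tensor with a role vector recovers its filler,
# up to noise from the (near-)orthogonality of random roles
retrieved = proposition @ role_agent

sim_dog = retrieved @ filler_dog  # high: the agent role was bound to "dog"
sim_cat = retrieved @ filler_cat  # near zero: "cat" fills a different role
```

Binding two n-vectors here costs n² numbers, and binding k items costs nᵏ; that exponential growth is the limitation of tensor memories that HRRs address by compressing the tensor back to n dimensions.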

515 citations

Journal ArticleDOI
01 Jan 1987
TL;DR: The History of Statistics; The History of Statistics in the 17th and 18th Centuries; The Politics of Large Numbers; Statistics on the Table; Figuring Out the Past; Dicing with Death; How to Lie with Statistics; Annotated Readings in the History of Statistics.
Abstract: The History of Statistics; The History of Statistics in the 17th and 18th Centuries; The Politics of Large Numbers; Statistics on the Table; Figuring Out the Past; Dicing with Death; How to Lie with Statistics; Annotated Readings in the History of Statistics; Past, Present, and Future of Statistical Science; Cartoon Guide to Statistics; Statistics in Psychology; Fisher, Neyman, and the Creation of Classical Statistics; History of the Mathematical Theory of Probability from the Time of Pascal to that of Laplace; Measurement and Statistics on Science and Technology; The Science of Conjecture; Disciplining Statistics; The Seven Pillars of Statistical Wisdom; A History of Mathematical Statistics from 1750 to 1930; The History of Statistics; The History of Statistics; Introducing Statistics; A History of Probability and Statistics and Their Applications before 1750; Making It Count; Statistics in the 21st Century; Modern Statistics for Modern Biology; Classic Problems of Probability; Statistical Thought; The Rise of Statistical Thinking, 1820-1900; Strength in Numbers: The Rising of Academic Statistics Departments in the U.S.; Mathematical Statistics; Statistics on the Table; Statistics and the German State, 1900-1945; Statistics, Public Debate and the State, 1800-1945; Modern Interdisciplinary University Statistics Education; Uncertainty; Probability and Statistics; History, Theory, and Technique of Statistics; Arthur L. Bowley; Handbook of Forensic Statistics; Classic Topics on the History of Modern Mathematical Statistics.

323 citations

Journal ArticleDOI
TL;DR: A key foundational hypothesis in artificial intelligence is that minds are computational entities of a special sort — that is, cognitive systems — that can be implemented through a diversity of physical devices, whether natural brains, traditional general-purpose computers, or other sufficiently functional forms of hardware or wetware.
Abstract: The purpose of this article is to begin the process of engaging the international research community in developing what can be called a standard model of the mind, where the mind we have in mind here is human-like. The notion of a standard model has its roots in physics, where over more than a half-century the international community has developed and tested a standard model that combines much of what is known about particles. This model is assumed to be internally consistent, yet still have major gaps. Its function is to serve as a cumulative reference point for the field while also driving efforts to both extend and break it.

257 citations