scispace - formally typeset
Author

William E. Montague

Bio: William E. Montague is an academic researcher from the University of Illinois at Urbana–Champaign. The author has contributed to research in the topics Recall and Instructional design, has an h-index of 15, and has co-authored 44 publications receiving 4,714 citations.


Cited by
Journal ArticleDOI
TL;DR: An implicit association test (IAT) measures the differential association of two target concepts with an attribute: when instructions oblige highly associated categories to share a response key, performance is faster than when less associated categories share a key.
Abstract: An implicit association test (IAT) measures differential association of 2 target concepts with an attribute. The 2 concepts appear in a 2-choice task (e.g., flower vs. insect names), and the attribute in a 2nd task (e.g., pleasant vs. unpleasant words for an evaluation attribute). When instructions oblige highly associated categories (e.g., flower + pleasant) to share a response key, performance is faster than when less associated categories (e.g., insect + pleasant) share a key. This performance difference implicitly measures differential association of the 2 concepts with the attribute. In 3 experiments, the IAT was sensitive to (a) near-universal evaluative differences (e.g., flower vs. insect), (b) expected individual differences in evaluative associations (Japanese + pleasant vs. Korean + pleasant for Japanese vs. Korean subjects), and (c) consciously disavowed evaluative differences (Black + pleasant vs. White + pleasant for self-described unprejudiced White subjects).
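The performance difference described in the abstract can be expressed as a simple score. The sketch below is an illustrative Python computation with made-up latencies; it follows the spirit of the later D-score idea (difference in mean block latency divided by a pooled standard deviation), not the exact published IAT scoring algorithm, and all numbers are hypothetical.

```python
from statistics import mean, stdev

def iat_d_score(congruent_ms, incongruent_ms):
    """Toy IAT effect: mean latency difference between the incongruent
    block (e.g., insect + pleasant sharing a key) and the congruent
    block (e.g., flower + pleasant), scaled by the standard deviation
    of all latencies. The real scoring procedure adds error penalties
    and trial trimming; this is a simplified illustration.
    """
    all_latencies = list(congruent_ms) + list(incongruent_ms)
    return (mean(incongruent_ms) - mean(congruent_ms)) / stdev(all_latencies)

# Hypothetical response latencies in milliseconds
congruent = [620, 650, 590, 610, 640]      # flower + pleasant share a key
incongruent = [780, 820, 760, 800, 840]    # insect + pleasant share a key
print(round(iat_d_score(congruent, incongruent), 2))
```

A positive score indicates faster responding in the congruent pairing, i.e., a stronger implicit association between the congruent concept and the attribute.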

9,731 citations

Book ChapterDOI
TL;DR: This chapter presents a general theoretical framework of human memory and describes the results of a number of experiments designed to test specific models that can be derived from the overall theory.
Abstract: This chapter presents a general theoretical framework of human memory and describes the results of a number of experiments designed to test specific models that can be derived from the overall theory. This general theoretical framework categorizes the memory system along two major dimensions. The first categorization distinguishes permanent, structural features of the system from control processes that can be readily modified or reprogrammed at the will of the subject. The second categorization divides memory into three structural components: the sensory register, the short-term store, and the long-term store. Incoming sensory information first enters the sensory register, where it resides for a very brief period of time, then decays and is lost. The short-term store is the subject's working memory; it receives selected inputs from the sensory register and also from long-term store. The chapter also discusses the control processes associated with the sensory register. The term control process refers to those processes that are not permanent features of memory, but are instead transient phenomena under the control of the subject; their appearance depends on several factors such as instructional set, the experimental task, and the past history of the subject.
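The two-dimensional framework in the abstract (fixed structural stores versus subject-controlled processes) can be sketched as a toy simulation. All class names, capacities, and method names below are illustrative assumptions for exposition, not part of the theory's formal statement:

```python
from collections import deque

class ModalModel:
    """Toy sketch of the three-store framework: items enter a sensory
    register, decay unless attended, pass into a capacity-limited
    short-term store, and are copied into an unlimited long-term store
    by rehearsal. Capacities and method names are illustrative.
    """
    def __init__(self, sts_capacity=7):
        self.sensory_register = []                       # brief, decaying buffer
        self.short_term_store = deque(maxlen=sts_capacity)  # oldest item displaced
        self.long_term_store = set()                     # effectively unlimited

    def perceive(self, item):
        # structural feature: all incoming input lands in the register
        self.sensory_register.append(item)

    def attend(self):
        # control process: one selected input moves to the short-term
        # store; everything left in the register decays and is lost
        if self.sensory_register:
            self.short_term_store.append(self.sensory_register.pop(0))
        self.sensory_register.clear()

    def rehearse(self):
        # control process: rehearsal transfers STS contents to LTS
        self.long_term_store.update(self.short_term_store)

model = ModalModel(sts_capacity=2)
model.perceive("a")
model.perceive("b")
model.attend()          # "a" enters STS; "b" decays in the register
model.perceive("c")
model.attend()          # "c" enters STS alongside "a"
model.rehearse()        # STS contents copied into LTS
print(model.long_term_store)
```

The point of the sketch is the framework's division of labor: the stores and their transfer paths are fixed structure, while `attend` and `rehearse` are the reprogrammable control processes the chapter emphasizes.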

6,232 citations

Journal ArticleDOI
TL;DR: A perceptual theory of knowledge can implement a fully functional conceptual system while avoiding problems associated with amodal symbol systems and implications for cognition, neuroscience, evolution, development, and artificial intelligence are explored.
Abstract: Prior to the twentieth century, theories of knowledge were inherently perceptual. Since then, developments in logic, statistics, and programming languages have inspired amodal theories that rest on principles fundamentally different from those underlying perception. In addition, perceptual approaches have become widely viewed as untenable because they are assumed to implement recording systems, not conceptual systems. A perceptual theory of knowledge is developed here in the context of current cognitive science and neuroscience. During perceptual experience, association areas in the brain capture bottom-up patterns of activation in sensory-motor areas. Later, in a top-down manner, association areas partially reactivate sensory-motor areas to implement perceptual symbols. The storage and reactivation of perceptual symbols operates at the level of perceptual components, not at the level of holistic perceptual experiences. Through the use of selective attention, schematic representations of perceptual components are extracted from experience and stored in memory (e.g., individual memories of green, purr, hot). As memories of the same component become organized around a common frame, they implement a simulator that produces limitless simulations of the component (e.g., simulations of purr). Not only do such simulators develop for aspects of sensory experience, they also develop for aspects of proprioception (e.g., lift, run) and introspection (e.g., compare, memory, happy, hungry). Once established, these simulators implement a basic conceptual system that represents types, supports categorization, and produces categorical inferences. These simulators further support productivity, propositions, and abstract concepts, thereby implementing a fully functional conceptual system. Productivity results from integrating simulators combinatorially and recursively to produce complex simulations. Propositions result from binding simulators to perceived individuals to represent type-token relations. Abstract concepts are grounded in complex simulations of combined physical and introspective events. Thus, a perceptual theory of knowledge can implement a fully functional conceptual system while avoiding problems associated with amodal symbol systems. Implications for cognition, neuroscience, evolution, development, and artificial intelligence are explored.

5,259 citations

Journal ArticleDOI
TL;DR: In this article, Kluger and DeNisi analyzed all the major reasons for rejecting a paper from their meta-analysis, even though the decision to exclude a paper was made at the first identification of a missing inclusion criterion.
Abstract: …the total number of papers may exceed 10,000. Nevertheless, cost considerations forced us to consider mostly published papers and technical reports in English. [Footnote 4: Formula 4 in Seifert (1991) is in error; a multiplier of n, the cell size, is missing in the numerator.] [Footnote 5: Unfortunately, the technique of meta-analysis cannot at present be applied to such effects, because the distribution of d is based on a sampling of people, whereas the statistics of techniques such as ARIMA are based on the distribution of a sampling of observations in the time domain, regardless of the size of the people sample involved (i.e., there is no way to compare a sample of 100 points in time with a sample of 100 people). That is, a sample of 100 points in time has the same degrees of freedom whether it was based on an observation of 1 person or of 1,000 people.] From the papers we reviewed, only 131 (5%) met the criteria for inclusion. We were concerned that, given the small percentage of usable papers, our conclusions might not fairly represent the larger body of relevant literature. Therefore, we analyzed all the major reasons for rejecting a paper from the meta-analysis, even though the decision to exclude a paper came at the first identification of a missing inclusion criterion. This analysis showed the presence of review articles, interventions of natural feedback removal, and papers that merely discuss feedback, which in turn suggests that the included studies represent 10-15% of the empirical FI literature. However, this analysis also showed that approximately 37% of the papers we considered manipulated feedback without a control group and that 16% reported confounded treatments; that is, roughly two thirds of the empirical FI literature cannot shed light on the question of FI effects on performance, a fact that requires attention from future FI researchers. From the 131 usable papers (see references with asterisks), 607 effect sizes were extracted.

These effects were based on 12,652 participants and 23,663 observations (reflecting multiple observations per participant). The average sample size per effect was 39 participants. The distribution of the effect sizes is presented in Figure 1. The weighted mean (weighted by sample size) of this distribution is 0.41, suggesting that, on average, FI has a moderate positive effect on performance. However, over 38% of the effects were negative (see Figure 1). The weighted variance of this distribution is 0.97, whereas the estimate of the sampling error variance is only 0.09. A potential problem in meta-analyses is a violation of the assumption of independence. Such a violation occurs either when multiple observations are taken from the same study (Rosenthal, 1984) or when several papers are authored by the same person (Wolf, 1986). In the present investigation, there were 91 effects derived from the laboratory experiments reported by Mikulincer (e.g., 1988a, 1988b). This raises the possibility that the average effect size is biased, because his studies manipulated extreme negative FIs and used similar tasks. In fact, the weighted average d in Mikulincer's studies was -0.39, whereas in the remainder of the…
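The summary statistics quoted above (a sample-size-weighted mean of 0.41 and a weighted variance of 0.97 across 607 effects) follow from standard weighted computations. A minimal sketch, using hypothetical effect sizes and sample sizes rather than the paper's data:

```python
def weighted_mean_effect(effects, ns):
    """Sample-size-weighted mean of effect sizes (d values),
    as used to summarize a meta-analytic distribution."""
    total_n = sum(ns)
    return sum(d * n for d, n in zip(effects, ns)) / total_n

def weighted_variance_effect(effects, ns):
    """Sample-size-weighted variance of effect sizes around
    the weighted mean."""
    wm = weighted_mean_effect(effects, ns)
    total_n = sum(ns)
    return sum(n * (d - wm) ** 2 for d, n in zip(effects, ns)) / total_n

# Hypothetical example: three studies with effects d and sample sizes n
effects = [0.8, -0.2, 0.5]
ns = [20, 50, 30]
print(round(weighted_mean_effect(effects, ns), 2))      # small studies count less
print(round(weighted_variance_effect(effects, ns), 4))
```

When the weighted variance greatly exceeds the expected sampling error variance, as in the paper (0.97 versus 0.09), the effects are heterogeneous and moderator analysis is warranted.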

5,126 citations

Journal ArticleDOI
TL;DR: In this paper, the authors define basic objects as those categories that carry the most information, possess the highest category cue validity, and are thus the most differentiated from one another.

5,074 citations