
Showing papers in "Psychological Review in 1982"


Journal ArticleDOI
TL;DR: In this paper, a framework for skill acquisition is proposed that includes two major stages in the development of a cognitive skill: a declarative stage in which facts about the skill domain are interpreted and a procedural stage where the domain knowledge is directly embodied in procedures for performing the skill.
Abstract: A framework for skill acquisition is proposed that includes two major stages in the development of a cognitive skill: a declarative stage in which facts about the skill domain are interpreted and a procedural stage in which the domain knowledge is directly embodied in procedures for performing the skill. This general framework has been instantiated in the ACT system in which facts are encoded in a propositional network and procedures are encoded as productions. Knowledge compilation is the process by which the skill transits from the declarative stage to the procedural stage. It consists of the subprocesses of composition, which collapses sequences of productions into single productions, and proceduralization, which embeds factual knowledge into productions. Once proceduralized, further learning processes operate on the skill to make the productions more selective in their range of applications. These processes include generalization, discrimination, and strengthening of productions. Comparisons are made to similar concepts from past learning theories. How these learning mechanisms apply to produce the power law speedup in processing time with practice is discussed.
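
Knowledge compilation lends itself to a concrete illustration. Below is a minimal Python sketch of the composition subprocess, treating productions as (conditions, actions) pairs; the toy column-addition productions and the exact merge rule are illustrative assumptions, not Anderson's actual ACT machinery.

```python
# Illustrative sketch of composition: collapsing a sequence of two
# productions into a single production, as the abstract describes.

def compose(p1, p2):
    """Compose two productions applied in sequence.

    Conditions of p2 that are satisfied by p1's actions are internal to
    the pair, so they drop out of the composed production's conditions.
    """
    c1, a1 = p1
    c2, a2 = p2
    return (c1 | (c2 - a1), a1 | a2)

# Hypothetical productions for a column-addition skill:
p1 = (frozenset({"goal: add column", "digits known"}),
      frozenset({"sum computed"}))
p2 = (frozenset({"sum computed"}),
      frozenset({"sum written", "carry noted"}))

print(compose(p1, p2))
# -> conditions {'goal: add column', 'digits known'},
#    actions {'sum computed', 'sum written', 'carry noted'}
```

Repeated composition shortens the chain of productions a skill requires, which is one route by which the framework produces speedup with practice.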

3,539 citations


Journal ArticleDOI
TL;DR: The interactive activation model of context effects in letter perception is reviewed, elaborated, and tested, and several recent findings that seem to challenge the model are considered and a number of extensions are proposed.
Abstract: The interactive activation model of context effects in letter perception is reviewed, elaborated, and tested. According to the model, context aids the perception of target letters as they are processed in the perceptual system. The implication that the duration and timing of the context in which a letter occurs should greatly influence the perceptibility of the target is confirmed by a series of experiments demonstrating that early or enhanced presentations of word and pronounceable pseudoword contexts greatly increase the perceptibility of target letters. Also according to the model, letters in strings that share several letters with words should be equally perceptible whether they are orthographically regular and pronounceable (SLET) or irregular (SLNT) and should be much more perceptible than letters in contexts that share few letters with any word (XLQJ). This prediction is tested and confirmed. The basic results of all the experiments are accounted for, with some modification of parameters, although there are some discrepancies in detail. Several recent findings that seem to challenge the model are considered and a number of extensions are proposed.
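
A small simulation makes the interactive dynamics concrete. The Python sketch below is a toy reconstruction with an invented four-word vocabulary and made-up parameters (it omits letter-level inhibition and the published parameter values); it shows the signature effect of top-down feedback boosting a degraded letter.

```python
import numpy as np

# Toy interactive activation sketch: letter and word units excite
# consistent units across levels; words inhibit competing words.
# Vocabulary, parameters, and the omission of letter-level inhibition
# are simplifications, not the published model's values.

words = ["ABLE", "TRAP", "TIME", "TAKE"]
letters = [(pos, ch) for pos in range(4) for ch in "ABCDEFGHIJKLMNOPQRSTUVWXYZ"]
L = {unit: i for i, unit in enumerate(letters)}

# C[w, l] = 1 when word w contains letter l at the corresponding position.
C = np.zeros((len(words), len(letters)))
for wi, w in enumerate(words):
    for pos, ch in enumerate(w):
        C[wi, L[(pos, ch)]] = 1.0

decay, excite, inhibit = 0.07, 0.07, 0.04
letter_act = np.zeros(len(letters))
word_act = np.zeros(len(words))

# Bottom-up input for "TAKE" with the letter K degraded.
inp = np.zeros(len(letters))
for pos, ch in enumerate("TAKE"):
    inp[L[(pos, ch)]] = 0.5 if ch != "K" else 0.1

for _ in range(30):
    word_net = excite * (C @ letter_act) - inhibit * (word_act.sum() - word_act)
    letter_net = excite * (C.T @ word_act) + inp
    word_act = np.clip((1 - decay) * word_act + word_net, -0.2, 1.0)
    letter_act = np.clip((1 - decay) * letter_act + letter_net, -0.2, 1.0)

# Feedback from the word level supports the degraded K (third letter):
print(letter_act[L[(2, "K")]])
```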

1,162 citations


Journal ArticleDOI
TL;DR: A theory for the storage and retrieval of item and associative information is presented, and expressions for signal-to-noise ratio and relative efficiency are derived.
Abstract: A theory for the storage and retrieval of item and associative information is presented. In the theory, items or events are represented as random vectors. Convolution is used as the storage operation, and correlation is used as the retrieval operation. A distributed-memory system is assumed; all information is stored in a common memory vector. The theory applies to both recognition and recall and covers both accuracy and latency. Noise in the decision stage necessitates a two-criterion decision system, and over time the criteria converge until a decision is reached. Performance is predicted from the moments (expectation and variance) of the similarity distributions, and these can be derived from the theory. Several alternative models with varying degrees of distributed memory are considered, and expressions for signal-to-noise ratio and relative efficiency are derived.
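
The storage and retrieval operations are easy to state concretely. The following Python sketch stores one associative pair by circular convolution and retrieves it by circular correlation; the vector length and the FFT implementation are illustrative choices, not part of the theory.

```python
import numpy as np

# Sketch of convolution storage and correlation retrieval for a single
# associated pair, held in one common memory vector.

n = 1024
rng = np.random.default_rng(0)
a = rng.normal(0, 1 / np.sqrt(n), n)   # random item vectors, E[a.a] ~ 1
b = rng.normal(0, 1 / np.sqrt(n), n)

def cconv(x, y):   # circular convolution: the storage operation
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def ccorr(x, y):   # circular correlation: the retrieval operation
    return np.real(np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(y)))

memory = cconv(a, b)       # all information stored in one vector
b_hat = ccorr(a, memory)   # cue with a to retrieve a noisy copy of b

# Similarity of the retrieved trace to the target vs. an unrelated item:
print(np.dot(b_hat, b))                                # high: resembles b
print(np.dot(b_hat, rng.normal(0, 1 / np.sqrt(n), n)))  # near zero
```

The retrieved vector is a noisy copy of the target, which is why the theory predicts performance from the moments of similarity distributions rather than from exact matches.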

933 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the cognitive representation of harmonic and tonal structure in Western music using a tone profile technique and found that the perceived relations between chords and keys and between different keys are mediated through an internal representation of the hierarchy of tonal functions of single tones in music.
Abstract: The cognitive representation of harmonic and tonal structure in Western music is investigated using a tone-profile technique. In this method listeners rate how well single tones (any one of the 12 tones of the chromatic scale) follow a musical element such as a scale, chord, or cadence. Very stable rating profiles reflecting the tonal hierarchies in major and minor keys are obtained, which, when intercorrelated and analyzed using multidimensional scaling, produce a four-dimensional spatial map of the distances between keys. The keys are located on the surface of a torus, in which the circle of fifths and the parallel and relative relations between major and minor keys are represented. In addition, single chords (major, minor, diminished, and dominant seventh) are found to be closely associated with the major and minor keys in which they play harmonic functions. The developing and changing sense of key during sequences of chords is traced by obtaining probe tone ratings following each chord in 10 different sequences, 8 of which contain modulations (changes) between keys. Modulations between closely related keys are found to be effected more immediately than are modulations between relatively distant keys. In all cases beyond the initial chord, the sense of the prevailing key is stronger than that produced by the last heard chord in isolation. Thus, listeners integrate harmonic functions over multiple chords, developing a sense of key that may need to be reevaluated as additional chords are sounded. It is suggested that the perceived relations between chords and keys and between different keys are mediated through an internal representation of the hierarchy of tonal functions of single tones in music. Music consists of tones varying in pitch, duration, loudness, and timbre, but the perception of music extends well beyond the registration of these physical attributes of the musical stimulus. Indeed, music contains considerable structure even in the relations that obtain among the individual tones.
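
The probe-tone logic can be sketched in a few lines of Python. The profile values below are the major- and minor-key profiles as commonly reported from this study; using them for key finding by correlation is an illustrative application, not the paper's own analysis (which intercorrelated profiles and applied multidimensional scaling).

```python
import numpy as np

# Correlate a 12-element probe-tone rating profile with the major and
# minor profiles rotated to all 24 keys; the best match estimates the
# prevailing key.

major = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
minor = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

def key_estimate(ratings):
    names = "C C# D D# E F F# G G# A A# B".split()
    scores = {}
    for k in range(12):
        scores[names[k] + " major"] = np.corrcoef(ratings, np.roll(major, k))[0, 1]
        scores[names[k] + " minor"] = np.corrcoef(ratings, np.roll(minor, k))[0, 1]
    return max(scores, key=scores.get)

# A hypothetical rating profile from a G-major context:
print(key_estimate(np.roll(major, 7)))   # -> "G major"
```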

789 citations


Journal ArticleDOI
TL;DR: A broad framework for models of production is outlined that incorporates interactions between syntactic and lexical processing within a limited-capacity processing system, and permits a resolution of contradictions in the literature on pragmatic determinants of constituent order in adult language use.
Abstract: It is widely acknowledged that characteristics of the general information processing system in which sentence formulation occurs may provide constraints on syntax in language use. This paper proposes one possible source of such constraints. Evidence is reviewed indicating that the syntax of sentences may to some degree reflect the transient processing demands of lexical retrieval, suggesting an interaction between syntactic and lexical processing. Specifically, the syntactic structure of utterances appears to be sensitive to the accessibility of lexical information, with phrases containing more accessible information occurring earlier in sentences. The existence of such an interaction argues that the utterance formulation system is not strictly hierarchical, as most current approaches to sentence production imply. A broad framework for models of production is outlined that incorporates these interactions within a limited-capacity processing system. This framework also permits a resolution of contradictions in the literature on pragmatic determinants of constituent order in adult language use.

643 citations


Journal ArticleDOI
TL;DR: This paper shows that the Bem-Funder (1978) template-matching approach did not enhance the search for cross-situational consistency, either in the original data or in an extended replication presented here.
Abstract: Recent efforts to resolve the debate regarding the consistency of social behavior are critically analyzed and reviewed in the light of new data. Even with reliable measures, based on multiple behavior observations aggregated over occasions, mean cross-situational consistency coefficients were of modest magnitude; in contrast, impressive temporal stability was found. Although aggregation of measures over occasions is a useful step in establishing reliability, aggregation of measures over situations bypasses rather than resolves the problem of cross-situational consistency. The Bem-Funder (1978) template-matching approach did not enhance the search for cross-situational consistency either in their original data or in an extended replication presented here. The Bem-Allen (1974) moderator-variable approach also was not found to yield greater cross-situational consistency in the behavior of "some of the people some of the time" either in their original data or in the present study of conscientiousness. Congruent with a cognitive prototype approach, it was proposed and demonstrated that the judgment of trait consistency is strongly related to the temporal stability of highly prototypic behaviors. In contrast, the global impression of consistency may not be strongly related to highly generalized cross-situational consistency, even in prototypic behaviors. Thus, the perception and organization of personality consistencies seem to depend more on the temporal stability of key features than on the observation of cross-situational behavioral consistency, and the former may be easily interpreted as if it were the latter.
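
The point about aggregation can be made with a worked example. The Python sketch below applies the standard Spearman-Brown formula to a hypothetical single-observation reliability; the numbers are invented, not Mischel and Peake's data.

```python
# Worked example of why aggregation over occasions raises reliability
# while leaving cross-situational consistency untouched. The numbers
# are hypothetical.

def spearman_brown(r_single, k):
    """Reliability of an aggregate of k parallel observations."""
    return k * r_single / (1 + (k - 1) * r_single)

r_single = 0.30   # assumed reliability of a single behavior observation
for k in (1, 5, 10, 20):
    print(k, round(spearman_brown(r_single, k), 2))
# 1 0.3, 5 0.68, 10 0.81, 20 0.9: each situation's measure becomes
# reliable, yet the correlation *between* situations can remain modest.
```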

588 citations


Journal ArticleDOI
TL;DR: An activation-verification model for letter and word recognition yielded predictions of two-alternative forced-choice performance for 864 individual stimuli that were either words, orthographically regular nonwords, or orthographically irregular nonwords.
Abstract: An activation-verification model for letter and word recognition yielded predictions of two-alternative forced-choice performance for 864 individual stimuli that were either words, orthographically regular nonwords, or orthographically irregular nonwords. The encoding algorithm (programmed in APL) uses empirically determined confusion matrices to activate units in both an alphabetum and a lexicon. In general, predicted performance is enhanced when decisions are based on lexical information, because activity in the lexicon tends to constrain the identity of test letters more than the activity in the alphabetum. Thus, the model predicts large advantages of words over irregular nonwords, and smaller advantages of words over regular nonwords. The predicted differences are close to those obtained in a number of experiments and clearly demonstrate that the effects of manipulating lexicality and orthography can be predicted on the basis of lexical constraint alone. Furthermore, within each class (word, regular nonword, irregular nonword) there are significant correlations between the simulated and obtained performance on individual items. Our activation-verification model is contrasted with McClelland and Rumelhart's (1981) interactive activation model.
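
The encoding step was originally programmed in APL; the Python sketch below is an illustrative reconstruction with a toy four-letter confusion matrix and a four-word lexicon, showing how lexical activity can constrain letter identity more tightly than the alphabetum alone.

```python
import numpy as np

# Illustrative reconstruction of the encoding step. The confusion
# matrix and lexicon are toy assumptions, not the empirically
# determined ones used in the paper.

alphabet = "ACRT"
A = {ch: i for i, ch in enumerate(alphabet)}

# confusion[i, j]: probability that presented letter i activates unit j
# in the alphabetum.
confusion = np.array([[0.7, 0.1, 0.1, 0.1],
                      [0.1, 0.7, 0.1, 0.1],
                      [0.1, 0.1, 0.7, 0.1],
                      [0.1, 0.1, 0.1, 0.7]])

lexicon = ["CAT", "CAR", "ART", "RAT"]

def word_activations(stimulus):
    """Joint evidence for each word given per-position letter activity."""
    letter_act = np.array([confusion[A[ch]] for ch in stimulus])
    return {w: float(np.prod([letter_act[p, A[ch]] for p, ch in enumerate(w)]))
            for w in lexicon}

# Lexical activity constrains letter identity more than single letter
# units do: for "CAT" the word CAT dominates its orthographic neighbors.
print(word_activations("CAT"))
```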

544 citations


Journal ArticleDOI
TL;DR: A unified processing framework is suggested wherein attentional and orienting subsystems coexist in a complementary relationship that controls the adaptive self-organization of internal representations in response to expected and unexpected events.
Abstract: Some recent formal models of Pavlovian and instrumental conditioning contain internal paradoxes that restrict their predictive power. These paradoxes can be traced to an inadequate formulation of how mechanisms of short-term memory and long-term memory work together to control the shifting balance between the processing of expected and unexpected events. Once this formulation is strengthened, a unified processing framework is suggested wherein attentional and orienting subsystems coexist in a complementary relationship that controls the adaptive self-organization of internal representations in response to expected and unexpected events. In this framework, conditioning and attentional constructs can be more directly validated by interdisciplinary paradigms in which seemingly disparate phenomena can be shown to share similar physiological and pharmacological mechanisms. A model of cholinergic-catecholaminergic interactions suggests how drive, reinforcer, and arousal inputs regulate motivational baseline, hysteresis, and rebound, with the hippocampus as a final common path. Extinction, conditioned emotional responses, conditioned avoidance responses, secondary conditioning, and inverted U effects also occur. A similar design in sensory and cognitive representations suggests how short-term memory reset and attentional resonance occur and are related to evoked potentials such as N200, P300, and contingent negative variation (CNV). Competitive feedback properties such as pattern matching, contrast enhancement, and normalization of short-term memory patterns make possible the hypothesis testing procedures that search for and define new internal representations in response to unexpected events. Long-term memory traces regulate adaptive filtering, expectancy learning, conditioned reinforcer learning, incentive motivational learning, and habit learning. When these mechanisms act together, conditioning phenomena such as overshadowing, unblocking, latent inhibition, overexpectation, and behavioral contrast emerge.
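
One ingredient named in the abstract, normalization of short-term memory patterns by competitive feedback, has a compact closed form. The Python sketch below computes the equilibrium of a shunting on-center, off-surround network; the parameter values are illustrative, not Grossberg's.

```python
import numpy as np

# Shunting on-center, off-surround competition: each unit is excited by
# its own input and inhibited by the others, with activities bounded by
# the shunting terms.

def shunting_equilibrium(I, A=1.0, B=1.0):
    """Equilibrium of dx_i/dt = -A*x_i + (B - x_i)*I_i - x_i*sum(I_j, j != i).

    Setting the derivative to zero gives x_i = B*I_i / (A + sum(I)):
    each activity reports the *relative* input, and total activity stays
    bounded as inputs grow (normalization).
    """
    I = np.asarray(I, dtype=float)
    return B * I / (A + I.sum())

print(shunting_equilibrium([1, 2, 3]))      # pattern at low intensity
print(shunting_equilibrium([10, 20, 30]))   # same ratios, total bounded
```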


Journal ArticleDOI
TL;DR: A two-dimensional "harmonic map," obtained by an affine transformation of the melodic map, provides for optimally compact representations of chords and harmonic relations; it is isomorphic to the toroidal structure that Krumhansl and Kessler (1982) show to represent the psychological relations among musical keys.
Abstract: Rectilinear scales of pitch can account for the similarity of tones close together in frequency but not for the heightened relations at special intervals, such as the octave or perfect fifth, that arise when the tones are interpreted musically. Increasingly adequate accounts of musical pitch are provided by increasingly generalized, geometrically regular helical structures: a simple helix, a double helix, and a double helix wound around a torus in four dimensions or around a higher order helical cylinder in five dimensions. A two-dimensional "melodic map" of these double-helical structures provides for optimally compact representations of musical scales and melodies. A two-dimensional "harmonic map," obtained by an affine transformation of the melodic map, provides for optimally compact representations of chords and harmonic relations; moreover, it is isomorphic to the toroidal structure that Krumhansl and Kessler (1982) show to represent the psychological relations among musical keys.
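
The simplest structure in this hierarchy is easy to compute. The Python sketch below maps pitch onto a helix making one turn per octave, so octave-equivalent tones share a chroma position; the radius and rise are illustrative scaling choices.

```python
import numpy as np

# A pitch helix with one turn per octave: octave-equivalent tones lie
# on a common vertical line through the chroma circle.

def simple_helix(pitch_in_octaves, radius=1.0, rise=1.0):
    angle = 2 * np.pi * pitch_in_octaves
    return np.array([radius * np.cos(angle),
                     radius * np.sin(angle),
                     rise * pitch_in_octaves])

c4, c5, g4 = simple_helix(0.0), simple_helix(1.0), simple_helix(7 / 12)
print(np.linalg.norm(c4[:2] - c5[:2]))   # ~0: octaves share a chroma angle
print(np.linalg.norm(c4 - g4))           # the fifth gets no special status
# The simple helix captures octave equivalence but leaves the perfect
# fifth unprivileged; that is what motivates the double helix and the
# higher-dimensional windings described in the abstract.
```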


Journal ArticleDOI
Drazen Prelec1
TL;DR: It is argued that insensitivity to marginal variables undermines not only the specific hypothesis of reinforcement-rate maximization but also the more general theories of value maximization developed by Rachlin, Staddon, and others.
Abstract: A theory of hyperbolic feedback functions for schedules of reinforcement is developed, followed by an analysis of matching and maximizing behavior in an environment characterized by such feedback functions. The hyperbolic function classifies schedules along two dimensions: one that measures the time and one that measures the responses that are needed to collect a unit of reinforcement. Among other results it is shown that (a) both response rules predict pairwise linearity, a condition which states that absolute rates of response on any two schedules are mutually constrained by a linear function, (b) matching and maximizing rules predict identical behavior if and only if the predictions of either one are consistent with Luce's choice axiom, and (c) the hyperbolic feedback function is preserved under aggregation of response classes. The evidence collected from single and concurrent schedules of reinforcement strongly favors the matching interpretation of equilibrium behavior, as subjects do not seem to be influenced by the marginal trade-offs that define the maximizing behavior distribution. It is argued that insensitivity to marginal variables undermines not only the specific hypothesis of reinforcement-rate maximization but also the more general theories of value maximization developed by Rachlin, Staddon, and others.
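
The feedback function and the two response rules can be compared numerically. In the Python sketch below, a hyperbolic feedback function R(B) = B / (n + t*B) encodes the two scheduling dimensions (t for time and n for responses per unit of reinforcement); the parameter values and the fixed response budget are illustrative assumptions.

```python
import numpy as np

# Compare matching and maximizing allocations of a fixed response
# budget across two concurrent schedules with hyperbolic feedback.

def R(B, t, n):
    """Obtained reinforcement rate at response rate B."""
    return B / (n + t * B) if B > 0 else 0.0

t1, n1 = 30.0, 5.0   # schedule 1: richer (less time per reinforcer)
t2, n2 = 60.0, 5.0   # schedule 2: leaner
budget = 2.0         # total responses per second to allocate

grid = np.linspace(0.001, budget - 0.001, 10_000)
total = np.array([R(b, t1, n1) + R(budget - b, t2, n2) for b in grid])
b_max = grid[total.argmax()]   # allocation maximizing overall rate

# Matching: B1/B2 = R1/R2, found as the root of the matching residual.
resid = np.array([b * R(budget - b, t2, n2) - (budget - b) * R(b, t1, n1)
                  for b in grid])
b_match = grid[np.abs(resid).argmin()]

print(f"maximizing: B1 = {b_max:.3f}, matching: B1 = {b_match:.3f}")
```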


Journal ArticleDOI
TL;DR: McClelland's (1979) cascade model is investigated, and it is shown that the model does not have a well-defined reaction time (RT) distribution function because it always predicts a nonzero probability that a response never occurs.
Abstract: McClelland's (1979) cascade model is investigated, and it is shown that the model does not have a well-defined reaction time (RT) distribution function because it always predicts a nonzero probability that a response never occurs. By conditioning on the event that a response does occur, RT density and distribution functions are derived, thus allowing most RT statistics to be computed directly and eliminating the need for computer simulations. Using these results, an investigation of the model revealed that (a) it predicts mean RT additivity in most cases of pure insertion or selective influence; (b) it predicts only a very small increase in standard deviations as mean RT increases; and (c) it does not mimic the distribution of discrete-stage models that have a serial stage with an exponentially distributed duration. Recently, McClelland (1979) proposed a continuous-time linear systems model of simple cognitive processes based on sequential banks of parallel integrators. This model, referred to by McClelland as the cascade model, exhibits some potentially very interesting properties. For example, McClelland argues that under certain conditions it mimics some of the reaction time (RT) additivities characteristic of serial discrete-stage models. Unfortunately, however, rigorous empirical testing of the model is precluded because McClelland (1979) offers no method for computing any of the RT statistics it predicts. The format of this note is as follows: I will show that the model always predicts a nonzero probability that a response never occurs, which means, for example, that it always predicts infinite mean RTs. One way to circumvent this problem is to look only at trials on which a response does occur. By doing this it is possible to derive an RT probability density function predicted by the cascade model. From it, virtually any desired RT statistic can be accurately computed. Some of these (e.g., means and variances) will be examined, with particular regard to how well they correspond to known empirical results.
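
The problematic prediction is easy to reproduce numerically. The Python sketch below computes the step response of a cascade of linear integrators and estimates the probability that a response criterion exceeds the bounded output, in which case no response ever occurs; the rate constants and the normally distributed criterion are illustrative assumptions, not Ashby's exact derivation.

```python
import numpy as np

# Step response of a cascade of first-order linear integrators with
# distinct rate constants k_i:
#   a(t) = A * (1 - sum_i c_i * exp(-k_i t)),  c_i = prod_{j!=i} k_j/(k_j - k_i)
# The output is bounded by A, so any criterion above A is never crossed.

def cascade_output(t, rates, asymptote=1.0):
    rates = np.asarray(rates, dtype=float)
    t = np.asarray(t, dtype=float)
    total = np.zeros_like(t)
    for i, k in enumerate(rates):
        others = np.delete(rates, i)
        c = np.prod(others / (others - k))
        total += c * np.exp(-k * t)
    return asymptote * (1.0 - total)

rates = [0.02, 0.05, 0.11]            # per-ms rate constants, illustrative
t = np.linspace(0, 2000, 20001)
a = cascade_output(t, rates)

rng = np.random.default_rng(1)
criteria = rng.normal(0.9, 0.1, 100_000)     # assumed criterion distribution
p_never = np.mean(criteria >= a.max())       # criterion above the asymptote
print(f"P(no response) ~ {p_never:.3f}")     # nonzero, as the paper shows

# Conditioning on trials where a response occurs yields proper RTs:
responders = criteria[criteria < a.max()][:1000]
rts = np.array([t[np.argmax(a >= c)] for c in responders])
print(rts.mean())
```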


Journal ArticleDOI
TL;DR: The new student of color vision begins with a false sense of security that there are simple computational rules for assigning three-dimensional coordinates to lights and that these same coordinate values—or some close relative—can be used to calculate.
Abstract: The new student of color vision begins with a false sense of security. The student learns that there are simple computational rules for assigning three-dimensional coordinates to lights. This scheme assigns equal coordinates to lights only when the lights appear identical (even though the lights may be physically different). Furthermore, the coordinates assigned to a light (a + b) formed by mixing together lights a and b are simply the sum of the coordinates assigned to light a plus the coordinates assigned to b. Such a scheme for assigning vectors to lights characterizes those instances when different lights have precisely the same color appearance. This leads the initiate to imagine that these same coordinate values, or some close relative, can be used to calculate
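
The "simple computational rules" in question can be written down directly. In the Python sketch below, a light's coordinates are inner products of its spectral power distribution with three matching functions, so additivity over mixtures follows from linearity; the Gaussian matching functions are crude stand-ins, not the CIE functions.

```python
import numpy as np

# Tristimulus-style coordinates: project a spectral power distribution
# onto three matching functions. Linearity of the projection gives the
# additivity rule coords(a + b) = coords(a) + coords(b).

wl = np.arange(400, 701, 5, dtype=float)   # wavelength in nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Three illustrative matching functions (not the CIE functions):
M = np.stack([gauss(600, 40), gauss(550, 40), gauss(450, 25)])

def coords(spd):
    """Three-dimensional coordinates of a light: M @ spd."""
    return M @ spd

a = gauss(520, 15)   # two physically different lights
b = gauss(640, 20)

print(np.allclose(coords(a + b), coords(a) + coords(b)))   # True
```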