
Showing papers on "Perceptual learning" published in 1988


Journal ArticleDOI
TL;DR: It is found that training with isolated words only increased the intelligibility of isolated words, whereas training with sentences increased the intelligibility of both isolated words and sentences, suggesting that perceptual learning depends on the degree to which the training stimuli characterize the underlying structure of the full stimulus set.
Abstract: Speech signals provide an especially interesting and important class of stimuli for studying the effect of stimulus variability on perceptual learning, primarily because of the lack of acoustic–phonetic invariance of the speech signal (e.g., Liberman, Cooper, Shankweiler, & Studdert-Kennedy, 1967). Despite large differences in the acoustic–phonetic structure of speech produced by different talkers, listeners seldom have any difficulty recognizing the speech produced by a novel talker. Although context-dependent and talker-dependent acoustic–phonetic variability has often been viewed as noise that must be stripped away from the speech signal in order to reveal invariant phonetic structures (e.g., Stevens & Blumstein, 1978), it is also possible that this variability serves as an important source of information for the listener, which indicates structural relations among acoustic cues as well as information about the talker (Elman & McClelland, 1986). If the sources of variability in the speech waveform are understood by the listener, this information may play an important role in the perceptual decoding of linguistic segments (see Liberman, 1970). Therefore, if a listener must learn to recognize speech that is either degraded or impoverished, information about acoustic–phonetic variability of the speech signal may be critical to the learning process. Schwab, Nusbaum, and Pisoni (1985) recently demonstrated that moderate amounts of training with low-intelligibility synthetic speech will improve word recognition performance for novel stimuli generated by the same text-to-speech system. Schwab et al. trained subjects by presenting synthetic speech followed by immediate feedback in recognition tasks for words in isolation, for words in fluent meaningful sentences, and for words in fluent semantically anomalous sentences. Subjects trained under these conditions improved significantly in recognition performance for synthetic words in isolation and in sentence contexts compared to subjects who either received no training or received training on the same experimental tasks with natural speech. Thus, the improvement found for subjects trained with synthetic speech could not be ascribed to mere practice with or exposure to the test procedures. In addition, a follow-up study indicated that the effects of training with synthetic speech persisted even after 6 months. Thus, training with synthetic speech produced reliable and long-lasting improvements in the perception of words in isolation and of words in fluent sentences. One interesting aspect of the study reported by Schwab et al. is that subjects were presented with novel words, sentences, and passages on every day of the experiment. Thus, these subjects were presented with a relatively large sample of synthetic speech during training, and as a result, these listeners perceptually sampled much of the structural variability in this “synthetic talker.” The improvements in recognition of the synthetic speech may have been a direct result of learning the variability inherent in the acoustic–phonetic space of the text-to-speech system. On the other hand, listeners may simply have learned new prototypical acoustic–phonetic mappings (see Massaro & Oden, 1980) and ignored the structural relationships among these mappings. Another interesting aspect of the Schwab et al. study is the finding that recognition improved both for words in isolation and for words in fluent speech. 
This finding is of some theoretical relevance because recognizing words in fluent speech presents a problem that is not present when words are presented in isolation: The context-conditioned variability between words and the lack of phonetic independence between adjacent acoustic segments leads to enormous problems for the segmentation of speech into psychologically meaningful units that can be used for recognition. In fluent, continuous speech it is extremely difficult to determine where one word ends and another begins if only acoustic criteria are used (Pisoni, 1985; although cf. Nakatani & Dukes, 1977). Indeed, almost all current models and theories of auditory word recognition assume that word segmentation is a by-product of word recognition. Instead of proposing an explicit segmentation stage that generates word-length patterns that are matched against stored lexical representations in memory, current theories propose that words are recognized one at a time, in the sequence by which they are produced (Cole & Jakimik, 1980; Marslen-Wilson & Welsh, 1978; McClelland & Elman, 1986; Reddy, 1976). These theories claim that there is a lexical basis for segmentation such that recognition of the first word in an utterance determines the end of that word as well as the beginning of the next word. Although none of these models was proposed to address issues surrounding perceptual learning of words, these models suggest that training subjects with isolated words generated by a synthetic speech system should improve the recognition of words in fluent synthetic speech: If listeners recognize isolated words more accurately, word recognition in fluent speech should also improve, assuming that perception of words in fluent speech is a direct consequence of the same recognition processes that operate on isolated words. Conversely, training with fluent synthetic speech should improve performance on isolated words. However, recent evidence from studies using visual stimuli suggests that differences in the perceived structure of training stimuli may lead to the acquisition of different types of perceptual skills. Kolers and Magee (1978) presented inverted printed text in a training task and instructed subjects either to name the individual letters in the text or to read the words. After extensive training, subjects were found to have improved only on the task for which they received training: Attending to letters improved performance with letters but had little effect on reading words; conversely, attending to words improved performance with words but had little effect on naming letters. However, results for visual stimuli may not necessarily apply to speech because of the substantial differences that exist between spatially distributed, discrete printed text and temporally distributed, context-conditioned speech. The present study was carried out to investigate the role of stimulus variability in perceptual learning and the operation of lexical segmentation as a consequence of word recognition. In the perceptual learning study carried out by Schwab et al. (1985), subjects were trained and tested on the same types of linguistic materials, but they were never presented with the same stimuli twice. In the present study, we manipulated the amount of stimulus variability presented to subjects during training and we trained different groups of subjects on different types of linguistic materials. 
Half of the subjects received novel stimuli on each training day, and the other half received a constant training set repeated over and over. The ability to generalize to novel stimuli should indicate how stimulus variability affects perceptual learning of speech. Also, half of the subjects were trained on isolated words, and half were trained on fluent sentences. Transfer of training from one set of materials to the other should indicate the effects of linguistic structure on perceptual learning.
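
To make the training design concrete, here is a minimal Python sketch of the 2 x 2 manipulation described above (training materials: isolated words vs. fluent sentences; stimulus variability: novel items each day vs. a repeated set), with held-out items for transfer testing. The stimulus labels, pool sizes, number of training days, and items per day are hypothetical placeholders, not the materials or procedure actually used in the study.

```python
import random

# Hypothetical stimulus pools; the actual study used synthetic speech from a
# text-to-speech system, not these placeholder labels.
WORD_POOL = [f"word_{i:03d}" for i in range(600)]
SENTENCE_POOL = [f"sentence_{i:03d}" for i in range(600)]

def make_training_schedule(materials, variability, n_days=8, items_per_day=50, seed=0):
    """Build a per-day training schedule.

    materials   : 'words' (isolated words) or 'sentences' (fluent sentences)
    variability : 'novel' (fresh items every day) or 'repeated' (one fixed set)
    """
    rng = random.Random(seed)
    # Hold out the tail of each pool for transfer testing.
    pool = (WORD_POOL if materials == "words" else SENTENCE_POOL)[:500]
    if variability == "repeated":
        fixed = rng.sample(pool, items_per_day)
        return [list(fixed) for _ in range(n_days)]           # same set repeated every day
    shuffled = rng.sample(pool, n_days * items_per_day)        # no item repeats across days
    return [shuffled[d * items_per_day:(d + 1) * items_per_day] for d in range(n_days)]

# The four between-subjects training groups.
groups = [(m, v) for m in ("words", "sentences") for v in ("novel", "repeated")]
schedules = {g: make_training_schedule(*g) for g in groups}

# Transfer tests use held-out items of BOTH types, so generalization to novel
# stimuli and transfer across material types can both be assessed.
transfer_items = {"words": WORD_POOL[500:525], "sentences": SENTENCE_POOL[500:525]}
```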

133 citations


ReportDOI
22 Jan 1988
TL;DR: A series of experiments has explored the perception of conjunctions of features, attempting to determine what makes this difficult or easy; the results suggest that automatization in search is highly specific to the practiced task and has little effect on other perceptual tests.
Abstract: The first year of the grant was spent in setting up the laboratory, and in starting research on a number of different projects. All are concerned with the visual processing of information in the perception of objects. A series of experiments has explored the perception of conjunctions of features, attempting to determine what makes this difficult or easy. A new method (detection of apparent motion) was tested and a modification of feature-integration theory was developed to accommodate the new results. Other projects have been concerned with coding of features, finding evidence for modularity, testing the level of abstraction at which features (such as orientation) are coded, the different media which support the coding of shape, and the space in which they are represented (retinal or three-dimensional). Another project has probed the effects of perceptual learning with extended practice at detecting particular sets of targets; the results suggest that automatization in search is highly specific to the practiced task and has little effect on other perceptual tests. Six graduate students are at present working on projects wholly or partly supported by the grant.

37 citations


Journal ArticleDOI
TL;DR: Impairment of the advanced perceptual skills needed to establish relationships among features of the forms is suggested in the context of key task demands.

28 citations


Journal ArticleDOI
TL;DR: Characteristics of auditory perceptual learning were investigated by monitoring improvements in the identification of tonal patterns ranging from 135 to 540 msec in total duration, suggesting differences in learning strategies or differences in the focusing of auditory attention.
Abstract: Investigations of listeners' perceptions of various auditory stimuli have typically measured the end result of perceptual learning. Most psychoacousticians ignore the training phases of their studies, simply reporting that their data were obtained from "well-practiced listeners." There is usually at least tacit acknowledgment, however, that a listener's performance changes during an experiment. Data collected after some period of training, which are characterized by improved levels of performance and increased consistency and stability, are assumed to more accurately reflect listeners' true sensory capabilities. The goal of the present study was to examine the process of learning to recognize temporally complex stimuli. Sequences of pure tones were constructed with some systematic constraints on the allowable order of individual tonal elements. Improvements in the ability to identify these patterns were monitored over a period of several weeks of listening. In addition to providing information concerning the possible limits on learning to recognize a small set of tonal patterns, this experiment was designed to determine the time course of improvements in stimulus identification and to gain some insight into the qualitative aspects of auditory perceptual learning. Descriptions of identification learning are characterized by long training times and significant differences among individual subjects. In an early study of the time course …
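
As an illustration of the kind of stimulus construction described above, the following Python sketch builds one temporally complex tonal pattern: a short sequence of pure-tone elements whose order obeys a simple constraint. The frequencies, element durations, and the particular ordering rule are hypothetical choices for illustration, not the parameters used in the study.

```python
import numpy as np

SAMPLE_RATE = 44100
FREQS_HZ = [500, 707, 1000, 1414, 2000]   # hypothetical element frequencies

def make_pattern(n_elements=4, element_ms=45, seed=0):
    """Return a waveform for one tonal pattern.

    A simple ordering constraint is imposed for illustration: no element may
    immediately repeat, so every transition is a frequency change.
    """
    rng = np.random.default_rng(seed)
    order = [rng.integers(len(FREQS_HZ))]
    while len(order) < n_elements:
        nxt = rng.integers(len(FREQS_HZ))
        if nxt != order[-1]:               # constraint: no immediate repetition
            order.append(nxt)
    n_samp = int(SAMPLE_RATE * element_ms / 1000)
    t = np.arange(n_samp) / SAMPLE_RATE
    segments = [np.sin(2 * np.pi * FREQS_HZ[i] * t) for i in order]
    return np.concatenate(segments)        # 4 elements x 45 ms = 180 ms total here

pattern = make_pattern()
print(f"pattern duration: {1000 * len(pattern) / SAMPLE_RATE:.0f} ms")
```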

21 citations


Journal ArticleDOI
TL;DR: A standardized psychophysiological therapeutic treatment procedure of 'optimal' physiological activation simultaneously combined with 'analytical-specific perceptual stimulation' is presented.

21 citations


Journal ArticleDOI
Rita Dunn
TL;DR: In this article, the authors provide the research basis for an experimental procedure which permits teachers to use large group instruction while, simultaneously, having each student introduced to the new information through the strongest modality, reinforcing through multisensory assignments, and having the information internalized through application in a creative activity.
Abstract: The purpose of this manuscript is to review the aptitude/treatment/interaction studies concerned with perceptual learning styles and to recommend experimentation which capitalizes on their findings. Those investigations revealed that when students were introduced to difficult, new material through their strongest perception and reinforced through a secondary or tertiary channel, significantly higher test scores resulted. Thus, this manuscript provides the research basis for an experimental procedure which permits teachers to use large‐group instruction while, simultaneously: (a) having each student introduced to the new information through the strongest modality; (b) reinforcing through multisensory assignments; and (c) having the information internalized through application in a creative activity.

18 citations


Journal ArticleDOI
TL;DR: Two tests designed specifically for hearing-impaired students are described, the Test of Visual Perceptual Abilities and the Test of Spatial Perception in Sign Language, which separate students with strong visual perceptual abilities from those whose visual perceptual deficits interfere with their ability to comprehend sign language.
Abstract: This article describes two tests designed specifically for hearing-impaired students, the Test of Visual Perceptual Abilities and the Test of Spatial Perception in Sign Language. Both tests were used in a research project involving 682 hearing-impaired students. Resulting data separate students with strong visual perceptual abilities from those whose visual perceptual deficits interfere with their ability to comprehend sign language. The Test of Visual Perceptual Abilities differs from currently used tests of visual perception in that it contains dynamic stimuli, is administered to groups, and is appropriate for hearing-impaired students, ages 8 through 18 years.

7 citations


Journal ArticleDOI
TL;DR: Some degree of selectivity of preserved learning capabilities in amnesia is found, and it is hypothesized that this selectivity may be determined by several variables, including the type of task, the etiology and site of pathology, and the severity of amnesia.

6 citations


Journal ArticleDOI
01 Oct 1988
TL;DR: In this paper, the authors investigated age-related differences in perceptual learning under conditions of consistent mapping (CM), varied mapping (VM), and context-specific training and found significant differences between young and old adults only under CM training.
Abstract: The focus of the present study was the investigation of age-related differences in perceptual learning under conditions of consistent mapping (CM), varied mapping (VM), and context-specific training. Context-specific training involved conditions where specific target and distractor sets were paired consistently within a condition but were inconsistent across conditions. Eight young (mean age 25) and eight old (mean age 67) subjects participated for 8000 trials of training and 3200 trials of various transfer conditions. The transfer conditions were designed to ascertain the extent to which the subjects had automatized their performance in each of the training conditions. The training results yielded significant differences between young and old adults only under CM training. Performance in the context conditions for young adults mimicked that of the old subjects in the CM condition. The training results suggest that manipulations which disrupt the development of attention-calling strength of stimuli lead to equivalent performance for young and old adults. The transfer results provide similar information. It is proposed that the ability to “strengthen” target information is disrupted in older adults. Based on our previous and the present findings, processing principles are presented which outline important differential considerations for training young and/or older adults.
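
The following Python sketch illustrates, under hypothetical stimuli and set sizes, how trials could be generated for the three training conditions named above: consistent mapping (targets never serve as distractors), varied mapping (items swap roles across trials), and context-specific mapping (roles are fixed within a context but reversed across contexts). It is a schematic aid, not the authors' procedure.

```python
import random

rng = random.Random(1)
POOL = list("BCDFGHJKLMNP")               # hypothetical stimulus pool

def cm_trial(targets, display_size=4):
    """Consistent mapping: distractors are drawn only from non-target items."""
    distractor_pool = [x for x in POOL if x not in targets]
    target = rng.choice(targets)
    distractors = rng.sample(distractor_pool, display_size - 1)
    return target, distractors

def vm_trial(display_size=4):
    """Varied mapping: an item that is a target now may be a distractor later."""
    items = rng.sample(POOL, display_size)
    return items[0], items[1:]

def context_trial(context, display_size=4):
    """Context-specific: roles are fixed within a context, swapped across contexts."""
    half = len(POOL) // 2
    set_a, set_b = POOL[:half], POOL[half:]
    targets, distractor_pool = (set_a, set_b) if context == "A" else (set_b, set_a)
    return rng.choice(targets), rng.sample(distractor_pool, display_size - 1)

print(cm_trial(targets=["B", "C"]))
print(vm_trial())
print(context_trial("A"), context_trial("B"))
```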

6 citations


Journal ArticleDOI
TL;DR: Analysis showed faster identification by those subjects using their left hands on Series 1, with no hand differences appearing on Series 2 and 3, and significant overall improvement in identification time occurred with practice.
Abstract: The present study investigated the effects of the use of the right and left hands on haptic identification of letters of the alphabet. Each of the 64 right-handed subjects was given three series of randomly ordered presentations of the 26 letters of the alphabet. The subjects were asked to feel each letter and name correctly each letter as quickly but as accurately as possible. Analysis showed faster identification by those subjects using their left hands on Series 1 with no hand differences appearing on Series 2 and 3. Significant overall improvement in identification time occurred with practice. The results were interpreted in terms of a novelty hypothesis of right-hemisphere function and an explanation of perceptual learning of letter identification.

Journal ArticleDOI
TL;DR: Results indicate that performance on search tasks with variably mapped stimuli shows no qualitative changes attributable to manipulation of response format, and that improvement due to consistent mapping (CM) practice is attenuated in the no-only response condition.
Abstract: Interactions of stimulus consistency and type of responding were examined during perceptual learning. Subjects performed hybrid memory-visual search tasks over extended consistent and varied mapping practice. Response conditions required subjects to respond to both the presence and absence of a target, only when a target was present, or only when a target was not present. After training, the subjects were transferred to a different response condition. The results indicate that: (1) performance on search tasks with stimuli that are variably mapped shows no qualitative changes attributable to manipulation of response format; (2) improvement due to consistent mapping (CM) practice is attenuated in the no-only response condition; (3) yes-only CM training attenuates the subjects' ability to transfer to no-only responding; and (4) yes/no CM training leads to the greatest improvement and transfer when compared with other responding conditions. The practice and transfer data support and extend previous research investigating effects of response set in memory/visual search and help to delineate factors that facilitate or inhibit reduction of load effects in memory and visual search.
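
To clarify the response-format manipulation described above, here is a small illustrative sketch of how a target-present or target-absent trial maps onto the required response under the yes/no, yes-only, and no-only conditions. The encoding (returning None for a withheld response) is a hypothetical convention for illustration, not the authors' implementation.

```python
def required_response(target_present: bool, condition: str):
    """Return the response a subject should make, or None for 'withhold'."""
    if condition == "yes/no":
        return "yes" if target_present else "no"
    if condition == "yes-only":
        return "yes" if target_present else None    # withhold on target-absent trials
    if condition == "no-only":
        return None if target_present else "no"     # withhold on target-present trials
    raise ValueError(f"unknown condition: {condition}")

for cond in ("yes/no", "yes-only", "no-only"):
    print(cond, required_response(True, cond), required_response(False, cond))
```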

Book ChapterDOI
01 Jan 1988
TL;DR: A thorough study of congenitally blind persons learning to use a vision substitution system offered several unique opportunities to evaluate central nervous system mechanisms involved in the perceptual development and sensory substitution process.
Abstract: The loss of a major sensory system, such as sight, markedly alters cortical activity. The loss can be considered to produce “brain damage.” Our sensory substitution studies were initiated a number of years ago as a model of brain plasticity; congenitally blind persons were considered a Jacksonian model [Hughlings Jackson emphasized the opportunities for discovery offered by the “... experiments made on the brain by disease.” (excerpts in Clarke and O’Malley, 1968)]. Thus, a thorough study of congenitally blind persons learning to use a vision substitution system offered several unique opportunities:
1. The ability to control and evaluate all aspects of an entirely novel perceptual learning experience, since no relevant visual learning could go on without the use of the substitute receptor matrix (TV camera).
2. The opportunity to evaluate central nervous system mechanisms involved in the perceptual development and sensory substitution process.
3. The opportunity to evaluate the interrelationships of relevant systems, such as the role of motor control (e.g., of camera movement), on spatial localization with a vision substitution system.