
Showing papers on "Perceptual learning" published in 2011


Journal ArticleDOI
TL;DR: It is reported that peripheral vision is limited with regard to pattern categorization by a distinctly lower representational complexity and processing speed, and that these cognitive limitations appear to be as significant as those imposed on low-level functions and by way of crowding.
Abstract: We summarize the various strands of research on peripheral vision and relate them to theories of form perception. After a historical overview, we describe quantifications of the cortical magnification hypothesis, including an extension of Schwartz's cortical mapping function. The merits of this concept are considered across a wide range of psychophysical tasks, followed by a discussion of its limitations and the need for non-spatial scaling. We also review the eccentricity dependence of other low-level functions including reaction time, temporal resolution, and spatial summation, as well as perimetric methods. A central topic is then the recognition of characters in peripheral vision, both at low and high levels of contrast, and the impact of surrounding contours known as crowding. We demonstrate how Bouma's law, specifying the critical distance for the onset of crowding, can be stated in terms of the retinocortical mapping. The recognition of more complex stimuli, like textures, faces, and scenes, reveals a substantial impact of mid-level vision and cognitive factors. We further consider eccentricity-dependent limitations of learning, both at the level of perceptual learning and pattern category learning. Generic limitations of extrafoveal vision are observed for the latter in categorization tasks involving multiple stimulus classes. Finally, models of peripheral form vision are discussed. We report that peripheral vision is limited with regard to pattern categorization by a distinctly lower representational complexity and processing speed. Taken together, the limitations of cognitive processing in peripheral vision appear to be as significant as those imposed on low-level functions and by way of crowding.
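As context for the crowding discussion, Bouma's law has a simple quantitative form. The abstract gives no numbers, so the constant below is the commonly cited value rather than the paper's own estimate:

$$ \Delta_c \approx b\,\varphi, \qquad b \approx 0.5 $$

where $\varphi$ is target eccentricity and $\Delta_c$ is the critical center-to-center target-to-flanker spacing (both in degrees of visual angle) below which crowding sets in. For example, a letter at 10° eccentricity is crowded by contours closer than roughly 5°.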

719 citations


Journal ArticleDOI
09 Dec 2011-Science
TL;DR: It is suggested that early visual areas are so plastic that mere inductions of activity patterns are sufficient to cause visual perceptual learning (VPL).
Abstract: It is controversial whether the adult primate early visual cortex is sufficiently plastic to cause visual perceptual learning (VPL). The controversy occurs partially because most VPL studies have examined correlations between behavioral and neural activity changes rather than cause-and-effect relationships. With an online-feedback method that uses decoded functional magnetic resonance imaging (fMRI) signals, we induced activity patterns only in early visual cortex corresponding to an orientation without stimulus presentation or participants' awareness of what was to be learned. The induced activation caused VPL specific to the orientation. These results suggest that early visual areas are so plastic that mere inductions of activity patterns are sufficient to cause VPL. This technique can induce plasticity in a highly selective manner, potentially leading to powerful training and rehabilitative protocols.

425 citations


Journal ArticleDOI
TL;DR: The results are consistent with an account of perceptual learning according to which visual processing is remodeled by the brain using sensory information acquired during task performance, a process that may lead to perceptual overfitting and over-specificity.

365 citations


Journal ArticleDOI
TL;DR: The principal goal of the present study was to verify the possibility of inducing differential plasticity effects using two tES approaches [i.e., direct current stimulation (tDCS) and random noise stimulation (tRNS)] during the execution of a visual perceptual learning task.
Abstract: Perceptual learning is considered a manifestation of neural plasticity in the human brain. We investigated brain plasticity mechanisms in a learning task using noninvasive transcranial electrical stimulation (tES). We hypothesized that different types of tES would have varying actions on the nervous system, which would result in different efficacies of neural plasticity modulation. Thus, the principal goal of the present study was to verify the possibility of inducing differential plasticity effects using two tES approaches [i.e., direct current stimulation (tDCS) and random noise stimulation (tRNS)] during the execution of a visual perceptual learning task.

297 citations


Journal ArticleDOI
25 Aug 2011-Neuron
TL;DR: It is shown that the observed uniform reduction in noise correlations leads to little change in population coding efficiency when all neurons are decoded, suggesting that global changes in correlated noise among sensory neurons may be insufficient to account for perceptual learning.

218 citations


Journal ArticleDOI
14 Apr 2011-Neuron
TL;DR: This study quantified changes in perceptual ability after pairing tones with stimulation of the cholinergic nucleus basalis to induce auditory cortex map plasticity outside of a behavioral context, providing evidence that cortical map plasticity can enhance perceptual learning.

215 citations


Journal ArticleDOI
12 May 2011-Neuron
TL;DR: Results provide strong evidence for perceptual learning-related changes in higher order areas and suggest that perceptual and reward learning are based on a common neurobiological mechanism.

166 citations


Journal ArticleDOI
TL;DR: This work shows that both the neurophysiological and behavioral aspects of perceptual learning can be captured by altering only the feedforward connectivity in a recurrent network of spiking neurons so as to improve probabilistic inference in early visual areas.
Abstract: Extensive training on simple tasks such as fine orientation discrimination results in large improvements in performance, a form of learning known as perceptual learning. Previous models have argued that perceptual learning is due to either sharpening and amplification of tuning curves in early visual areas or to improved probabilistic inference in later visual areas (at the decision stage). However, early theories are inconsistent with the conclusions of psychophysical experiments manipulating external noise, whereas late theories cannot explain the changes in neural responses that have been reported in cortical areas V1 and V4. Here we show that we can capture both the neurophysiological and behavioral aspects of perceptual learning by altering only the feedforward connectivity in a recurrent network of spiking neurons so as to improve probabilistic inference in early visual areas. The resulting network shows modest changes in tuning curves, in line with neurophysiological reports, along with a marked reduction in the amplitude of pairwise noise correlations.
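Since both this model and the noise-correlation study above turn on pairwise noise correlations, a brief sketch of how that quantity is computed may help. This is a generic illustration on synthetic data (all names and parameter values are invented for the example), not the paper's simulation code:

```python
# Pairwise "noise correlations": Pearson correlations of trial-to-trial
# spike counts across repeated presentations of the same stimulus.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_neurons = 200, 50

# Synthetic spike counts: a shared gain fluctuation induces positive
# correlations on top of independent Poisson-like variability.
shared = rng.normal(size=(n_trials, 1))
rates = np.clip(10.0 + 3.0 * shared + rng.normal(size=(n_trials, n_neurons)), 0.1, None)
counts = rng.poisson(lam=rates)

# Correlate neurons across trials (columns = neurons).
r = np.corrcoef(counts, rowvar=False)
pairwise = r[np.triu_indices(n_neurons, k=1)]  # off-diagonal pairs only
print(f"mean pairwise noise correlation: {pairwise.mean():.3f}")
```

In the paper's network, learning reduces the amplitude of these correlations; with data like the above, that would appear as the mean of `pairwise` shrinking toward zero.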

121 citations


Journal ArticleDOI
TL;DR: It is suggested that for risk- or loss-related stimuli, less specificity could be a benefit, as it invokes the same mechanisms that respond quickly and efficiently in the face of danger.
Abstract: Perceptual learning typically lowers discrimination thresholds, improving discrimination. Here the authors find that aversive learning actually increased discrimination thresholds, resulting in stimulus generalization. The authors suggest that less specificity could be a benefit in this kind of situation, as it could speed responses in dangerous situations.

114 citations


Journal ArticleDOI
TL;DR: The behavioral results, mechanisms, physiological basis, computational models, and applications of visual perceptual learning are reviewed.

112 citations


Journal ArticleDOI
TL;DR: The specificity of the learning effect and the lack of changes to the fPRL location and fixation stability suggest that the improvements are likely to represent genuine plasticity of the visual system, despite the observers' older ages coupled with long-standing sensory deficits.
Abstract: Reading is difficult and slow for many low vision patients, especially those with central vision loss who are obligated to use their peripheral retina to read. The leading cause of visual impairment in developed countries is age-related macular degeneration (AMD),1–3 which is also the leading cause of central vision loss. Because reading is the most common clinical complaint as well as the primary goal for patients with central vision loss seeking visual rehabilitation,1,4,5 improving the reading performance of these patients is a key challenge facing low vision rehabilitation.

Previous studies have examined a number of ways to improve reading performance in people with central vision loss. For instance, in low vision clinics, patients are routinely prescribed magnifiers for reading tasks. However, even with magnification, reading speed in people with central vision loss is still lower than that at the normal fovea.5–8 Substantial effort has been invested to determine the mode of text presentation that offers people with central vision loss the fastest reading speed, including page format, scrolling text in the horizontal or the vertical direction, and rapid serial visual presentation (RSVP), where words are presented one at a time on a display. Most studies found no significant differences in reading speed for different text presentation modes for people with central vision loss.9–11 A handful of studies found a small advantage of using RSVP,12 especially if the word presentation rate varied with word length13 or when observers were allowed to adjust their own presentation rate.14 Other attempts have explored whether simple manipulations of text typography and typesetting, such as increasing letter spacing15,16 and line spacing,17 which presumably reduce the crowding effect among text, could improve reading speed. Unfortunately, none of these simple manipulations of text typography or typesetting improves reading speed for people with central vision loss.16,17

In this study, I explored the feasibility of using perceptual learning, a method that has proven to be effective in improving visual functions in normal and amblyopic visual systems, to improve reading speed for people with central vision loss. Perceptual learning is defined as "any relatively permanent and consistent change in the perception of a stimulus array, after practice or experience with this array".18 Practically, perceptual learning is synonymous with "training" or "practice."19 Previous studies have shown that visual performance improves with practice for a variety of tasks,19–25 in younger as well as in older adults,26,27 and in the normal fovea and periphery alike.19,27–31 In addition, perceptual learning has also shown effectiveness in improving visual functions in adults with amblyopia (monocular sensory loss of vision in the absence of an organic origin).32–38 In many cases, adults with amblyopia improved not only on the trained task, but their visual acuities (an untrained task) also improved as a result of training.33–37 Considering the effectiveness of perceptual learning in improving visual functions in the normal visual system and in adults with amblyopia, I asked whether perceptual learning would also be effective in improving reading performance for people with central vision loss.

Clearly, there are many challenges facing the use of perceptual learning to improve visual functions in people with central vision loss. Specifically, the most common cause of central vision loss is AMD,1–3 which primarily afflicts people older than 65 years of age. It is well known that even though the visual performance of older adults can improve with practice, more training may be required before the improvement reaches a plateau,26 and there may be more day-to-day lapses in improvement, which would lead to an overall reduction in the amount of learning.27 Also, in contrast to amblyopia, the majority of people with central vision loss suffer from bilateral vision loss, and their functioning retina may not be healthy; whether these factors would impact the effectiveness of perceptual learning for people with central vision loss is unknown. Hence, despite the promising benefits that perceptual learning can deliver, it remains unclear whether people with central vision loss can benefit from it. To my knowledge, there exists no published paper on using perceptual learning to improve visual functions in people with central vision loss, although previous studies have examined whether reading performance could be improved by training comprehension39 or by training patients to use a CCTV or stand magnifier to read.40,41 Comprehension training is a cognitive task, and the use of a CCTV or stand magnifier requires motor skills, making it unclear whether any improvement from these forms of training represents genuine improvement in the sensory system, which is the basis of perceptual learning.

The goal of this study was to determine the feasibility of using perceptual learning to improve reading speed for people with central vision loss. Previous work has established that reading performance in the normal periphery benefits from perceptual learning based on the following training tasks: identifying random sequences of three letters at various positions across the visual field,19,27,31 performing a lexical decision task,31 and reading.31 The greatest improvement in reading speed was obtained using reading as the training task.31 Consequently, reading was used as the training task in this study.

Journal ArticleDOI
TL;DR: The link between visuo-motor skills, perceptual skills, and handwriting was assessed; the results revealed that improvements after VH training on letter recognition and handwriting quality were higher than improvements after V training.

Journal ArticleDOI
TL;DR: These findings demonstrate that perceptual learning in a coarse discrimination task indeed can change the response properties of a cortical sensory area.

Journal ArticleDOI
TL;DR: This dissociation clearly shows that the functional role of the hippocampus in learning is determined by the domain of the learned association and that the function of the medial temporal lobe system is the processing of contingencies between perceptual features, regardless of the explicit or implicit nature of the ensuing memory.
Abstract: Traditionally, the medial temporal lobe (MTL) was linked to explicit or declarative memory in associative learning. However, recent studies have reported MTL involvement even when volunteers are not consciously aware of the learned contingencies. Therefore, the mechanism of the MTL-related learning process cannot be described sufficiently by the explicit/implicit distinction, and the underlying process in the MTL for associative learning needs a more functional characterization. A possible feature that would allow a functional specification also for implicit learning is the nature of the material that is learned. Given that implicit memory tasks often comprise a combination of perceptual and motor learning, we hypothesized that implicit learning of the perceptual but not the motor component entails MTL activation in these studies. To directly test this hypothesis, we designed a purely perceptual and a purely motor variant of the serial reaction time task. In two groups of human volunteers, behavioral results clearly showed that both variants were learned without awareness. Neuronal recordings using fMRI revealed that bilateral hippocampal activation was observed only for implicit learning of the perceptual sequence, not for the motor sequence. This dissociation clearly shows that the functional role of the hippocampus for learning is determined by the domain of the learned association and that the function of the medial temporal lobe system is the processing of contingencies between perceptual features regardless of the explicit or implicit nature of the ensuing memory.

Journal ArticleDOI
TL;DR: In this article, a whole-system approach was applied to evaluate perception of second-language vowels in two experiments; the results supported the predicted positive association between L2 vocabulary size and L2 vowel perception rather than the general prediction that increased exposure duration leads to improved perception.
Abstract: Improvement in second-language (L2) perception has been posited to occur early in L2 learning when the L2 vocabulary is still small, whereas a large L2 vocabulary curtails perceptual learning (the perceptual assimilation model for SLA [PAM-L2]; Best & Tyler, 2007). This proposition is extended by suggesting that early L2 lexical development facilitates the establishment of phonological categories in a manner analogous to children’s first-language (L1) acquisition before as opposed to after the vocabulary spurt. According to this view, L2 speech should be assimilated more consistently to L1 phonological categories, and cross-boundary contrasts should be discriminated more accurately, by learners with larger L2 vocabularies. To test this proposition, a novel whole-system approach was applied to evaluate perception of L2 vowels in two experiments. In Experiment 1, Japanese learners of Australian English (AusE) with less than 12 weeks of L2 learning in Australia completed labeling and goodness ratings on all AusE vowels, selecting from among all monomoraic and bimoraic Japanese vowels and vowel combinations. They also discriminated four L2 vowel contrasts, representing a range of PAM-L2 contrast types, and completed an L2 vocabulary size assessment. Learners with larger vocabularies had more consistent L2-L1 vowel assimilation and more accurate cross-boundary discrimination than those with smaller vocabularies, supporting the proposition that lexical development assists L2 phonological acquisition. Experiment 2 compared the perception of AusE vowels by Japanese learners after only 4–8 weeks in Australia with their perception after 6–8 months of L2 exposure. The results also supported the predicted positive association between L2 vocabulary size and L2 vowel perception rather than a general prediction of increased exposure duration leading to improved perception.

Journal ArticleDOI
TL;DR: It is argued that the more surprising recent findings are those showing that multisensory experience also influences the subsequent unisensory processing.
Abstract: Multisensory perception has been the focus of intense research in recent years. It is now well established that crossmodal interactions are ubiquitous in perceptual processing and endow the system with improved precision, accuracy, processing speed, etc. While these findings have shed much light on principles and mechanisms of perception, ultimately it is not very surprising that multiple sources of information provide benefits in performance compared to a single source of information. Here, we argue that the more surprising recent findings are those showing that multisensory experience also influences subsequent unisensory processing. For example, exposure to auditory-visual stimuli can change the way auditory or visual stimuli are processed subsequently, even in isolation. We review three sets of findings that represent three different types of learning, ranging from perceptual learning, to sensory recalibration, to associative learning. In all these cases, exposure to multisensory stimuli profoundly influences the subsequent unisensory processing. This diversity of phenomena may suggest that continuous modification of unisensory representations by multisensory relationships may be a general learning strategy used by the brain.

Journal ArticleDOI
22 Dec 2011-Neuron
TL;DR: It is argued that developmental studies must take greater advantage of behavioral benchmarks, which can play a much larger role in guiding experimental design to establish empirical connections among environment, neural development, and perception.

Journal ArticleDOI
TL;DR: The first evidence is provided that practice with specific visual stimuli (gratings) induces long-term potentiation (LTP) of synaptic responses in rat V1, highlighting the notion that learning can occur at processing stages as early as the primary sensory cortices.

Journal ArticleDOI
TL;DR: Can perceptual learning be used to treat amblyopia beyond the critical period of visual development?

Journal ArticleDOI
TL;DR: This study investigates whether training listeners with specific types of NV speech improves intelligibility of vocoded speech with different acoustic characteristics; generalization across frequency regions suggests that learning occurs at a stage of processing at which some abstraction from the physical signal has occurred.
Abstract: Recent work demonstrates that learning to understand noise-vocoded (NV) speech alters sublexical perceptual processes but is enhanced by the simultaneous provision of higher-level, phonological, but not lexical content (Hervais-Adelman, Davis, Johnsrude, & Carlyon, 2008), consistent with top-down learning (Davis, Johnsrude, Hervais-Adelman, Taylor, & McGettigan, 2005; Hervais-Adelman et al., 2008). Here, we investigate whether training listeners with specific types of NV speech improves intelligibility of vocoded speech with different acoustic characteristics. Transfer of perceptual learning would provide evidence for abstraction from variable properties of the speech input. In Experiment 1, we demonstrate that learning of NV speech in one frequency region generalizes to an untrained frequency region. In Experiment 2, we assessed generalization among three carrier signals used to create NV speech: noise bands, pulse trains, and sine waves. Stimuli created using these three carriers possess the same slow, time-varying amplitude information and are equated for naive intelligibility but differ in their temporal fine structure. Perceptual learning generalized partially, but not completely, among different carrier signals. These results delimit the functional and neural locus of perceptual learning of vocoded speech. Generalization across frequency regions suggests that learning occurs at a stage of processing at which some abstraction from the physical signal has occurred, while incomplete transfer across carriers indicates that learning occurs at a stage of processing that is sensitive to acoustic features critical for speech perception (e.g., noise, periodicity).
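As background, noise-vocoding itself is straightforward: the speech signal is split into frequency bands, each band's slow amplitude envelope is extracted, and the envelopes re-modulate a carrier (noise, pulse train, or sine) in the same bands. A minimal sketch follows; the band count, band edges, and filter order are illustrative assumptions, not the stimulus parameters used in the study:

```python
# Minimal noise vocoder: keep each band's envelope, discard its fine structure.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=6, lo=100.0, hi=5000.0):
    edges = np.geomspace(lo, hi, n_bands + 1)   # log-spaced band edges (Hz)
    carrier = np.random.randn(len(speech))      # broadband noise carrier
    out = np.zeros_like(speech, dtype=float)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        env = np.abs(hilbert(band))             # slow amplitude envelope
        noise_band = sosfiltfilt(sos, carrier)  # noise restricted to the band
        out += env * noise_band                 # envelope-modulated carrier
    return out / np.max(np.abs(out))            # normalize to +/-1
```

Swapping the noise carrier for pulse trains or sine waves changes the temporal fine structure while preserving the same envelope information, which is the manipulation behind Experiment 2.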

Journal ArticleDOI
TL;DR: It is shown that listeners use within-speaker variation to accommodate gross categorical variation, which suggests that increasing variation improves the mapping of perceptually mismatched stimuli.

Journal ArticleDOI
TL;DR: The prefrontal cortex is considered essential for learning to perform cognitive tasks, though little is known about how the representation of stimulus properties is altered by learning; the results indicate that mastery of a cognitive task degrades apparent stimulus selectivity as neurons come to represent more abstract, task-related information.
Abstract: The prefrontal cortex is considered essential for learning to perform cognitive tasks, though little is known about how the representation of stimulus properties is altered by learning. To address this issue, we recorded neuronal activity in monkeys before and after training on a task that required visual working memory. After the subjects learned to perform the task, we observed activation of more prefrontal neurons and increased activity during working memory maintenance. The working memory–related increase in firing rate was due mostly to regular-spiking putative pyramidal neurons. Unexpectedly, the selectivity of neurons for stimulus properties and the ability of neurons to discriminate between stimuli decreased: information about stimulus properties was already present in neural firing prior to training, and neuronal selectivity degraded after training in the task. The effect was robust and could not be accounted for by differences in sampling sites, selection of neurons, level of performance, or merely the elapse of time. The results indicate that, in contrast to the effects of perceptual learning, mastery of a cognitive task degrades the apparent stimulus selectivity as neurons represent more abstract information related to the task. This effect is countered by the recruitment of more neurons after training.

Journal ArticleDOI
TL;DR: This work investigates how subjects learn to see initially indiscriminable metacontrast-masked shapes and finds that sensitivity and subjective awareness increase with training but dissociate across visual field locations, indicating that improvements in shape sensitivity involve visual areas up to V4, whereas changes in subjective awareness involve other brain regions.
Abstract: Perceptual learning not only improves sensitivity, but it also changes our subjective experience. However, the question of how these two learning effects relate is largely unexplored. Here we investigate how subjects learn to see initially indiscriminable metacontrast-masked shapes. We find that sensitivity and subjective awareness increase with training. However, sensitivity and subjective awareness dissociate in space: Learning effects on performance are lost when the task is performed at an untrained location in another quadrant, whereas learning effects on subjective awareness are maintained. This finding indicates that improvements in shape sensitivity involve visual areas up to V4, whereas changes in subjective awareness involve other brain regions. Furthermore, subjective awareness dissociates from sensitivity in time: In an early phase of perceptual learning, subjects perform above chance on trials that they rate as subjectively invisible. Later, this phenomenon disappears. Subjective awareness is thus neither necessary nor sufficient for achieving above-chance objective performance.
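The "sensitivity" measure in designs like this is conventionally the signal-detection statistic $d'$; the abstract does not spell out the formula, so take this as the standard convention rather than a detail of the paper:

$$ d' = \Phi^{-1}(H) - \Phi^{-1}(F) $$

where $H$ and $F$ are the hit and false-alarm rates and $\Phi^{-1}$ is the inverse standard normal CDF. Chance performance corresponds to $d' = 0$, so "above-chance performance on subjectively invisible trials" means $d' > 0$ while visibility ratings remain at floor.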

Journal ArticleDOI
TL;DR: This study is not only the first to successfully and unambiguously compare brain activation between perceptual and motor levels of implicit sequence learning, but it also provides new insights into the specific learning functions of the hippocampus and caudate nucleus.
Abstract: The present fMRI study investigated the neural areas involved in implicit perceptual sequence learning. To obtain more insight in the functional contributions of the brain areas, we tracked both the behavioral and neural time course of the learning process, using a perceptual serial color matching task. Next, to investigate whether the neural time course was specific for perceptual information, imaging results were compared to the results of implicit motor sequence learning, previously investigated using an identical serial color matching task. Results indicated that implicit sequences can be acquired by at least two neural systems: the caudate nucleus and the hippocampus, having different operating principles. The caudate nucleus contributed to the implicit sequence learning process for perceptual as well as motor information in a similar and gradual way. The hippocampus, on the other hand, was engaged in a much faster learning process which was more pronounced for the motor compared to the perceptual task. Interestingly, the perceptual and motor learning process occurred on a comparable implicit level, suggesting that consciousness is not the main determinant factor dissociating the hippocampal from the caudate learning system. This study is not only the first to successfully and unambiguously compare brain activation between perceptual and motor levels of implicit sequence learning, it also provides new insights into the specific hippocampal and caudate learning function.

Journal ArticleDOI
01 Jan 2011-Infancy
TL;DR: A theoretical framework is described that addresses perceptual learning in infancy and the manner in which it affects visual organization and development; it identifies five kinds of experiences that induce learning and suggests that they work via attentional and unitization mechanisms to modify visual organization.
Abstract: Pattern perception and organization are critical functions of the visual cognition system. Many organizational processes are available early in life, such that infants as young as 3 months of age are able to readily utilize a variety of cues to organize visual patterns. However, other processes are not readily evident in young infants, and their development involves perceptual learning. We describe a theoretical framework that addresses perceptual learning in infancy and the manner in which it affects visual organization and development. It identifies five kinds of experiences that induce learning, and suggests that they work via attentional and unitization mechanisms to modify visual organization. In addition, the framework proposes that this kind of learning is abstract, domain general, functional at different ages in a qualitatively similar manner, and has a long-term impact on development through a memory reactivation process. Although most models of development assume that experience is fundamental to development, very little is actually known about the process by which experience affects development. The proposed framework is an attempt to account for this process in the domain of perception.

Journal ArticleDOI
31 Oct 2011-PLOS ONE
TL;DR: It is found that learning reduces crowding and improves contrast sensitivity but has no effect on visual acuity (VA); these results have important implications for the rehabilitation of low-vision patients who must use peripheral vision to perform tasks such as reading and refined figure-ground segmentation.
Abstract: We investigated whether lateral masking in the near-periphery, due to inhibitory lateral interactions at an early level of central visual processing, could be weakened by perceptual learning and whether learning transferred to an untrained, higher-level lateral masking known as crowding. The trained task was contrast detection of a Gabor target presented in the near periphery (4°) in the presence of co-oriented and co-aligned high contrast Gabor flankers, which featured different target-to-flankers separations along the vertical axis that varied from 2λ to 8λ. We found both suppressive and facilitatory lateral interactions at target-to-flankers distances (2λ - 4λ and 8λ, respectively) that were larger than those found in the fovea. Training reduces suppression but does not increase facilitation. Most importantly, we found that learning reduces crowding and improves contrast sensitivity, but has no effect on visual acuity (VA). These results suggest a different pattern of connectivity in the periphery with respect to the fovea as well as a different modulation of this connectivity via perceptual learning that not only reduces low-level lateral masking but also reduces crowding. These results have important implications for the rehabilitation of low-vision patients who must use peripheral vision to perform tasks, such as reading and refined figure-ground segmentation, which normal sighted subjects perform in the fovea.
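For readers unfamiliar with the stimulus, a Gabor is a sinusoidal grating under a Gaussian envelope, and target-to-flanker separations are expressed in carrier wavelengths (λ). The sketch below composes a low-contrast target with two high-contrast collinear flankers; the pixel sizes and contrast values are illustrative assumptions, not the study's calibrated parameters:

```python
# Compose a low-contrast Gabor target with two high-contrast collinear
# flankers separated vertically by sep_lambda carrier wavelengths.
import numpy as np

def gabor(size, wavelength, sigma, contrast, theta=0.0):
    """One Gabor patch: cosine carrier under a Gaussian envelope (pixels)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return contrast * np.cos(2 * np.pi * xr / wavelength) * \
           np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))

def paste(canvas, patch, cy, cx):
    """Add a patch onto the canvas, centered at (cy, cx)."""
    h = patch.shape[0] // 2
    canvas[cy - h:cy + h + 1, cx - h:cx + h + 1] += patch

wavelength, sep_lambda = 16, 3            # px per cycle; separation in lambda
patch_size = 4 * wavelength + 1           # patch width in px
sep = int(round(sep_lambda * wavelength))
size = 2 * (sep + patch_size) + 1         # canvas large enough for the flankers
canvas = np.zeros((size, size))           # zero = mean luminance
cy = cx = size // 2

target = gabor(patch_size, wavelength, sigma=wavelength, contrast=0.05)
flanker = gabor(patch_size, wavelength, sigma=wavelength, contrast=0.8)
paste(canvas, target, cy, cx)             # faint central target
paste(canvas, flanker, cy - sep, cx)      # upper flanker
paste(canvas, flanker, cy + sep, cx)      # lower flanker
```

Varying `sep_lambda` from 2 to 8 mirrors the geometry of the trained conditions; the observer's task is to detect the faint central target.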

Journal ArticleDOI
TL;DR: This paper examines whether adults can adapt to novel accents of their native language that contain unfamiliar context-dependent phonological alternations, and explores the mechanism underlying this type of phonological learning.

Journal ArticleDOI
TL;DR: The results speak for an architecture with prelexical analysis of phonological categories to achieve both lexical access and episodic storage of exemplars.

Journal ArticleDOI
TL;DR: The present study pioneers the use of response time distributions in perceptual learning research: 27 observers practiced a visual motion-direction discrimination task with filtered-noise textures for four sessions with feedback.
Abstract: Performance on perceptual tasks improves with practice. Most theories address only accuracy data and tacitly assume that perceptual learning is a monolithic phenomenon. The present study pioneers the use of response time distributions in perceptual learning research. The 27 observers practiced a visual motion-direction discrimination task with filtered-noise textures for four sessions with feedback. Session 5 tested whether the learning effects transferred to the orthogonal direction. The diffusion model (Ratcliff, Psychological Review, 85, 59–108, 1978) achieved good fits to the individual response time distributions from each session and identified two distinct learning mechanisms with markedly different specificities. A stimulus-specific increase in the drift-rate parameter indicated improved sensory input to the decision process, and a stimulus-general decrease in nondecision time variability suggested improved timing of the decision process onset relative to stimulus onset (which was preceded by a beep). A traditional d’ analysis would miss the latter effect, but the diffusion-model analysis identified it in the response time data.
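To make the two learning mechanisms concrete, here is a minimal random-walk simulation of the diffusion model. The parameter values are invented for illustration (the paper reports model fits, not these numbers); practice is mimicked by raising the drift rate and shrinking nondecision-time variability:

```python
# Minimal diffusion-model simulation: evidence accumulates to +/-bound,
# and observed RT = decision time + nondecision time (encoding + motor).
import numpy as np

rng = np.random.default_rng(0)

def trial(drift, bound=1.0, ndt=0.30, ndt_sd=0.05, dt=1e-3, sigma=1.0):
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t + max(0.0, rng.normal(ndt, ndt_sd)), x > 0  # (RT, correct?)

# Stimulus-specific learning: higher drift. Stimulus-general learning:
# less variable nondecision time (better-timed decision onset after the beep).
for label, drift, ndt_sd in [("pre-training", 1.0, 0.08), ("post-training", 1.8, 0.03)]:
    data = [trial(drift, ndt_sd=ndt_sd) for _ in range(2000)]
    rts = np.array([rt for rt, _ in data])
    acc = np.mean([ok for _, ok in data])
    print(f"{label}: mean RT {rts.mean():.3f} s, RT sd {rts.std():.3f}, accuracy {acc:.2f}")
```

An accuracy-only d' analysis would register only the drift-rate change; the reduction in nondecision-time variability is visible only in the response time distribution, which is the study's point.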

Journal ArticleDOI
TL;DR: In this article, the influence of the visual learning style and the ideal L2 self on motivated L2 behavior was examined; students' visual style preference was found to contribute strongly to the forming of a vivid ideal L2 self, which in turn results in a higher level of motivated L2 behavior.
Abstract: In this study, 495 Korean secondary school students' visual, auditory, and kinesthetic preferences, ideal L2 self, motivated L2 behavior, and English proficiency were analyzed based on questionnaire surveys. Identifying the possible effects of the participants' perceptual learning styles and ideal L2 self on their motivated L2 behavior was followed by an investigation of all variables' impact on English proficiency. The influence of the visual learning style and the ideal L2 self on motivated L2 behavior indicates that the students' visual style preference contributes strongly to the forming of a vivid ideal L2 self, which in turn results in a higher level of motivated L2 behavior. As for the effect of the variables on English proficiency, the more motivated L2 behavior the students exhibited, the higher achievement they obtained, but the impact was not notably powerful. Based on these findings, it is suggested that Korean secondary school students' motivation to learn English can be increased by enabling them to realize and develop their ideal L2 self through group discussion or individual journal writing.