Journal ArticleDOI

Sensorimotor synchronization: A review of recent research (2006–2012)

09 Feb 2013, Psychonomic Bulletin & Review (Springer-Verlag), Vol. 20, Iss. 3, pp. 403-452
TL;DR: Much new knowledge about SMS has been acquired in the last 7 years; this article surveys more recent research in what appears to be a burgeoning field.
Abstract: Sensorimotor synchronization (SMS) is the coordination of rhythmic movement with an external rhythm, ranging from finger tapping in time with a metronome to musical ensemble performance. An earlier review (Repp, 2005) covered tapping studies; two additional reviews (Repp, 2006a, b) focused on music performance and on rate limits of SMS, respectively. The present article supplements and extends these earlier reviews by surveying more recent research in what appears to be a burgeoning field. The article comprises four parts, dealing with (1) conventional tapping studies, (2) other forms of moving in synchrony with external rhythms (including dance and nonhuman animals’ synchronization abilities), (3) interpersonal synchronization (including musical ensemble performance), and (4) the neuroscience of SMS. It is evident that much new knowledge about SMS has been acquired in the last 7 years.


Citations
Journal ArticleDOI
TL;DR: The conceptual basis and architecture of ADAM is described, which combines reactive error correction processes (adaptation) with predictive temporal extrapolation processes (anticipation) inspired by the computational neuroscience concept of internal models and creates a novel and promising approach for exploring adaptation and anticipation in SMS.
Abstract: A constantly changing environment requires precise yet flexible timing of movements. Sensorimotor synchronization (SMS) —the temporal coordination of an action with events in a predictable external rhythm— is a fundamental human skill that contributes to optimal sensory-motor control in daily life. A large body of research related to SMS has focused on adaptive error correction mechanisms that support the synchronization of periodic movements (e.g., finger taps) with events in regular pacing sequences. The results of recent studies additionally highlight the importance of anticipatory mechanisms that support temporal prediction in the context of SMS with sequences that contain tempo changes. To investigate the role of adaptation and anticipatory mechanisms in SMS we introduce ADAM: an ADaptation and Anticipation Model. ADAM combines reactive error correction processes (adaptation) with predictive temporal extrapolation processes (anticipation) inspired by the computational neuroscience concept of internal models. The combination of simulations and experimental manipulations based on ADAM creates a novel and promising approach for exploring adaptation and anticipation in SMS. The current paper describes the conceptual basis and architecture of ADAM.
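The contrast between adaptation and anticipation described above can be illustrated with a toy simulation. The sketch below is a minimal illustration only, not the authors' ADAM implementation: the pacing sequence, parameter values, and the simple linear extrapolation rule are illustrative assumptions. A simulated tapper corrects a fraction of its last asynchrony (adaptation) and, optionally, extrapolates the recent tempo trend to predict the next interval (anticipation).

```python
# Minimal sketch of adaptation (linear phase correction) plus anticipation
# (linear extrapolation of recent tempo); not the authors' ADAM code.
import numpy as np

def simulate_taps(onsets, alpha=0.5, anticipate=True, history=3):
    """Generate tap times for a pacing sequence of stimulus onset times (ms)."""
    taps = [onsets[0]]                          # assume the first tap lands on the first onset
    for n in range(1, len(onsets)):
        observed = np.diff(onsets[:n])          # inter-onset intervals heard so far
        if len(observed) == 0:
            predicted_ioi = 600.0               # arbitrary default before any interval is heard
        elif anticipate and len(observed) >= 2:
            recent = observed[-history:]        # anticipation: extrapolate the local tempo trend
            predicted_ioi = recent[-1] + (recent[-1] - recent[0]) / (len(recent) - 1)
        else:
            predicted_ioi = observed[-1]        # adaptation only: assume the last interval repeats
        asynchrony = taps[-1] - onsets[n - 1]                       # previous tap-minus-onset error
        taps.append(taps[-1] + predicted_ioi - alpha * asynchrony)  # phase-corrected next tap
    return np.array(taps)

# Pacing sequence with a gradual tempo change (IOIs shrinking from 600 to 450 ms)
onsets = np.cumsum(np.concatenate(([0.0], np.linspace(600.0, 450.0, 20))))
for flag in (False, True):
    asynchronies = simulate_taps(onsets, anticipate=flag) - onsets
    print(f"anticipation={flag}: mean |asynchrony| = {np.mean(np.abs(asynchronies)):.1f} ms")
```

With adaptation alone the simulated taps lag behind the accelerating sequence, whereas adding the anticipatory extrapolation drives the asynchronies toward zero, which is the qualitative contrast between reactive error correction and predictive extrapolation that the model is meant to capture.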

560 citations

Journal ArticleDOI
TL;DR: It is argued that beat perception is a complex brain function involving temporally-precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement), and it is proposed that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats.
Abstract: Every human culture has some form of music with a beat: a perceived periodic pulse that structures the perception of musical rhythm and which serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally-precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This “action simulation for auditory prediction” (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in nonhuman primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.

323 citations

Journal ArticleDOI
TL;DR: In Embodied Music Cognition and Mediation Technology, Leman examines how recent advances in music cognition and multimedia technology might be unified into something that is simultaneously a theory of music cognition and a blueprint for the music mediation technology of the future; the main mediating principle elaborated in the monograph, which is more intellectual discourse than textbook, is rooted in the belief that musical interactions are socially charged, embodied affairs.
Abstract: O VER THE PAST 25 YEARS OUR UNDERSTANDING of how music interacts with the human mind and brain has grown rapidly, and multimedia technologies have augmented the ways in which we engage with music. In Embodied Music Cognition and Mediation Technology, Marc Leman examines how these developments might be unified into something that is simultaneously a theory of music cognition and a blueprint for the music mediation technology of the future. Mediation refers to the mappings between the intentions and desires on the part of active musical participants and the technology that renders the music. The main mediating principle elaborated on in the monograph, which is more intellectual discourse than textbook, is rooted in the belief that musical interactions are socially charged, embodied affairs. Thus, individuals understand music in the same way that they understand others’ intentions during social interaction, and expressive intentions are attributed to music because patterns of sonic energy evoke bodily gestures that are meaningful to an individual due to his or her personal history as an active participant within a cultural environment. The first three chapters contextualize the embodied music cognition approach. In Chapter 1, Leman sets the scene by making clear the challenges that face those who are concerned with how subjective musical experiences are linked to physical sound patterns. Chapter 2 then deals with the diversity of paradigms that are relevant to the business of interdisciplinary music research. Here Leman adeptly identifies relationships between trends in music research, such as the emergence of systematic musicology, and landmark developments in the discipline of psychology, such as the advent of the Gestalt school and cognitivism. He also charts the progress made in the fields of technology, information theory, and computational modeling, with well selected philosophical matter visited along the way. This serves as a historical prelude to the birth of the modern embodied cognition paradigm, which asserts that, “knowledge does not emerge from passive perception, but from the need to act in an environment” (p. 43). In Chapter 3, Leman expands upon this ecological theme with a view toward music mediation technology, developing the premise that mediating technology should exploit the way in which individuals naturally engage themselves with music. The ensuing chapters delve into the details of what embodied music cognition means. Chapters 4 and 5 build a case for why it makes sense to think about engagement with music in terms of corporeal articulations and action-based ontologies, and how these lead to pleasurable experiences with music, whereas Chapters 6 and 7 describe how this type of framework might play itself out in musical instrument and music retrieval technologies. Throughout these chapters, Leman articulates a framework in which performer/music/listener interactions can be structured/mediated. The framework contends with the formidable challenges inherent in mapping between the intentions, actions, and percepts of individuals and very specific musical signals. Ultimately, the problem is one of identifying relationships between semantics and musical structure, and then specifying the technological requirements for accomplishing the translation from one to the other. Leman breaks the problem down into three interacting conceptual levels, which he talks about as first-person, second-person, and third-person descriptions. 
Third-person descriptions are objective representations of the structural features of the music, whereas first-person descriptions are subjectively assigned semantic labels that refer to expressive intentions. According to Leman, previous approaches to understanding music (e.g., traditional musicology) have fixated upon these two levels of description without giving adequate treatment to the “rules” that govern the mapping between objective representations and subjective interpretations. Such rules are needed to achieve his scientific goal of developing a complete theory of music, as well as his practical goal of developing a successful mediation technology. The key to Leman’s solution is the proposal that an understanding of musical intentions requires third-person and first-person descriptions to be linked via second-person descriptions, which are corporeal in nature. At this intermediate level, expressive bodily gestures from an individual’s repertoire of actions are used to describe moving sonic forms in a manner that the individual can interpret based on his or her …

321 citations

Journal ArticleDOI
TL;DR: It is suggested that a cross-species comparison of behaviours and the neural circuits supporting them sets the stage for a new generation of neurally grounded computational models for beat perception and synchronization.
Abstract: Humans possess an ability to perceive and synchronize movements to the beat in music (‘beat perception and synchronization’), and recent neuroscientific data have offered new insights into this beat-finding capacity at multiple neural levels. Here, we review and compare behavioural and neural data on temporal and sequential processing during beat perception and entrainment tasks in macaques (including direct neural recording and local field potential (LFP)) and humans (including fMRI, EEG and MEG). These abilities rest upon a distributed set of circuits that include the motor cortico-basal-ganglia–thalamo-cortical (mCBGT) circuit, where the supplementary motor cortex (SMA) and the putamen are critical cortical and subcortical nodes, respectively. In addition, a cortical loop between motor and auditory areas, connected through delta and beta oscillatory activity, is deeply involved in these behaviours, with motor regions providing the predictive timing needed for the perception of, and entrainment to, musical rhythms. The neural discharge rate and the LFP oscillatory activity in the gamma- and beta-bands in the putamen and SMA of monkeys are tuned to the duration of intervals produced during a beat synchronization–continuation task (SCT). Hence, the tempo during beat synchronization is represented by different interval-tuned cells that are activated depending on the produced interval. In addition, cells in these areas are tuned to the serial-order elements of the SCT. Thus, the underpinnings of beat synchronization are intrinsically linked to the dynamics of cell populations tuned for duration and serial order throughout the mCBGT. We suggest that a cross-species comparison of behaviours and the neural circuits supporting them sets the stage for a new generation of neurally grounded computational models for beat perception and synchronization.

275 citations


Cites background from "Sensorimotor synchronization: A rev..."

  • ...For example, a myriad of studies have demonstrated that humans rhythmically entrain to isochronous stimuli with almost perfect tempo and phase matching [15]....

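As a toy illustration of what "tempo and phase matching" means quantitatively (the synthetic data and variable names below are assumptions, not taken from the studies cited in the excerpt), tempo matching can be checked by comparing the mean inter-tap interval with the metronome period, and phase matching by the mean tap-to-beat asynchrony:

```python
# Toy check of tempo and phase matching against an isochronous metronome.
import numpy as np

ioi = 500.0                                          # metronome inter-onset interval, ms
onsets = np.arange(20) * ioi                         # isochronous pacing sequence
rng = np.random.default_rng(0)
taps = onsets - 25 + rng.normal(0, 15, onsets.size)  # simulated taps slightly ahead of the beat

tempo_ratio = np.mean(np.diff(taps)) / ioi           # close to 1.0 means the tempo is matched
asynchronies = taps - onsets                         # negative values mean taps lead the beat
print(f"tempo ratio: {tempo_ratio:.3f}")
print(f"mean asynchrony: {asynchronies.mean():.1f} ms (SD {asynchronies.std(ddof=1):.1f} ms)")
```

A small negative mean asynchrony of this kind, with taps slightly preceding the beat, is the pattern commonly reported in finger-tapping studies.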

Journal ArticleDOI
TL;DR: This review article addresses the psychological processes and brain mechanisms that enable rhythmic interpersonal coordination and highlights musical ensemble performance as an ecologically valid yet readily controlled domain for investigating rhythm in joint action.
Abstract: Human interaction often requires simultaneous precision and flexibility in the coordination of rhythmic behaviour between individuals engaged in joint activity, for example, playing a musical duet or dancing with a partner. This review article addresses the psychological processes and brain mechanisms that enable such rhythmic interpersonal coordination. First, an overview is given of research on the cognitive-motor processes that enable individuals to represent joint action goals and to anticipate, attend and adapt to others' actions in real time. Second, the neurophysiological mechanisms that underpin rhythmic interpersonal coordination are sought in studies of sensorimotor and cognitive processes that play a role in the representation and integration of self- and other-related actions within and between individuals' brains. Finally, relationships between social–psychological factors and rhythmic interpersonal coordination are considered from two perspectives, one concerning how social-cognitive tendencies (e.g. empathy) affect coordination, and the other concerning how coordination affects interpersonal affiliation, trust and prosocial behaviour. Our review highlights musical ensemble performance as an ecologically valid yet readily controlled domain for investigating rhythm in joint action.

270 citations

References
Journal ArticleDOI
TL;DR: A dual-stream model of speech processing is outlined that assumes that the ventral stream is largely bilaterally organized — although there are important computational differences between the left- and right-hemisphere systems — and that the dorsal stream is strongly left-hemisphere dominant.
Abstract: Despite decades of research, the functional neuroanatomy of speech processing has been difficult to characterize. A major impediment to progress may have been the failure to consider task effects when mapping speech-related processing systems. We outline a dual-stream model of speech processing that remedies this situation. In this model, a ventral stream processes speech signals for comprehension, and a dorsal stream maps acoustic speech signals to frontal lobe articulatory networks. The model assumes that the ventral stream is largely bilaterally organized--although there are important computational differences between the left- and right-hemisphere systems--and that the dorsal stream is strongly left-hemisphere dominant.

4,234 citations

Book
01 Jun 1990
TL;DR: Auditory Scene Analysis addresses the problem of hearing complex auditory environments, using a series of creative analogies to describe the process required of the human auditory system as it analyzes mixtures of sounds to recover descriptions of individual sounds.
Abstract: Auditory Scene Analysis addresses the problem of hearing complex auditory environments, using a series of creative analogies to describe the process required of the human auditory system as it analyzes mixtures of sounds to recover descriptions of individual sounds. In a unified and comprehensive way, Bregman establishes a theoretical framework that integrates his findings with an unusually wide range of previous research in psychoacoustics, speech perception, music theory and composition, and computer modeling.

2,968 citations

Book
29 Oct 1993
TL;DR: This book presents descriptive and inferential methods for circular data, including the analysis of one or more samples of unimodal data from von Mises distributions and modern resampling techniques for testing and estimation.
Abstract (table of contents):
Preface: 1. The purpose of the book 2. Survey of contents 3. How to use the book 4. Notation, terminology and conventions 5. Acknowledgements
Part I. Introduction
Part II. Descriptive Methods: 2.1. Introduction 2.2. Data display 2.3. Simple summary quantities 2.4. Modifications for axial data
Part III. Models: 3.1. Introduction 3.2. Notation; trigonometric moments 3.3. Probability distributions on the circle
Part IV. Analysis of a Single Sample of Data: 4.1. Introduction 4.2. Exploratory analysis 4.3. Testing a sample of unit vectors for uniformity 4.4. Nonparametric methods for unimodal data 4.5. Statistical analysis of a random sample of unit vectors from a von Mises distribution 4.6. Statistical analysis of a random sample of unit vectors from a multimodal distribution 4.7. Other topics
Part V. Analysis of Two or More Samples, and of Other Experimental Layouts: 5.1. Introduction 5.2. Exploratory analysis 5.3. Nonparametric methods for analysing two or more samples of unimodal data 5.4. Analysis of two or more samples from von Mises distributions 5.5. Analysis of data from more complicated experimental designs
Part VI. Correlation and Regression: 6.1. Introduction 6.2. Linear-circular association and circular-linear association 6.3. Circular-circular association 6.4. Regression models for a circular response variable
Part VII. Analysis of Data with Temporal or Spatial Structure: 7.1. Introduction 7.2. Analysis of temporal data 7.3. Spatial analysis
Part VIII. Some Modern Statistical Techniques for Testing and Estimation: 8.1. Introduction 8.2. Bootstrap methods for confidence intervals and hypothesis tests: general description 8.3. Bootstrap methods for circular data: confidence regions for the mean direction 8.4. Bootstrap methods for circular data: hypothesis tests for mean directions 8.5. Randomisation, or permutation, tests
Appendix A. Tables
Appendix B. Data sets
References
Index

2,323 citations


"Sensorimotor synchronization: A rev..." refers background or methods in this paper

  • ...Using a significant Rayleigh test (Fisher, 1993) as their criterion, Kirschner and Tomasello (2009) showed that children 2.5–4.5 years of age were more likely to spontaneously synchronize with a drumbeat produced by a real person than with one produced by a computer-controlled stick (each in view)…...


  • ...The basic mechanisms of SMS are still studied most conveniently with the finger-tapping paradigm, and the discrete nature of the taps makes the results particularly relevant to music performance....


  • ...Some studies instead employ circular statistics that yield a mean vector (angular deviation) and a circular variance (see Fisher, 1993); this approach is useful when synchronization is poor....

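The excerpts above mention the Rayleigh test and circular descriptive statistics (mean vector, angular deviation, circular variance) from Fisher (1993). The following is a minimal sketch of those computations on synthetic tap times; the data, variable names, and the large-sample approximation for the Rayleigh p-value are illustrative assumptions rather than the procedures of any specific study cited here.

```python
# Circular statistics for synchronization data (after Fisher, 1993): taps are
# converted to phases within the pacing cycle and summarized by the resultant vector.
import numpy as np

ioi = 500.0                                           # pacing interval, ms
onsets = np.arange(30) * ioi
rng = np.random.default_rng(1)
taps = onsets - 20 + rng.normal(0, 30, onsets.size)   # loosely synchronized taps

phases = 2 * np.pi * ((taps - onsets) % ioi) / ioi    # relative phase of each tap, radians
n = phases.size
C, S = np.cos(phases).sum(), np.sin(phases).sum()
R_bar = np.hypot(C, S) / n                            # mean resultant vector length (0..1)
mean_direction = np.arctan2(S, C)                     # mean relative phase (direction)
circ_variance = 1.0 - R_bar                           # circular variance
ang_deviation = np.sqrt(2.0 * circ_variance)          # angular deviation, radians

Z = n * R_bar**2                                      # Rayleigh statistic
p = np.exp(-Z)                                        # first-order large-sample approximation
print(f"mean direction = {np.degrees(mean_direction):.1f} deg, R = {R_bar:.2f}, "
      f"circular variance = {circ_variance:.2f}, angular deviation = {ang_deviation:.2f} rad, "
      f"Rayleigh p ≈ {p:.1e}")
```

A small Rayleigh p-value indicates that the tap phases are not uniformly distributed around the cycle, i.e., that some degree of synchronization occurred; the mean direction then estimates the typical tap position relative to the beat, and the circular variance (or angular deviation) quantifies how tightly the taps cluster, which remains informative even when synchronization is poor.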

Journal ArticleDOI
TL;DR: A theoretical model, using concepts central to the interdisciplinary field of synergetics and nonlinear oscillator theory, is developed, which reproduces the dramatic change in coordinative pattern observed between the hands.
Abstract: Earlier experimental studies by one of us (Kelso, 1981a, 1984) have shown that abrupt phase transitions occur in human hand movements under the influence of scalar changes in cycling frequency. Beyond a critical frequency the originally prepared out-of-phase, antisymmetric mode is replaced by a symmetrical, in-phase mode involving simultaneous activation of homologous muscle groups. Qualitatively, these phase transitions are analogous to gait shifts in animal locomotion as well as phenomena common to other physical and biological systems in which new “modes” or spatiotemporal patterns arise when the system is parametrically scaled beyond its equilibrium state (Haken, 1983). In this paper a theoretical model, using concepts central to the interdisciplinary field of synergetics and nonlinear oscillator theory, is developed, which reproduces (among other features) the dramatic change in coordinative pattern observed between the hands.
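The coordination dynamics summarized above are usually written as a relative-phase equation of the form dφ/dt = -a sin φ - 2b sin 2φ (the standard HKB formulation); anti-phase coordination (φ = π) loses stability when the ratio b/a falls below 1/4, which is how the model reproduces the abrupt switch to in-phase movement. The toy Euler integration below illustrates this under that assumed form, with illustrative parameter values, and is not the authors' original simulation.

```python
# Toy Euler integration of the HKB relative-phase equation
# d(phi)/dt = -a*sin(phi) - 2*b*sin(2*phi); parameter values are illustrative.
import numpy as np

def settle(b_over_a, a=1.0, phi0=np.pi - 0.1, dt=0.01, steps=5000):
    """Integrate the relative phase from near anti-phase and return its final value."""
    b, phi = b_over_a * a, phi0
    for _ in range(steps):
        phi += dt * (-a * np.sin(phi) - 2.0 * b * np.sin(2.0 * phi))
    return phi

for ratio in (1.0, 0.5, 0.2, 0.1):                    # decreasing b/a mimics increasing movement rate
    phi_final = settle(ratio)
    mode = "anti-phase" if abs(phi_final - np.pi) < 0.5 else "in-phase"
    print(f"b/a = {ratio:4.2f}: relative phase settles near {phi_final:4.2f} rad ({mode})")
```

For b/a above the critical value the relative phase stays near π, while below it the anti-phase state destabilizes and the system settles at φ = 0, mirroring the experimentally observed transition from antisymmetric to in-phase movement as cycling frequency increases.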

2,144 citations

Journal ArticleDOI
TL;DR: It is proposed that the brain represents time in a distributed manner and tells the time by detecting the coincidental activation of different neural populations.
Abstract: Time is a fundamental dimension of life. It is crucial for decisions about quantity, speed of movement and rate of return, as well as for motor control in walking, speech, playing or appreciating music, and participating in sports. Traditionally, the way in which time is perceived, represented and estimated has been explained using a pacemaker-accumulator model that is not only straightforward, but also surprisingly powerful in explaining behavioural and biological data. However, recent advances have challenged this traditional view. It is now proposed that the brain represents time in a distributed manner and tells the time by detecting the coincidental activation of different neural populations.
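The "traditional view" referred to above, the pacemaker-accumulator model, can be sketched in a few lines: a pacemaker emits pulses, an accumulator counts them over the interval to be timed, and the count is compared with a remembered criterion. The sketch below is a generic textbook toy with assumed parameter values, not a model taken from the cited paper.

```python
# Generic pacemaker-accumulator toy: count noisy pacemaker pulses over an interval
# and compare the count with a learned criterion (illustrative parameters only).
import numpy as np

def accumulate(duration_s, pulse_rate_hz=50.0, rate_noise=0.1, rng=None):
    """Count pacemaker pulses emitted during an interval, with trial-to-trial rate noise."""
    if rng is None:
        rng = np.random.default_rng()
    rate = pulse_rate_hz * (1.0 + rng.normal(0.0, rate_noise))
    return rng.poisson(max(rate, 0.0) * duration_s)

rng = np.random.default_rng(2)
criterion = np.mean([accumulate(1.0, rng=rng) for _ in range(200)])   # remembered 1 s standard
for duration in (0.5, 1.0, 2.0):
    count = accumulate(duration, rng=rng)
    verdict = "longer" if count > criterion else "not longer"
    print(f"{duration:.1f} s probe: {count} pulses vs criterion {criterion:.1f} -> judged {verdict}")
```

Multiplicative noise in the pulse rate gives roughly scalar (Weber-like) timing variability, which is part of what made this account attractive; the cited paper's point is that such a dedicated clock is increasingly being replaced by distributed, population-based accounts of timing.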

1,814 citations