scispace - formally typeset
Author

Neil Mennie

Bio: Neil Mennie is an academic researcher at the University of Nottingham Malaysia Campus. He has contributed to research on eye movements and smooth pursuit, has an h-index of 10, and has co-authored 20 publications receiving 1,338 citations. His previous affiliations include the University of Giessen and the University of Nottingham.

Papers
Journal ArticleDOI
TL;DR: Although the actions of tea-making are ‘automated’ and proceed with little conscious involvement, the eyes closely monitor every step of the process, suggesting that this type of unconscious attention must be a common phenomenon in everyday life.
Abstract: The aim of this study was to determine the pattern of fixations during the performance of a well-learned task in a natural setting (making tea), and to classify the types of monitoring action that the eyes perform. We used a head-mounted eye-movement video camera, which provided a continuous view of the scene ahead, with a dot indicating foveal direction with an accuracy of about 1 deg. A second video camera recorded the subject's activities from across the room. The videos were linked and analysed frame by frame. Foveal direction was always close to the object being manipulated, and very few fixations were irrelevant to the task. The first object-related fixation typically led the first indication of manipulation by 0.56 s, and vision moved to the next object about 0.61 s before manipulation of the previous object was complete. Each object-related act that did not involve a waiting period lasted an average of 3.3 s and involved about 7 fixations. Roughly a third of all fixations on objects could be definitely identified with one of four monitoring functions: locating objects used later in the process, directing the hand or object in the hand to a new location, guiding the approach of one object to another (e.g. kettle and lid), and checking the state of some variable (e.g. water level). We conclude that although the actions of tea-making are 'automated' and proceed with little conscious involvement, the eyes closely monitor every step of the process. This type of unconscious attention must be a common phenomenon in everyday life.

997 citations

Journal ArticleDOI
TL;DR: Evidence of the anticipatory use of gaze in acquiring information about objects for future manipulation is provided, suggesting that visual information on the temporal and spatial structure of the scene was retained across intervening fixations and influenced subsequent movement programming.
Abstract: During performance of natural tasks subjects sometimes fixate objects that are manipulated several seconds later. Such early looks are known as "look-ahead fixations" (Pelz and Canosa in Vision Res 41(25-26):3587-3596, 2001). To date, little is known about their function. To investigate the possible role of these fixations, we measured fixation patterns in a model-building task. Subjects assembled models in two sequences where reaching and grasping were interrupted in one sequence by an additional action. Results show look-ahead fixations prior to 20% of the reaching and grasping movements, occurring on average 3 s before the reach. Their frequency was influenced by task sequence, suggesting that they are purposeful and have a role in task planning. To see if look-aheads influenced the subsequent eye movement during the reach, we measured eye-hand latencies and found they increased by 122 ms following a look-ahead to the target. The initial saccades to the target that accompanied a reach were also more accurate following a look-ahead. These results demonstrate that look-aheads influence subsequent visuo-motor coordination, and imply that visual information on the temporal and spatial structure of the scene was retained across intervening fixations and influenced subsequent movement programming. Additionally, head movements that accompanied look-aheads were significantly smaller in amplitude (by 10°) than those that accompanied reaches to the same locations, supporting previous evidence that head movements play a role in the control of hand movements. This study provides evidence of the anticipatory use of gaze in acquiring information about objects for future manipulation.

156 citations

Journal Article
TL;DR: This paper used a head-mounted eye-movement video camera, which provided a continuous view of the scene ahead, with a dot indicating foveal direction with an accuracy of about 1 deg.
Abstract: The aim of this study was to determine the pattern of fixations during the performance of a well-learned task in a natural setting (making tea), and to classify the types of monitoring action that the eyes perform. We used a head-mounted eye-movement video camera, which provided a continuous view of the scene ahead, with a dot indicating foveal direction with an accuracy of about 1 deg. A second video camera recorded the subject's activities from across the room. The videos were linked and analysed frame by frame. Foveal direction was always close to the object being manipulated, and very few fixations were irrelevant to the task. The first object-related fixation typically led the first indication of manipulation by 0.56 s, and vision moved to the next object about 0.61 s before manipulation of the previous object was complete. Each object-related act that did not involve a waiting period lasted an average of 3.3 s and involved about 7 fixations. Roughly a third of all fixations on objects could be definitely identified with one of four monitoring functions: locating objects used later in the process, directing the hand or object in the hand to a new location, guiding the approach of one object to another (e.g. kettle and lid), and checking the state of some variable (e.g. water level). We conclude that although the actions of tea-making are 'automated' and proceed with little conscious involvement, the eyes closely monitor every step of the process. This type of unconscious attention must be a common phenomenon in everyday life. DOI:10.1068/p2935

106 citations

Proceedings Article
01 Dec 2005
TL;DR: It is suggested that observers maintain an internal model of the dynamic properties of the world, and rapidly update this model when errors occur, and that such models are used to predict upcoming events and plan movements in anticipation of those events.
Abstract: There is considerable evidence for the role of internal models of the body's dynamics in the control of movement. However, the existence of internal models of the environment is less well established. The present work provides further evidence of the existence of sophisticated internal models of the structure of the environment. We suggest that such models are used to predict upcoming events and plan movements in anticipation of those events. We recorded eye, head, and hand movements while subjects caught balls thrown with a bounce. Subjects initially fixate the hands of the thrower, then saccade to the anticipated bounce point, and then pursue the ball until it is close to the hands. However, the ability to pursue the ball depends on experience with the ball's dynamic properties. When the ball was unexpectedly replaced with a more elastic ball, subjects were unable to track it, and instead made a series of saccades. Within 2 or 3 trials, subjects were once again able to accurately pursue the ball. Subjects displayed a different pattern of movements when throwing or watching other players. The observer's head movement from the thrower towards the catcher often begins as much as half a second before the ball leaves the thrower's hands. All these observations suggest that observers position their bodies in anticipation of expected events, in order to gather critical information. In addition, they suggest that observers maintain an internal model of the dynamic properties of the world, and rapidly update this model when errors occur.

46 citations

Journal ArticleDOI
TL;DR: Parallel psychophysical experiments revealed that different from speed judgments of moving isoluminant stimuli made during fixation, judgments during pursuit are veridical for the same stimuli at all speeds, therefore information about target speed seems to be available for pursuit eye movements and speed judgment during pursuit but is degraded for perceptual speed judgments.
Abstract: At slow speeds, chromatic isoluminant stimuli are perceived to move much slower than comparable luminance stimuli. We investigated whether smooth pursuit eye movements to isoluminant stimuli show a...

41 citations


Cited by
Journal Article
TL;DR: In this article, the authors propose that the brain produces an internal representation of the world, and the activation of this internal representation is assumed to give rise to the experience of seeing, but it leaves unexplained how the existence of such a detailed internal representation might produce visual consciousness.
Abstract: Many current neurophysiological, psychophysical, and psychological approaches to vision rest on the idea that when we see, the brain produces an internal representation of the world. The activation of this internal representation is assumed to give rise to the experience of seeing. The problem with this kind of approach is that it leaves unexplained how the existence of such a detailed internal representation might produce visual consciousness. An alternative proposal is made here. We propose that seeing is a way of acting. It is a particular way of exploring the environment. Activity in internal representations does not generate the experience of seeing. The outside world serves as its own, external, representation. The experience of seeing occurs when the organism masters what we call the governing laws of sensorimotor contingency. The advantage of this approach is that it provides a natural and principled way of accounting for visual consciousness, and for the differences in the perceived quality of sensory experience in the different sensory modalities. Several lines of empirical evidence are brought forward in support of the theory, in particular: evidence from experiments in sensorimotor adaptation, visual "filling in," visual stability despite eye movements, change blindness, sensory substitution, and color perception.

2,271 citations


Journal ArticleDOI
TL;DR: Analysis of signals in tactile afferent neurons and central processes in humans reveals how contact events are encoded and used to monitor and update task performance.
Abstract: During object manipulation tasks, the brain selects and implements action-phase controllers that use sensory predictions and afferent signals to tailor motor output to the physical properties of the objects involved. Analysis of signals in tactile afferent neurons and central processes in humans reveals how contact events are encoded and used to monitor and update task performance.

1,569 citations

Book ChapterDOI
01 Jan 2003
TL;DR: This chapter discusses the application of eye movements to user interfaces, both for analyzing interfaces (measuring usability) and as an actual control medium within a human–computer dialogue.
Abstract: Publisher Summary This chapter discusses the application of eye movements to user interfaces, both for analyzing interfaces (measuring usability) and as an actual control medium within a human–computer dialogue. For usability analysis, the user's eye movements are recorded during system use and later analyzed retrospectively; however, the eye movements do not affect the interface in real time. As a direct control medium, the eye movements are obtained and used in real time as an input to the user–computer dialogue. The eye movements might be the sole input, typically for disabled users or hands-busy applications, or might be used as one of several inputs, combining with mouse, keyboard, sensors, or other devices. From the perspective of mainstream eye-movement research, human–computer interaction, together with related work in the broader field of communications and media research, appears as a new and very promising area of applied work. Both basic and applied work can profit from integration within a unified field of eye-movement research. Application of eye tracking in human–computer interaction remains a very promising approach; its technological and market barriers are finally being reduced.

1,421 citations

Journal ArticleDOI
TL;DR: Three separate advances, especially in neurophysiological studies, have greatly expanded the understanding of the intricate role of eye movements in cognitive function, and are proving crucial for understanding how behavioral programs control the selection of visual information.

1,183 citations