Author

John G. Taylor

Bio: John G. Taylor is an academic researcher from King's College London. The author has contributed to research on topics including artificial neural networks and consciousness. The author has an h-index of 36 and has co-authored 303 publications receiving 7,387 citations. Previous affiliations of John G. Taylor include the University of Cambridge and the International Centre for Theoretical Physics.


Papers
Journal ArticleDOI
TL;DR: Basic issues in signal processing and analysis techniques for consolidating psychological and linguistic analyses of emotion are examined, motivated by the PHYSTA project, which aims to develop a hybrid system capable of using information from faces and voices to recognize people's emotions.
Abstract: Two channels have been distinguished in human interaction: one transmits explicit messages, which may be about anything or nothing; the other transmits implicit messages about the speakers themselves. Both linguistics and technology have invested enormous efforts in understanding the first, explicit channel, but the second is not as well understood. Understanding the other party's emotions is one of the key tasks associated with the second, implicit channel. To tackle that task, signal processing and analysis techniques have to be developed, while, at the same time, consolidating psychological and linguistic analyses of emotion. This article examines basic issues in those areas. It is motivated by the PHYSTA project, in which we aim to develop a hybrid system capable of using information from faces and voices to recognize people's emotions.

2,255 citations

Journal ArticleDOI
TL;DR: A neural network architecture is constructed to be able to handle the fusion of different modalities (facial features, prosody and lexical content in speech) and results are given and their implications discussed.

427 citations

Journal ArticleDOI
TL;DR: It is shown that the modification ought to allow a Kohonen network to map sequences of inputs without having to resort to external time delay mechanisms.

272 citations
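One way a Kohonen self-organizing map can handle input sequences without external time delays (a hypothetical sketch, not necessarily the paper's exact modification) is to let the map respond to a decaying trace of recent inputs rather than to the current input alone, so the winning unit carries temporal context:

```python
# Illustrative sketch: a self-organizing map whose input passes through a
# leaky integrator, so the best-matching unit depends on recent history.
# All sizes, rates, and the toy sequence are hypothetical choices.
import numpy as np

rng = np.random.default_rng(0)

n_units, dim = 16, 2
weights = rng.random((n_units, dim))   # map codebook vectors
decay = 0.5                            # leak rate of the input trace
lr = 0.2                               # learning rate

def step(trace, x):
    """Blend the new input into the decaying trace, then adapt the winner."""
    trace = decay * trace + (1 - decay) * x
    winner = int(np.argmin(np.linalg.norm(weights - trace, axis=1)))
    weights[winner] += lr * (trace - weights[winner])
    return trace, winner

# Feed a short sequence; the trace carries context between steps, so the
# same input can select different winners depending on what preceded it.
trace = np.zeros(dim)
sequence = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
winners = []
for x in sequence:
    trace, w = step(trace, x)
    winners.append(w)
```

A full implementation would also update the winner's map neighbours, as in a standard Kohonen network; the sketch keeps only the winner update for brevity.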

Journal ArticleDOI
01 Feb 1999-Brain
TL;DR: It is demonstrated that the precuneus shows consistent activation during episodic memory retrieval; this activation occurred in both visual and auditory presentation modalities and for both highly imaginable and abstract words.
Abstract: The aim of this study was to evaluate further the role of the precuneus in episodic memory retrieval. The specific hypothesis addressed was that the precuneus is involved in episodic memory retrieval irrespective of the imagery content. Two groups of six right-handed normal male volunteers took part in the study. Each subject underwent six [15O]butanol-PET scans. In each of the six trials, the memory task began with the injection of a bolus of 1500 MBq of [15O]butanol. For Group 1, 12 word pair associates were presented visually, for Group 2 auditorily. The subjects of each group had to learn and retrieve two sets of 12 word pairs each. One set consisted of highly imaginable words and another one of abstract words. Words of both sets were not related semantically, representing 'hard' associations. The presentations of nonsense words served as reference conditions. We demonstrate that the precuneus shows consistent activation during episodic memory retrieval. Precuneus activation occurred in visual and auditory presentation modalities and for both highly imaginable and abstract words. The present study therefore provides further evidence that the precuneus has a specific function in episodic memory retrieval as a multimodal association area.

177 citations

Journal ArticleDOI
TL;DR: An extension to two dimensions of recent results in continuum neural field theory (CNFT) in one dimension is presented, focusing on the treatment of receptive fields and of learning on afferent synapses to obtain topographic maps.
Abstract: An extension to two dimensions of recent results in continuum neural field theory (CNFT) in one dimension is presented here. Focus is placed on the treatment of receptive fields and of learning on afferent synapses to obtain topographic maps.

170 citations
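The field dynamics underlying continuum neural field theory are conventionally written in Amari form; shown here in its standard two-dimensional version, using the textbook notation rather than necessarily the paper's own symbols:

```latex
\tau \frac{\partial u(\mathbf{x},t)}{\partial t}
  = -u(\mathbf{x},t)
  + \int_{\mathbb{R}^2} w(\mathbf{x}-\mathbf{x}')\,
      f\!\big(u(\mathbf{x}',t)\big)\, d\mathbf{x}'
  + I(\mathbf{x},t)
```

Here $u$ is the membrane-potential field, $w$ a lateral-connectivity kernel (typically of Mexican-hat shape), $f$ a firing-rate nonlinearity, and $I(\mathbf{x},t)$ the afferent input; topographic map formation arises when learning acts on the afferent synapses feeding $I$.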


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations
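The mail-filtering example in the abstract above can be made concrete with a minimal learned filter. This is a generic naive Bayes sketch under hypothetical data, not a method from the cited work: it counts word occurrences in messages the user has labelled and scores new mail by comparing Laplace-smoothed log-probabilities under the spam and non-spam classes.

```python
# Minimal sketch of a learned mail filter: per-word counts from labelled
# messages, scored with naive Bayes. The corpus and words are invented.
from collections import Counter
import math

def train(labelled):
    """labelled: list of (word_list, spam_flag) pairs."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for words, spam in labelled:
        counts[spam].update(words)   # per-class word frequencies
        totals[spam] += 1            # per-class message counts
    return counts, totals

def is_spam(words, counts, totals):
    """Compare Laplace-smoothed log-probabilities under the two classes."""
    vocab = set(counts[True]) | set(counts[False])
    scores = {}
    for label in (True, False):
        n = sum(counts[label].values())
        score = math.log(totals[label] / sum(totals.values()))  # class prior
        for w in words:
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        scores[label] = score
    return scores[True] > scores[False]

labelled = [
    (["win", "cash", "now"], True),
    (["cheap", "cash", "prize"], True),
    (["meeting", "agenda", "notes"], False),
    (["project", "meeting", "tomorrow"], False),
]
counts, totals = train(labelled)
```

As the abstract notes, the point is that the rules are maintained automatically: retraining on newly rejected messages updates the counts, with no programmer in the loop.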

Journal ArticleDOI
TL;DR: A wide variety of data on capacity limits suggesting that the smaller capacity limit in short-term memory tasks is real is brought together and a capacity limit for the focus of attention is proposed.
Abstract: Miller (1956) summarized evidence that people can remember about seven chunks in short-term memory (STM) tasks. However, that number was meant more as a rough estimate and a rhetorical device than as a real capacity limit. Others have since suggested that there is a more precise capacity limit, but that it is only three to five chunks. The present target article brings together a wide variety of data on capacity limits suggesting that the smaller capacity limit is real. Capacity limits will be useful in analyses of information processing only if the boundary conditions for observing them can be carefully described. Four basic conditions in which chunks can be identified and capacity limits can accordingly be observed are: (1) when information overload limits chunks to individual stimulus items, (2) when other steps are taken specifically to block the recoding of stimulus items into larger chunks, (3) in performance discontinuities caused by the capacity limit, and (4) in various indirect effects of the capacity limit. Under these conditions, rehearsal and long-term memory cannot be used to combine stimulus items into chunks of an unknown size; nor can storage mechanisms that are not capacity-limited, such as sensory memory, allow the capacity-limited storage mechanism to be refilled during recall. A single, central capacity limit averaging about four chunks is implicated along with other, noncapacity-limited sources. The pure STM capacity limit expressed in chunks is distinguished from compound STM limits obtained when the number of separately held chunks is unclear. Reasons why pure capacity estimates fall within a narrow range are discussed and a capacity limit for the focus of attention is proposed.

5,677 citations

Journal ArticleDOI
01 Mar 2006-Brain
TL;DR: A useful conceptual framework is provided for matching the functional imaging findings with the specific role(s) played by this structure in the higher-order cognitive functions in which it has been implicated, and activation patterns appear to converge with anatomical and connectivity data in providing preliminary evidence for a functional subdivision within the precuneus.
Abstract: Functional neuroimaging studies have started unravelling unexpected functional attributes for the posteromedial portion of the parietal lobe, the precuneus. This cortical area has traditionally received little attention, mainly because of its hidden location and the virtual absence of focal lesion studies. However, recent functional imaging findings in healthy subjects suggest a central role for the precuneus in a wide spectrum of highly integrated tasks, including visuo-spatial imagery, episodic memory retrieval and self-processing operations, namely first-person perspective taking and an experience of agency. Furthermore, the precuneus and surrounding posteromedial areas are amongst the brain structures displaying the highest resting metabolic rates (hot spots) and are characterized by transient decreases in tonic activity during engagement in non-self-referential goal-directed actions (default mode of brain function). Therefore, it has recently been proposed that the precuneus is involved in the interwoven network of the neural correlates of self-consciousness, engaged in self-related mental representations during rest. This hypothesis is consistent with the selective hypometabolism in the posteromedial cortex reported in a wide range of altered conscious states, such as sleep, drug-induced anaesthesia and vegetative states. This review summarizes the current knowledge about the macroscopic and microscopic anatomy of the precuneus, together with its widespread connectivity with both cortical and subcortical structures, as shown by connectional and neurophysiological findings in non-human primates, and links these notions with the multifaceted spectrum of its behavioural correlates.
By means of a critical analysis of precuneus activation patterns in response to different mental tasks, this paper provides a useful conceptual framework for matching the functional imaging findings with the specific role(s) played by this structure in the higher-order cognitive functions in which it has been implicated. Specifically, activation patterns appear to converge with anatomical and connectivity data in providing preliminary evidence for a functional subdivision within the precuneus into an anterior region, involved in self-centred mental imagery strategies, and a posterior region, subserving successful episodic memory retrieval.

4,342 citations

Journal ArticleDOI
TL;DR: As with previous analyses of effective connectivity, the focus is on experimentally induced changes in coupling, but unlike previous approaches in neuroimaging, the causal model ascribes responses to designed deterministic inputs, as opposed to treating inputs as unknown and stochastic.

4,182 citations