
Showing papers in "Advances in psychology in 1983"


Book ChapterDOI
TL;DR: A number of problems associated with non-compensatory and compensatory decision rules are discussed and it is suggested that these problems could be avoided if the rules are seen as operators in a search for a dominance structure, that is, a cognitive structure in which one choice alternative can be seen as dominant over the others.
Abstract: A number of problems associated with non-compensatory and compensatory decision rules are discussed. It is suggested that these problems could be avoided if the rules are seen as operators in a search for a dominance structure, that is, a cognitive structure in which one choice alternative can be seen as dominant over the others. The search for a dominance structure is assumed to go through four stages, viz., pre-editing, finding a promising alternative, dominance testing, and dominance structuring. Each of these stages is related to particular decision rules. Finally, directions are suggested for future research based on the framework presented in the paper.

482 citations
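The dominance-testing stage described in the abstract above lends itself to a short sketch. This is an illustrative reading, not the authors' procedure; the attribute scores and the higher-is-better convention are assumptions.

```python
# Illustrative sketch of dominance testing: alternative A dominates B
# if A is at least as good on every attribute and strictly better on
# at least one. Higher scores are assumed to be better.

def dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and \
           any(x > y for x, y in zip(a, b))

# Hypothetical alternatives scored on (price, quality, comfort):
options = {"A": (3, 4, 5), "B": (3, 4, 4), "C": (5, 2, 1)}
print(dominates(options["A"], options["B"]))  # True: A dominates B
print(dominates(options["A"], options["C"]))  # False: a trade-off remains
```

When no alternative dominates outright, the chapter's point is that decision makers restructure the problem until one does.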


Book ChapterDOI
TL;DR: In this article, a theoretical framework is offered that interprets Battig's conceptualization of contextual interference in terms of the multiple and variable processing that results from the concurrent presence of tasks in working memory.
Abstract: A theoretical framework is offered that interprets Battig's (1979) conceptualization of contextual interference in terms of the multiple and variable processing that results from the concurrent presence of tasks in working memory. The central role of cognitive processes in the learning of motor tasks and the influence of these processes on motor performance is emphasized. This theoretical interpretation switches emphasis away from a memorial representation of a motor act comprised of sensory attributes to an active, operationally defined representation.

315 citations


Book ChapterDOI
TL;DR: The N2 deflection of the ERP is suggested to be closely related to the orienting reflex, and the classical full-scale OR is only likely to emerge when the N2b-P3a complex occurs.
Abstract: This paper focusses on the N2 deflection of the ERP which the authors suggest is closely related to the orienting reflex. The available literature suggests that there are two negative components in the N2 time range which both contribute to the total N2 deflection. The earlier component, the so-called mismatch negativity (MMN), reflects an automatic pre-perceptual cerebral mismatch process, which occurs after a stimulus change in a repetitive homogeneous stimulus stream. This negativity is modality-specific and probably generated in the secondary sensory areas. This process is not dependent on, or modified by, the direction of attention. The second negative component, ‘N2b’, is modality unspecific and is superimposed on the MMN. The N2b occurs when the stimulus input is attended to. The N2b forms a wave complex together with the positive component ‘P3a’. Both negative components appear to bear some relationship to the orienting reflex. The MMN reflects the neuronal mismatch process elicited by stimulus change but this does not appear as a sufficient condition for OR elicitation. The classical full-scale OR is only likely to emerge when the N2b-P3a complex occurs.

292 citations


Book ChapterDOI
TL;DR: In this article, evidence supporting a multi-component interpretation of the Contingent Negative Variation (CNV) is reviewed, and evidence for a true CNV, having the traditionally ascribed anticipatory features, is considered to be amenable to interpretation in terms of individual O wave and terminal CNV components.
Abstract: Evidence supporting a multi-component interpretation of the Contingent Negative Variation (CNV) is reviewed. Two major components are considered to form the CNV: an early component (herein called the “O wave”), and a terminal wave. It is argued that the O wave is a general response to salient or novel stimuli, which can be elicited in both CNV and non-CNV situations. The terminal wave is interpreted as being related to motor response processes, and is identified with the readiness potential component of the motor potential complex. Evidence for a “true CNV,” having the traditionally ascribed anticipatory features, is considered to be amenable to interpretation in terms of the individual O wave and terminal CNV components.

276 citations


Book ChapterDOI
TL;DR: This chapter reviews the studies that have utilized the movement precuing technique and finds that the parameters of movements can be specified in a variable rather than fixed order and serially rather than in parallel, although some notable exceptions have been found.
Abstract: The movement precuing technique is designed to reveal the major information-processing steps that lead up to the execution of voluntary movements. The main idea in the technique is to supply the subject with partial information about the defining characteristics of a motor response and then observe how long it takes the subject to perform the response when its corresponding reaction signal is presented. On the assumption that the time to perform the response includes the time to specify those parameters that were not precued in advance, times to perform the response in different precue conditions can be used to find out whether its defining parameter values are specified in a fixed or variable order, serially or in parallel, etc. This chapter reviews the studies that have utilized the movement precuing technique. These studies have focused on aimed hand movements, finger movements, and aimed foot movements. A common finding of the studies is that the parameters of movements can be specified in a variable rather than fixed order and serially rather than in parallel, although some notable exceptions have been found. Much of the chapter is concerned with methodological variations of the precuing technique which a number of investigators have introduced.

199 citations


Book ChapterDOI
TL;DR: In this article, the authors focus on the base rate fallacy controversy: the importance of considering base rates, both before making causal attributions and in other inferential formats, especially Bayesian ones.
Abstract: Publisher Summary This chapter focuses on the base rate fallacy controversy. The importance of considering base rates before making causal attributions is one that is instilled in all humans in the course of training in experimental methodology. But base rates play an important role in other inferential formats too, especially in Bayesian ones. There is evidence aplenty that in those contexts, they are largely ignored in favor of the diagnostic information at hand. This is the phenomenon known as the “base rate fallacy.” The probability of uncertain outcomes is often judged by the extent to which they represent their source or generating process. The features by which this similarity or representativeness is assessed are not necessarily those that figure in the normative derivation of the requested probability. Kahneman and Tversky derived the prediction that if an event is to be judged vis-à-vis several alternative possible sources or several alternative possible outcomes, these will be ranked by the similarity between them and the event and the ranking will not be affected by how likely each source or outcome is initially.

140 citations
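The Bayesian role of base rates discussed above can be made concrete with a small sketch. The hit and false-alarm rates below are hypothetical numbers chosen for illustration, not figures from the chapter.

```python
# Bayes' rule: the posterior depends on the base rate as well as on the
# diagnostic information; ignoring the base rate is the fallacy at issue.

def posterior(base_rate, hit_rate, false_alarm_rate):
    evidence = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
    return hit_rate * base_rate / evidence

# With an 80% hit rate but only a 1% base rate, the posterior is far
# below the 0.80 that the diagnostic information alone would suggest.
print(round(posterior(0.01, 0.80, 0.10), 3))  # 0.075
```

Judging only by the diagnostic information, as the "representativeness" account predicts, amounts to dropping the base-rate terms from the calculation.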


Book ChapterDOI
TL;DR: In this paper, dual process attentional research and theory are related to the development of skilled performance, and the changing interactions between automatic and controlled processing in the development of skill are discussed.
Abstract: Current attentional research and theory are related to the development of skilled performance. Emphasis is given to how performance changes with practice. Dual process attention theory is reviewed examining the distinctions between automatic and controlled processing. The changing interactions between automatic and controlled processing in the development of skill are discussed. It is proposed that consistent practice produces automatic productions which perform consistent transformations in a heterarchical system. Automatic productions are proposed to: be modular; show high transfer; become resource free; not be under direct control; and be fast, accurate, and coordinated. Controlled processing is assumed to develop automatic processing, maintain strategy and time varying information, and perform problem solving activities. Perceptual data, some motor data, and several motor performance examples are presented to illustrate automatic/controlled processing effects. The relationship to current theories of motor skill is discussed. New research paradigms suggested by the current approach are discussed.

136 citations


Book ChapterDOI
TL;DR: The meaning of common verbal expressions for uncertain events is analyzed, and the results gained by interpreting verbal expressions of uncertainty as possibility functions can be compared to the results of studies in which subjects provide the numerical expressions themselves.
Abstract: Publisher Summary This chapter examines individuals' ability to express numerically what is internally represented. The chapter examines whether it is represented in (1) a verbal propositional mode, (2) a numerical propositional mode, or (3) an analogue mode of automatic frequency monitoring. It seems unlikely that the mathematically appropriate procedures with numerical estimates of uncertainty have become automatized. It is more likely that people handle uncertainty by customary verbal expressions and the implicit and explicit rules of conversation connected with them. The chapter analyzes the meaning of common verbal expressions for uncertain events. These expressions are interpreted as possibility functions, and the procedures applicable to them are modeled in possibility theory. This theory allows for a numerical interpretation by means of determining the elastic constraints on the usage of such expressions. The results gained by interpreting verbal expressions of uncertainty as possibility functions can be compared to the results of studies in which subjects provide the numerical expressions themselves.

127 citations
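One way to picture the idea of verbal expressions as possibility functions is a simple membership function over numerical probabilities. The triangular shape and anchor values below are assumptions for illustration, not the elastic constraints the chapter derives.

```python
# A hypothetical possibility function for the expression "likely":
# fully possible near p = 0.75, impossible at or below 0.5 and at 1.0.

def possibility_likely(p, low=0.5, peak=0.75, high=1.0):
    if p <= low or p >= high:
        return 0.0
    if p <= peak:
        return (p - low) / (peak - low)
    return (high - p) / (high - peak)

print(possibility_likely(0.75))           # 1.0
print(round(possibility_likely(0.6), 2))  # 0.4
```

Comparing such functions with subjects' own numerical estimates is the comparison the abstract describes.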


Book ChapterDOI
TL;DR: In this article, the relationship between normal development and cognitive components of event-related brain potentials (ERPs) has been discussed, and some of the suggestions that researchers have made about the cognitive correlates of these components are discussed.
Abstract: This report addresses some aspects of the relationship between normal development and cognitive components of event-related brain potentials (ERPs). It includes a description of the ERP components which have been associated with cognitive processes in normal infants, children and adolescents and a description of the way that these components might be changing with development. Also presented are some of the suggestions that researchers have made about the cognitive correlates of these components.

125 citations


Book ChapterDOI
TL;DR: This chapter distinguishes two camps: pessimists, who point to the deficiency of human judgment and decision making, and optimists, who argue that judgment and decision are highly efficient and functional even in complex situations.
Abstract: Publisher Summary Rationality is not a genuine term of scientific psychology but rather a concept of philosophy and economics. The most common and most relevant definition says that an action is rational if it is in line with the values and beliefs of the individual concerned; or more precisely, if it is logical or consistent as stated in a set of axioms. This definition specifies rational behavior normatively. Empirical research can study whether actual human behavior is rational in the sense that it obeys the norm. This chapter distinguishes two camps, one that points to the deficiency and one that argues for the efficiency of human judgment and decision. The members of the first camp—pessimists—claim that judgment and decision making under uncertainty often show systematic and serious errors because of in-built characteristics of the human cognitive system. The optimists of the other camp claim that judgment and decision are highly efficient and functional even in complex situations.

115 citations


Book ChapterDOI
TL;DR: The purpose of this chapter is to review and attempt to evaluate the similarities and differences between Slow Wave and P3b.
Abstract: Publisher Summary Slow Wave and P3b are members of a group of long latency, positive polarity, and endogenous event-related potential (ERP) components now known as the late positive complex (LPC). In the earliest experiments in which Slow Wave was found, it appeared that Slow Wave related to experimental variables in much the same manner as did P3b. As a result, during the early period there was little focus on Slow Wave. However, in recent years evidence for a behavioral dissociation between Slow Wave and P3b has been accumulating. The purpose of this chapter is to review and attempt to evaluate the similarities and differences between Slow Wave and P3b. The initial reports of LPC activity described a single prominent component with a peak latency of about 300 msec. It was referred to as the late positive component or P3 or P300. Subsequent experiments established that P300 amplitude is generally largest over parietal scalp. P300 potentials have generally been elicited by events that are made relevant by serving various purposes: (1) to provide a subject with feedback information concerning the outcome of a prior task; (2) to be the object of a discrimination or counting task; (3) to be an imperative signal requiring performance of a motor response.

Book ChapterDOI
TL;DR: The decision variable partition model of calibration is reviewed tutorially; in this paper, the model's predictions for true-false items are derived and an experiment testing them is reported.
Abstract: The degree of calibration of subjective probabilities of events is the extent to which the observed proportion of events that occur agrees with the assigned probability values. The decision variable partition model of calibration is reviewed tutorially. It shows how numerical subjective probabilities for discrete events can be related to the perceived truth of propositions. It has been able to explain a number of experimental findings about calibration of subjective probabilities of correct response to two-alternative multiple-choice questions and to questions to which the respondent supplies the answer. In this paper, the model's predictions for true-false items are derived, and an experiment testing them is reported. The model predicts that when the subjective probability that items are true is assessed, there will be a specific effect of the base rate, the proportion of true items, but that there will be no effect when the respondent decides true or false and then reports a subjective probability of being correct. A systematic effect of task difficulty is predicted in both cases. The experimental results are in close agreement with the model's predictions.
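Calibration as defined in the first sentence can be checked mechanically: group assessments by the stated probability and compare each group's observed proportion correct. The data below are invented for illustration; this is a calibration check, not the partition model itself.

```python
# Group (stated probability, was_correct) pairs and compare each stated
# value with the observed proportion correct in its group.
from collections import defaultdict

def calibration_table(assessments):
    groups = defaultdict(list)
    for p, correct in assessments:
        groups[p].append(correct)
    return {p: sum(v) / len(v) for p, v in sorted(groups.items())}

data = [(0.6, 1), (0.6, 0), (0.6, 1), (0.9, 1), (0.9, 1), (0.9, 0), (0.9, 1)]
table = calibration_table(data)
print(table)  # 0.6 -> 2/3, 0.9 -> 0.75: overconfident at the 0.9 level
```

Perfect calibration would put every group's observed proportion equal to its stated probability.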

Book ChapterDOI
TL;DR: In this chapter, recent findings are described that further support the local generation of the endogenous limbic potentials, the distributions of the limbic potentials are compared in auditory and visual tasks to evaluate the possibility of multiple endogenous components in the depth potentials.
Abstract: Field potentials can be recorded from the human limbic system under the same task conditions that produce endogenous event-related potentials at the human scalp. In this chapter, recent findings are described that further support the local generation of the endogenous limbic potentials, the distributions of the limbic potentials are compared in auditory and visual tasks to evaluate the possibility of multiple endogenous components in the depth potentials, and clinical data are reviewed that bear on the issue of the limbic system as the neural source of the endogenous scalp potentials.

Book ChapterDOI
TL;DR: In this paper, rating scales are shown to reflect two basic tendencies of judgment: (1) categories divide the subjective range into equal subranges, and (2) the same number of contextual stimuli are represented by each category.
Abstract: Category ratings express the relational character of judgment, communicating the place of each stimulus in a context of related stimuli. Rating scales reflect two basic tendencies of judgment: (1) categories divide the subjective range into equal subranges, and (2) the same number of contextual stimuli are represented by each category. The rating scale can be predicted from a simple weighted average of range and frequency values, and the overall mean of the ratings can be predicted from the skewing of the contextual values. However, even in psychophysical experiments, the subjective range may extend beyond the end values of the stimulus series. Various rating-scale phenomena provide examples of the relational character of judgment.
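The weighted average of range and frequency values mentioned above can be sketched as follows. The equal weighting w = 0.5 and the stimulus values are assumptions for illustration, not parameters from the chapter.

```python
# Judgment = w * range value + (1 - w) * frequency value, where the
# range value locates the stimulus within the contextual range and the
# frequency value is its rank position among the contextual stimuli.

def range_frequency(stimulus, context, w=0.5):
    lo, hi = min(context), max(context)
    range_value = (stimulus - lo) / (hi - lo)
    frequency_value = sum(c < stimulus for c in context) / (len(context) - 1)
    return w * range_value + (1 - w) * frequency_value

context = [1, 2, 3, 4, 10]  # positively skewed contextual set
print(round(range_frequency(3, context), 3))  # 0.361
```

With this skewed context the frequency value (0.5) pulls the judgment well above the range value (0.222), which is the kind of context effect the abstract describes.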

Book ChapterDOI
TL;DR: The ability to recognize self in a mirror appears to be subject to maturational and experiential constraints as mentioned in this paper, and the ability to correctly interpret mirrored information about the self presupposes prior experience with mirrors.
Abstract: Publisher Summary In humans, self-recognition of own reflection is commonplace but not universal. The capacity to recognize self in a mirror appears to be subject to maturational and experiential constraints. In the first place, the ability to correctly interpret mirrored information about the self presupposes prior experience with mirrors. For instance, people born with congenital visual defects who undergo operations in later life—which provide for normal sight—respond just like nonhumans and initially react to themselves in mirrors as though they were seeing other people. Social behavior in response to mirrors begins at about six months of age, but the average child does not start to show reliable signs of self-recognition until 18 to 24 months. A prevailing view of self-concept formation in humans is that the sense of self emerges out of a social milieu. Self-awareness according to this view is a by-product of social interaction with others. For instance, according to G.H. Mead, in order for the self to emerge as an object of conscious inspection, it requires the opportunity to see yourself as you are seen by others.

Book ChapterDOI
TL;DR: In this paper, a case is made that motor performance and learning can be understood through hierarchical organization where an action plan is responsible for organizing relatively autonomous, lower order, units of activity.
Abstract: A case is made that motor performance and learning can be understood through hierarchical organization where an action plan is responsible for organizing relatively autonomous, lower order, units of activity. The action plan for movement is seen as consisting of information that systematically changes over learning as a function of the type of information the learner is attuned to. It is postulated that feedback about movement execution, specifically kinematic information, is the crucial source of information that supports the learning process. Data from two subjects, who practiced a sequential movement for 800 trials, did not support the idea that relatively autonomous units of activity serve as the basic building blocks for hierarchical motor control. Support was found for the idea that the subjects used displacement or spatial information early in acquisition, and then progressed to the use of velocity and perhaps acceleration information for the organization and control of their movements. Finally, a Fourier analysis of the data, as a function of practice, indicated subjects gradually modified their movements by progressively adding higher order harmonics to the fundamental harmonic of the movement which was established early in practice. Discussion of the results centered around the implications these results had for the hierarchical model of movement organization and the utility of adopting a Fourier synthesis/analysis approach to the study of motor learning.
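The Fourier point in the last part of the abstract can be illustrated with synthetic data: a trajectory built from a fundamental plus a higher harmonic shows up directly in the amplitude spectrum. The signals here are invented, not the subjects' movement data.

```python
import numpy as np

# A "practiced" trajectory adds a third harmonic to the fundamental
# established early in practice; the FFT recovers both amplitudes.
t = np.linspace(0, 1, 256, endpoint=False)
fundamental = np.sin(2 * np.pi * t)
practiced = fundamental + 0.3 * np.sin(2 * np.pi * 3 * t)

def harmonic_amplitudes(signal, n=4):
    # Scale rfft magnitudes so a unit sine yields amplitude 1.0.
    spectrum = np.abs(np.fft.rfft(signal)) / (len(signal) / 2)
    return spectrum[1:n + 1]  # harmonics 1..n

print(harmonic_amplitudes(fundamental).round(2))  # only harmonic 1
print(harmonic_amplitudes(practiced).round(2))    # harmonic 3 at ~0.3
```

Tracking such spectra across practice trials is one way to read the chapter's claim that learners progressively add higher harmonics to an early-established fundamental.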

Book ChapterDOI
TL;DR: This chapter describes recent experimental results obtained in the laboratory on two ERP components that may reflect two stages of processing—namely, pattern recognition and stimulus categorization.
Abstract: Publisher Summary The successful attempt to relate event-related potential (ERP) components to specific stages of information processing would provide a means of understanding some aspects of the ways in which the brain works, as well as yield data relevant to the theories of cognitive processing that have been based on behavioral observations. The ERP and behavioral sources of data both have inherent limitations, but taken together they might allow for converging operations to test theories common to both areas. This chapter describes recent experimental results obtained in the laboratory on two ERP components that may reflect two stages of processing—namely, pattern recognition and stimulus categorization. Although RT data were obtained, the manner by which stages were identified was different from that in behavioral studies. The first step was to identify a class of variables that affect the amplitude or latency of a particular component, or complex of components, and on the basis of the nature of the variables infer the functional significance of the associated physiological activity.

Book ChapterDOI
TL;DR: Two major means of improving search analysis are discussed: the use of task-specific simulations to establish the search characteristics expected from different strategies; and the analysis of additional search characteristics, such as the extent to which future information search is controlled by prior information (contingency), and different types of search variability.
Abstract: Recent interest in complex and varied decision strategies has highlighted the need for more sophisticated process tracing analyses, e.g., in analyzing information gathering patterns. Earlier studies have classified strategies as high/low proportion of available information used, constant/variable amount of search across alternatives, and intra-/interdimensional direction of search. However, more powerful analyses are needed, since the search characteristics of a given strategy may be variable and highly task-dependent. Two major means of improving search analysis are discussed: (a) the use of task-specific simulations to establish the search characteristics expected from different strategies; and (b) the analysis of additional search characteristics, such as the extent to which future information search is controlled by prior information (contingency), and different types of search variability. An experimental example of the use of these techniques is presented. Applications are proposed in three areas: (a) the study of sequential combinations of decision rules, and multi-phase decision making; (b) exploration of the possibility that there exists continuous variation among strategies along various parameters, rather than a set of discrete rules; and (c) the investigation of how decision makers adapt strategy to task.
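One of the classic search characteristics mentioned above, the intra- versus interdimensional direction of search, is often summarized with a transition-based index. The sketch below is an illustrative reading of that measure, not the authors' own analysis code.

```python
# Classify each transition in an information-acquisition sequence:
#   same attribute, new alternative  -> intradimensional
#   same alternative, new attribute  -> interdimensional
# Index in [-1, 1]: +1 means purely alternative-wise search.

def search_index(sequence):
    intra = inter = 0
    for (a1, d1), (a2, d2) in zip(sequence, sequence[1:]):
        if d1 == d2 and a1 != a2:
            intra += 1
        elif a1 == a2 and d1 != d2:
            inter += 1
    return (inter - intra) / (inter + intra)

# Inspect every attribute of alternative A, then every attribute of B:
seq = [("A", "price"), ("A", "size"), ("B", "price"), ("B", "size")]
print(search_index(seq))  # 1.0: purely alternative-wise search
```

The abstract's point is that such a summary index alone is too coarse: the same strategy can produce different transition patterns across tasks, hence the call for task-specific simulations and contingency measures.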

Book ChapterDOI
TL;DR: In this paper, a theory of the prominence of numbers in the decimal system is discussed; the scale for a decision problem is selected according to the rule that it is maximal subject to the condition that the range of reasonable alternatives contains at least three prominent numbers.
Abstract: Publisher Summary This chapter discusses a theory of the prominence of numbers in the decimal system. Basic components of the theory are the limited rational principles of rule construction by iterated addition or subtraction of a given amount, the refinement of a scale by adding the means of any two neighbors, and its coarsening by omitting all uneven elements. Applying these principles to the powers of 10, various scales with different degrees of exactness can be constructed. Empirical observations suggest that among different scales with about the same exactness, generally the one that can be constructed in an easier way is chosen. Empirical results indicate that the prominence of the set of numbers resulting from a specific decision situation is in many cases about one tenth of the numbers in question. The scale for a specific decision problem seems to be selected according to the rule that it is maximal subject to the condition that the range of reasonable alternatives contains at least three prominent numbers.
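The refinement principle in the abstract, adding the means of any two neighbors, can be sketched on the powers of ten. The code stops at the exact arithmetic means; rounding them to simple values (5.5 to 5, 55 to 50) is where the familiar 1-2-5-10 steps arise, and that rounding step is left to the chapter's own rules.

```python
# One refinement step: insert the arithmetic mean between each pair of
# neighbouring scale values. Applied to powers of ten, the means (5.5,
# 55, 550) sit where the prominent steps 5, 50, 500 would be placed.

def refine(scale):
    out = []
    for a, b in zip(scale, scale[1:]):
        out += [a, (a + b) / 2]
    out.append(scale[-1])
    return out

print(refine([1, 10, 100, 1000]))  # [1, 5.5, 10, 55.0, 100, 550.0, 1000]
```

Iterating refinement and the abstract's coarsening rule generates the family of scales with different degrees of exactness that the theory compares.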

Book ChapterDOI
TL;DR: Scalp recorded cerebral potentials related to self-paced voluntary movements in man have been studied mainly in ballistic movements, especially in brisk isotonic or isometric contractions of limb muscles.
Abstract: Publisher Summary In recent neurophysiological studies of motor performance, two classes of voluntary movements are often distinguished: a “ballistic” and a “ramp” type of movement. Ballistic movements are brief (about 200 msec and less), fast, and thought to be “preprogrammed,” that is, launched without peripheral guidance. Ramp movements are slow, smooth, carried out in more than 500 msec, and are largely controlled by peripheral sensory feedback. Scalp-recorded cerebral potentials related to self-paced voluntary movements in man have been studied mainly in ballistic movements, especially in brisk isotonic or isometric contractions of limb muscles. The most obvious feature of the potentials is a gradually increasing negative shift over both hemispheres beginning 1 sec or more prior to EMG onset, which subsides rapidly into a complex positive deflection after the initiation of movement. Subdural recordings have verified the cortical origin of these movement-related potentials in man.

Book ChapterDOI
TL;DR: The covariation of P300 latency and reaction time was assessed in two sets of experiments designed to test the hypothesis that P300 latency reflects a subset of the information processes whose durations are reflected in RT.
Abstract: The covariation of P300 latency and reaction time was assessed in two sets of experiments designed to test the hypothesis that P300 latency reflects a subset of the information processes whose durations are reflected in RT. In the first experiments, P300 latency and RT were shown to covary as a function of the time required to categorize a stimulus. However, this relationship could be dissociated by requiring subjects to make highly speeded responses which also resulted in increased error rates. In the second experiments two factors, stimulus discriminability and stimulus-response compatibility, were shown to have additive and independent effects upon RT. The latency of P300 was strongly affected by stimulus discriminability but only minimally affected by S-R compatibility. These results are interpreted to support the hypothesis that P300 latency reflects primarily the durations of processes concerned with stimulus evaluation and is relatively unaffected by processes concerned with response selection and execution.

Book ChapterDOI
TL;DR: Although P300 and SCR are very different biologically, there is as yet a lack of evidence for their having different psychological correlates.
Abstract: P300 and skin conductance response (SCR) show many similarities. Empirically, both are elicited by stimuli of any modality that are surprising, task-relevant, or intrinsically salient. Theoretically, both P300 and SCR have been explained in terms of neuronal, probabilistic, cognitive, or computer models of the orienting reflex (OR). However, the OR is not a unitary reflex, but a group of related processes that presumably correspond to different psychological functions. Although P300 and SCR are very different biologically, there is as yet a lack of evidence for their having different psychological correlates.

Book ChapterDOI
TL;DR: Although the theory originally was not designed to account for perceptual inference, it suggests a basic mechanism of inference which conforms to the structure of interpretations of objects, implying that abstract interpretations rather than representations of concrete objects or events are processed.
Abstract: The most important formal aspects of “Structural Information Theory” are discussed. The theory consists of a formal language in which physical objects and events can be described and of a set of rules with which one can predict the outcome of experiments in perception on the basis of these formal descriptions. The relation between expressions in the language and physical objects is given by the so-called semantic mapping. This mapping depends on the perceptual domain and maps rows of symbols onto physical objects. These rows of symbols, called “primitive codes”, are meaningless. Syntactic rules allow a reformulation of these primitive codes into other, formally less redundant expressions, of which the least redundant are called “end-codes”. To each end-code a primitive code, and thus an object, is added uniquely. An object has several end-codes. End-codes correspond with interpretations of objects. End-codes can be ordered according to their redundancy. This order is in accordance with the order of the preferences of human beings for the related interpretations. The strength of the preference for one interpretation over another is the quotient of the redundancy of both related end-codes. Although the theory originally was not designed to account for perceptual inference, it suggests a basic mechanism of inference which conforms to the structure of interpretations of objects. This mechanism, called structural inference, implies that abstract interpretations rather than representations of concrete objects or events are processed. Associative inference is formulated as a special case of structural inference; object constancy is conceived as interpretation constancy.
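A toy sketch in the spirit of the syntactic rules described above: one rule (iteration coding) rewrites a primitive code into a less redundant expression. The notation and the single rule are simplifications for illustration; the actual theory also includes symmetry and alternation rules and a formal redundancy measure.

```python
from itertools import groupby

# Rewrite runs of identical symbols as n*(symbol), reducing redundancy
# in the primitive code.
def iteration_code(symbols):
    parts = []
    for sym, run in groupby(symbols):
        n = len(list(run))
        parts.append(f"{n}*({sym})" if n > 1 else sym)
    return " ".join(parts)

print(iteration_code("aaaabba"))  # 4*(a) 2*(b) a
```

In the theory, applying all such rules exhaustively and keeping the least redundant results yields the end-codes whose redundancy ordering predicts interpretation preferences.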

Book ChapterDOI
TL;DR: In this article, the importance, legitimacy and role of second-order probabilities are discussed and two descriptive models of the use of secondorder probabilities in decisions are presented, and the results of two empirical studies of the effects of second order probabilities upon the rank orderings of bets are summarized briefly.
Abstract: The importance, legitimacy and role of second-order probabilities are discussed. Two descriptive models of the use of second-order probabilities in decisions are presented. The results of two empirical studies of the effects of second-order probabilities upon the rank orderings of bets are summarized briefly. The bets were of three basic types and involved a wide variety of first- and second-order probabilities as subjectively assessed by the subjects. Support was obtained for the assumption that the majority of subjects make use of one model or the other. It is suggested that greater attention should be paid to second-order probabilities, both from a normative and descriptive standpoint.
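A minimal sketch, with made-up numbers, of why second-order probabilities can matter for choices among bets: two second-order distributions that differ in spread can share the same mean first-order probability, so any mean-only summary cannot distinguish them.

```python
# Mean of a second-order distribution over the first-order probability p
# of winning a bet; weights need not be normalized.

def mean_probability(second_order):
    total = sum(second_order.values())
    return sum(p * w for p, w in second_order.items()) / total

vague = {0.2: 1, 0.5: 2, 0.8: 1}  # unsure what p itself is
sharp = {0.5: 1}                  # certain that p = 0.5
print(mean_probability(vague), mean_probability(sharp))  # 0.5 0.5
```

Whether and how subjects' rank orderings of such bets diverge is exactly what the two descriptive models and the reported studies address.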

Book ChapterDOI
Frank Rösler
TL;DR: In this paper, some epistemological problems of ERP research are discussed and the question of how the functional state of the brain, which manifests itself in an endogenous ERP component, can be defined exactly is addressed.
Abstract: The first part of this chapter deals with some epistemological problems of ERP research and tackles the question of how the functional state of the brain, which manifests itself in an endogenous ERP component, can be defined exactly. In the second part some methodological issues are discussed which are critical for the definition and interpretation of endogenous ERP components. Both themes, which are closely interrelated, are elaborated by reviewing material accumulated on “P300”.

Book ChapterDOI
TL;DR: In this paper, three paradigms are identified as crucial to interpretations of results in most studies of heuristics and biases in probabilistic thinking, including the view of people as "intellectual cripples", who exhibit severe and systematic biases in making judgements, is shown to be a value judgement on the part of the investigator.
Abstract: Three paradigms are identified as crucial to interpretations of results in most studies of heuristics and biases in probabilistic thinking. The paradigms are criticised as being so limited and inadequate that generalisations from current research on heuristics and biases cannot be justified. In particular, the view of people as ‘intellectual cripples’, who exhibit severe and systematic biases in making judgements, is shown to be a value judgement on the part of the investigator. The implicit acceptance of the paradigms is shown to have created four problems in current research. An alternative perspective, the generation paradigm, emphasizes the role of problem structuring, in particular the subject's internal representation of the task, within which information is processed. The generation paradigm sees decision making and the forming of judgements as dynamic, generative processes, conducted interactively between people within a social and cultural context. From this perspective, research would attempt to discover the conditions under which people can do well, aiding rather than de-biasing procedures would be investigated, and expected utility theory would be seen as neither a normative nor descriptive model, but rather as a guide to thinking that polices consistency within a small-world representation of a problem.

Book ChapterDOI
TL;DR: The authors reviewed the literature to summarize some of the effects for which any complete theory must account and to indicate problems with some experiments that become apparent when considering theoretical explanations, and concluded that a lack of rigorous theory development is the most serious problem.
Abstract: The recent marked shift in empirical research on judgment, from testing normative models to investigating people's biases and heuristics, has been extremely valuable in many respects. However, it also has certain shortcomings, as would be expected with any major change in research focus. There are three interrelated problems: (1) an almost excessive concern with the three heuristics of representativeness, availability, and anchoring-and-adjustment; (2) a tendency to treat the classification of heuristics as explanation; and (3) a lack of rigorous theory development. The lack of rigorous theory development is the most serious problem. This chapter reviews the literature to summarize some of the effects for which any complete theory must account and to indicate problems with some experiments that become apparent when considering theoretical explanations.

Book ChapterDOI
TL;DR: In this article, three kinds of endogenous brain event-related potentials are described: an N200 wave peaking in the parieto-occipital region, the duration of which increases with perceptual processing, a biphasic N2-P3a complex peaking with the end of stimulus categorization, and a parietal P3b wave generated mostly after motor response.
Abstract: Three kinds of endogenous brain event-related potentials are described: an N200 wave peaking in the parieto-occipital region, the duration of which increases with perceptual processing; a biphasic N2-P3a complex peaking in the central areas, probably related to “active” orienting and overlapping in onset with the terminal phase of the parieto-occipital N200; and a parietal P3b wave generated mostly after the motor response. These findings suggest that the process reflected by N2-P3a is dependent upon a cognitive decision indexed on line by the parieto-occipital N200 and develops concurrently with the end of stimulus categorization, the parietal P3b being generated after these stages.

Book ChapterDOI
TL;DR: In this article, the initial phase of decision processes is conceptualized as the development of a structural representation of relevant knowledge and goals are viewed as playing an important role in representing decision problems when they have some specific content and are not purely formal.
Abstract: In this paper the initial phase of decision processes is conceptualized as the development of a structural representation of relevant knowledge. Goals are viewed as playing an important role in representing decision problems when they have some specific content and are not purely formal (e.g., maximize SEU). A network model is proposed for the representation of goals and actions, and several assumptions are made regarding the spread of activation through the network. In an experiment, hypotheses about the effects of two factors were investigated: goal explicitness (E) was varied by presenting to Ss goal hierarchies of different specificity (one to three levels), and goal importance (R) was varied by letting Ss either rank-order goals with respect to their personal priorities, or not. The results show that the number of actions generated increases with the degree of goal explicitness, thus supporting the Ss' creative search process, whereas the number of actions is lower for Ss who focus on their own values compared to Ss who do not, thus pointing to ego involvement as a factor restricting creativity. On the other hand, the actions generated by the personally involved group were rated higher on goal-achievement scales than the actions generated by the other group. The results are in accordance with the model, which, however, needs elaboration.
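The spread of activation through a goal-action network can be sketched in a few lines. This is a minimal illustration in the spirit of the model described, not the chapter's own formalization: the network topology, link weights, number of propagation steps, and generation threshold are all invented assumptions.

```python
# Hedged sketch of spreading activation in a goal-action network:
# goal nodes pass activation along weighted links to subgoals and
# actions, and actions whose activation clears a threshold count as
# "generated". All names, weights, and the threshold are illustrative.

def spread(links, sources, steps=2):
    """Propagate activation from `sources` through `links` for a fixed
    number of steps. `links` maps node -> list of (neighbour, weight);
    `sources` maps node -> initial activation."""
    act = dict(sources)
    for _ in range(steps):
        nxt = dict(act)
        for node, a in act.items():
            for neighbour, weight in links.get(node, []):
                nxt[neighbour] = nxt.get(neighbour, 0.0) + a * weight
        act = nxt
    return act

# A two-level goal hierarchy feeding two candidate actions:
links = {
    "goal":      [("subgoal_1", 0.8), ("subgoal_2", 0.5)],
    "subgoal_1": [("action_a", 0.9)],
    "subgoal_2": [("action_b", 0.6), ("action_a", 0.3)],
}
act = spread(links, {"goal": 1.0})
generated = [n for n in ("action_a", "action_b") if act.get(n, 0.0) >= 0.3]
```

In this toy network a more explicit (deeper, more specific) hierarchy routes activation to more action nodes, which is one way to picture the finding that goal explicitness increases the number of actions generated.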

Book ChapterDOI
Merlin Donald
TL;DR: This chapter represents an attempt to create a model of selective attention which accounts for recent evidence derived from human electrophysiological experiments.
Abstract: This chapter represents an attempt to create a model of selective attention which accounts for recent evidence derived from human electrophysiological experiments. The definition of attention adopted here follows that of Hebb and Broadbent in emphasizing selective processing of the input as the critical variable. Attention in this definition is concerned with selective perception and not with general alertness, physiological activation, motor readiness, or response preference. It is concerned with how the organism can apparently exclude certain stimuli from awareness and memory, while focusing on other equally salient aspects of the environment. Stimuli singled out by attention are analyzed, rehearsed, integrated into the current perceptual interpretation of the environment, and stored in memory in a selective manner. Rejected or unattended channels of input receive more limited processing, and in a sense the problem of selective attention can be reduced to one of comparing the processing of attended and rejected inputs. How are inputs excluded, at what level, and what does attention add?