
Showing papers published by Carnegie Mellon University in 1995


Journal ArticleDOI
TL;DR: The account presented here suggests that memories are first stored via synaptic changes in the hippocampal system, that these changes support reinstatement of recent memories in the neocortex, that neocortical synapses change a little on each reinstatement, and that remote memory is based on accumulated neocortical changes.
Abstract: Damage to the hippocampal system disrupts recent memory but leaves remote memory intact. The account presented here suggests that memories are first stored via synaptic changes in the hippocampal system, that these changes support reinstatement of recent memories in the neocortex, that neocortical synapses change a little on each reinstatement, and that remote memory is based on accumulated neocortical changes. Models that learn via changes to connections help explain this organization. These models discover the structure in ensembles of items if learning of each item is gradual and interleaved with learning about other items. This suggests that the neocortex learns slowly to discover the structure in ensembles of experiences. The hippocampal system permits rapid learning of new items without disrupting this structure, and reinstatement of new memories interleaves them with others to integrate them into structured neocortical memory systems.

4,288 citations


Journal ArticleDOI
TL;DR: In this article, an alternative approach is presented that relies on the assumption that areas of true neural activity will tend to stimulate signal changes over contiguous pixels; it can improve statistical power by as much as fivefold over techniques that rely solely on adjusting per-pixel false positive probabilities.
Abstract: The typical functional magnetic resonance (fMRI) study presents a formidable problem of multiple statistical comparisons (i.e., > 10,000 in a 128 x 128 image). To protect against false positives, investigators have typically relied on decreasing the per pixel false positive probability. This approach incurs an inevitable loss of power to detect statistically significant activity. An alternative approach, which relies on the assumption that areas of true neural activity will tend to stimulate signal changes over contiguous pixels, is presented. If one knows the probability distribution of such cluster sizes as a function of per pixel false positive probability, one can use cluster-size thresholds independently to reject false positives. Both Monte Carlo simulations and fMRI studies of human subjects have been used to verify that this approach can improve statistical power by as much as fivefold over techniques that rely solely on adjusting per pixel false positive probabilities.

3,094 citations
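
To make the cluster-size idea concrete, here is a minimal Python sketch (not the authors' code) of the Monte Carlo calibration the abstract describes: simulate noise-only images at a chosen per-pixel false-positive probability, record the largest cluster that arises by chance, and pick a cluster-size threshold from that distribution. The image size, connectivity, and alpha levels are illustrative assumptions.

# Illustrative sketch (not from the paper): Monte Carlo estimate of the
# cluster-size threshold needed at a given per-pixel threshold, assuming
# independent noise pixels and 4-connectivity.
import numpy as np
from scipy.ndimage import label

def cluster_size_threshold(shape=(128, 128), per_pixel_alpha=0.01,
                           cluster_alpha=0.05, n_sims=1000, seed=0):
    rng = np.random.default_rng(seed)
    max_sizes = []
    for _ in range(n_sims):
        # Noise-only image: each pixel is "active" with the per-pixel
        # false-positive probability.
        active = rng.random(shape) < per_pixel_alpha
        labels, n = label(active)          # 4-connected components by default
        if n == 0:
            max_sizes.append(0)
        else:
            sizes = np.bincount(labels.ravel())[1:]   # drop background label 0
            max_sizes.append(sizes.max())
    # Smallest cluster size that noise alone exceeds with probability
    # below cluster_alpha.
    return int(np.quantile(max_sizes, 1 - cluster_alpha)) + 1

print(cluster_size_threshold(n_sims=200))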


Journal ArticleDOI
TL;DR: In this article, the authors examine how contracts are made and by whom, how they are violated and changed, and how business strategy relates to trends in the new social contract.
Abstract: Introduction. Contracting: A Modern Dilemma. Contract Making. The Contract Makers. Contemporary Contracts. Violating the Contract. Changing the Contract. Business Strategy and Contracts. Trends in the New Social Contract.

2,550 citations


Journal ArticleDOI
TL;DR: It is shown that if Σ i(i − 2)λi > 0, then such graphs almost surely have a giant component, while if Σ i(i − 2)λi < 0, then almost surely all components in such graphs are small.
Abstract: Given a sequence of nonnegative real numbers λ0, λ1, … which sum to 1, we consider random graphs having approximately λi n vertices of degree i. Essentially, we show that if Σ i(i − 2)λi > 0, then such graphs almost surely have a giant component, while if Σ i(i − 2)λi < 0, then almost surely all components in such graphs are small. We can apply these results to Gn,p, Gn,M, and other well-known models of random graphs. There are also applications related to the chromatic number of sparse random graphs. © 1995 Wiley Periodicals, Inc.

2,494 citations
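
The giant-component criterion is easy to evaluate numerically. The toy Python sketch below (illustrative, not from the paper) computes Q = Σ i(i − 2)λi for a given degree distribution and checks its sign.

# Toy illustration: evaluate Q = sum_i i*(i-2)*lambda_i for an asymptotic
# degree distribution (lam[i] = fraction of vertices of degree i).
def molloy_reed_Q(lam):
    return sum(i * (i - 2) * p for i, p in enumerate(lam))

# 3-regular graphs: Q = 3 > 0, so a giant component emerges almost surely.
print(molloy_reed_Q([0, 0, 0, 1.0]))   # 3.0
# Half degree-1, half degree-2: Q = -0.5 < 0, so all components are small.
print(molloy_reed_Q([0, 0.5, 0.5]))    # -0.5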


Book ChapterDOI
09 Jul 1995
TL;DR: The results show that a learning algorithm based on the Minimum Description Length (MDL) principle was able to raise the percentage of interesting articles to be shown to users from 14% to 52% on average.
Abstract: A significant problem in many information filtering systems is the dependence on the user for the creation and maintenance of a user profile, which describes the user's interests. NewsWeeder is a netnews-filtering system that addresses this problem by letting the user rate his or her interest level for each article being read (1-5), and then learning a user profile based on these ratings. This paper describes how NewsWeeder accomplishes this task, and examines the alternative learning methods used. The results show that a learning algorithm based on the Minimum Description Length (MDL) principle was able to raise the percentage of interesting articles to be shown to users from 14% to 52% on average. Further, this performance significantly outperformed (by 21%) one of the most successful techniques in Information Retrieval (IR), term-frequency/inverse-document-frequency (tf-idf) weighting.

2,234 citations
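
As context for the tf-idf baseline the abstract compares against, here is a minimal, illustrative tf-idf weighting sketch in Python; it is not NewsWeeder's MDL-based learner, and the tokenized documents are made up.

# Minimal tf-idf sketch (illustrative; not NewsWeeder's implementation).
import math
from collections import Counter

def tfidf(docs):
    """docs: list of token lists. Returns one {term: weight} dict per document."""
    n_docs = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (count / len(doc)) * math.log(n_docs / df[t])
                        for t, count in tf.items()})
    return weights

docs = [["planning", "graph", "planner"],
        ["graph", "random", "component"],
        ["planner", "domain", "graph"]]
print(tfidf(docs)[0])   # terms appearing in every document get weight 0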


Journal ArticleDOI
TL;DR: A user's guide to the PVM (Parallel Virtual Machine) system covering heterogeneous network computing, the PVM programming interface, basic parallel programming techniques, example programs, how PVM works, and troubleshooting, from installation through debugging and tracing.
Abstract: Part 1 Introduction: heterogeneous network computing, trends in distributed computing, PVM overview, other packages. Part 2 The PVM system. Part 3 Using PVM: how to obtain the PVM software, setup to use PVM, setup summary, starting PVM, common startup problems, running PVM programs, PVM console details, host file options. Part 4 Basic programming techniques: common parallel programming paradigms, workload allocation, porting existing applications to PVM. Part 5 PVM user interface: process control, information, dynamic configuration, signalling, setting and getting options, message passing, dynamic process groups. Part 6 Program examples: fork-join, dot product, failure, matrix multiply, one-dimensional heat equation. Part 7 How PVM works: components, messages, PVM daemon, libpvm library, protocols, message routing, task environment, console program, resource limitations, multiprocessor systems. Part 8 Advanced topics: XPVM, porting PVM to new architectures. Part 9 Troubleshooting: getting PVM installed, getting PVM running, compiling applications, running applications, debugging and tracing, debugging the system. Appendices: history of PVM versions, PVM 3 routines.

2,060 citations


Proceedings Article
20 Aug 1995
TL;DR: A new approach to planning in STRIPS-like domains based on constructing and analyzing a compact structure the authors call a Planning Graph is introduced, and a new planner, Graphplan, is described that uses this paradigm.
Abstract: We introduce a new approach to planning in STRIPS-like domains based on constructing and analyzing a compact structure we call a Planning Graph. We describe a new planner, Graphplan, that uses this paradigm. Graphplan always returns a shortest-possible partial-order plan, or states that no valid plan exists. We provide empirical evidence in favor of this approach, showing that Graphplan outperforms the total-order planner, Prodigy, and the partial-order planner, UCPOP, on a variety of interesting natural and artificial planning problems. We also give empirical evidence that the plans produced by Graphplan are quite sensible. Since searches made by this approach are fundamentally different from the searches of other common planning methods, they provide a new perspective on the planning problem.

1,911 citations
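
A heavily simplified sketch of the forward phase of planning-graph construction is shown below, under strong assumptions: only positive preconditions and add effects, no mutual-exclusion reasoning, and no backward plan extraction, all of which the real Graphplan algorithm includes. The actions and propositions are illustrative.

# Highly simplified planning-graph expansion (illustrative only: Graphplan
# also computes mutual-exclusion relations and extracts plans by backward
# search, both omitted here).
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    pre: frozenset    # precondition propositions
    add: frozenset    # add effects

def expand(props, actions, levels=3):
    """Grow proposition levels by applying every action whose preconditions
    are already present, plus 'no-op' persistence of existing propositions."""
    layers = [frozenset(props)]
    for _ in range(levels):
        current = layers[-1]
        nxt = set(current)                     # no-ops carry propositions forward
        for a in actions:
            if a.pre <= current:               # action is applicable at this level
                nxt |= a.add
        layers.append(frozenset(nxt))
        if layers[-1] == current:              # the graph has leveled off
            break
    return layers

acts = [Action("move_A_B", frozenset({"at_A"}), frozenset({"at_B"})),
        Action("move_B_C", frozenset({"at_B"}), frozenset({"at_C"}))]
for i, layer in enumerate(expand({"at_A"}, acts)):
    print(i, sorted(layer))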


Journal ArticleDOI
TL;DR: The 10-year history of tutor development based on the ACT theory is reviewed, and a new system for developing and deploying tutors is described; tutors are being built in this system to meet the National Council of Teachers of Mathematics (NCTM) standards for high-school mathematics in an urban setting.
Abstract: This paper reviews the 10-year history of tutor development based on the ACT theory (Anderson, 1983, 1993). We developed production system models in ACT of how students solved problems in LISP, geometry, and algebra. Computer tutors were developed around these cognitive models. Construction of these tutors was guided by a set of eight principles loosely based on the ACT theory. Early evaluations of these tutors usually but not always showed significant achievement gains. Best-case evaluations showed that students could achieve at least the same level of proficiency as conventional instruction in one third the time. Empirical studies showed that students were learning skills in production-rule units and that the best tutorial interaction style was one in which the tutor provides immediate feedback, consisting of short and directed error messages. The tutors appear to work better if they present themselves to students as nonhuman tools to assist learning rather than as emulations of human tutors. Students working with these tutors display transfer to other environments to the degree that they can map the tutor environment into the test environment. These experiences have coalesced into a new system for developing and deploying tutors. This system involves first selecting a problem-solving interface, then constructing a curriculum under the guidance of a domain expert, then designing a cognitive model for solving problems in that environment, then building instruction around the productions in that model, and finally deploying the tutor in the classroom. New tutors are being built in this system to achieve the NCTM standards for high school mathematics in an urban setting. (http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA312246)

1,826 citations


Journal ArticleDOI
TL;DR: An effort to model students' changing knowledge state during skill acquisition is described, along with a series of studies that examine the empirical validity of knowledge tracing and have led to modifications in the process.
Abstract: This paper describes an effort to model students' changing knowledge state during skill acquisition. Students in this research are learning to write short programs with the ACT Programming Tutor (APT). APT is constructed around a production rule cognitive model of programming knowledge, called theideal student model. This model allows the tutor to solve exercises along with the student and provide assistance as necessary. As the student works, the tutor also maintains an estimate of the probability that the student has learned each of the rules in the ideal model, in a process calledknowledge tracing. The tutor presents an individualized sequence of exercises to the student based on these probability estimates until the student has ‘mastered’ each rule. The programming tutor, cognitive model and learning and performance assumptions are described. A series of studies is reviewed that examine the empirical validity of knowledge tracing and has led to modifications in the process. Currently the model is quite successful in predicting test performance. Further modifications in the modeling process are discussed that may improve performance levels.

1,668 citations
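
The per-rule probability update described in the abstract is commonly formulated with four parameters (prior knowledge, guess, slip, and learning rate). The Python sketch below shows that standard knowledge-tracing update; the parameter values are illustrative, not the ones fitted for the ACT Programming Tutor.

# Standard knowledge-tracing update (parameter values are illustrative).
def knowledge_trace(p_known, correct, p_guess=0.2, p_slip=0.1, p_learn=0.3):
    """Update the probability that a rule is known after one observed attempt,
    then apply the learning transition."""
    if correct:
        evidence = p_known * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_known) * p_guess)
    else:
        evidence = p_known * p_slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
    # The rule may also be learned at this opportunity even if it was unknown.
    return posterior + (1 - posterior) * p_learn

p = 0.4   # prior probability that the student already knows the rule
for outcome in [True, True, False, True]:
    p = knowledge_trace(p, outcome)
    print(round(p, 3))   # mastery is typically declared above a threshold, e.g. 0.95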


Journal ArticleDOI
TL;DR: In this article, the authors examined the acquisition, depreciation and transfer of knowledge acquired through learning by doing in service organizations and found evidence of learning in these service organizations: as the organizations gain experience in production, the unit cost of production declines significantly.
Abstract: The paper examines the acquisition, depreciation and transfer of knowledge acquired through learning by doing in service organizations. The analysis is based on weekly data collected over a one and a half year period from 36 pizza stores located in Southwestern Pennsylvania. The 36 stores, which are franchised from the same corporation, are owned by 10 different franchisees. We find evidence of learning in these service organizations: as the organizations gain experience in production, the unit cost of production declines significantly. Knowledge acquired through learning by doing is found to depreciate rapidly in these organizations. Knowledge is found to transfer across stores owned by the same franchisee but not across stores owned by different franchisees. Theoretical and practical implications of the work are discussed.

1,448 citations
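
One simple way to formalize learning with depreciation, in the spirit of the abstract, is a knowledge stock that decays each period while accumulating new output, with unit cost falling as a power of that stock. The Python sketch below is illustrative only; the retention rate, learning exponent, and output series are assumptions, not the paper's estimates.

# Illustrative model (not the paper's estimated specification): a knowledge
# stock that depreciates weekly and a unit cost that falls as a power of it.
def simulate(weekly_output, retention=0.8, base_cost=10.0, learning_exponent=0.3):
    knowledge, costs = 1.0, []
    for q in weekly_output:
        knowledge = retention * knowledge + q        # depreciation + new experience
        costs.append(base_cost * knowledge ** (-learning_exponent))
    return costs

# Unit cost falls with cumulating output and creeps back up when output stops.
print([round(c, 2) for c in simulate([50, 60, 55, 0, 0, 40])])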


Journal ArticleDOI
TL;DR: This paper proposes and tests a new process-oriented methodology for ex post measurement to audit IT impacts on a strategic business unit (SBU) or profit center's performance, and shows significant positive impacts of IT at the intermediate level.
Abstract: An important management question today is whether the anticipated economic benefits of Information Technology (IT) are being realized. In this paper, we consider this problem to be measurement related, and propose and test a new process-oriented methodology for ex post measurement to audit IT impacts on a strategic business unit (SBU) or profit center's performance. The IT impacts on a given SBU are measured relative to a group of SBUs in the industry. The methodology involves a two-stage analysis of intermediate and higher level output variables that also accounts for industry- and economy-wide exogenous variables for tracing and measuring IT contributions. The data for testing the proposed model were obtained from SBUs in the manufacturing sector. Our results show significant positive impacts of IT at the intermediate level. The theoretical contribution of the study is a methodology that attempts to circumvent some of the measurement problems in this domain. It also provides a practical management tool to address the question of why or why not certain IT impacts occur. Additionally, through its process orientation, the suggested approach highlights key variables that may require managerial attention and subsequent action.

Journal ArticleDOI
TL;DR: In this article, the authors show that the Schwarz criterion provides an approximate solution to Bayesian testing problems, at least when the hypotheses are nested and the prior on ψ is Normal.
Abstract: To compute a Bayes factor for testing H0: ψ = ψ0 in the presence of a nuisance parameter β, priors under the null and alternative hypotheses must be chosen. As in Bayesian estimation, an important problem has been to define automatic, or “reference,” methods for determining priors based only on the structure of the model. In this article we apply the heuristic device of taking the amount of information in the prior on ψ equal to the amount of information in a single observation. Then, after transforming β to be “null orthogonal” to ψ, we take the marginal priors on β to be equal under the null and alternative hypotheses. Doing so, and taking the prior on ψ to be Normal, we find that the log of the Bayes factor may be approximated by the Schwarz criterion with an error of order Op(n−½), rather than the usual error of order Op(1). This result suggests the Schwarz criterion should provide sensible approximate solutions to Bayesian testing problems, at least when the hypotheses are nested.
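
For reference, the Schwarz criterion the abstract refers to can be written (in standard notation, which may differ from the article's) as:

\[
S \;=\; \log p\bigl(y \mid \hat\theta_1, H_1\bigr)
      \;-\; \log p\bigl(y \mid \hat\theta_0, H_0\bigr)
      \;-\; \tfrac{1}{2}\,(d_1 - d_0)\,\log n ,
\]

where the θ̂ are maximum-likelihood estimates under each hypothesis, d1 and d0 are the numbers of free parameters, and n is the sample size. The abstract's result is that, with the unit-information prior described there, log B10 = S + Op(n−½), whereas for generic priors the error is only Op(1).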

Journal ArticleDOI
01 Oct 1995
TL;DR: Several service disciplines that are proposed in the literature to provide per-connection end-to-end performance guarantees in packet-switching networks are surveyed and a general framework for studying and comparing these disciplines is presented.
Abstract: While today's computer networks support only best-effort service, future packet-switching integrated-services networks will have to support real-time communication services that allow clients to transport information with performance guarantees expressed in terms of delay, delay jitter, throughput, and loss rate. An important issue in providing guaranteed performance service is the choice of the packet service discipline at switching nodes. In this paper, we survey several service disciplines that are proposed in the literature to provide per-connection end-to-end performance guarantees in packet-switching networks. We describe their mechanisms, their similarities and differences, and the performance guarantees they can provide. Various issues and tradeoffs in designing service disciplines for guaranteed performance service are discussed, and a general framework for studying and comparing these disciplines is presented.

Posted Content
TL;DR: The random field models and techniques introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated.
Abstract: We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the Kullback-Leibler divergence between the model and the empirical distribution of the training data. A greedy algorithm determines how features are incrementally added to the field and an iterative scaling algorithm is used to estimate the optimal values of the weights. The statistical modeling techniques introduced in this paper differ from those common to much of the natural language processing literature since there is no probabilistic finite state or push-down automaton on which the model is built. Our approach also differs from the techniques common to the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated. Relations to other learning approaches including decision trees and Boltzmann machines are given. As a demonstration of the method, we describe its application to the problem of automatic word classification in natural language processing. Key words: random field, Kullback-Leibler divergence, iterative scaling, divergence geometry, maximum entropy, EM algorithm, statistical learning, clustering, word morphology, natural language processing
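
The weight-fitting step the abstract describes (iterative scaling of feature weights to match empirical expectations, i.e., to minimize the KL divergence) can be illustrated on a toy discrete space. The Python sketch below implements generalized iterative scaling only; the paper's greedy feature-induction step is omitted, and the states, features, and target expectations are made up.

# Toy generalized iterative scaling (illustrative; feature induction omitted).
import numpy as np

def gis(features, empirical, iters=200):
    """features: (n_states, n_features) 0/1 matrix; empirical: target feature
    expectations under the training distribution."""
    # GIS assumes a constant total feature count per state; add a slack
    # feature so every row sums to C.
    C = features.sum(axis=1).max()
    F = np.hstack([features, (C - features.sum(axis=1))[:, None]])
    target = np.append(empirical, C - empirical.sum())
    lam = np.zeros(F.shape[1])
    for _ in range(iters):
        logp = F @ lam
        p = np.exp(logp - logp.max())
        p /= p.sum()                      # model distribution over states
        lam += np.log(target / (p @ F)) / C
    return p, lam

# Three states carrying two binary features; match expectations [0.7, 0.5].
F = np.array([[1, 0], [1, 1], [0, 1]], dtype=float)
p, lam = gis(F, np.array([0.7, 0.5]))
print(np.round(p, 3), np.round(p @ F, 3))   # p -> [0.5, 0.2, 0.3]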

Journal ArticleDOI
TL;DR: This article examined the usefulness of placing risk propensity and risk perception in a more central role in models of risky decision making than has been done previously, and found that the importance of risk perception and propensity for risk in decision making has been overlooked.
Abstract: The reported research examined the usefulness of placing risk propensity and risk perception in a more central role in models of risky decision making than has been done previously. Specifically, t...

Journal ArticleDOI
TL;DR: A personal (even confessional) history of the field over this period is offered, and a series of developmental stages is identified; progress through each stage involves consolidating the skills needed to execute it and learning its limitations.
Abstract: Over the past twenty years, risk communication researchers and practitioners have learned some lessons, often at considerable personal price. For the most part, the mistakes that they have made have been natural, even intelligent ones. As a result, the same pitfalls may tempt newcomers to the field. This essay offers a personal (even confessional) history of the field over this period. It identifies a series of developmental stages. Progress through the stages involves consolidating the skills needed to execute it and learning its limitations. Knowing about their existence might speed the learning process and alert one to how much there still is to learn.

Journal ArticleDOI
TL;DR: Since its inception, the software industry has been in crisis, and problems with software systems are common and highly publicized occurrences.
Abstract: Since its inception, the software industry has been in crisis. As Balzer noted 20 years ago, “[Software] is unreliable, delivered late, unresponsive to change, inefficient, and expensive … and has been for the past 20 years” [4]. In a survey of software contractors and government contract officers, over half of the respondents believed that calendar overruns, cost overruns, code that required in-house modifications before being usable, and code that was difficult to modify were common problems in the software projects they supervised [22]. Even today, problems with software systems are common and highly publicized occurrences.

Proceedings Article
20 Aug 1995
TL;DR: An extension to D* that focusses the repairs to significantly reduce the total time required for the initial path calculation and subsequent replanning operations for dynamic environments where arc costs can change during the traverse of the solution path.
Abstract: Finding the lowest-cost path through a graph is central to many problems, including route planning for a mobile robot. If arc costs change during the traverse, then the remainder of the path may need to be replanned. This is the case for a sensor-equipped mobile robot with imperfect information about its environment. As the robot acquires additional information via its sensors, it can revise its plan to reduce the total cost of the traverse. If the prior information is grossly incomplete, the robot may discover useful information in every piece of sensor data. During replanning, the robot must either wait for the new path to be computed or move in the wrong direction; therefore, rapid replanning is essential. The D* algorithm (Dynamic A*) plans optimal traverses in real time by incrementally repairing paths to the robot's state as new information is discovered. This paper describes an extension to D* that focusses the repairs to significantly reduce the total time required for the initial path calculation and subsequent replanning operations. This extension completes the development of the D* algorithm as a full generalization of A* for dynamic environments where arc costs can change during the traverse of the solution path.
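
For contrast with the incremental repair described above, the Python sketch below shows the naive alternative: plain A* re-run from scratch every time an arc cost changes. It is not an implementation of D*; the tiny graph, the costs, and the zero heuristic are illustrative.

# Naive baseline: full A* replan after every arc-cost change (D* instead
# repairs the previous search incrementally). Graph and costs are made up.
import heapq

def astar(graph, cost, start, goal, h=lambda n: 0):
    frontier = [(h(start), 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if best_g.get(node, float("inf")) <= g:
            continue                      # stale queue entry
        best_g[node] = g
        for nbr in graph[node]:
            g2 = g + cost[(node, nbr)]
            heapq.heappush(frontier, (g2 + h(nbr), g2, nbr, path + [nbr]))
    return float("inf"), None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
cost = {("S", "A"): 1, ("S", "B"): 4, ("A", "G"): 1, ("B", "G"): 1}
print(astar(graph, cost, "S", "G"))   # (2, ['S', 'A', 'G'])
cost[("A", "G")] = 10                 # a sensor discovers the arc is costly
print(astar(graph, cost, "S", "G"))   # full replan from scratch: (5, ['S', 'B', 'G'])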

Journal ArticleDOI
TL;DR: In this paper, the task performance of laboratory work groups whose members were trained together or alone was investigated, and the mediating effects of various cognitive and social factors on the relationship between group training and performance were explored.
Abstract: The task performance of laboratory work groups whose members were trained together or alone was investigated. At an initial training session, subjects were taught to assemble transistor radios. Some were trained in groups, others individually. A week later, subjects were asked to recall the assembly procedure and actually assemble a radio. Everyone performed these tasks in small work groups, each containing three persons of the same gender. Subjects in the group training condition worked in the same groups where they were trained, whereas subjects in the individual training condition worked in newly formed groups. Groups whose members were trained together recalled more about the assembly procedure and produced better-quality radios than groups whose members were trained alone. Through an analysis of videotape data, the mediating effects of various cognitive and social factors on the relationship between group training and performance were explored. The results indicated that group training improved group...

Journal ArticleDOI
TL;DR: This study estimates the dollar benefits of improved information exchanges between Chrysler and its suppliers that result from using EDI and concludes that system wide, this translates to annual savings of $220 million for the company.
Abstract: A great deal of controversy exists about the impact of information technology on firm performance. While some authors have reported positive impacts, others have found negative or no impacts. This study focuses on Electronic Data Interchange (EDI) technology. Many of the problems in this line of research are overcome in this study by conducting a careful analysis of the performance data of the past decade gathered from the assembly centers of Chrysler Corporation. This study estimates the dollar benefits of improved information exchanges between Chrysler and its suppliers that result from using EDI. After controlling for variations in operational complexity arising from mix, volume, parts complexity, model, and engineering changes, the savings per vehicle that result from improved information exchanges are estimated to be about $60. Including the additional savings from electronic document preparation and transmission, the total benefits of EDI per vehicle amount to over $100. System wide, this translates to annual savings of $220 million for the company.

Journal ArticleDOI
01 Jun 1995 - Futures
TL;DR: Regions are becoming focal points for knowledge creation and learning in the new age of global, knowledge-intensive capitalism, as they in effect become learning regions as discussed by the authors, and these learning regions function as collectors and repositories of knowledge and ideas.

Journal ArticleDOI
TL;DR: The purpose is to support the abstractions used in practice by software designers; the paper sketches a model for defining architectures and presents an implementation of the basic level of that model.
Abstract: Architectures for software use rich abstractions and idioms to describe system components, the nature of interactions among the components, and the patterns that guide the composition of components into systems. These abstractions are higher level than the elements usually supported by programming languages and tools. They capture packaging and interaction issues as well as computational functionality. Well-established (if informal) patterns guide the architectural design of systems. We sketch a model for defining architectures and present an implementation of the basic level of that model. Our purpose is to support the abstractions used in practice by software designers. The implementation provides a testbed for experiments with a variety of system construction mechanisms. It distinguishes among different types of components and different ways these components can interact. It supports abstract interactions such as data flow and scheduling on the same footing as simple procedure call. It can express and check appropriate compatibility restrictions and configuration constraints. It accepts existing code as components, incurring no runtime overhead after initialization. It allows easy incorporation of specifications and associated analysis tools developed elsewhere. The implementation provides a base for extending the notation and validating the model.
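
As a loose illustration of the kind of distinction the abstract draws between component types and connector types (not the paper's actual notation or implementation), here is a small Python sketch that records typed components and connectors and checks a simple compatibility restriction.

# Illustrative data structures only: typed components whose "players" fill
# roles on typed connectors, with a basic compatibility check.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    kind: str                                       # e.g. "Filter", "Process"
    players: dict = field(default_factory=dict)     # player name -> role

@dataclass
class Connector:
    name: str
    kind: str                                       # e.g. "Pipe", "ProcedureCall"
    accepts: set = field(default_factory=set)       # roles this connector allows

def check(connections, connectors):
    """connections: list of (component, player, connector_name) attachments."""
    errors = []
    for comp, player, conn_name in connections:
        role = comp.players[player]
        if role not in connectors[conn_name].accepts:
            errors.append(f"{comp.name}.{player} ({role}) not allowed on "
                          f"{conn_name} ({connectors[conn_name].kind})")
    return errors

src = Component("splitter", "Filter", {"out": "source"})
dst = Component("merger", "Filter", {"in": "sink"})
conns = {"p1": Connector("p1", "Pipe", {"source", "sink"})}
print(check([(src, "out", "p1"), (dst, "in", "p1")], conns))   # [] -> compatible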

Posted Content
TL;DR: This article developed a formal grammatical system called a link grammar, showed how English grammar can be encoded in such a system, and gave algorithms for efficiently parsing with a link grammar.
Abstract: We develop a formal grammatical system called a link grammar, show how English grammar can be encoded in such a system, and give algorithms for efficiently parsing with a link grammar. Although the expressive power of link grammars is equivalent to that of context free grammars, encoding natural language grammars appears to be much easier with the new system. We have written a program for general link parsing and written a link grammar for the English language. The performance of this preliminary system -- both in the breadth of English phenomena that it captures and in the computational resources used -- indicates that the approach may have practical uses as well as linguistic significance. Our program is written in C and may be obtained through the internet.
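
Two of the well-formedness conditions a linkage must satisfy, planarity (links drawn above the sentence may not cross) and connectivity, are easy to check for a candidate set of links. The toy Python sketch below does only that; the real parser also matches left- and right-pointing connectors from a dictionary, which is omitted here, and the example sentence and links are made up.

# Toy checks of two link-grammar conditions: planarity and connectivity.
def planar(links):
    """links: pairs (i, j) of word indices with i < j; no two links may cross."""
    return not any(a < c < b < d or c < a < d < b
                   for a, b in links for c, d in links)

def connected(n_words, links):
    """All words must be joined into a single linked structure (union-find)."""
    parent = list(range(n_words))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in links:
        parent[find(a)] = find(b)
    return len({find(i) for i in range(n_words)}) == 1

# "the cat chased a dog": links det(0-1), subject(1-2), object(2-4), det(3-4).
links = [(0, 1), (1, 2), (2, 4), (3, 4)]
print(planar(links), connected(5, links))   # True True
print(planar([(0, 2), (1, 3)]))             # False: these links cross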

Proceedings ArticleDOI
03 Dec 1995
TL;DR: This paper shows how to use application-disclosed access patterns (hints) to expose and exploit I/O parallelism and to allocate dynamically file buffers among three competing demands: prefetching hinted blocks, caching hinted blocks for reuse, and caching recently used data for unhinted accesses.
Abstract: The underutilization of disk parallelism and file cache buffers by traditional file systems induces I/O stall time that degrades the performance of modern microprocessor-based systems. In this paper, we present aggressive mechanisms that tailor file system resource management to the needs of I/O-intensive applications. In particular, we show how to use application-disclosed access patterns (hints) to expose and exploit I/O parallelism and to allocate dynamically file buffers among three competing demands: prefetching hinted blocks, caching hinted blocks for reuse, and caching recently used data for unhinted accesses. Our approach estimates the impact of alternative buffer allocations on application execution time and applies a cost-benefit analysis to allocate buffers where they will have the greatest impact. We implemented informed prefetching and caching in DEC's OSF/1 operating system and measured its performance on a 150 MHz Alpha equipped with 15 disks running a range of applications including text search, 3D scientific visualization, relational database queries, speech recognition, and computational chemistry. Informed prefetching reduces the execution time of the first four of these applications by 20% to 87%. Informed caching reduces the execution time of the fifth application by up to 30%.
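
The buffer-allocation idea can be caricatured as a greedy assignment of each buffer to whichever consumer currently reports the largest marginal benefit. The Python sketch below is only that caricature; the paper derives its benefit and cost estimates from a system model, and the three consumers' benefit numbers here are invented.

# Caricature of cost-benefit buffer allocation: give each buffer to the
# consumer with the largest remaining marginal benefit (benefit numbers are
# invented, not derived from the paper's model).
import heapq

def allocate(n_buffers, marginal_benefit):
    """marginal_benefit: {consumer: [benefit of 1st buffer, 2nd, ...]}."""
    granted = {c: 0 for c in marginal_benefit}
    heap = [(-b[0], c) for c, b in marginal_benefit.items() if b]
    heapq.heapify(heap)
    for _ in range(n_buffers):
        if not heap:
            break
        _, c = heapq.heappop(heap)
        granted[c] += 1
        if granted[c] < len(marginal_benefit[c]):
            heapq.heappush(heap, (-marginal_benefit[c][granted[c]], c))
    return granted

benefits = {"prefetch_hinted": [9, 7, 2, 1],
            "cache_hinted":    [6, 5, 4, 1],
            "cache_lru":       [8, 3, 2, 2]}
print(allocate(6, benefits))   # {'prefetch_hinted': 2, 'cache_hinted': 3, 'cache_lru': 1}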


Journal ArticleDOI
TL;DR: The recently formulated WHAM method is an extension of Ferrenberg and Swendsen's multiple histogram technique for free-energy and potential of mean force calculations and provides an analysis of the statistical accuracy of the potential of mean force as well as a guide to the most efficient use of additional simulations to minimize errors.
Abstract: The recently formulated weighted histogram analysis method (WHAM)1 is an extension of Ferrenberg and Swendsen's multiple histogram technique for free-energy and potential of mean force calculations. As an illustration of the method, we have calculated the two-dimensional potential of mean force surface of the dihedrals gamma and chi in deoxyadenosine with Monte Carlo simulations using the all-atom and united-atom representation of the AMBER force fields. This also demonstrates one of the major advantages of WHAM over umbrella sampling techniques. The method also provides an analysis of the statistical accuracy of the potential of mean force as well as a guide to the most efficient use of additional simulations to minimize errors. © 1995 John Wiley & Sons, Inc.
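
A schematic version of the WHAM self-consistency iteration for combining biased histograms looks roughly like the Python sketch below; kT, the bias definitions, the convergence handling, and the toy data are simplifications and assumptions, not the paper's setup.

# Schematic WHAM iteration (simplified; fixed iteration count, kT = 1 units).
import numpy as np

def wham(hist, bias, n_samples, kT=1.0, iters=2000):
    """hist: (n_windows, n_bins) counts; bias: biasing potential w_k at each
    bin; n_samples: total samples per window."""
    beta = 1.0 / kT
    f = np.zeros(len(n_samples))               # per-window free energies
    for _ in range(iters):
        # Unbiased probability estimate combining all windows.
        den = (n_samples[:, None] * np.exp(beta * (f[:, None] - bias))).sum(axis=0)
        p = hist.sum(axis=0) / den
        p /= p.sum()
        # Self-consistent update of the window free energies.
        f = -kT * np.log((p[None, :] * np.exp(-beta * bias)).sum(axis=1))
    pmf = -kT * np.log(p / p.max())            # potential of mean force
    return p, pmf

# Tiny shape check with made-up histograms (2 windows, 4 bins).
h = np.array([[5, 3, 1, 0], [0, 2, 4, 6]], dtype=float)
w = np.array([[0.0, 0.5, 2.0, 4.0], [4.0, 2.0, 0.5, 0.0]])
print(np.round(wham(h, w, np.array([9.0, 12.0]))[1], 2))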

Journal ArticleDOI
TL;DR: In this article, a detailed analysis of the distinguishing individual characteristics, behaviors, and social circumstances from ages 10 through 32 of these four groups was carried out and the most salient findings concern the adolescence-limiteds.
Abstract: The point of departure for this paper is Nagin and Land (1993), who identified four distinctive offending trajectories in a sample of 403 British males—a group without any convictions, “adolescence-limiteds,” “high-level chronics,” and “low-level chronics.” We build upon that study with a detailed analysis of the distinguishing individual characteristics, behaviors, and social circumstances from ages 10 through 32 of these four groups. The most salient findings concern the adolescence-limiteds. By age 32 the work records of the adolescence-limiteds were indistinguishable from the never-convicted and substantially better than those of the chronic offenders. The adolescence-limiteds also seem to have established better relationships with their spouses than the chronics. The seeming reformation of the adolescence-limiteds, however, was less than complete. They continued to drink heavily and use drugs, get into fights, and commit criminal acts (according to self-reports).

Proceedings ArticleDOI
15 Sep 1995
TL;DR: It is shown that whole families of realistic motions can be derived from a single captured motion sequence using only a few keyframes to specify the motion warp.
Abstract: We describe a simple technique for editing captured or keyframed animation based on warping of the motion parameter curves. The animator interactively defines a set of keyframe-like constraints which are used to derive a smooth deformation that preserves the fine structure of the original motion. Motion clips are combined by overlapping and blending of the parameter curves. We show that whole families of realistic motions can be derived from a single captured motion sequence using only a few keyframes to specify the motion warp. Our technique makes it feasible to create libraries of reusable “clip motion.”
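
A minimal version of curve warping, fitting a smooth additive displacement through keyframe constraints while preserving the original curve's fine structure, is sketched below in Python. It covers only the offset curve; the scale curves and time warps of the paper are omitted, and the sampled motion and keyframes are made up.

# Minimal offset-curve motion warp (illustrative data; scale and time warps
# from the paper are omitted).
import numpy as np
from scipy.interpolate import CubicSpline

def warp(times, curve, key_times, key_values):
    """Deform one motion parameter curve so it passes through the keyframed
    values while keeping its fine structure."""
    original_at_keys = np.interp(key_times, times, curve)
    displacement = CubicSpline(key_times, key_values - original_at_keys,
                               bc_type="natural")
    return curve + displacement(times)

t = np.linspace(0.0, 1.0, 101)
joint_angle = np.sin(2 * np.pi * t) + 0.1 * np.sin(20 * np.pi * t)   # "captured" motion
warped = warp(t, joint_angle,
              key_times=np.array([0.0, 0.5, 1.0]),
              key_values=np.array([0.0, 1.5, 0.0]))
print(round(float(np.interp(0.5, t, warped)), 3))   # hits the keyframe value 1.5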

Book ChapterDOI
01 May 1995
TL;DR: An abstraction of the genetic algorithm, termed population-based incremental learning (PBIL), explicitly maintains the statistics contained in a GA's population but abstracts away the crossover operator and redefines the role of the population, which results in PBIL being simpler, both computationally and theoretically, than the GA.
Abstract: We present an abstraction of the genetic algorithm (GA), termed population-based incremental learning (PBIL), that explicitly maintains the statistics contained in a GA's population, but which abstracts away the crossover operator and redefines the role of the population. This results in PBIL being simpler, both computationally and theoretically, than the GA. Empirical results reported elsewhere show that PBIL is faster and more effective than the GA on a large set of commonly used benchmark problems. Here we present results on a problem custom designed to benefit both from the GA's crossover operator and from its use of a population. The results show that PBIL performs as well as, or better than, GAs carefully tuned to do well on this problem. This suggests that even on problems custom designed for GAs, much of the power of the GA may derive from the statistics maintained implicitly in its population, and not from the population itself nor from the crossover operator.
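
A compact PBIL sketch is shown below: a probability vector is sampled to produce each generation's population, and the vector is nudged toward the best sample. The learning rate, population size, generation count, and OneMax-style objective are illustrative choices, and the mutation step used in some PBIL variants is omitted.

# Compact PBIL sketch (illustrative parameters; no mutation step).
import numpy as np

def pbil(fitness, n_bits, pop_size=50, lr=0.1, generations=100, seed=0):
    rng = np.random.default_rng(seed)
    prob = np.full(n_bits, 0.5)        # the probability vector summarizes the "population"
    for _ in range(generations):
        pop = (rng.random((pop_size, n_bits)) < prob).astype(int)
        best = pop[np.argmax([fitness(x) for x in pop])]
        prob = (1 - lr) * prob + lr * best     # shift the vector toward the best sample
    return prob

p = pbil(lambda x: x.sum(), n_bits=20)   # OneMax: count of ones
print(np.round(p, 2))                    # probabilities drift toward 1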

Journal ArticleDOI
TL;DR: This paper found strong support for an on-line model of the candidate evaluation process that in contrast to memory-based models shows that citizens are responsive to campaign information, adjusting their overall evaluation of the candidates in response to their immediate assessment of campaign messages and events.
Abstract: We find strong support for an on-line model of the candidate evaluation process that, in contrast to memory-based models, shows that citizens are responsive to campaign information, adjusting their overall evaluation of the candidates in response to their immediate assessment of campaign messages and events. Over time people forget most of the campaign information they are exposed to but are nonetheless able to later recollect their summary affective evaluation of candidates, which they then use to inform their preferences and vote choice. These findings have substantive, methodological, and normative implications for the study of electoral behavior. Substantively, we show how campaign information affects voting behavior. Methodologically, we demonstrate the need to measure directly what campaign information people actually attend to over the course of a campaign and show that after controlling for the individual's on-line assessment of campaign messages, National Election Study-type recall measures prove to be spurious as explanatory variables. Finally, we draw normative implications for democratic theory of on-line processing, concluding that citizens appear to be far more responsive to campaign messages than conventional recall models suggest.