
Showing papers by "Georgia Institute of Technology" published in 1989


Journal ArticleDOI
TL;DR: In this article, the authors evaluate the use of goodness-of-fit indices for assessing the fit of structural equation models to data, reviewing their rationales and their strengths and weaknesses. They also discuss less biased estimates of goodness of fit and a relative normed-fit index for testing the fit of a structural model exclusive of the measurement model.
Abstract: Discusses how current goodness-of-fit indices fail to assess parsimony and hence disconfirmability of a model and are insensitive to misspecifications of causal relations (a) among latent variables when the measurement model with many indicators is correct and (b) when causal relations corresponding to free parameters expected to be nonzero turn out to be zero or near zero. A discussion of the philosophy of parsimony elucidates relations of parsimony to parameter estimation, disconfirmability, and goodness of fit. AGFI in LISREL is rejected. A method of adjusting goodness-of-fit indices by a parsimony ratio is described. Also discusses less biased estimates of goodness of fit and a relative normed-fit index for testing the fit of the structural model exclusive of the measurement model. By a goodness-of-fit index, in structural equation modeling, we mean an index for assessing the fit of a model to data that ranges in possible value between zero and unity, with zero indicating a complete lack of fit and unity indicating perfect fit. Although chi-square statistics are often used as goodness-of-fit indices, they range between zero and infinity, with zero indicating perfect fit and a large number indicating extreme lack of fit. We prefer to call chi-square and other indices with this property lack-of-fit indices. For a recent discussion of both lack-of-fit and goodness-of-fit indices, see Wheaton (1988). In this article we evaluate the use of goodness-of-fit indices for the assessment of the fit of structural equation models to data. Our aim is to review their rationales and to assess their strengths and weaknesses. We also consider other aspects of the problem of evaluating a structural equation model with goodness-of-fit indices. For example, are certain goodness-of-fit indices to be used only in certain stages of research (a contention of Sobel & Bohrnstedt, 1985)? Or, how biased are estimates of goodness of fit in small samples? What bearing does parsimony have on assessing the goodness of fit of the model? Can goodness-of-fit indices focus on the fit of certain aspects of a model as opposed to the fit of the overall model? For example, to what extent do current goodness-of-fit indices fail to reveal poor fits in the structural submodel among the latent variables because of good fits in the measurement model relating latent variables to manifest indicators? We describe a goodness-of-fit index now…
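
The parsimony adjustment the authors describe multiplies a goodness-of-fit index by a parsimony ratio, roughly the degrees of freedom the model retains relative to the null model. A minimal sketch of that idea; the function names and chi-square numbers below are mine, not the article's:

```python
# Sketch of a parsimony-adjusted normed fit index: a goodness-of-fit
# index multiplied by a parsimony ratio. Input values are illustrative.

def normed_fit_index(chi2_null: float, chi2_model: float) -> float:
    """Proportional reduction in lack of fit relative to the null model."""
    return (chi2_null - chi2_model) / chi2_null

def parsimony_ratio(df_model: int, df_null: int) -> float:
    """Fraction of the null model's degrees of freedom the model retains;
    heavily parameterized models score lower."""
    return df_model / df_null

def parsimonious_nfi(chi2_null, df_null, chi2_model, df_model):
    return parsimony_ratio(df_model, df_null) * normed_fit_index(chi2_null, chi2_model)

# A model can reach a normed fit index near 1 yet score poorly once the
# parsimony ratio penalizes the degrees of freedom it spent to get there.
print(parsimonious_nfi(chi2_null=900.0, df_null=45, chi2_model=30.0, df_model=25))
```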

2,582 citations


Journal ArticleDOI
TL;DR: A detailed theoretical and numerical study of stochastic resonance, based on a rate equation approach, results in an equation for the output signal-to-noise ratio as a function of the rate at which noise induces hopping between the two states.
Abstract: The concept of stochastic resonance has been introduced previously to describe a curious phenomenon in bistable systems subject to both periodic and random forcing: an increase in the input noise can result in an improvement in the output signal-to-noise ratio. In this paper we present a detailed theoretical and numerical study of stochastic resonance, based on a rate equation approach. The main result is an equation for the output signal-to-noise ratio as a function of the rate at which noise induces hopping between the two states. The manner in which the input noise strength determines this hopping rate depends on the precise nature of the bistable system. For this reason, the theory is applied to two classes of bistable systems, the double-well (continuous) system and the two-state (discrete) system. The theory is tested in detail against digital simulations.
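
The continuous bistable system in question is conventionally written as the overdamped double well xdot = x - x^3 + A*cos(omega*t) + sqrt(2D)*xi(t). A rough Euler-Maruyama sketch (parameters mine) that exhibits the noise-induced hopping whose rate drives the signal-to-noise analysis:

```python
import math, random

# Double-well system driven by a weak periodic signal plus noise:
#   dx/dt = x - x**3 + A*cos(omega*t) + sqrt(2*D)*xi(t)
# Illustrative parameters, not the article's numerical experiments.
rng = random.Random(0)
A, omega, dt, steps = 0.1, 0.05, 1e-3, 500_000

def count_hops(D: float) -> int:
    """Euler-Maruyama integration; count transitions between the wells."""
    x, side, hops = -1.0, -1, 0
    sigma = math.sqrt(2 * D * dt)
    for i in range(steps):
        t = i * dt
        x += (x - x**3 + A * math.cos(omega * t)) * dt + sigma * rng.gauss(0, 1)
        if side < 0 and x > 0.5:
            side, hops = 1, hops + 1
        elif side > 0 and x < -0.5:
            side, hops = -1, hops + 1
    return hops

# Hopping is rare for small D and frequent for large D; the resonance
# appears when the hopping rate becomes comparable to the forcing frequency.
for D in (0.05, 0.1, 0.2):
    print(f"D={D}: {count_hops(D)} hops")
```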

1,231 citations


Proceedings ArticleDOI
01 Oct 1989
TL;DR: This tutorial surveys the state of the art in executing discrete event simulation programs on a parallel computer, and focuses attention on asynchronous simulation programs where few events occur at any single point in simulated time.
Abstract: This tutorial surveys the state of the art in executing discrete event simulation programs on a parallel computer. Specifically, we focus attention on asynchronous simulation programs, where few events occur at any single point in simulated time, necessitating the concurrent execution of events occurring at different points in time. We first describe the parallel discrete event simulation problem and examine why it is so difficult. We review several simulation strategies that have been proposed and discuss the underlying ideas on which they are based. We critique existing approaches in order to clarify their respective strengths and weaknesses.
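
One family of strategies reviewed in this literature is conservative synchronization, in which a logical process executes an event only when no event with a smaller timestamp can still arrive. A toy sketch of that safety rule; the process and channel structure is my simplification, not a protocol from the tutorial:

```python
import heapq

# Toy conservative (Chandy-Misra style) rule: a logical process (LP) may
# safely execute its earliest pending event only if that event's timestamp
# is <= the minimum clock over all incoming channels.

class LP:
    def __init__(self, name):
        self.name = name
        self.pending = []          # min-heap of (timestamp, event)
        self.channel_clocks = {}   # sender -> timestamp of latest message

    def receive(self, sender, timestamp, event):
        self.channel_clocks[sender] = timestamp
        heapq.heappush(self.pending, (timestamp, event))

    def safe_to_execute(self):
        if not self.pending:
            return False
        horizon = min(self.channel_clocks.values(), default=float("inf"))
        return self.pending[0][0] <= horizon

    def execute_next(self):
        t, event = heapq.heappop(self.pending)
        print(f"{self.name} @ {t}: {event}")

lp = LP("LP1")
lp.receive("LP0", 5.0, "arrival")
lp.receive("LP2", 3.0, "departure")
while lp.safe_to_execute():
    lp.execute_next()
# Only the t=3.0 event executes: the t=5.0 event must wait until channel
# LP2 advances (e.g., via a null message) -- the blocking that conservative
# protocols must manage to avoid deadlock.
```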

1,201 citations


Journal ArticleDOI
TL;DR: A variant of the potential field method is used to produce the appropriate velocity and steering commands for the robot and demonstrates the feasibility of this approach.
Abstract: Motor schemas serve as the basic unit of behavior specification for the navigation of a mobile robot. They are multiple concurrent processes that operate in conjunction with associated perceptual...
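
As a generic illustration of the potential-field idea (not Arkin's exact motor-schema formulation), the velocity command can be the sum of a unit attraction toward the goal and repulsions from nearby obstacles; the gains and falloff law below are invented:

```python
import numpy as np

# Generic potential-field velocity command: attraction to the goal plus
# repulsion from obstacles within a sphere of influence.

def velocity_command(pos, goal, obstacles, k_att=1.0, k_rep=0.5, influence=2.0):
    v = k_att * (goal - pos) / np.linalg.norm(goal - pos)   # unit attraction
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if d < influence:                        # repulsion only when close
            v += k_rep * (1.0 / d - 1.0 / influence) * (pos - obs) / d
    return v   # magnitude -> speed command, direction -> steering command

pos = np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
obstacles = [np.array([5.0, 0.5])]
print(velocity_command(pos, goal, obstacles))
```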

1,072 citations


Proceedings Article
01 Jan 1989
TL;DR: This book presents the state of the art in case-based reasoning, with special emphasis on applying case- based reasoning to complex real-world problem-solving tasks such as medical diagnosis, design, conflict resolution, and planning.
Abstract: Case-based reasoning is one of the fastest growing areas in the field of knowledge-based systems and this book, authored by a leader in the field, is the first comprehensive text on the subject. Case-based reasoning systems are systems that store information about situations in their memory. As new problems arise, similar situations are searched out to help solve these problems. Problems are understood and inferences are made by finding the closest cases in memory, comparing and contrasting the problem with those cases, making inferences based on those comparisons, and asking questions when inferences can't be made. This book presents the state of the art in case-based reasoning. The author synthesizes and analyzes a broad range of approaches, with special emphasis on applying case-based reasoning to complex real-world problem-solving tasks such as medical diagnosis, design, conflict resolution, and planning. The author's approach combines cognitive science and engineering, and is based on analysis of both expert and common-sense tasks. Guidelines for building case-based expert systems are provided, such as how to represent knowledge in cases, how to index cases for accessibility, how to implement retrieval processes for efficiency, and how to adapt old solutions to fit new situations. This book is an excellent text for courses and tutorials on case-based reasoning. It is also a useful resource for computer professionals and cognitive scientists interested in learning more about this fast-growing field.
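
The retrieval step at the heart of such systems can be sketched as weighted nearest-neighbor matching over case features. A minimal, hypothetical illustration (the case structure, weights, and data are invented; real systems index cases for accessibility rather than scanning the whole library):

```python
# Minimal case retrieval: weighted nearest neighbor over feature dicts.

def similarity(probe, case, weights):
    score = sum(w for f, w in weights.items()
                if probe.get(f) == case["features"].get(f))
    return score / sum(weights.values())

def retrieve(probe, library, weights, k=1):
    return sorted(library, key=lambda c: -similarity(probe, c, weights))[:k]

library = [
    {"features": {"symptom": "fever", "onset": "sudden"}, "solution": "test for flu"},
    {"features": {"symptom": "rash", "onset": "gradual"}, "solution": "allergy panel"},
]
weights = {"symptom": 2.0, "onset": 1.0}   # invented index weights
probe = {"symptom": "fever", "onset": "gradual"}
best = retrieve(probe, library, weights)[0]
print(best["solution"])   # old solution, to be adapted to the new situation
```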

874 citations


Journal ArticleDOI
TL;DR: In this article, a hierarchical cognitive model was proposed to assess the meaning of work environment attributes for individuals, and the results of confirmatory factor analyses on multiple samples supported the hypothesis that a unifying theme exists for integrating diverse measures of meaning.
Abstract: Many of the perceptual variables used in industrial/organizational psychology assess the meaning that work environment attributes have for individuals (e.g., the ambiguity of role prescriptions). This study represents an initial attempt to test the hypothesis that a unifying theme exists for integrating diverse measures of meaning. The unifying theme is based on a hierarchical cognitive model wherein each assessment of meaning reflects a general appraisal of the degree to which the overall work environment is personally beneficial versus personally detrimental to the organizational well-being of the individual. Results of confirmatory factor analyses on multiple samples supported a hierarchical cognitive model with a single, general factor underlying measures of meaning. These results are used to explain the substantive impact of work environment perceptions on individual outcomes. Industrial/organizational (I/O) psychologists use many variables to assess perceptions of work environments. Examples include perceptual indicators of job attributes (e.g., job challenge, job autonomy), characteristics of leaders and leadership processes (e.g., leader consideration and support, leader work facilitation), workgroup characteristics and processes (e.g., workgroup cooperation, workgroup esprit), and interfaces between individuals and subsystems or organizations (e.g., role ambiguity, fairness and equity of reward system). The following two principles have guided many applied psychologists' efforts to measure work environment perceptions: (a) Individuals respond to environments in terms of how they perceive them and (b) the most important component of perception is the meaning or meanings imputed to the environment by the individual (Ekehammer, 1974; Endler & Magnusson, 1976; Lewin, 1938, 1951; Mischel, 1968).

688 citations


Journal ArticleDOI
TL;DR: A voting rule is exhibited that efficiently computes winners but is computationally resistant to strategic manipulation, showing how computational complexity might protect the integrity of social choice.
Abstract: We show how computational complexity might protect the integrity of social choice. We exhibit a voting rule that efficiently computes winners but is computationally resistant to strategic manipulation. It is NP-complete for a manipulative voter to determine how to exploit knowledge of the preferences of others. In contrast, many standard voting schemes can be manipulated with only polynomial computational effort.
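
The rule exhibited is, to the best of my knowledge, a second-order variant of Copeland's method: score candidates by pairwise majority wins and break ties by the summed wins of the opponents a candidate defeats. A sketch of polynomial-time winner determination under that reading:

```python
from itertools import combinations

# Copeland-style winner determination: first-order score = number of
# pairwise majority wins; ties broken by the summed wins of defeated
# opponents ("second-order" score). Winner computation is polynomial;
# the article's hardness result concerns strategic manipulation.

def pairwise_wins(profile, candidates):
    wins = {c: set() for c in candidates}
    for a, b in combinations(candidates, 2):
        a_over_b = sum(r.index(a) < r.index(b) for r in profile)
        if a_over_b * 2 > len(profile):
            wins[a].add(b)
        elif a_over_b * 2 < len(profile):
            wins[b].add(a)
    return wins

def winner(profile, candidates):
    wins = pairwise_wins(profile, candidates)
    def key(c):
        return (len(wins[c]), sum(len(wins[d]) for d in wins[c]))
    return max(candidates, key=key)

profile = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b"),
           ("a", "c", "b"), ("b", "a", "c")]
print(winner(profile, ["a", "b", "c"]))
```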

602 citations


Journal ArticleDOI
TL;DR: It is shown that a voting scheme suggested by Lewis Carroll can be impractical in that it can be computationally prohibitive to determine whether any particular candidate has won an election, and a class of "impracticality theorems" are suggested which say that any fair voting scheme must, in the worst-case, require excessive computation to determine a winner.
Abstract: We show that a voting scheme suggested by Lewis Carroll can be impractical in that it can be computationally prohibitive (specifically, NP-hard) to determine whether any particular candidate has won an election. We also suggest a class of “impracticality theorems” which say that any fair voting scheme must, in the worst-case, require excessive computation to determine a winner.
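
Carroll's scheme elects the candidate who can be made a Condorcet winner with the fewest adjacent swaps in voters' preference rankings; computing that minimum is the NP-hard part. A brute-force sketch for toy profiles, exponential by design:

```python
from itertools import product

# Brute-force Dodgson/Carroll score: fewest adjacent swaps needed to make
# a candidate beat every rival head-to-head. Only swaps that raise the
# candidate can help, so we enumerate how far each voter raises c.
# Exponential enumeration -- fine only for toy profiles, which is exactly
# the point of the NP-hardness result.

def raise_candidate(ranking, c, k):
    """Copy of `ranking` with candidate c moved up k places."""
    r = list(ranking)
    i = r.index(c)
    r[i - k:i + 1] = [c] + r[i - k:i]
    return r

def condorcet_winner(profile, candidates, c):
    n = len(profile)
    return all(sum(r.index(c) < r.index(d) for r in profile) * 2 > n
               for d in candidates if d != c)

def dodgson_score(profile, candidates, c):
    best = None
    options = [range(r.index(c) + 1) for r in profile]
    for raises in product(*options):
        moved = [raise_candidate(r, c, k) for r, k in zip(profile, raises)]
        if condorcet_winner(moved, candidates, c):
            cost = sum(raises)
            best = cost if best is None else min(best, cost)
    return best

# In this Condorcet cycle, each candidate is one swap away from winning.
profile = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"]]
for c in "abc":
    print(c, dodgson_score(profile, ["a", "b", "c"], c))
```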

600 citations


Journal ArticleDOI
TL;DR: This article investigated transfer from multiple analogs to a superficially dissimilar target problem and found that spontaneous transfer between small numbers of dissimilar analogs is difficult to obtain, but it can be achieved by manipulations that foster abstraction of a problem schema from the training examples.
Abstract: Five experiments investigated transfer from multiple analogs to a superficially dissimilar target problem. When subjects explicitly compared the analogs and then immediately attempted to solve the target problem in the context of a single experiment, transfer was obtained with significant frequency even without a hint that the analogs and target were related. Prehint transfer was sharply reduced or eliminated when the source analogs and the target were presented in different contexts, even when the transfer test was immediate. However, prehint transfer was enhanced, even after a context shift and a week-long delay between reading the source analogs and solving the problem, when the following conditions were met: The target problem was reworded slightly to emphasize a structural feature that it shared with the analogs; three rather than two source analogs were provided; and detailed, schema-oriented questions were used to help subjects focus on the problem-relevant aspects of the stories. Although spontaneous transfer between small numbers of dissimilar analogs is difficult to obtain, it can be achieved by manipulations that foster abstraction of a problem schema from the training examples.

543 citations


Journal ArticleDOI
TL;DR: In this paper, an adaptive update law which counteracts the effects of unknown parameters is shown to be robust to the unmodeled dynamics, and conditions are given for global stability of an adaptive control law designed for the reduced-order model of a class of nonlinear plants.
Abstract: A feedback linearization design is presented which includes unknown parameters and unmodeled dynamics. An adaptive update law which counteracts the effects of unknown parameters is shown to be robust to the unmodeled dynamics. The proposed design methodology is based on a conceptually simple stability analysis. Conditions are given for global stability of an adaptive control law designed for the reduced-order model of a class of nonlinear plants. In the presence of unmodeled dynamics, the regulation property is preserved in a stability region. The size of the region is estimated using bounds that not only prove robustness, but also allow a comparison between adaptive and nonadaptive nonlinear controls.
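
The flavor of adaptive feedback linearization shows up already in a scalar toy plant with one unknown parameter. The sketch below is a textbook-style example under my own choices of plant, gains, and update law, not the article's reduced-order design or its unmodeled-dynamics analysis:

```python
# Toy adaptive feedback linearization for the scalar plant
#   xdot = theta * x**2 + u,   theta unknown.
# Certainty-equivalence control u = -theta_hat*x**2 - k*x cancels the
# estimated nonlinearity; the update law theta_hat_dot = gamma * x**3
# follows from the Lyapunov function
#   V = x**2/2 + (theta - theta_hat)**2/(2*gamma),  giving Vdot = -k*x**2.

theta = 2.0                  # true parameter (unknown to the controller)
k, gamma, dt = 1.0, 5.0, 1e-3
x, theta_hat = 1.0, 0.0
for _ in range(20_000):      # 20 time units of explicit Euler integration
    u = -theta_hat * x**2 - k * x
    x += (theta * x**2 + u) * dt          # plant
    theta_hat += gamma * x**3 * dt        # adaptive update law
print(f"x = {x:.5f}, theta_hat = {theta_hat:.3f}")
# x -> 0 (regulation); theta_hat need not converge to theta.
```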

400 citations


Book ChapterDOI
TL;DR: In this paper, it was proved that under average contractivity, a convergence and ergodic theorem obtains, which extends the results of Barnsley and Elton [BE], and also proved that a Collage Theorem is true.
Abstract: Recurrent iterated function systems generalize iterated function systems as introduced by Barnsley and Demko [BD] in that a Markov chain (typically with some zeros in the transition probability matrix) is used to drive a system of maps w j : K→ K, j = 1, 2,…, N, where K is a complete metric space. It is proved that under “average contractivity,” a convergence and ergodic theorem obtains, which extends the results of Barnsley and Elton [BE]. It is also proved that a Collage Theorem is true, which generalizes the main result of Barnsley et al. [BEHL] and which broadens the class of images which can be encoded using iterated map techniques. The theory of fractal interpolation functions [B] is extended, and the fractal dimensions for certain attractors are derived, extending the technique of Hardin and Massopust [HM]. Applications to Julia set theory and to the study of the boundary of IFS attractors are presented.
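
In the random-iteration ("chaos game") reading of a recurrent IFS, the Markov chain decides which map may follow which, and time averages along the resulting orbit converge under the ergodic theorem. A small sketch with invented maps and transition matrix:

```python
import random

# Random iteration for a recurrent IFS: a Markov chain (rows of P) chooses
# which affine contraction w_j may follow the one just applied. Maps and
# transition probabilities are invented for illustration.

maps = [
    lambda p: (0.5 * p[0], 0.5 * p[1]),             # w_1
    lambda p: (0.5 * p[0] + 0.5, 0.5 * p[1]),       # w_2
    lambda p: (0.5 * p[0], 0.5 * p[1] + 0.5),       # w_3
]
P = [                      # P[i][j] = prob of applying w_j after w_i;
    [0.0, 0.5, 0.5],       # the zeros (forbidden transitions) are what
    [0.5, 0.0, 0.5],       # distinguish the recurrent case from plain IFS
    [0.5, 0.5, 0.0],
]

rng = random.Random(1)
state, point, orbit = 0, (0.3, 0.3), []
for n in range(100_000):
    state = rng.choices(range(3), weights=P[state])[0]
    point = maps[state](point)
    if n > 100:                   # discard the transient toward the attractor
        orbit.append(point)
# Empirical averages over `orbit` approximate the invariant measure.
print(len(orbit), orbit[-1])
```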

Journal ArticleDOI
TL;DR: Observations suggested that esters at C-2' and/or C-7, which would tend to promote water solubility, might serve as useful prodrugs of taxol, and salts had improved antitumor activity as compared to the free acids.
Abstract: Taxol has shown good in vivo antitumor activity in a number of test systems. The formulation of taxol for antitumor testing has been difficult. Esterification at either C-2' or C-7 resulted in loss of in vitro tubulin assembly activity but not cytotoxicity. These observations suggested that esters at C-2' and/or C-7, which would tend to promote water solubility, might serve as useful prodrugs of taxol. The reaction of taxol with either succinic anhydride or glutaric anhydride in pyridine solution at room temperature gave the crystalline mono 2'-adducts 1b and 1f, respectively. Salts of these acids (1b, 1f, 1i) were formed by the addition of 1 equiv of the corresponding base, followed by evaporation and/or freeze-drying of the solvent(s). The salts had improved antitumor activity as compared to the free acids. The triethanolamine and N-methylglucamine salts showed greatly improved aqueous solubility and were more active than the sodium salts. The glutarate series was preferred because of the higher activity and the higher yields obtained. 2'-Glutaryltaxol (1f) was coupled with 3-(dimethylamino)-1-propylamine, using CDI, to form in excellent yield the amino amide 1o. The hydrochloride salt (1p) showed good solubility and was extremely potent and active. At 10 mg/kg, in the B16 screen, 1p gave a T/C of 352 with 5 out of 10 cures. In the MX-1 breast xenograft assay, this prodrug gave values of -100 at doses of 40 and 20 mg/kg, with all live animals being tumor free.

Journal ArticleDOI
TL;DR: In this article, the coexistence of two phases at the transition temperature is kept under observation for a long time, and it is observed that the system is not exactly in equilibrium and a very slow evolution driven by surface tension is taking place.
Abstract: If the coexistence of two phases at the transition temperature is kept under observation for a long time, then one observes that the system is not exactly in equilibrium and a very slow evolution driven by surface tension is taking place. Theoretically, one should eventually see a spatially homogeneous state, but the time for settling down is so long that what one actually observes is “motion towards a stable state.” The complexity of the spatial distribution of the two phases keeps decreasing but appears to be stable for very long periods of time with intermittent periods of fast motion when there are small inclusions of one of the two regions embedded in the other phase. For a simple reaction diffusion model, it is shown that this phenomenon can be explained by investigating the flow on the attractor and the unstable manifolds of equilibria.
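
A standard concrete instance of such a model is the scalar bistable (Allen-Cahn type) equation u_t = eps^2 u_xx + u - u^3, whose phase interfaces move exponentially slowly before collapsing. A rough finite-difference sketch; the discretization choices are mine:

```python
import numpy as np

# 1-D bistable reaction-diffusion equation u_t = eps^2 u_xx + u - u^3 with
# no-flux boundaries. Layers between the phases u = +1 and u = -1 barely
# move for very long times (metastable states), then collapse quickly when
# interfaces get close to each other or to the boundary.

eps, L, N = 0.03, 1.0, 200
dx = L / N
dt = 0.2 * dx**2 / eps**2            # well inside explicit stability limit
x = np.linspace(0.0, L, N)
u = np.where((x > 0.3) & (x < 0.6), 1.0, -1.0)   # two phase interfaces

def step(u):
    uxx = np.empty_like(u)
    uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    uxx[0] = 2 * (u[1] - u[0]) / dx**2           # no-flux (Neumann) ends
    uxx[-1] = 2 * (u[-2] - u[-1]) / dx**2
    return u + dt * (eps**2 * uxx + u - u**3)

for _ in range(100_000):
    u = step(u)
# Both interfaces typically survive this long run: "motion toward a stable
# state" is far slower than the initial relaxation.
print("interfaces left:", int(np.count_nonzero(np.diff(np.sign(u)))))
```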

Journal ArticleDOI
TL;DR: It was concluded that further progress in understanding the mechanisms of the relation between age and cognitive functioning will require improved conceptualizations of the nature of working memory or other hypothesized mediating constructs.
Abstract: Three predictions were derived from the hypothesis that adult age differences in certain measures of cognitive functioning are attributable to age-related reductions in a processing resource such as working-memory capacity. Each prediction received at least some degree of empirical support in a study involving 120 males ranging between 20 and 79 years of age. First, older adults exhibited greater impairments of performance than did young adults when task complexity increased and more demands were placed on the limited processing resources; second, the magnitudes of these complexity effects were highly correlated across verbal (reasoning) and spatial (paper folding) tasks. Finally, statistical control of an index of a working-memory processing resource attenuated the effects of age on the measures of cognitive performance. It was concluded that further progress in understanding the mechanisms of the relation between age and cognitive functioning will require improved conceptualizations of the nature of working memory or other hypothesized mediating constructs.

Journal ArticleDOI
TL;DR: It is shown that the use of fuzzy sets will allow an analyst to communicate degrees of risk of individual project elements to people in readily understood language terms, which would permit an evaluation of the overall risk of a construction project.
Abstract: The authors introduce the concept of construction project analysis by fuzzy set theory and provide a methodology for risk assessment by linguistics. It is shown that the use of fuzzy sets will allow an analyst to communicate degrees of risk of individual project elements to people in readily understood language terms. Once these individual risk elements are communicated, fuzzy set theory would then permit an evaluation of the overall risk of a construction project. An illustrative example is presented.
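
The linguistic approach can be sketched as mapping terms such as "low" or "high" to fuzzy numbers, aggregating element risks, and translating the result back into language. The terms, membership shapes, and combination rule below are invented for illustration; the article develops its own methodology:

```python
# Linguistic risk via triangular fuzzy numbers (a, m, b) = (low end, mode,
# high end of the membership function).

TERMS = {
    "low":    (0.0, 0.1, 0.3),
    "medium": (0.2, 0.5, 0.8),
    "high":   (0.7, 0.9, 1.0),
}

def weighted_fuzzy_average(elements):
    """Combine (term, weight) pairs by fuzzy-arithmetic weighted average."""
    total = sum(w for _, w in elements)
    return tuple(sum(w * TERMS[t][i] for t, w in elements) / total
                 for i in range(3))

def closest_term(fuzzy):
    """Map a fuzzy result back to the nearest linguistic term."""
    def dist(t):
        return sum((a - b) ** 2 for a, b in zip(TERMS[t], fuzzy))
    return min(TERMS, key=dist)

# Project elements: (linguistic risk, relative importance) -- invented data.
project = [("low", 2.0), ("high", 1.0), ("medium", 1.0)]
overall = weighted_fuzzy_average(project)
print(overall, "->", closest_term(overall))   # overall risk, back in words
```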

Journal ArticleDOI
TL;DR: An extensive computational analysis of several initial solution algorithms is presented, which identifies the tradeoffs between solution quality and computational requirements and concludes that the greedy procedure reduces the required number of trucks and increases the truck utilization.

Journal ArticleDOI
TL;DR: In this paper, the authors explore managerial perceptions about project characteristics and conclude that practitioners may derive benefit from paying attention to normative project-management theory, but that theoreticians must also descend from the level of broad generalizations to take into account the peculiarities of various classes of projects.

Journal ArticleDOI
TL;DR: Many lessons about case-based reasoning can still be learned by analyzing the MEDIATOR's behavior, and this article analyzes both that behavior and the reasons why it behaves the way it does.

Journal ArticleDOI
TL;DR: In this article, a special class of hybrid systems is considered, whose state space is a cross-product space of a euclidean space and a finite-state space, and a necessary and sufficient algebraic condition, a simple algebraic criterion and a computationally simple sufficient test for controllability and observability are derived.
Abstract: A special class of hybrid systems is considered, whose state space is a cross-product space of a Euclidean space and a finite-state space. Such models may be used to represent systems subject to known abrupt parameter variations, such as commutated networks. They may also be used to approximate some types of time-varying systems. Controllability, observability and stability of hybrid systems are investigated. In particular, a necessary and sufficient algebraic condition, a simple algebraic criterion and a computationally simple algebraic sufficient test for controllability and observability are derived. Moreover, a simple sufficient stability condition is provided.

Proceedings ArticleDOI
01 Jul 1989
TL;DR: It is shown that errors occur more frequently in progressive refinement radiosity than in the originally proposed full matrix radiosity solution.
Abstract: Current radiosity methods rely on the calculation of geometric factors, known as form-factors, which describe energy exchange between pairs of surfaces in the environment. The most computationally efficient method for form-factor generation is a numerical technique known as the hemi-cube algorithm. Use of the hemi-cube is based on assumptions about the geometry of the surfaces involved. First, this paper examines the types of errors and visual artifacts that result when these assumptions are violated. Second, the paper shows that these errors occur more frequently in progressive refinement radiosity than in the originally proposed full matrix radiosity solution. Next, a new analytical technique for determining form-factors that is immune to the errors of the hemi-cube algorithm is introduced. Finally, a hybrid progressive refinement method that invokes the new technique to correctly compute form-factors when hemi-cube assumptions are violated is presented.
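
Analytic form-factor techniques rest on the classical contour-integral result for the form factor from a differential area to a polygon: F = (1/2π) Σ_i γ_i (n · Γ_i), summed over the polygon's edges. The sketch below is my implementation of that classical formula, not necessarily the paper's exact technique:

```python
import numpy as np

# Analytic form factor from a differential area at point `p` with unit
# normal `n` to a polygon `verts` (ordered counter-clockwise as seen from
# the front side of p): F = (1/2*pi) * sum_i gamma_i * dot(n, Gamma_i),
# where gamma_i is the angle subtended at p by edge i and Gamma_i is the
# unit normal of the plane through p and that edge.

def point_to_polygon_form_factor(p, n, verts):
    F = 0.0
    m = len(verts)
    for i in range(m):
        r0 = verts[i] - p
        r1 = verts[(i + 1) % m] - p
        r0 /= np.linalg.norm(r0)
        r1 /= np.linalg.norm(r1)
        gamma = np.arccos(np.clip(np.dot(r0, r1), -1.0, 1.0))
        cross = np.cross(r0, r1)
        F += gamma * np.dot(n, cross / np.linalg.norm(cross))
    return F / (2.0 * np.pi)

# Unit square one unit above the receiving point, facing it.
square = [np.array(v, float) for v in
          [(-0.5, -0.5, 1), (0.5, -0.5, 1), (0.5, 0.5, 1), (-0.5, 0.5, 1)]]
p, n = np.zeros(3), np.array([0.0, 0.0, 1.0])
print(point_to_polygon_form_factor(p, n, square))   # ~0.2394 for this setup
```

Unlike the hemi-cube, this evaluation makes no proximity or aliasing assumptions, which is why a hybrid method can fall back on it when the hemi-cube's assumptions are violated.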


Journal ArticleDOI
TL;DR: In this article, the authors describe the convex hull of the incidence vectors of stable matchings and show that the optimal stable marriage problem can therefore be solved as a linear program.
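
A common way to write the linear description at issue uses assignment constraints plus one stability inequality per pair; this is my rendering of the standard form, and the article's exact formulation may differ. Here x_{ij} indicates that man i is matched to woman j, and j' ≻_i j means i prefers j' to j:

```latex
% Assignment constraints plus one stability ("no blocking pair") inequality
% per pair (i, j); my rendering of the standard form, not necessarily the
% article's notation.
\begin{align*}
  \sum_{j} x_{ij} \le 1, \qquad \sum_{i} x_{ij} \le 1, \qquad x_{ij} \ge 0,\\
  x_{ij} + \sum_{j' \succ_i j} x_{ij'} + \sum_{i' \succ_j i} x_{i'j} \ge 1
  \quad \text{for every pair } (i, j).
\end{align*}
```

If the vertices of this polytope are exactly the stable matchings, as the convex-hull characterization asserts, then maximizing any linear preference objective over these constraints solves the optimal stable marriage problem with ordinary linear programming.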

Journal ArticleDOI
TL;DR: To construct a short tour through points in the plane, the points are sequenced as they appear along a spacefilling curve; this heuristic consists essentially of sorting, so it is easily coded and requires only O(N) memory and O(N log N) operations.
Abstract: To construct a short tour through points in the plane, the points are sequenced as they appear along a spacefilling curve. This heuristic consists essentially of sorting, so it is easily coded and requires only O(N) memory and O(N log N) operations. Its performance is competitive with that of other fast methods.
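
The published heuristic sorts points by their position along a particular spacefilling curve; the sketch below substitutes a Hilbert-curve index (the standard bit-manipulation construction), which preserves the sort-only character and the O(N) memory, O(N log N) time profile:

```python
import random

# Spacefilling-curve tour: quantize points onto a 2**order x 2**order grid,
# sort by distance along a Hilbert curve, visit in that order.

def hilbert_index(order: int, x: int, y: int) -> int:
    """Distance of grid cell (x, y) along a Hilbert curve."""
    n = 2 ** order
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate quadrant to line up recursion
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def spacefilling_tour(points, order=10):
    """Sort-based tour: O(N log N) time, O(N) memory."""
    side = 2 ** order
    xlo, xhi = min(p[0] for p in points), max(p[0] for p in points)
    ylo, yhi = min(p[1] for p in points), max(p[1] for p in points)
    def cell(v, lo, hi):
        return min(side - 1, int((v - lo) / (hi - lo + 1e-12) * side))
    return sorted(points, key=lambda p: hilbert_index(
        order, cell(p[0], xlo, xhi), cell(p[1], ylo, yhi)))

pts = [(random.random(), random.random()) for _ in range(1000)]
tour = spacefilling_tour(pts)
print(tour[:3])
```

Nearby curve positions are nearby in the plane, which is why simply sorting by curve index yields tours competitive with other fast heuristics.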

Journal ArticleDOI
TL;DR: A survey of computer algorithms used for the detection of student plagiarism is presented and common features of the different plagiarism detection algorithms are described.
Abstract: A survey of computer algorithms used for the detection of student plagiarism is presented. A summary of several algorithms is provided. Common features of the different plagiarism detection algorithms are described. Ethical and administrative issues involving detected plagiarism are discussed.
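
A common core shared by many such detectors can be sketched as: normalize each program (strip comments, rename identifiers), reduce it to a profile such as token n-grams, and flag suspiciously similar pairs. This is my illustration of that pipeline, not a specific surveyed system; the normalization rules and threshold are invented:

```python
import re
from itertools import combinations

# Token-level plagiarism sketch: normalization defeats renaming and
# reformatting; n-gram overlap (Jaccard) scores each pair of submissions.

def normalize(src: str) -> list[str]:
    src = re.sub(r"#.*", "", src)                         # strip comments
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", src)
    return ["ID" if re.match(r"[A-Za-z_]", t) else t for t in tokens]

def ngrams(tokens, n=4):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

programs = {
    "s1": "total = 0\nfor x in data: total += x",
    "s2": "acc = 0\nfor item in values: acc += item",     # renamed variables
    "s3": "print('hello')",
}
profiles = {k: ngrams(normalize(v)) for k, v in programs.items()}
for a, b in combinations(programs, 2):
    sim = jaccard(profiles[a], profiles[b])
    print(a, b, f"{sim:.2f}", "FLAG" if sim > 0.5 else "")
```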

Journal ArticleDOI
TL;DR: The present studies examined age differences in memory when attention was divided during encoding, retrieval, or at both times, and found older Ss' free recall and clustering performance declined more than that of young Ss when attention was divided at encoding, but there was no similar age interaction when divided attention occurred at retrieval.
Abstract: The present studies were designed to examine age differences in memory when attention was divided during encoding, retrieval, or at both times. In Experiment 1, Ss studied categorized words while performing a number-monitoring task during encoding, retrieval, or at both times. Older Ss' free recall and clustering performance declined more than that of young Ss when attention was divided at encoding, but there was no similar age interaction when divided attention occurred at retrieval. In Experiment 2, the task demands at retrieval were increased by using a fast-paced, cued-recall task. The results remained unchanged from Experiment 1. Again, an age interaction occurred with divided attention at encoding but not at retrieval. These results were unexpected, given the emphasis in the memory-aging literature on increased difficulty of retrieval by older adults. The findings pose difficulties for limited processing resource views of age differences in memory.

Journal ArticleDOI
TL;DR: It is shown, by presenting a protocol and proving its correctness, that there is a self-stabilizing system with no distinguished processor if the size of the ring is prime.
Abstract: A self-stabilizing system has the property that, no matter how it is perturbed, it eventually returns to a legitimate configuration. Dijkstra originally introduced the self-stabilization problem and gave several solutions for a ring of processors in his 1974 Communications of the ACM paper. His solutions use a distinguished processor in the ring, which effectively acts as a controlling element to drive the system toward stability. Dijkstra has observed that a distinguished processor is essential if the number of processors in the ring is composite. We show, by presenting a protocol and proving its correctness, that there is a self-stabilizing system with no distinguished processor if the size of the ring is prime. The basic protocol uses Θ(n²) states in each processor, where n is the size of the ring. We modify the basic protocol to obtain one that uses Θ(n²/ln n) states.
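
For contrast, Dijkstra's original K-state protocol, which does rely on a distinguished processor, is easy to simulate and makes convergence to a single circulating privilege concrete. The sketch below is that classical protocol, not the uniform prime-ring protocol of this article:

```python
import random

# Dijkstra's K-state token ring: processor 0 is distinguished. P0 is
# privileged when its value equals its left neighbor's; every other
# processor is privileged when its value differs from its left neighbor's.
# From any configuration, the ring converges to exactly one privilege
# circulating. The article's contribution is achieving this WITHOUT a
# distinguished processor, on rings of prime size.

def privileged(state, i, K):
    left = state[i - 1]               # Python's -1 wraps: ring topology
    return state[i] == left if i == 0 else state[i] != left

def fire(state, i, K):
    state[i] = (state[i] + 1) % K if i == 0 else state[i - 1]

rng = random.Random(7)
n, K = 5, 6                           # K >= n suffices for this protocol
state = [rng.randrange(K) for _ in range(n)]    # arbitrary perturbed start
for step in range(60):
    privs = [i for i in range(n) if privileged(state, i, K)]
    fire(state, rng.choice(privs), K)           # scheduler picks any privileged node
    if step % 10 == 0:
        print(step, state, "privileges:", len(privs))
```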

Journal ArticleDOI
TL;DR: In this paper, the authors examined the validity of the NO2 measurements from the stratospheric aerosol and gas experiment (SAGE) II by comparing the data with climatological distributions of NO2 and by examining the consistency of the observations themselves.
Abstract: The validity of NO2 measurements from the stratospheric aerosol and gas experiment (SAGE) II is examined by comparing the data with climatological distributions of NO2 and by examining the consistency of the observations themselves. The precision at high altitudes is found to be 5 percent, which is also the case at specific low altitudes for certain latitudes where the mixing ratio is 4 ppbv, and the precision is 0.2 ppbv at low altitudes. The autocorrelation distance of the smoothed profile measurement noise is 3-5 km and 10 km for 1-km and 5-km smoothing, respectively. The SAGE II measurements agree with spectroscopic measurements to within 10 percent, and the SAGE measurements are about 20 percent smaller than average limb monitor measurements at the mixing ratio peak. SAGE I and SAGE II measurements are slightly different, but the difference is not attributed to changes in atmospheric NO2.

Journal ArticleDOI
TL;DR: In this article, the deformation of a beam-column, the upper and lower surfaces of which are bonded in segments with piezo-ceramic liners, is studied for the purpose of obtaining appropriate expressions for the force transferred to the structural member by the piezoactuator.
Abstract: The deformation of a beam-column, the upper and lower surfaces of which are bonded in segments with piezo-ceramic liners, is studied for the purpose of obtaining appropriate expressions for the force transferred to the structural member by the piezo-actuator. This concept may be employed for the control of large dynamic deformations of a lattice-type flexible space-structure. The present model, which is based upon a static analysis, accounts for the effects of transverse shear and axial forces in addition to a bending moment on the beam in formulating the governing equilibrium equations. The present model provides more complete expressions for the force transmitted to the structural member than a model reported earlier in literature, in which the shear and axial forces are neglected.


Journal ArticleDOI
TL;DR: The preparation of amorphous silicon by molecular-dynamics simulations employing the Stillinger-Weber Si potential, via direct slow cooling from the melt, is described, and it is shown that previous failures to obtain amorphous Si using these interaction potentials are of kinetic origin, i.e., related to the quench rate employed.
Abstract: The preparation of amorphous silicon by molecular-dynamics simulations employing the Stillinger-Weber Si potential, via direct slow cooling from the melt, is described. It is shown that previous failures to obtain amorphous Si using these interaction potentials are of kinetic origin, i.e., related to the quench rate employed. The amorphous silicon sample which we prepared exhibits structural and dynamical properties in good agreement with available experimental data for the static structure factor and phonon density of states. Detailed analyses of the structure (including distributions of bond and dihedral angles and ring statistics) and energetics (including the determination of effective temperatures for n-fold-coordinated atoms, n = 3-5, and estimates of the formation energy of coordination defects, i.e., n ≠ 4) are presented. The lack of medium-range order, measured via correlation between dihedral angles associated with adjacent bonds, is discussed.