
Showing papers by "Stanford University published in 1990"


Book ChapterDOI
TL;DR: In this article, the authors trace violations of the rational theory of choice to the rules that govern the framing of decisions and to the psychological principles of evaluation embodied in prospect theory, and argue that because invariance and dominance are normatively essential but descriptively invalid, no theory of choice can be both normatively adequate and descriptively accurate.
Abstract: Alternative descriptions of a decision problem often give rise to different preferences, contrary to the principle of invariance that underlies the rational theory of choice. Violations of this theory are traced to the rules that govern the framing of decisions and to the psychological principles of evaluation embodied in prospect theory. Invariance and dominance are obeyed when their application is transparent and often violated in other situations. Because these rules are normatively essential but descriptively invalid, no theory of choice can be both normatively adequate and descriptively accurate.

4,243 citations


Journal ArticleDOI
TL;DR: In this article, two studies were conducted to obtain insights on how consumers form attitudes toward brand extensions, i.e., use of an established brand name to enter a new product category.
Abstract: Two studies were conducted to obtain insights on how consumers form attitudes toward brand extensions, (i.e., use of an established brand name to enter a new product category). In one study, reacti...

2,902 citations


Journal ArticleDOI
04 Jun 1990
TL;DR: In this paper, a model-checking algorithm for mu-calculus formulas which uses R.E. Bryant's (1986) binary decision diagrams to represent relations and formulas symbolically is described.
Abstract: A general method that represents the state space symbolically instead of explicitly is described. The generality of the method comes from using a dialect of the mu-calculus as the primary specification language. A model-checking algorithm for mu-calculus formulas which uses R.E. Bryant's (1986) binary decision diagrams to represent relations and formulas symbolically is described. It is then shown how the novel mu-calculus model checking algorithm can be used to derive efficient decision procedures for CTL model checking, satisfiability of linear-time temporal logic formulas, strong and weak observational equivalence of finite transition systems, and language containment of finite omega-automata. This eliminates the need to describe complicated graph-traversal or nested fixed-point computations for each decision procedure. The authors illustrate the practicality of their approach to symbolic model checking by discussing how it can be used to verify a simple synchronous pipeline.
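The fixed-point computations the paper makes symbolic can be sketched with explicit state sets standing in for BDDs. Below is a minimal illustration (Python sets in place of BDDs, and a hypothetical four-state transition relation) of the least-fixed-point iteration that computes the CTL property EF p:

# Least-fixed-point computation of EF p ("p is reachable"), the core
# iteration pattern of symbolic model checking. Real implementations
# represent state sets and the transition relation as BDDs; plain
# frozensets stand in for them here. Toy example, not the paper's code.

def pre_image(trans, states):
    """States with some successor in `states` (existential predecessor)."""
    return frozenset(s for (s, t) in trans if t in states)

def ef(trans, p_states):
    """EF p: least fixed point of Z = p_states | pre_image(trans, Z)."""
    z = frozenset(p_states)
    while True:
        nxt = z | pre_image(trans, z)
        if nxt == z:          # fixed point reached
            return z
        z = nxt

# Hypothetical 4-state system: 0 -> 1 -> 2 -> 3, with 3 -> 3.
trans = {(0, 1), (1, 2), (2, 3), (3, 3)}
print(sorted(ef(trans, {3})))   # -> [0, 1, 2, 3]: every state can reach 3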

2,698 citations


Journal ArticleDOI
TL;DR: Clinical criteria for the classification of patients with hip pain associated with osteoarthritis were developed from data collected in a multicenter study; classification trees and traditional criteria formats combining hip pain with clinical, laboratory, and radiographic findings achieved sensitivities of 86-89% and specificities of 75-91%.
Abstract: Clinical criteria for the classification of patients with hip pain associated with osteoarthritis (OA) were developed through a multicenter study. Data from 201 patients who had experienced hip pain for most days of the prior month were analyzed. The comparison group of patients had other causes of hip pain, such as rheumatoid arthritis or spondylarthropathy. Variables from the medical history, physical examination, laboratory tests, and radiographs were used to develop different sets of criteria to serve different investigative purposes. Multivariate methods included the traditional "number of criteria present" format and "classification tree" techniques. Clinical criteria: A classification tree was developed, without radiographs, for clinical and laboratory criteria or for clinical criteria alone. A patient was classified as having hip OA if pain was present in combination with either 1) hip internal rotation greater than or equal to 15 degrees, pain present on internal rotation of the hip, morning stiffness of the hip for less than or equal to 60 minutes, and age greater than 50 years, or 2) hip internal rotation less than 15 degrees and an erythrocyte sedimentation rate (ESR) less than or equal to 45 mm/hour; if no ESR was obtained, hip flexion less than or equal to 115 degrees was substituted (sensitivity 86%; specificity 75%). Clinical plus radiographic criteria: The traditional format combined pain with at least 2 of the following 3 criteria: osteophytes (femoral or acetabular), joint space narrowing (superior, axial, and/or medial), and ESR less than 20 mm/hour (sensitivity 89%; specificity 91%). The radiographic presence of osteophytes best separated OA patients and controls by the classification tree method (sensitivity 89%; specificity 91%). The "number of criteria present" format yielded criteria and levels of sensitivity and specificity similar to those of the classification tree for the combined clinical and radiographic criteria set. For the clinical criteria set, the classification tree provided much greater specificity. The value of the radiographic presence of an osteophyte in separating patients with OA of the hip from those with hip pain of other causes is emphasized.
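The clinical (no-radiograph) classification tree quoted above is explicit enough to state procedurally. The following sketch encodes it directly from the abstract (function and argument names are illustrative; the thresholds come from the text, with reported sensitivity 86% and specificity 75%):

# Sketch of the abstract's clinical classification tree for hip OA.
def classify_hip_oa(hip_pain, internal_rotation_deg, pain_on_internal_rotation,
                    morning_stiffness_min, age_years, esr_mm_hr=None,
                    hip_flexion_deg=None):
    if not hip_pain:
        return False
    # Branch 1: preserved internal rotation plus pain, stiffness, and age.
    if (internal_rotation_deg >= 15 and pain_on_internal_rotation
            and morning_stiffness_min <= 60 and age_years > 50):
        return True
    # Branch 2: restricted internal rotation plus low ESR
    # (hip flexion <= 115 degrees substitutes if no ESR was obtained).
    if internal_rotation_deg < 15:
        if esr_mm_hr is not None:
            return esr_mm_hr <= 45
        if hip_flexion_deg is not None:
            return hip_flexion_deg <= 115
    return False

print(classify_hip_oa(True, 20, True, 30, 62))                # True via branch 1
print(classify_hip_oa(True, 10, False, 0, 45, esr_mm_hr=30))  # True via branch 2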

2,447 citations


Journal ArticleDOI
01 Sep 1990
TL;DR: The history, origination, operating characteristics, and basic theory of several supervised neural-network training algorithms (including the perceptron rule, the least-mean-square algorithm, three Madaline rules, and the backpropagation technique) are described.
Abstract: Fundamental developments in feedforward artificial neural networks from the past thirty years are reviewed. The history, origination, operating characteristics, and basic theory of several supervised neural-network training algorithms (including the perceptron rule, the least-mean-square algorithm, three Madaline rules, and the backpropagation technique) are described. The concept underlying these iterative adaptation algorithms is the minimal disturbance principle, which suggests that during training it is advisable to inject new information into a network in a manner that disturbs stored information to the smallest extent possible. The two principal kinds of online rules that have developed for altering the weights of a network are examined for both single-threshold elements and multielement networks. They are error-correction rules, which alter the weights of a network to correct error in the output response to the present input pattern, and gradient rules, which alter the weights of a network during each pattern presentation by gradient descent with the objective of reducing mean-square error (averaged over all training patterns).
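Of the algorithms surveyed, the least-mean-square (LMS) rule is the simplest gradient rule: each pattern presentation nudges the weights down the gradient of that pattern's squared error. A minimal sketch for a single linear element (the learning rate, epoch count, and data below are illustrative, not from the paper):

# Minimal LMS (Widrow-Hoff) rule for a single linear element:
# w <- w + mu * error * x, a per-pattern gradient step on squared error.
import random

def lms_train(patterns, n_inputs, mu=0.05, epochs=50):
    w = [0.0] * (n_inputs + 1)              # +1 for a bias weight
    for _ in range(epochs):
        for x, target in patterns:
            xb = list(x) + [1.0]            # append bias input
            y = sum(wi * xi for wi, xi in zip(w, xb))
            err = target - y                # error for this pattern
            w = [wi + mu * err * xi for wi, xi in zip(w, xb)]
    return w

# Illustrative data: noisy samples of y = 2*x0 - 3*x1 + 1.
random.seed(0)
data = []
for _ in range(200):
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    data.append((x, 2 * x[0] - 3 * x[1] + 1 + random.gauss(0, 0.01)))
print(lms_train(data, 2))   # approaches [2.0, -3.0, 1.0]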

2,297 citations


Journal ArticleDOI
TL;DR: A critical question has been why the brain, more than most other tissues, is so vulnerable to hypoxic-ischemic insults; at least some of this special vulnerability may be accounted for by the central neurotoxicity of the endogenous excitatory neurotransmitter glutamate.
Abstract: The human brain depends on its blood supply for a continuous supply of oxygen and glucose. Irreversible brain damage occurs if blood flow is reduced below about 10 ml/100 g tissue/min, and if blood flow is completely interrupted, damage will occur in only a few minutes. Unfortunately, such reductions (ischemia) are common in disease states: either localized in individual vascular territories, as in stroke, or global, as in cardiac arrest. Cerebral hypoxia can also occur in isolation, for example in respiratory arrest, carbon monoxide poisoning, or near-drowning; pure glucose deprivation can occur in insulin overdose or a variety of metabolic disorders. As a group, these disorders are a leading cause of neurological disability and death; stroke alone is the third most common cause of death in North America. Despite its clinical importance, little is known about the cellular pathogenesis of hypoxic-ischemic brain damage, and at present there is no effective therapy. A critical question has been why brain, more than most other tissues, is so vulnerable to hypoxic-ischemic insults. In particular, certain neuronal subpopulations, such as hippocampal field CA1 and neocortical layers 3, 5, and 6, are characteristically destroyed after submaximal hypoxic-ischemic exposure. A possible answer has emerged in the last few years: at least some of this special vulnerability may be accounted for by the central neurotoxicity of the endogenous excitatory neurotransmitter glutamate.

2,245 citations


Journal ArticleDOI
TL;DR: Using an appropriate random model, this work presents a theory that provides precise numerical formulas for assessing the statistical significance of any region with high aggregate score and examples are given of applications to a variety of protein sequences, highlighting segments with unusual biological features.
Abstract: An unusual pattern in a nucleic acid or protein sequence or a region of strong similarity shared by two or more sequences may have biological significance. It is therefore desirable to know whether such a pattern can have arisen simply by chance. To identify interesting sequence patterns, appropriate scoring values can be assigned to the individual residues of a single sequence or to sets of residues when several sequences are compared. For single sequences, such scores can reflect biophysical properties such as charge, volume, hydrophobicity, or secondary structure potential; for multiple sequences, they can reflect nucleotide or amino acid similarity measured in a wide variety of ways. Using an appropriate random model, we present a theory that provides precise numerical formulas for assessing the statistical significance of any region with high aggregate score. A second class of results describes the composition of high-scoring segments. In certain contexts, these permit the choice of scoring systems which are "optimal" for distinguishing biologically relevant patterns. Examples are given of applications of the theory to a variety of protein sequences, highlighting segments with unusual biological features. These include distinctive charge regions in transcription factors and protooncogene products, pronounced hydrophobic segments in various receptor and transport proteins, and statistically significant subalignments involving the recently characterized cystic fibrosis gene.
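The theory assesses the significance of the highest-scoring segment; locating that segment is itself a simple maximum-sum-interval scan. A sketch follows (the residue scores are an illustrative hydrophobicity-style assignment, not the paper's; the theory additionally requires the expected per-residue score to be negative):

# Find the maximal-scoring segment of a sequence under per-residue
# scores (Kadane's maximum-sum-interval scan). The statistical theory
# then assesses how surprising that aggregate score is under a random
# model, provided the expected score per residue is negative.

def max_scoring_segment(residues, score):
    best = (0.0, 0, 0)              # (score, start, end) of best segment
    run, start = 0.0, 0
    for i, r in enumerate(residues):
        run += score[r]
        if run <= 0.0:              # prefix can't help; restart after i
            run, start = 0.0, i + 1
        elif run > best[0]:
            best = (run, start, i + 1)
    return best

# Illustrative scores rewarding hydrophobic residues, penalizing others.
score = {'L': 2, 'I': 2, 'V': 2, 'F': 1, 'A': 1,
         'G': -1, 'S': -2, 'D': -3, 'K': -3, 'R': -3}
seq = "SDKGLLIVVFALGSDKR"
print(max_scoring_segment(seq, score))  # -> (14.0, 4, 12): segment 'LLIVVFAL'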

1,972 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that probabilistic inference using belief networks is NP-hard; it therefore seems unlikely that an exact algorithm can be developed to perform inference efficiently over all classes of belief networks, and research should be directed toward the design of efficient special-case, average-case, and approximation algorithms.
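The hardness result concerns exact inference in arbitrary belief networks. To make the cost concrete, here is a hedged toy illustration of the brute-force baseline: marginalizing the full joint distribution touches every assignment, so n binary nodes cost O(2^n). The three-node chain and its probabilities below are invented for illustration:

# Naive exact inference P(A | C) by summing the joint distribution over
# all assignments -- the brute-force baseline whose exponential cost
# motivates special-case and approximation algorithms. Toy 3-node chain
# A -> B -> C with illustrative probabilities.
from itertools import product

p_a = {True: 0.3, False: 0.7}
p_b_given_a = {True: {True: 0.8, False: 0.2}, False: {True: 0.1, False: 0.9}}
p_c_given_b = {True: {True: 0.5, False: 0.5}, False: {True: 0.2, False: 0.8}}

def joint(a, b, c):
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

def query_a_given_c(c_obs):
    """P(A=true | C=c_obs) by full enumeration (exponential in #nodes)."""
    num = sum(joint(True, b, c_obs) for b in (True, False))
    den = sum(joint(a, b, c_obs) for a, b in product((True, False), repeat=2))
    return num / den

print(query_a_given_c(True))   # posterior belief in A given C observed true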

1,877 citations


Book
01 Jan 1990
TL;DR: This book is an updated version of the information theory classic, first published in 1990, with expanded treatment of stationary or sliding-block codes and their relations to traditional block codes and discussion of results from ergodic theory relevant to information theory.
Abstract: This book is an updated version of the information theory classic, first published in 1990. About one-third of the book is devoted to Shannon source and channel coding theorems; the remainder addresses sources, channels, and codes, and information and distortion measures and their properties. New in this edition:
- Expanded treatment of stationary or sliding-block codes and their relations to traditional block codes
- Expanded discussion of results from ergodic theory relevant to information theory
- Expanded treatment of B-processes -- processes formed by stationary coding of memoryless sources
- New material on trading off information and distortion, including the Marton inequality
- New material on the properties of optimal and asymptotically optimal source codes
- New material on the relationships of source coding and rate-constrained simulation or modeling of random processes
Significant material not covered in other information theory texts includes stationary/sliding-block codes, a geometric view of information theory provided by process distance measures, and general Shannon coding theorems for asymptotic mean stationary sources, which may be neither ergodic nor stationary, and d-bar continuous channels.

1,810 citations


Journal ArticleDOI
TL;DR: It is argued that behavioral differentiation of the sexes is minimal when children are observed or tested individually, and sex differences emerge primarily in social situations, and their nature varies with the gender composition of dyads and groups.
Abstract: This article argues that behavioral differentiation of the sexes is minimal when children are observed or tested individually. Sex differences emerge primarily in social situations, and their nature varies with the gender composition of dyads and groups. Children find same-sex play partners more compatible, and they segregate themselves into same-sex groups, in which distinctive interaction styles emerge. These styles are described. As children move into adolescence, the patterns they developed in their childhood same-sex groups are carried over into cross-sex encounters, in which girls' styles put them at a disadvantage. Patterns of mutual influence can become more symmetrical in intimate male-female dyads, but the distinctive styles of the two sexes can still be seen in such dyads and are subsequently manifested in the roles and relationships of parenthood. The implications of these continuities are considered.

1,758 citations




Journal ArticleDOI
TL;DR: In this paper, a three-field mixed formulation in terms of displacements, stresses and an enhanced strain field is presented which encompasses, as a particular case, the classical method of incompatible modes.
Abstract: A three-field mixed formulation in terms of displacements, stresses and an enhanced strain field is presented which encompasses, as a particular case, the classical method of incompatible modes. Within this framework, incompatible elements arise as particular 'compatible' mixed approximations of the enhanced strain field. The conditions that the stress interpolation contain piecewise constant functions and be L2-orthogonal to the enhanced strain interpolation ensure satisfaction of the patch test and allow the elimination of the stress field from the formulation. The preceding conditions are formulated in a form particularly convenient for element design. As an illustration of the methodology three new elements are developed and shown to exhibit good performance: a plane 3D elastic/plastic QUAD, an axisymmetric element and a thick plate bending QUAD. The formulation described herein is suitable for non-linear analysis.
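As a hedged sketch of the variational structure involved (generic notation from later accounts of enhanced-strain methods, not necessarily the paper's own symbols): the strain is split into a compatible part plus an enhancement, and the three-field (Hu-Washizu type) functional reads

\Pi(u, \sigma, \tilde{\varepsilon}) = \int_{\Omega} W\bigl(\nabla^{s} u + \tilde{\varepsilon}\bigr)\, d\Omega - \int_{\Omega} \sigma : \tilde{\varepsilon}\, d\Omega - \Pi_{\mathrm{ext}}(u),

where requiring the discrete stress interpolation to be L2-orthogonal to the enhanced strain interpolation, \int_{\Omega} \sigma_h : \tilde{\varepsilon}_h\, d\Omega = 0, removes the stress field from the discrete equations, as the abstract describes.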

Journal ArticleDOI
TL;DR: It is shown that an integral S over the spectral function of spin-1 states of the Higgs sector is constrained by precision weak-interaction measurements.
Abstract: We show that an integral S over the spectral function of spin-1 states of the Higgs sector is constrained by precision weak-interaction measurements. Current data exclude large technicolor models; asymmetry measurements at the CERN e+e- collider LEP and the SLAC Linear Collider will soon provide more stringent limits on Higgs-boson strong interactions.
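For context, the quantity S is conventionally defined from the new-physics part of the gauge-boson vacuum-polarization functions; in one common normalization (conventions vary across the literature, so treat this as a hedged reminder rather than the paper's exact expression):

S = 16\pi \left. \frac{d}{dq^{2}} \Bigl[ \Pi_{33}(q^{2}) - \Pi_{3Q}(q^{2}) \Bigr] \right|_{q^{2}=0}.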

Journal ArticleDOI
26 Oct 1990, Science
TL;DR: A region near the amino terminus with an important role in inactivation has been identified and the results suggest a model where this region forms a cytoplasmic domain that interacts with the open channel to cause inactivation.
Abstract: The potassium channels encoded by the Drosophila Shaker gene activate and inactivate rapidly when the membrane potential becomes more positive. Site-directed mutagenesis and single-channel patch-clamp recording were used to explore the molecular transitions that underlie inactivation in Shaker potassium channels expressed in Xenopus oocytes. A region near the amino terminus with an important role in inactivation has now been identified. The results suggest a model where this region forms a cytoplasmic domain that interacts with the open channel to cause inactivation.

Journal ArticleDOI
TL;DR: A good reputation can be an effective bond for honest behavior in a community of traders if members of the community know how others have behaved in the past, even if any particular pair of traders meets only infrequently.
Abstract: A good reputation can be an effective bond for honest behavior in a community of traders if members of the community know how others have behaved in the past – even if any particular pair of traders meets only infrequently. In a large community, it would be impossibly costly for traders to be perfectly informed about each other's behavior, but there exist institutions that can restore the effectiveness of a reputation system using much less extensive information. The system of judges used to enforce commercial law before the rise of the state was such an institution, and it successfully encouraged merchants (1) to behave honestly, (2) to impose sanctions on violators, (3) to become adequately informed about how others had behaved, (4) to provide evidence against violators of the code, and (5) to pay any judgments assessed against them, even though each of these behaviors might be personally costly.

Proceedings ArticleDOI
17 Jun 1990
TL;DR: The authors describe how a two-layer neural network can approximate any nonlinear function by forming a union of piecewise linear segments, and a method is given for picking initial weights for the network to decrease training time.
Abstract: The authors describe how a two-layer neural network can approximate any nonlinear function by forming a union of piecewise linear segments. A method is given for picking initial weights for the network to decrease training time. The authors have used the method to initialize adaptive weights over a large number of different training problems and have achieved major improvements in learning speed in every case. The improvement is best when a large number of hidden units is used with a complicated desired response. The authors have used the method to train the truck-backer-upper and were able to decrease the training time from about two days to four hours.
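The initialization idea is usually summarized as spreading the hidden units' linear segments evenly over the input region rather than starting from arbitrary small random weights. A sketch under that common restatement (the 0.7 scale constant and the exact recipe follow later textbook accounts, not the abstract):

# Sketch of the weight-initialization idea: give each hidden unit's
# weight vector a magnitude beta and a bias that places its linear
# segment inside the input region, so the segments start spread out.
import math, random

def init_two_layer(n_inputs, n_hidden, rng=random.Random(0)):
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)   # desired weight magnitude
    weights, biases = [], []
    for _ in range(n_hidden):
        w = [rng.uniform(-0.5, 0.5) for _ in range(n_inputs)]
        norm = math.sqrt(sum(wi * wi for wi in w)) or 1.0
        weights.append([beta * wi / norm for wi in w])  # rescale to |w| = beta
        biases.append(rng.uniform(-beta, beta))         # spread the segments
    return weights, biases

w, b = init_two_layer(n_inputs=2, n_hidden=8)
print(w[0], b[0])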

Journal ArticleDOI
TL;DR: A method is described for producing amplified heterogeneous populations of RNA from limited quantities of cDNA; with this approach, sequences for cyclophilin have been detected in aRNA derived from single cerebellar tissue sections.
Abstract: The heterogeneity of neural gene expression and the spatially limited expression of many low-abundance messenger RNAs in the brain has made cloning and analysis of such messages difficult. To generate amounts of nucleic acids sufficient for use in standard cloning strategies, we have devised a method for producing amplified heterogeneous populations of RNA from limited quantities of cDNA. Whole cerebellar RNA was primed with a synthetic oligonucleotide containing the T7 RNA polymerase promoter sequence 5' to a polythymidylate region. After second-strand cDNA synthesis, T7 RNA polymerase was used to generate amplified antisense RNA (aRNA). Up to 80-fold molar amplification has been achieved from nanogram quantities of cDNA. The amplified material is similar in size distribution to the parent cDNA and shows sequence heterogeneity as assessed by Southern and Northern blot analysis. Specific messages for moderate-abundance mRNAs for actin and guanine nucleotide-binding protein (G-protein) alpha subunits have been detected in the amplified material. By using in situ transcription to generate cDNA, sequences for cyclophilin have been detected in aRNA derived from single cerebellar tissue sections. cDNA derived from a single cerebellar Purkinje cell also has been amplified and yields material that hybridizes to cognate whole RNA and mRNA but not to Escherichia coli RNA.

Journal ArticleDOI
TL;DR: In this paper, the authors specify extensions to two common internetwork routing algorithms (distancevector routing and link-state routing) to support low-delay datagram multicasting beyond a single LAN, and discuss how the use of multicast scope control and hierarchical multicast routing allows the multicast service to scale up to large internetworks.
Abstract: Multicasting, the transmission of a packet to a group of hosts, is an important service for improving the efficiency and robustness of distributed systems and applications. Although multicast capability is available and widely used in local area networks, when those LANs are interconnected by store-and-forward routers, the multicast service is usually not offered across the resulting internetwork. To address this limitation, we specify extensions to two common internetwork routing algorithms—distance-vector routing and link-state routing—to support low-delay datagram multicasting beyond a single LAN. We also describe modifications to the single-spanning-tree routing algorithm commonly used by link-layer bridges, to reduce the costs of multicasting in large extended LANs. Finally, we discuss how the use of multicast scope control and hierarchical multicast routing allows the multicast service to scale up to large internetworks.
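The distance-vector extension builds on reverse-path forwarding: a router accepts a multicast packet only if it arrives on the interface the router itself would use to reach the source, and then forwards it on its other interfaces. A toy sketch of that check (tables and names are illustrative, not the paper's protocol details):

# Toy reverse-path-forwarding (RPF) check, the building block of
# distance-vector multicast: accept a multicast packet only if it
# arrived on the interface this router uses to reach the source, then
# flood it out the remaining interfaces.

def rpf_forward(unicast_next_iface, interfaces, source, arrival_iface):
    """Return the interfaces to forward on, or [] if the packet fails RPF."""
    if unicast_next_iface[source] != arrival_iface:
        return []                       # off the reverse path: drop
    return [i for i in interfaces if i != arrival_iface]

# Hypothetical router with 3 interfaces; best route to host "S" is via if0.
unicast_next_iface = {"S": "if0"}
print(rpf_forward(unicast_next_iface, ["if0", "if1", "if2"], "S", "if0"))
# -> ['if1', 'if2']   (accepted and flooded)
print(rpf_forward(unicast_next_iface, ["if0", "if1", "if2"], "S", "if1"))
# -> []               (arrived off the shortest path back to S: dropped)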

Journal ArticleDOI
TL;DR: In this paper, a simple model of peer monitoring in a competitive credit market is presented, where the transfer of risk from the bank to the cosigner leads to an improvement in borrowers' welfare.
Abstract: A major problem for institutional lenders is ensuring that borrowers exercise prudence in the use of the funds so that the likelihood of repayment is enhanced. One partial solution is peer monitoring: having neighbors who are in a good position to monitor the borrower be required to pay a penalty if the borrower goes bankrupt. Peer monitoring is largely responsible for the successful financial performance of the Grameen Bank of Bangladesh and of similar group lending programs elsewhere. But peer monitoring has a cost. It transfers risk from the bank, which is in a better position to bear risk, to the cosigner. In a simple model of peer monitoring in a competitive credit market, this article demonstrates that the transfer of risk leads to an improvement in borrowers' welfare.

Journal ArticleDOI
TL;DR: In this paper, a quantification scheme is proposed to model the inexact reasoning processes of medical experts, which is essentially an approximation to conditional probability, but offers advantages over Bayesian analysis when they are utilized in a rule-based computer diagnostic system.
Abstract: Medical science often suffers from having so few data and so much imperfect knowledge that a rigorous probabilistic analysis, the ideal standard by which to judge the rationality of a physician's decision, is seldom possible. Physicians nevertheless seem to have developed an ill-defined mechanism for reaching decisions despite a lack of formal knowledge regarding the interrelationships of all the variables that they are considering. This report proposes a quantification scheme which attempts to model the inexact reasoning processes of medical experts. The numerical conventions provide what is essentially an approximation to conditional probability, but offer advantages over Bayesian analysis when they are utilized in a rule-based computer diagnostic system. One such system, a clinical consultation program named MYCIN, is described in the context of the proposed model of inexact reasoning.
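The quantification scheme combines evidence with certainty factors rather than full conditional probabilities. As a hedged sketch, here is the combining rule as it is usually presented in later accounts of MYCIN (illustrative of the idea, not necessarily this paper's exact 1975 formulation):

# Certainty-factor style evidence combination: two certainty factors in
# [-1, 1] combine so that corroborating evidence reinforces and
# conflicting evidence partially cancels.

def combine_cf(x, y):
    if x >= 0 and y >= 0:
        return x + y * (1 - x)          # both supportive: reinforce upward
    if x <= 0 and y <= 0:
        return x + y * (1 + x)          # both against: reinforce downward
    return (x + y) / (1 - min(abs(x), abs(y)))  # mixed: partial cancellation

print(combine_cf(0.6, 0.4))    # 0.76: two supportive rules
print(combine_cf(0.6, -0.4))   # 0.33...: conflicting evidence attenuates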

Journal ArticleDOI
TL;DR: In this paper, applying a strong coupling field between a metastable state and the upper state of an allowed transition to ground is proposed as a way to obtain a resonantly enhanced third-order susceptibility while at the same time inducing transparency of the media.
Abstract: We show that by applying a strong-coupling field between a metastable state and the upper state of an allowed transition to ground one may obtain a resonantly enhanced third-order susceptibility while at the same time inducing transparency of the media. An improvement in conversion efficiency and parametric gain, as compared to weak-coupling field behavior, of many orders of magnitude is predicted.

Journal ArticleDOI
TL;DR: This review surveys the multiple Ca2+ entry mechanisms of cells, including voltage-gated Ca2+ channels (both DHP-sensitive and DHP-resistant), receptor-operated, second messenger-operated, mechanically operated, and tonically active Ca2+ channels, and gap junction channels.
Abstract: MULTIPLE Ca2+ ENTRY MECHANISMS: Diversity of Voltage-gated Ca2+ Channels (VOCs); Common Features of Voltage-gated Ca2+ Channels; Subunit Composition of DHP-sensitive Ca2+ Channels; Structural Properties of DHP-resistant Ca2+ Channels; Receptor-operated Ca2+ Channels (ROCs); Second Messenger-operated Ca2+ Channels (SMOCs); Mechanically Operated Ca2+ Channels (MOCs); Tonically Active (background) Ca2+ Channels; Gap Junction Channels.

Proceedings ArticleDOI
01 May 1990
TL;DR: A new model of memory consistency, called release consistency, that allows for more buffering and pipelining than previously proposed models is introduced and is shown to be equivalent to the sequential consistency model for parallel programs with sufficient synchronization.
Abstract: Scalable shared-memory multiprocessors distribute memory among the processors and use scalable interconnection networks to provide high bandwidth and low latency communication. In addition, memory accesses are cached, buffered, and pipelined to bridge the gap between the slow shared memory and the fast processors. Unless carefully controlled, such architectural optimizations can cause memory accesses to be executed in an order different from what the programmer expects. The set of allowable memory access orderings forms the memory consistency model or event ordering model for an architecture. This paper introduces a new model of memory consistency, called release consistency, that allows for more buffering and pipelining than previously proposed models. A framework for classifying shared accesses and reasoning about event ordering is developed. The release consistency model is shown to be equivalent to the sequential consistency model for parallel programs with sufficient synchronization. Possible performance gains from the less strict constraints of the release consistency model are explored. Finally, practical implementation issues are discussed, concentrating on issues relevant to scalable architectures.
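The practical payoff of the model is that programs which bracket shared accesses with acquire/release synchronization behave as if sequentially consistent, while the hardware remains free to buffer and pipeline ordinary accesses in between. A hedged Python illustration of that programming contract follows (Python's runtime is far stricter than the hardware models discussed, so this shows the discipline, not actual reordering):

# Illustration of the synchronization discipline under which release
# consistency is indistinguishable from sequential consistency: all
# conflicting accesses to shared data are bracketed by acquire/release
# operations (here, a lock).
import threading, time

shared = {"data": 0, "ready": False}
lock = threading.Lock()            # lock acquire ~ "acquire" access,
                                   # lock release ~ "release" access

def producer():
    with lock:                     # acquire
        shared["data"] = 42        # ordinary writes, safely bracketed
        shared["ready"] = True     # release happens when the block exits

def consumer(out):
    while True:
        with lock:                 # acquire: sees all writes made before
            if shared["ready"]:    # the producer's matching release
                out.append(shared["data"])
                return
        time.sleep(0.001)          # yield so the producer can get the lock

result = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(result,))
t2.start(); t1.start(); t1.join(); t2.join()
print(result)                      # -> [42]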

Journal ArticleDOI
TL;DR: Zidovudine is safe and effective in persons with asymptomatic HIV infection and fewer than 500 CD4+ cells per cubic millimeter and additional study will be required to determine whether such treatment will ultimately improve survival for persons infected with HIV.
Abstract: Zidovudine (AZT) is a potent inhibitor of the replication of the human immunodeficiency virus (HIV), and it has been shown to improve survival in advanced HIV disease. We conducted a randomized, double-blind trial in adults with asymptomatic HIV infection who had CD4+ cell counts of fewer than 500 per cubic millimeter on entry into the study. The subjects (92 percent male) were randomly assigned to one of three treatment groups: placebo (428 subjects); zidovudine, 500 mg per day (453); or zidovudine, 1500 mg per day (457). After a mean follow-up of 55 weeks (range, 19 to 107), 33 of the subjects assigned to placebo had the acquired immunodeficiency syndrome (AIDS), as compared with 11 of those assigned to receive 500 mg of zidovudine (P = 0.002; relative risk, 2.8; 95 percent confidence interval, 1.4 to 5.6) and 14 of those assigned to receive 1500 mg of zidovudine (P = 0.05; relative risk, 1.9; 95 percent confidence interval, 1.0 to 3.5). In the three treatment groups, the rates of progression (per 100 person-years) to either AIDS or advanced AIDS-related complex were 7.6, 3.6, and 4.3, respectively. As compared with those assigned to placebo, the subjects in the zidovudine groups had significant increases in the number of CD4+ cells and significant declines in p24 antigen levels. In the 1500-mg zidovudine group, severe hematologic toxicity (anemia or neutropenia) was more frequent than in the other groups (P less than 0.0001). In the 500-mg zidovudine group, nausea was the only toxicity that was significantly more frequent (in 3.3 percent) than in the placebo group (P = 0.001). We conclude that zidovudine is safe and effective in persons with asymptomatic HIV infection and fewer than 500 CD4+ cells per cubic millimeter. Additional study will be required to determine whether such treatment will ultimately improve survival for persons infected with HIV.

Journal ArticleDOI
TL;DR: A secreted inhibitor of angiogenesis that is controlled by a tumor suppressor gene in hamster cells has been found to be similar to a fragment of the platelet and matrix protein thrombospondin, which demonstrates a function for the ubiquitous adhesive glycoprotein thromBosponin that is likely to be important in the normal physiological down-regulation of neovascularization.
Abstract: A secreted inhibitor of angiogenesis that is controlled by a tumor suppressor gene in hamster cells has been found to be similar to a fragment of the platelet and matrix protein thrombospondin. The two proteins were biochemically similar and immunologically crossreactive and could substitute for one another in two functional assays. Human thrombospondin inhibited neovascularization in vivo and endothelial cell migration in vitro, as does the hamster protein, gp140. gp140 sensitized smooth muscle cells to stimulation by epidermal growth factor, as does human thrombospondin. The thrombospondin gene has been localized on human chromosome 15. These results demonstrate a function for the ubiquitous adhesive glycoprotein thrombospondin that is likely to be important in the normal physiological down-regulation of neovascularization. In addition, they raise the possibility that thrombospondin may be one of a number of target molecules through which a tumor suppressor gene could act to restrain tumor growth.

Journal ArticleDOI
TL;DR: In the United States, the federal share has been declining in recent years, and although that share is at its lowest level in about 20 years, it still constitutes about two-thirds of the total as discussed by the authors.

Journal ArticleDOI
16 Aug 1990, Nature
TL;DR: Surprisingly, the nucleotide-binding 'core' of the ATPase fragment has a tertiary structure similar to that of hexokinase, although the remainder of the structures of the two proteins are completely dissimilar, suggesting that both the phosphotransferase mechanism and the substrate-induced conformational change intrinsic to the hexokinases may be used by the 70K heat shock-related proteins.
Abstract: The three-dimensional structure of the amino-terminal 44K ATPase fragment of the 70K bovine heat-shock cognate protein has been solved to a resolution of 2.2 Å. The ATPase fragment has two structural lobes with a deep cleft between them; ATP binds at the base of the cleft. Surprisingly, the nucleotide-binding 'core' of the ATPase fragment has a tertiary structure similar to that of hexokinase, although the remainder of the structures of the two proteins are completely dissimilar, suggesting that both the phosphotransferase mechanism and the substrate-induced conformational change intrinsic to the hexokinases may be used by the 70K heat shock-related proteins.

Journal ArticleDOI
TL;DR: The observed anatomical distribution and cellular features of the damage agree with those observed in instances of GC-induced toxicity in the rodent hippocampus and of stress-induced toxicity in the primate hippocampus, suggesting that sustained GC exposure (whether due to stress, Cushing's syndrome, or exogenous administration) might damage the human hippocampus.
Abstract: In the laboratory rat and guinea pig, glucocorticoids (GCs), the adrenal steroids that are secreted during stress, can damage the hippocampus and exacerbate the hippocampal damage induced by various neurological insults. An open question is whether GCs have similar deleterious effects in the primate hippocampus. In fact, we showed that sustained and fatal stress was associated with preferential hippocampal damage in the vervet monkey; however, it was not possible to determine whether the excessive GC secretion that accompanied such stress was the damaging agent. The present study examines this possibility. Pellets of cortisol (the principal GC of primates) were stereotaxically implanted into hippocampi of 4 vervet monkeys; contralateral hippocampi were implanted with cholesterol pellets as a control. One year later at postmortem, preferential damage occurred in the cortisol-implanted side. In the cholesterol side, mild cell layer irregularity was noted in 2 of 4 cases. By contrast, in the cortisol-exposed hippocampi, all cases had at least 2 of the following neuropathologic markers: cell layer irregularity, dendritic atrophy, soma shrinkage and condensation, or nuclear pyknosis. Damage was severe in some cases, and was restricted to the CA3/CA2 cell field. This anatomical distribution of damage and the cellular features of the damage agree with those observed in instances of GC-induced toxicity in the rodent hippocampus and of stress-induced toxicity in the primate hippocampus. These observations suggest that sustained GC exposure (whether due to stress, Cushing's syndrome, or exogenous administration) might damage the human hippocampus.

Journal ArticleDOI
Larry Cuban
TL;DR: In this article, the authors examine the dominant explanation presented by researchers and policymakers: the lack of rationality in proposing and implementing planned change and explore alternative explanations for recurring reforms, a political and an institutional perspective harnessed together.
Abstract: Why do reforms return again and again? To illustrate that the question is valid, I offer three examples drawn from instructional, curricular, and governance planned changes that have returned more than once. To answer the question, I first examine the dominant explanation presented by researchers and policymakers: the lack of rationality in proposing and implementing planned change. I explore why the dominant explanation is flawed in its frequently used metaphors and analysis. I then offer alternative explanations for recurring reforms—a political and an institutional perspective harnessed together—to explain the puzzle of why reforms return. The point of this analysis is to enlarge the repertoire of explanations that researchers and policymakers use to examine potential and past reforms. The policymaking stakes run high for expanding the range of explanations because the questions of why reforms failed in the past and why they return go to the heart of present policy debates over whether federal, state, ...