Showing papers by "Stanford University" published in 1991


Book
01 Jan 1991
TL;DR: The authors examine entropy, relative entropy, and mutual information as the central quantities governing data compression, channel capacity, and the construction of optimal codes.
Abstract: Preface to the Second Edition. Preface to the First Edition. Acknowledgments for the Second Edition. Acknowledgments for the First Edition. 1. Introduction and Preview. 1.1 Preview of the Book. 2. Entropy, Relative Entropy, and Mutual Information. 2.1 Entropy. 2.2 Joint Entropy and Conditional Entropy. 2.3 Relative Entropy and Mutual Information. 2.4 Relationship Between Entropy and Mutual Information. 2.5 Chain Rules for Entropy, Relative Entropy, and Mutual Information. 2.6 Jensen's Inequality and Its Consequences. 2.7 Log Sum Inequality and Its Applications. 2.8 Data-Processing Inequality. 2.9 Sufficient Statistics. 2.10 Fano's Inequality. Summary. Problems. Historical Notes. 3. Asymptotic Equipartition Property. 3.1 Asymptotic Equipartition Property Theorem. 3.2 Consequences of the AEP: Data Compression. 3.3 High-Probability Sets and the Typical Set. Summary. Problems. Historical Notes. 4. Entropy Rates of a Stochastic Process. 4.1 Markov Chains. 4.2 Entropy Rate. 4.3 Example: Entropy Rate of a Random Walk on a Weighted Graph. 4.4 Second Law of Thermodynamics. 4.5 Functions of Markov Chains. Summary. Problems. Historical Notes. 5. Data Compression. 5.1 Examples of Codes. 5.2 Kraft Inequality. 5.3 Optimal Codes. 5.4 Bounds on the Optimal Code Length. 5.5 Kraft Inequality for Uniquely Decodable Codes. 5.6 Huffman Codes. 5.7 Some Comments on Huffman Codes. 5.8 Optimality of Huffman Codes. 5.9 Shannon-Fano-Elias Coding. 5.10 Competitive Optimality of the Shannon Code. 5.11 Generation of Discrete Distributions from Fair Coins. Summary. Problems. Historical Notes. 6. Gambling and Data Compression. 6.1 The Horse Race. 6.2 Gambling and Side Information. 6.3 Dependent Horse Races and Entropy Rate. 6.4 The Entropy of English. 6.5 Data Compression and Gambling. 6.6 Gambling Estimate of the Entropy of English. Summary. Problems. Historical Notes. 7. Channel Capacity. 7.1 Examples of Channel Capacity. 7.2 Symmetric Channels. 7.3 Properties of Channel Capacity. 7.4 Preview of the Channel Coding Theorem. 7.5 Definitions. 7.6 Jointly Typical Sequences. 7.7 Channel Coding Theorem. 7.8 Zero-Error Codes. 7.9 Fano's Inequality and the Converse to the Coding Theorem. 7.10 Equality in the Converse to the Channel Coding Theorem. 7.11 Hamming Codes. 7.12 Feedback Capacity. 7.13 Source-Channel Separation Theorem. Summary. Problems. Historical Notes. 8. Differential Entropy. 8.1 Definitions. 8.2 AEP for Continuous Random Variables. 8.3 Relation of Differential Entropy to Discrete Entropy. 8.4 Joint and Conditional Differential Entropy. 8.5 Relative Entropy and Mutual Information. 8.6 Properties of Differential Entropy, Relative Entropy, and Mutual Information. Summary. Problems. Historical Notes. 9. Gaussian Channel. 9.1 Gaussian Channel: Definitions. 9.2 Converse to the Coding Theorem for Gaussian Channels. 9.3 Bandlimited Channels. 9.4 Parallel Gaussian Channels. 9.5 Channels with Colored Gaussian Noise. 9.6 Gaussian Channels with Feedback. Summary. Problems. Historical Notes. 10. Rate Distortion Theory. 10.1 Quantization. 10.2 Definitions. 10.3 Calculation of the Rate Distortion Function. 10.4 Converse to the Rate Distortion Theorem. 10.5 Achievability of the Rate Distortion Function. 10.6 Strongly Typical Sequences and Rate Distortion. 10.7 Characterization of the Rate Distortion Function. 10.8 Computation of Channel Capacity and the Rate Distortion Function. Summary. Problems. Historical Notes. 11. Information Theory and Statistics. 11.1 Method of Types. 11.2 Law of Large Numbers. 
11.3 Universal Source Coding. 11.4 Large Deviation Theory. 11.5 Examples of Sanov's Theorem. 11.6 Conditional Limit Theorem. 11.7 Hypothesis Testing. 11.8 Chernoff-Stein Lemma. 11.9 Chernoff Information. 11.10 Fisher Information and the Cramér-Rao Inequality. Summary. Problems. Historical Notes. 12. Maximum Entropy. 12.1 Maximum Entropy Distributions. 12.2 Examples. 12.3 Anomalous Maximum Entropy Problem. 12.4 Spectrum Estimation. 12.5 Entropy Rates of a Gaussian Process. 12.6 Burg's Maximum Entropy Theorem. Summary. Problems. Historical Notes. 13. Universal Source Coding. 13.1 Universal Codes and Channel Capacity. 13.2 Universal Coding for Binary Sequences. 13.3 Arithmetic Coding. 13.4 Lempel-Ziv Coding. 13.5 Optimality of Lempel-Ziv Algorithms. Summary. Problems. Historical Notes. 14. Kolmogorov Complexity. 14.1 Models of Computation. 14.2 Kolmogorov Complexity: Definitions and Examples. 14.3 Kolmogorov Complexity and Entropy. 14.4 Kolmogorov Complexity of Integers. 14.5 Algorithmically Random and Incompressible Sequences. 14.6 Universal Probability. 14.7 Kolmogorov Complexity. 14.9 Universal Gambling. 14.10 Occam's Razor. 14.11 Kolmogorov Complexity and Universal Probability. 14.12 Kolmogorov Sufficient Statistic. 14.13 Minimum Description Length Principle. Summary. Problems. Historical Notes. 15. Network Information Theory. 15.1 Gaussian Multiple-User Channels. 15.2 Jointly Typical Sequences. 15.3 Multiple-Access Channel. 15.4 Encoding of Correlated Sources. 15.5 Duality Between Slepian-Wolf Encoding and Multiple-Access Channels. 15.6 Broadcast Channel. 15.7 Relay Channel. 15.8 Source Coding with Side Information. 15.9 Rate Distortion with Side Information. 15.10 General Multiterminal Networks. Summary. Problems. Historical Notes. 16. Information Theory and Portfolio Theory. 16.1 The Stock Market: Some Definitions. 16.2 Kuhn-Tucker Characterization of the Log-Optimal Portfolio. 16.3 Asymptotic Optimality of the Log-Optimal Portfolio. 16.4 Side Information and the Growth Rate. 16.5 Investment in Stationary Markets. 16.6 Competitive Optimality of the Log-Optimal Portfolio. 16.7 Universal Portfolios. 16.8 Shannon-McMillan-Breiman Theorem (General AEP). Summary. Problems. Historical Notes. 17. Inequalities in Information Theory. 17.1 Basic Inequalities of Information Theory. 17.2 Differential Entropy. 17.3 Bounds on Entropy and Relative Entropy. 17.4 Inequalities for Types. 17.5 Combinatorial Bounds on Entropy. 17.6 Entropy Rates of Subsets. 17.7 Entropy and Fisher Information. 17.8 Entropy Power Inequality and Brunn-Minkowski Inequality. 17.9 Inequalities for Determinants. 17.10 Inequalities for Ratios of Determinants. Summary. Problems. Historical Notes. Bibliography. List of Symbols. Index.
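
As a minimal illustration of the book's central quantity (a sketch, not code from the book), Shannon entropy H(X) = -Σ p(x) log₂ p(x) in a few lines:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H(X) = -sum of p(x) * log2(p(x))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))  # fair coin: 1.0 bit
print(entropy([0.9, 0.1]))  # biased coin: ~0.469 bits
```

By the source coding results of Chapter 5, this quantity lower-bounds the expected number of bits per symbol of any uniquely decodable code for the source.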

45,034 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the relation between the exploration of new possibilities and the exploitation of old certainties in organizational learning and examine some complications in allocating resources between the two, particularly those introduced by the distribution of costs and benefits across time and space.
Abstract: This paper considers the relation between the exploration of new possibilities and the exploitation of old certainties in organizational learning. It examines some complications in allocating resources between the two, particularly those introduced by the distribution of costs and benefits across time and space, and the effects of ecological interaction. Two general situations involving the development and use of knowledge in organizations are modeled. The first is the case of mutual learning between members of an organization and an organizational code. The second is the case of learning and competitive advantage in competition for primacy. The paper develops an argument that adaptive processes, by refining exploitation more rapidly than exploration, are likely to become effective in the short run but self-destructive in the long run. The possibility that certain common organizational practices ameliorate that tendency is assessed.
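
The paper's first model lends itself to a toy simulation; the sketch below is a simplified reconstruction (the parameter names p1 and p2 and the plain majority-vote update are illustrative simplifications, not the paper's exact specification):

```python
import random

def simulate(n=30, m=50, p1=0.1, p2=0.5, periods=100, seed=0):
    """Toy mutual learning between m individuals and an organizational code
    over n belief dimensions; reality is a hidden +1/-1 vector."""
    rng = random.Random(seed)
    reality = [rng.choice([-1, 1]) for _ in range(n)]
    code = [0] * n  # 0 means the code holds no belief yet
    people = [[rng.choice([-1, 0, 1]) for _ in range(n)] for _ in range(m)]

    def knowledge(beliefs):
        return sum(b == r for b, r in zip(beliefs, reality)) / n

    for _ in range(periods):
        # Socialization: individuals adopt the code's beliefs at rate p1.
        for person in people:
            for d in range(n):
                if code[d] != 0 and person[d] != code[d] and rng.random() < p1:
                    person[d] = code[d]
        # Code learning: the code moves toward the majority view of the
        # individuals who currently outperform it, at rate p2.
        superior = [p for p in people if knowledge(p) > knowledge(code)]
        for d in range(n):
            vote = sum(p[d] for p in superior)
            if vote != 0 and rng.random() < p2:
                code[d] = 1 if vote > 0 else -1
    return knowledge(code)

# March's argument: slower socialization (low p1) preserves belief
# diversity, so the code can keep learning. Compare the two runs.
print(simulate(p1=0.1), simulate(p1=0.9))
```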

16,377 citations


Book
01 Jan 1991
TL;DR: The authors present the theory and design of signal compression systems, covering random processes, sampling, linear prediction (including the Levinson-Durbin algorithm), scalar and vector quantization, and entropy coding.
Abstract: 1 Introduction- 1.1 Signals, Coding, and Compression- 1.2 Optimality- 1.3 How to Use this Book- 1.4 Related Reading- I Basic Tools- 2 Random Processes and Linear Systems- 2.1 Introduction- 2.2 Probability- 2.3 Random Variables and Vectors- 2.4 Random Processes- 2.5 Expectation- 2.6 Linear Systems- 2.7 Stationary and Ergodic Properties- 2.8 Useful Processes- 2.9 Problems- 3 Sampling- 3.1 Introduction- 3.2 Periodic Sampling- 3.3 Noise in Sampling- 3.4 Practical Sampling Schemes- 3.5 Sampling Jitter- 3.6 Multidimensional Sampling- 3.7 Problems- 4 Linear Prediction- 4.1 Introduction- 4.2 Elementary Estimation Theory- 4.3 Finite-Memory Linear Prediction- 4.4 Forward and Backward Prediction- 4.5 The Levinson-Durbin Algorithm- 4.6 Linear Predictor Design from Empirical Data- 4.7 Minimum Delay Property- 4.8 Predictability and Determinism- 4.9 Infinite Memory Linear Prediction- 4.10 Simulation of Random Processes- 4.11 Problems- II Scalar Coding- 5 Scalar Quantization I- 5.1 Introduction- 5.2 Structure of a Quantizer- 5.3 Measuring Quantizer Performance- 5.4 The Uniform Quantizer- 5.5 Nonuniform Quantization and Companding- 5.6 High Resolution: General Case- 5.7 Problems- 6 Scalar Quantization II- 6.1 Introduction- 6.2 Conditions for Optimality- 6.3 High Resolution Optimal Companding- 6.4 Quantizer Design Algorithms- 6.5 Implementation- 6.6 Problems- 7 Predictive Quantization- 7.1 Introduction- 7.2 Difference Quantization- 7.3 Closed-Loop Predictive Quantization- 7.4 Delta Modulation- 7.5 Problems- 8 Bit Allocation and Transform Coding- 8.1 Introduction- 8.2 The Problem of Bit Allocation- 8.3 Optimal Bit Allocation Results- 8.4 Integer Constrained Allocation Techniques- 8.5 Transform Coding- 8.6 Karhunen-Loeve Transform- 8.7 Performance Gain of Transform Coding- 8.8 Other Transforms- 8.9 Sub-band Coding- 8.10 Problems- 9 Entropy Coding- 9.1 Introduction- 9.2 Variable-Length Scalar Noiseless Coding- 9.3 Prefix Codes- 9.4 Huffman Coding- 9.5 Vector Entropy Coding- 9.6 Arithmetic Coding- 9.7 Universal and Adaptive Entropy Coding- 9.8 Ziv-Lempel Coding- 9.9 Quantization and Entropy Coding- 9.10 Problems- III Vector Coding- 10 Vector Quantization I- 10.1 Introduction- 10.2 Structural Properties and Characterization- 10.3 Measuring Vector Quantizer Performance- 10.4 Nearest Neighbor Quantizers- 10.5 Lattice Vector Quantizers- 10.6 High Resolution Distortion Approximations- 10.7 Problems- 11 Vector Quantization II- 11.1 Introduction- 11.2 Optimality Conditions for VQ- 11.3 Vector Quantizer Design- 11.4 Design Examples- 11.5 Problems- 12 Constrained Vector Quantization- 12.1 Introduction- 12.2 Complexity and Storage Limitations- 12.3 Structurally Constrained VQ- 12.4 Tree-Structured VQ- 12.5 Classified VQ- 12.6 Transform VQ- 12.7 Product Code Techniques- 12.8 Partitioned VQ- 12.9 Mean-Removed VQ- 12.10 Shape-Gain VQ- 12.11 Multistage VQ- 12.12 Constrained Storage VQ- 12.13 Hierarchical and Multiresolution VQ- 12.14 Nonlinear Interpolative VQ- 12.15 Lattice Codebook VQ- 12.16 Fast Nearest Neighbor Encoding- 12.17 Problems- 13 Predictive Vector Quantization- 13.1 Introduction- 13.2 Predictive Vector Quantization- 13.3 Vector Linear Prediction- 13.4 Predictor Design from Empirical Data- 13.5 Nonlinear Vector Prediction- 13.6 Design Examples- 13.7 Problems- 14 Finite-State Vector Quantization- 14.1 Recursive Vector Quantizers- 14.2 Finite-State Vector Quantizers- 14.3 Labeled-States and Labeled-Transitions- 14.4 Encoder/Decoder Design- 14.5 Next-State Function Design- 14.6 Design Examples- 14.7 Problems- 15 Tree and Trellis Encoding- 15.1 Delayed Decision Encoder- 15.2 Tree and Trellis Coding- 15.3 Decoder Design- 15.4 Predictive Trellis Encoders- 15.5 Other Design Techniques- 15.6 Problems- 16 Adaptive Vector Quantization- 16.1 Introduction- 16.2 Mean Adaptation- 16.3 Gain-Adaptive Vector Quantization- 16.4 Switched Codebook Adaptation- 16.5 Adaptive Bit Allocation- 16.6 Address VQ- 16.7 Progressive Code Vector Updating- 16.8 Adaptive Codebook Generation- 16.9 Vector Excitation Coding- 16.10 Problems- 17 Variable Rate Vector Quantization- 17.1 Variable Rate Coding- 17.2 Variable Dimension VQ- 17.3 Alternative Approaches to Variable Rate VQ- 17.4 Pruned Tree-Structured VQ- 17.5 The Generalized BFOS Algorithm- 17.6 Pruned Tree-Structured VQ- 17.7 Entropy Coded VQ- 17.8 Greedy Tree Growing- 17.9 Design Examples- 17.10 Bit Allocation Revisited- 17.11 Design Algorithms- 17.12 Problems

7,015 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a reference-dependent theory of consumer choice, which explains such effects by a deformation of indifference curves about the reference point, in which losses and disadvantages have greater impact on preferences than gains and advantages.
Abstract: Much experimental evidence indicates that choice depends on the status quo or reference level: changes of reference point often lead to reversals of preference. We present a reference-dependent theory of consumer choice, which explains such effects by a deformation of indifference curves about the reference point. The central assumption of the theory is that losses and disadvantages have greater impact on preferences than gains and advantages. Implications of loss aversion for economic behavior are considered. The standard models of decision making assume that preferences do not depend on current assets. This assumption greatly simplifies the analysis of individual choice and the prediction of trades: indifference curves are drawn without reference to current holdings, and the Coase theorem asserts that, except for transaction costs, initial entitlements do not affect final allocations. The facts of the matter are more complex. There is substantial evidence that initial entitlements do matter and that the rate of exchange between goods can be quite different depending on which is acquired and which is given up, even in the absence of transaction costs or income effects. In accord with a psychological analysis of value, reference levels play a large role in determining preferences. In the present paper we review the evidence for this proposition and offer a theory that generalizes the standard model by introducing a reference state. The present analysis of riskless choice extends our treatment of choice under uncertainty [Kahneman and Tversky, 1979, 1984; Tversky and Kahneman, 1991], in which the outcomes of risky prospects are evaluated by a value function that has three essential characteristics. Reference dependence: the carriers of value are gains and losses defined relative to a reference point. Loss aversion: the function is steeper in the negative than in the positive domain; losses loom larger than corresponding gains. Diminishing sensitivity: the marginal value of both gains and losses decreases with their size.
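
The three characteristics can be captured by a piecewise value function; the power-law form below is the standard later parameterization from the prospect-theory literature, shown only as an illustration, not as a specification fitted in this paper:

```latex
v(x) =
\begin{cases}
  x^{\alpha} & \text{if } x \ge 0 \\
  -\lambda (-x)^{\beta} & \text{if } x < 0
\end{cases}
\qquad 0 < \alpha,\ \beta < 1, \quad \lambda > 1
```

Here x is a gain or loss relative to the reference point (reference dependence), λ > 1 makes the function steeper for losses (loss aversion), and exponents below 1 give concavity for gains and convexity for losses (diminishing sensitivity).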

5,864 citations


Journal ArticleDOI
TL;DR: In this article, a principal-agent model that can explain why employment is sometimes superior to independent contracting even when there are no productive advantages to specific physical or human capital and no financial market imperfections to limit the agent's borrowings is presented.
Abstract: Introduction In the standard economic treatment of the principal–agent problem, compensation systems serve the dual function of allocating risks and rewarding productive work. A tension between these two functions arises when the agent is risk averse, for providing the agent with effective work incentives often forces him to bear unwanted risk. Existing formal models that have analyzed this tension, however, have produced only limited results. It remains a puzzle for this theory that employment contracts so often specify fixed wages and more generally that incentives within firms appear to be so muted, especially compared to those of the market. Also, the models have remained too intractable to effectively address broader organizational issues such as asset ownership, job design, and allocation of authority. In this article, we will analyze a principal–agent model that (i) can account for paying fixed wages even when good, objective output measures are available and agents are highly responsive to incentive pay; (ii) can make recommendations and predictions about ownership patterns even when contracts can take full account of all observable variables and court enforcement is perfect; (iii) can explain why employment is sometimes superior to independent contracting even when there are no productive advantages to specific physical or human capital and no financial market imperfections to limit the agent's borrowings; (iv) can explain bureaucratic constraints; and (v) can shed light on how tasks get allocated to different jobs.
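
For intuition about why incentives within firms can be optimally muted, consider the linear-exponential-normal benchmark associated with the authors' earlier work on linear contracts; the formula below is that standard textbook result, stated here as background rather than as this article's model. With wage w = α + βx, output x = e + ε, ε ~ N(0, σ²), agent risk aversion r, and quadratic effort cost with curvature c, the optimal piece rate is:

```latex
\beta^{*} = \frac{1}{1 + r \sigma^{2} c}
```

As measurement noise σ² or risk aversion r grows, β* falls toward zero, i.e., toward a fixed wage; the multitask analysis in this article shows why β* can be optimally low even when good, objective output measures are available.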

5,678 citations


Journal ArticleDOI
TL;DR: In social cognitive theory, human behavior is extensively motivated and regulated by the ongoing exercise of self-influence. The major self-regulative mechanism operates through three principal subfunctions: self-monitoring of one's behavior, its determinants, and its effects; judgment of behavior in relation to personal standards and environmental circumstances; and affective self-reaction.

4,222 citations


Book ChapterDOI
01 Jan 1991
TL;DR: The issues taken up here are coordination of content, coordination of process, and how participants update their common ground moment by moment.
Abstract: GROUNDING It takes two people working together to play a duet, shake hands, play chess, waltz, teach, or make love. To succeed, the two of them have to coordinate both the content and process of what they are doing. Alan and Barbara, on the piano, must come to play the same Mozart duet. This is coordination of content. They must also synchronize their entrances and exits, coordinate how loudly to play forte and pianissimo, and otherwise adjust to each other's tempo and dynamics. This is coordination of process. They cannot even begin to coordinate on content without assuming a vast amount of shared information or common ground, that is, mutual knowledge, mutual beliefs, and mutual assumptions. And to coordinate on process, they need to update their common ground moment by moment. All collective actions are built on common ground and its accumulation.

4,144 citations


Journal ArticleDOI
23 Aug 1991-Cell
TL;DR: The results suggest that calcineurin is involved in a common step associated with T cell receptor and IgE receptor signaling pathways and that cyclophilin and FKBP mediate the actions of CsA and FK506 by forming drug-dependent complexes with and altering the activity of calcineurin-calmodulin.

3,968 citations


Journal ArticleDOI
TL;DR: Infection with H. pylori is associated with an increased risk of gastric adenocarcinoma and may be a cofactor in the pathogenesis of this malignant condition.
Abstract: Background. Infection with Helicobacter pylori has been linked with chronic atrophic gastritis, an inflammatory precursor of gastric adenocarcinoma. In a nested case–control study, we explored whether H. pylori infection increases the risk of gastric carcinoma. Methods. From a cohort of 128,992 persons followed since the mid-1960s at a health maintenance organization, 186 patients with gastric carcinoma were selected as case patients and were matched according to age, sex, and race with 186 control subjects without gastric carcinoma. Stored serum samples collected during the 1960s were tested for IgG antibodies to H. pylori by enzyme-linked immunosorbent assay. Data on cigarette use, blood group, ulcer disease, and gastric surgery were obtained from questionnaires administered at enrollment. Tissue sections and pathology reports were reviewed to confirm the histologic results. Results. The mean time between serum collection and the diagnosis of gastric carcinoma was 14.2 years. Of the 109 patient...

3,882 citations


Journal ArticleDOI
06 Dec 1991-Science
TL;DR: Transition metal-catalyzed methods that are both selective and economical for formation of cyclic structures, of great interest for biological purposes, represent an important starting point for this long-term goal.
Abstract: Efficient synthetic methods required to assemble complex molecular arrays include reactions that are both selective (chemo-, regio-, diastereo-, and enantio-) and economical in atom count (maximum number of atoms of reactants appearing in the products). Methods that involve simply combining two or more building blocks with any other reactant needed only catalytically constitute the highest degree of atom economy. Transition metal-catalyzed methods that are both selective and economical for formation of cyclic structures, of great interest for biological purposes, represent an important starting point for this long-term goal. The limited availability of raw materials, combined with environmental concerns, requires that these goals be highlighted.
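
Atom economy can be made concrete with a small calculation; the Diels-Alder example below is a standard illustration and is not drawn from this paper:

```python
def atom_economy(product_mw, reactant_mws):
    """Percent of total reactant mass incorporated into the desired product."""
    return 100.0 * product_mw / sum(reactant_mws)

# Diels-Alder cycloaddition: butadiene (MW 54.09) + ethylene (MW 28.05)
# -> cyclohexene (MW 82.14); every reactant atom ends up in the product.
print(atom_economy(82.14, [54.09, 28.05]))  # 100.0
```

By contrast, a substitution reaction that expels a heavy leaving group scores far lower on this measure, which is why additions and cycloadditions rank so highly.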

3,830 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examine both how the biogeochemistry of the nitrogen cycle could cause limitation to develop, and how nitrogen limitation could persist as a consequence of processes that prevent or reduce nitrogen fixation.
Abstract: The widespread occurrence of nitrogen limitation to net primary production in terrestrial and marine ecosystems is something of a puzzle; it would seem that nitrogen fixers should have a substantial competitive advantage wherever nitrogen is limiting, and that their activity in turn should reverse limitation. Nevertheless, there is substantial evidence that nitrogen limits net primary production much of the time in most terrestrial biomes and many marine ecosystems. We examine both how the biogeochemistry of the nitrogen cycle could cause limitation to develop, and how nitrogen limitation could persist as a consequence of processes that prevent or reduce nitrogen fixation. Several biogeochemical mechanisms favor the development of nitrogen limitation, and a number of mechanisms could keep nitrogen fixation from reversing it. The possible importance of these and other processes is discussed for a wide range of terrestrial, freshwater, and marine ecosystems.

Journal ArticleDOI
TL;DR: The author proposes that the ways people respond to their own symptoms of depression influence the duration of these symptoms, and that people who engage in ruminative responses to depression, focusing on their symptoms and the possible causes and consequences of their symptoms, will show longer depressions than people who take action to distract themselves from their symptoms.
Abstract: I propose that the ways people respond to their own symptoms of depression influence the duration of these symptoms. People who engage in ruminative responses to depression, focusing on their symptoms and the possible causes and consequences of their symptoms, will show longer depressions than people who take action to distract themselves from their symptoms. Ruminative responses prolong depression because they allow the depressed mood to negatively bias thinking and interfere with instrumental behavior and problem-solving. Laboratory and field studies directly testing this theory have supported its predictions. I discuss how response styles can explain the greater likelihood of depression in women than men. Then I integrate this response styles theory with studies of coping with discrete events. The response styles theory is compared to other theories of the duration of depression. Finally, I suggest what may help a depressed person to stop engaging in ruminative responses and how response styles for depression may develop.


Journal ArticleDOI
TL;DR: Regression analysis showed that students who, before the earthquake, already had elevated levels of depression and stress symptoms and a ruminative style of responding to their symptoms had more depression andstress symptoms for both follow-ups.
Abstract: Measures of emotional health and styles of responding to negative moods were obtained for 137 students 14 days before the Loma Prieta earthquake. A follow-up was done 10 days and again 7 weeks after the earthquake to test predictions about which of the students would show the most enduring symptoms of depression and posttraumatic stress. Regression analysis showed that students who, before the earthquake, already had elevated levels of depression and stress symptoms and a ruminative style of responding to their symptoms had more depression and stress symptoms at both follow-ups. Students who were exposed to more dangerous or difficult circumstances because of the earthquake also had elevated symptom levels 10 days after the earthquake. Similarly, students who, during the 10 days after the earthquake, had more ruminations about the earthquake were still more likely to have high levels of depressive and stress symptoms 7 weeks after the earthquake.

Journal ArticleDOI
TL;DR: In this paper, the authors report the first demonstration of a technique by which an optically thick medium may be rendered transparent by applying a temporally smooth coupling laser between a bound state of an atom and the upper state of the transition which is to be made transparent.
Abstract: We report the first demonstration of a technique by which an optically thick medium may be rendered transparent. The transparency results from a destructive interference of two dressed states which are created by applying a temporally smooth coupling laser between a bound state of an atom and the upper state of the transition which is to be made transparent. The transmittance of an autoionizing (ultraviolet) transition in Sr is changed from exp(-20) without a coupling laser present to exp(-1) in the presence of a coupling laser.
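
In terms of the Beer-Lambert law, transmittance is T = exp(-αL), so the quoted change in optical depth from 20 to 1 is dramatic; a quick check of the numbers reported above:

```python
import math

# Transmittance T = exp(-alpha * L), using the optical depths quoted above.
opaque = math.exp(-20)       # ~2.1e-9: essentially no light gets through
transparent = math.exp(-1)   # ~0.37: substantially transparent
print(transparent / opaque)  # transmission improves by a factor of ~1.8e8
```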

Journal ArticleDOI
TL;DR: Adolescents whose parents are characterized as authoritarian score reasonably well on measures indexing obedience and conformity to the standards of adults but have relatively poorer self-conceptions than other youngsters, while adolescents from indulgent homes evidence a strong sense of self-confidence but report a higher frequency of substance abuse and school misconduct and are less engaged in school.
Abstract: In order to test Maccoby and Martin's revision of Baumrind's conceptual framework, the families of approximately 4,100 14-18-year-olds were classified into 1 of 4 groups (authoritative, authoritarian, indulgent, or neglectful) on the basis of the adolescents' ratings of their parents on 2 dimensions: acceptance/involvement and strictness/supervision. The youngsters were then contrasted along 4 sets of outcomes: psychosocial development, school achievement, internalized distress, and problem behavior. Results indicate that adolescents who characterize their parents as authoritative score highest on measures of psychosocial competence and lowest on measures of psychological and behavioral dysfunction; the reverse is true for adolescents who describe their parents as neglectful. Adolescents whose parents are characterized as authoritarian score reasonably well on measures indexing obedience and conformity to the standards of adults but have relatively poorer self-conceptions than other youngsters. In contrast, adolescents from indulgent homes evidence a strong sense of self-confidence but report a higher frequency of substance abuse and school misconduct and are less engaged in school. The results provide support for Maccoby and Martin's framework and indicate the need to distinguish between two types of "permissive" families: those that are indulgent and those that are neglectful.
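
In the Maccoby-Martin scheme, the four groups come from crossing the two rated dimensions; a minimal sketch of that classification follows (the cutoff logic here is hypothetical; the study's actual assignment used cuts of the rating distributions):

```python
def parenting_style(acceptance, strictness, cutoff=0.0):
    """Classify by crossing two dimensions, scored so that `cutoff`
    separates 'high' from 'low' on each scale."""
    high_acceptance = acceptance > cutoff
    high_strictness = strictness > cutoff
    if high_acceptance and high_strictness:
        return "authoritative"
    if high_strictness:
        return "authoritarian"
    if high_acceptance:
        return "indulgent"
    return "neglectful"

print(parenting_style(0.8, 0.9))    # authoritative
print(parenting_style(-0.5, 0.7))   # authoritarian
print(parenting_style(0.6, -0.4))   # indulgent
```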

Proceedings Article
12 May 1991
TL;DR: The first demonstration of a technique by which an optically thick medium may be rendered transparent is reported, which results from a destructive interference of two dressed states created by applying a temporally smooth coupling laser between a bound state of an atom and the upper state of the transition which is to be made transparent.
Abstract: We report the results of an experiment showing how an opaque atomic transition in neutral Strontium may be rendered transparent to radiation at its resonance frequency. This is accomplished by applying an electromagnetic coupling field between the upper state 4d5d ¹D₂ of the transition and another state 4d5p ¹D₂ of the atom. When the Rabi frequency of the coupling field exceeds the inhomogeneous width of the 5s5p ¹P₁–4d5d ¹D₂ transition, the medium becomes transparent on line center.

Journal ArticleDOI
TL;DR: It is shown that the class of programs possessing a total well-founded model properly includes the previously studied classes of "stratified" and "locally stratified" programs; the approach is also compared with other proposals in the literature.
Abstract: A general logic program (abbreviated to "program" hereafter) is a set of rules that have both positive and negative subgoals. It is common to view a deductive database as a general logic program consisting of rules (IDB) sitting above elementary relations (EDB, facts). It is desirable to associate one Herbrand model with a program and think of that model as the "meaning of the program," or its "declarative semantics." Ideally, queries directed to the program would be answered in accordance with this model. Recent research indicates that some programs do not have a "satisfactory" total model; for such programs, the question of an appropriate partial model arises. Unfounded sets and well-founded partial models are introduced, and the well-founded semantics of a program is defined to be its well-founded partial model. If the well-founded partial model is in fact a total model, it is called the well-founded model. It is shown that the class of programs possessing a total well-founded model properly includes previously studied classes of "stratified" and "locally stratified" programs. The method in this paper is also compared with other proposals in the literature, including Clark's "program completion," Fitting's and Kunen's 3-valued interpretations of it, and the "stable models" of Gelfond and Lifschitz.
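
A compact way to see the construction is the alternating-fixpoint computation for propositional programs; the sketch below (representation and names are illustrative, and the paper of course treats the general case) classifies each atom as true, false, or undefined:

```python
from itertools import chain

# A rule is (head, positive_body, negative_body); atoms are strings.
RULES = [
    ("a", set(), set()),     # a.
    ("b", {"a"}, {"c"}),     # b :- a, not c.
    ("p", set(), {"q"}),     # p :- not q.  (non-stratified pair:
    ("q", set(), {"p"}),     # q :- not p.   p and q stay undefined)
]

def gamma(rules, j):
    """Least model of the reduct w.r.t. j: 'not a' holds iff a is not in j."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if head not in model and pos <= model and not (neg & j):
                model.add(head)
                changed = True
    return model

def well_founded(rules):
    atoms = set(chain.from_iterable([h, *p, *n] for h, p, n in rules))
    true = set()
    while True:                     # ascend to the least fixpoint of gamma∘gamma
        nxt = gamma(rules, gamma(rules, true))
        if nxt == true:
            break
        true = nxt
    not_false = gamma(rules, true)  # the greatest fixpoint of gamma∘gamma
    return true, atoms - not_false, not_false - true

print(well_founded(RULES))  # true={'a','b'}, false={'c'}, undefined={'p','q'}
```

Because p and q negatively depend on each other, this program is neither stratified nor locally stratified, yet it still has a well-founded partial model: exactly the situation the paper's semantics is designed to handle.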

Book
01 Jan 1991
TL;DR: In this timely reissue of one of social psychology's classic textbooks, with a new foreword by Malcolm Gladwell, author of The Tipping Point, the authors argue that the context we find ourselves in substantially affects our behavior.
Abstract: How does the situation we're in influence the way we behave and think? Professors Ross and Nisbett eloquently argue that the context we find ourselves in substantially affects our behavior in this timely reissue of one of social psychology's classic textbooks. With a new foreword by Malcolm Gladwell, author of The Tipping Point.

Journal ArticleDOI
TL;DR: The hippocampus is capable of mediating inhibition over a wide range of steroid levels, and is distinguished from most potential feedback sites, including the hypothalamus and pituitary, by its high content of both type I and II corticosteroid receptors.
Abstract: There is considerable, although not entirely consistent, evidence that the hippocampus inhibits most aspects of HPA activity, including basal (circadian nadir) and circadian peak secretion as well as the onset and termination of responses to stress. Although much of the evidence for these effects rests only on the measurement of corticosteroids, recent lesion and implant studies indicate that the hippocampus regulates adrenocortical activity at the hypothalamic level, via the expression and secretion of ACTH secretagogues. Such inhibition results largely from the mediation of corticosteroid feedback, although more work is required to determine whether the hippocampus supplies a tonic inhibitory input in the absence of corticosteroids. It must be noted that the hippocampus is not the only feedback site in the adrenocortical system, since removal of its input only reduces, but does not abolish, the efficacy of corticosteroid inhibition, and since other elements of the axis appear eventually to compensate for deficits in feedback regulation. The importance of other feedback sites is further suggested not only by the presence of corticosteroid receptors in other parts of the brain and pituitary, but also by the improved prediction of CRF levels by combined hypothalamic and hippocampal receptor occupancy. The likelihood of feedback mediated by nonhippocampal sites underscores the need for future work to characterize hippocampal influence on HPA activity in the absence of changes in corticosteroid secretion. However, despite the fact that the hippocampus is not the only feedback site, it is distinguished from most potential feedback sites, including the hypothalamus and pituitary, by its high content of both type I and II corticosteroid receptors. The hippocampus is therefore capable of mediating inhibition over a wide range of steroid levels. The low end of this range is represented by corticosteroid inhibition of basal (circadian nadir) HPA activity. The apparent type I receptor specificity of this inhibition and the elevation of trough corticosteroid levels after hippocampal damage support a role for hippocampal type I receptors in regulating basal HPA activity. It is possible that basal activity is controlled in part through hippocampal inhibition of vasopressin, since the inhibition of portal blood vasopressin correlates with lower levels of hippocampal receptor occupancy, and the expression of vasopressin by some CRF neurons is sensitive to very low corticosteroid levels. At the high end of the physiological range, stress-induced or circadian peak corticosteroid secretion correlates strongly with occupancy of the lower affinity hippocampal type II receptors.

Journal ArticleDOI
TL;DR: This article points out that the classic case studies referred to were not single-case examinations and that a trade-off between good stories and good constructs has to be made when working in the journal format rather than publishing an entire book.
Abstract: In this article the author replies to comments on her earlier work on building theory from case studies. She notes that a benefit of her method is that the researcher can view a variety of cases and so draw conclusions from a wider base of knowledge, seeing patterns on a larger scale and ruling out chance associations. The author goes on to address the concerns raised in the critique of her work, suggesting that the classic case studies referred to were not single-case examinations and that the trade-off between good stories and good constructs has to be made when working in the journal format instead of publishing an entire book.

Journal ArticleDOI
TL;DR: This paper investigated the relation between judgments of probability and preferences between bets and found that people prefer betting on their own judgment over an equiprobable chance event when they consider themselves knowledgeable, but not otherwise.
Abstract: We investigate the relation between judgments of probability and preferences between bets. A series of experiments provides support for the competence hypothesis that people prefer betting on their own judgment over an equiprobable chance event when they consider themselves knowledgeable, but not otherwise. They even pay a significant premium to bet on their judgments. These data cannot be explained by aversion to ambiguity, because judgmental probabilities are more ambiguous than chance events. We interpret the results in terms of the attribution of credit and blame. The possibility of inferring beliefs from preferences is questioned.

Journal ArticleDOI
TL;DR: In this paper, a model of peasant household behavior under varying degrees of household-specific food and labor market failures is constructed to show that these structural features can explain several well known patterns of peasant response which have often been attributed to peculiar motives, presumed specific to peasants.
Abstract: A model of peasant household behavior, under varying degrees of household-specific food and labor market failures, is constructed to show that these structural features can explain several well-known patterns of peasant response which have often been attributed to peculiar motives, presumed specific to peasants. The model explains sluggish response to cash crop prices and high instability in perceived food and labor scarcities; the key role of manufactured consumer goods prices in stimulating peasants' effort in cash crop production; the effectiveness of taxation as opposed to incentives in stimulating cash crop production; and the key role of technological change in food production to enhance cash crop production. Results are obtained analytically in the case of one market failure and by numerical simulation with more than one.

Journal ArticleDOI
TL;DR: In this article, a randomized, double-blind trial in patients with sepsis and a presumed diagnosis of gram-negative infection is reported, in which the patients received either a single 100-mg intravenous dose of HA-1A or placebo.
Abstract: Background. HA-1A is a human monoclonal IgM antibody that binds specifically to the lipid A domain of endotoxin and prevents death in laboratory animals with gram-negative bacteremia and endotoxemia. Methods. To evaluate the efficacy and safety of HA-1A, we conducted a randomized, double-blind trial in patients with sepsis and a presumed diagnosis of gram-negative infection. The patients received either a single 100-mg intravenous dose of HA-1A (in 3.5 g of albumin) or placebo (3.5 g of albumin). Other interventions, including the administration of antibiotics and fluids, were not affected by the study protocol. Results. Of 543 patients with sepsis who were treated, 200 (37 percent) had gram-negative bacteremia as proved by blood culture. For the patients with gram-negative bacteremia followed to death or day 28, there were 45 deaths among the 92 recipients of placebo (49 percent) and 32 deaths among the 105 recipients of HA-1A (30 percent; P = 0.014). For the patients with gram-negative bacteremia and sho...
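
A quick check of the mortality figures quoted above (simple proportions only; the trial's own analysis was more involved):

```python
placebo_deaths, placebo_n = 45, 92
ha1a_deaths, ha1a_n = 32, 105

p_placebo = placebo_deaths / placebo_n  # ~0.49, the quoted 49 percent
p_ha1a = ha1a_deaths / ha1a_n           # ~0.30, the quoted 30 percent
print(p_ha1a / p_placebo)               # relative risk of ~0.62
print(p_placebo - p_ha1a)               # absolute reduction of ~0.18
```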

Book
30 Aug 1991
TL;DR: The role of heuristics in political reasoning is discussed in this book, which presents a theory sketch of how heuristics shape reasoning, along with a discussion of the principle-policy puzzle of American racial attitudes.
Abstract: List of tables and figures Preface 1. Introduction: major themes 2. The role of heuristics in political reasoning: a theory sketch 3. Values under pressure: AIDS and civil liberties 4. The principle-policy puzzle: the paradox of American racial attitudes 5. Reasoning chains 6. The likability heuristic 7. Democratic values and mass publics 8. Ideological reasoning 9. Information and electoral choice 10. Stability and change in party identification: presidential to off-years 11. The American dilemma: the role of law as a persuasive symbol 12. Ideology and issue persuasibility: dynamics of racial policy attitudes 13. The new racism and the American ethos 14. Retrospect and prospect Notes Bibliography Index

Journal ArticleDOI
TL;DR: To test whether attentional resources are automatically directed away from an attended task to undesirable stimuli, Ss named the colors in which desirable and undesirable traits appeared; color-naming latencies were consistently longer for undesirable traits but did not differ within the desirable and undesirable categories.
Abstract: One of the functions of automatic stimulus evaluation is to direct attention toward events that may have undesirable consequences for the perceiver's well-being. To test whether attentional resources are automatically directed away from an attended task to undesirable stimuli, Ss named the colors in which desirable and undesirable traits (e.g., honest, sadistic) appeared. Across 3 experiments, color-naming latencies were consistently longer for undesirable traits but did not differ within the desirable and undesirable categories. In Experiment 2, Ss also showed more incidental learning for undesirable traits, as predicted by the automatic vigilance (but not a perceptual defense) hypothesis. In Experiment 3, a diagnosticity (or base-rate) explanation of the vigilance effect was ruled out. The implications for deliberate processing in person perception and stereotyping are discussed. There is a fundamental asymmetry in people's evaluations of gains and losses, of joy and pain, and of positive and negative events. A considerable body of research, in fields as diverse as decision making, impression formation, and emotional communication, has shown that people exhibit loss aversion (Kahneman & Tversky, 1984): They assign relatively more value, importance, and weight to events that have negative, rather than positive, implications for them. In decision making, potential costs are more influential than potential gains (e.g., Kahneman & Tversky, 1979). In impression formation, negative information is weighted more heavily than positive information (e.g., Anderson, 1974; Fiske, 1980; Hamilton & Zanna, 1972). In nonverbal communication, perceivers are more responsive to negatively toned messages than to positive ones (Frodi, Lamb, Leavitt, & Donovan, 1978). Quite generally, then, "losses loom larger than gains" (Kahneman & Tversky, 1984, p. 348). There are good evolutionary reasons for this widespread and pronounced asymmetry in people's evaluative reactions. Events that may negatively affect the individual are typically of greater time urgency than are events that lead to desirable consequences. Averting danger to one's well-being, such as preventing loss of life or limb, often requires an immediate response.

Journal ArticleDOI
TL;DR: In this article, the authors present an intraorganizational ecological perspective on strategy making and examine how internal selection may combine with external selection to explain organizational change and survival, and propose that consistently successful organizations are characterized by top managements who spend efforts on building the induced and autonomous strategic processes, as well as concerning themselves with the content of strategy.
Abstract: This paper presents an intraorganizational ecological perspective on strategy making, and examines how internal selection may combine with external selection to explain organizational change and survival. The perspective serves to illuminate data from a field study of the evolution of Intel Corporation's corporate strategy. The data, in turn, are used to refine and deepen the conceptual framework. Relationships between induced and autonomous strategic processes and four modes of organizational adaptation are discussed. Apparent paradoxes associated with structural inertia and strategic reorientation arguments are elucidated and several new propositions derived. The paper proposes that consistently successful organizations are characterized by top managements who spend efforts on building the induced and autonomous strategic processes, as well as concerning themselves with the content of strategy; that such organizations simultaneously exercise induced and autonomous processes; and that successful reorientations in organizations are likely to have been preceded by internal experimentation and selection processes effected through the autonomous process.

Journal ArticleDOI
03 Jan 1991-Nature
TL;DR: This gene, called XIST (for Xi-specific transcripts), is a candidate for a gene either involved in or uniquely influenced by the process of X inactivation, and is described as an X-linked gene with a novel expression pattern.
Abstract: X-chromosome inactivation results in the cis-limited dosage compensation of genes on one of the pair of X chromosomes in mammalian females. Although most X-linked genes are believed to be subject to inactivation, several are known to be expressed from both active and inactive X chromosomes. Here we describe an X-linked gene with a novel expression pattern--transcripts are detected only from the inactive X chromosome (Xi) and not from the active X chromosome (Xa). This gene, called XIST (for Xi-specific transcripts), is a candidate for a gene either involved in or uniquely influenced by the process of X inactivation.

Proceedings ArticleDOI
01 May 1991
TL;DR: An algorithm that improves the locality of a loop nest by transforming the code via interchange, reversal, skewing and tiling is proposed, and is successful in optimizing codes such as matrix multiplication, successive over-relaxation, LU decomposition without pivoting, and Givens QR factorization.
Abstract: This paper proposes an algorithm that improves the locality of a loop nest by transforming the code via interchange, reversal, skewing and tiling. The loop transformation algorithm is based on two concepts: a mathematical formulation of reuse and locality, and a loop transformation theory that unifies the various transforms as unimodular matrix transformations. The algorithm has been implemented in the SUIF (Stanford University Intermediate Format) compiler, and is successful in optimizing codes such as matrix multiplication, successive over-relaxation (SOR), LU decomposition without pivoting, and Givens QR factorization. Performance evaluation indicates that locality optimization is especially crucial for scaling up the performance of parallel code.
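
As a flavor of the transformations the paper automates, here is a hand-tiled matrix multiply (a sketch: the block size is hypothetical, and this is not SUIF output). Tiling restructures the loops so that small blocks of A, B, and C are reused while they still fit in cache, and the i-k-j order inside each block gives stride-1 access to the rows of B and C:

```python
def matmul_tiled(A, B, C, n, tile=32):
    """C += A @ B for n x n lists-of-lists, computed block by block."""
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a_ik = A[i][k]  # invariant across the innermost loop
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += a_ik * B[k][j]

n = 4
A = [[1.0] * n for _ in range(n)]
B = [[1.0] * n for _ in range(n)]
C = [[0.0] * n for _ in range(n)]
matmul_tiled(A, B, C, n, tile=2)
print(C[0][0])  # 4.0: each entry is the dot product of a row and a column
```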

Journal ArticleDOI
TL;DR: A randomized clinical trial tests the hypothesis that recombinant methionyl granulocyte colony-stimulating factor (G-CSF) can reduce chemotherapy-related neutropenia in patients with cancer, and examines the clinical implications.
Abstract: Background. Neutropenia and infection are major dose-limiting side effects of chemotherapy. Previous studies have suggested that recombinant methionyl granulocyte colony-stimulating factor (G-CSF) can reduce chemotherapy-related neutropenia in patients with cancer. We conducted a randomized clinical trial to test this hypothesis and the clinical implications. Methods. Patients with small-cell lung cancer were enrolled in a multicenter, randomized, double-blind, placebo-controlled trial of recombinant methionyl G-CSF to study the incidence of infection as manifested by fever with neutropenia (absolute neutrophil count, <1.0×10⁹ per liter, with a temperature ≥38.2°C) resulting from up to six cycles of chemotherapy with cyclophosphamide, doxorubicin, and etoposide. The patients were randomly assigned to receive either placebo or G-CSF, with treatment beginning on day 4 and continuing through day 17 of a 21-day cycle. Results. The safety of the study treatment could be evaluated in 207 of the 211 pa...