scispace - formally typeset

Showing papers from Stanford University published in 1979


Journal ArticleDOI
TL;DR: In this article, the authors discuss the problem of estimating the sampling distribution of a pre-specified random variable R(X, F) on the basis of the observed data x.
Abstract: We discuss the following problem: given a random sample X = (X_1, X_2, …, X_n) from an unknown probability distribution F, estimate the sampling distribution of some prespecified random variable R(X, F) on the basis of the observed data x. (Standard jackknife theory gives an approximate mean and variance in the case R(X, F) = θ(F̂) − θ(F), θ some parameter of interest.) A general method, called the "bootstrap", is introduced and shown to work satisfactorily on a variety of estimation problems. The jackknife is shown to be a linear approximation method for the bootstrap. The exposition proceeds by a series of examples: variance of the sample median, error rates in a linear discriminant analysis, ratio estimation, estimating regression parameters, etc.
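As an illustration of the bootstrap idea (a sketch of ours, not the paper's code), the sampling variability of the sample median can be estimated by drawing resamples with replacement and recomputing the statistic on each:

```python
import random
import statistics

def bootstrap_std_error(data, stat, n_boot=2000, seed=0):
    """Estimate the sampling std. dev. of stat(X) by resampling data with replacement."""
    rng = random.Random(seed)
    n = len(data)
    # Each bootstrap replicate recomputes the statistic on a resample of size n.
    replicates = [stat([rng.choice(data) for _ in range(n)]) for _ in range(n_boot)]
    return statistics.pstdev(replicates)

# Hypothetical sample data, chosen only for demonstration.
sample = [2.1, 3.4, 1.7, 4.0, 2.8, 3.1, 2.5, 3.9, 1.9, 2.2]
se_median = bootstrap_std_error(sample, statistics.median)
```

The function name and data here are illustrative; the same resampling loop applies to any statistic R(X, F) computable from the sample.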

14,483 citations


Journal ArticleDOI
TL;DR: The authors found that younger children are quite limited in their knowledge and cognition about cognitive phenomena, or in their metacognition, and do relatively little monitoring of their own memory, comprehension, and other cognitive enterprises.
Abstract: Preschool and elementary school children were asked to study a set of items until they were sure they could recall them perfectly (Flavell, Friedrichs, & Hoyt, 1970). The older subjects studied for a while, said they were ready, and usually were, that is, they showed perfect recall. The younger children studied for a while, said they were ready, and usually were not. In another study, elementary school children were asked to help the experimenter evaluate the communicative adequacy of verbal instructions, indicating any omissions and obscurities (Markman, 1977). Although the instructions were riddled with blatant omissions and obscurities, the younger subjects were surprisingly poor at detecting them. They incorrectly thought they had understood and could follow the instructions, much as their counterparts in the study by Flavell et al. (1970) incorrectly thought they had memorized and could recall the items. Results such as these have suggested that young children are quite limited in their knowledge and cognition about cognitive phenomena, or in their metacognition, and do relatively little monitoring of their own memory, comprehension, and other cognitive enterprises (see, e.g., Brown, 1978; Flavell, 1978; Flavell & Wellman, 1977; Kreutzer, Leonard, & Flavell, 1975; Flavell, Note 1, Note 2, Note 3; Markman, Note 4). Investigators have recently concluded that metacognition plays an important role in oral communication of information, oral persuasion, oral comprehension, reading comprehension, writing, language acquisition, attention, memory, problem solving, social cognition, and various types of self-control and self-instruction; there are also clear indications that ideas about metacognition are beginning to make contact with similar ideas in the areas of social learning theory, cognitive behavior modification, personality development, and education (Flavell, Note 1, Note 2, Note 3).

8,092 citations


Journal ArticleDOI
TL;DR: In this paper, a simple discrete-time model for valuing options is presented; it contains as a limiting case the Black-Scholes model, which had previously been derived only by much more difficult methods.

5,864 citations


Journal ArticleDOI
TL;DR: In this article, capacity theorems for the relay channel are proved and the capacity of the Gaussian relay channel is evaluated. The dependence of the received symbols upon the inputs is given by p(y, y_1 | x_1, x_2), and for the general relay channel the capacity satisfies C ≤ max_{p(x_1,x_2)} min { I(X_1,X_2; Y), I(X_1; Y, Y_1 | X_2) }.
Abstract: A relay channel consists of an input x_1, a relay output y_1, a channel output y, and a relay sender x_2 (whose transmission is allowed to depend on the past symbols y_1). The dependence of the received symbols upon the inputs is given by p(y, y_1 | x_1, x_2). The channel is assumed to be memoryless. In this paper the following capacity theorems are proved. 1) If y is a degraded form of y_1, then C = max_{p(x_1,x_2)} min { I(X_1,X_2; Y), I(X_1; Y_1 | X_2) }. 2) If y_1 is a degraded form of y, then C = max_{p(x_1)} max_{x_2} I(X_1; Y | x_2). 3) If p(y, y_1 | x_1, x_2) is an arbitrary relay channel with feedback from (y, y_1) to both x_1 and x_2, then C = max_{p(x_1,x_2)} min { I(X_1,X_2; Y), I(X_1; Y, Y_1 | X_2) }. 4) For a general relay channel, C ≤ max_{p(x_1,x_2)} min { I(X_1,X_2; Y), I(X_1; Y, Y_1 | X_2) }. Superposition block Markov encoding is used to show achievability of C, and converses are established. The capacities of the Gaussian relay channel and certain discrete relay channels are evaluated. Finally, an achievable lower bound to the capacity of the general relay channel is established.
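The information quantities appearing in these theorems can be evaluated by brute force for small alphabets. The following sketch is our illustration, with a made-up toy channel in which both y and y_1 are noisy binary copies of x_1; it computes the two cut-set terms of theorem 4:

```python
import itertools
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability array (any shape)."""
    p = np.asarray(p).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical toy relay channel (illustrative numbers, not from the paper):
# y and y_1 are independent noisy copies of x_1 with flip probabilities eps, delta.
eps, delta = 0.1, 0.2
p_x = np.full((2, 2), 0.25)              # uniform p(x1, x2)
joint = np.zeros((2, 2, 2, 2))           # axes: x1, x2, y, y1
for x1, x2, y, y1 in itertools.product(range(2), repeat=4):
    py = (1 - eps) if y == x1 else eps       # p(y | x1)
    py1 = (1 - delta) if y1 == x1 else delta  # p(y1 | x1)
    joint[x1, x2, y, y1] = p_x[x1, x2] * py * py1

# I(X1,X2; Y) = H(X1,X2) + H(Y) - H(X1,X2,Y)
term1 = (entropy(p_x) + entropy(joint.sum(axis=(0, 1, 3)))
         - entropy(joint.sum(axis=3)))
# I(X1; Y,Y1 | X2) = H(X1,X2) + H(X2,Y,Y1) - H(X2) - H(X1,X2,Y,Y1)
term2 = (entropy(p_x) + entropy(joint.sum(axis=0))
         - entropy(p_x.sum(axis=0)) - entropy(joint))
cutset_bound = min(term1, term2)
```

Maximizing `cutset_bound` over input distributions p(x1, x2) would give the upper bound of theorem 4 for this toy channel.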

4,311 citations


Journal ArticleDOI
TL;DR: In this paper, subjects supporting and opposing capital punishment were exposed to two purported studies, one seemingly confirming and one seemingly disconfirming their existing beliefs about the deterrent efficacy of the death penalty.
Abstract: People who hold strong opinions on complex social issues are likely to examine relevant empirical evidence in a biased manner. They are apt to accept "confirming" evidence at face value while subjecting "disconfirming" evidence to critical evaluation, and as a result to draw undue support for their initial positions from mixed or random empirical findings. Thus, the result of exposing contending factions in a social dispute to an identical body of relevant empirical evidence may be not a narrowing of disagreement but rather an increase in polarization. To test these assumptions and predictions, subjects supporting and opposing capital punishment were exposed to two purported studies, one seemingly confirming and one seemingly disconfirming their existing beliefs about the deterrent efficacy of the death penalty. As predicted, both proponents and opponents of capital punishment rated those results and procedures that confirmed their own beliefs to be the more convincing and probative ones, and they reported corresponding shifts in their beliefs as the various results and procedures were presented. The net effect of such evaluations and opinion shifts was the postulated increase in attitude polarization. The human understanding when it has once adopted an opinion draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects, in order that by this great and pernicious predetermination the authority of its former conclusion may remain inviolate. (Bacon, 1620/1960)

3,808 citations



Journal ArticleDOI
TL;DR: The generalized cross-validation (GCV) method discussed in this article is a rotation-invariant version of Allen's PRESS; it can also be used in subset selection and singular value truncation, and even to choose from among mixtures of these methods.
Abstract: Consider the ridge estimate β̂(λ) for β in the model y = Xβ + ε, σ² unknown, β̂(λ) = (XᵀX + nλI)⁻¹Xᵀy. We study the method of generalized cross-validation (GCV) for choosing a good value for λ from the data. The estimate is the minimizer of V(λ) = [(1/n)‖(I − A(λ))y‖²] / [(1/n) tr(I − A(λ))]², where A(λ) = X(XᵀX + nλI)⁻¹Xᵀ. This estimate is a rotation-invariant version of Allen's PRESS, or ordinary cross-validation. This estimate behaves like a risk-improvement estimator but does not require an estimate of σ², so it can be used when n − p is small, or even when p exceeds n in certain cases. The GCV method can also be used in subset selection and singular value truncation methods for regression, and even to choose from among mixtures of these methods.
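The criterion V(λ) can be computed directly from its definition. A minimal NumPy sketch with synthetic data (the function and variable names are ours, chosen for illustration):

```python
import numpy as np

def gcv_score(X, y, lam):
    """V(lam) = (1/n)||(I - A)y||^2 / [(1/n) tr(I - A)]^2 for ridge regression."""
    n, p = X.shape
    # Influence matrix A(lam) = X (X'X + n*lam*I)^{-1} X'
    A = X @ np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T)
    resid = y - A @ y
    return (resid @ resid / n) / ((np.trace(np.eye(n) - A) / n) ** 2)

# Synthetic regression problem, purely for demonstration.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
beta = np.array([1.0, 0.0, -2.0, 0.5, 0.0])
y = X @ beta + 0.3 * rng.normal(size=50)

# Choose lambda by minimizing V over a grid.
lams = [10.0 ** k for k in range(-6, 1)]
best_lam = min(lams, key=lambda lam: gcv_score(X, y, lam))
```

In practice the minimization would be done over a finer grid or by a one-dimensional optimizer; the grid here only sketches the selection step.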

3,697 citations


Posted Content
TL;DR: In this paper, the marginal utility of consumption evolves according to a random walk with trend, and consumption itself should evolve in the same way, and the evidence supports a modified version of the life cycle permanent income hypothesis.
Abstract: Optimization on the part of consumers is shown to imply that the marginal utility of consumption evolves according to a random walk with trend. To a reasonable approximation, consumption itself should evolve in the same way. In particular, no variable apart from current consumption should be of any value in predicting future consumption. This implication is tested with time-series data for the postwar United States. It is confirmed for real disposable income, which has no predictive power for consumption, but rejected for an index of stock prices. The paper concludes that the evidence supports a modified version of the life cycle-permanent income hypothesis.
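The paper's testable implication (no variable apart from current consumption should help predict future consumption) can be illustrated on synthetic data: simulate a random-walk consumption series and check that regressing its change on an unrelated lagged predictor gives a slope near zero. This is purely illustrative, not the paper's dataset or test:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
c = np.cumsum(rng.normal(size=T)) + 100.0   # random-walk "consumption" series
z = rng.normal(size=T)                       # candidate predictor (stand-in for lagged income)

# Under the random-walk hypothesis, the change in consumption is unpredictable,
# so the OLS slope on the lagged predictor should be statistically near zero.
dc = c[1:] - c[:-1]
zlag = z[:-1]
Xmat = np.column_stack([np.ones(T - 1), zlag])
coef, *_ = np.linalg.lstsq(Xmat, dc, rcond=None)
slope = coef[1]
```

Hall's actual test uses postwar U.S. consumption data and several lags; the snippet only demonstrates the logic of the orthogonality restriction.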

2,957 citations


Journal ArticleDOI
TL;DR: Ten percent dextran sulfate accelerates the rate of hybridization of randomly cleaved double-stranded DNA probes to immobilized nucleic acids by as much as 100-fold, without increasing the background significantly.
Abstract: We describe a technique for transferring electrophoretically separated bands of double-stranded DNA from agarose gels to diazobenzyloxymethyl-paper. Controlled cleavage of the DNA in situ by sequential treatment with dilute acid, which causes partial depurination, and dilute alkali, which causes cleavage and separation of the strands, allows the DNA to leave the gel rapidly and completely, with an efficiency independent of its size. Covalent attachment of DNA to paper prevents losses during subsequent hybridization and washing steps and allows a single paper to be reused many times. Ten percent dextran sulfate, originally found to accelerate DNA hybridization in solution by about 10-fold [J.G. Wetmur (1975) Biopolymers 14, 2517-2524], accelerates the rate of hybridization of randomly cleaved double-stranded DNA probes to immobilized nucleic acids by as much as 100-fold, without increasing the background significantly.

2,949 citations


Journal ArticleDOI
TL;DR: In this paper, the authors derived a single-beta asset pricing model in a multi-good, continuous-time model with uncertain consumption-goods prices and uncertain investment opportunities.

2,667 citations


Journal ArticleDOI
TL;DR: This paper used hybridoma monoclonal antibodies obtained after immunization of mice with rat cells to study rat cell-surface antigens present on subpopulations of rat lymphocytes.
Abstract: Xenogeneic immunizations have the advantage of detecting a wide range of antigenic determinants because many commonly occurring proteins have diverged significantly during the course of evolution and are thus antigenic in other species. The broadness of xenogeneic responses, however, means that the antisera they produce are usually complex and require extensive absorptions to make them specific for a single antigen. This problem has now been overcome by generating hybridomas producing monoclonal antibodies (Kohler & Milstein 1975). These permit dissection of the xenogeneic response so that large amounts of individual antibodies can be obtained, each of which recognizes only one of the determinants recognized by a broadly reactive conventional antiserum. Williams et al. (1977) used hybridoma monoclonal antibodies obtained after immunizations of mice with rat cells to study rat cell-surface antigens present on subpopulations of rat lymphocytes, i.e., differentiation antigens. Springer et al. (1978a) and Stern et al. (1978) used a similar approach to study mouse lymphocyte antigens. They prepared monoclonal antibodies by immunizing rats with mouse lymphocytes and showed that these monoclonals recognized previously undetected mouse cell surface determinants including a glycoprotein antigen that appears to be specific for macrophages (Springer et al. 1978b). Trowbridge (1978) also used rat anti-mouse immunizations to generate a monoclonal antibody against the non-polymorphic lymphocyte surface antigen T200.

Journal ArticleDOI
29 Mar 1979-Nature
TL;DR: The nucleotide sequence of a 1,091-base pair cloned cDNA insert encoding bovine corticotropin-β-lipotropin precursor mRNA indicates that the precursor protein consists of repetitive units and includes a third melanotropin sequence in its cryptic portion.
Abstract: The nucleotide sequence of a 1,091-base pair cloned cDNA insert encoding bovine corticotropin-beta-lipotropin precursor mRNA is reported. The corresponding amino acid sequence indicates that the precursor protein consists of repetitive units and includes a third melanotropin sequence in its cryptic portion. Pairs of lysine and arginine residues separate the component peptides of the precursor.

Journal ArticleDOI
16 Mar 1979-Science
TL;DR: It is suggested that two-thirds of the heat lost from new oceanic lithosphere at the Galápagos Rift in the first million years may be vented from thermal springs, predominantly along the axial ridge within the rift valley.
Abstract: The submarine hydrothermal activity on and near the Galápagos Rift has been explored with the aid of the deep submersible Alvin. Analyses of water samples from hydrothermal vents reveal that hydrothermal activity provides significant or dominant sources and sinks for several components of seawater; studies of conductive and convective heat transfer suggest that two-thirds of the heat lost from new oceanic lithosphere at the Galápagos Rift in the first million years may be vented from thermal springs, predominantly along the axial ridge within the rift valley. The vent areas are populated by animal communities. They appear to utilize chemosynthesis by sulfur-oxidizing bacteria to derive their entire energy supply from reactions between the seawater and the rocks at high temperatures, rather than photosynthesis.

Journal ArticleDOI
TL;DR: This paper found that people recall script actions in their familiar order, and that a scrambled text that presented some script actions out of order tended to be recalled in canonical order, while goal-relevant deviations from a script were remembered better than script actions.

Journal ArticleDOI
TL;DR: These relatively simple bioassays can be conducted in most research laboratories without the need for sophisticated equipment and are demonstrated by an analysis of the BMP and ATA of processed samples of peat.

Journal ArticleDOI
TL;DR: In this article, a miniature gas analysis system based on the principles of gas chromatography (GC) has been built in silicon using photolithography and chemical etching techniques, which allows size reductions of nearly three orders of magnitude compared to conventional laboratory instruments.
Abstract: A miniature gas analysis system has been built based on the principles of gas chromatography (GC). The major components are fabricated in silicon using photolithography and chemical etching techniques, which allows size reductions of nearly three orders of magnitude compared to conventional laboratory instruments. The chromatography system consists of a sample injection valve and a 1.5-m-long separating capillary column, which are fabricated on a substrate silicon wafer. The output thermal conductivity detector is separately batch fabricated and integrably mounted on the substrate wafer. The theory of gas chromatography has been used to optimize the performance of the sensor so that separations of gaseous hydrocarbon mixtures are performed in less than 10 s. The system is expected to find application in the areas of portable ambient air quality monitors, implanted biological experiments, and planetary probes.

Journal ArticleDOI
TL;DR: In this article, the existence of fundamental scalar fields constitutes a serious flaw of the Weinberg-Salam theory and a possible scheme without such fields is described, where the symmetry breaking is induced by a new strongly interacting sector whose natural scale is of the order of a few TeV.
Abstract: We argue that the existence of fundamental scalar fields constitutes a serious flaw of the Weinberg-Salam theory. A possible scheme without such fields is described. The symmetry breaking is induced by a new strongly interacting sector whose natural scale is of the order of a few TeV.

Journal ArticleDOI
TL;DR: In this article, the authors derived an after-tax version of the Capital Asset Pricing Model, which accounts for a progressive tax scheme and for wealth- and income-related constraints on borrowing, and showed that before-tax expected rates of return are linearly related to systematic risk and to dividend yield.

Journal ArticleDOI
TL;DR: In this paper, a new equation was derived for large amplitude forced radial oscillations of a bubble in an incident sound field, including the effects of acoustic radiation, as in Keller and Kolodner's equation.
Abstract: A new equation is derived for large amplitude forced radial oscillations of a bubble in an incident sound field. It includes the effects of acoustic radiation, as in Keller and Kolodner’s equation, and the effects of viscosity and surface tension, as in the modified Rayleigh equation due to Plesset, Noltingk and Neppiras, and Poritsky. The free and forced periodic solutions are computed numerically. For large bubbles, such as underwater explosion bubbles, the free oscillations agree with those obtained by Keller and Kolodner. For small bubbles, such as cavitation bubbles, with small or intermediate forcing amplitudes, the results agree with those calculated by Lauterborn from the modified Rayleigh equation of Plesset et al. For large forcing amplitudes that equation yielded unsatisfactory results whereas the new equation yields quite satisfactory ones.
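For orientation, free radial oscillations of a small bubble can be integrated numerically from the modified Rayleigh (Rayleigh–Plesset) equation referred to above; this sketch uses the incompressible form with illustrative water parameters, not the paper's new equation with acoustic radiation:

```python
# Free oscillation of a small gas bubble via the modified Rayleigh equation:
#   R R'' + (3/2) R'^2 = (1/rho) [ p_g0 (R0/R)^(3*gamma) - p_inf - 2*sigma/R - 4*mu*R'/R ]
# Parameters and initial conditions are made up for demonstration.
rho, sigma, mu, gamma = 998.0, 0.0725, 1.0e-3, 1.4   # water-like values, SI units
p_inf, R0 = 101325.0, 10e-6                           # ambient pressure, 10-micron bubble
p_g0 = p_inf + 2 * sigma / R0                         # equilibrium gas pressure

def accel(R, Rdot):
    p_gas = p_g0 * (R0 / R) ** (3 * gamma)
    return ((p_gas - p_inf - 2 * sigma / R - 4 * mu * Rdot / R) / rho
            - 1.5 * Rdot ** 2) / R

# Classical RK4 on the state (R, Rdot), starting from a slightly compressed bubble.
R, Rdot, dt = 0.9 * R0, 0.0, 1e-9
radii = []
for _ in range(20000):
    k1 = (Rdot, accel(R, Rdot))
    k2 = (Rdot + 0.5*dt*k1[1], accel(R + 0.5*dt*k1[0], Rdot + 0.5*dt*k1[1]))
    k3 = (Rdot + 0.5*dt*k2[1], accel(R + 0.5*dt*k2[0], Rdot + 0.5*dt*k2[1]))
    k4 = (Rdot + dt*k3[1], accel(R + dt*k3[0], Rdot + dt*k3[1]))
    R += dt * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]) / 6
    Rdot += dt * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]) / 6
    radii.append(R)
```

The paper's equation adds acoustic radiation damping and an incident sound field; the skeleton of a numerical solution (a stiff second-order ODE in R) is the same.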

Journal ArticleDOI
TL;DR: The simplifier finds a normal form for any expression formed from individual variables and the supported functions and predicates; if the expression is a theorem, it is simplified to the constant true, so the simplifier can be used as a decision procedure for the quantifier-free theory containing these functions and predicates.
Abstract: A method for combining decision procedures for several theories into a single decision procedure for their combination is described, and a simplifier based on this method is discussed. The simplifier finds a normal form for any expression formed from individual variables, the usual Boolean connectives, the equality predicate =, the conditional function if-then-else, the integers, the arithmetic functions and predicates +, -, and ≤, the Lisp functions and predicates car, cdr, cons, and atom, the functions store and select for storing into and selecting from arrays, and uninterpreted function symbols. If the expression is a theorem it is simplified to the constant true, so the simplifier can be used as a decision procedure for the quantifier-free theory containing these functions and predicates. The simplifier is currently used in the Stanford Pascal Verifier.
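The flavor of such a normal-form simplifier can be conveyed by a toy bottom-up rewriter (ours, and vastly simpler than the combined decision procedure described in the paper): it folds constant arithmetic, reduces trivial equalities, and resolves if-then-else with a known condition.

```python
def simplify(e):
    """Toy bottom-up simplifier for expressions written as tuples:
    ('+', a, b), ('=', a, b), ('ite', c, a, b). Constants are ints or True/False;
    anything else (e.g. a string) is an uninterpreted variable."""
    if not isinstance(e, tuple):
        return e
    op, *args = e
    args = [simplify(a) for a in args]          # simplify subterms first
    if op == '+' and all(isinstance(a, int) for a in args):
        return sum(args)                         # constant folding
    if op == '=' and args[0] == args[1]:
        return True                              # x = x  ->  true
    if op == 'ite' and isinstance(args[0], bool):
        return args[1] if args[0] else args[2]   # condition known: pick a branch
    return (op, *args)                           # otherwise keep the normal form
```

A real combination procedure (equality propagation between theories, congruence closure, arithmetic decision) goes far beyond this, but the "theorem simplifies to true" behavior is visible even here: `simplify(('=', ('+', 1, 1), 2))` yields `True`.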

Journal ArticleDOI
TL;DR: The procedure, which involves polyethylene glycol-induced DNA uptake by protoplasts and subsequent regeneration of the bacterial cell wall, yields up to 80% transformants with an efficiency of 4×10⁷ transformants per μg of supercoiled DNA.
Abstract: A highly efficient method for transformation of Bacillus subtilis by plasmid DNA is reported. The procedure, which involves polyethylene glycol-induced DNA uptake by protoplasts and subsequent regeneration of the bacterial cell wall, yields up to 80% transformants with an efficiency of 4×10⁷ transformants per μg of supercoiled DNA. Plasmids constructed by in vitro ligation of endonuclease-generated fragments of linear plasmid DNA can also transform PEG-treated protoplasts, but at a lower frequency.

Journal ArticleDOI
TL;DR: A simple constructive algorithm is given for evaluating formulas having two literals per clause; it runs in linear time on a random-access machine.
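A standard linear-time formulation builds the implication graph (each clause a ∨ b contributes edges ¬a → b and ¬b → a) and declares the formula unsatisfiable exactly when some variable shares a strongly connected component with its negation. The sketch below uses Kosaraju's SCC algorithm and is an illustration in that spirit, not necessarily the paper's exact procedure:

```python
def two_sat(n, clauses):
    """Satisfiability of a 2-CNF over variables 1..n. Clauses are pairs of
    nonzero ints; a negative int means the negated variable. Returns True/False."""
    N = 2 * n
    idx = lambda lit: 2 * (abs(lit) - 1) + (lit < 0)   # literal -> node index
    g = [[] for _ in range(N)]    # implication graph
    gr = [[] for _ in range(N)]   # reverse graph
    for a, b in clauses:          # (a or b)  ==>  !a -> b  and  !b -> a
        g[idx(-a)].append(idx(b)); gr[idx(b)].append(idx(-a))
        g[idx(-b)].append(idx(a)); gr[idx(a)].append(idx(-b))
    # Kosaraju pass 1: iterative DFS to get nodes in order of finish time.
    seen, order = [False] * N, []
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            v, i = stack.pop()
            if i < len(g[v]):
                stack.append((v, i + 1))
                w = g[v][i]
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, 0))
            else:
                order.append(v)
    # Pass 2: label SCCs on the reverse graph in reverse finish order.
    comp, c = [-1] * N, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            v = stack.pop()
            for w in gr[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1
    # Satisfiable iff no variable is in the same SCC as its negation.
    return all(comp[2 * v] != comp[2 * v + 1] for v in range(n))
```

Both DFS passes touch each edge a constant number of times, so the whole test is linear in the size of the formula; a satisfying assignment can also be read off the SCC order.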

Journal ArticleDOI
01 Nov 1979-Nature
TL;DR: A yeast DNA sequence that behaves as a chromosomal replicator, ars1 (autonomously replicating sequence), has been isolated and allows autonomous replication of all co-linear DNA.
Abstract: A yeast DNA sequence that behaves as a chromosomal replicator, ars1 (autonomously replicating sequence), has been isolated. On transformation, ars1 allows autonomous replication of all co-linear DNA. The replicator can integrate into other replication units and can function in multimeric form. The 850-base pair ars1 element has no detectable homology to other yeast sequences. Such replicator-containing plasmids can be used for the isolation of DNA sequences in yeast cells as well as for the study of chromosomal DNA replication.

Journal ArticleDOI
TL;DR: In this article, the eigenvalues of the evolution equations appear as exponents in anomalous logarithmic corrections to the nominal power law of form factors determined by dimensional counting.

Posted Content
TL;DR: The authors analyzes compensation schemes which pay according to an individual's ordinal rank in an organization rather than his output level and shows that wages based upon rank induce the same efficient allocation of resources as an incentive reward scheme based on individual output levels.
Abstract: This paper analyzes compensation schemes which pay according to an individual's ordinal rank in an organization rather than his output level. When workers are risk neutral, it is shown that wages based upon rank induce the same efficient allocation of resources as an incentive reward scheme based on individual output levels. Under some circumstances, risk-averse workers actually prefer to be paid on the basis of rank. In addition, if workers are heterogeneous in ability, low-quality workers attempt to contaminate high-quality firms, resulting in adverse selection. However, if ability is known in advance, a competitive handicapping structure exists which allows all workers to compete efficiently in the same organization.

Book ChapterDOI
TL;DR: A survey of the literature on person perception and social cognition emerging from other laboratories can be found in this article, which discusses in detail the purpose, goals, and functions of person categorization, the nature of categories at different levels of abstraction, and the determination of prototypicality.
Abstract: Publisher Summary The chapter provides a brief glimpse of the various theoretical and empirical approaches taken to study person categories and categorization. The chapter provides a comprehensive and representative survey of the literature on person perception and social cognition emerging from other laboratories. Interest in the issues of category accessibility has been renewed recently as cognitive-social psychologists attempt to understand the person categorization process. The chapter discusses the nature of categories at different levels of abstraction. The prototype approach, prototypicality rules (the full view and the restricted view), and the path from prototype to social behavior are also discussed. Knowledge about person prototypes not only makes information processing easier, it also helps the perceiver to plan behavior in social interactions. It is easier to process information about characters that fit well with, and are therefore prototypical of, shared beliefs about various personality types. Character prototypicality was manipulated in a free-recall and personality-impression paradigm through variations in the consistency of a character's identification with preexisting beliefs about two personality-type categories: extraversion and introversion. The chapter discusses the purpose, the goals and functions of person categorization, the nature of categories at different levels of abstraction, and the determination of prototypicality in detail.

Journal ArticleDOI
TL;DR: In this article, periodic homogeneous isotropic turbulence is used to simulate the experimental decay of grid turbulence and the computed flow field is then treated as a realization of a physical turbulent flow.
Abstract: A calculation of periodic homogeneous isotropic turbulence is used to simulate the experimental decay of grid turbulence. The calculation is found to match the experiment in a number of important aspects and the computed flow field is then treated as a realization of a physical turbulent flow. From this flow, a calculation is conducted of the large eddy field and the various averages of the subgrid-scale turbulence that occur in the large eddy simulation equations. These quantities are compared with the predictions of the models that are usually applied in large eddy simulation. The results show that the terms which involve the large-scale field are accurately modeled but the subgrid-scale Reynolds stresses are only moderately well modeled. It is also possible to use the method to predict the constants of the models without reference to experiment. Attempts to find improved models have not met with success.

Journal ArticleDOI
01 Dec 1979-Gene
TL;DR: A system of biological containment for recombinant DNA experiments in Saccharomyces cerevisiae (Brewer's/Baker's yeast) is described and has recently been certified at the HV2 level by the National Institutes of Health.

Journal ArticleDOI
TL;DR: A new method is proposed which differs from the Bartels-Stewart algorithm in that A is only reduced to Hessenberg form, and the resulting algorithm is between 30 and 70 percent faster depending upon the dimensions of the matrices A and B.
Abstract: One of the most effective methods for solving the matrix equation AX+XB=C is the Bartels-Stewart algorithm. Key to this technique is the orthogonal reduction of A and B to triangular form using the QR algorithm for eigenvalues. A new method is proposed which differs from the Bartels-Stewart algorithm in that A is only reduced to Hessenberg form. The resulting algorithm is between 30 and 70 percent faster depending upon the dimensions of the matrices A and B . The stability of the new method is demonstrated through a roundoff error analysis and supported by numerical tests. Finally, it is shown how the techniques described can be applied and generalized to other matrix equation problems.
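For small matrices, AX + XB = C can be solved by brute-force vectorization, using vec(AX + XB) = (I ⊗ A + Bᵀ ⊗ I) vec(X) with column-stacked vec. This gives a baseline for checking solutions, not the Hessenberg reduction method the paper proposes (the Kronecker system costs O((nm)³) and is only practical for small problems):

```python
import numpy as np

def solve_sylvester_dense(A, B, C):
    """Solve AX + XB = C via the Kronecker identity
    (I (x) A + B^T (x) I) vec(X) = vec(C), with column-stacked vec."""
    n, m = A.shape[0], B.shape[0]
    K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, C.flatten(order='F'))   # 'F' = column-stacking
    return x.reshape((n, m), order='F')

# Illustrative random instance; the diagonal shifts keep the spectra of A and -B
# disjoint, which guarantees a unique solution.
rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4)) + 4 * np.eye(4)
B = rng.normal(size=(3, 3)) + 4 * np.eye(3)
C = rng.normal(size=(4, 3))
X = solve_sylvester_dense(A, B, C)
residual = np.linalg.norm(A @ X + X @ B - C)
```

The Bartels-Stewart and Hessenberg-Schur algorithms reach the same X in O(n³ + m³) by reducing A and B first; the residual check above applies to any of these methods.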

Journal ArticleDOI
W. E. Spicer, P. W. Chye, P. R. Skeath, C. Y. Su, I. Lindau
TL;DR: In this article, the Fermi level is shown to stabilize after a small fraction of a monolayer of either metal or oxygen atoms has been placed on the surface of the semiconductor.
Abstract: For n- and p-doped III-V compounds, Fermi-level pinning and accompanying phenomena of the (110) cleavage surface have been studied carefully using photoemission at hν ≲ 300 eV (so that core as well as valence band levels could be studied). Both the clean surfaces and the changes produced, as metals or oxygen are added to those surfaces in submonolayer quantities, have been examined. It is found that, in general, the Fermi level stabilizes after a small fraction of a monolayer of either metal or oxygen atoms has been placed on the surface. Most strikingly, the Fermi-level pinning positions produced on a given semiconductor by metals and by oxygen are similar. However, there is a strong difference in these pinning positions depending on the semiconductor: the pinning position is near (1) the conduction band maximum (CBM) for InP, (2) midgap for GaAs, and (3) the valence band maximum (VBM) for GaSb. The similarity in the pinning position on a given semiconductor produced by both metals and oxygen suggests that the states responsible for the pinning result from interaction between the adatoms and the semiconductor. Support for formation of defect levels in the semiconductor at or near the surface is found in the appearance of semiconductor atoms in the metal and in disorder in the valence band with a few percent of oxygen. Based on the available information on Fermi energy pinning, a model is developed for each semiconductor with two different electronic levels which are produced by removal of anions or cations from their normal positions in the surface region of the semiconductors. The pinning levels have the following locations, with respect to the VBM: GaAs, 0.75 and 0.5 eV; InP, 0.9 and 1.2 eV (all levels ±0.1 eV).