
Showing papers by "Stanford University" published in 1992


Book
John R. Koza1
01 Jan 1992
TL;DR: This book discusses the evolution of architecture, primitive functions, terminals, sufficiency, and closure, and the role of representation and the lens effect in genetic programming.
Abstract: Background on genetic algorithms, LISP, and genetic programming; hierarchical problem-solving; introduction to automatically defined functions and the two-boxes problem; problems that straddle the breakeven point for computational effort; Boolean parity functions; determining the architecture of the program; the lawnmower problem; the bumblebee problem; the increasing benefits of ADFs as problems are scaled up; finding an impulse response function; the artificial ant on the San Mateo trail; the obstacle-avoiding robot; the minesweeper problem; automatic discovery of detectors for letter recognition; flushes and four-of-a-kinds in a pinochle deck; introduction to biochemistry and molecular biology; prediction of transmembrane domains in proteins; prediction of omega loops in proteins; a lookahead version of the transmembrane problem; evolutionary selection of the architecture of the program; evolution of primitives and sufficiency; evolutionary selection of terminals; evolution of closure; simultaneous evolution of architecture, primitive functions, terminals, sufficiency, and closure; the role of representation and the lens effect. Appendices: list of special symbols; list of special functions; list of type fonts; default parameters; computer implementation; annotated bibliography of genetic programming; electronic mailing list and public repository.

13,487 citations


Journal ArticleDOI
TL;DR: Cumulative prospect theory as discussed by the authors applies to uncertain as well as to risky prospects with any number of outcomes, and it allows different weighting functions for gains and for losses, and two principles, diminishing sensitivity and loss aversion, are invoked to explain the characteristic curvature of the value function and the weighting function.
Abstract: We develop a new version of prospect theory that employs cumulative rather than separable decision weights and extends the theory in several respects. This version, called cumulative prospect theory, applies to uncertain as well as to risky prospects with any number of outcomes, and it allows different weighting functions for gains and for losses. Two principles, diminishing sensitivity and loss aversion, are invoked to explain the characteristic curvature of the value function and the weighting functions. A review of the experimental evidence and the results of a new experiment confirm a distinctive fourfold pattern of risk attitudes: risk aversion for gains and risk seeking for losses of high probability; risk seeking for gains and risk aversion for losses of low probability. Expected utility theory reigned for several decades as the dominant normative and descriptive model of decision making under uncertainty, but it has come under serious question in recent years. There is now general agreement that the theory does not provide an adequate description of individual choice: a substantial body of evidence shows that decision makers systematically violate its basic tenets. Many alternative models have been proposed in response to this empirical challenge (for reviews, see Camerer, 1989; Fishburn, 1988; Machina, 1987). Some time ago we presented a model of choice, called prospect theory, which explained the major violations of expected utility theory in choices between risky prospects with a small number of outcomes (Kahneman and Tversky, 1979; Tversky and Kahneman, 1986). The key elements of this theory are 1) a value function that is concave for gains, convex for losses, and steeper for losses than for gains, and 2) a nonlinear transformation of the probability scale, which overweights small probabilities and underweights moderate and high probabilities.
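
As a pointer to the functional forms discussed above, here is a minimal sketch of the value function and probability weighting function in the parametric forms commonly associated with this paper; the commonly cited parameter estimates (roughly alpha, beta near 0.88, lambda near 2.25, gamma near 0.61) should be treated here as assumptions rather than quoted values.

```latex
% Value function: concave for gains, convex for losses, steeper for losses
% (diminishing sensitivity and loss aversion), with parameters alpha, beta, lambda.
v(x) =
  \begin{cases}
    x^{\alpha}             & x \ge 0 \\
    -\lambda\,(-x)^{\beta} & x < 0
  \end{cases}
\qquad
% Probability weighting function (shown for gains): overweights small
% probabilities and underweights moderate-to-high ones.
w^{+}(p) = \frac{p^{\gamma}}{\left(p^{\gamma} + (1-p)^{\gamma}\right)^{1/\gamma}}
```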

13,433 citations


Journal ArticleDOI
TL;DR: In this paper, a variable-based definition of virtual reality is proposed, which can be used to classify virtual reality in relation to other media, such as TV, movies, etc.
Abstract: Virtual reality (VR) is typically defined in terms of technological hardware. This paper attempts to cast a new, variable-based definition of virtual reality that can be used to classify virtual reality in relation to other media. The definition of virtual reality is based on concepts of "presence" and "telepresence," which refer to the sense of being in an environment, generated by natural or mediated means, respectively. Two technological dimensions that contribute to telepresence, vividness and interactivity, are discussed. A variety of media are classified according to these dimensions. Suggestions are made for the application of the new definition of virtual reality within the field of communication research.

4,051 citations


Journal ArticleDOI
TL;DR: This paper presents a Bayesian method for constructing probabilistic networks from databases, focusing on constructing Bayesian belief networks, and extends the basic method to handle missing data and hidden variables.
Abstract: This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
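
To make the idea of scoring a candidate structure against a database of cases concrete, here is a minimal sketch of a count-based Bayesian score for one discrete node given a proposed parent set, assuming uniform priors and complete data. This is the general form used in this line of work, not necessarily the paper's exact metric, and the function and variable names are illustrative only.

```python
from math import lgamma
from collections import defaultdict

def log_bayesian_score(data, child, parents, arities):
    """Log marginal likelihood of one discrete node given a parent set,
    assuming uniform Dirichlet priors and complete data (a sketch of the
    classic count-based score; names here are hypothetical)."""
    r = arities[child]                         # number of states of the child variable
    counts = defaultdict(lambda: defaultdict(int))
    for case in data:                          # each case is a dict: variable -> value
        pa_config = tuple(case[p] for p in parents)
        counts[pa_config][case[child]] += 1

    score = 0.0
    for pa_config, child_counts in counts.items():
        n_ij = sum(child_counts.values())
        score += lgamma(r) - lgamma(n_ij + r)  # log (r-1)! / (N_ij + r - 1)!
        for n_ijk in child_counts.values():
            score += lgamma(n_ijk + 1)         # log N_ijk!
    return score

# Illustrative use: score "Disease" with parent set {"Exposure"} on a toy database.
data = [{"Exposure": 1, "Disease": 1}, {"Exposure": 0, "Disease": 0},
        {"Exposure": 1, "Disease": 1}, {"Exposure": 0, "Disease": 1}]
print(log_bayesian_score(data, "Disease", ["Exposure"], {"Disease": 2, "Exposure": 2}))
```

Comparing such scores across alternative parent sets (or averaging inferences over several high-scoring networks, as the abstract describes) is what a structure-learning algorithm of this kind does in practice.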

3,971 citations


Journal ArticleDOI
TL;DR: The assumed isomorphism of space, place, and culture results in significant problems; as this paper argues, differences between cultures come about not from their isolation from each other, but because of their connections with each other.

Abstract: This assumed isomorphism of space, place, and culture results in some significant problems. First, there is the issue of those who inhabit the border, what Gloria Anzaldua calls the "narrow strip along steep edges" of national boundaries. A focus on people who live in the borders between dominant societies or nations (where "borders" is also a metaphor for people who identify, culturally, with more than one group) makes clear the fact that differences between cultures come about not because of their isolation from each other, but because of their connections with each other. Such a conclusion also suggests that along with difference come hierarchies of power. Culture is not only a concept that expresses difference between peoples, but also a concept that masks the uneven power relations between peoples, and these uneven power relations can only exist through connection, rather than isolation.

2,870 citations


Book ChapterDOI
01 Jan 1992
TL;DR: In this paper, the authors consider the problem of finding the best unbiased estimator of a linear function of the means of a set of observed random variables, and recall that for large samples the maximum likelihood estimator approximately minimizes the mean squared error when compared with other reasonable estimators.
Abstract: It has long been customary to measure the adequacy of an estimator by the smallness of its mean squared error. The least squares estimators were studied by Gauss and by other authors later in the nineteenth century. A proof that the best unbiased estimator of a linear function of the means of a set of observed random variables is the least squares estimator was given by Markov [12], a modified version of whose proof is given by David and Neyman [4]. A slightly more general theorem is given by Aitken [1]. Fisher [5] indicated that for large samples the maximum likelihood estimator approximately minimizes the mean squared error when compared with other reasonable estimators. This paper will be concerned with optimum properties or failure of optimum properties of the natural estimator in certain special problems with the risk usually measured by the mean squared error or, in the case of several parameters, by a quadratic function of the estimators. We shall first mention some recent papers on this subject and then give some results, mostly unpublished, in greater detail.
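
To make the mean-squared-error criterion discussed above concrete, here is a small Monte Carlo sketch, purely illustrative and not taken from the paper, that estimates the MSE of the natural (least squares / maximum likelihood) estimator of a normal mean and compares it with a simple shrinkage alternative; all names and parameter choices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, trials = 2.0, 1.0, 10, 20_000

# The natural estimator of a normal mean is the sample mean; compare its MSE
# with a simple shrinkage estimator to see the bias-variance trade-off.
mse_mean, mse_shrunk = 0.0, 0.0
for _ in range(trials):
    x = rng.normal(mu, sigma, n)
    xbar = x.mean()
    shrunk = 0.9 * xbar            # illustrative shrinkage toward zero
    mse_mean += (xbar - mu) ** 2
    mse_shrunk += (shrunk - mu) ** 2

print("MSE of sample mean:  ", mse_mean / trials)    # approx sigma^2 / n = 0.1
print("MSE of shrunken mean:", mse_shrunk / trials)  # bias added, variance reduced
```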

2,651 citations


Book
01 Jan 1992
TL;DR: Temporal logic is a formal tool/language which yields excellent results in specifying reactive systems, and this volume (the first of two) offers an introduction to temporal logic and to the computational model for reactive programs which has been developed by the authors.
Abstract: Reactive systems are computing systems which are interactive, such as real-time systems, operating systems, concurrent systems and control systems. These are among the most difficult computing systems to program. Temporal logic is a formal tool/language which yields excellent results in specifying reactive systems, and this volume (the first of two), offers an introduction to temporal logic and to the computational model for reactive programs which has been developed by the authors.

2,650 citations


Journal ArticleDOI
TL;DR: A mediator is a software module that exploits encoded knowledge about certain sets or subsets of data to create information for a higher layer of applications; as discussed by the authors, mediation simplifies, abstracts, reduces, merges, and explains data.

Abstract: For single databases, primary hindrances for end-user access are the volume of data that is becoming available, the lack of abstraction, and the need to understand the representation of the data. When information is combined from multiple databases, the major concern is the mismatch encountered in information representation and structure. Intelligent and active use of information requires a class of software modules that mediate between the workstation applications and the databases. It is shown that mediation simplifies, abstracts, reduces, merges, and explains data. A mediator is a software module that exploits encoded knowledge about certain sets or subsets of data to create information for a higher layer of applications. A model of information processing and information system components is described. The mediator architecture, including mediator interfaces, sharing of mediator modules, distribution of mediators, and triggers for knowledge maintenance, is discussed.
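
Purely as an architectural illustration of the layer described above (the class, methods, and data here are hypothetical, not from the paper), a mediator can be sketched as a module that sits between applications and several databases and applies encoded knowledge to merge and abstract their data:

```python
class Mediator:
    """Illustrative mediator module: exploits encoded knowledge about a set of
    sources to deliver abstracted, merged information to applications."""

    def __init__(self, sources, unit_conversions):
        self.sources = sources                    # name -> callable returning raw records
        self.unit_conversions = unit_conversions  # encoded knowledge about representations

    def query(self, key):
        merged = []
        for name, fetch in self.sources.items():
            for record in fetch(key):
                # Resolve representation mismatches between databases.
                value = record["value"] * self.unit_conversions.get(name, 1.0)
                merged.append(value)
        # Reduce/abstract: report a summary instead of raw rows.
        return {"key": key, "n_sources": len(self.sources),
                "mean_value": sum(merged) / len(merged) if merged else None}

# Hypothetical usage with two mismatched sources (one in meters, one in feet).
db_a = lambda key: [{"value": 2.0}]
db_b = lambda key: [{"value": 6.56}]
m = Mediator({"a": db_a, "b": db_b}, {"b": 0.3048})
print(m.query("river_depth"))
```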

2,441 citations


Journal ArticleDOI
TL;DR: The theory of quasi-phase-matched second-harmonic generation in both the space domain and the wave vector mismatch domain is presented in this paper, where various types of errors in the periodicity of these structures are analyzed to find their effects on the conversion efficiency and on the shape of the tuning curve.
Abstract: The theory of quasi-phase-matched second-harmonic generation is presented in both the space domain and the wave vector mismatch domain. Departures from ideal quasi-phase matching in periodicity, wavelength, angle of propagation, and temperature are examined to determine the tuning properties and acceptance bandwidths for second-harmonic generation in periodic structures. Numerical examples are tabulated for periodically poled lithium niobate. Various types of errors in the periodicity of these structures are then analyzed to find their effects on the conversion efficiency and on the shape of the tuning curve. This analysis is useful for establishing fabrication tolerances for practical quasi-phase-matched devices. A method of designing structures having desired phase-matching tuning curve shapes is also described. The method makes use of varying domain lengths to establish a varying effective nonlinear coefficient along the interaction length.
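
For orientation, a brief sketch of the standard quasi-phase-matching relations assumed throughout this literature follows; treat it as a generic textbook summary rather than the paper's own derivation.

```latex
% Wave-vector mismatch for second-harmonic generation and the m-th order
% quasi-phase-matching condition for a grating of period \Lambda:
\Delta k = k_{2\omega} - 2k_{\omega}
         = \frac{4\pi}{\lambda_{\omega}}\left(n_{2\omega} - n_{\omega}\right),
\qquad
\Delta k - \frac{2\pi m}{\Lambda} = 0
\;\Rightarrow\;
\Lambda_{1} = \frac{\lambda_{\omega}}{2\,(n_{2\omega} - n_{\omega})}.
% First-order QPM replaces the effective nonlinear coefficient d_{eff}
% by d_Q = (2/\pi)\, d_{eff}.
```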

2,137 citations


Journal ArticleDOI
TL;DR: The experimental limits placed on the oblique correction parameters S and T are reviewed, and how the value of S can be estimated for running and walking technicolor theories is discussed.
Abstract: I will first review the experimental limits placed on the oblique correction parameters S and T. Then, I will discuss how the value of S can be estimated for running and walking technicolor theories.

2,020 citations


Journal ArticleDOI
TL;DR: Examination of the impact of authoritative parenting, parental involvement in schooling, and parental encouragement to succeed on adolescent school achievement in an ethnically and socio-economically heterogeneous sample of approximately 6,400 American 14-18-year-olds finds parental involvement is much more likely to promote adolescent school success when it occurs in the context of an authoritative home environment.
Abstract: This article examines the impact of authoritative parenting, parental involvement in schooling, and parental encouragement to succeed on adolescent school achievement in an ethnically and socio-economically heterogeneous sample of approximately 6,400 American 14-18-year-olds. Adolescents reported in 1987 on their parents' general child-rearing practices and on their parents' achievement-specific socialization behaviors. In 1987, and again in 1988, data were collected on several aspects of the adolescents' school performance and school engagement. Authoritative parenting (high acceptance, supervision, and psychological autonomy granting) leads to better adolescent school performance and stronger school engagement. The positive impact of authoritative parenting on adolescent achievement, however, is mediated by the positive effect of authoritativeness on parental involvement in schooling. In addition, nonauthoritativeness attenuates the beneficial impact of parental involvement in schooling on adolescents' achievement. Parental involvement is much more likely to promote adolescent school success when it occurs in the context of an authoritative home environment.

Journal ArticleDOI
TL;DR: Higher education may be the best SES predictor of good health, and the relationship between these SES measures and risk factors was strongest and most consistent for education, showing higher risk associated with lower levels of education.
Abstract: BACKGROUND. Socioeconomic status (SES) is usually measured by determining education, income, occupation, or a composite of these dimensions. Although education is the most commonly used measure of SES in epidemiological studies, no investigators in the United States have conducted an empirical analysis quantifying the relative impact of each separate dimension of SES on risk factors for disease. METHODS. Using data on 2380 participants from the Stanford Five-City Project (85% White, non-Hispanic), we examined the independent contribution of education, income, and occupation to a set of cardiovascular disease risk factors (cigarette smoking, systolic and diastolic blood pressure, and total and high-density lipoprotein cholesterol). RESULTS. The relationship between these SES measures and risk factors was strongest and most consistent for education, showing higher risk associated with lower levels of education. Using a forward selection model that allowed for inclusion of all three SES measures after adjust...

Journal ArticleDOI
TL;DR: The ability of psychophysical observers and single cortical neurons to discriminate weak motion signals in a stochastic visual display is compared and psychophysical decisions in this task are likely to be based upon a relatively small number of neural signals.
Abstract: We compared the ability of psychophysical observers and single cortical neurons to discriminate weak motion signals in a stochastic visual display. All data were obtained from rhesus monkeys trained to perform a direction discrimination task near psychophysical threshold. The conditions for such a comparison were ideal in that both psychophysical and physiological data were obtained in the same animals, on the same sets of trials, and using the same visual display. In addition, the psychophysical task was tailored in each experiment to the physiological properties of the neuron under study; the visual display was matched to each neuron's preference for size, speed, and direction of motion. Under these conditions, the sensitivity of most MT neurons was very similar to the psychophysical sensitivity of the animal observers. In fact, the responses of single neurons typically provided a satisfactory account of both absolute psychophysical threshold and the shape of the psychometric function relating performance to the strength of the motion signal. Thus, psychophysical decisions in our task are likely to be based upon a relatively small number of neural signals. These signals could be carried by a small number of neurons if the responses of the pooled neurons are statistically independent. Alternatively, the signals may be carried by a much larger pool of neurons if their responses are partially intercorrelated.

Journal ArticleDOI
TL;DR: The concept of an object file, a temporary episodic representation within which successive states of an object are linked and integrated, is developed, together with the concept of a reviewing process that is triggered by the appearance of the target and retrieves just one of the previewed items.

Proceedings Article
30 Nov 1992
TL;DR: Of OBS, Optimal Brain Damage, and magnitude-based methods, only OBS deletes the correct weights from a trained XOR network in every case, and thus yields better generalization on test data.
Abstract: We investigate the use of information from all second order derivatives of the error function to perform network pruning (i.e., removing unimportant weights from a trained network) in order to improve generalization, simplify networks, reduce hardware or storage requirements, increase the speed of further training, and in some cases enable rule extraction. Our method, Optimal Brain Surgeon (OBS), is significantly better than magnitude-based methods and Optimal Brain Damage [Le Cun, Denker and Solla, 1990], which often remove the wrong weights. OBS permits the pruning of more weights than other methods (for the same error on the training set), and thus yields better generalization on test data. Crucial to OBS is a recursion relation for calculating the inverse Hessian matrix H^-1 from training data and structural information of the net. OBS permits a 90%, a 76%, and a 62% reduction in weights over backpropagation with weight decay on three benchmark MONK's problems [Thrun et al., 1991]. Of OBS, Optimal Brain Damage, and magnitude-based methods, only OBS deletes the correct weights from a trained XOR network in every case. Finally, whereas Sejnowski and Rosenberg [1987] used 18,000 weights in their NETtalk network, we used OBS to prune a network to just 1560 weights, yielding better generalization.
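
A minimal sketch of a single OBS pruning step, assuming the inverse Hessian H^-1 is already available (e.g., via the recursion mentioned in the abstract): the saliency of weight q is L_q = w_q^2 / (2 [H^-1]_qq), and after deleting the least salient weight the remaining weights are adjusted by delta_w = -(w_q / [H^-1]_qq) H^-1 e_q. Variable names and the toy data below are illustrative.

```python
import numpy as np

def obs_prune_one(w, H_inv):
    """One Optimal Brain Surgeon step: delete the weight with the smallest
    saliency and adjust all remaining weights using the inverse Hessian."""
    saliency = w ** 2 / (2.0 * np.diag(H_inv))     # L_q = w_q^2 / (2 [H^-1]_qq)
    q = int(np.argmin(saliency))                   # weight to eliminate
    delta_w = -(w[q] / H_inv[q, q]) * H_inv[:, q]  # second-order update of all weights
    w_new = w + delta_w                            # drives w_new[q] to zero
    return w_new, q, saliency[q]

# Hypothetical example: a 3-weight "network" with a toy inverse Hessian.
w = np.array([0.8, -0.05, 1.2])
H_inv = np.array([[0.5, 0.1, 0.0],
                  [0.1, 2.0, 0.0],
                  [0.0, 0.0, 0.4]])
print(obs_prune_one(w, H_inv))
```

Repeating this step (and periodically retraining) until the training error rises is the pruning loop the abstract describes; magnitude-based pruning corresponds to dropping the second-order adjustment entirely.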

Journal ArticleDOI
25 Jun 1992-Nature
TL;DR: It is reported here that overexpression of calcineurin in Jurkat cells renders them more resistant to the effects of CsA and FK506 and augments both NFAT- and NFIL2A-dependent transcription.
Abstract: The immunosuppressive drugs cyclosporin A (CsA) and FK506 both interfere with a Ca2+-sensitive T-cell signal transduction pathway [1-4], thereby preventing the activation of specific transcription factors (such as NF-AT and NF-IL2A) [1,5-7] involved in lymphokine gene expression. CsA and FK506 seem to act by interaction with their cognate intracellular receptors [8-10], cyclophilin and FKBP, respectively (see ref. 11 for review). The Ca2+/calmodulin-regulated phosphatase calcineurin is a major target of drug-isomerase complexes in vitro [12]. We have therefore tested the hypothesis that this interaction is responsible for the in vivo effects of CsA/FK506. We report here that overexpression of calcineurin in Jurkat cells renders them more resistant to the effects of CsA and FK506 and augments both NF-AT- and NF-IL2A-dependent transcription. These results identify calcineurin as a key enzyme in the T-cell signal transduction cascade and provide biological evidence to support the notion that the interaction of drug-isomerase complexes with calcineurin underlies the molecular basis of CsA/FK506-mediated immunosuppression.

Journal ArticleDOI
TL;DR: In this paper, the authors define asset allocation as the allocation of an investor's portfolio across a number of "major" asset classes, and propose that an effective way to measure exposures, evaluate fund managers, and compare the overall asset mix with benchmarks is to use an asset class factor model.
Abstract: It is widely agreed that asset allocation accounts for a large part of the variability in the return on a typical investor's portfolio. This is especially true if the portfolio is invested in multiple funds, each including a number of securities. Asset allocation is generally defined as the allocation of an investor's portfolio across a number of "major" asset classes. Clearly such a generalization cannot be made operational without defining such classes. Once a set of asset classes has been defined, it is important to determine the exposures of each component of an investor's overall portfolio to movements in their returns. Such information can be aggregated to determine the investor's overall effective asset mix. If it does not conform to the desired mix, appropriate alterations can then be made. Once a procedure for measuring exposure to variations in returns of major asset classes is in place, it is possible to determine how effectively individual fund managers have performed their functions and the extent (if any) to which value has been added through active management. Finally, the effectiveness of the investor's overall asset allocation can be compared with that of one or more benchmark asset mixes. An effective way to accomplish all these tasks is to use an asset class factor model.
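
A hedged sketch of the returns-based asset class factor model described above: the fund's return series is fit to asset class return series subject to the usual style-analysis constraints (exposures non-negative and summing to one). The optimizer call and the synthetic data are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from scipy.optimize import minimize

def style_exposures(fund_returns, class_returns):
    """Estimate asset class exposures b (b >= 0, sum(b) = 1) that minimize the
    variance of the tracking error e = fund_returns - class_returns @ b."""
    k = class_returns.shape[1]
    objective = lambda b: np.var(fund_returns - class_returns @ b)
    constraints = [{"type": "eq", "fun": lambda b: b.sum() - 1.0}]
    bounds = [(0.0, 1.0)] * k
    result = minimize(objective, np.full(k, 1.0 / k),
                      bounds=bounds, constraints=constraints)
    return result.x

# Illustrative data: 60 months of returns for one fund and three asset classes.
rng = np.random.default_rng(1)
classes = rng.normal(0.005, 0.03, size=(60, 3))
fund = classes @ np.array([0.6, 0.3, 0.1]) + rng.normal(0, 0.002, 60)
print(style_exposures(fund, classes))   # approximate effective asset mix [0.6, 0.3, 0.1]
```

Aggregating such exposure vectors across the investor's funds gives the overall effective asset mix, which can then be compared with the desired mix or with a benchmark.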

Journal ArticleDOI
TL;DR: Consumer choice is often influenced by the context, defined by the set of alternatives under consideration; in this paper, two hypotheses about the effect of context on choice are proposed.
Abstract: Consumer choice is often influenced by the context, defined by the set of alternatives under consideration. Two hypotheses about the effect of context on choice are proposed. The first hypothesis, ...

Journal ArticleDOI
TL;DR: This article reviewed the strategic decision making literature by focusing on the dominant paradigms, i.e., rationality and bounded rationality, politics and power, and garbage can, and concluded that strategic decision makers are boundedly rational, that power wins battles of choice, and that chance matters.
Abstract: This article reviews the strategic decision making literature by focusing on the dominant paradigms–i.e., rationality and bounded rationality, politics and power, and garbage can. We review the theory and key empirical support, and identify emergent debates within each paradigm. We conclude that strategic decision makers are boundedly rational, that power wins battles of choice, and that chance matters. Further, we argue that these paradigms rest on unrealistic assumptions and tired controversies which are no longer very controversial. We conclude with a research agenda that emphasizes a more realistic view of strategic decision makers and decision making, and greater attention to normative implications, especially among profit-seeking firms in global contexts.

Book
01 Jan 1992
TL;DR: In this book, Martin argues that the best way to view organizations is to see them through all three perspectives - integration, differentiation, and fragmentation - each revealing a different kind of truth; the author has done extensive research studying the organizational culture of a large California high technology firm (which is not identified in the book).
Abstract: This is essentially a textbook in organizational culture. But, unlike most textbook authors, Professor Martin is making a contribution to the field in that she focuses on a way of looking at the field that is new. In the past, those who have studied organizational culture have usually done so from one of three perspectives: 1) "Integration" - all members of an organization share a consensus of values and purpose; 2) "Differentiation" - there are frequent conflicts among groups in organizations with limited consensus; 3) "Fragmentation" - there is considerable ambiguity in organizations with consensus coexisting with conflict, and much change among groups. The author argues that the best way to view organizations is to see them through all three perspectives - each revealing a different kind of truth. The author has done extensive research studying the organizational culture of a large California high technology firm (which is not identified in the book). She interviewed many employees at different levels and in different departments, and used surveys to extend the interviews. Her work is like an ethnography in which the researcher's own perspectives and cultural norms have to be accounted for. As a result, the book explores what she learned from her studies and how she learned it.

Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate that certain classical problems associated with the notion of the teacher in supervised learning can be solved by judicious use of learned internal models as components of the adaptive system.

Journal ArticleDOI
TL;DR: Maude, as discussed by the authors, is a programming language whose modules are rewriting logic theories; the language is defined and given denotational and operational semantics, provides a simple unification of concurrent programming with functional and object-oriented programming, and supports high-level declarative programming of concurrent systems.

Journal ArticleDOI
TL;DR: The authors explored two hypotheses derived from socioemotional selectivity theory: (a) Selective reductions in social interaction begin in early adulthood and (b) emotional closeness to significant others increases rather than decreases in adulthood even when rate reductions occur.
Abstract: This investigation explored 2 hypotheses derived from socioemotional selectivity theory: (a) Selective reductions in social interaction begin in early adulthood and (b) emotional closeness to significant others increases rather than decreases in adulthood even when rate reductions occur. Transcribed interviews with 28 women and 22 men from the Child Guidance Study, conducted over 34 years, were reviewed and rated for frequency of interaction, satisfaction with the relationship, and degree of emotional closeness in 6 types of relationships. Interaction frequency with acquaintances and close friends declined from early adulthood on. Interaction frequency with spouses and siblings increased across the same time period and emotional closeness increased throughout adulthood in relationships with relatives and close friends. Findings suggest that individuals begin narrowing their range of social partners long before old age.

Yoav Shoham1
17 Dec 1992
TL;DR: This paper describes features of the agent-oriented programming framework in a little more detail, and summarizes recent results and ongoing AOP-related work.
Abstract: Shoham, Y., Agent-oriented programming, Artificial Intelligence 60 (1993) 51-92. A new computational framework is presented, called agent-oriented programming (AOP), which can be viewed as a specialization of object-oriented programming. The state of an agent consists of components such as beliefs, decisions, capabilities, and obligations; for this reason the state of an agent is called its mental state. The mental state of agents is described formally in an extension of standard epistemic logics: beside temporalizing the knowledge and belief operators, AOP introduces operators for obligation, decision, and capability. Agents are controlled by agent programs, which include primitives for communicating with other agents. In the spirit of speech act theory, each communication primitive is of a certain type: informing, requesting, offering, and so on. This article presents the concept of AOP, discusses the concept of mental state and its formal underpinning, defines a class of agent interpreters, and then describes in detail a specific interpreter that has been implemented.
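
Purely to illustrate the flavor of the framework described above (this is not Shoham's AGENT-0 interpreter; the class, fields, and message types below are hypothetical), an agent's mental state and speech-act-style message handling might be sketched as:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Toy agent whose mental state consists of beliefs and commitments and
    which is controlled by speech-act-style messages (inform, request)."""
    name: str
    beliefs: dict = field(default_factory=dict)       # fact -> believed value
    commitments: list = field(default_factory=list)   # actions the agent is obligated to do

    def receive(self, sender, performative, content):
        if performative == "inform":
            fact, value = content
            self.beliefs[fact] = value                 # update beliefs on being informed
        elif performative == "request":
            if self.capable_of(content):
                self.commitments.append((sender, content))  # take on an obligation
        return self.commitments

    def capable_of(self, action):
        return True  # a real interpreter would consult the agent's declared capabilities

a = Agent("alice")
a.receive("bob", "inform", ("door_open", True))
print(a.receive("bob", "request", "close_door"))
```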

Journal ArticleDOI
10 Jul 1992-Science
TL;DR: Two 35-kilodalton proteins (p35, or syntaxins) were identified that interact with the synaptic vesicle protein p65 (synaptotagmin) and may function in docking synaptic vesicles near calcium channels at presynaptic active zones.
Abstract: Synaptic vesicles store neurotransmitters that are released during calcium-regulated exocytosis. The specificity of neurotransmitter release requires the localization of both synaptic vesicles and calcium channels to the presynaptic active zone. Two 35-kilodalton proteins (p35 or syntaxins) were identified that interact with the synaptic vesicle protein p65 (synaptotagmin). The p35 proteins are expressed only in the nervous system, are 84 percent identical, include carboxyl-terminal membrane anchors, and are concentrated on the plasma membrane at synaptic sites. An antibody to p35 immunoprecipitated solubilized N-type calcium channels. The p35 proteins may function in docking synaptic vesicles near calcium channels at presynaptic active zones.

Proceedings ArticleDOI
24 Oct 1992
TL;DR: The authors show that NP = PCP(log n, 1); as a consequence, MAXSNP-hard problems do not have polynomial-time approximation schemes unless P = NP, and for some ε > 0 the size of the maximal clique in a graph cannot be approximated within a factor of n^ε unless P = NP.
Abstract: The class PCP(f(n), g(n)) consists of all languages L for which there exists a polynomial-time probabilistic oracle machine that uses O(f(n)) random bits, queries O(g(n)) bits of its oracle and behaves as follows: if x is in L then there exists an oracle y such that the machine accepts for all random choices, but if x is not in L then for every oracle y the machine rejects with high probability. Arora and Safra (1992) characterized NP as PCP(log n, (log log n)^O(1)). The authors improve on their result by showing that NP = PCP(log n, 1). The result has the following consequences: (1) MAXSNP-hard problems (e.g. metric TSP, MAX-SAT, MAX-CUT) do not have polynomial-time approximation schemes unless P = NP; and (2) for some ε > 0 the size of the maximal clique in a graph cannot be approximated within a factor of n^ε unless P = NP.

Journal ArticleDOI
TL;DR: The findings indicate that evaluations of a proposed extension when there were intervening extensions differed from evaluations when there were no intervening extensions only when there was a significant disparity between the perceived quality of the intervening extension (as judged by its success or failure) and the perceived quality of the core brand.
Abstract: A laboratory experiment examines factors affecting evaluations of proposed extensions from a core brand that has or has not already been extended into other product categories. Specifically, the pe...

Journal ArticleDOI
30 Oct 1992-Cell
TL;DR: Human XIST cDNAs containing at least eight exons and totaling 17 kb have been isolated and sequenced within the region on the X chromosome known to contain the X inactivation center, suggesting that XIST may function as a structural RNA within the nucleus.

Journal ArticleDOI
TL;DR: The history of research on childhood socialization in the context of the family is traced through the present century; as discussed in this paper, the American Psychological Association's centennial is a propitious occasion for taking stock of psychology's progress in the study of human development and to consider where developmental psychology has been, where it stands, and where it is going.
Abstract: The history of research on childhood socialization in the context of the family is traced through the present century. The 2 major early theories - behaviorism and psychoanalytic theory - are described. These theories declined in mid-century, under the impact of failures to find empirical support. Simple reinforcement theory was seriously weakened by work on developmental psycholinguistics, attachment, modeling, and altruism. The field turned to more domain-specific minitheories. The advent of microanalytic analyses of parent-child interaction focused attention on bidirectional processes. Views about the nature of identification and its role in socialization underwent profound change. The role of "parent as teacher" was reconceptualized (with strong influence from Vygotskian thinking). There has been increasing emphasis on the role of emotions and mutual cognitions in establishing the meaning of parent-child exchanges. The enormous asymmetry in power and competence between adults and children implies that the parent-child relationship must have a unique role in childhood socialization. The American Psychological Association's centennial is a propitious occasion for taking stock of psychology's progress in the study of human development and to consider where developmental psychology has been, where it stands, and where it is going. Attempting to understand the socialization process has been a long-standing enterprise in both social and developmental psychology. When broadly conceived, the outcomes of interest have not changed greatly over time. That is, students of socialization continue to be concerned with the cluster of processes that lead to adults being able to function adequately within the requirements of the social group or groups among whom they live. Therefore the target or outcome behaviors of interest have continued to be some aspect of adequate functioning.

Journal ArticleDOI
TL;DR: The traditional geostatistical tool, the variogram, a tool that is beginning to be used in ecology, is shown to provide an incomplete and misleading summary of spatial pattern when local means and variances change.
Abstract: Geostatistics brings to ecology novel tools for the interpretation of spatial patterns of organisms, of the numerous environmental components with which they interact, and of the joint spatial dependence between organisms and their environment. The purpose of this paper is to use data from the ecological literature as well as from original research to provide a comprehensive and easily understood analysis of geostatistics' manner of modeling and methods. The traditional geostatistical tool, the variogram, a tool that is beginning to be used in ecology, is shown to provide an incomplete and misleading summary of spatial pattern when local means and variances change. Use of the non-ergodic covariance and correlogram provides a more effective description of lag-to-lag spatial dependence because the changing local means and variances are accounted for. Indicator transformations capture the spatial patterns of nominal ecological variables like gene frequencies and the presence/absence of an organism and of subgroups of a population like large or small individuals. Robust variogram measures are shown to be useful in data sets that contain many data outliers. Appropriate removal of outliers reveals latent spatial dependence and patterns. Cross-variograms, cross-covariances, and cross-correlograms define the joint spatial dependence between co-occurring organisms. The results of all of these analyses bring new insights into the spatial relations of organisms in their environment.
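
For readers new to the tools named above, a minimal sketch of an empirical (semi)variogram estimator for irregularly spaced samples follows; the function name, bin choices, and synthetic data are illustrative, and the paper's non-ergodic covariance, correlogram, indicator transforms, and robust estimators are not reproduced here.

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Classical semivariogram estimate: gamma(h) is the mean of 0.5*(z_i - z_j)^2
    over pairs whose separation distance falls in each lag bin."""
    coords, values = np.asarray(coords, float), np.asarray(values, float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)          # count each pair once
    dist, semi = d[iu], sq[iu]
    gamma = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (dist >= lo) & (dist < hi)
        gamma.append(semi[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Illustrative use: 50 random sample locations with a smooth spatial trend plus noise.
rng = np.random.default_rng(2)
xy = rng.uniform(0, 10, size=(50, 2))
z = np.sin(xy[:, 0] / 3.0) + rng.normal(0, 0.1, 50)
print(empirical_variogram(xy, z, np.linspace(0, 5, 6)))
```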