
Showing papers by "Stanford University published in 1975"


Journal ArticleDOI
TL;DR: The multidimensional binary search tree (or k-d tree) as a data structure for storage of information to be retrieved by associative searches is developed and it is shown to be quite efficient in its storage requirements.
Abstract: This paper develops the multidimensional binary search tree (or k-d tree, where k is the dimensionality of the search space) as a data structure for storage of information to be retrieved by associative searches. The k-d tree is defined and examples are given. It is shown to be quite efficient in its storage requirements. A significant advantage of this structure is that a single data structure can handle many types of queries very efficiently. Various utility algorithms are developed; their proven average running times in an n record file are: insertion, O(log n); deletion of the root, O(n^((k-1)/k)); deletion of a random node, O(log n); and optimization (guarantees logarithmic performance of searches), O(n log n). Search algorithms are given for partial match queries with t keys specified [proven maximum running time of O(n^((k-t)/k))] and for nearest neighbor queries [empirically observed average running time of O(log n)]. These performances far surpass the best currently known algorithms for these tasks. An algorithm is presented to handle any general intersection query. The main focus of this paper is theoretical. It is felt, however, that k-d trees could be quite useful in many applications, and examples of potential uses are given.

7,159 citations
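The paper's two core ideas, cyclic discriminator keys and branch-and-bound nearest-neighbor search, are easy to sketch. The following minimal Python rendering is illustrative only: the node layout and the median-split construction are hypothetical choices, not the paper's exact algorithms.

```python
# Hypothetical minimal k-d tree: median-split build plus nearest-neighbor search.

class Node:
    def __init__(self, point, axis, left=None, right=None):
        self.point, self.axis = point, axis
        self.left, self.right = left, right

def build(points, depth=0):
    """Split on coordinates cyclically (axis = depth mod k), at the median."""
    if not points:
        return None
    axis = depth % len(points[0])
    points.sort(key=lambda p: p[axis])
    mid = len(points) // 2
    return Node(points[mid], axis,
                build(points[:mid], depth + 1),
                build(points[mid + 1:], depth + 1))

def nearest(node, target, best=None):
    """Branch-and-bound descent: skip a subtree when its splitting plane
    is farther from the target than the best distance found so far."""
    if node is None:
        return best
    dist = sum((a - b) ** 2 for a, b in zip(node.point, target))
    if best is None or dist < best[0]:
        best = (dist, node.point)
    diff = target[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nearest(near, target, best)
    if diff ** 2 < best[0]:              # plane closer than current best: check far side
        best = nearest(far, target, best)
    return best

tree = build([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(nearest(tree, (9, 2)))             # -> (squared distance, nearest point)
```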


Journal ArticleDOI
24 Mar 1975
TL;DR: It is shown that in treating periodic interference the adaptive noise canceller acts as a notch filter with narrow bandwidth, infinite null, and the capability of tracking the exact frequency of the interference; in this case the canceller behaves as a linear, time-invariant system, with the adaptive filter converging on a dynamic rather than a static solution.
Abstract: This paper describes the concept of adaptive noise cancelling, an alternative method of estimating signals corrupted by additive noise or interference. The method uses a "primary" input containing the corrupted signal and a "reference" input containing noise correlated in some unknown way with the primary noise. The reference input is adaptively filtered and subtracted from the primary input to obtain the signal estimate. Adaptive filtering before subtraction allows the treatment of inputs that are deterministic or stochastic, stationary or time variable. Wiener solutions are developed to describe asymptotic adaptive performance and output signal-to-noise ratio for stationary stochastic inputs, including single and multiple reference inputs. These solutions show that when the reference input is free of signal and certain other conditions are met noise in the primary input can be essentially eliminated without signal distortion. It is further shown that in treating periodic interference the adaptive noise canceller acts as a notch filter with narrow bandwidth, infinite null, and the capability of tracking the exact frequency of the interference; in this case the canceller behaves as a linear, time-invariant system, with the adaptive filter converging on a dynamic rather than a static solution. Experimental results are presented that illustrate the usefulness of the adaptive noise cancelling technique in a variety of practical applications. These applications include the cancelling of various forms of periodic interference in electrocardiography, the cancelling of periodic interference in speech signals, and the cancelling of broad-band interference in the side-lobes of an antenna array. In further experiments it is shown that a sine wave and Gaussian noise can be separated by using a reference input that is a delayed version of the primary input. Suggested applications include the elimination of tape hum or turntable rumble during the playback of recorded broad-band signals and the automatic detection of very-low-level periodic signals masked by broad-band noise.

4,165 citations
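A minimal sketch of the noise-cancelling loop described above, using the Widrow-Hoff LMS update; the signal, the noise path from reference to primary input, the filter length, and the step size mu are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, taps, mu = 5000, 8, 0.01
t = np.arange(n)
signal = np.sin(2 * np.pi * 0.01 * t)            # desired signal
ref = rng.standard_normal(n)                     # reference noise input
noise = np.convolve(ref, [0.8, -0.4, 0.2])[:n]   # unknown causal path to primary
primary = signal + noise                         # corrupted "primary" input

w = np.zeros(taps)                               # adaptive filter weights
out = np.zeros(n)
for j in range(taps - 1, n):
    x = ref[j - taps + 1:j + 1][::-1]            # most recent reference samples
    y = w @ x                                    # filter output: noise estimate
    e = primary[j] - y                           # error = signal estimate
    w += 2 * mu * e * x                          # Widrow-Hoff LMS update
    out[j] = e

# After adaptation the error should track the signal far better than primary does.
print(np.mean((primary[-1000:] - signal[-1000:]) ** 2),
      np.mean((out[-1000:] - signal[-1000:]) ** 2))
```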


Journal ArticleDOI
TL;DR: Human visual perception and the fundamental laws of optics are considered in the development of a shading rule that provides better quality and increased realism in generated images.
Abstract: The quality of computer generated images of three-dimensional scenes depends on the shading technique used to paint the objects on the cathode-ray tube screen. The shading algorithm itself depends in part on the method for modeling the object, which also determines the hidden surface algorithm. The various methods of object modeling, shading, and hidden surface removal are thus strongly interconnected. Several shading techniques corresponding to different methods of object modeling and the related hidden surface algorithms are presented here. Human visual perception and the fundamental laws of optics are considered in the development of a shading rule that provides better quality and increased realism in generated images.

3,393 citations
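As a rough illustration of a shading rule of this kind, the sketch below combines ambient, diffuse (Lambert), and specular terms at a single surface point; the coefficients and the specular exponent are illustrative assumptions, not the paper's exact formula.

```python
import numpy as np

def shade(normal, to_light, to_viewer, ka=0.1, kd=0.6, ks=0.3, n=10):
    """Pointwise intensity = ambient + diffuse (Lambert) + specular terms."""
    N = normal / np.linalg.norm(normal)
    L = to_light / np.linalg.norm(to_light)
    V = to_viewer / np.linalg.norm(to_viewer)
    R = 2 * (N @ L) * N - L                  # mirror reflection of L about N
    diffuse = max(N @ L, 0.0)                # strongest when facing the light
    specular = max(R @ V, 0.0) ** n          # highlight tightens as n grows
    return ka + kd * diffuse + ks * specular

print(shade(np.array([0.0, 0.0, 1.0]),       # surface normal
            np.array([1.0, 1.0, 1.0]),       # direction to light
            np.array([0.0, 0.0, 1.0])))      # direction to viewer
```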


Journal ArticleDOI
TL;DR: In this paper, the authors provide a conceptual framework for the analysis of organizational legitimacy and the process of legitimation through which organizations act to increase their perceived legitimacy, illustrating both the specific circumstances which can lead to organizational legitimacy problems and the actions that can be taken to legitimate an organization.
Abstract: Organizations seek to establish congruence between the social values associated with or implied by their activities and the norms of acceptable behavior in the larger social system of which they are a part. Insofar as these two value systems are congruent we can speak of organizational legitimacy. When an actual or potential disparity exists between the two value systems, there will exist a threat to organizational legitimacy. These threats take the form of legal, economic, and other social sanctions. In this paper, it is argued that an empirical focus on organizational efforts to become legitimate can aid in explaining and analyzing many organizational behaviors taken with respect to the environment, and further, can generate hypotheses and a conceptual perspective that can direct additional attention to the issue of organizational legitimacy. This paper provides a conceptual framework for the analysis of organizational legitimacy and the process of legitimation through which organizations act to increase their perceived legitimacy. It presents a number of examples including a discussion of the American Institute for Foreign Study as a demonstration of these ideas in action. Both the particular circumstances which can lead to problems of organizational legitimacy and some of the actions that can be taken to legitimate an organization are illustrated.

2,791 citations


Journal ArticleDOI
TL;DR: Many of the network results of Jackson on arrival and service rate dependencies, of Posner and Bernholtz on different classes of customers, and of Chandy on different types of service centers are combined and extended in this paper.
Abstract: We derive the joint equilibrium distribution of queue sizes in a network of queues containing N service centers and R classes of customers. The equilibrium state probabilities have the general form P(S) = C d(S) f_1(x_1) f_2(x_2) ... f_N(x_N), where S is the state of the system, x_i is the configuration of customers at the ith service center, d(S) is a function of the state of the model, f_i is a function that depends on the type of the ith service center, and C is a normalizing constant. We consider four types of service centers to model central processors, data channels, terminals, and routing delays. The queueing disciplines associated with these service centers include first-come-first-served, processor sharing, no queueing, and last-come-first-served. Each customer belongs to a single class of customers while awaiting or receiving service at a service center but may change classes and service centers according to fixed probabilities at the completion of a service request. For open networks we consider state dependent arrival processes. Closed networks are those with no arrivals. A network may be closed with respect to some classes of customers and open with respect to other classes of customers. At three of the four types of service centers, the service times of customers are governed by probability distributions having rational Laplace transforms, different classes of customers having different distributions. At first-come-first-served type service centers the service time distribution must be identical and exponential for all classes of customers. Many of the network results of Jackson on arrival and service rate dependencies, of Posner and Bernholtz on different classes of customers, and of Chandy on different types of service centers are combined and extended in this paper. The results become special cases of the model presented here. An example shows how different classes of customers can affect models of computer systems. Finally, we show that an equivalent model encompassing all of the results involves only classes of customers with identical exponentially distributed service times. All of the other structure of the first model can be absorbed into the fixed probabilities governing the change of class and change of service center of each class of customers.

2,416 citations
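The product form is easy to verify numerically in a tiny special case. The sketch below enumerates the states of a hypothetical closed network with two exponential FCFS centers and three customers (a Gordon-Newell-type special case of the model, with all rates made up) and computes the normalizing constant C by brute force.

```python
from itertools import product as cartesian
from math import isclose

# Closed network, single class, N=2 exponential FCFS centers, K=3 customers.
# In this special case f_i(x_i) = (v_i / mu_i)**x_i with visit ratios v_i.
mu = [1.0, 1.5]           # service rates (illustrative)
v = [1.0, 1.0]            # visit ratios from the (trivial) routing
K = 3

states = [s for s in cartesian(range(K + 1), repeat=2) if sum(s) == K]
weights = {s: (v[0] / mu[0]) ** s[0] * (v[1] / mu[1]) ** s[1] for s in states}
C = 1.0 / sum(weights.values())                 # normalizing constant
P = {s: C * w for s, w in weights.items()}      # product-form probabilities

assert isclose(sum(P.values()), 1.0)
for s in states:
    print(s, round(P[s], 4))
```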


Journal ArticleDOI
TL;DR: It is an opioid agonist on the guinea pig myenteric plexus-longitudinal muscle preparation and on the mouse vas deferens, and it binds to opiate receptors in homogenates of guinea pig brain.

2,271 citations


Journal ArticleDOI
TL;DR: A progress report on ATI studies is provided in this article, with a focus on ATIs related to instruction, drawing on a comprehensive review Richard Snow and the author have just completed (Cronbach & Snow, in press).
Abstract: The historic separation of experimental psychology from the study of individual differences impeded psychological research, so I argued when last I had occasion to address the APA audience (Cronbach, 1957). It was time, I said, for the manipulating and the correlating schools of research to crossbreed, to bring forth a science of Aptitude x Treatment interactions (ATIs). As that hybrid discipline is now flourishing, a progress report on ATI studies is the appropriate first business of this article. It is not practical to treat here the studies of ATIs in social behavior (e.g., Fiedler, 1973; McGuire, 1969), ATIs in response to drugs and therapy (e.g., Insel & Moos, 1974; Lasagna, 1972; Schildkraut, 1970), or ATIs in learning and motivation generally. I confine myself to ATIs related to instruction, drawing on a comprehensive review Richard Snow and I have just completed (Cronbach & Snow, in press). In that field, several research programs have brought us a long way; particularly to be acknowledged are the sustained inquiries of Bill McKeachie, Jack Atkinson, Russ Kropp and Fred King, George Stern, David Hunt, Victor Bunderson and Jack Dunham, and Snow and his graduate students. Important as ATIs are proving to be, the line of investigation I advocated in 1957 no longer seems sufficient. Interactions are not confined to the first order; the dimensions of the situation and of the person enter into complex interactions. This complexity forces us to ask once again, Should social science aspire to reduce behavior to laws?

2,155 citations


Journal ArticleDOI
Grace Wahba1
TL;DR: In this article, a generalized cross-validation estimate is proposed for the smoothing parameter of smoothing polynomial splines, which controls the tradeoff between the "roughness" of the solution and its infidelity to the data and so governs the average square error of the smoothing spline.
Abstract: Smoothing splines are well known to provide nice curves which smooth discrete, noisy data. We obtain a practical, effective method for estimating the optimum amount of smoothing from the data. Derivatives can be estimated from the data by differentiating the resulting (nearly) optimally smoothed spline. We consider the model $y_i = g(t_i) + \epsilon_i$, $i = 1, 2, \ldots, n$, $t_i \in [0, 1]$, where $g \in W_2^{(m)} = \{f : f, f', \ldots, f^{(m-1)} \text{ abs. cont.}, f^{(m)} \in \mathcal{L}_2[0, 1]\}$, and the $\{\epsilon_i\}$ are random errors with $E\epsilon_i = 0$, $E\epsilon_i \epsilon_j = \sigma^2 \delta_{ij}$. The error variance $\sigma^2$ may be unknown. As an estimate of $g$ we take the solution $g_{n,\lambda}$ to the problem: find $f \in W_2^{(m)}$ to minimize $$\frac{1}{n}\sum_{j=1}^n (f(t_j) - y_j)^2 + \lambda \int_0^1 (f^{(m)}(u))^2\,du.$$ The function $g_{n,\lambda}$ is a smoothing polynomial spline of degree $2m - 1$. The parameter $\lambda$ controls the tradeoff between the "roughness" of the solution, as measured by $\int_0^1 [f^{(m)}(u)]^2\,du$, and the infidelity to the data, as measured by $\frac{1}{n}\sum_{j=1}^n (f(t_j) - y_j)^2$, and so governs the average square error $R(\lambda; g) = R(\lambda)$ defined by $$R(\lambda) = \frac{1}{n}\sum_{j=1}^n (g_{n,\lambda}(t_j) - g(t_j))^2.$$ We provide an estimate $\hat{\lambda}$, called the generalized cross-validation estimate, for the minimizer of $R(\lambda)$. The estimate $\hat{\lambda}$ is the minimizer of $V(\lambda)$ defined by $$V(\lambda) = \frac{1}{n}\|(I - A(\lambda))y\|^2 \Big/ \left[\frac{1}{n}\mathrm{Trace}(I - A(\lambda))\right]^2,$$ where $y = (y_1, \ldots, y_n)^t$ and $A(\lambda)$ is the $n \times n$ matrix satisfying $(g_{n,\lambda}(t_1), \ldots, g_{n,\lambda}(t_n))^t = A(\lambda)y$. We prove that there exists a sequence of minimizers $\tilde{\lambda} = \tilde{\lambda}(n)$ of $EV(\lambda)$ such that as the (regular) mesh $\{t_i\}_{i=1}^n$ becomes finer, $\lim_{n \to \infty} ER(\tilde{\lambda})/\min_\lambda ER(\lambda) \downarrow 1$. A Monte Carlo experiment with several smooth $g$'s was tried with $m = 2$, $n = 50$ and several values of $\sigma^2$, and typical values of $R(\hat{\lambda})/\min_\lambda R(\lambda)$ were found to be in the range 1.01--1.4. The derivative $g'$ of $g$ can be estimated by $g'_{n,\hat{\lambda}}(t)$. In the Monte Carlo examples tried, the minimizer of $R_D(\lambda) = \frac{1}{n}\sum_{j=1}^n (g'_{n,\lambda}(t_j) - g'(t_j))^2$ tended to be close to the minimizer of $R(\lambda)$, so that $\hat{\lambda}$ was also a good value of the smoothing parameter for estimating the derivative.

1,735 citations
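Since V(λ) only requires the hat matrix A(λ) of a linear smoother, the recipe can be sketched with any such smoother. The toy below uses ridge regression on a polynomial basis in place of a true smoothing spline; the basis, data-generating function, and λ grid are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
t = np.linspace(0, 1, n)
g = np.sin(2 * np.pi * t)                       # true curve (made up)
y = g + 0.2 * rng.standard_normal(n)            # noisy observations

X = np.vander(t, 8, increasing=True)            # polynomial basis, degree 7

def hat_matrix(lam):
    """A(lam): maps y to the fitted values of the penalized fit."""
    return X @ np.linalg.solve(X.T @ X + n * lam * np.eye(X.shape[1]), X.T)

def V(lam):
    """Generalized cross-validation score V(lam)."""
    A = hat_matrix(lam)
    r = (np.eye(n) - A) @ y
    return (r @ r / n) / (np.trace(np.eye(n) - A) / n) ** 2

lams = np.logspace(-10, 0, 60)
lam_hat = lams[np.argmin([V(l) for l in lams])]  # the GCV estimate
print("GCV choice of lambda:", lam_hat)
```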



Journal ArticleDOI
TL;DR: This review attempts to organize a range of neuropsychological and psycho-physiological data on attention by identifying three basic attentional control processes: one regulates arousal resulting from input; a second controls the preparatory activation of response mechanisms; and a third operates to coordinate arousal and activation, an operation that demands effort.
Abstract: This review attempts to organize a range of neuropsychological and psycho-physiological data on attention. Three separate, but interacting, neural systems are distinguished: One controls arousal, which is defined in terms of phasic physiological responses to input. The arousal control circuits center on the amygdala. A second system controls activation, which is defined in terms of tonic physiological readiness to respond. The readiness circuits center on the basal ganglia of the forebrain. A third system is discerned which coordinates arousal and activation. This coordinating activity is defined as demanding effort. Its circuitry centers on the hippocampus. When arousal, activation, and effort are involved in problem solving, at least two further distinctions can be made. During categorizing, arousal precedes activation; during reasoning, activation precedes arousal. Finally, the review addresses the question of whether effort in problem solving is to be attributed solely to peripheral muscular factors or whether, in fact, direct monitoring of changes in brain organization (changes in set, attitude, etc.) can be productive of measurable indicators of effort. Pribram presented a series of experiments detailing the effects of hippocampal lesions on problem-solving behavior. The results of these experiments were accounted for by recourse to the concept of experiencing a change in awareness, which was interpreted as indicating the involvement of "attention." A continuing interest in problems of attention has been reflected not only in the work reported from our laboratory (a dozen and a half papers) but also in that of many others (see reviews by Mostofsky, 1970; Horn and Hinde, 1970; and Kahneman, 1973). Research has largely been directed to two general paradigms defined by the following operations: (a) recording the occurrence of physiological or behavioral responses against a background of monotonous repetition of sensory events (orienting, vigilance, and habituation) and (b) the invariant (categorizing), or the variable but computable (reasoning), pairing of the outcome of response (reinforcement) to sensory events. An assessment of the data derived from these paradigms (in approximately 200 experiments) has led us to identify three basic attentional control processes: One regulates arousal resulting from input; a second controls the preparatory activation of response mechanisms; and a third operates to coordinate arousal and activation, an operation that demands effort. …

1,197 citations



Book ChapterDOI
A. Leonard1
TL;DR: In this paper, the derivation of smoothed or filtered momentum and continuity equations for large-scale, energy-containing eddies is considered and questions regarding the energy loss of large-scale turbulence are discussed along with aspects of turbulent diffusion of a passive scalar.
Abstract: The derivation of smoothed or filtered momentum and continuity equations for large-scale, energy-containing eddies is considered. Questions regarding the energy loss of large-scale turbulence are discussed along with aspects of turbulent diffusion of a passive scalar. It is found that the large-scale fluctuations satisfy filtered or averaged momentum and continuity equations. An averaging of the nonlinear advection term yields two terms.
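As a hedged illustration of the decomposition described (conventions vary across the LES literature, and this is not necessarily the paper's exact notation), filtering the advection term splits it into a resolved part and a residual stress that must be modeled:

$$\overline{u_i u_j} = \bar{u}_i \bar{u}_j + \tau_{ij}, \qquad \tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j,$$

so the filtered momentum equation takes the form $\partial_t \bar{u}_i + \partial_j(\bar{u}_i \bar{u}_j) = -\frac{1}{\rho}\partial_i \bar{p} + \nu \nabla^2 \bar{u}_i - \partial_j \tau_{ij}$.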

Journal ArticleDOI
TL;DR: In this article, it was shown that a low-density gas can be cooled by illuminating it with intense, quasi-monochromatic light confined to the lower-frequency half of a resonance line's Doppler width.
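A back-of-the-envelope version of the mechanism (standard Doppler-cooling reasoning, not the paper's exact treatment): an atom moving toward a beam tuned below resonance, $\omega_L < \omega_0$, is Doppler-shifted into resonance, absorbs photons of energy $\hbar\omega_L$, and re-emits isotropically at an average energy of about $\hbar\omega_0$, so each scattering removes kinetic energy of roughly

$$\Delta E \approx \hbar(\omega_0 - \omega_L) > 0.$$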


Journal ArticleDOI
TL;DR: The alpha-beta procedure for searching game trees is shown to be optimal in a certain sense, and bounds are obtained for its running time with various kinds of random data.
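A compact sketch of the procedure in negamax form (a toy tree and formulation chosen for brevity, not the paper's notation):

```python
# Negamax alpha-beta. A leaf's value is from the perspective of the player
# to move at that leaf; the negation at each call flips perspective.

def alphabeta(node, alpha, beta):
    """Return the value of `node`; stop scanning children once the value
    reaches `beta`, since the opponent would never allow this position."""
    if isinstance(node, (int, float)):        # leaf: static evaluation
        return node
    best = alpha
    for child in node:
        best = max(best, -alphabeta(child, -beta, -best))
        if best >= beta:                      # cutoff: prune remaining children
            break
    return best

# Depth-2 toy tree: inner lists are the opponent's positions, numbers are leaves.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, float("-inf"), float("inf")))   # -> 3
```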


Journal ArticleDOI
TL;DR: The authors showed that self-perceptions and social perceptions may persevere after the initial basis for such perceptions has been completely discredited: subjects first received false feedback indicating that they had either succeeded or failed on a novel discrimination task, and their impressions persisted even after this feedback was discredited.
Abstract: Two experiments demonstrated that self-perceptions and social perceptions may persevere after the initial basis for such perceptions has been completely discredited. In both studies subjects first received false feedback, indicating that they had either succeeded or failed on a novel discrimination task and then were thoroughly debriefed concerning the predetermined and random nature of this outcome manipulation. In experiment 2, both the initial outcome manipulation and subsequent debriefing were watched and overheard by observers. Both actors and observers showed substantial perseverance of initial impressions concerning the actors' performance and abilities following a standard "outcome" debriefing. "Process" debriefing, in which explicit discussion of the perseverance process was provided, generally proved sufficient to eliminate erroneous self-perceptions. Biased attribution processes that might underlie perserverance phenomena and the implications of the present data for the ethical conduct of deception research are discussed.

Journal ArticleDOI
TL;DR: In this article, the mean of a multivariate normal distribution having uniformly lower mean squared error than the sample mean is reviewed briefly in an empirical Bayes context and applied to predict baseball averages, to estimate toxomosis prevalence rates, and to estimate the exact size of Pearson's chi-square test with results from a computer simulation.
Abstract: In 1961, James and Stein exhibited an estimator of the mean of a multivariate normal distribution having uniformly lower mean squared error than the sample mean. This estimator is reviewed briefly in an empirical Bayes context. Stein's rule and its generalizations are then applied to predict baseball averages, to estimate toxoplasmosis prevalence rates, and to estimate the exact size of Pearson's chi-square test with results from a computer simulation. In each of these examples, the mean square error of these rules is less than half that of the sample mean.
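A small simulation makes the risk comparison concrete. The sketch below applies the James-Stein rule for a p-dimensional normal mean with known unit variance, shrinking toward the origin; the dimension, true means, and trial count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
p, trials = 10, 2000
theta = rng.uniform(-1, 1, p)                  # true means (made up)

mse_mle = mse_js = 0.0
for _ in range(trials):
    x = theta + rng.standard_normal(p)         # one observation per coordinate
    js = (1 - (p - 2) / (x @ x)) * x           # James-Stein shrinkage toward 0
    mse_mle += np.sum((x - theta) ** 2)
    mse_js += np.sum((js - theta) ** 2)

print("sample-mean risk:", mse_mle / trials)   # close to p
print("James-Stein risk:", mse_js / trials)    # uniformly smaller for p >= 3
```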

Journal ArticleDOI
TL;DR: In this paper, the authors have found events of the form e+ + e− → e± + μ∓ + missing energy, in which no other charged particles or photons are detected.
Abstract: We have found events of the form e+ + e− → e± + μ∓ + missing energy, in which no other charged particles or photons are detected. Most of these events are detected at or above a center-of-mass energy of 4 GeV. The missing-energy and missing-momentum spectra require that at least two additional particles be produced in each event. We have no conventional explanation for these events.

Journal ArticleDOI
TL;DR: The five EcoRI restriction sites in bacteriophage lambda DNA have been mapped and the DNA lengths of the EcoRI fragments are in agreement with their electrophoretic mobility on agarose gels but are not in agreement with their mobilities on polyacrylamide gels.

Journal ArticleDOI
01 Apr 1975
TL;DR: A least-mean-square (LMS) adaptive algorithm for complex signals is derived, where the boldfaced terms represent complex (phasor) signals and the bar above X_j designates the complex conjugate.
Abstract: A least-mean-square (LMS) adaptive algorithm for complex signals is derived. The original Widrow-Hoff LMS algorithm is W_{j+1} = W_j + 2μ e_j X_j. The complex form is shown to be W_{j+1} = W_j + 2μ e_j X̄_j, where the boldfaced terms represent complex (phasor) signals and the bar above X_j designates the complex conjugate.
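The complex update is a one-line change from the real algorithm. The sketch below uses it to identify a hypothetical complex FIR system; the system taps, step size, and input model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, taps, mu = 4000, 4, 0.02
x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
h = np.array([0.5 + 0.5j, -0.3j, 0.2, 0.1 - 0.1j])   # unknown complex system
d = np.convolve(x, h)[:n]                            # desired response

W = np.zeros(taps, dtype=complex)
for j in range(taps - 1, n):
    X = x[j - taps + 1:j + 1][::-1]    # current (phasor) input vector
    e = d[j] - W @ X                   # complex error
    W = W + 2 * mu * e * X.conj()      # complex-conjugate LMS update
print(np.round(W, 3))                  # should approach h
```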

Journal ArticleDOI
TL;DR: The conclusion is offered that interactions between microspikes and the substratum adjacent to the growth cone are important determinants of the directions and pathways of axonal elongation.

Journal ArticleDOI
TL;DR: By utilizing total body hypothermia and circulatory arrest, aortic arch replacement can be carried out with an acceptable mortality rate, and corrective surgery can be offered to patients with life-threatening enlarging aneurysms of the aorta.

Journal ArticleDOI
TL;DR: In this article, the authors found that participants who had undertaken the activity expecting an extrinsic reward showed less subsequent interest in the activity than those who had not expected a reward.
Abstract: Preschool children engaged in a novel activity in individual sessions. In the expected reward conditions, subjects expected to win a chance to play with highly attractive toys by engaging in the activity; in the unexpected reward conditions, subjects had no prior knowledge of this reward. Orthogonally, subjects in the surveillance conditions were told that their performance would be monitored via a television camera; while subjects in the nonsurveillance conditions were not monitored. Two weeks later, unobtrusive measures of the subjects' intrinsic interest in the activity were obtained in their classrooms. Two significant main effects were obtained reproducing and expanding findings from earlier studies. Subjects who had undertaken the activity expecting an extrinsic reward showed less subsequent interest in the activity than those who had not expected a reward, and subjects who had been placed under surveillance showed less subsequent interest than those not previously monitored.

Journal ArticleDOI
Peter Wright1
TL;DR: Designing effective marketing programs requires forecasting the choice strategy a consumer will use in a given decision environment and both simplifying and optimizing considerations may affect the design of these programs.
Abstract: Designing effective marketing programs requires forecasting the choice strategy a consumer will use in a given decision environment. Both simplifying and optimizing considerations may affect the st...

Proceedings ArticleDOI
19 May 1975
TL;DR: My approach to computer vision is best characterized as inverse computer graphics: where computer graphics numerically simulates image formation to generate synthetic television images, here perceived television pictures are analyzed to compute detailed geometric models.
Abstract: My approach to computer vision is best characterized as inverse computer graphics. In computer graphics, the world is represented in sufficient detail so that the image forming process can be numerically simulated to generate synthetic television images; in the inverse, perceived television pictures (from a real TV camera) are analyzed to compute detailed geometric models.

Journal ArticleDOI
TL;DR: In this article, the asymptotic relative efficiency of logistic regression with respect to the normal discrimination procedure is computed, and it is shown that logistic regression is between one half and two thirds as effective as normal discrimination for statistically interesting values of the parameters.
Abstract: A random vector x arises from one of two multivariate normal distributions differing in mean but not covariance. A training set x 1, x 2, ··· x n of previous cases, along with their correct assignments, is known. These can be used to estimate Fisher's discriminant by maximum likelihood and then to assign x on the basis of the estimated discriminant, a method known as the normal discrimination procedure. Logistic regression does the same thing but with the estimation of Fisher's discriminant done conditionally on the observed values of x 1, x 2, ···, x n. This article computes the asymptotic relative efficiency of the two procedures. Typically, logistic regression is shown to be between one half and two thirds as effective as normal discrimination for statistically interesting values of the parameters.
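A rough empirical companion to the asymptotic result: the simulation below fits both procedures to small training sets drawn from two equal-covariance normals and compares test error rates. It measures error, not the paper's asymptotic relative efficiency itself, and the dimension, separation, and sample sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
p, n_train, n_test, reps = 5, 40, 4000, 200
delta = np.full(p, 0.5)                    # mean difference between classes

def draw(n):
    """Two equal-covariance normal classes differing in mean by delta."""
    y = rng.integers(0, 2, n)
    X = rng.standard_normal((n, p)) + np.outer(y, delta)
    return X, y

err_lda = err_logit = 0.0
for _ in range(reps):
    Xtr, ytr = draw(n_train)
    Xte, yte = draw(n_test)
    err_lda += np.mean(LinearDiscriminantAnalysis().fit(Xtr, ytr).predict(Xte) != yte)
    err_logit += np.mean(LogisticRegression(max_iter=1000).fit(Xtr, ytr).predict(Xte) != yte)

print("normal discrimination (LDA):", err_lda / reps)
print("logistic regression:", err_logit / reps)
```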

Journal ArticleDOI
TL;DR: In this article, the problem of determining the slow viscous flow of an unbounded fluid past a single solid particle is formulated exactly as a system of linear integral equations of the first kind for a distribution of Stokeslets over the particle surface.
Abstract: The problem of determining the slow viscous flow of an unbounded fluid past a single solid particle is formulated exactly as a system of linear integral equations of the first kind for a distribution of Stokeslets over the particle surface. The unknown density of Stokeslets is identical with the surface-stress force and can be obtained numerically by reducing the integral equations to a system of linear algebraic equations. This appears to be an efficient way of determining solutions for several external flows past a particle, since it requires that the matrix of the algebraic system be inverted only once for a given particle. The technique was tested successfully against the analytic solutions for spheroids in uniform and simple shear flows, and was then applied to two problems involving the motion of finite circular cylinders: (i) a cylinder translating parallel to its axis, for which the local stress force distribution and the drag were determined; and (ii) the equivalent axis ratio of a freely suspended cylinder, which was calculated by determining the couple on a stationary cylinder placed symmetrically in two different simple shear flows. The numerical results were found to be consistent with the asymptotic analysis of Cox (1970, 1971) and in excellent agreement with the experiments of Anczurowski & Mason (1968), but not with those of Harris & Pittman (1975).
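For reference, the single-layer representation underlying such a formulation is usually written with the Stokeslet (Oseen) kernel; sign and normalization conventions vary, so this is a generic form rather than the paper's exact statement:

$$u_i(\mathbf{x}) = -\frac{1}{8\pi\mu}\int_S G_{ij}(\mathbf{x},\mathbf{y})\, f_j(\mathbf{y})\, dS_{\mathbf{y}}, \qquad G_{ij} = \frac{\delta_{ij}}{r} + \frac{r_i r_j}{r^3}, \quad \mathbf{r} = \mathbf{x} - \mathbf{y}, \; r = |\mathbf{r}|,$$

where f is the unknown surface-stress density; discretizing the surface turns this into the linear algebraic system described above.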

Book ChapterDOI
01 Jan 1975
TL;DR: This chapter presents some criteria for evaluating ideas for representation and presents a rough sketch of a particular version of a frame representation, and discusses the ways in which it can deal with the issues raised.
Abstract: This chapter presents some criteria for evaluating ideas for representation. It also presents a rough sketch of a particular version of a frame representation, and discusses the ways in which it can deal with the issues raised. The proceduralists assert that human knowledge is primarily a knowing how. The human information processor is a stored program device, with its knowledge of the world embedded in the programs. The declarativists do not believe that knowledge of a subject is intimately bound with the procedures for its use. They see intelligence as resting on two bases: a quite general set of procedures for manipulating facts of all sorts, and a set of specific facts describing particular knowledge domains. In thinking, the general procedures are applied to the domain-specific data to make deductions. Often this process has been based on the model of axiomatic mathematics. The facts are axioms and the thought process involves proof procedures for drawing conclusions from them.

Journal ArticleDOI
TL;DR: In this paper, seawater reacted with basalt at 200°C and 500 bars for 4752 hours, and the results showed that aqueous composition and alteration minerals were similar to those in the Iceland geothermal fields; sufficient heavy metals were solubilized to account for the deep-sea heavy-metal deposits.