
Showing papers by "Stanford University" published in 1984


Journal ArticleDOI
TL;DR: In this article, a system which utilizes a minimum mean square error (MMSE) estimator is proposed and then compared with other widely used systems which are based on Wiener filtering and the "spectral subtraction" algorithm.
Abstract: This paper focuses on the class of speech enhancement systems which capitalize on the major importance of the short-time spectral amplitude (STSA) of the speech signal in its perception. A system which utilizes a minimum mean-square error (MMSE) STSA estimator is proposed and then compared with other widely used systems which are based on Wiener filtering and the "spectral subtraction" algorithm. In this paper we derive the MMSE STSA estimator, based on modeling speech and noise spectral components as statistically independent Gaussian random variables. We analyze the performance of the proposed STSA estimator and compare it with a STSA estimator derived from the Wiener estimator. We also examine the MMSE STSA estimator under uncertainty of signal presence in the noisy observations. In constructing the enhanced signal, the MMSE STSA estimator is combined with the complex exponential of the noisy phase. It is shown here that the latter is the MMSE estimator of the complex exponential of the original phase, which does not affect the STSA estimation. The proposed approach results in a significant reduction of the noise, and provides enhanced speech with colorless residual noise. The complexity of the proposed algorithm is approximately that of other systems in the discussed class.
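The following is a minimal Python sketch of the general class of short-time spectral-amplitude enhancement systems discussed above: a per-bin gain is applied to the noisy STFT magnitude and recombined with the noisy phase. It uses a simple Wiener-style gain, not the paper's MMSE STSA (Ephraim-Malah) estimator; the function name `enhance`, the frame parameters, and the crude a-priori SNR estimate are illustrative assumptions.

```python
# Minimal sketch of short-time spectral-amplitude speech enhancement with a
# Wiener-style gain (a baseline in this class of systems). NOT the paper's
# MMSE STSA estimator; names and parameters here are illustrative assumptions.
import numpy as np

def enhance(noisy, noise_psd, frame_len=256, hop=128):
    """noisy: 1-D signal array; noise_psd: per-bin noise power estimate
    of length frame_len // 2 + 1 (assumed known, e.g. from speech-free frames)."""
    window = np.hanning(frame_len)
    out = np.zeros(len(noisy))
    for start in range(0, len(noisy) - frame_len, hop):
        frame = noisy[start:start + frame_len] * window
        spec = np.fft.rfft(frame)                                 # noisy spectral components
        noisy_psd = np.abs(spec) ** 2
        snr_prio = np.maximum(noisy_psd / noise_psd - 1.0, 1e-3)  # crude a-priori SNR
        gain = snr_prio / (1.0 + snr_prio)                        # Wiener amplitude gain
        enhanced = gain * np.abs(spec) * np.exp(1j * np.angle(spec))  # keep the noisy phase
        out[start:start + frame_len] += np.fft.irfft(enhanced) * window  # overlap-add
    return out
```

A practical system would estimate `noise_psd` from speech-absent frames and smooth the SNR estimate across time, but the gain-times-noisy-magnitude-plus-noisy-phase structure is the point being illustrated.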

3,905 citations


Journal Article
TL;DR: During the past few years several design algorithms have been developed for a variety of vector quantizers and the performance of these codes has been studied for speech waveforms, speech linear predictive parameter vectors, images, and several simulated random processes.
Abstract: A vector quantizer is a system for mapping a sequence of continuous or discrete vectors into a digital sequence suitable for communication over or storage in a digital channel. The goal of such a system is data compression: to reduce the bit rate so as to minimize communication channel capacity or digital storage memory requirements while maintaining the necessary fidelity of the data. The mapping for each vector may or may not have memory in the sense of depending on past actions of the coder, just as in well established scalar techniques such as PCM, which has no memory, and predictive quantization, which does. Even though information theory implies that one can always obtain better performance by coding vectors instead of scalars, scalar quantizers have remained by far the most common data compression system because of their simplicity and good performance when the communication rate is sufficiently large. In addition, relatively few design techniques have existed for vector quantizers. During the past few years several design algorithms have been developed for a variety of vector quantizers and the performance of these codes has been studied for speech waveforms, speech linear predictive parameter vectors, images, and several simulated random processes. It is the purpose of this article to survey some of these design techniques and their applications.
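As a concrete illustration of the memoryless vector-quantizer design algorithms the article surveys, here is a hedged Python sketch of a Lloyd/LBG-style (k-means) codebook iteration; the function names, random initialization, and fixed iteration count are assumptions, not the survey's exact procedure.

```python
# Illustrative sketch of memoryless vector-quantizer codebook design via a
# Lloyd/LBG-style (k-means) iteration. Initialization and stopping rule are
# assumptions for the sketch, not the survey's exact recipe.
import numpy as np

def train_codebook(vectors, codebook_size, iters=20, seed=0):
    """vectors: (n, dim) array of training vectors."""
    vectors = np.asarray(vectors, dtype=float)
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)].copy()
    for _ in range(iters):
        # Nearest-neighbor (minimum squared-error) partition of the training set.
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Centroid update: each codeword becomes the mean of its cell.
        for k in range(codebook_size):
            members = vectors[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

def quantize(vectors, codebook):
    """Return the index of the nearest codeword for each input vector."""
    vectors = np.asarray(vectors, dtype=float)
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)
```

Encoding then amounts to transmitting the nearest-codeword index for each input vector, which is where the rate reduction relative to scalar quantization comes from.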

2,743 citations


Book ChapterDOI
TL;DR: These lectures are based on a book "The RELATIVISTIC NUCLEAR MANY-BODY PROBLEM" written with Brian Serot which will appear as volume 16 of the series Advances in Nuclear Physics edited by J. W. Negele and E. Vogt.
Abstract: These lectures are based on a book “THE RELATIVISTIC NUCLEAR MANY-BODY PROBLEM” written with Brian Serot which will appear as volume 16 of the series Advances in Nuclear Physics edited by J. W. Negele and E. Vogt [R1]. I am distributing copies of the table of contents.

1,961 citations


Journal ArticleDOI
TL;DR: Tobin's model is also known as the censored or truncated regression model, as discussed by the authors: truncation occurs when observations outside a specified range are totally lost, whereas censoring occurs when one can at least observe the exogenous variables, as when a patient is still alive at the last observation date or cannot be located.
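For concreteness, a conventional statement of the Tobit (type I) censored-regression model follows; the notation is the standard textbook one rather than anything quoted from this survey.

```latex
% Standard Tobit (type I) censored-regression model, stated for illustration;
% notation is conventional rather than taken from the survey itself.
\begin{align*}
  y_i^{*} &= x_i'\beta + \varepsilon_i, \qquad \varepsilon_i \sim N(0,\sigma^2),\\
  y_i &= \max(0,\; y_i^{*}).
\end{align*}
% Censoring: y_i is observed as 0 whenever y_i^* <= 0, but x_i is still observed.
% Truncation: observations with y_i^* <= 0 are dropped from the sample entirely,
% so neither y_i nor x_i is available.
```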

1,552 citations


Book
01 Jan 1984
TL;DR: This anthology of essays from the inventor of literate programming also contains excerpts from the programs for TEX and METAFONT, and CWEB, a system for literate programming in C and related languages.
Abstract: From the Publisher: Including early papers on structured programming, as well as the Computer Journal article that launched its study, this anthology of essays from the inventor of literate programming also contains excerpts from the programs for TEX and METAFONT and CWEB, a system for literate programming in C and related languages.

1,486 citations


Journal ArticleDOI
01 Oct 1984-Nature
TL;DR: The cold dark matter hypothesis as mentioned in this paper suggests that the dark matter that appears to be gravitationally dominant on all scales larger than galactic cores may consist of axions, stable photinos, or other collisionless particles whose velocity dispersion in the early Universe is so small that fluctuations of galactic size or larger are not damped by free streaming.
Abstract: The dark matter that appears to be gravitationally dominant on all scales larger than galactic cores may consist of axions, stable photinos, or other collisionless particles whose velocity dispersion in the early Universe is so small that fluctuations of galactic size or larger are not damped by free streaming. An attractive feature of this cold dark matter hypothesis is its considerable predictive power: the post-recombination fluctuation spectrum is calculable, and it in turn governs the formation of galaxies and clusters. Good agreement with the data is obtained for a Zeldovich (|δk|² ∝ k) spectrum of primordial fluctuations.
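For reference, the Zeldovich spectrum quoted in the abstract, restated in conventional power-spectrum notation; this adds nothing beyond the abstract itself.

```latex
% The Zeldovich (scale-invariant, n = 1) primordial spectrum referred to above,
% written in the usual power-spectrum notation for illustration only.
\begin{equation*}
  P(k) \;\equiv\; \langle |\delta_k|^2 \rangle \;\propto\; k^{\,n},
  \qquad n = 1 \ \text{(Zeldovich)}.
\end{equation*}
```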

1,448 citations


Journal ArticleDOI
TL;DR: In this paper, the cosmological constraints on supersymmetric theories with a new stable particle were considered and bounds on the parameters in the lagrangian which govern its mass and couplings were derived.

1,437 citations


Proceedings ArticleDOI
06 Jun 1984
TL;DR: A model for asynchronous distributed computation is presented and it is shown that natural asynchronous distributed versions of a large class of deterministic and stochastic gradient-like algorithms retain the desirable convergence properties of their centralized counterparts.
Abstract: We present a model for asynchronous distributed computation and then proceed to analyze the convergence of natural asynchronous distributed versions of a large class of deterministic and stochastic gradient-like algorithms. We show that such algorithms retain the desirable convergence properties of their centralized counterparts, provided that the time between consecutive communications between processors plus communication delays are not too large.
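A toy Python sketch of the kind of asynchronous, delayed-gradient iteration whose convergence the paper analyzes is given below; the quadratic objective, random bounded delays, and step size are arbitrary assumptions, not the paper's formal model.

```python
# Toy illustration of a gradient iteration driven by stale (delayed) gradients,
# i.e. the kind of asynchrony whose convergence the paper studies. The objective,
# delay pattern, and step size are arbitrary assumptions for the sketch.
import numpy as np

def async_gradient_descent(grad, x0, steps=200, max_delay=5, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    history = [np.array(x0, dtype=float)]
    x = history[0].copy()
    for _ in range(steps):
        # Each update uses a gradient computed at an outdated iterate, as if it
        # had just arrived from a remote processor after a communication delay.
        delay = rng.integers(0, max_delay + 1)
        stale_x = history[max(0, len(history) - 1 - delay)]
        x = x - lr * grad(stale_x)
        history.append(x.copy())
    return x

# Example: minimize f(x) = 0.5 * ||x||^2, whose gradient is simply x.
x_final = async_gradient_descent(lambda x: x, x0=[4.0, -3.0])
```

With bounded delays and a small step size the iterate still converges to the minimizer, which is the qualitative behavior the paper establishes for a much broader class of algorithms.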

1,278 citations


Journal ArticleDOI
TL;DR: Of 10 distinct cloned DNA copies of mRNAs expressed in T lymphocytes but not in B lymphocytes and associated with membrane-bound polysomes, one hybridizes to a region of the genome that has rearranged in a T-cell lymphoma and several T-cell hybridomas, suggesting that it encodes one chain of the elusive antigen receptor on the surface of T lymphocytes.
Abstract: Of 10 distinct cloned DNA copies of mRNAs expressed in T lymphocytes but not in B lymphocytes and associated with membrane-bound polysomes, one hybridizes to a region of the genome that has rearranged in a T-cell lymphoma and several T-cell hybridomas. These characteristics suggest that it encodes one chain of the elusive antigen receptor on the surface of T lymphocytes.

1,218 citations


Journal ArticleDOI
01 Jul 1984
TL;DR: The combination of decreasing feature sizes and increasing chip sizes is leading to a communication crisis in the area of VLSI circuits and systems, and the possibility of applying optical and electrooptical technologies to such interconnection problems is investigated.
Abstract: The combination of decreasing feature sizes and increasing chip sizes is leading to a communication crisis in the area of VLSI circuits and systems. It is anticipated that the speeds of MOS circuits will soon be limited by interconnection delays, rather than gate delays. This paper investigates the possibility of applying optical and electrooptical technologies to such interconnection problems. The origins of the communication crisis are discussed. Those aspects of electrooptic technology that are applicable to the generation, routing, and detection of light at the level of chips and boards are reviewed. Algorithmic implications of interconnections are discussed, with emphasis on the definition of a hierarchy of interconnection problems from the signal-processing area having an increasing level of complexity. One potential application of optical interconnections is to the problem of clock distribution, for which a single signal must be routed to many parts of a chip or board. More complex is the problem of supplying data interconnections via optical technology. Areas in need of future research are identified.
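A standard first-order estimate (not a formula taken from this paper) makes the scaling argument concrete: the delay of a distributed RC wire grows quadratically with its length, while gate delays shrink with feature size.

```latex
% First-order distributed-RC estimate of on-chip wire delay, stated only to
% motivate why interconnect delay overtakes gate delay; not from this paper.
% For a wire of length L with resistance r and capacitance c per unit length:
\begin{equation*}
  \tau_{\text{wire}} \;\approx\; \tfrac{1}{2}\, r\, c\, L^{2},
\end{equation*}
% so larger chips (larger L) and finer features (larger r per unit length)
% drive wire delay up even as gate delays fall.
```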

1,187 citations


Journal ArticleDOI
TL;DR: Stressors, social resources, and coping were additively predictive of patients' functioning, but coping and social resources did not have stress-attenuation or buffering effects.
Abstract: We used a stress and coping paradigm to guide the development of indices of coping responses and to explore the roles of stress, social resources, and coping among 424 men and women entering treatment for depression. We also used an expanded concept of multiple domains of life stress to develop several indices of ongoing life strains. Although most prior studies have focused on acute life events, we found that chronic strains were somewhat more strongly and consistently related to the severity of dysfunction. The coping indices generally showed acceptable conceptual and psychometric characteristics and only moderate relationships to respondents' sociodemographic characteristics or to the severity of the stressful event for which coping was sampled. Coping responses directed toward problem solving and affective regulation were associated with less severe dysfunction, whereas emotional-discharge responses, more frequently used by women, were linked to greater dysfunction. Stressors, social resources, and coping were additively predictive of patients' functioning, but coping and social resources did not have stress-attenuation or buffering effects.

Journal ArticleDOI
TL;DR: In this paper, the authors present initial experiments on a specific unsolved control problem that appeared to be central to advances in the art of robotics: the control of a flexible member (one link of a robot system).
Abstract: The present investigation is concerned with initial experiments regarding a specific unsolved control problem which appeared to be central to advances in the art of robotics. This problem involves the control of a flexible member (one link of a robot system). The position of the end-effector, called the end point or tip, is controlled by measuring that position and using the measurement as a basis for applying control torque to the other end of the flexible member, as for instance, the robot's elbow joint. A description is presented of the features of the first experimental arm which has been made, and an outline is provided of the general strategy for controlling it using its tip sensor and shoulder torquer.
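The following toy Python loop only illustrates the sensing/actuation structure described above, in which the measured tip position drives a torque applied at the joint; a rigid link stands in for the flexible beam and the PD gains are arbitrary, so this is emphatically not the controller developed in the paper.

```python
# Toy closed loop illustrating the measurement-to-torque structure described in
# the abstract: the tip position is measured and a torque is commanded at the
# joint. A rigid link replaces the flexible beam; gains and parameters are
# arbitrary assumptions, not the paper's design.
import numpy as np

def simulate_tip_control(tip_target=0.5, steps=2000, dt=0.001,
                         inertia=0.02, length=1.0, kp=4.0, kd=0.8):
    theta, omega = 0.0, 0.0                              # joint angle and angular rate
    for _ in range(steps):
        tip = length * np.sin(theta)                     # measured tip position (rigid-link proxy)
        tip_rate = length * np.cos(theta) * omega
        torque = kp * (tip_target - tip) - kd * tip_rate # torque applied at the joint
        omega += (torque / inertia) * dt                 # simple Euler integration
        theta += omega * dt
    return length * np.sin(theta)                        # final tip position

final_tip = simulate_tip_control()                       # settles near tip_target
```

With a genuinely flexible link the noncolocated tip measurement makes this naive loop much harder to stabilize, which is precisely the difficulty the experimental arm was built to study.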

Journal ArticleDOI
TL;DR: 1-methyl-4-phenyl-1,2,5,6-tetrahydropyridine appears effective in producing an animal model for Parkinson's disease in the squirrel monkey, and may be one of the more selective neurotoxins described to date.

Journal ArticleDOI
P. Ponte, Sun-Yu Ng, Joanne N. Engel, Peter W. Gunning, Larry Kedes
TL;DR: The complete nucleotide sequence of a human beta actin cDNA is reported, and conservation of sequences suggests that strong selective pressures operate on non-translated segments of beta actin mRNA.
Abstract: We report the complete nucleotide sequence of a human beta actin cDNA. Both the 5' and 3' untranslated regions of the sequence are similar (greater than 80%) to the analogous regions of the rat beta-actin gene reported by Nudel et al (1983). When a segment of the 3' untranslated region is used as a radiolabelled probe, strong hybridization to chick beta actin mRNA is seen. This conservation of sequences suggests that strong selective pressures operate on non-translated segments of beta actin mRNA.

Journal ArticleDOI
20 Jan 1984-Science
TL;DR: The dentate-interpositus nuclei were concluded to be critically involved in the learning and production of classically conditioned responses.
Abstract: Classical conditioning of the eyelid response in the rabbit was used to investigate the neuronal structures mediating basic associative learning of discrete, adaptive responses. Lesions of the ipsilateral dentate-interpositus nuclei, but not of the cerebellar cortex, abolished the learned eyeblink response. Recordings from these nuclei have revealed neuronal responses related to the learning of the response. Stimulating these recording sites produced the eyelid response. The dentate-interpositus nuclei were concluded to be critically involved in the learning and production of classically conditioned responses.


Journal ArticleDOI
TL;DR: The study concluded that the consensus development conference is an effective technology transfer procedure both in the United States and in Sweden.
Abstract: A pair of consensus development conferences held in the United States and in Sweden presented an unusual opportunity for a cross-cultural study of technology transfer. The two conferences were based...

Journal ArticleDOI
TL;DR: Fast transversal filter (FTF) implementations of recursive-least-squares (RLS) adaptive-filtering algorithms are presented in this paper and substantial improvements in transient behavior in comparison to stochastic-gradient or LMS adaptive algorithms are efficiently achieved by the presented algorithms.
Abstract: Fast transversal filter (FTF) implementations of recursive-least-squares (RLS) adaptive-filtering algorithms are presented in this paper. Substantial improvements in transient behavior in comparison to stochastic-gradient or LMS adaptive algorithms are efficiently achieved by the presented algorithms. The true, not approximate, solution of the RLS problem is always obtained by the FTF algorithms even during the critical initialization period (first N iterations) of the adaptive filter. This true solution is recursively calculated at a relatively modest increase in computational requirements in comparison to stochastic-gradient algorithms (factor of 1.6 to 3.5, depending upon application). Additionally, the fast transversal filter algorithms are shown to offer substantial reductions in computational requirements relative to existing, fast-RLS algorithms, such as the fast Kalman algorithms of Morf, Ljung, and Falconer (1976) and the fast ladder (lattice) algorithms of Morf and Lee (1977-1981). They are further shown to attain (steady-state unnormalized), or improve upon (first N initialization steps), the very low computational requirements of the efficient RLS solutions of Carayannis, Manolakis, and Kalouptsidis (1983). Finally, several efficient procedures are presented by which to ensure the numerical stability of the transversal-filter algorithms, including the incorporation of soft-constraints into the performance criteria, internal bounding and rescuing procedures, and dynamic-range-increasing, square-root (normalized) variations of the transversal filters.
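For orientation, here is a hedged Python sketch of the standard exponentially weighted RLS transversal filter, the O(N²)-per-sample recursion whose exact solution the FTF algorithms reproduce at O(N) cost; the parameter names and initialization are assumptions, and this is not the paper's fast transversal form.

```python
# Sketch of a standard exponentially weighted RLS adaptive transversal filter.
# This is the textbook O(N^2)-per-sample recursion, shown only to make the
# quantities concrete; the paper's contribution is the O(N) fast transversal
# (FTF) reorganization of this same recursion.
import numpy as np

def rls_filter(x, d, order=8, lam=0.99, delta=0.01):
    """Adapt weights w so the transversal filter output w . u_n tracks d[n]."""
    x = np.asarray(x, dtype=float)
    d = np.asarray(d, dtype=float)
    w = np.zeros(order)
    P = np.eye(order) / delta               # inverse of the weighted autocorrelation matrix
    y, e = np.zeros(len(x)), np.zeros(len(x))
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]    # regressor: x[n], x[n-1], ..., x[n-order+1]
        k = P @ u / (lam + u @ P @ u)       # gain vector
        y[n] = w @ u
        e[n] = d[n] - y[n]                  # a-priori estimation error
        w = w + k * e[n]                    # weight update
        P = (P - np.outer(k, u @ P)) / lam  # update of the inverse correlation matrix
    return w, y, e
```

Compared with an LMS update, this recursion tracks the exact least-squares solution at every step, which is the transient-behavior advantage the abstract refers to.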

Journal ArticleDOI
TL;DR: In this article, it was shown that conformal invariance is broken spontaneously by the vacuum expectation value of an unphysical scalar field; this process induces general relativity as an effective long distance limit.

Journal ArticleDOI
08 Mar 1984-Nature
TL;DR: Comparison of the sequence of a cloned T cell-specific cDNA with those of cross-reacting cloned cDNAs isolated from a thymocyte library indicates the presence of variable, constant and joining regions remarkably similar in size and sequence to those encoding immunoglobulin proteins.
Abstract: Comparison of the sequence of a cloned T cell-specific cDNA with those of cross-reacting cloned cDNAs isolated from a thymocyte library indicates the presence of variable, constant and joining regions remarkably similar in size and sequence to those encoding immunoglobulin proteins. Together with the evidence for somatic gene rearrangements reported in the accompanying paper, this strongly suggests that the TM86 cDNA clone encodes one chain of the T-cell receptor for antigen.

Journal ArticleDOI
TL;DR: Characteristics of competitive advantage in manufacturing firms are described, a general framework for relating such advantage to corporate, business and functional levels of strategy is given, and an approach for pursuing that potential is outlined.
Abstract: The primary objective of strategy is to develop and support a lasting competitive advantage. In manufacturing industries, substantial focus has been given during the early eighties to the importance of the manufacturing function's contribution to overall corporate success, and yet the apparent lack of attention (historically) to achieving that potential contribution. In this article, characteristics of competitive advantage in manufacturing firms are described, a general framework for relating such advantage to corporate, business and functional levels of strategy is given, and an approach for pursuing that potential is outlined.

Journal ArticleDOI
TL;DR: The first phase of the Stanford Innovation Project, a long-term study of U.S. industrial innovation as mentioned in this paper, identified eight broad areas that appear to be important for new product success in a high-technology environment.
Abstract: This paper summarizes the first phase of the Stanford Innovation Project, a long-term study of U.S. industrial innovation. As part of this initial phase, begun in 1982, two surveys were conducted: 1) an open-ended survey of 158 new products in the electronics industry, followed by 2) a structured survey of 118 of the original products. Both surveys used a pairwise comparison methodology. Our research identified eight broad areas that appear to be important for new product success in a high-technology environment: 1) market knowledge gained through frequent and intense customer interaction, which leads to high benefit-to-cost products; 2) and 3) planning and coordination of the new product process, especially the R&D phase; 4) emphasis on marketing and sales; 5) management support for the product throughout the development and launch stages; 6) the contribution margin of the product; 7) early market entry; 8) proximity of the new product technologies and markets to the existing strengths of the developing unit. Based on these results, a preliminary model of the new product process is proposed in the concluding section. There is nothing more difficult to plan, more doubtful of success, nor more dangerous to manage than the creation of a new system. (Niccolo Machiavelli)

Journal ArticleDOI
TL;DR: The authors found that part terms proliferate in subjects' listings of attributes characterizing category members at the basic level, but are rarely listed at a general level.
Abstract: Concepts may be organized into taxonomies varying in inclusiveness or abstraction, such as furniture, table, card table or animal, bird, robin. For taxonomies of common objects and organisms, the basic level, the level of table and bird, has been determined to be most informative (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976). Psychology, linguistics, and anthropology have produced a variety of measures of perception, behavior, and communication that converge on the basic level. Here, we present data showing that the basic level differs qualitatively from other levels in taxonomies of objects and of living things and present an explanation for why so many measures converge at that level. We have found that part terms proliferate in subjects' listings of attributes characterizing category members at the basic level, but are rarely listed at a general level. At a more specific level, fewer parts are listed, though more are judged to be true. Basic level objects are distinguished from one another by parts, but members of subordinate categories share parts and differ from one another on other attributes. Informants agree on the parts of objects, and also on relative "goodness" of the various parts. Perceptual salience and functional significance both appear to contribute to perceived part goodness. Names of parts frequently enjoy a duality not evident in names of other attributes; they refer at once to a particular appearance and to a particular function. We propose that part configuration underlies the various empirical operations of perception, behavior, and communication that converge at the basic level. Part configuration underlies the perceptual measures because it determines the shapes of objects to a large degree. Parts underlie the behavioral tasks because most of our behavior is directed toward parts of objects. Labeling appears to follow the natural breaks of perception and behavior; consequently, part configuration also underlies communication measures. Because elements of more abstract taxonomies, such as scenes and events, can also be decomposed into parts, this analysis provides a bridge to organization in other domains of knowledge. Knowledge organization by parts (partonomy) is contrasted to organization by kinds (taxonomy). Taxonomies serve to organize numerous classes of entities and to allow inference from larger sets to sets included in them. Partonomies serve to separate entities into their structural components and to organize knowledge of function by components of structure. The informativeness of the basic level may originate from the availability of inference from structure to function at that level.

Journal ArticleDOI
01 Dec 1984-Cell
TL;DR: Transport of the VSV-encoded glycoprotein between successive compartments of the Golgi has been reconstituted in a cell-free system and is measured, in a rapid and sensitive new assay, by the coupled incorporation of 3H-N-acetylglucosamine (GlcNAc).

Journal ArticleDOI
01 Oct 1984-Cell
TL;DR: Eighteen cDNAs, cloned from interferon-treated T98G neuroblastoma cells, correspond to seven different mRNAs induced up to 40-fold by interferons, and one codes for metallothionein II and another for a class I HLA.

Journal ArticleDOI
TL;DR: In this article, the authors constructed models in which the Higgs doublet whose vacuum expectation value breaks SU(2) × U(1) is a bound state of massive strongly interacting fermions.

Journal ArticleDOI
01 Oct 1984-Cell
TL;DR: DnaA protein recognizes, in addition to oriC, a number of specific sites: within or near the replication origins of pSC101, pBR322, and ColE1; within the regulatory regions of the dnaA and "X-protein" genes; and in IRL-Tn5.

Journal ArticleDOI
TL;DR: The data from these two paradigms suggest that the P3 amplitude and latency abnormalities observed reflect a common, rather than a diagnostically specific deficit, in patients with dementia, schizophrenia and depression.

Journal ArticleDOI
TL;DR: This paper showed that children limit the possible meanings of nouns to refer mainly to categorical relations, and that this constraint greatly simplifies the problem of language learning by limiting the hypotheses that children need to consider.

Journal ArticleDOI
TL;DR: Dart differs from previous approaches to diagnosis taken in the design-automation community in that it is more general and in many cases more efficient; its generality allows it to be applied to a wide class of devices ranging from digital logic to nuclear reactors.