
Showing papers by "Massachusetts Institute of Technology" published in 1991


Journal ArticleDOI
TL;DR: A near-real-time computer system that can locate and track a subject's head and then recognize the person by comparing characteristics of the face to those of known individuals; the approach is easy to implement using a neural network architecture.
Abstract: We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.
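
A minimal sketch of the eigenface pipeline the abstract describes, in NumPy. The Gram-matrix shortcut for computing eigenvectors when there are far fewer images than pixels is from the paper; the variable names and the nearest-neighbor matching rule are illustrative assumptions, not the authors' code.

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (n_images, n_pixels) array of vectorized, aligned face images."""
    mean = faces.mean(axis=0)
    A = faces - mean
    # Eigenvectors of the small n x n Gram matrix A A^T yield the
    # eigenfaces after mapping back to pixel space (n_images << n_pixels).
    eigvals, V = np.linalg.eigh(A @ A.T)
    top = np.argsort(eigvals)[::-1][:k]
    U = A.T @ V[:, top]
    U /= np.linalg.norm(U, axis=0)        # unit-norm eigenfaces, one per column
    return mean, U

def project(face, mean, U):
    # Weight vector characterizing one face in "face space".
    return U.T @ (face - mean)

def recognize(face, mean, U, known_weights):
    # Identify by the nearest stored weight vector (Euclidean distance).
    w = project(face, mean, U)
    d = np.linalg.norm(known_weights - w, axis=1)
    return int(np.argmin(d)), float(d.min())
```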

14,562 citations


Journal ArticleDOI
22 Nov 1991-Science
TL;DR: OCT as discussed by the authors uses low-coherence interferometry to produce a two-dimensional image of optical scattering from internal tissue microstructures in a way analogous to ultrasonic pulse-echo imaging.
Abstract: A technique called optical coherence tomography (OCT) has been developed for noninvasive cross-sectional imaging in biological systems. OCT uses low-coherence interferometry to produce a two-dimensional image of optical scattering from internal tissue microstructures in a way that is analogous to ultrasonic pulse-echo imaging. OCT has longitudinal and lateral spatial resolutions of a few micrometers and can detect reflected signals as small as approximately 10(-10) of the incident optical power. Tomographic imaging is demonstrated in vitro in the peripapillary area of the retina and in the coronary artery, two clinically relevant examples that are representative of transparent and turbid media, respectively.

11,568 citations


Proceedings ArticleDOI
03 Jun 1991
TL;DR: An approach to the detection and identification of human faces is presented, and a working, near-real-time face recognition system which tracks a subject's head and then recognizes the person by comparing characteristics of the face to those of known individuals is described.
Abstract: An approach to the detection and identification of human faces is presented, and a working, near-real-time face recognition system which tracks a subject's head and then recognizes the person by comparing characteristics of the face to those of known individuals is described. This approach treats face recognition as a two-dimensional recognition problem, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. Face images are projected onto a feature space ('face space') that best encodes the variation among known face images. The face space is defined by the 'eigenfaces', which are the eigenvectors of the set of faces; they do not necessarily correspond to isolated features such as eyes, ears, and noses. The framework provides the ability to learn to recognize new faces in an unsupervised manner.

5,489 citations


Journal ArticleDOI
TL;DR: A new supervised learning procedure for systems composed of many separate networks, each of which learns to handle a subset of the complete set of training cases; the procedure is shown to divide a vowel discrimination task into subtasks, each of which can be solved by a very simple expert network.
Abstract: We present a new supervised learning procedure for systems composed of many separate networks, each of which learns to handle a subset of the complete set of training cases. The new procedure can be viewed either as a modular version of a multilayer supervised network, or as an associative version of competitive learning. It therefore provides a new link between these two apparently different approaches. We demonstrate that the learning procedure divides up a vowel discrimination task into appropriate subtasks, each of which can be solved by a very simple expert network.
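
A toy forward pass for the modular architecture described above: a gating network produces softmax weights over several expert networks, and the system output is the gated blend. Linear experts and all dimensions here are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_experts = 8, 3, 4

experts = [0.1 * rng.normal(size=(d_out, d_in)) for _ in range(n_experts)]
gate_W = 0.1 * rng.normal(size=(n_experts, d_in))

def forward(x):
    z = gate_W @ x
    g = np.exp(z - z.max())
    g /= g.sum()                                # softmax gating weights
    outs = np.stack([W @ x for W in experts])   # each expert's proposal
    return (g[:, None] * outs).sum(axis=0), g
```

Training such a system with a suitable competitive cost function pushes each expert toward a different subset of the training cases, which is the subtask decomposition the abstract reports for vowel discrimination.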

4,338 citations


Book
01 Jan 1991
TL;DR: In this book, cyclic deformation and fatigue crack initiation in polycrystalline ductile solids are studied, and total-life and damage-tolerant approaches to fatigue are presented.
Abstract: Preface; 1. Introduction and overview. Part I. Cyclic Deformation and Fatigue Crack Initiation: 2. Cyclic deformation in ductile single crystals; 3. Cyclic deformation in polycrystalline ductile solids; 4. Fatigue crack initiation in ductile solids; 5. Cyclic deformation and crack initiation in brittle solids; 6. Cyclic deformation and crack initiation in noncrystalline solids. Part II. Total-Life Approaches: 7. Stress-life approach; 8. Strain-life approach. Part III. Damage-Tolerant Approach: 9. Fracture mechanics and its implications for fatigue; 10. Fatigue crack growth in ductile solids; 11. Fatigue crack growth in brittle solids; 12. Fatigue crack growth in noncrystalline solids. Part IV. Advanced Topics: 13. Contact fatigue: sliding, rolling and fretting; 14. Retardation and transients in fatigue crack growth; 15. Small fatigue cracks; 16. Environmental interactions: corrosion-fatigue and creep-fatigue. Appendix; References; Indexes.

4,158 citations


Journal ArticleDOI
TL;DR: Brooks et al. as discussed by the authors decompose an intelligent system into independent and parallel activity producers which all interface directly to the world through perception and action, rather than interface to each other particularly much.

3,783 citations


Journal ArticleDOI
TL;DR: The authors present an efficient architecture to synthesize filters of arbitrary orientations from linear combinations of basis filters, allowing one to adaptively steer a filter to any orientation, and to determine analytically the filter output as a function of orientation.
Abstract: The authors present an efficient architecture to synthesize filters of arbitrary orientations from linear combinations of basis filters, allowing one to adaptively steer a filter to any orientation, and to determine analytically the filter output as a function of orientation. Steerable filters may be designed in quadrature pairs to allow adaptive control over phase as well as orientation. The authors show how to design and steer the filters and present examples of their use in the analysis of orientation and phase, angularly adaptive filtering, edge detection, and shape from shading. One can also build a self-similar steerable pyramid representation. The same concepts can be generalized to the design of 3-D steerable filters.
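
The simplest concrete instance of this architecture: a first-derivative-of-Gaussian filter can be steered exactly to any orientation from just two basis filters, since the response at angle theta is cos(theta) times the x-derivative response plus sin(theta) times the y-derivative response. Kernel size and sigma below are assumptions for illustration.

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_derivative_basis(size=9, sigma=1.5):
    r = np.arange(size) - size // 2
    X, Y = np.meshgrid(r, r)
    G = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
    return -X * G, -Y * G                 # d/dx and d/dy of a Gaussian

def steered_response(image, theta):
    gx, gy = gaussian_derivative_basis()
    rx = convolve2d(image, gx, mode="same")   # convolve with each basis once
    ry = convolve2d(image, gy, mode="same")
    # Exact response at any orientation, no further filtering needed.
    return np.cos(theta) * rx + np.sin(theta) * ry
```

The payoff is the one the abstract names: after two convolutions, the filter output at every orientation is available analytically, which is what makes adaptive orientation analysis cheap.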

3,365 citations


Journal ArticleDOI
10 Jan 1991-Nature
TL;DR: GTPases are conserved molecular switches, built according to a common structural design, and rapidly accruing knowledge of individual GTPases—crystal structures, biochemical properties, or results of molecular genetic experiments—supports and generates hypotheses relating structure to function in other members of the diverse GTPase family.

Abstract: GTPases are conserved molecular switches, built according to a common structural design. Rapidly accruing knowledge of individual GTPases—crystal structures, biochemical properties, or results of molecular genetic experiments—supports and generates hypotheses relating structure to function in other members of the diverse family of GTPases.

3,236 citations


Journal ArticleDOI
TL;DR: The NLPCA method is demonstrated using time-dependent, simulated batch reaction data and shows that it successfully reduces dimensionality and produces a feature space map resembling the actual distribution of the underlying system parameters.
Abstract: Nonlinear principal component analysis is a novel technique for multivariate data analysis, similar to the well-known method of principal component analysis. NLPCA, like PCA, is used to identify and remove correlations among problem variables as an aid to dimensionality reduction, visualization, and exploratory data analysis. While PCA identifies only linear correlations between variables, NLPCA uncovers both linear and nonlinear correlations, without restriction on the character of the nonlinearities present in the data. NLPCA operates by training a feedforward neural network to perform the identity mapping, where the network inputs are reproduced at the output layer. The network contains an internal “bottleneck” layer (containing fewer nodes than input or output layers), which forces the network to develop a compact representation of the input data, and two additional hidden layers. The NLPCA method is demonstrated using time-dependent, simulated batch reaction data. Results show that NLPCA successfully reduces dimensionality and produces a feature space map resembling the actual distribution of the underlying system parameters.
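
A sketch of the network topology the abstract describes: an identity-mapping ("autoassociative") network with a nonlinear mapping layer, a narrow bottleneck, and a nonlinear demapping layer. Layer sizes, activations, and names are assumptions for illustration; training is any least-squares minimization of the reconstruction error.

```python
import numpy as np

d, h, f = 6, 10, 2        # input variables, hidden nodes, bottleneck features
rng = np.random.default_rng(0)
shapes = [(d, h), (h, f), (f, h), (h, d)]
params = [(0.1 * rng.normal(size=s), np.zeros(s[1])) for s in shapes]

def forward(x, params):
    a = x
    acts = []
    for i, (W, b) in enumerate(params):
        z = a @ W + b
        # tanh on the mapping and demapping layers; the bottleneck
        # and output layers are left linear.
        a = np.tanh(z) if i in (0, 2) else z
        acts.append(a)
    return acts            # acts[1] holds the nonlinear principal components

# Train by minimizing ||forward(x, params)[-1] - x||^2 over the data set;
# the bottleneck activations then play the role that PCA scores play linearly.
```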

2,643 citations


Posted Content
TL;DR: In this paper, the authors construct new estimates of the international equity portfolio holdings of investors in the U.S., Japan, and Britain, and use a simple model of investor preferences and behavior to show that current portfolio patterns imply that investors in each nation expect returns in their domestic equity market to be several hundred basis points higher than returns in other markets.
Abstract: The benefits of international diversification have been recognized for decades. In spite of this, most investors hold nearly all of their wealth in domestic assets. In this paper, we construct new estimates of the international equity portfolio holdings of investors in the U.S., Japan, and Britain. More than 98% of the equity portfolio of Japanese investors is held domestically; the analogous percentages are 94% for the U.S., and 82% for Britain. We use a simple model of investor preferences and behavior to show that current portfolio patterns imply that investors in each nation expect returns in their domestic equity market to be several hundred basis points higher than returns in other markets. This lack of diversification appears to be the result of investor choices, rather than institutional constraints.

2,139 citations


Journal ArticleDOI
TL;DR: Cell Death During the Metamorphosis of Moths, Cells That Develop Improperly, and Mechanisms That Kill Cells.
Abstract: Contents: Introduction. Cell Death in Caenorhabditis elegans: Programmed Cell Death; Pathological Cell Deaths; Summary. Cell Death in Other Animals: Cell Death During the Metamorphosis of Moths; Deaths of Vertebrate Neurons Deprived of Growth Factors; Cell Death of Thymocytes; Cell Death in the Regressing Rat Prostate. Functions of Cell Death: Cells That Appear to Have No Function; Cells That Are Generated in Excess; Cells That Develop Improperly; Cells That Have Completed Their Functions; Cells That Are Harmful. Mechanisms of Cell Death: Cell Death Is an Active Process; Control of the Cell Death Process; Mechanisms That Kill Cells; Engulfment of Dead Cells; Degradation of Dead Cells. Future Prospects.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the possibility of dissipating mechanical energy with piezoelectric material shunted with passive electrical circuits, and derived the effective mechanical impedance for the piezoelectric element shunted by an arbitrary circuit.

Journal ArticleDOI
22 Nov 1991-Science
TL;DR: Oncogenes are yielding their place at center stage to a second group of actors, the tumor suppressor genes, which promise to teach us equally important lessons about the molecular mechanisms of cancer pathogenesis.
Abstract: For the past decade, cellular oncogenes have attracted the attention of biologists intent on understanding the molecular origins of cancer. As the present decade unfolds, oncogenes are yielding their place at center stage to a second group of actors, the tumor suppressor genes, which promise to teach us equally important lessons about the molecular mechanisms of cancer pathogenesis.

Journal ArticleDOI
TL;DR: In this article, the stock price distributions that arise when prices follow a diffusion process with a stochastically varying volatility parameter are studied, and an explicit closed-form solution for the case where volatility is driven by an arithmetic Ornstein-Uhlenbeck (or AR1) process is derived.
Abstract: We study the stock price distributions that arise when prices follow a diffusion process with a stochastically varying volatility parameter. We use analytic techniques to derive an explicit closed-form solution for the case where volatility is driven by an arithmetic Ornstein-Uhlenbeck (or AR1) process. We then apply our results to two related problems in the finance literature: (1) options pricing in a world of stochastic volatility, and (2) the relationship between stochastic volatility and the nature of "fat tails" in stock price distributions. Article published by Oxford University Press on behalf of the Society for Financial Studies in its journal, The Review of Financial Studies.
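
The paper's results are analytic, but the process it studies is easy to state as a simulation: the log price diffuses with a volatility that itself follows an arithmetic Ornstein-Uhlenbeck process, independent of the price shocks. The hedged sketch below uses illustrative parameter values to exhibit the fat-tailed returns the authors analyze.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, dt = 100_000, 250, 1.0 / 250
kappa, theta, xi = 4.0, 0.2, 0.1   # OU: mean reversion, long-run vol, vol-of-vol
mu, s0, v0 = 0.05, 100.0, 0.2

log_s = np.full(n_paths, np.log(s0))
v = np.full(n_paths, v0)
for _ in range(n_steps):
    dw1 = rng.normal(scale=np.sqrt(dt), size=n_paths)
    dw2 = rng.normal(scale=np.sqrt(dt), size=n_paths)  # independent of dw1
    log_s += (mu - 0.5 * v**2) * dt + v * dw1
    v += kappa * (theta - v) * dt + xi * dw2           # arithmetic OU process

r = log_s - np.log(s0)
excess_kurtosis = ((r - r.mean())**4).mean() / r.var()**2 - 3
print(excess_kurtosis)   # positive: fatter tails than a constant-volatility model
```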

01 Jan 1991
TL;DR: Early vision as discussed by the authors measures the amounts of various kinds of visual "substances" present in the image (e.g., redness or rightward motion energy) rather than how it labels "things".
Abstract: What are the elements of early vision? This question might be taken to mean, What are the fundamental atoms of vision?—and might be variously answered in terms of such candidate structures as edges, peaks, corners, and so on. In this chapter we adopt a rather different point of view and ask the question, What are the fundamental substances of vision? This distinction is important because we wish to focus on the first steps in extraction of visual information. At this level it is premature to talk about discrete objects, even such simple ones as edges and corners. There is general agreement that early vision involves measurements of a number of basic image properties including orientation, color, motion, and so on. Figure 1.1 shows a caricature (in the style of Neisser, 1976), of the sort of architecture that has become quite popular as a model for both human and machine vision. The first stage of processing involves a set of parallel pathways, each devoted to one particular visual property. We propose that the measurements of these basic properties be considered as the elements of early vision. We think of early vision as measuring the amounts of various kinds of visual "substances" present in the image (e.g., redness or rightward motion energy). In other words, we are interested in how early vision measures "stuff" rather than in how it labels "things." What, then, are these elementary visual substances? Various lists have been compiled using a mixture of intuition and experiment. Electrophysiologists have described neurons in striate cortex that are selectively sensitive to certain visual properties; for reviews, see Hubel (1988) and DeValois and DeValois (1988). Psychophysicists have inferred the existence of channels that are tuned for certain visual properties; for reviews, see Graham (1989), Olzak and Thomas (1986), Pokorny and Smith (1986), and Watson (1986). Researchers in perception have found aspects of visual stimuli that are processed pre-attentively (Beck, 1966; Bergen & Julesz, 1983; Julesz & Bergen,

Book ChapterDOI
24 Aug 1991
TL;DR: In this article, the authors make the converse claim that the state of computer architecture has been a strong influence on our models of thought, and note that the non-Von Neumann computational models used by behavior-based Artificial Intelligence share many characteristics with biological computation.
Abstract: Computers and Thought are the two categories that together define Artificial Intelligence as a discipline. It is generally accepted that work in Artificial Intelligence over the last thirty years has had a strong influence on aspects of computer architectures. In this paper we also make the converse claim; that the state of computer architecture has been a strong influence on our models of thought. The Von Neumann model of computation has led Artificial Intelligence in particular directions. Intelligence in biological systems is completely different. Recent work in behavior-based Artificial Intelligence has produced new models of intelligence that are much closer in spirit to biological systems. The non-Von Neumann computational models they use share many characteristics with biological computation.

Journal ArticleDOI
TL;DR: This work has simplified the standard mammalian DNA isolation procedure with the aim of minimizing the number of manipulations required for each sample.
Abstract: The reverse genetics technologies that have recently been developed for mice have provided new tools to probe gene function in vivo. Unfortunately these powerful systems often require the analysis of large numbers of DNA samples. The gene-targeting technology requires screening of embryonic stem-cell clones and later of the mice themselves, the latter also being the case for standard transgenic technology. It is not always possible or desirable to rely on PCR analyses, necessitating the isolation of large numbers of DNA samples of sufficient quality for Southern blot analysis. We have simplified the standard mammalian DNA isolation procedure with the aim of minimizing the number of manipulations required for each sample. The basic procedure applied to cultured cells does not require any centrifugation steps or organic solvent extractions.

Journal ArticleDOI
25 Oct 1991-Science
TL;DR: The crystal structure of the GCN4 leucine zipper suggests a key role for the leucine repeat, but also shows how other features of the coiled coil contribute to dimer formation.
Abstract: The x-ray crystal structure of a peptide corresponding to the leucine zipper of the yeast transcriptional activator GCN4 has been determined at 1.8 angstrom resolution. The peptide forms a parallel, two-stranded coiled coil of alpha helices packed as in the "knobs-into-holes" model proposed by Crick in 1953. Contacts between the helices include ion pairs and an extensive hydrophobic interface that contains a distinctive hydrogen bond. The conserved leucines, like the residues in the alternate hydrophobic repeat, make side-to-side interactions (as in a handshake) in every other layer of the dimer interface. The crystal structure of the GCN4 leucine zipper suggests a key role for the leucine repeat, but also shows how other features of the coiled coil contribute to dimer formation.

Journal ArticleDOI
TL;DR: A theoretical framework is constructed in which the development and deployment of information technology in organizations is a social phenomenon, and the organizational consequences of technology are products of both material and social dimensions.
Abstract: Recent work in social theory departs from prior traditions in proposing that social phenomena can be understood as comprising both subjective and objective elements. We apply this premise of duality to understanding the relationship between information technology and organizations. We construct a theoretical framework in which the development and deployment of information technology in organizations is a social phenomenon, and in which the organizational consequences of technology are products of both material and social dimensions. The framework is based on Giddens' theory of structuration, and it allows us to progress beyond several of the false dichotomies (subjective vs objective, socially constructed vs material, macro vs micro, and qualitative vs quantitative) that persist in investigations of the interaction between organizations and information technology. The framework can be used to guide studies in two main areas of information systems research: systems development and the organizational consequences of using information technology.

Journal ArticleDOI
TL;DR: In this article, it was shown that all languages in NP have zero-knowledge interactive proofs, which are probabilistic and interactive proofs that, for the members of a language, efficiently demonstrate membership in the language without conveying any additional knowledge.
Abstract: In this paper the generality and wide applicability of zero-knowledge proofs, a notion introduced by Goldwasser, Micali, and Rackoff, is demonstrated. These are probabilistic and interactive proofs that, for the members of a language, efficiently demonstrate membership in the language without conveying any additional knowledge. All previously known zero-knowledge proofs were only for number-theoretic languages in NP ∩ coNP. Under the assumption that secure encryption functions exist, or by using "physical means for hiding information," it is shown that all languages in NP have zero-knowledge proofs. Loosely speaking, it is possible to demonstrate that a CNF formula is satisfiable without revealing any other property of the formula; in particular, without yielding a satisfying assignment or properties such as whether there is a satisfying assignment in which x1 = x3, etc. It is also demonstrated that zero-knowledge proofs exist "outside the domain of cryptography and number theory." Using no assumptions, it is shown that both graph isomorphism and graph nonisomorphism have zero-knowledge interactive proofs. The mere existence of an interactive proof for graph nonisomorphism is interesting, since graph nonisomorphism is not known to be in NP and hence no efficient proofs were known before for demonstrating that two graphs are not isomorphic.
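
The graph isomorphism protocol mentioned at the end is simple enough to sketch concretely: the prover commits to a random isomorphic copy H of one graph, the verifier flips a coin, and the prover exhibits an isomorphism from the chosen graph to H. One honest round is below; the adjacency-set representation and helper names are assumptions, and the soundness error halves with each repetition.

```python
import random

def permute(graph, pi):
    # graph: dict mapping node -> set of neighbor nodes; pi: node relabeling
    return {pi[u]: {pi[v] for v in nbrs} for u, nbrs in graph.items()}

def zk_round(G0, G1, sigma):
    """One round; sigma is the prover's secret with G1 = permute(G0, sigma)."""
    nodes = list(G0)
    # Prover commits to H, a freshly randomized copy of G1.
    pi = dict(zip(nodes, random.sample(nodes, len(nodes))))
    H = permute(G1, pi)
    # Verifier challenges with a random bit.
    b = random.randrange(2)
    # Prover reveals an isomorphism from G_b to H; either answer alone is
    # a uniformly random relabeling, so nothing about sigma leaks.
    rho = pi if b == 1 else {u: pi[sigma[u]] for u in nodes}
    # Verifier checks the revealed mapping.
    return permute(G1 if b == 1 else G0, rho) == H
```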

Journal ArticleDOI
TL;DR: A lattice Boltzmann model for simulating immiscible binary fluids in two dimensions is introduced and a theoretical value of the surface-tension coefficient is derived and found to be in excellent agreement with values obtained from simulations.
Abstract: We introduce a lattice Boltzmann model for simulating immiscible binary fluids in two dimensions. The model, based on the Boltzmann equation of lattice-gas hydrodynamics, incorporates features of a previously introduced discrete immiscible lattice-gas model. A theoretical value of the surface-tension coefficient is derived and found to be in excellent agreement with values obtained from simulations. The model serves as a numerical method for the simulation of immiscible two-phase flow; a preliminary application illustrates a simulation of flow in a two-dimensional microscopic model of a porous medium. Extension of the model to three dimensions appears straightforward.
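
For orientation, here is the kind of update the model builds on, reduced to a single-component D2Q9 lattice with BGK relaxation; the paper's immiscible two-fluid model adds a second (color) distribution and a surface-tension-inducing recoloring step on top of a collide-and-stream cycle like this one. Grid layout, tau, and the BGK choice are assumptions for illustration.

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    cu = 3 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return rho * w[:, None, None] * (1 + cu + 0.5 * cu**2 - usq)

def lbm_step(f, tau=0.8):
    """f: (9, nx, ny) distribution functions; one collide-and-stream cycle."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau      # BGK collision
    for i, (cx, cy) in enumerate(c):                  # streaming, periodic wrap
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f
```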

Journal ArticleDOI
28 Jun 1991-Cell
TL;DR: The c-abl proto-oncogene, which encodes a cytoplasmic protein-tyrosine kinase, is expressed throughout murine gestation and ubiquitously in adult mouse tissues; however, its levels are highest in thymus, spleen, and testes.

Journal ArticleDOI
TL;DR: In this article, a physically motivated, passivity-based formalism is used to provide energy conservation and stability guarantees in the presence of transmission delays, and an adaptive tracking controller is incorporated for the control of the remote robotic system and can be used to simplify, transform or enhance the remote dynamics perceived by the operator.
Abstract: A study is made of how the existence of transmission time delays affects the application of advanced robot control schemes to effective force-reflecting telerobotic systems. This application best exploits the presence of the human operator while making full use of available robot control technology and computing power. A physically motivated, passivity-based formalism is used to provide energy conservation and stability guarantees in the presence of transmission delays. The notion of wave variable is utilized to characterize time-delay systems and leads to a configuration for force-reflecting teleoperation. The effectiveness of the approach is demonstrated experimentally. Within the same framework, an adaptive tracking controller is incorporated for the control of the remote robotic system and can be used to simplify, transform, or enhance the remote dynamics perceived by the operator.
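
For reference, the wave-variable transformation at the center of this configuration can be written compactly; the signs and the characteristic impedance b follow one common convention and are not necessarily the authors' exact notation:

```latex
u = \frac{b\dot{x} + F}{\sqrt{2b}}, \qquad
v = \frac{b\dot{x} - F}{\sqrt{2b}}, \qquad
P = \dot{x}^{\mathsf{T}}F = \tfrac{1}{2}\,u^{\mathsf{T}}u - \tfrac{1}{2}\,v^{\mathsf{T}}v .
```

Because the transmitted power splits into a forward wave term minus a returning wave term, sending u and v (rather than velocity and force) across the channel keeps the communication block passive for any constant transmission delay.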

Journal ArticleDOI
15 Nov 1991-Science
TL;DR: Striking homology with the calcitonin receptor and lack of homology with other G protein-linked receptors indicate that receptors for these calcium-regulating hormones are related and represent a new family.
Abstract: The complementary DNA encoding a 585-amino acid parathyroid hormone-parathyroid hormone-related peptide (PTH-PTHrP) receptor with seven potential membrane-spanning domains was cloned by COS-7 expression using an opossum kidney cell complementary DNA (cDNA) library. The expressed receptor binds PTH and PTHrP with equal affinity, and both ligands equivalently stimulate adenylate cyclase. Striking homology with the calcitonin receptor and lack of homology with other G protein-linked receptors indicate that receptors for these calcium-regulating hormones are related and represent a new family.

Journal ArticleDOI
TL;DR: The authors developed a simple model of exchange rate behavior under a target zone regime and showed that the expectation that monetary policy will be adjusted to limit exchange rate variation affects exchange rate behaviour even when the exchange rate lies inside the zone and is thus not being defended actively.
Abstract: This paper develops a simple model of exchange rate behavior under a target zone regime. It shows that the expectation that monetary policy will be adjusted to limit exchange rate variation affects exchange rate behavior even when the exchange rate lies inside the zone and is thus not being defended actively. Somewhat surprisingly, the analysis of target zones turns out to have a strong formal similarity to problems in option pricing and investment under uncertainty.
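
In the notation usually used for this model (an assumption here, not a quotation from the paper), the log exchange rate s is driven by a fundamental f = m + v plus a speculative term, with v a Brownian motion inside the band:

```latex
s = f + \gamma\,\frac{E[ds]}{dt}, \qquad dv = \sigma\, dW .
```

Ito's lemma turns this into an ordinary differential equation for s(f) with general solution

```latex
s(f) = f + A e^{\alpha f} + B e^{-\alpha f}, \qquad \alpha = \sqrt{2/(\gamma\sigma^{2})},
```

and the smooth-pasting conditions s'(f) = 0 at the band edges pin down A and B, bending s(f) toward the horizontal inside the band even before either edge is reached.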

Posted Content
TL;DR: In this paper, the authors investigate the dynamic effects of international trade on an LDC and a DC, the latter distinguished by a higher initial level of knowledge, under autarky and free trade, and find that under free trade the LDC experiences dynamic losses from trade, whilst the DC experiences dynamic gains.
Abstract: Using an endogenous growth model in which learning by doing, although bounded in each good, exhibits spillovers across goods, this paper investigates the dynamic effects of international trade. Examining an LDC and a DC, the latter distinguished by a higher initial level of knowledge, under autarky and free trade, I find that under free trade the LDC (DC) experiences rates of technical progress and GDP growth less than or equal (greater than or equal) to those enjoyed under autarky. Unless the LDC's population is several orders of magnitude greater than that of the DC and the initial technical gap between the two economies is not large, the LDC will be unable to catch up with its trading partner. Hence, in terms of technical progress and growth, the LDC experiences dynamic losses from trade, whilst the DC experiences dynamic gains. However, since technical progress abroad can improve welfare at home, LDC consumers may enjoy higher intertemporal utility along the free trade path. In the case of DC consumers, as long as their economy is not overtaken by the LDC they will enjoy both more rapid technical progress and the traditional static gains from trade, and hence experience an unambiguous improvement in intertemporal welfare.

Journal ArticleDOI
TL;DR: Results of Monte Carlo simulations performed using multilayer perceptron (MLP) networks trained with backpropagation, radial basis function (RBF) networks, and high-order polynomial networks graphically demonstrate that network outputs provide good estimates of Bayesian probabilities.
Abstract: Many neural network classifiers provide outputs which estimate Bayesian a posteriori probabilities. When the estimation is accurate, network outputs can be treated as probabilities and sum to one. Simple proofs show that Bayesian probabilities are estimated when desired network outputs are 1 of M (one output unity, all others zero) and a squared-error or cross-entropy cost function is used. Results of Monte Carlo simulations performed using multilayer perceptron (MLP) networks trained with backpropagation, radial basis function (RBF) networks, and high-order polynomial networks graphically demonstrate that network outputs provide good estimates of Bayesian probabilities. Estimation accuracy depends on network complexity, the amount of training data, and the degree to which training data reflect true likelihood distributions and a priori class probabilities. Interpretation of network outputs as Bayesian probabilities allows outputs from multiple networks to be combined for higher level decision making, sim...
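
The central claim is easy to check on synthetic data: with 1-of-M targets and a cross-entropy cost, the trained outputs converge toward the analytic Bayes posterior. The sketch below is an illustrative assumption (a two-class 1-D Gaussian problem and a bare logistic unit), not the paper's MLP/RBF experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Two equally likely classes: N(-1, 1) and N(+1, 1).
x = np.concatenate([rng.normal(-1, 1, n), rng.normal(1, 1, n)])
y = np.concatenate([np.zeros(n), np.ones(n)])

w, b = 0.0, 0.0
for _ in range(2000):                      # gradient descent on cross-entropy
    p = 1 / (1 + np.exp(-(w * x + b)))
    g = p - y                              # gradient of the loss w.r.t. the logit
    w -= 0.1 * (g * x).mean()
    b -= 0.1 * g.mean()

xs = np.linspace(-3, 3, 7)
net = 1 / (1 + np.exp(-(w * xs + b)))      # network posterior estimate
bayes = 1 / (1 + np.exp(-2 * xs))          # exact Bayes posterior here
print(np.round(net - bayes, 3))            # near zero given enough data
```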

Journal ArticleDOI
18 Jul 1991-Nature
TL;DR: The comparison of the new strategy and a standard clinical processor shows large improvements in the scores of speech reception tests for all subjects, with important implications for the treatment of deafness and for minimal representations of speech at the auditory periphery.
Abstract: High levels of speech recognition have been achieved with a new sound processing strategy for multielectrode cochlear implants. A cochlear implant system consists of one or more implanted electrodes for direct electrical activation of the auditory nerve, an external speech processor that transforms a microphone input into stimuli for each electrode, and a transcutaneous (rf-link) or percutaneous (direct) connection between the processor and the electrodes. We report here the comparison of the new strategy and a standard clinical processor. The standard compressed analogue (CA) processor presented analogue waveforms simultaneously to all electrodes, whereas the new continuous interleaved sampling (CIS) strategy presented brief pulses to each electrode in a nonoverlapping sequence. Seven experienced implant users, selected for their excellent performance with the CA processor, participated as subjects. The new strategy produced large improvements in the scores of speech reception tests for all subjects. These results have important implications for the treatment of deafness and for minimal representations of speech at the auditory periphery.

Journal ArticleDOI
TL;DR: In this paper, the authors generalize the mapping method of Wisdom (1982) to encompass all gravitational n-body problems with a dominant central mass and use it to compute the evolution of the outer planets for a billion years.
Abstract: The present study generalizes the mapping method of Wisdom (1982) to encompass all gravitational n-body problems with a dominant central mass. The rationale for the generalized mapping method is discussed as well as details for the mapping for the n-body problem. Some refinements of the method are considered, and the relationship of the mapping method to other symplectic integration methods is shown. The method is used to compute the evolution of the outer planets for a billion years. The resulting evolution is compared to the 845 million year evolution of the outer planets performed on the Digital Orrery using standard numerical integration techniques. This calculation provides independent numerical confirmation of the result of Sussman and Wisdom (1988) that the motion of the planet Pluto is chaotic.
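
Not the generalized mapping itself, but its nearest standard relative among the symplectic integration methods the paper discusses is compact enough to sketch: a kick-drift-kick leapfrog for the gravitational n-body problem. Units (G = 1), softening, and array layout are illustrative assumptions.

```python
import numpy as np

def accel(pos, mass, eps=1e-3):
    """Softened pairwise gravitational accelerations; pos: (n, 3), mass: (n,)."""
    d = pos[None, :, :] - pos[:, None, :]
    r2 = (d**2).sum(axis=-1) + eps**2
    np.fill_diagonal(r2, np.inf)                  # no self-interaction
    return (mass[None, :, None] * d / r2[..., None]**1.5).sum(axis=1)

def leapfrog(pos, vel, mass, dt, n_steps):
    # Symplectic: energy errors stay bounded rather than accumulating,
    # which is what makes very long integrations feasible.
    a = accel(pos, mass)
    for _ in range(n_steps):
        vel += 0.5 * dt * a          # kick
        pos += dt * vel              # drift
        a = accel(pos, mass)
        vel += 0.5 * dt * a          # kick
    return pos, vel
```

The Wisdom-Holman style mapping gains its speed by splitting the Hamiltonian into Keplerian motion about the dominant central mass plus interaction kicks, rather than the kinetic/potential split used here.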

Journal ArticleDOI
TL;DR: A criterion is suggested for determining the appropriate drainage density at which to extract networks from digital elevation data: extract the highest-resolution (highest drainage density) network that satisfies the scaling laws that have traditionally been found to hold for channel networks.
Abstract: Channel networks with arbitrary drainage density or resolution can be extracted from digital elevation data. However, for digital elevation data derived networks to be useful they have to be extracted at the correct length scale or drainage density. Here we suggest a criterion for determining the appropriate drainage density at which to extract networks from digital elevation data. The criterion is basically to extract the highest resolution (highest drainage density) network that satisfies scaling laws that have traditionally been found to hold for channel networks. Procedures that use this criterion are presented and tested on 21 digital elevation data sets well distributed throughout the U.S.