
Showing papers published by the Massachusetts Institute of Technology in 1988


Book
01 Jan 1988
TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the history of the field's intellectual foundations to the most recent developments and applications.
Abstract: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.
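As a concrete illustration of the temporal-difference methods covered in Part II, here is a minimal tabular Q-learning sketch. It is not code from the book; the environment interface (env.reset(), env.step(), env.actions()) and all hyperparameters are assumptions made for illustration.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning on a finite MDP.

    Assumed (illustrative) environment interface:
      env.reset() -> state
      env.step(action) -> (next_state, reward, done)
      env.actions(state) -> list of legal actions
    """
    Q = defaultdict(float)   # Q[(state, action)], initialised to 0

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            actions = env.actions(state)
            # epsilon-greedy behaviour policy
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])

            next_state, reward, done = env.step(action)

            # TD target: reward plus discounted value of the greedy next action
            if done:
                best_next = 0.0
            else:
                best_next = max(Q[(next_state, a)] for a in env.actions(next_state))
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

            state = next_state
    return Q
```

The update is the standard temporal-difference rule: move the current estimate toward the reward plus the discounted value of the best next action.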

37,989 citations


Book
01 Aug 1988
TL;DR: This book treats the structure and mechanics of cellular solids such as honeycombs and foams, together with their thermal, electrical and acoustic properties, energy absorption, the design of sandwich panels with foam cores, and natural cellular materials (wood, cancellous bone and cork), with an appendix on the linear elasticity of anisotropic cellular solids.
Abstract: 1. Introduction 2. The structure of cellular solids 3. Material properties 4. The mechanics of honeycombs 5. The mechanics of foams: basic results 6. The mechanics of foams refinements 7. Thermal, electrical and acoustic properties of foams 8. Energy absorption in cellular materials 9. The design of sandwich panels with foam cores 10. Wood 11. Cancellous bone 12. Cork 13. Sources, suppliers and property data Appendix: the linear-elasticity of anisotropic cellular solids.
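One of the best-known results from the mechanics-of-foams chapters can be stated compactly. The scaling law below is the commonly quoted open-cell (bending-dominated) form, reproduced from the general literature rather than from this listing, with the constant C treated as an empirical fit of order one.

```latex
% Gibson-Ashby scaling for the Young's modulus of an open-cell foam:
% E* and rho* are the foam modulus and density, E_s and rho_s the solid values.
\[
  \frac{E^{*}}{E_{s}} \;\approx\; C \left(\frac{\rho^{*}}{\rho_{s}}\right)^{2},
  \qquad C \approx 1 .
\]
```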

8,946 citations


Journal ArticleDOI
TL;DR: A digital signature scheme based on the computational difficulty of integer factorization possesses the novel property of being robust against an adaptive chosen-message attack: an adversary who receives signatures for messages of his choice cannot later forge the signature of even a single additional message.
Abstract: We present a digital signature scheme based on the computational difficulty of integer factorization. The scheme possesses the novel property of being robust against an adaptive chosen-message attack: an adversary who receives signatures for messages of his choice (where each message may be chosen in a way that depends on the signatures of previously chosen messages) cannot later forge the signature of even a single additional message. This may be somewhat surprising, since in the folklore the properties of having forgery be equivalent to factoring and being invulnerable to an adaptive chosen-message attack were considered to be contradictory. More generally, we show how to construct a signature scheme with such properties based on the existence of a "claw-free" pair of permutations--a potentially weaker assumption than the intractability of integer factorization. The new scheme is potentially practical: signing and verifying signatures are reasonably fast, and signatures are compact.

3,150 citations


Journal ArticleDOI
TL;DR: A systematic nomenclature has been developed primarily for FAB-MS, but can be used as well for other ionization techniques and is applicable to spectra recorded in either the positive or negative ion mode during both MS and MS/MS experiments.
Abstract: A summary of the ion types observed in the Fast Atom Bombardment Mass Spectrometry (FAB-MS) and collision induced decomposition (CID) MS/MS spectra of glycoconjugates (glycosphingolipids, glycopeptides, glycosides and carbohydrates) is presented. The variety of product ion types that arise by cleavages within the carbohydrate moieties has prompted us to introduce a systematic nomenclature to designate these ions. The proposed nomenclature has been developed primarily for FAB-MS, but can be used as well for other ionization techniques [field desorption (FD), direct chemical ionization (DCI), laser desorption/Fourier transform (LD/FT), etc.], and is applicable to spectra recorded in either the positive or negative ion mode during both MS and MS/MS experiments. Ai, Bi and Ci labels are used to designate fragments containing a terminal (nonreducing end) sugar unit, whereas Xj, Yj and Zj represent ions still containing the aglycone (or the reducing sugar unit). Subscripts indicate the position relative to the termini analogous to the system used in peptides, and superscripts indicate cleavages within carbohydrate rings. FAB-MS/MS spectra of a native glycosphingolipid and glycopeptide, and a permethylated ganglioside are shown as illustrations.
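To make the labeling scheme concrete, the sketch below enumerates glycosidic-cleavage labels for a linear oligosaccharide: B/C fragments counted from the non-reducing terminus and Y/Z fragments counted from the reducing end. It generates only labels and residue compositions, not fragment masses, cross-ring (A/X) cleavages are omitted, and the example sugar chain is invented for illustration.

```python
def glycosidic_fragment_labels(residues):
    """Enumerate Domon/Costello-style labels for glycosidic cleavages of a
    linear glycan. `residues` is ordered from the non-reducing terminus to
    the reducing end. Cross-ring (A/X) cleavages are omitted for brevity."""
    n = len(residues)
    fragments = []
    for k in range(1, n):                       # cleave the bond after residue k
        # B_k / C_k keep the non-reducing-end residues 1..k
        fragments.append((f"B{k}", residues[:k]))
        fragments.append((f"C{k}", residues[:k]))
        # Y_j / Z_j keep the reducing-end residues, j counted from that end
        j = n - k
        fragments.append((f"Y{j}", residues[k:]))
        fragments.append((f"Z{j}", residues[k:]))
    return fragments

# Hypothetical trisaccharide, listed non-reducing -> reducing end
for label, part in glycosidic_fragment_labels(["Gal", "GlcNAc", "Glc"]):
    print(label, "-".join(part))
```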

2,497 citations


Proceedings ArticleDOI
01 Jan 1988
TL;DR: In this article, the authors show that every function of n inputs can be efficiently computed by a complete network of n processors in such a way that, if no faults occur, no set of size t < n/2 of players gets any additional information, and even with Byzantine faults, no set of size t < n/3 can disrupt the computation or gain additional information.
Abstract: Every function of n inputs can be efficiently computed by a complete network of n processors in such a way that: if no faults occur, no set of size t < n/2 of players gets any additional information (other than the function value); even if Byzantine faults are allowed, no set of size t < n/3 can either disrupt the computation or gain additional information. Furthermore, the above bounds on t are tight.

2,207 citations


Journal ArticleDOI
28 Oct 1988-Science
TL;DR: The data show the existence of a phorbol ester-responsive regulatory protein that acts by controlling the DNA binding activity and subcellular localization of a transcription factor in cells that do not express immunoglobulin kappa light chain genes.
Abstract: In cells that do not express immunoglobulin kappa light chain genes, the kappa enhancer binding protein NF-kappa B is found in cytosolic fractions and exhibits DNA binding activity only in the presence of a dissociating agent such as sodium deoxycholate. The dependence on deoxycholate is shown to result from association of NF-kappa B with a 60- to 70-kilodalton inhibitory protein (I kappa B). The fractionated inhibitor can inactivate NF-kappa B from various sources--including the nuclei of phorbol ester-treated cells--in a specific, saturable, and reversible manner. The cytoplasmic localization of the complex of NF-kappa B and I kappa B was supported by enucleation experiments. An active phorbol ester must therefore, presumably by activation of protein kinase C, cause dissociation of a cytoplasmic complex of NF-kappa B and I kappa B by modifying I kappa B. This releases active NF-kappa B, which can translocate into the nucleus to activate target enhancers. The data show the existence of a phorbol ester-responsive regulatory protein that acts by controlling the DNA binding activity and subcellular localization of a transcription factor.

2,071 citations



Journal ArticleDOI
TL;DR: An alternative maximum-flow method based on the preflow concept of Karzanov is introduced; it runs as fast as any other known method on dense graphs, achieving an O(n³) time bound on an n-vertex graph, and is faster on graphs of moderate density.
Abstract: All previously known efficient maximum-flow algorithms work by finding augmenting paths, either one path at a time (as in the original Ford and Fulkerson algorithm) or all shortest-length augmenting paths at once (using the layered network approach of Dinic). An alternative method based on the preflow concept of Karzanov is introduced. A preflow is like a flow, except that the total amount flowing into a vertex is allowed to exceed the total amount flowing out. The method maintains a preflow in the original network and pushes local flow excess toward the sink along what are estimated to be shortest paths. The algorithm and its analysis are simple and intuitive, yet the algorithm runs as fast as any other known method on dense graphs, achieving an O(n³) time bound on an n-vertex graph. By incorporating the dynamic tree data structure of Sleator and Tarjan, we obtain a version of the algorithm running in O(nm log(n²/m)) time on an n-vertex, m-edge graph. This is as fast as any known method for any graph density and faster on graphs of moderate density. The algorithm also admits efficient distributed and parallel implementations. A parallel implementation running in O(n² log n) time using n processors and O(m) space is obtained. This time bound matches that of the Shiloach-Vishkin algorithm, which also uses n processors but requires O(n²) space.
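A compact, unoptimised version of the preflow-push idea can be written directly from the description above: maintain a preflow, keep a height (distance) label per vertex, and repeatedly push excess toward the sink or relabel. This is a sketch of the generic variant, not the authors' O(n³) or dynamic-tree implementations, and the dict-of-dicts graph representation is an assumption for illustration.

```python
def max_flow_push_relabel(capacity, source, sink):
    """Generic push-relabel maximum flow.
    capacity: dict-of-dicts, capacity[u][v] = edge capacity (missing = 0).
    Returns the value of a maximum flow from source to sink."""
    nodes = set(capacity)
    for u in capacity:
        nodes.update(capacity[u])
    nodes = list(nodes)
    n = len(nodes)

    cap = {u: {v: capacity.get(u, {}).get(v, 0) for v in nodes} for u in nodes}
    flow = {u: {v: 0 for v in nodes} for u in nodes}
    height = {u: 0 for u in nodes}
    excess = {u: 0 for u in nodes}
    height[source] = n                     # the source starts at height n

    # Saturate all edges leaving the source: this creates the initial preflow.
    for v in nodes:
        flow[source][v] = cap[source][v]
        flow[v][source] = -cap[source][v]
        excess[v] += cap[source][v]
        excess[source] -= cap[source][v]

    def residual(u, v):
        return cap[u][v] - flow[u][v]

    active = [u for u in nodes if u not in (source, sink) and excess[u] > 0]
    while active:
        u = active[0]
        pushed = False
        for v in nodes:
            # Push: send excess one level downhill along a residual edge.
            if residual(u, v) > 0 and height[u] == height[v] + 1:
                delta = min(excess[u], residual(u, v))
                flow[u][v] += delta
                flow[v][u] -= delta
                excess[u] -= delta
                excess[v] += delta
                if v not in (source, sink) and excess[v] > 0 and v not in active:
                    active.append(v)
                pushed = True
                if excess[u] == 0:
                    break
        if not pushed:
            # Relabel: lift u just above its lowest residual neighbour.
            height[u] = 1 + min(height[v] for v in nodes if residual(u, v) > 0)
        if excess[u] == 0:
            active.remove(u)

    return sum(flow[source][v] for v in nodes)

# Tiny example: two augmenting paths of value 2 each, so the maximum flow is 4.
cap = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}}
print(max_flow_push_relabel(cap, "s", "t"))  # -> 4
```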

1,700 citations


ReportDOI
TL;DR: In this article, the authors investigate whether stock prices are mean-reverting, using data from the United States and 17 other countries, and find positive autocorrelation in returns over short horizons and negative autocorrelation over longer horizons.
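A standard diagnostic for this kind of mean reversion is the variance ratio: if returns are serially uncorrelated, the variance of k-period returns grows linearly in k, so ratios below one at long horizons point to negative serial correlation. The sketch below computes such a ratio for a return series; it is a generic illustration, not necessarily the authors' exact estimator or data.

```python
import numpy as np

def variance_ratio(returns, k):
    """VR(k) = Var(k-period return) / (k * Var(1-period return)).
    Values below 1 at long horizons suggest mean reversion (negative
    autocorrelation); values above 1 suggest positive autocorrelation."""
    r = np.asarray(returns, dtype=float)
    r = r - r.mean()
    var1 = r.var(ddof=1)
    rk = np.convolve(r, np.ones(k), mode="valid")   # overlapping k-period returns
    return rk.var(ddof=1) / (k * var1)

# Illustrative use on simulated i.i.d. returns (the ratio should hover near 1).
rng = np.random.default_rng(0)
sim = rng.normal(0.0, 0.05, size=1000)
print(round(variance_ratio(sim, 12), 3))
```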

1,666 citations


Proceedings ArticleDOI
01 Jan 1988
TL;DR: It is shown that any reasonable multiparty protocol can be achieved if at least 2n/3 of the participants are honest; the secrecy achieved is unconditional.
Abstract: Under the assumption that each pair of participants can communicate secretly, we show that any reasonable multiparty protocol can be achieved if at least 2n/3 of the participants are honest. The secrecy achieved is unconditional. It does not rely on any assumption about computational intractability.
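Protocols in this line of work are built on polynomial secret sharing over a finite field: each participant holds one evaluation of a random degree-t polynomial whose constant term is the secret, so any t+1 shares reconstruct it while t shares reveal nothing. The sketch below shows only this building block (with an arbitrarily chosen prime), not the full multiparty protocol.

```python
import random

PRIME = 2_147_483_647   # an arbitrary Mersenne prime, chosen only for illustration

def share(secret, n, t, prime=PRIME):
    """Split `secret` into n shares; any t+1 of them reconstruct it."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = (num * (-xm)) % prime
                den = (den * (xj - xm)) % prime
        secret = (secret + yj * num * pow(den, -1, prime)) % prime
    return secret

shares = share(secret=12345, n=5, t=2)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares suffice -> 12345
```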

1,663 citations


Journal ArticleDOI
TL;DR: Fault-tolerant consensus protocols are given for various cases of partial synchrony and various fault models, using new fault-tolerant "distributed clock" protocols that allow partially synchronous processors to reach an approximately common notion of time.
Abstract: The concept of partial synchrony in a distributed system is introduced. Partial synchrony lies between the cases of a synchronous system and an asynchronous system. In a synchronous system, there is a known fixed upper bound D on the time required for a message to be sent from one processor to another and a known fixed upper bound P on the relative speeds of different processors. In an asynchronous system no fixed upper bounds D and P exist. In one version of partial synchrony, fixed bounds D and P exist, but they are not known a priori. The problem is to design protocols that work correctly in the partially synchronous system regardless of the actual values of the bounds D and P. In another version of partial synchrony, the bounds are known, but are only guaranteed to hold starting at some unknown time T, and protocols must be designed to work correctly regardless of when time T occurs. Fault-tolerant consensus protocols are given for various cases of partial synchrony and various fault models. Lower bounds that show in most cases that our protocols are optimal with respect to the number of faults tolerated are also given. Our consensus protocols for partially synchronous processors use new protocols for fault-tolerant “distributed clocks” that allow partially synchronous processors to reach some approximately common notion of time.

Journal ArticleDOI
20 Oct 1988-Nature
TL;DR: The first use of a complete RFLP linkage map to resolve quantitative traits into discrete Mendelian factors, in an interspecific backcross of tomato, is reported; the approach is broadly applicable to the genetic dissection of the quantitative inheritance of physiological, morphological and behavioural traits in any higher plant or animal.
Abstract: The conflict between the Mendelian theory of particulate inheritance and the observation of continuous variation for most traits in nature was resolved in the early 1900s by the concept that quantitative traits can result from segregation of multiple genes, modified by environmental effects. Although pioneering experiments showed that linkage could occasionally be detected to such quantitative trait loci (QTLs), accurate and systematic mapping of QTLs has not been possible because the inheritance of an entire genome could not be studied with genetic markers. The use of restriction fragment length polymorphisms (RFLPs) has made such investigations possible, at least in principle. Here, we report the first use of a complete RFLP linkage map to resolve quantitative traits into discrete Mendelian factors, in an interspecific back-cross of tomato. Applying new analytical methods, we mapped at least six QTLs controlling fruit mass, four QTLs for the concentration of soluble solids and five QTLs for fruit pH. This approach is broadly applicable to the genetic dissection of quantitative inheritance of physiological, morphological and behavioural traits in any higher plant or animal.

Journal ArticleDOI
14 Jul 1988-Nature
TL;DR: The interaction between E1A and the retinoblastoma gene product is the first demonstration of a physical link between an oncogene and an anti-oncogene.
Abstract: One of the cellular targets implicated in the process of transformation by the adenovirus E1A proteins is a 105K cellular protein. Previously, this protein had been shown to form stable protein/protein complexes with the E1A polypeptides but its identity was unknown. Here, we demonstrate that it is the product of the retinoblastoma gene. The interaction between E1A and the retinoblastoma gene product is the first demonstration of a physical link between an oncogene and an anti-oncogene.

Journal ArticleDOI
TL;DR: It is concluded that connectionists' claims about the dispensability of rules in explanations in the psychology of language must be rejected, and that, on the contrary, the linguistic and developmental facts provide good evidence for such rules.

Journal ArticleDOI
TL;DR: A simple two-step fluorometric assay of DNA in cartilage explants, utilizing the bisbenzimidazole dye Hoechst 33258, is described; it offers advantages over other established DNA assays of cartilage and may be especially useful in metabolic studies of cartilage explants.

Journal ArticleDOI
07 Oct 1988-Cell
TL;DR: It is shown here that the c-kit gene is disrupted in two spontaneous mutant W alleles, W44 and Wx, a finding that strongly supports the identification of c-kit as the gene product of the W locus.

Journal ArticleDOI
28 Jul 1988-Nature
TL;DR: Quantitative analysis of neuromelanin-pigmented neurons in control and parkinsonian midbrains demonstrates that the dopamine-containing cell groups of the normal human midbrain differ markedly from each other in the percentage of neuromelanin-pigmented neurons they contain, and suggests a selective vulnerability of the neuromelanin-pigmented subpopulation of dopamine-containing mesencephalic neurons in Parkinson's disease.
Abstract: In idiopathic Parkinson's disease massive cell death occurs in the dopamine-containing substantia nigra1,24. A link between the vulnerability of nigral neurons and the prominent pigmentation of the substantia nigra, though long suspected, has not been proved2. This possibility is supported by evidence that N-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) and its metabolite MPP+, the latter of which causes destruction of nigral neurons, bind to neuromelanin3,4. We have directly tested this hypothesis by a quantitative analysis of neuromelanin-pigmented neurons in control and parkinsonian midbrains. The findings demonstrate first that the dopamine-containing cell groups of the normal human midbrain differ markedly from each other in the percentage of neuromelanin-pigmented neurons they contain. Second, the estimated cell loss in these cell groups in Parkinson's disease is directly correlated (r = 0.97, P = 0.0057) with the percentage of neuromelanin-pigmented neurons normally present in them. Third, within each cell group in the Parkinson's brains, there is greater relative sparing of non-pigmented than of neuromelanin-pigmented neurons. This evidence suggests a selective vulnerability of the neuromelanin-pigmented subpopulation of dopamine-containing mesencephalic neurons in Parkinson's disease.

01 Jan 1988
TL;DR: An overview of the Kerberos authentication model as implemented for MIT's Project Athena is given, describing the protocols used by clients, servers, and Kerberos to achieve authentication.
Abstract: In an open network computing environment, a workstation cannot be trusted to identify its users correctly to network services. Kerberos provides an alternative approach whereby a trusted third-party authentication service is used to verify users' identities. This paper gives an overview of the Kerberos authentication model as implemented for MIT's Project Athena. It describes the protocols used by clients, servers, and Kerberos to achieve authentication. It also describes the management and replication of the database required. The views of Kerberos as seen by the user, programmer, and administrator are described. Finally, the role of Kerberos in the larger Athena picture is given, along with a list of applications that presently use Kerberos for user authentication. We describe the addition of Kerberos authentication to the Sun Network File System as a case study for integrating Kerberos with an existing application.
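The message flow described here can be sketched symbolically: the client obtains a ticket-granting ticket from the authentication server, uses it to obtain a service ticket from the ticket-granting server, then presents that ticket with a fresh authenticator to the service. The sketch below models encryption as a tagged record rather than real cryptography; the names, key values, and service "nfs" are invented for illustration, and this is not the Kerberos wire protocol.

```python
import time

def enc(key_name, payload):
    """Stand-in for symmetric encryption: just tag the payload with the key name."""
    return {"locked_with": key_name, "payload": payload}

def dec(key_name, box):
    assert box["locked_with"] == key_name, "wrong key"
    return box["payload"]

def as_exchange(user):
    """AS exchange: the client asks (in the clear) for a ticket-granting ticket.
    The reply seals a fresh session key under the user's key; the TGT seals the
    same session key plus the user's name under the TGS's own key."""
    k_session = f"session-{user}-tgs"
    reply = enc(f"key-{user}", {"session": k_session})
    tgt = enc("key-tgs", {"user": user, "session": k_session})
    return reply, tgt

def tgs_exchange(tgt, authenticator, service):
    """TGS exchange: the client presents the TGT plus an authenticator encrypted
    under the TGT session key and gets back a ticket for the target service."""
    body = dec("key-tgs", tgt)
    auth = dec(body["session"], authenticator)
    assert auth["user"] == body["user"], "authenticator does not match ticket"
    k_service = f"session-{auth['user']}-{service}"
    reply = enc(body["session"], {"session": k_service})
    ticket = enc(f"key-{service}", {"user": auth["user"], "session": k_service})
    return reply, ticket

def service_verify(service, ticket, authenticator):
    """Application server: open the ticket with its own key and check that the
    authenticator was made with the session key carried inside the ticket."""
    body = dec(f"key-{service}", ticket)
    auth = dec(body["session"], authenticator)
    return auth["user"] == body["user"]

# Client-side walk-through for a hypothetical user "alice" and service "nfs".
reply, tgt = as_exchange("alice")
k_tgs = dec("key-alice", reply)["session"]
reply2, svc_ticket = tgs_exchange(tgt, enc(k_tgs, {"user": "alice", "ts": time.time()}), "nfs")
k_svc = dec(k_tgs, reply2)["session"]
print(service_verify("nfs", svc_ticket, enc(k_svc, {"user": "alice", "ts": time.time()})))  # True
```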

Journal ArticleDOI
25 Feb 1988-Nature
TL;DR: A neural network model, programmed using back-propagation learning, can decode spatial information from area 7a neurons and accounts for their observed response properties.
Abstract: Neurons in area 7a of the posterior parietal cortex of monkeys respond to both the retinal location of a visual stimulus and the position of the eyes and by combining these signals represent the spatial location of external objects. A neural network model, programmed using back-propagation learning, can decode this spatial information from area 7a neurons and accounts for their observed response properties.
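In the same spirit, though not the authors' architecture or training data, a small feed-forward network can be trained with back-propagation to combine a retinal stimulus position with an eye-position signal and output the head-centred location (here simply their sum). The layer sizes, learning rate, and data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inputs: 1-D retinal position and 1-D eye position, both scaled to [-1, 1].
# Target: head-centred location = retinal position + eye position.
X = rng.uniform(-1, 1, size=(2000, 2))
y = X.sum(axis=1, keepdims=True)

# One hidden layer of 8 sigmoid units, linear output, trained by backprop.
W1 = rng.normal(0, 0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.3

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = h @ W2 + b2
    err = out - y                            # gradient of squared error at the output
    # backward pass: gradients of mean squared error
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = err @ W2.T * h * (1 - h)
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

test = np.array([[0.3, -0.5]])
pred = sigmoid(test @ W1 + b1) @ W2 + b2
print(pred.item())                           # should come out roughly -0.2
```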

Journal ArticleDOI
TL;DR: In the test, lead users were successfully identified and proved to have unique and useful data regarding both new product needs and solutions responsive to those needs, and new product concepts generated on the basis of lead user data were found to be strongly preferred by a representative sample of PC-CAD users.
Abstract: Recently, a "lead user" concept has been proposed for new product development in fields subject to rapid change (von Hippel, E. 1986. Lead users: A source of novel product concepts. Management Sci. 32 791-805). In this paper we integrate market research within this lead user methodology and report a test of it in the rapidly evolving field of computer-aided systems for the design of printed circuit boards (PC-CAD). In the test, lead users were successfully identified and proved to have unique and useful data regarding both new product needs and solutions responsive to those needs. New product concepts generated on the basis of lead user data were found to be strongly preferred by a representative sample of PC-CAD users. We discuss strengths and weaknesses of this first empirical test of the lead user methodology, and suggest directions for future research.

Proceedings ArticleDOI
01 Jan 1988
TL;DR: It is shown that this protocol, more commonly known as oblivious transfer, can be used to simulate a more sophisticated protocol, known as oblivious circuit evaluation [Y], and that with such a communication channel, one can have completely noninteractive zero-knowledge proofs of statements in NP.
Abstract: Suppose your netmail is being erratically censored by Captain Yossarian. Whenever you send a message, he censors each bit of the message with probability 1/2, replacing each censored bit by some reserved character. Well versed in such concepts as redundancy, this is no real problem to you. The question is, can it actually be turned around and used to your advantage? We answer this question strongly in the affirmative. We show that this protocol, more commonly known as oblivious transfer, can be used to simulate a more sophisticated protocol, known as oblivious circuit evaluation([Y]). We also show that with such a communication channel, one can have completely noninteractive zero-knowledge proofs of statements in NP. These results do not use any complexity-theoretic assumptions. We can show that they have applications to a variety of models in which oblivious transfer can be done.


Journal ArticleDOI
TL;DR: In this paper, a closed-form solution to the least-squares problem for three or more points is presented; it requires the computation of the square root of a symmetric matrix, and the best scale is equal to the ratio of the root-mean-square deviations of the coordinates in the two systems from their respective centroids.
Abstract: Finding the relationship between two coordinate systems by using pairs of measurements of the coordinates of a number of points in both systems is a classic photogrammetric task. The solution has applications in stereophotogrammetry and in robotics. We present here a closed-form solution to the least-squares problem for three or more points. Currently, various empirical, graphical, and numerical iterative methods are in use. Derivation of a closed-form solution can be simplified by using unit quaternions to represent rotation, as was shown in an earlier paper [ J. Opt. Soc. Am. A4, 629 ( 1987)]. Since orthonormal matrices are used more widely to represent rotation, we now present a solution in which 3 × 3 matrices are used. Our method requires the computation of the square root of a symmetric matrix. We compare the new result with that obtained by an alternative method in which orthonormality is not directly enforced. In this other method a best-fit linear transformation is found, and then the nearest orthonormal matrix is chosen for the rotation. We note that the best translational offset is the difference between the centroid of the coordinates in one system and the rotated and scaled centroid of the coordinates in the other system. The best scale is equal to the ratio of the root-mean-square deviations of the coordinates in the two systems from their respective centroids. These exact results are to be preferred to approximate methods based on measurements of a few selected points.
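The closed-form recipe summarised above can be written down directly: compute both centroids, form the 3x3 correlation matrix M of the centred points, take R = M(MᵀM)^(-1/2) via an eigendecomposition of the symmetric matrix MᵀM, set the scale to the ratio of RMS deviations, and recover the translation from the centroids. The sketch below follows that recipe but ignores the degenerate and reflection cases treated in the paper, and the synthetic test data are invented for illustration.

```python
import numpy as np

def absolute_orientation(A, B):
    """Estimate scale s, rotation R, translation t with B ≈ s * R @ A + t.
    A, B: (3, n) arrays of corresponding points in the two coordinate systems."""
    ca, cb = A.mean(axis=1, keepdims=True), B.mean(axis=1, keepdims=True)
    A0, B0 = A - ca, B - cb                       # centred coordinates

    M = B0 @ A0.T                                 # 3x3 correlation matrix
    w, V = np.linalg.eigh(M.T @ M)                # symmetric, so eigh is safe
    inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    R = M @ inv_sqrt                              # nearest orthonormal matrix to M

    # best scale = ratio of RMS deviations from the respective centroids
    s = np.sqrt((B0 ** 2).sum() / (A0 ** 2).sum())

    # best translation = centroid difference after rotating and scaling
    t = cb - s * (R @ ca)
    return s, R, t

# Quick self-check with a synthetic similarity transform.
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 10))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1                            # make it a proper rotation
B = 2.0 * R_true @ A + np.array([[1.0], [2.0], [3.0]])
s, R, t = absolute_orientation(A, B)
print(round(s, 3), np.allclose(s * (R @ A) + t, B, atol=1e-6))
```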

Journal ArticleDOI
22 Apr 1988-Cell
TL;DR: It is concluded that both 70Z/3 and HeLa cells contain apparently cytosolic NF-kappa B in a form with no evident DNA-binding activity, and phorbol esters both release the inhibition of binding and cause a translocation to the nucleus.

Book
01 Jan 1988
TL;DR: The fundamental problem of Morse theory is to study the topological changes in the sublevel set X≤c as the number c varies, where X is a topological space, f is a real-valued function on X, and c is a real number.
Abstract: Suppose that X is a topological space, f is a real valued function on X, and c is a real number. Then we will denote by X≤c the subspace of points x in X such that f(x) ≤ c. The fundamental problem of Morse theory is to study the topological changes in the space X≤c as the number c varies.
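In the notation of this abstract, the basic object and the structural fact it leads to can be stated as follows; the second display is the classical Morse-theoretic statement for a smooth function with isolated nondegenerate critical points, added for context rather than quoted from the book.

```latex
% Sublevel sets of f : X -> R
\[
  X_{\le c} \;=\; \{\, x \in X \;:\; f(x) \le c \,\}.
\]
% Classical Morse theory (smooth f, nondegenerate critical points): the
% homotopy type of X_{\le c} changes only when c passes a critical value;
% crossing a single critical point of index k attaches a k-cell:
\[
  X_{\le c+\varepsilon} \;\simeq\; X_{\le c-\varepsilon} \cup_{\varphi} e^{k}.
\]
```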

Journal ArticleDOI
01 Oct 1988
TL;DR: In this article, the authors present protocols for allowing a "prover" to convince a "verifier" that the prover knows some verifiable secret information, without allowing the verifier to learn anything about the secret.
Abstract: Protocols are given for allowing a “prover” to convince a “verifier” that the prover knows some verifiable secret information, without allowing the verifier to learn anything about the secret. The secret can be probabilistically or deterministically verifiable, and only one of the prover or the verifier need have constrained resources. This paper unifies and extends models and techniques previously put forward by the authors, and compares some independent related work.

Journal ArticleDOI
01 Jul 1988-Nature
TL;DR: In this paper, a new group of photosynthetic picoplankters is identified that are extremely abundant, barely visible using traditional microscopic techniques, and reach concentrations greater than 10⁵ cells ml⁻¹ in the deep euphotic zone.
Abstract: The recent discovery of photosynthetic picoplankton has changed our understanding of marine food webs1. Both prokaryotic2,3 and eukaryotic4,5 species occur in most of the world's oceans and account for a significant proportion of global productivity6. Using shipboard flow cytometry, we have identified a new group of picoplankters which are extremely abundant, and barely visible using traditional microscopic techniques. These cells are smaller than the coccoid cyanobacteria and reach concentrations greater than 10⁵ cells ml⁻¹ in the deep euphotic zone. They fluoresce red and contain a divinyl chlorophyll a-like pigment, as well as chlorophyll b, α-carotene, and zeaxanthin. This unusual combination of pigments, and a distinctive prokaryotic ultrastructure, suggests that these picoplankters are free-living relatives of Prochloron7. They differ from previously reported prochlorophytes—the putative ancestors of the chloroplasts of higher plants—in that they contain α-carotene rather than β-carotene and contain a divinyl chlorophyll a-like pigment as the dominant chlorophyll.

Journal ArticleDOI
01 Aug 1988
TL;DR: This paper attempts to capture some of the early reasoning which shaped the Internet protocols.
Abstract: The Internet protocol suite, TCP/IP, was first proposed fifteen years ago. It was developed by the Defense Advanced Research Projects Agency (DARPA), and has been used widely in military and commercial systems. While there have been papers and specifications that describe how the protocols work, it is sometimes difficult to deduce from these why the protocol is as it is. For example, the Internet protocol is based on a connectionless or datagram mode of service. The motivation for this has been greatly misunderstood. This paper attempts to capture some of the early reasoning which shaped the Internet protocols.

Journal ArticleDOI
TL;DR: In this paper, the optimality of the one share-one vote rule in a corporate control contest is analyzed, and sufficient conditions are given for one share-one vote to be optimal overall.

Journal ArticleDOI
TL;DR: In this paper, the effect of covariance matrix sample size on the system performance of adaptive arrays using the sample matrix inversion (SMI) algorithm is investigated, and a technique to reduce these effects by modifying the covariance matrix estimate is described from the point of view of eigenvector decomposition.
Abstract: Simulations were used to investigate the effect of covariance matrix sample size on the system performance of adaptive arrays using the sample matrix inversion (SMI) algorithm. Inadequate estimation of the covariance matrix results in adapted antenna patterns with high sidelobes and distorted mainbeams. A technique to reduce these effects by modifying the covariance matrix estimate is described from the point of view of eigenvector decomposition. This diagonal loading technique reduces the system nulling capability against low-level interference, but parametric studies show that it is an effective approach in many situations.
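The diagonal-loading idea can be illustrated in a few lines: form the sample covariance from K snapshots, add a loading term proportional to the identity, and compute SMI weights w ∝ R̂⁻¹v for a steering vector v. The array geometry, loading level, and interference scenario below are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16   # elements of a uniform linear array, half-wavelength spacing (assumed)
K = 20   # snapshots used to form the sample covariance

def steering(theta_deg):
    n = np.arange(N)
    return np.exp(1j * np.pi * n * np.sin(np.radians(theta_deg)))

# Simulated snapshots: unit-power complex noise plus one interferer at 30 degrees.
noise = (rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))) / np.sqrt(2)
snapshots = noise + 2.0 * rng.normal(size=(K, 1)) * steering(30.0)

# Sample covariance (SMI) and a diagonally loaded version of it.
R_hat = snapshots.conj().T @ snapshots / K
loading = 5.0 * np.trace(R_hat).real / N        # loading level chosen for illustration
R_loaded = R_hat + loading * np.eye(N)

v = steering(0.0)                               # look direction: broadside
w_smi = np.linalg.solve(R_hat, v)               # unloaded SMI weights
w_dl = np.linalg.solve(R_loaded, v)             # diagonally loaded weights

def peak_sidelobe_db(w):
    angles = np.linspace(-90.0, 90.0, 721)
    gain = np.array([abs(np.vdot(w, steering(a))) for a in angles])
    gain /= abs(np.vdot(w, v))                   # normalise to the mainbeam response
    outside_mainbeam = np.abs(angles) > 10.0     # crude mainbeam exclusion region
    return 20.0 * np.log10(gain[outside_mainbeam].max())

# With so few snapshots the unloaded pattern typically shows raised sidelobes;
# loading pulls them down at the cost of softer nulling of weak interference.
print(f"SMI peak sidelobe {peak_sidelobe_db(w_smi):.1f} dB, "
      f"loaded {peak_sidelobe_db(w_dl):.1f} dB")
```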