
Showing papers by "Carnegie Mellon University" published in 1996


Journal ArticleDOI
Claude Amsler, Michael Doser, Mario Antonelli, D. M. Asner +173 more (86 institutions)
TL;DR: This biennial Review summarizes much of particle physics, using data from previous editions plus new measurements.

12,798 citations


Book ChapterDOI
01 Jan 1996
TL;DR: This paper presents a protocol for routing in ad hoc networks that uses dynamic source routing; the protocol adapts quickly to routing changes when host movement is frequent, yet requires little or no overhead during periods in which hosts move less frequently.
Abstract: An ad hoc network is a collection of wireless mobile hosts forming a temporary network without the aid of any established infrastructure or centralized administration. In such an environment, it may be necessary for one mobile host to enlist the aid of other hosts in forwarding a packet to its destination, due to the limited range of each mobile host’s wireless transmissions. This paper presents a protocol for routing in ad hoc networks that uses dynamic source routing. The protocol adapts quickly to routing changes when host movement is frequent, yet requires little or no overhead during periods in which hosts move less frequently. Based on results from a packet-level simulation of mobile hosts operating in an ad hoc network, the protocol performs well over a variety of environmental conditions such as host density and movement rates. For all but the highest rates of host movement simulated, the overhead of the protocol is quite low, falling to just 1% of total data packets transmitted for moderate movement rates in a network of 24 mobile hosts. In all cases, the difference in length between the routes used and the optimal route lengths is negligible, and in most cases, route lengths are on average within a factor of 1.01 of optimal.
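To make the source-routing idea concrete, here is a minimal sketch of DSR-style route discovery, assuming a hypothetical graph-of-hosts interface; it illustrates a route request flooding outward while accumulating the traversed route, not the paper's actual protocol code.

```python
# Minimal sketch of DSR-style route discovery (illustrative only; the
# graph interface and names are hypothetical, not from the paper).
from collections import deque

def discover_route(graph, src, dst):
    """Flood a ROUTE REQUEST through `graph` (dict: node -> neighbors),
    accumulating the traversed route in the request header, as in
    dynamic source routing. Returns the first complete route found."""
    queue = deque([[src]])          # each entry is the route recorded so far
    seen = {src}                    # hosts that already forwarded this request
    while queue:
        route = queue.popleft()
        node = route[-1]
        if node == dst:
            return route            # a ROUTE REPLY would carry this back to src
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(route + [neighbor])
    return None                     # no route currently exists

# Example: a 5-host ad hoc network
hosts = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}
print(discover_route(hosts, "A", "E"))   # ['A', 'B', 'D', 'E']
```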

8,256 citations


Journal ArticleDOI
TL;DR: Central issues of reinforcement learning are discussed, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state.
Abstract: This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
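The exploration/exploitation trade-off and learning from delayed reinforcement that the survey discusses are commonly illustrated by tabular Q-learning with an ε-greedy policy; the sketch below is such a generic example with a hypothetical environment interface, not a method proposed in the paper.

```python
# Generic tabular Q-learning with an epsilon-greedy policy. The `env`
# interface (reset, actions, step) is hypothetical.
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    """`env` is assumed to expose reset() -> state, actions(s) -> list,
    and step(s, a) -> (next_state, reward, done)."""
    Q = defaultdict(float)                       # Q[(state, action)] -> value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Explore with probability epsilon, otherwise exploit.
            if random.random() < epsilon:
                action = random.choice(env.actions(state))
            else:
                action = max(env.actions(state), key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(state, action)
            # Temporal-difference update: delayed reinforcement is credited
            # through the bootstrapped value of the next state.
            best_next = 0.0 if done else max(
                Q[(next_state, a)] for a in env.actions(next_state))
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = next_state
    return Q
```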

6,895 citations


Posted Content
TL;DR: A survey of reinforcement learning from a computer science perspective can be found in this article, where the authors discuss the central issues of RL, including trading off exploration and exploitation, establishing the foundations of RL via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state.
Abstract: This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word ``reinforcement.'' The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.

5,970 citations


Journal ArticleDOI
TL;DR: In this paper, the interaction between spin waves and itinerant electrons is considerably enhanced in the vicinity of an interface between normal and ferromagnetic layers in metallic thin films, leading to a local increase of the Gilbert damping parameter which characterizes spin dynamics.
Abstract: The interaction between spin waves and itinerant electrons is considerably enhanced in the vicinity of an interface between normal and ferromagnetic layers in metallic thin films. This leads to a local increase of the Gilbert damping parameter which characterizes spin dynamics. When a dc current crosses this interface, stimulated emission of spin waves is predicted to take place. Beyond a certain critical current density, the spin damping becomes negative; a spontaneous precession of the magnetization is predicted to arise. This is the magnetic analog of the injection laser. An extra dc voltage appears across the interface, given by an expression similar to that for the Josephson voltage across a superconducting junction. © 1996 The American Physical Society.
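For orientation, the Gilbert damping parameter mentioned here is the constant α in the standard Landau-Lifshitz-Gilbert equation, shown below in its textbook form (not reproduced from the paper):

```latex
% Standard Landau-Lifshitz-Gilbert equation; \alpha is the Gilbert damping
% constant that the paper predicts is locally enhanced (and, beyond a
% critical current density, effectively negative) at the interface.
\frac{d\mathbf{M}}{dt} = -\gamma\, \mathbf{M}\times\mathbf{H}_{\mathrm{eff}}
  + \frac{\alpha}{M_s}\, \mathbf{M}\times\frac{d\mathbf{M}}{dt}
```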

4,433 citations


Posted Content
TL;DR: In this article, the spatial distribution of innovation activity and the geographic concentration of production are examined, using three sources of economic knowledge: industry R&D, skilled labor, and the size of the pool of basic science for a specific industry.
Abstract: Previous research has indicated that investment in R&D by private firms and universities can lead to knowledge spillovers, which third-party firms can exploit. If the ability of these third-party firms to acquire knowledge spillovers is influenced by their proximity to the knowledge source, then geographic clustering should be observable, especially in industries where access to knowledge spillovers is vital. The spatial distribution of innovation activity and the geographic concentration of production are examined, using three sources of economic knowledge: industry R&D, skilled labor, and the size of the pool of basic science for a specific industry. Results show that the propensity for innovative activity to cluster spatially is more attributable to the influence of knowledge spillovers than to the geographic concentration of production.

3,695 citations


Book
01 Apr 1996
TL;DR: Covers architectural styles, shared information systems, architectural design guidance, and the education of software architects.
Abstract: 1. Introduction. 2. Architectural Styles. 3. Case Studies. 4. Shared Information Systems. 5. Architectural Design Guidance. 6. Formal Models and Specifications. 7. Linguistic Issues. 8. Tools for Architectural Design. 9. Education of Software Architects. Bibliography. Index.

3,208 citations


Journal ArticleDOI
TL;DR: Analysis of the ability of networks to reproduce data on acquired surface dyslexia supports a view of the reading system that incorporates a graded division of labor between semantic and phonological processes, and contrasts in important ways with the standard dual-route account.
Abstract: A connectionist approach to processing in quasi-regular domains, as exemplified by English word reading, is developed. Networks using appropriately structured orthographic and phonological representations were trained to read both regular and exception words, and yet were also able to read pronounceable nonwords as well as skilled readers. A mathematical analysis of a simplified system clarifies the close relationship of word frequency and spelling-sound consistency in influencing naming latencies. These insights were verified in subsequent simulations, including an attractor network that accounted for latency data directly in its time to settle on a response. Further analyses of the ability of networks to reproduce data on acquired surface dyslexia support a view of the reading system that incorporates a graded division of labor between semantic and phonological processes, and contrasts in important ways with the standard dual-route account.

2,600 citations


Journal ArticleDOI
TL;DR: The authors argue that people often act against their self-interest in full knowledge that they are doing so, experiencing a feeling of being "out of control"; they attribute this phenomenon to the operation of "visceral factors" such as hunger, thirst and sexual desire, moods and emotions, physical pain, and craving for a drug one is addicted to.

2,492 citations


Book ChapterDOI
27 May 1996
TL;DR: Triangle, as discussed by the authors, is a robust implementation of two-dimensional constrained Delaunay triangulation and Ruppert's Delaunay refinement algorithm for quality mesh generation; it is shown that triangulating a planar straight line graph (PSLG) without introducing new small angles is impossible for some PSLGs.
Abstract: Triangle is a robust implementation of two-dimensional constrained Delaunay triangulation and Ruppert's Delaunay refinement algorithm for quality mesh generation. Several implementation issues are discussed, including the choice of triangulation algorithms and data structures, the effect of several variants of the Delaunay refinement algorithm on mesh quality, and the use of adaptive exact arithmetic to ensure robustness with minimal sacrifice of speed. The problem of triangulating a planar straight line graph (PSLG) without introducing new small angles is shown to be impossible for some PSLGs, contradicting the claim that a variant of the Delaunay refinement algorithm solves this problem.
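A minimal sketch of the quality test at the heart of Ruppert-style Delaunay refinement (illustrative Python with standalone geometry helpers, not Triangle's C source): a triangle is deemed "skinny", and would have its circumcenter inserted, when its circumradius-to-shortest-edge ratio exceeds a bound.

```python
# Quality test used by Ruppert-style Delaunay refinement (sketch only).
import math

def circumradius(a, b, c):
    """Circumradius of triangle abc (each a 2D point), R = abc / (4K)."""
    ab = math.dist(a, b); bc = math.dist(b, c); ca = math.dist(c, a)
    area2 = abs((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))  # 2*area
    return (ab * bc * ca) / (2 * area2)

def is_skinny(a, b, c, bound=math.sqrt(2)):
    """Ratio bound sqrt(2) corresponds to a minimum angle of about 20.7
    degrees; refinement would insert the circumcenter of such triangles."""
    shortest = min(math.dist(a, b), math.dist(b, c), math.dist(c, a))
    return circumradius(a, b, c) / shortest > bound

print(is_skinny((0, 0), (1, 0), (0.5, 0.05)))   # True: a very flat triangle
```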

2,268 citations


Journal ArticleDOI
TL;DR: This paper reviews existing work on adaptive hypermedia, introduces several dimensions of classification of AH systems, methods, and techniques, and describes the most important of them.
Abstract: Adaptive hypermedia is a new direction of research within the area of adaptive and user model-based interfaces. Adaptive hypermedia (AH) systems build a model of the individual user and apply it for adaptation to that user, for example, to adapt the content of a hypermedia page to the user's knowledge and goals, or to suggest the most relevant links to follow. AH systems are used now in several application areas where the hyperspace is reasonably large and where a hypermedia application is expected to be used by individuals with different goals, knowledge and backgrounds. This paper is a review of existing work on adaptive hypermedia. The paper is centered around a set of identified methods and techniques of AH. It introduces several dimensions of classification of AH systems, methods and techniques and describes the most important of them.
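As a concrete illustration of one technique family such reviews classify (adaptive navigation support via link annotation), here is a hypothetical sketch driven by a simple overlay user model; the concept names and readiness rule are invented for illustration, not taken from the paper.

```python
# Sketch of adaptive link annotation from an overlay user model
# (hypothetical example of one AH technique; not from the paper).
def annotate_links(links, prerequisites, user_knowledge):
    """links: page name -> target concept; prerequisites: concept -> set of
    prerequisite concepts; user_knowledge: set of concepts the user knows.
    Returns a traffic-light style annotation for each link."""
    annotations = {}
    for page, concept in links.items():
        if concept in user_knowledge:
            annotations[page] = "learned"        # e.g. checked off
        elif prerequisites.get(concept, set()) <= user_knowledge:
            annotations[page] = "ready"          # e.g. green bullet
        else:
            annotations[page] = "not-ready"      # e.g. red bullet
    return annotations

links = {"Intro to MDPs": "mdp", "Q-learning": "q-learning"}
prereqs = {"q-learning": {"mdp"}}
print(annotate_links(links, prereqs, user_knowledge={"mdp"}))
# {'Intro to MDPs': 'learned', 'Q-learning': 'ready'}
```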


Journal ArticleDOI
TL;DR: The conditions under which a set of continuous 2D Gabor wavelets will provide a complete representation of any image are derived, and self-similar wavelet parametrizations are found which allow stable reconstruction by summation as though the wavelets formed an orthonormal basis.
Abstract: This paper extends to two dimensions the frame criterion developed by Daubechies for one-dimensional wavelets, and it computes the frame bounds for the particular case of 2D Gabor wavelets. Completeness criteria for 2D Gabor image representations are important because of their increasing role in many computer vision applications and also in modeling biological vision, since recent neurophysiological evidence from the visual cortex of mammalian brains suggests that the filter response profiles of the main class of linearly-responding cortical neurons (called simple cells) are best modeled as a family of self-similar 2D Gabor wavelets. We therefore derive the conditions under which a set of continuous 2D Gabor wavelets will provide a complete representation of any image, and we also find self-similar wavelet parametrizations which allow stable reconstruction by summation as though the wavelets formed an orthonormal basis. Approximating a "tight frame" generates redundancy which allows low-resolution neural responses to represent high-resolution images.
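For reference, a 2D Gabor elementary function takes the following standard form, a Gaussian envelope modulating a complex plane wave; the parametrization shown is the generic (unrotated) one, not the paper's specific self-similar family:

```latex
% Generic 2D Gabor elementary function: Gaussian envelope centered at
% (x_0, y_0) with widths a, b, modulating a plane wave of frequency (u_0, v_0).
G(x,y) = \exp\!\left(-\pi\left[(x-x_0)^2 a^2 + (y-y_0)^2 b^2\right]\right)
         \exp\!\left(-2\pi i\left[u_0(x-x_0) + v_0(y-y_0)\right]\right)
```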

Journal ArticleDOI
TL;DR: Functional magnetic resonance imaging is used to probe prefrontal cortex (PFC) activity during a sequential letter task in which memory load was varied in an incremental fashion, providing a "dose-response curve" describing the involvement of both PFC and related brain regions in working memory (WM) function.

Proceedings ArticleDOI
01 Nov 1996
TL;DR: This document specifies Mobile IPv6, a protocol which allows nodes to remain reachable while moving around in the IPv6 Internet, and defines a new IPv6 protocol and a new destination option.
Abstract: This document specifies a protocol which allows nodes to remain reachable while moving around in the IPv6 Internet. Each mobile node is always identified by its home address, regardless of its current point of attachment to the Internet. While situated away from its home, a mobile node is also associated with a care-of address, which provides information about the mobile node's current location. IPv6 packets addressed to a mobile node's home address are transparently routed to its care-of address. The protocol enables IPv6 nodes to cache the binding of a mobile node's home address with its care-of address, and to then send any packets destined for the mobile node directly to it at this care-of address. To support this operation, Mobile IPv6 defines a new IPv6 protocol and a new destination option. All IPv6 nodes, whether mobile or stationary, can communicate with mobile nodes.
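A conceptual sketch of the binding-cache behavior described above, in hypothetical Python rather than the specification's packet formats: a correspondent node maps a home address to a cached care-of address and sends directly to it when a binding exists.

```python
# Conceptual sketch of a Mobile IPv6 binding cache (illustration only;
# class and method names are hypothetical, not from the specification).
class BindingCache:
    def __init__(self):
        self._bindings = {}          # home address -> care-of address

    def update(self, home_addr, care_of_addr):
        """Record a binding (in the real protocol, from a Binding Update)."""
        self._bindings[home_addr] = care_of_addr

    def next_hop(self, dest_addr):
        """Send directly to the care-of address when a binding is cached;
        otherwise route normally (via the mobile node's home network)."""
        return self._bindings.get(dest_addr, dest_addr)

cache = BindingCache()
cache.update("2001:db8:home::1", "2001:db8:visited::42")
print(cache.next_hop("2001:db8:home::1"))    # 2001:db8:visited::42
print(cache.next_hop("2001:db8:other::7"))   # unchanged: no binding cached
```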

Journal ArticleDOI
TL;DR: The state of the art in specification and verification, which includes advances in model checking and theorem proving, is assessed and future directions in fundamental concepts, new methods and tools, integration of methods, and education and technology transfer are outlined.
Abstract: Hardware and software systems will inevitably grow in scale and functionality. Because of this increase in complexity, the likelihood of subtle errors is much greater. Moreover, some of these errors may cause catastrophic loss of money, time, or even human life. A major goal of software engineering is to enable developers to construct systems that operate reliably despite this complexity. One way of achieving this goal is by using formal methods, which are mathematically based languages, techniques, and tools for specifying and verifying such systems. Use of formal methods does not a priori guarantee correctness. However, they can greatly increase our understanding of a system by revealing inconsistencies, ambiguities, and incompleteness that might otherwise go undetected. The first part of this report assesses the state of the art in specification and verification. For verification, we highlight advances in model checking and theorem proving. In the three sections on specification, model checking, and theorem proving, we explain what we mean by the general technique and briefly describe some successful case studies and well-known tools. The second part of this report outlines future directions in fundamental concepts, new methods and tools, integration of methods, and education and technology transfer. We close with summary remarks and pointers to resources for more information.
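To make the model-checking idea concrete, here is a toy explicit-state checker for a safety property: it exhaustively explores the reachable states of a transition system and returns a counterexample path to any state violating an invariant. Real tools add symbolic representations, temporal logics, and abstraction; this is only an illustration with a hypothetical interface.

```python
# Toy explicit-state model checker for an invariant (safety) property.
from collections import deque

def check_invariant(initial, successors, invariant):
    """initial: iterable of start states; successors(s): iterable of next
    states; invariant(s): True if state s is acceptable.
    Returns a counterexample path to a bad state, or None if safe."""
    frontier = deque((s, (s,)) for s in initial)
    visited = set(initial)
    while frontier:
        state, path = frontier.popleft()
        if not invariant(state):
            return path                      # counterexample trace
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + (nxt,)))
    return None                              # all reachable states acceptable

# Example: a counter that must stay below 4 but can increment by 1 or 2.
trace = check_invariant({0}, lambda s: {s + 1, s + 2} if s < 4 else set(),
                        lambda s: s < 4)
print(trace)   # (0, 2, 4): a shortest run violating the invariant
```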

Journal ArticleDOI
TL;DR: This article reviews the four central claims of situated learning with respect to education: action is grounded in the concrete situation in which it occurs; knowledge does not transfer between tasks; training by abstraction is of little use; and instruction must be done in complex, social environments.
Abstract: This paper provides a review of the claims of situated learning that are having an increasing influence on education generally and mathematics education particularly. We review the four central claims of situated learning with respect to education: (1) action is grounded in the concrete situation in which it occurs; (2) knowledge does not transfer between tasks; (3) training by abstraction is of little use; and (4) instruction must be done in complex, social environments. In each case, we cite empirical literature to show that the claims are overstated and that some of the educational implications that have been taken from these claims are misguided.

Journal ArticleDOI
TL;DR: A survey of advice seekers and those who replied was conducted to test hypotheses about the viability and usefulness of such electronic weak-tie exchanges; the usefulness of this help may depend on the number of ties, the diversity of ties, or the resources of help providers.
Abstract: People use weak ties—relationships with acquaintances or strangers—to seek help unavailable from friends or colleagues. Yet in the absence of personal relationships or the expectation of direct reciprocity, help from weak ties might not be forthcoming or could be of low quality. We examined the practice of distant employees (strangers) exchanging technical advice through a large organizational computer network. A survey of advice seekers and those who replied was conducted to test hypotheses about the viability and usefulness of such electronic weak tie exchanges. Theories of organizational motivation suggest that positive regard for the larger organization can substitute for direct incentives or personal relationships in motivating people to help others. Theories of weak ties suggest that the usefulness of this help may depend on the number of ties, the diversity of ties, or the resources of help providers. We hypothesized that, in an organizational context, the firm-specific resources and organizational...

Journal ArticleDOI
TL;DR: In this article, a simple model based on the idea of R&D cost spreading was proposed to explain the prior findings about the relationship between the propensity to perform research and the size of a firm.
Abstract: Numerous studies have shown that, within industries, the propensity to perform R&D and the amount of R&D conducted by performers are closely related to the size of the firm, while R&D productivity declines with firm size. These findings have been widely interpreted to indicate that there is no advantage to large firm size in conducting R&D. The authors show how a simple model based on the idea of R&D cost spreading can explain the prior findings about the R&D-firm size relationship, as well as additional features of the R&D-firm size relationship, implying an advantage to large size in R&D.
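A stylized rendering of the cost-spreading logic (an illustration consistent with the idea described, not the authors' exact model): the gain from an innovation scales with the output over which its benefit is applied, so larger firms rationally spend more on R&D even if returns per R&D dollar do not rise with size.

```latex
% Stylized cost-spreading illustration (not the authors' exact model):
% R&D spending R lowers unit cost by c(R), concave and increasing, and the
% saving applies across the firm's output Q, giving profit
\Pi(R; Q) = Q\, c(R) - R .
% The optimal R satisfies Q\, c'(R^*) = 1: larger Q justifies larger R^*,
% consistent with R&D rising with firm size while measured R&D
% "productivity" per dollar declines.
```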


Journal ArticleDOI
TL;DR: In this paper, a review of techniques for constructing non-informative priors is presented and some of the practical and philosophical issues that arise when they are used are discussed.
Abstract: Subjectivism has become the dominant philosophical foundation for Bayesian inference. Yet in practice, most Bayesian analyses are performed with so-called “noninformative” priors, that is, priors constructed by some formal rule. We review the plethora of techniques for constructing such priors and discuss some of the practical and philosophical issues that arise when they are used. We give special emphasis to Jeffreys's rules and discuss the evolution of his viewpoint about the interpretation of priors, away from unique representation of ignorance toward the notion that they should be chosen by convention. We conclude that the problems raised by the research on priors chosen by formal rules are serious and may not be dismissed lightly: When sample sizes are small (relative to the number of parameters being estimated), it is dangerous to put faith in any “default” solution; but when asymptotics take over, Jeffreys's rules and their variants remain reasonable choices. We also provide an annotated bibliography.
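For reference, Jeffreys's general rule takes the prior proportional to the square root of the determinant of the Fisher information, which makes the prior invariant under reparametrization:

```latex
% Jeffreys's general rule and the Fisher information it is built from.
\pi(\theta) \propto \sqrt{\det I(\theta)}, \qquad
I(\theta)_{ij} = -\mathbb{E}_\theta\!\left[
  \frac{\partial^2 \log f(X \mid \theta)}{\partial\theta_i\,\partial\theta_j}
\right]
```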

Proceedings ArticleDOI
24 Mar 1996
TL;DR: It has been proven that the delay bound provided by WFQ is within one packet transmission time of that provided by GPS, and a new packet approximation algorithm of GPS called worst-case fair weighted fair queueing (WF²Q) is proposed.
Abstract: The generalized processor sharing (GPS) discipline is proven to have two desirable properties: (a) it can provide an end-to-end bounded-delay service to a session whose traffic is constrained by a leaky bucket; (b) it can ensure fair allocation of bandwidth among all backlogged sessions regardless of whether or not their traffic is constrained. The former property is the basis for supporting guaranteed service traffic, while the latter property is important for supporting best-effort service traffic. Since GPS uses an idealized fluid model which cannot be realized in the real world, various packet approximation algorithms of GPS have been proposed. Among these, weighted fair queueing (WFQ), also known as packet generalized processor sharing (PGPS), has been considered the best in terms of accuracy. In particular, it has been proven that the delay bound provided by WFQ is within one packet transmission time of that provided by GPS. We show that, contrary to popular belief, there could be large discrepancies between the services provided by the packet WFQ system and the fluid GPS system. We argue that such a discrepancy will adversely affect many congestion control algorithms that rely on services similar to those provided by GPS. A new packet approximation algorithm of GPS called worst-case fair weighted fair queueing (WF²Q) is proposed. The service provided by WF²Q is almost identical to that of GPS, differing by no more than one maximum-size packet.
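A simplified sketch of the virtual-finish-time bookkeeping behind WFQ-style packet scheduling, with hypothetical class and method names; it illustrates the GPS-approximation idea only, not a faithful WFQ or WF²Q simulator (in particular, virtual time here advances per transmitted packet rather than tracking the fluid GPS system).

```python
# Sketch of WFQ-style scheduling by virtual finish times (illustration only).
import heapq

class WFQScheduler:
    def __init__(self):
        self.virtual_time = 0.0
        self.last_finish = {}   # session -> virtual finish of its last packet
        self.queue = []         # heap of (finish_time, seq, session, size)
        self.seq = 0            # tie-breaker for equal finish times

    def enqueue(self, session, size, weight):
        # A packet's virtual finish time builds on the later of the current
        # virtual time and the session's previous finish time; heavier
        # weights advance more slowly, so they finish (and transmit) sooner.
        start = max(self.virtual_time, self.last_finish.get(session, 0.0))
        finish = start + size / weight
        self.last_finish[session] = finish
        heapq.heappush(self.queue, (finish, self.seq, session, size))
        self.seq += 1

    def dequeue(self):
        # Transmit the queued packet with the smallest virtual finish time.
        finish, _, session, size = heapq.heappop(self.queue)
        self.virtual_time = finish
        return session, size

sched = WFQScheduler()
sched.enqueue("A", size=100, weight=2)   # heavier-weighted session
sched.enqueue("B", size=100, weight=1)
print(sched.dequeue())                   # ('A', 100): finish 50 < 100
```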

Book
01 Jan 1996
TL;DR: In this book, the author presents a new agenda for cognitive development, covering evolution and cognitive development, cognitive variability, strategic development, the adaptivity of multiplicity, formal models of strategy choice, and how children generate new ways of thinking.
Abstract: 1. Whose Children are we Talking About? 2. Evolution and Cognitive Development 3. Cognitive Variability: The Ubiquity of Multiplicity 4. Strategic Development: Trudging up the Staircase or Swimming with the Tide 5. The Adaptivity of Multiplicity 6. Formal Models of Strategy Choice or Plasterers and Professors 7. How Children Generate New Ways of Thinking 8. A New Agenda for Cognitive Development

Journal ArticleDOI
04 Oct 1996-Science
TL;DR: The comprehension of visually presented sentences produces brain activation that increases with the linguistic complexity of the sentence, and the amount of neural activity that a given cognitive process engenders is dependent on the computational demand that the task imposes.
Abstract: The comprehension of visually presented sentences produces brain activation that increases with the linguistic complexity of the sentence. The volume of neural tissue activated (number of voxels) during sentence comprehension was measured with echo-planar functional magnetic resonance imaging. The modulation of the volume of activation by sentence complexity was observed in a network of four areas: the classical left-hemisphere language areas (the left laterosuperior temporal cortex, or Wernicke's area, and the left inferior frontal gyrus, or Broca's area) and their homologous right-hemisphere areas, although the right areas had much smaller volumes of activation than did the left areas. These findings generally indicate that the amount of neural activity that a given cognitive process engenders is dependent on the computational demand that the task imposes.

Journal ArticleDOI
TL;DR: The current study demonstrates the separability of spatial and verbal working memory resources among college students and demonstrates that both the processing and storage components of working memory tasks are important for predicting performance on spatial thinking and language processing tasks.
Abstract: The current study demonstrates the separability of spatial and verbal working memory resources among college students. In Experiment 1, we developed a spatial span task that taxes both the processing and storage components of spatial working memory. This measure correlates with spatial ability (spatial visualization) measures, but not with verbal ability measures. In contrast, the reading span test, a common test of verbal working memory, correlates with verbal ability measures, but not with spatial ability measures. Experiment 2, which uses an interference paradigm to cross the processing and storage demands of span tasks, replicates this dissociation and further demonstrates that both the processing and storage components of working memory tasks are important for predicting performance on spatial thinking and language processing tasks.

Journal ArticleDOI
TL;DR: In this paper, a unified framework for Ginzburg-Landau and Cahn-Hilliard type equations was developed using a balance law for microforces in conjunction with constitutive equations consistent with a mechanical version of the second law.
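For orientation, the standard forms that such a framework recovers are shown below in their textbook versions (the paper derives them from a microforce balance plus thermodynamically consistent constitutive equations):

```latex
% Ginzburg-Landau (Allen-Cahn) type equation for an order parameter \varphi:
\frac{\partial \varphi}{\partial t}
  = -K\left( f'(\varphi) - \varepsilon^2 \Delta\varphi \right)
% Cahn-Hilliard type equation, with chemical potential \mu:
\frac{\partial \varphi}{\partial t} = \nabla\cdot\!\left( M\,\nabla\mu \right),
\qquad \mu = f'(\varphi) - \varepsilon^2 \Delta\varphi
```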

Journal ArticleDOI
10 May 1996-Science
TL;DR: A radical polymerization process is reported that yields well-defined polymers normally obtained only through anionic polymerizations and that has all of the characteristics of a living polymerization.
Abstract: A radical polymerization process that yields well-defined polymers normally obtained only through anionic polymerizations is reported. Atom transfer radical polymerizations of styrene were conducted with several solubilizing ligands for the copper(I) halides: 4,4′-di-tert-butyl, 4,4′-di-n-heptyl, and 4,4′-di-(5-nonyl)-2,2′-dipyridyl. The resulting polymerizations have all of the characteristics of a living polymerization and displayed linear semilogarithmic kinetic plots, a linear correlation between the number-average molecular weight and the monomer conversion, and low polydispersities (ratio of the weight-average to number-average molecular weights of 1.04 to 1.05). Similar results were obtained for the polymerization of acrylates.
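Why the reported plots diagnose a living polymerization (standard living-polymerization kinetics; no specific values from the paper): a constant radical concentration gives first-order monomer consumption, hence a linear semilogarithmic plot, and negligible termination or transfer makes molecular weight grow linearly with conversion.

```latex
% First-order monomer consumption at constant radical concentration [P*]:
-\frac{d[\mathrm{M}]}{dt} = k_p [\mathrm{P}^\bullet][\mathrm{M}]
\;\Rightarrow\;
\ln\frac{[\mathrm{M}]_0}{[\mathrm{M}]} = k_p [\mathrm{P}^\bullet]\, t
% With negligible termination/transfer, every initiator starts one chain, so
M_n \approx \frac{[\mathrm{M}]_0}{[\mathrm{I}]_0}
  \times (\text{conversion}) \times M_{\mathrm{monomer}}
```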


Proceedings ArticleDOI
01 May 1996
TL;DR: A set of constraints intrinsic to mobile computing is described and its impact on the design of distributed systems is examined; key results of the Coda and Odyssey systems are summarized.
Abstract: This paper is an answer to the question: 'What is unique and conceptually different about mobile computing?' The paper begins by describing a set of constraints intrinsic to mobile computing, and examining the impact of these constraints on the design of distributed systems. Next, it summarizes the key results of the Coda and Odyssey systems. Finally, it describes the research opportunities in five important topics relevant to mobile computing: caching metrics, semantic callbacks and validators, resource revocation, analysis of adaptation, and global estimation from local observations.

Journal ArticleDOI
TL;DR: An adaptive statistical language model is described that successfully integrates long-distance linguistic information with other knowledge sources, showing the feasibility of incorporating many diverse knowledge sources in a single, unified statistical framework.