
Showing papers by "Brown University" published in 1996


Journal ArticleDOI
Claude Amsler, Michael Doser, Mario Antonelli, D. M. Asner, and 173 more (86 institutions)
TL;DR: This biennial Review summarizes much of particle physics, using data from previous editions plus new measurements to list, evaluate, and average measured particle properties.

12,798 citations


Journal ArticleDOI
TL;DR: Central issues of reinforcement learning are discussed, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state.
Abstract: This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
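To make two of the surveyed issues concrete (the exploration/exploitation trade-off and learning from delayed reinforcement), here is a minimal sketch that is not taken from the paper: tabular Q-learning, a standard algorithm of the kind the survey covers, with an epsilon-greedy policy on a small made-up chain MDP.

import random

class ChainMDP:
    """Hypothetical 5-cell corridor: 'right' eventually reaches a reward of +1.0,
    while 'left' gives an immediate +0.1 and ends the episode."""
    def __init__(self, n=5):
        self.n = n
        self.s = 0
    def reset(self):
        self.s = 0
        return self.s
    def actions(self, s):
        return ["left", "right"]
    def step(self, a):
        if a == "left":
            return self.s, 0.1, True          # small immediate reward, episode ends
        self.s += 1
        if self.s == self.n - 1:
            return self.s, 1.0, True          # larger but delayed reward
        return self.s, 0.0, False

def q_learning(env, episodes=2000, alpha=0.1, gamma=0.95, epsilon=0.1):
    q = {}                                    # (state, action) -> estimated return
    def best(s):
        return max(env.actions(s), key=lambda a: q.get((s, a), 0.0))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Exploration/exploitation: mostly act greedily, occasionally explore.
            a = random.choice(env.actions(s)) if random.random() < epsilon else best(s)
            s2, r, done = env.step(a)
            # Temporal-difference update: the delayed reward is propagated backwards.
            target = r + (0.0 if done else gamma * q.get((s2, best(s2)), 0.0))
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (target - q.get((s, a), 0.0))
            s = s2
    return q

q = q_learning(ChainMDP())
print(q[(0, "right")], q[(0, "left")])        # learned value of the delayed +1.0 exceeds the immediate +0.1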

6,895 citations


Posted Content
TL;DR: A survey of reinforcement learning from a computer science perspective can be found in this article, where the authors discuss the central issues of RL, including trading off exploration and exploitation, establishing the foundations of RL via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state.
Abstract: This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.

5,970 citations


Journal ArticleDOI
TL;DR: A new way of measuring the smoothness of a numerical solution is proposed, emulating the idea of minimizing the total variation of the approximation, which results in a fifth-order WENO scheme for the case r = 3, instead of the fourth-order scheme obtained with the original smoothness measurement by Liu et al.
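For concreteness, the following is a small illustrative sketch of a fifth-order WENO reconstruction of this kind; the smoothness indicators and ideal weights are the standard published ones for the r = 3 case, but the code itself is not from the paper.

def weno5_reconstruct(u, eps=1e-6):
    """Left-biased fifth-order WENO value at the cell face i+1/2, given the
    five cell averages u = (u[i-2], ..., u[i+2])."""
    um2, um1, u0, up1, up2 = u
    # Smoothness indicators: sums of squared scaled derivatives of the candidate
    # interpolants, emulating minimization of the total variation.
    b0 = 13/12*(um2 - 2*um1 + u0)**2 + 1/4*(um2 - 4*um1 + 3*u0)**2
    b1 = 13/12*(um1 - 2*u0 + up1)**2 + 1/4*(um1 - up1)**2
    b2 = 13/12*(u0 - 2*up1 + up2)**2 + 1/4*(3*u0 - 4*up1 + up2)**2
    # Third-order candidate reconstructions on the three substencils.
    q0 = (2*um2 - 7*um1 + 11*u0) / 6
    q1 = (-um1 + 5*u0 + 2*up1) / 6
    q2 = (2*u0 + 5*up1 - up2) / 6
    # Nonlinear weights: ideal weights (1/10, 6/10, 3/10) damped by the smoothness
    # indicators so that non-smooth substencils are effectively switched off.
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    return (a0*q0 + a1*q1 + a2*q2) / s

# Example: cell averages of a linear function are reconstructed exactly at the face.
print(weno5_reconstruct([1.0, 2.0, 3.0, 4.0, 5.0]))  # 3.5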

5,649 citations


Journal ArticleDOI
TL;DR: The distinction between rule-based and associative systems of reasoning is distilled and characterized: the associative system's computations reflect similarity structure and temporal contiguity, while the rule-based system operates on symbolic structures with logical content and variables, and its computations have the properties that are normally assigned to rules.
Abstract: Distinctions have been proposed between systems of reasoning for centuries. This article distills properties shared by many of these distinctions and characterizes the resulting systems in light of recent findings and theoretical developments. One system is associative because its computations reflect similarity structure and relations of temporal contiguity. The other is "rule based" because it operates on symbolic structures that have logical content and variables and because its computations have the properties that are normally assigned to rules. The systems serve complementary functions and can simultaneously generate different solutions to a reasoning problem. The rule-based system can suppress the associative system but not completely inhibit it. The article reviews evidence in favor of the distinction and its characterization. One of the oldest conundrums in psychology is whether people are best conceived as parallel processors of information who operate along diffuse associative links or as analysts who operate by deliberate and sequential manipulation of internal representations. Are inferences drawn through a network of learned associative pathways or through application of a kind of "psychologic" that manipulates symbolic tokens in a rule-governed way? The debate has raged (again) in cognitive psychology for almost a decade now. It has pitted those who prefer models of mental phenomena to be built out of networks of associative devices that pass activation around in parallel and distributed form (the way brains probably function) against those who prefer models built out of formal languages in which symbols are composed into sentences that are processed sequentially (the way computers function). An obvious solution to the conundrum is to conceive of the ...

3,488 citations


Journal ArticleDOI
TL;DR: An integrative conceptual model of child development is presented, anchored within social stratification theory and emphasizing the impact of racism, prejudice, discrimination, oppression, and segregation on the development of minority children and families.
Abstract: In this article a conceptual model for the study of child development in minority populations in the United States is proposed. In support of the proposed model, this article includes (a) a delineation and critical analysis of mainstream theoretical frameworks in relation to their attention and applicability to the understanding of developmental processes in children of color and of issues at the intersection of social class, culture, ethnicity, and race, and (b) a description and evaluation of the conceptual frameworks that have guided the extant literature on minority children and families. Based on the above considerations, an integrative conceptual model of child development is presented, anchored within social stratification theory, emphasizing the importance of racism, prejudice, discrimination, oppression, and segregation on the development of minority children and families.

2,333 citations


Journal ArticleDOI
TL;DR: In this paper, a Lagrangian finite element method for fracture and fragmentation in brittle materials is developed, in which a cohesive-law fracture model is used to propagate multiple cracks along arbitrary paths.

1,970 citations


Journal ArticleDOI
TL;DR: In this article, a finite element methodology is presented for the analysis of problems requiring the simultaneous resolution of continuum and atomistic length scales, and the associated deformation processes, in a unified manner.
Abstract: We develop a method which permits the analysis of problems requiring the simultaneous resolution of continuum and atomistic length scales-and associated deformation processes-in a unified manner. A finite element methodology furnishes a continuum statement of the problem of interest and provides the requisite multiple-scale analysis capability by adaptively refining the mesh near lattice defects and other highly energetic regions. The method differs from conventional finite element analyses in that interatomic interactions are incorporated into the model through a crystal calculation based on the local state of deformation. This procedure endows the model with crucial properties, such as slip invariance, which enable the emergence of dislocations and other lattice defects. We assess the accuracy of the theory in the atomistic limit by way of three examples: a stacking fault on the (111) plane, and edge dislocations residing on (111) and (100) planes of an aluminium single crystal. The method correctly predicts the splitting of the (111) edge dislocation into Shockley partials. The computed separation of these partials is consistent with results obtained by direct atomistic simulations. The method predicts no splitting of the Al Lomer dislocation, in keeping with observation and the results of direct atomistic simulation. In both cases, the core structures are found to be in good agreement with direct lattice statics calculations, which attests to the accuracy of the method at the atomistic scale.
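The coupling idea, deriving an element's continuum energy from an interatomic potential evaluated at the local state of deformation, can be sketched in one dimension. This is an illustrative toy (a Lennard-Jones chain with made-up parameters), not the authors' formulation.

def pair_potential(r, eps=1.0, sigma=1.0):
    # Lennard-Jones pair potential (an illustrative choice, not from the paper).
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def strain_energy_density(stretch, a0=2 ** (1.0 / 6.0), neighbors=5):
    """Energy per unit reference length of a uniformly stretched 1-D atomic chain.
    In a quasicontinuum-style calculation, a finite element whose local deformation
    gradient is `stretch` would be assigned an energy computed from the underlying
    interatomic potential in this spirit, rather than from an empirical continuum
    constitutive law."""
    e = 0.0
    for n in range(1, neighbors + 1):
        # The n-th neighbour sits at the deformed spacing n * stretch * a0.
        e += pair_potential(n * stretch * a0)
    return e / a0  # normalize per unit reference length

# Example: the energy rises as the chain is stretched away from its near-equilibrium spacing.
print(strain_energy_density(1.00), strain_energy_density(1.05))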

1,487 citations


Journal ArticleDOI
TL;DR: In patients with septic shock, treatment with the TNFR:Fc fusion protein does not reduce mortality, and higher doses appear to be associated with increased mortality.
Abstract: Background A recombinant, soluble fusion protein that is a dimer of an extracellular portion of the human tumor necrosis factor (TNF) receptor and the Fc portion of IgG1 (TNFR:Fc) binds and neutralizes TNF-α and prevents death in animal models of bacteremia and endotoxemia. Methods To evaluate the safety and efficacy of TNFR:Fc in the treatment of septic shock, we conducted a randomized, double-blind, placebo-controlled, multicenter trial. A total of 141 patients were randomly assigned to receive either placebo or a single intravenous infusion of one of three doses of TNFR:Fc (0.15, 0.45, or 1.5 mg per kilogram of body weight). The primary end point was mortality from all causes at 28 days. Results There were 10 deaths among the 33 patients in the placebo group (30 percent mortality), 9 deaths among the 30 patients receiving the low dose of TNFR:Fc (30 percent mortality), 14 deaths among the 29 receiving the middle dose (48 percent mortality), and 26 deaths among the 49 receiving the high dose (53 percent...

1,218 citations


Journal ArticleDOI
TL;DR: In this paper, the existence, uniqueness, and characterization of optimal maps are established for strictly convex costs, while for costs that are strictly concave functions of the distance the role of optimal maps and the uniqueness of optimal solutions are analyzed.
Abstract: Contents: Introduction; 1. Summary of main results; 2. Background on optimal measures. Part I. Strictly convex costs: 3. Existence and uniqueness of optimal maps; 4. Characterization of the optimal map. Part II. Costs which are strictly concave as a function of distance: 5. The role of optimal maps; 6. Uniqueness of optimal solutions. Part III. Appendices: A. Legendre transforms and conjugate costs; B. Examples of c-concave potentials; C. Regularity of c-concave potentials. References.

939 citations


Journal ArticleDOI
Abstract: We develop a framework for quantifying the amount of risk sharing among states in the United States, and construct data that allow us to decompose the cross-sectional variance in gross state product into several components which we refer to as levels of smoothing. We find that 39 percent of shocks to gross state product are smoothed by capital markets, 13 percent are smoothed by the federal government, and 23 percent are smoothed by credit markets. The remaining 25 percent are not smoothed. We also decompose the federal government smoothing into subcategories: taxes, transfers, and grants to states.
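As a schematic of the decomposition idea (an illustrative sketch with synthetic data, made-up variable names, and a plain OLS slope in place of the paper's full regression framework): the smoothing shares are the coefficients from regressing successive income differentials on the shock to gross state product, so they sum to one by construction.

import numpy as np

def smoothing_shares(dgsp, dsi, ddsi, dcons):
    """Channel-by-channel smoothing decomposition (sketch).
    Inputs are growth-rate series of gross state product, state income,
    disposable state income, and consumption, stacked over states and years."""
    def beta(y, x):
        return float(np.dot(x, y) / np.dot(x, x))   # OLS slope of y on x
    shares = {
        "capital markets":    beta(dgsp - dsi,   dgsp),
        "federal government": beta(dsi  - ddsi,  dgsp),
        "credit markets":     beta(ddsi - dcons, dgsp),
        "not smoothed":       beta(dcons,        dgsp),
    }
    # The four left-hand sides sum to dgsp, so the shares sum to one.
    return shares

# Synthetic example whose smoothing pattern is chosen to echo the shares reported above.
rng = np.random.default_rng(0)
dgsp  = rng.normal(size=1000)
dsi   = 0.61 * dgsp + 0.1 * rng.normal(size=1000)
ddsi  = 0.48 * dgsp + 0.1 * rng.normal(size=1000)
dcons = 0.25 * dgsp + 0.1 * rng.normal(size=1000)
print(smoothing_shares(dgsp, dsi, ddsi, dcons))

With the synthetic coefficients chosen above, the printed shares come out near the 39/13/23/25 split reported in the abstract.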

Journal ArticleDOI
Oded Galor
TL;DR: This article argued that the convergence controversy may reflect differences in perception regarding the viable set of competing testable hypotheses generated by existing growth theories, and that in contrast to the prevailing wisdom, the traditional neo-classical growth paradigm generates the club convergence hypothesis as well as the conditional convergence hypothesis.
Abstract: This essay suggests that the convergence controversy may reflect, in part, differences in perception regarding the viable set of competing testable hypotheses generated by existing growth theories. It argues that in contrast to the prevailing wisdom, the traditional neo-classical growth paradigm generates the club convergence hypothesis as well as the conditional convergence hypothesis. Furthermore, the inclusion of empirically significant variables such as human capital, income distribution, and fertility in conventional growth models, along with capital market imperfections, externalities, and non-convexities, strengthens the viability of club convergence as a competing hypothesis with conditional convergence.

Proceedings ArticleDOI
01 Aug 1996
TL;DR: The SKETCH application uses simple non-photorealistic rendering and a purely gestural interface based on simplified line drawings of primitives that allows all operations to be specified within the 3D world.
Abstract: Sketching communicates ideas rapidly through approximate visual images with low overhead (pencil and paper), no need for precision or specialized knowledge, and ease of low-level correction and revision. In contrast, most 3D computer modeling systems are good at generating arbitrary views of precise 3D models and support high-level editing and revision. The SKETCH application described in this paper attempts to combine the advantages of each in order to create an environment for rapidly conceptualizing and editing approximate 3D scenes. To achieve this, SKETCH uses simple non-photorealistic rendering and a purely gestural interface based on simplified line drawings of primitives that allows all operations to be specified within the 3D world.

Journal ArticleDOI
TL;DR: In this article, the authors compare the disconfirmation model to several other models of how prior beliefs influence current judgments and present data that provide support for the disconfirmation model.
Abstract: Two experiments provided evidence for a disconfirmation bias in argument evaluation such that arguments incompatible with prior beliefs are scrutinized longer, subjected to more extensive refutational analyses, and consequently are judged to be weaker than arguments compatible with prior beliefs. The idea that people are unable to evaluate evidence independently of prior beliefs has been documented elsewhere, including in the classic study by C. G. Lord, L. Ross, and M. R. Lepper (1979). The present findings contribute to this literature by specifying the processes by which prior beliefs affect the evaluation of evidence. The authors compare the disconfirmation model to several other models of how prior beliefs influence current judgments and present data that provide support for the disconfirmation model. Results indicate that whether a person's prior belief is accompanied by emotional conviction affects the magnitude and form of the disconfirmation bias. When evaluating an argument, can one assess its strength independently of one's prior belief in the conclusion? A good deal of evidence indicates the answer is an emphatic no (e.g., Batson, 1975; Chapman & Chapman, 1959; Darley & Gross, 1983; Geller & Pitz, 1968; Nisbett & Ross, 1980; Sherif & Hovland, 1961). This phenomenon, which we refer to as the prior belief effect, has important implications. Given two people, or groups, with opposing beliefs about a social, political, or scientific issue, the degree to which they will view relevant evidence as strong will differ. This difference, in turn, may result in a failure of the opposing parties to converge on any kind of meaningful agreement, and, under some circumstances, they may become more extreme in their beliefs. Perhaps the most renowned study documenting the prior belief effect is one conducted by Lord, Ross, and Lepper (1979); this study served as the starting point for our work. Lord et al. were concerned with people's evaluations of arguments about ...

Journal ArticleDOI
10 Jul 1996, JAMA
TL;DR: Recent data on HIV pathogenesis, methods to determine plasma HIV RNA, clinical trial data, and availability of new drugs point to the need for new approaches to treatment, and therapeutic approaches need to be updated as new data continue to emerge.
Abstract: Objective. —To provide clinical recommendations for antiretroviral therapy for human immunodeficiency virus (HIV) disease with currently (mid 1996) available drugs. When to start therapy, what to start with, when to change, and what to change to were addressed. Participants. —A 13-member panel representing international expertise in antiretroviral research and HIV patient care was selected by the International AIDS Society—USA. Evidence. —Available clinical and basic science data, including phase 3 controlled trials, clinical endpoint data, virologic and immunologic endpoint data, interim analyses, studies of HIV pathophysiology, and expert opinions of panel members were considered. Recommendations were limited to drugs available in mid 1996. Process. —For each question posed, 1 or more member(s) reviewed and presented available data. Recommendations were determined by group consensus (January 1996); revisions as warranted by new data were incorporated by group consensus (February-May 1996). Conclusions. —Recent data on HIV pathogenesis, methods to determine plasma HIV RNA, clinical trial data, and availability of new drugs point to the need for new approaches to treatment. Therapy is recommended based on CD4+cell count, plasma HIV RNA level, or clinical status. Preferred initial drug regimens include nucleoside combinations; at present protease inhibitors are probably best reserved for patients at higher progression risk. For treatment failure or drug intolerance, subsequent regimen considerations include reasons for changing therapy, available drug options, disease stage, underlying conditions, and concomitant medication(s). Therapy for primary (acute) infection, high-risk exposures to HIV, and maternal-to-fetal transmission are also addressed. Therapeutic approaches need to be updated as new data continue to emerge.

Journal ArticleDOI
TL;DR: The mechanisms of LTD and their possible relevance to hippocampal function are discussed.
Abstract: Long-term depression (LTD) is a lasting decrease in synaptic effectiveness that follows some types of electrical stimulation in the hippocampus. Two broad types of LTD may be distinguished. Heterosynaptic LTD can occur at synapses that are inactive, normally during high-frequency stimulation of a converging synaptic input. Homosynaptic LTD can occur at synapses that are activated, normally at low frequencies. Here we discuss the mechanisms of LTD and their possible relevance to hippocampal function.

Journal ArticleDOI
06 Jun 1996, Nature
TL;DR: In the visual cortex of light-deprived rats, LTP is enhanced and LTD diminished over a range of stimulation frequencies, and these effects can be reversed by as little as two days of light exposure, supporting the idea that a variable synaptic-modification threshold allows synaptic weights in neural networks to achieve a stable equilibrium.
Abstract: In many regions of the cerebral cortex, Ca2+ influx through NMDA (N-methyl-D-aspartate) sensitive glutamate receptors (NMDA receptors) can trigger two forms of synaptic plasticity: long-term depression (LTD) and long-term potentiation (LTP). LTD is induced by low levels of postsynaptic NMDA-receptor activation, for instance in response to low-frequency stimulation, whereas LTP is induced by the stronger activation that occurs following high-frequency stimulation. Theoretical studies have shown that the properties of synaptic LTD and LTP can account for many aspects of experience-dependent plasticity in the developing visual cortex, provided that the LTD-LTP crossover point (the modification threshold, theta(m)) varies as a function of the history of cortical activity. Here we provide direct experimental evidence that the value of theta(m) depends on sensory experience. We find in visual cortex of light-deprived rats that LTP is enhanced and LTD diminished over a range of stimulation frequencies, and that these effects can be reversed by as little as two days of light exposure. Our findings support the idea that a variable synaptic modification threshold allows synaptic weights in neural networks to achieve a stable equilibrium.
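A minimal sketch of the theoretical picture being tested (a BCM-style rule with a sliding modification threshold); the functional forms below are generic textbook choices, not taken from the paper.

import numpy as np

def bcm_update(w, x, theta_m, lr=0.01, tau=50.0):
    """One step of a BCM-style plasticity rule with a sliding modification threshold.
    Postsynaptic activity y below theta_m gives depression (LTD); y above theta_m
    gives potentiation (LTP).  theta_m tracks the recent average of y**2, so a
    period of low activity (e.g. light deprivation) lowers the threshold,
    enhancing LTP and diminishing LTD."""
    y = float(np.dot(w, x))
    w = w + lr * y * (y - theta_m) * x            # LTD/LTP crossover at theta_m
    theta_m = theta_m + (y ** 2 - theta_m) / tau  # threshold slides with activity history
    return w, theta_m

# Example: drive a synapse with weak inputs for a while; the threshold slides down.
w, theta = np.array([0.5, 0.5]), 1.0
for _ in range(200):
    w, theta = bcm_update(w, np.array([0.2, 0.2]), theta)
print(theta)  # much smaller than the initial value of 1.0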

Journal ArticleDOI
TL;DR: Two of the more fundamental ways in which molecules change their behavior when they are dissolved, as discussed in this paper, are that they can begin to exchange energy with the surrounding liquid and that they can induce their surroundings to rearrange so as to provide a significant stabilizing influence.
Abstract: Two of the more fundamental ways in which molecules change their behavior when they are dissolved are that they can begin to exchange energy with the surrounding liquid and they can induce their surroundings to rearrange so as to provide a significant stabilizing influence. The first of these is typified by the process of vibrational population relaxation of a vibrationally hot species. The second concept, critical to solution chemistry, is what is known as solvation. Both of these processes are sufficiently fundamental that one would really like to know, at the most mechanical and molecular level possible, just what events are required in order to make them happen. But how difficult is it going to be to extract such molecular detail from the complicated many-body dynamics? The most microscopic level of understanding one could ever hope to possess might seem far removed from the finely detailed dynamical information which is available routinely for individual isolated molecules and for molecule−molecule colli...

Journal ArticleDOI
TL;DR: In this paper, the authors present new, well-dated, multi-proxy records of past monsoon variation from three separate Arabian Sea sediment cores that span the last glacial maximum to late-Holocene.
Abstract: Previously published results suggest that the strength of the SW Indian Monsoon can vary significantly on century- to millennium time scales, an observation that has important implications for assessments of future climate and hydrologic change over densely populated portions of Asia. We present new, well-dated, multi-proxy records of past monsoon variation from three separate Arabian Sea sediment cores that span the last glacial maximum to late-Holocene. To a large extent, these records confirm earlier published suggestions that the monsoon strengthened in a series of abrupt events over the last deglaciation. However, our data provide a somewhat refined picture of when these events took place, and suggest the primacy of two abrupt increases in monsoon intensity, one between 13 and 12.5 ka, and the other between 10 and 9.5 ka. This conclusion is supported by the comparisons between our new marine data and published paleoclimatic records throughout the African-Asian monsoon region. The comparison of data sets further supports the assertion that maximum monsoon intensity lagged peak insolation forcing by about 3000 years, and extended from about 9.5 to 5.5 ka. The episodes of rapid monsoon intensification coincided with major shifts in North Atlantic-European surface temperatures and ice-sheet extent. This coincidence, coupled with new climate model experiments, suggests that the large land-sea thermal gradient needed to drive strong monsoons developed only after glacial conditions upstream of, and on, the Tibetan Plateau receded (cold North Atlantic sea-surface temperatures, European ice-sheets, and extensive Asian snow cover). It is likely that abrupt changes in seasonal soil hydrology were as important to past monsoon forcing as were abrupt snow-related changes in regional albedo. Our analysis suggests that the monsoon responded more linearly to insolation forcing after the disappearance of glacial boundary conditions, decreasing gradually after about 6 ka. Our data also support the possibility that significant century-scale decreases in monsoon intensity took place during the early to mid-Holocene period of enhanced monsoon strength, further highlighting the need to understand paleomonsoon dynamics before accurate assessments of future monsoon strength can be made.

Journal ArticleDOI
TL;DR: The epidemiological, clinical, and laboratory data have proved that garlic contains many biologically and pharmacologically important compounds that are beneficial to human health, protecting against cardiovascular, neoplastic, and several other diseases.
Abstract: Most studies on garlic during the past 15 years have been primarily in the fields of cardiovascular and cancer research. Cardiovascular studies have been mainly related to atherosclerosis, where effects were examined on serum cholesterol, LDL, HDL, and triglycerides. Although the studies were not consistent in relation to the dosage, standardization of garlic preparations, and period of treatment, most findings suggest that garlic decreases cholesterol and triglyceride levels in patients with increased levels of these lipids. Lowering of serum lipids by garlic ingestion may decrease the atherosclerosis process. The other major beneficial effect of garlic is due to its antithrombotic actions. This field of garlic research has been extensively studied. Garlic extracts and several garlic constituents demonstrate significant antithrombotic actions in both in vitro and in vivo systems. Allicin and adenosine are the most potent antiplatelet constituents of garlic because of their in vitro effects. Since both allicin and adenosine are rapidly metabolized in human blood and other tissues, it is doubtful that these compounds contribute to any antithrombotic actions in the body. In addition, ajoene also seems not to be an active antiplatelet principle, because it is not naturally present in garlic, garlic powders, or other commercial garlic preparations. Only a small amount of ajoene can be found in garlic oil-macerates; however, ajoene is being developed as a drug for treatment of thromboembolic disorders. Recent findings on the identification of potent enzyme inhibiting activities of adenosine deaminase and cyclic AMP phosphodiesterase in garlic extracts are interesting, and may have a significant role in the pharmacological actions in the body. Presence of such enzyme inhibitors in garlic may perhaps explain several clinical effects in the body, including the antithrombotic, vasodilatory, and anticancer actions. Epidemiological studies have suggested that garlic plays a significant role in the reduction of deaths caused by malignant diseases. This has led many investigators to examine garlic and garlic constituents for their antitumor and cytotoxic actions both in vitro and in laboratory animals. The data from these investigations suggest that garlic contains several potentially important agents that possess antitumor and anticarcinogenic properties. In summary, the epidemiological, clinical, and laboratory data have proved that garlic contains many biologically and pharmacologically important compounds, which are beneficial to human health, protecting against cardiovascular, neoplastic, and several other diseases. Numerous studies are in progress all over the world to develop effective and odorless garlic preparations, as well as to isolate the active principles that may be therapeutically useful.

Journal ArticleDOI
TL;DR: In vivo footprinting of the albumin enhancer in precursor gut endoderm shows that the transcriptionally silent but potentially active chromatin is characterized by occupancy of an HNF-3 site; the findings also provide insight into the evolutionary origin of different endodermal cell types.
Abstract: We have studied the initial development of pluripotent gut endoderm to hepatocytes using a tissue explant system from mouse embryos. We not only find cellular interactions that specify hepatic differentiation but also those that block hepatogenesis in regions of the endoderm that normally give rise to other tissues. The results implicate both positive and negative signaling in early hepatic specification. In vivo footprinting of the albumin enhancer in precursor gut endoderm shows that the transcriptionally silent but potentially active chromatin is characterized by occupancy of an HNF-3 site. Upon hepatic specification, a host of other factors bind nearby sites as the gene becomes active. Genes in pluripotent cells therefore may be marked for potential expression by entry points in chromatin, where additional factors bind during cell type specification. The findings also provide insight into the evolutionary origin of different endodermal cell types.

Posted Content
TL;DR: In this article, the authors present a model of growth and technology transfer based on the idea that technologies are specific to particular combinations of inputs, and argue that a model with appropriate technology and technology diffusion is more appealing, and has more realistic predictions for long-run convergence and growth, than either the standard neoclassical model or simple endogenous-growth models.
Abstract: We present a model of growth and technology transfer based on the idea that technologies are specific to particular combinations of inputs. We argue that this model is more realistic than the usual specification, in which an improvement in any technique for producing a given good improves all other techniques for producing that good. Our model implies that technology improvements will diffuse only slowly, even if there are no barriers to the flow of knowledge and no adoption costs. On the other hand, although our basic production technology is of the 'Ak' variety, technology diffusion implies that countries with identical policies and different initial incomes do eventually converge to the same level of per-capita income. We argue that a model with appropriate technology and technology diffusion is more appealing, and has more realistic predictions for long-run convergence and growth, than either the standard neoclassical model or simple endogenous-growth models.

Journal ArticleDOI
TL;DR: In this paper, the authors examined alternative generalized method of moments procedures for estimation of a stochastic autoregressive volatility model by Monte Carlo methods and provided guidelines that help achieve desirable small-sample properties in settings characterized by strong conditional heteroscedasticity and correlation among the moments.
Abstract: We examine alternative generalized method of moments procedures for estimation of a stochastic autoregressive volatility model by Monte Carlo methods. We document the existence of a tradeoff between the number of moments, or information, included in estimation and the quality, or precision, of the objective function used for estimation. Furthermore, an approximation to the optimal weighting matrix is used to explore the impact of the weighting matrix for estimation, specification testing, and inference procedures. The results provide guidelines that help achieve desirable small-sample properties in settings characterized by strong conditional heteroscedasticity and correlation among the moments.
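For readers unfamiliar with the mechanics, the sketch below shows the generic GMM criterion whose moment-selection and weighting-matrix choices the paper studies. It is illustrative only: a toy mean/variance problem, not the paper's stochastic autoregressive volatility model.

import numpy as np
from scipy.optimize import minimize

def gmm_objective(theta, moment_fn, data, W):
    """Generic GMM criterion g_bar(theta)' W g_bar(theta), where g_bar is the
    sample average of the moment conditions and W is the weighting matrix
    (identity, or an estimate of the inverse long-run covariance of the moments
    for the 'optimal' choice)."""
    g = moment_fn(theta, data)   # T x m array of moment conditions
    g_bar = g.mean(axis=0)
    return float(g_bar @ W @ g_bar)

# Toy usage: fit the mean and variance of a sample by matching the first two moments.
def moments(theta, x):
    mu, sig2 = theta
    return np.column_stack([x - mu, (x - mu) ** 2 - sig2])

x = np.random.default_rng(1).normal(loc=2.0, scale=3.0, size=5000)
W = np.eye(2)
est = minimize(gmm_objective, x0=[0.0, 1.0], args=(moments, x, W), method="Nelder-Mead")
print(est.x)  # approximately [2.0, 9.0]

Adding moment conditions enlarges g_bar and thus the information used in estimation, but the corresponding weighting matrix becomes harder to estimate precisely, which is the trade-off the paper documents.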

Proceedings ArticleDOI
01 May 1996
TL;DR: The problem of how actions should be chosen for mobile-robot navigation is formulated as a partially observable Markov decision process and its optimal solution is presented; since solving for the optimal control policy is intractable in general, a variety of heuristic control strategies are then explored and compared.
Abstract: Discrete Bayesian models have been used to model uncertainty for mobile-robot navigation, but the question of how actions should be chosen remains largely unexplored. This paper presents the optimal solution to the problem, formulated as a partially observable Markov decision process. Since solving for the optimal control policy is intractable, in general, it goes on to explore a variety of heuristic control strategies. The control strategies are compared experimentally, both in simulation and in runs on a robot.
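The bookkeeping underlying such a formulation is the belief-state update (a discrete Bayes filter over robot states); here is a minimal sketch with made-up numbers, not the paper's navigation model.

import numpy as np

def belief_update(b, a, o, T, O):
    """Discrete Bayes filter, i.e. the belief-state update of a POMDP.
    b    : length-n probability vector over states
    T[a] : n x n transition matrix, T[a][s, s2] = P(s2 | s, a)
    O[a] : n x m observation matrix, O[a][s2, o] = P(o | s2, a)
    Returns the posterior belief after taking action a and observing o."""
    predicted = b @ T[a]                 # prediction: sum_s b(s) P(s2 | s, a)
    posterior = predicted * O[a][:, o]   # correction: weight by P(o | s2, a)
    return posterior / posterior.sum()   # normalize

# Toy two-cell corridor: moving "forward" usually advances the robot, and the
# sensor usually reports the correct cell (all probabilities are made up).
T = {"forward": np.array([[0.2, 0.8],
                          [0.0, 1.0]])}
O = {"forward": np.array([[0.9, 0.1],
                          [0.2, 0.8]])}
b = np.array([1.0, 0.0])
b = belief_update(b, "forward", 1, T, O)
print(b)  # the belief has shifted strongly toward the second cell

A heuristic controller can then, for example, choose the action that would be optimal if the most likely state under the current belief were the true state.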

Journal ArticleDOI
Francis Bloch
TL;DR: In this paper, the authors analyzed a sequential game of coalition formation when the division of the coalitional surplus is fixed and the payoffs are defined relative to the whole coalition structure and showed that any core stable coalition structure can be attained as a stationary perfect equilibrium of the game.

Journal ArticleDOI
TL;DR: The role of grain boundaries in the magnetoresistance (MR) properties of the manganites has been investigated and the polycrystalline films show large MR over a wide temperature range down to 5 K.
Abstract: The role of grain boundaries in the magnetoresistance (MR) properties of the manganites has been investigated by comparing the properties of epitaxial and polycrystalline films of La0.67D0.33MnO3−δ (D = Ca, Sr, or vacancies). While the MR in the epitaxial films is strongly peaked near the ferromagnetic transition temperature and is very small at low temperatures, the polycrystalline films show large MR over a wide temperature range down to 5 K. The results are explained in terms of switching of magnetic domains in the grains and disorder-induced canting of Mn spins in the grain-boundary region.

Journal ArticleDOI
28 Jun 1996, Science
TL;DR: Pore fluids from the upper 60 meters of sediment 3000 meters below the surface of the tropical Atlantic indicate that the oxygen isotopic composition of seawater at this site during the last glacial maximum was 0.8 ± 0.1 per mil higher than it is today.
Abstract: Pore fluids from the upper 60 meters of sediment 3000 meters below the surface of the tropical Atlantic indicate that the oxygen isotopic composition (δ18O) of seawater at this site during the last glacial maximum was 0.8 ± 0.1 per mil higher than it is today. Combined with the δ18O change in benthic foraminifera from this region, the elevated ratio indicates that the temperature of deep water in the tropical Atlantic Ocean was 4°C colder during the last glacial maximum. Extrapolation from this site to a global average suggests that the ice volume contribution to the change in δ18O of foraminifera is 1.0 per mil, which partially reconciles the foraminiferal oxygen isotope record of tropical sea surface temperatures with estimates from Barbados corals and terrestrial climate proxies.

Journal ArticleDOI
TL;DR: Important relationships between cigarette smoking and psychiatric disorders among adolescents are suggested, particularly with regard to MDD, drug abuse/dependence, and disruptive behavior disorders.
Abstract: Objective To examine cross-sectionally and prospectively relationships of cigarette smoking with major depressive disorder (MDD), controlling for comorbidity in a community sample of adolescents. Method A representative sample of 1,709 adolescents (aged 14 through 18 years) was assessed by using semistructured diagnostic interviews on two occasions, approximately 1 year apart. Logistic regression analyses were used to examine cross-sectionally and prospectively the relationships between smoking and each psychiatric disorder, while statistically controlling for the other disorders. Results Cross-sectional analyses revealed significant relationships of drug abuse/dependence and disruptive behavior disorders with adolescent smoking, even after the co-occurrence of all other disorders was controlled. Prospectively, smoking was found to increase the risk of developing an episode of MDD and drug abuse/dependence, after adjusting for other disorders. Finally, only lifetime prevalence of MDD remained a significant predictor of smoking onset, after controlling for other disorders. Gender did not moderate any of the relationships between psychopathology and smoking. Conclusions The results suggest important relationships between cigarette smoking and psychiatric disorders among adolescents, particularly with regard to MDD, drug abuse/dependence, and disruptive behavior disorders. These findings have important clinical implications, both for psychiatric care and for smoking prevention and cessation efforts with adolescents.

Journal Article
TL;DR: In vivo neutralization studies of IFN-αβ, TNF, and IL-12 showed that each of these factors had NK and T cell-independent antiviral functions, as well as specific effects on NK cell responses.
Abstract: Murine cytomegalovirus (MCMV) infection of mice induces early cytokines. Although certain of these can directly inhibit viral replication, they also can promote defense by activating NK cells. MCMV induces IFN-αβ-dependent NK cell cytotoxicity and IL-12-dependent NK cell IFN-γ production. Studies were initiated to define cytokine-mediated NK and T cell-independent antiviral defense and specific cytokine-elicited NK cell responses during MCMV infections. IFN-αβ, TNF, IL-12, and IFN-γ were all shown to be induced 2 days after infection of immunocompetent mice. Infections of NK and T cell-deficient mice demonstrated that virus-induced IFN-αβ, TNF, and IL-12, but not IFN-γ, were produced independently of these populations, and that IL-12 production occurred in the absence of detectable IFN-γ. In vivo neutralization studies of IFN-αβ, TNF, and IL-12 showed that each of these factors had NK and T cell-independent antiviral functions, as well as specific effects on NK cell responses. Examination of NK cell cytotoxicity, blastogenesis, and IFN-γ production demonstrated that: IL-12 was required for NK cell IFN-γ production but not blastogenesis and cytotoxicity; IFN-αβ was necessary for NK cell blastogenesis and cytotoxicity but not IFN-γ production; and TNF facilitated IFN-γ production but inhibited NK cell cytotoxicity. This work defines the biologic consequences of early cytokine expression during viral infection.

Journal ArticleDOI
TL;DR: It is concluded that olfactory memory has a variety of important distinguishing characteristics, but that more data are needed to confer this distinction.
Abstract: We critically review the cognitive literature on olfactory memory and identify the similarities and differences between odor memory and visual-verbal memory. We then analyze this literature using criteria from a multiple memory systems approach to determine whether olfactory memory can be considered to be a separate memory system. We conclude that olfactory memory has a variety of important distinguishing characteristics, but that more data are needed to confer this distinction. We suggest methods for the study of olfactory memory that should make a resolution on the separate memory system hypothesis possible while simultaneously advancing a synthetic understanding of olfaction and cognition.