
Showing papers by "University of California, Santa Cruz" published in 1999


Journal ArticleDOI
TL;DR: Emerging evidence shows that most species are declining and are being replaced by a much smaller number of expanding species that thrive in human-altered environments, leading to a more homogenized biosphere with lower diversity at regional and global scales.
Abstract: Human activities are not random in their negative and positive impacts on biotas. Emerging evidence shows that most species are declining as a result of human activities ('losers') and are being replaced by a much smaller number of expanding species that thrive in human-altered environments ('winners'). The result will be a more homogenized biosphere with lower diversity at regional and global scales. Recent data also indicate that the many losers and few winners tend to be non-randomly distributed among higher taxa and ecological groups, enhancing homogenization.

2,283 citations


Journal ArticleDOI
TL;DR: In this paper, the authors explore the continued evolution of rotating helium stars, Mα ≳ 10 M☉, in which iron-core collapse does not produce a successful outgoing shock but instead forms a black hole of 2-3 M☉.
Abstract: Using a two-dimensional hydrodynamics code (PROMETHEUS), we explore the continued evolution of rotating helium stars, Mα ≳ 10 M☉, in which iron-core collapse does not produce a successful outgoing shock but instead forms a black hole of 2-3 M☉. The model explored in greatest detail is the 14 M☉ helium core of a 35 M☉ main-sequence star. The outcome is sensitive to the angular momentum. For j16 ≡ j/(10^16 cm^2 s^-1) ≲ 3, material falls into the black hole almost uninhibited. No outflows are expected. For j16 ≳ 20, the infalling matter is halted by centrifugal force outside 1000 km, where neutrino losses are negligible. The equatorial accretion rate is very low, and explosive oxygen burning may power a weak equatorial explosion. For 3 ≲ j16 ≲ 20, however, a reasonable value for such stars, a compact disk forms at a radius at which the gravitational binding energy can be efficiently radiated as neutrinos or converted to beamed outflow by magnetohydrodynamic (MHD) processes. These are the best candidates for producing gamma-ray bursts (GRBs). Here we study the formation of such a disk, the associated flow patterns, and the accretion rate for disk viscosity parameter α ≈ 0.001 and 0.1. Infall along the rotational axis is initially uninhibited, and an evacuated channel opens during the first few seconds. Meanwhile the black hole is spun up by the accretion (to a ≈ 0.9), and energy is dissipated in the disk by MHD processes and radiated by neutrinos. For the α = 0.1 model, appreciable energetic outflows develop between polar angles of 30° and 45°. These outflows, powered by viscous dissipation in the disk, have an energy of up to a few times 10^51 ergs and a mass of ~1 M☉ and are rich in 56Ni. They constitute a supernova-like explosion by themselves. Meanwhile, accretion through the disk is maintained for approximately 10-20 s but is time variable (±30%) because of hydrodynamical instabilities at the outer edge, in a region where nuclei are experiencing photodisintegration.
Because the efficiency of neutrino energy deposition is sensitive to the accretion rate, this instability leads to highly variable energy deposition in the polar regions. Some of this variability, which has significant power at 50 ms and overtones, may persist in the time structure of the burst. During the time followed, the average accretion rate for the standard α = 0.1 and j16 = 10 model is 0.07 M☉ s^-1. The total energy deposited along the rotational axes by neutrino annihilation is (1-14) × 10^51 ergs, depending upon the evolution of the Kerr parameter and uncertain neutrino efficiencies. Simulated deposition of energy in the polar regions, at a constant rate of 5 × 10^50 ergs s^-1 per pole, results in strong relativistic outflow jets beamed to about 1% of the sky. These jets may be additionally modulated by instabilities in the sides of the "nozzle" through which they flow. The jets blow aside the accreting material, remain highly focused, and are capable of penetrating the star in ~10 s. After the jet breaks through the surface of the star, highly relativistic flow can emerge. Because of the sensitivity of the mass ejection and jets to accretion rate, angular momentum, and disk viscosity, and the variation of observational consequences with viewing angle, a large range of outcomes is possible, from bright GRBs like GRB 971214 to faint GRB-supernovae like SN 1998bw. X-ray precursors are also possible as the jet first breaks out of the star. While only a small fraction of supernovae make GRBs, we predict that collapsars will always make supernovae similar to SN 1998bw. However, hard, energetic GRBs shorter than a few seconds will be difficult to produce in this model and may require merging neutron stars and black holes for their explanation.
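The three angular-momentum regimes described above can be condensed into a toy classifier. This is illustrative only, not the paper's code; the boundaries are approximate, quoted in the abstract as j16 ≲ 3 and j16 ≳ 20:

```python
def collapsar_regime(j16):
    """Toy summary of the collapsar regimes, keyed on j16 = j / (10^16 cm^2 s^-1).

    Boundary values are approximate; the abstract quotes them as
    j16 <~ 3 and j16 >~ 20.
    """
    if j16 < 3:
        return "infall nearly uninhibited; no outflow expected"
    elif j16 <= 20:
        return "compact disk forms; best candidate for a GRB"
    else:
        return "infall halted by centrifugal force outside 1000 km"
```

For example, the standard j16 = 10 model discussed above falls in the disk-forming, GRB-candidate regime.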

2,209 citations


Journal ArticleDOI
TL;DR: This paper argues that the total impact of an invader includes three fundamental dimensions: range, abundance, and the per-capita or per-biomass effect of the invader. It reviews previous approaches to measuring impact at different organizational levels and suggests some new approaches.
Abstract: Although ecologists commonly talk about the impacts of nonindigenous species, little formal attention has been given to defining what we mean by impact, or connecting ecological theory with particular measures of impact. The resulting lack of generalizations regarding invasion impacts is more than an academic problem; we need to be able to distinguish invaders with minor effects from those with large effects in order to prioritize management efforts. This paper focuses on defining, evaluating, and comparing a variety of measures of impact drawn from empirical examples and theoretical reasoning. We begin by arguing that the total impact of an invader includes three fundamental dimensions: range, abundance, and the per-capita or per-biomass effect of the invader. Then we summarize previous approaches to measuring impact at different organizational levels, and suggest some new approaches. Reviewing mathematical models of impact, we argue that theoretical studies using community assembly models could act as a basis for better empirical studies and monitoring programs, as well as provide a clearer understanding of the relationship among different types of impact. We then discuss some of the particular challenges that come from the need to prioritize invasive species in a management or policy context. We end with recommendations about how the field of invasion biology might proceed in order to build a general framework for understanding and predicting impacts. In particular, we advocate studies designed to explore the correlations among different measures: Are the results of complex multivariate methods adequately captured by simple composite metrics such as species richness? How well are impacts on native populations correlated with impacts on ecosystem functions? Are there useful bioindicators for invasion impacts? To what extent does the impact of an invasive species depend on the system in which it is measured? 
Three approaches would provide new insights in this line of inquiry: (1) studies that measure impacts at multiple scales and multiple levels of organization, (2) studies that synthesize currently available data on different response variables, and (3) models designed to guide empirical work and explore generalities.
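The three impact dimensions named in the abstract are often combined multiplicatively (range × abundance × per-capita effect). A minimal sketch of that composite measure, with made-up numbers for illustration:

```python
def total_impact(range_area, mean_abundance, per_capita_effect):
    # Composite measure: impact as the product of the three dimensions
    # named in the abstract. Units are whatever the study chooses.
    return range_area * mean_abundance * per_capita_effect

# Hypothetical invader: 1e4 km^2 range, 50 individuals per km^2,
# each suppressing a native response variable by 0.02 units.
impact = total_impact(1e4, 50.0, 0.02)  # -> 10000.0
```

The multiplicative form makes the point in the text concrete: an invader scoring low on any one dimension (tiny range, sparse abundance, or negligible per-capita effect) has low total impact even if the other dimensions are large.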

1,821 citations


Journal ArticleDOI
05 Aug 1999-Nature
TL;DR: It appears that the decline and disappearance of the coyote, in conjunction with the effects of habitat fragmentation, affect the distribution and abundance of smaller carnivores and the persistence of their avian prey.
Abstract: Mammalian carnivores are particularly vulnerable to extinction in fragmented landscapes [1], and their disappearance may lead to increased numbers of smaller carnivores that are principal predators of birds and other small vertebrates. Such 'mesopredator release' [2] has been implicated in the decline and extinction of prey species [2-6]. Because experimental manipulation of carnivores is logistically, financially and ethically problematic [6,7], however, few studies have evaluated how trophic cascades generated by the decline of dominant predators combine with other fragmentation effects to influence species diversity in terrestrial systems. Although the mesopredator release hypothesis has received only limited critical evaluation [8] and remains controversial [9], it has become the basis for conservation programmes justifying the protection of carnivores [6]. Here we describe a study that exploits spatial and temporal variation in the distribution and abundance of an apex predator, the coyote, in a landscape fragmented by development. It appears that the decline and disappearance of the coyote, in conjunction with the effects of habitat fragmentation, affect the distribution and abundance of smaller carnivores and the persistence of their avian prey.

1,552 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigate several different recipes for star formation and supernova feedback, including choices that are similar to the treatment in Kauffmann, White & Guiderdoni (1993) and Cole et al. (1994) as well as some new recipes.
Abstract: Using semi-analytic models of galaxy formation, we investigate galaxy properties such as the Tully-Fisher relation, the B and K-band luminosity functions, cold gas contents, sizes, metallicities, and colours, and compare our results with observations of local galaxies. We investigate several different recipes for star formation and supernova feedback, including choices that are similar to the treatment in Kauffmann, White & Guiderdoni (1993) and Cole et al. (1994) as well as some new recipes. We obtain good agreement with all of the key local observations mentioned above. In particular, in our best models, we simultaneously produce good agreement with both the observed B and K-band luminosity functions and the I-band Tully-Fisher relation. Improved cooling and supernova feedback modelling, inclusion of dust extinction, and an improved Press-Schechter model all contribute to this success. We present results for several variants of the CDM family of cosmologies, and find that models with values of Ω0 ≃ 0.3–0.5 give the best agreement with observations.

1,061 citations


Journal ArticleDOI
TL;DR: The availability of high-resolution structures has facilitated a more detailed understanding of the complex chaperone machinery and mechanisms, including the ATP-dependent reaction cycles of the GroEL and HSP70 chaperones.
Abstract: The folding of most newly synthesized proteins in the cell requires the interaction of a variety of protein cofactors known as molecular chaperones. These molecules recognize and bind to nascent polypeptide chains and partially folded intermediates of proteins, preventing their aggregation and misfolding. There are several families of chaperones; those most involved in protein folding are the 40-kDa heat shock protein (HSP40; DnaJ), 60-kDa heat shock protein (HSP60; GroEL), and 70-kDa heat shock protein (HSP70; DnaK) families. The availability of high-resolution structures has facilitated a more detailed understanding of the complex chaperone machinery and mechanisms, including the ATP-dependent reaction cycles of the GroEL and HSP70 chaperones. For both of these chaperones, the binding of ATP triggers a critical conformational change leading to release of the bound substrate protein. Whereas the main role of the HSP70/HSP40 chaperone system is to minimize aggregation of newly synthesized proteins, the HSP60 chaperones also facilitate the actual folding process by providing a secluded environment for individual folding molecules and may also promote the unfolding and refolding of misfolded intermediates.

1,052 citations


Journal ArticleDOI
TL;DR: It is demonstrated that this nanopore behaves as a detector that can rapidly discriminate between pyrimidine and purine segments along an RNA molecule.

1,044 citations


Journal ArticleDOI
TL;DR: In this article, a variety of different alkoxyamine structures led to α-hydrido derivatives based on a 2,2,5-trimethyl-4-phenyl-3-azahexane-3-oxy, 1, skeleton, which were able to control the polymerization of styrene, acrylate, acrylamide, and acrylonitrile based monomers.
Abstract: Examination of novel alkoxyamines has demonstrated the pivotal role that the nitroxide plays in mediating the “living” or controlled polymerization of a wide range of vinyl monomers. Surveying a variety of different alkoxyamine structures led to α-hydrido derivatives based on a 2,2,5-trimethyl-4-phenyl-3-azahexane-3-oxy, 1, skeleton which were able to control the polymerization of styrene, acrylate, acrylamide, and acrylonitrile based monomers. For each monomer set, the molecular weight could be controlled from 1000 to 200 000 amu with polydispersities typically 1.05−1.15. Block and random copolymers based on combinations of the above monomers could also be prepared with similar control. In comparison with 2,2,6,6-tetramethylpiperidinoxy (TEMPO), these new systems represent a dramatic increase in the range of monomers that can be polymerized under controlled conditions and overcome many of the limitations associated with nitroxide-mediated “living” free radical procedures. Monomer selection and functional...

1,018 citations


Journal ArticleDOI
01 Nov 1999-Icarus
TL;DR: In this article, the authors used a smoothed particle hydrodynamics method to simulate colliding rocky and icy bodies from centimeter scale to hundreds of kilometers in diameter in an effort to define self-consistently the threshold for catastrophic disruption.

831 citations


Journal ArticleDOI
TL;DR: In this article, the authors used a numerical model for relativistic disk accretion to study steady-state accretion at the high rates invoked by a variety of current models of gamma-ray bursts (GRBs).
Abstract: A variety of current models of gamma-ray bursts (GRBs) suggest a common engine: a black hole of several solar masses accreting matter from a disk at a rate of 0.01 to 10 M☉ s^-1. Using a numerical model for relativistic disk accretion, we have studied steady state accretion at these high rates. Outside about 10^8 cm, the disk is advection dominated; energy released by dissipation is carried in by the optically thick gas, and the disk does not cool. Inside this radius, for accretion rates greater than about 0.01 M☉ s^-1, a global state of balanced power comes to exist between neutrino losses, chiefly pair capture on nucleons, and dissipation. As a result of these losses, the temperature is reduced, the density is raised, and the disk scale height is reduced compared to the advective solution. The sudden onset of neutrino losses (due to the high temperature dependence) and photodisintegration leads to an abrupt thinning of the disk that may provide a favorable geometry for jet production. The inner disk remains optically thin to neutrinos for accretion rates of up to about 1 M☉ s^-1. The energy emitted in neutrinos is less, and in the case of low accretion rates, very much less, than the maximum efficiency factor for black hole accretion (0.057 for no rotation; 0.42 for extreme Kerr rotation) times the accretion rate times c^2. Neutrino temperatures at the last stable orbit range from 2 MeV (no rotation, slow accretion) to 13 MeV (Kerr geometry, rapid accretion), and the density ranges from 10^9 to 10^12 g cm^-3. The efficiency for producing a pair fireball along the rotational axis by neutrino annihilation is calculated and found to be highly variable and very sensitive to the accretion rate. For some of the higher accretion rates studied, it can be several percent or more; for accretion rates less than 0.05 M☉ s^-1, it is essentially zero. The efficiency of the Blandford-Znajek mechanism in extracting rotational energy from the black hole is also estimated.
In light of these results, the viability of various gamma-ray burst models is discussed, and the sensitivity of the results to disk viscosity, black hole rotation rate, and black hole mass is explored. A diverse range of GRB energies seems unavoidable, and neutrino annihilation in hyperaccreting black hole systems can explain bursts of up to 10^52 ergs. Larger energies can be inferred for beamed systems.
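The efficiency bound quoted above (neutrino output below η Ṁ c^2, with η = 0.057 for a non-rotating hole and 0.42 for extreme Kerr) is easy to evaluate numerically. A back-of-envelope sketch with illustrative inputs:

```python
C_CM_S = 2.998e10   # speed of light, cm/s
MSUN_G = 1.989e33   # solar mass, g

def max_neutrino_luminosity(eta, mdot_msun_per_s):
    # Upper bound eta * Mdot * c^2 from the abstract, in erg/s.
    return eta * mdot_msun_per_s * MSUN_G * C_CM_S**2

# Non-rotating hole accreting 0.1 Msun/s: bound of order 1e52 erg/s.
l_max = max_neutrino_luminosity(0.057, 0.1)
```

As the abstract stresses, the actual neutrino emission sits at or (for low accretion rates) far below this bound.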

767 citations


Journal ArticleDOI
TL;DR: The core-assisted mesh protocol (CAMP) is introduced for multicast routing in ad hoc networks, which generalizes the notion of core-based trees introduced for internet multicasting into multicast meshes that have much richer connectivity than trees.
Abstract: The core-assisted mesh protocol (CAMP) is introduced for multicast routing in ad hoc networks. CAMP generalizes the notion of core-based trees introduced for internet multicasting into multicast meshes that have much richer connectivity than trees. A shared multicast mesh is defined for each multicast group; the main goal of using such meshes is to maintain the connectivity of multicast groups even while network routers move frequently. CAMP consists of the maintenance of multicast meshes and loop-free packet forwarding over such meshes. Within the multicast mesh of a group, packets from any source in the group are forwarded along the reverse shortest path to the source, just as in traditional multicast protocols based on source-based trees. CAMP guarantees that within a finite time, every receiver of a multicast group has a reverse shortest path to each source of the multicast group. Multicast packets for a group are forwarded along the shortest paths from sources to receivers defined within the group's mesh. CAMP uses cores only to limit the traffic needed for a router to join a multicast group; the failure of cores does not stop packet forwarding or the process of maintaining the multicast meshes.
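The reverse-shortest-path forwarding rule in the abstract can be sketched on a toy mesh. The graph, node names, and functions below are invented for illustration; this is not the CAMP implementation:

```python
from collections import deque

def bfs_hops(mesh, source):
    # Hop counts from `source` over the mesh (unweighted shortest paths).
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in mesh[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def forwards(mesh, source, u, v):
    # Node u forwards a packet originated at `source` to neighbour v only
    # if the hop u -> v extends a shortest path away from the source, i.e.
    # v lies on the reverse shortest path from some receiver back to source.
    dist = bfs_hops(mesh, source)
    return dist.get(v) == dist.get(u, -2) + 1

# Toy mesh: richer connectivity than a tree, as in CAMP's shared meshes.
mesh = {
    "s": ["a", "b"],
    "a": ["s", "b", "r"],
    "b": ["s", "a", "r"],
    "r": ["a", "b"],
}
```

In this toy mesh, node "a" forwards packets from "s" toward receiver "r" (one hop farther from the source) but not sideways to "b" (same hop count), which keeps forwarding loop-free even though the mesh itself contains cycles.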

Journal ArticleDOI
24 Sep 1999-Science
TL;DR: Structures of 70S ribosome complexes containing messenger RNA and transfer RNA (tRNA), or tRNA analogs, have been solved by x-ray crystallography at up to 7.8 angstrom resolution.
Abstract: Structures of 70S ribosome complexes containing messenger RNA and transfer RNA (tRNA), or tRNA analogs, have been solved by x-ray crystallography at up to 7.8 angstrom resolution. Many details of the interactions between tRNA and the ribosome, and of the packing arrangements of ribosomal RNA (rRNA) helices in and between the ribosomal subunits, can be seen. Numerous contacts are made between the 30S subunit and the P-tRNA anticodon stem-loop; in contrast, the anticodon region of A-tRNA is much more exposed. A complex network of molecular interactions suggestive of a functional relay is centered around the long penultimate stem of 16S rRNA at the subunit interface, including interactions involving the “switch” helix and decoding site of 16S rRNA, and RNA bridges from the 50S subunit.

Journal ArticleDOI
TL;DR: The results indicate a potentially widespread cost of sexual selection caused by conflicts inherent to promiscuity: under enforced monogamy, males evolved to be less harmful to their mates, and females evolved to be less resistant to male-induced harm.
Abstract: Although sexual selection can provide benefits to both sexes, it also can be costly because of expanded opportunities for intersexual conflict. We evaluated the role of sexual selection in a naturally promiscuous species, Drosophila melanogaster. In two replicate populations, sexual selection was removed through enforced monogamous mating with random mate assignment or retained in promiscuous controls. Monogamous mating constrains the reproductive success of mates to be identical, thereby converting prior conflicts between mates into opportunities for mutualism. Random mate assignment removes the opportunity for females to choose beneficial qualities in their mate. The mating treatments were maintained for 47 generations, and evolution was allowed to proceed naturally within the parameters of the design. In the monogamous populations, males evolved to be less harmful to their mates, and females evolved to be less resistant to male-induced harm. The monogamous populations also evolved a greater net reproductive rate than their promiscuous controls. These results indicate a potentially widespread cost of sexual selection caused by conflicts inherent to promiscuity.

Journal ArticleDOI
TL;DR: In this paper, a series of two-dimensional core-collapse supernova simulations for a range of progenitor masses and different input physics were presented, and the current accuracy of the models was compared to the observations.
Abstract: We present a series of two-dimensional core-collapse supernova simulations for a range of progenitor masses and different input physics. These models predict a range of supernova energies and compact remnant masses. In particular, we study two mechanisms for black hole formation: prompt collapse and delayed collapse owing to fallback. For massive progenitors (greater than 20 M☉), after a hydrodynamic time for the helium core (a few minutes to a few hours), fallback drives the compact object beyond the maximum neutron star mass, causing it to collapse into a black hole. With the current accuracy of the models, progenitors more massive than 40 M☉ form black holes directly with no supernova explosion (if rotating, these black holes may be the progenitors of gamma-ray bursts). We calculate the mass distribution of black holes formed and compare these predictions to the observations, which represent a small biased subset of the black hole population. Uncertainties in these estimates are discussed.

Journal ArticleDOI
28 Oct 1999-Nature
TL;DR: In this article, a series of computer simulations of the Earth's dynamo illustrates how the thermal structure of the lowermost mantle might affect convection and magnetic field generation in the fluid core.
Abstract: A series of computer simulations of the Earth's dynamo illustrates how the thermal structure of the lowermost mantle might affect convection and magnetic-field generation in the fluid core. Eight different patterns of heat flux from the core to the mantle are imposed over the core–mantle boundary. Spontaneous magnetic dipole reversals and excursions occur in seven of these cases, although sometimes the field only reverses in the outer part of the core, and then quickly reverses back. The results suggest correlations among the frequency of reversals, the duration over which the reversals occur, the magnetic-field intensity and the secular variation. The case with uniform heat flux at the core–mantle boundary appears most ‘Earth-like’. This result suggests that variations in heat flux at the core–mantle boundary of the Earth are smaller than previously thought, possibly because seismic velocity anomalies in the lowermost mantle might have more of a compositional rather than thermal origin, or because of enhanced heat flux in the mantle's zones of ultra-low seismic velocity.

Journal ArticleDOI
TL;DR: In this paper, the authors compare the reliability of cosmological gas-dynamical simulations of clusters in the simplest astrophysically relevant case, that in which the gas is assumed to be nonradiative.
Abstract: We have simulated the formation of an X-ray cluster in a cold dark matter universe using 12 different codes. The codes span the range of numerical techniques and implementations currently in use, including smoothed particle hydrodynamics (SPH) and grid methods with fixed, deformable, or multilevel meshes. The goal of this comparison is to assess the reliability of cosmological gasdynamical simulations of clusters in the simplest astrophysically relevant case, that in which the gas is assumed to be nonradiative. We compare images of the cluster at different epochs, global properties such as mass, temperature and X-ray luminosity, and radial profiles of various dynamical and thermodynamical quantities. On the whole, the agreement among the various simulations is gratifying, although a number of discrepancies exist. Agreement is best for properties of the dark matter and worst for the total X-ray luminosity. Even in this case, simulations that adequately resolve the core radius of the gas distribution predict total X-ray luminosities that agree to within a factor of 2. Other quantities are reproduced to much higher accuracy. For example, the temperature and gas mass fraction within the virial radius agree to within about 10%, and the ratio of specific dark matter kinetic to gas thermal energies agree to within about 5%. Various factors, including differences in the internal timing of the simulations, contribute to the spread in calculated cluster properties. Based on the overall consistency of results, we discuss a number of general properties of the cluster we have modeled.

Journal ArticleDOI
TL;DR: The authors study the physical origin of the low-redshift Lyα forest in hydrodynamic simulations of four cosmological models, all variants of the cold dark matter scenario.
Abstract: We study the physical origin of the low-redshift Lyα forest in hydrodynamic simulations of four cosmological models, all variants of the cold dark matter scenario. Our most important conclusions are insensitive to the cosmological model, but they depend on our assumption that the UV background declines at low redshift in concert with the declining population of quasar sources. We find that the expansion of the universe drives rapid evolution of dN/dz (the number of absorbers per unit redshift above a specified equivalent width threshold) at z ≳ 1.7, but that at lower redshift the fading of the UV background counters the influence of expansion, leading to slow evolution of dN/dz. The draining of gas from low-density regions into collapsed structures has a mild but not negligible effect on the evolution of dN/dz, especially for high equivalent-width thresholds. At every redshift, weaker lines come primarily from moderate fluctuations of the diffuse, unshocked intergalactic medium (IGM), and stronger lines originate in shocked or radiatively cooled gas of higher overdensity. However, the neutral hydrogen column density associated with structures of fixed overdensity drops as the universe expands, so an absorber at z = 0 is dynamically analogous to an absorber that has column density 10-50 times higher at z = 2-3. In particular, the mildly overdense IGM fluctuations that dominate the Lyα forest opacity at z > 2 produce optically thin lines at z < 1, while the marginally saturated (N_HI ~ 10^14.5 cm^-2) lines at z < 1 typically arise in gas that is overdense by a factor of 20-100. We find no clear distinction between lines arising in galaxy halos and lines arising in larger scale structures; however, galaxies tend to lie near the dense regions of the IGM that are responsible for strong Lyα lines.
The simulations provide a unified physical picture that accounts for the most distinctive observed properties of the low-redshift Lyα forest: (1) a sharp transition in the evolution of dN/dz at z ~ 1.7, (2) stronger evolution for absorbers of higher equivalent width, (3) a correlation of increasing Lyα equivalent width with decreasing galaxy impact parameter that extends to r_p ~ 500 h^-1 kpc, and (4) a tendency for stronger lines to arise in close proximity to galaxies while weaker lines trace more diffuse large-scale structure.

Journal ArticleDOI
TL;DR: In this article, the authors present the results of numerical simulations of protostellar accretion disks that are perturbed by a protoplanetary companion that has a much smaller mass than the central object.
Abstract: We present the results of numerical simulations of protostellar accretion disks that are perturbed by a protoplanetary companion that has a much smaller mass than the central object. We consider the limiting cases where the companion is in a coplanar, circular orbit and is initially embedded in the disk. Three independent numerical schemes are employed, and generic features of the flow are found in each case. In the first series of idealized models, the secondary companion is modeled as a massless, orbiting sink hole able to absorb all matter incident upon it without exerting any gravitational torque on the disk. In these simulations, accretion onto the companion induces surface density depression and gap formation centered on its orbital radius. After an initial transitory adjustment, the accretion rate onto the sink hole becomes regulated by the rate at which viscous evolution of the disk can cause matter to diffuse into the vicinity of the sink hole orbit, and thus the sink hole grows on a disk viscous timescale. In the second series of comprehensive models, the companion's gravity is included. When the tidal torque exerted by the companion on the disk becomes important, the angular momentum exchange between the companion and the disk causes the protoplanetary accretion rate to drop markedly below that in the idealized sink hole models. Whether this process is effective in inhibiting protoplanetary accretion depends on the equation of state and disk model parameters. For polytropic or isothermal equations of state, we find, in basic agreement with earlier work, that when the mean Roche lobe radius of the companion exceeds the disk thickness and when the mass ratio, q, between the companion and the central object exceeds ~40/R, where R is the effective Reynolds number, a clean deep gap forms. Although precise estimation is rendered difficult by the limitation of numerical schemes in dealing with large density contrasts, the generic results of the three series of simulations indicate that accretion onto sufficiently massive protoplanets can become ineffective over the expected disk lifetimes in their neighborhood. We suggest that such a process operates during planetary formation and is important in determining the final mass of giant planets.

Proceedings Article
06 Aug 1999
TL;DR: A new method, called the Fisher kernel method, for detecting remote protein homologies is introduced and shown to perform well in classifying protein domains by SCOP superfamily.
Abstract: A new method, called the Fisher kernel method, for detecting remote protein homologies is introduced and shown to perform well in classifying protein domains by SCOP superfamily. The method is a variant of support vector machines using a new kernel function. The kernel function is derived from a hidden Markov model. The general approach of combining generative models like HMMs with discriminative methods such as support vector machines may have applications in other areas of biosequence analysis as well.
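The core idea, a kernel built from gradients of a generative model's log-likelihood (Fisher scores), can be illustrated with a deliberately simple multinomial model standing in for the paper's profile HMM. The toy alphabet, parameters, and function names below are invented for this sketch:

```python
import numpy as np

# Toy 4-letter alphabet; the real method uses a profile HMM over
# the 20 amino acids.
ALPHABET = "ACDE"

def fisher_score(seq, theta):
    # Gradient of log P(seq | theta) for an i.i.d. multinomial model:
    # d/d theta_i log(prod_t theta_{x_t}) = n_i / theta_i, where n_i is
    # the count of symbol i in seq (simplex-constraint term omitted).
    counts = np.array([seq.count(a) for a in ALPHABET], dtype=float)
    return counts / theta

def fisher_kernel(x, y, theta):
    # Inner product of Fisher scores. The paper's version additionally
    # weights by the inverse Fisher information matrix; omitted here.
    return float(fisher_score(x, theta) @ fisher_score(y, theta))

theta = np.full(len(ALPHABET), 0.25)  # uniform toy emission probabilities
k = fisher_kernel("ACCA", "ADEA", theta)
```

A kernel of this form can then be plugged into a support vector machine, which is the generative-plus-discriminative combination the abstract describes: the HMM supplies the feature map, the SVM supplies the classifier.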

Journal ArticleDOI
01 Apr 1999-Nature
TL;DR: In this article, optical and near-infrared observations of the afterglow of GRB990123 were presented, and a redshift of z ⩾ 1.6 was determined.
Abstract: Long-lived emission, known as afterglow, has now been detected from about a dozen γ-ray bursts. Distance determinations place the bursts at cosmological distances, with redshifts, z, ranging from ~1 to 3. The energy required to produce these bright γ-ray flashes is enormous: up to ~10^53 erg, or 10 per cent of the rest-mass energy of a neutron star, if the emission is isotropic. Here we present optical and near-infrared observations of the afterglow of GRB 990123, and we determine a redshift of z ⩾ 1.6. This is to date the brightest γ-ray burst with a well-localized position, and if the γ-rays were emitted isotropically, the energy release exceeds the rest-mass energy of a neutron star, challenging current theoretical models of the sources. We argue, however, that our data may provide evidence of beamed (rather than isotropic) radiation, thereby reducing the total energy released to a level where stellar-death models are still tenable.

Journal ArticleDOI
Abstract: The cosmological origin of at least an appreciable fraction of classical gamma-ray bursts (GRBs) is now supported by redshift measurements for a half-dozen faint host galaxies. Still, the nature of the central engine (or engines) that provide the burst energy remains unclear. While many models have been proposed, those currently favored are all based upon the formation of and/or rapid accretion into stellar-mass black holes. Here we discuss a variety of such scenarios and estimate the probability of each. Population synthesis calculations are carried out using a Monte Carlo approach in which the many uncertain parameters intrinsic to such calculations are varied. We estimate the event rate for each class of model as well as the propagation distances for those having significant delay between formation and burst production, i.e., double neutron star (DNS) mergers and black hole-neutron star (BH/NS) mergers. One conclusion is a 1-2 order of magnitude decrease in the rate of DNS and BH/NS mergers compared to that previously calculated using invalid assumptions about common envelope evolution. Other major uncertainties in the event rates and propagation distances include the history of star formation in the universe, the masses of the galaxies in which merging compact objects are born, and the radii of the hydrogen-stripped cores of massive stars. For reasonable assumptions regarding each, we calculate a daily event rate in the universe for (1) merging neutron stars: ~100 day^-1; (2) neutron star-black hole mergers: ~450 day^-1; (3) collapsars: ~10^4 day^-1; (4) helium star-black hole mergers: ~1000 day^-1; and (5) white dwarf-black hole mergers: ~20 day^-1. The range of uncertainty in these numbers, however, is very large, typically 2-3 orders of magnitude. These rates must additionally be multiplied by any relevant beaming factor (fΩ < 1) and sampling fraction (if the entire universal set of models is not being observed). 
Depending upon the mass of the host galaxy, one-half of the DNS mergers will happen within 60 kpc (for a galaxy with a mass comparable to that of the Milky Way) to 5 Mpc (for a galaxy with negligible mass) from the Galactic center. The same numbers characterize BH/NS mergers. Because of the delay time, neutron star and black hole mergers will happen at a redshift 0.5-0.8 times that of the other classes of models. Information is still lacking regarding the hosts of short, hard bursts, but we suggest that they are due to DNS and BH/NS mergers and thus will ultimately be determined to lie outside of galaxies and at a closer mean distance than long complex bursts (which we attribute to collapsars). In the absence of a galactic site, the distance to these bursts may be difficult to determine.
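The structure of such an uncertainty estimate can be illustrated with a toy Monte Carlo (the base rate, the uncertain factors, and their ranges below are placeholders, not the paper's inputs): each uncertain multiplicative factor is drawn log-uniformly over its assumed range, and the percentile spread of the resulting rates shows how a few order-of-magnitude uncertainties compound.

```python
import math
import random

def sample_rate(rng, base_rate=100.0,
                factor_ranges=((0.1, 10.0), (0.3, 3.0), (0.1, 10.0))):
    # Multiply a placeholder base rate by uncertain factors, each drawn
    # log-uniformly over its assumed range (illustrative ranges only).
    rate = base_rate
    for lo, hi in factor_ranges:
        rate *= math.exp(rng.uniform(math.log(lo), math.log(hi)))
    return rate

rng = random.Random(42)
rates = sorted(sample_rate(rng) for _ in range(10000))
p10, p90 = rates[1000], rates[9000]
# p90 / p10 spans well over an order of magnitude for these ranges.
```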

Journal ArticleDOI
TL;DR: In this article, the authors reconstruct late Oligocene to late Miocene pCO2 from εp values based on carbon isotopic analyses of diunsaturated alkenones and planktonic foraminifera from Deep Sea Drilling Project sites 588 and 608 and Ocean Drilling Program site 730.
Abstract: Changes in pCO2 or ocean circulation are generally invoked to explain warm early Miocene climates and a rapid East Antarctic ice sheet (EAIS) expansion in the middle Miocene. This study reconstructs late Oligocene to late Miocene pCO2 from εp values based on carbon isotopic analyses of diunsaturated alkenones and planktonic foraminifera from Deep Sea Drilling Project sites 588 and 608 and Ocean Drilling Program site 730. Our results indicate that highest pCO2 occurred during the latest Oligocene (∼350 ppmv) but decreased rapidly at ∼25 Ma. The early and middle Miocene was characterized by low pCO2 (260–190 ppmv). Lower intervals of pCO2 correspond to inferred organic carbon burial events and glacial episodes with the lowest concentrations occurring during the middle Miocene. There is no evidence for either high pCO2 during the late early Miocene climatic optimum or a sharp pCO2 decrease associated with EAIS growth. Paradoxically, pCO2 increased following EAIS growth and obtained preindustrial levels by ∼10 Ma. Although we emphasize an oceanographic control on Miocene climate, low pCO2 could have primed the climate system to respond sensitively to changes in heat and vapor transport.

Journal ArticleDOI
23 Sep 1999-Nature
TL;DR: It is proposed that SYD-2 regulates the differentiation of presynaptic termini, in particular the formation of the active zone, by acting as an intracellular anchor for RPTP signalling at synaptic junctions.
Abstract: At synaptic junctions, specialized subcellular structures occur in both pre- and postsynaptic cells. Most presynaptic termini contain electron-dense membrane structures1, often referred to as active zones, which function in vesicle docking and release2. The components of those active zones and how they are formed are largely unknown. We report here that a mutation in the Caenorhabditis elegans syd-2 (for synapse-defective) gene causes a diffused localization of several presynaptic proteins and of a synaptic-vesicle membrane associated green fluorescent protein (GFP) marker3,4. Ultrastructural analysis revealed that the active zones of syd-2 mutants were significantly lengthened, whereas the total number of vesicles per synapse and the number of vesicles at the prominent active zones were comparable to those in wild-type animals. Synaptic transmission is partially impaired in syd-2 mutants. syd-2 encodes a member of the liprin (for LAR-interacting protein) family of proteins which interact with LAR-type (for leukocyte common antigen related) receptor proteins with tyrosine phosphatase activity (RPTPs)5,6. SYD-2 protein is localized at presynaptic termini independently of the presence of vesicles, and functions cell autonomously. We propose that SYD-2 regulates the differentiation of presynaptic termini, in particular the formation of the active zone, by acting as an intracellular anchor for RPTP signalling at synaptic junctions.

Journal ArticleDOI
TL;DR: It is shown that application of harmonic alignment to scales involving syntactic relations and several substantive dimensions characterizes the universal markedness relations operative in this domain, and provides the constraints necessary for grammar construction.
Abstract: Among the most robust generalizations in syntactic markedness is the association of semantic role with person/animacy rank, discussed first in Silverstein (1976). The present paper explores how Silverstein's generalization might be expressed in a formal theory of grammar, and how it can play a role in individual grammars. The account, which focuses here on the role of person, is developed in Optimality Theory. Central to it are two formal devices which have been proposed in connection with phonology: harmonic alignment of prominence scales, and local conjunction of constraints. It is shown that application of harmonic alignment to scales involving syntactic relations and several substantive dimensions characterizes the universal markedness relations operative in this domain, and provides the constraints necessary for grammar construction. Differences between languages can be described as differences in the ranking of universal constraints.

Journal ArticleDOI
TL;DR: In this article, the authors investigate several approaches for constructing Monte Carlo realizations of the merging history of virialized dark matter haloes ('merger trees') using the extended Press-Schechter formalism.
Abstract: We investigate several approaches for constructing Monte Carlo realizations of the merging history of virialized dark matter haloes ('merger trees') using the extended Press-Schechter formalism. We describe several unsuccessful methods in order to illustrate some of the difficult aspects of this problem. We develop a practical method that leads to the reconstruction of the mean quantities that can be derived from the Press-Schechter model. This method is convenient, computationally efficient, and works for any power spectrum or background cosmology. In addition, we investigate statistics that describe the distribution of the number of progenitors and their masses as a function of redshift.
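The shape of the resulting data structure can be illustrated with a toy binary recursion (the uniform split below is a placeholder; the actual method draws progenitor masses from the extended Press-Schechter conditional mass function, and the step size, mass resolution, and redshift cutoff here are invented):

```python
import random

def merger_tree(mass, z, dz=0.2, m_res=0.05, z_max=4.0, rng=random):
    # Walk back in redshift, splitting the halo into (at most) two
    # progenitors; pieces below the resolution m_res count as smooth
    # accretion and are dropped.  A real EPS implementation would draw
    # the split from the conditional mass function, not uniformly.
    if z >= z_max or mass < 2 * m_res:
        return {"m": mass, "z": z, "progenitors": []}
    f = rng.uniform(0.0, 0.5)                  # smaller piece's mass fraction
    pieces = [p for p in (f * mass, (1.0 - f) * mass) if p >= m_res]
    return {"m": mass, "z": z,
            "progenitors": [merger_tree(p, z + dz, dz, m_res, z_max, rng)
                            for p in pieces]}
```

By construction, progenitor masses at each node sum to at most the node's mass (the deficit being sub-resolution accretion), which is the bookkeeping any merger-tree statistic is built on.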

Proceedings ArticleDOI
31 Oct 1999
TL;DR: Simulation results show that STAR is an order of magnitude more efficient than any topology-broadcast protocol, and four times more efficient than ALP, which was the most efficient table-driven routing protocol based on partial link-state information reported to date.
Abstract: We present the source-tree adaptive routing (STAR) protocol and analyze its performance in wireless networks with broadcast radio links. Routers in STAR communicate to the neighbors their source routing trees either incrementally or in atomic updates. Source routing trees are specified by stating the link parameters of each link belonging to the paths used to reach every destination. Hence, a router disseminates link-state updates to its neighbors for only those links along paths used to reach destinations. Simulation results show that STAR is an order of magnitude more efficient than any topology-broadcast protocol, and four times more efficient than ALP, which was the most efficient table-driven routing protocol based on partial link-state information reported to date. The results also show that STAR is even more efficient than the dynamic source routing (DSR) protocol, which has been shown to be one of the best performing on-demand routing protocols.
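The incremental-update idea can be illustrated with a toy data model (dictionaries mapping links to costs; this is illustrative, not STAR's wire format): a router diffs its previous source tree against the new one and sends neighbors only the changed links.

```python
def source_tree_update(old_tree, new_tree):
    # old_tree / new_tree: {(head, tail): cost} for links on paths the
    # router actually uses.  The update carries only additions, cost
    # changes, and deletions -- never the full topology.
    changed = {link: cost for link, cost in new_tree.items()
               if old_tree.get(link) != cost}
    deleted = sorted(link for link in old_tree if link not in new_tree)
    return changed, deleted

old = {("a", "b"): 1, ("b", "c"): 2}
new = {("a", "b"): 1, ("b", "c"): 3, ("c", "d"): 1}
update = source_tree_update(old, new)  # only (b,c) and (c,d) are reported
```

The unchanged link (a,b) never appears in the update, which is the source of the bandwidth savings over topology-broadcast protocols.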

Journal ArticleDOI
TL;DR: In this paper, the author reviews his research on self-persuasion with emphasis on its significance to current societal problems and suggests that self-persuasion can be used to alter negative attitudes and behavior in the population.
Abstract: Self-persuasion is an indirect technique of attitude change: in contrast with traditional direct techniques of persuasion, people are placed in situations where they are motivated to persuade themselves to change their own attitudes or behavior. Self-persuasion can produce long-term changes in attitudes and behavior. In this paper the author reviews his research on self-persuasion, with emphasis on its significance for current societal problems. In the theory of cognitive dissonance, the theory most commonly associated with self-persuasion, dissonance is aroused when an action conflicts with the individual's self-concept as a decent and rational person. Self-persuasion was a fundamental part of a basic research program aimed at testing derivations from dissonance theory; applications include condom use, water conservation, interpersonal attraction, and prejudice reduction. The jigsaw classroom experiments not only produced clear and powerful results but also improved the lives of the young people who took part. The article concludes that self-persuasion can be used to alter negative attitudes and behavior in the population.

Journal ArticleDOI
TL;DR: In this article, the authors present population synthesis calculations of these models using a Monte Carlo approach in which the many uncertain parameters intrinsic to such calculations are varied, and estimate the event rate for each class of model as well as the propagation distance for those having significant delay between formation and burst production.
Abstract: While many models have been proposed for GRBs, those currently favored are all based upon the formation of and/or rapid accretion into stellar-mass black holes. We present population synthesis calculations of these models using a Monte Carlo approach in which the many uncertain parameters intrinsic to such calculations are varied. We estimate the event rate for each class of model as well as the propagation distance for those having significant delay between formation and burst production, i.e., double neutron star (DNS) mergers and black hole-neutron star (BH/NS) mergers. For reasonable assumptions regarding the many uncertainties in population synthesis, we calculate a daily event rate in the universe for i) merging neutron stars: ~100/day; ii) neutron star-black hole mergers: ~450/day; iii) collapsars: ~10,000/day; iv) helium star-black hole mergers: ~1000/day; and v) white dwarf-black hole mergers: ~20/day. The range of uncertainty in these numbers, however, is very large, typically two to three orders of magnitude. These rates must additionally be multiplied by any relevant beaming factor and sampling fraction (if the entire universal set of models is not being observed). Depending upon the mass of the host galaxy, half of the DNS and BH/NS mergers will happen within 60 kpc (for a Milky Way-mass galaxy) to 5 Mpc (for a galaxy with negligible mass) of the galactic center. Because of the delay time, neutron star and black hole mergers will happen at a redshift 0.5 to 0.8 times that of the other classes of models. Information is still lacking regarding the hosts of short, hard bursts, but we suggest that they are due to DNS and BH/NS mergers and thus will ultimately be determined to lie outside of galaxies and at a closer mean distance than long complex bursts (which we attribute to collapsars).

Journal ArticleDOI
30 Jul 1999
TL;DR: In this article, the authors consider on-line density estimation with a parameterized density from the exponential family and prove bounds on the additional total loss of the online algorithm over the total loss in the off-line algorithm.
Abstract: We consider on-line density estimation with a parameterized density from the exponential family. The on-line algorithm receives one example at a time and maintains a parameter that is essentially an average of the past examples. After receiving an example the algorithm incurs a loss which is the negative log-likelihood of the example w.r.t. the past parameter of the algorithm. An off-line algorithm can choose the best parameter based on all the examples. We prove bounds on the additional total loss of the on-line algorithm over the total loss of the off-line algorithm. These relative loss bounds hold for an arbitrary sequence of examples. The goal is to design algorithms with the best possible relative loss bounds. We use a certain divergence to derive and analyze the algorithms. This divergence is a relative entropy between two exponential distributions.
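A minimal sketch for the simplest exponential-family member, the Bernoulli (the smoothing pseudo-counts a and b are illustrative choices; the paper's algorithms and bounds are more general): the on-line parameter is a smoothed average of the past examples, each example costs its negative log-likelihood under that parameter, and the off-line comparator uses the best fixed parameter in hindsight.

```python
import math

def online_vs_offline(xs, a=1.0, b=1.0):
    # On-line: predict with a smoothed running average of past examples,
    # pay -log p(x) under that parameter, then update.
    online, s = 0.0, 0.0
    for t, x in enumerate(xs):
        mu = (s + a) / (t + a + b)
        online += -math.log(mu if x else 1.0 - mu)
        s += x
    # Off-line comparator: the single best parameter chosen in hindsight.
    mu = s / len(xs)
    offline = sum(-math.log(mu if x else 1.0 - mu) for x in xs)
    return online, offline

on, off = online_vs_offline([1, 0, 1, 1, 0, 1])
# on - off is the relative (regret) term the paper's bounds control.
```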

Journal ArticleDOI
TL;DR: The approach of community food security (CFS) seeks to relink production and consumption, with the goal of ensuring both an adequate and accessible food supply in the present and the future as discussed by the authors.
Abstract: The American food system has produced both abundance and food insecurity, with production and consumption dealt with as separate issues. The new approach of community food security (CFS) seeks to re-link production and consumption, with the goal of ensuring both an adequate and accessible food supply in the present and the future. In its focus on consumption, CFS has prioritized the needs of low-income people; in its focus on production, it emphasizes local and regional food systems. These objectives are not necessarily compatible and may even be contradictory. This article describes the approach of community food security and raises some questions about how the movement can meet its goals of simultaneously meeting the food needs of low-income people and developing local food systems. It explores the conceptual and political promise and pitfalls of local, community-based approaches to food security and examines alternative economic strategies such as urban agriculture and community-supported agriculture. It concludes that community food security efforts are important additions to, but not substitutes for, a nonretractable governmental safety net that protects against food insecurity.