
Showing papers from "University of California, Santa Cruz" published in 1994


Journal ArticleDOI
K. Hagiwara, Ken Ichi Hikasa1, Koji Nakamura, Masaharu Tanabashi1, M. Aguilar-Benitez, Claude Amsler2, R. M. Barnett3, Patricia R. Burchat4, C. D. Carone5, C. Caso, G. Conforto6, Olav Dahl3, Michael Doser7, Semen Eidelman8, Jonathan L. Feng9, L. K. Gibbons10, Maury Goodman11, Christoph Grab12, D. E. Groom3, Atul Gurtu7, Atul Gurtu13, K. G. Hayes14, J. J. Herna`ndez-Rey15, K. Honscheid16, Christopher Kolda17, Michelangelo L. Mangano7, David Manley18, Aneesh V. Manohar19, John March-Russell7, Alberto Masoni, Ramon Miquel3, Klaus Mönig, Hitoshi Murayama3, Hitoshi Murayama20, S. Sánchez Navas12, Keith A. Olive21, Luc Pape7, C. Patrignani, A. Piepke22, Matts Roos23, John Terning24, Nils A. Tornqvist23, T. G. Trippe3, Petr Vogel25, C. G. Wohl3, Ron L. Workman26, W-M. Yao3, B. Armstrong3, P. S. Gee3, K. S. Lugovsky, S. B. Lugovsky, V. S. Lugovsky, Marina Artuso27, D. Asner28, K. S. Babu29, E. L. Barberio7, Marco Battaglia7, H. Bichsel30, O. Biebel31, Philippe Bloch7, Robert N. Cahn3, Ariella Cattai7, R. S. Chivukula32, R. Cousins33, G. A. Cowan34, Thibault Damour35, K. Desler, R. J. Donahue3, D. A. Edwards, Victor Daniel Elvira, Jens Erler36, V. V. Ezhela, A Fassò7, W. Fetscher12, Brian D. Fields37, B. Foster38, Daniel Froidevaux7, Masataka Fukugita39, Thomas K. Gaisser40, L. Garren, H.-J. Gerber12, Frederick J. Gilman41, Howard E. Haber42, C. A. Hagmann28, J.L. Hewett4, Ian Hinchliffe3, Craig J. Hogan30, G. Höhler43, P. Igo-Kemenes44, John David Jackson3, Kurtis F Johnson45, D. Karlen, B. Kayser, S. R. Klein3, Konrad Kleinknecht46, I.G. Knowles47, P. Kreitz4, Yu V. Kuyanov, R. Landua7, Paul Langacker36, L. S. Littenberg48, Alan D. Martin49, Tatsuya Nakada7, Tatsuya Nakada50, Meenakshi Narain32, Paolo Nason, John A. Peacock47, Helen R. Quinn4, Stuart Raby16, Georg G. Raffelt31, E. A. Razuvaev, B. Renk46, L. Rolandi7, Michael T Ronan3, L.J. Rosenberg51, Christopher T. Sachrajda52, A. I. Sanda53, Subir Sarkar54, Michael Schmitt55, O. Schneider50, Douglas Scott56, W. G. 
Seligman57, Michael H. Shaevitz57, Torbjörn Sjöstrand58, George F. Smoot3, Stefan M Spanier4, H. Spieler3, N. J. C. Spooner59, Mark Srednicki60, A. Stahl, Todor Stanev40, M. Suzuki3, N. P. Tkachenko, German Valencia61, K. van Bibber28, Manuella Vincter62, D. R. Ward63, Bryan R. Webber63, M R Whalley49, Lincoln Wolfenstein41, J. Womersley, C. L. Woody48, O. V. Zenin 
Tohoku University1, University of Zurich2, Lawrence Berkeley National Laboratory3, Stanford University4, College of William & Mary5, University of Urbino6, CERN7, Budker Institute of Nuclear Physics8, University of California, Irvine9, Cornell University10, Argonne National Laboratory11, ETH Zurich12, Tata Institute of Fundamental Research13, Hillsdale College14, Spanish National Research Council15, Ohio State University16, University of Notre Dame17, Kent State University18, University of California, San Diego19, University of California, Berkeley20, University of Minnesota21, University of Alabama22, University of Helsinki23, Los Alamos National Laboratory24, California Institute of Technology25, George Washington University26, Syracuse University27, Lawrence Livermore National Laboratory28, Oklahoma State University–Stillwater29, University of Washington30, Max Planck Society31, Boston University32, University of California, Los Angeles33, Royal Holloway, University of London34, Université Paris-Saclay35, University of Pennsylvania36, University of Illinois at Urbana–Champaign37, University of Bristol38, University of Tokyo39, University of Delaware40, Carnegie Mellon University41, University of California, Santa Cruz42, Karlsruhe Institute of Technology43, Heidelberg University44, Florida State University45, University of Mainz46, University of Edinburgh47, Brookhaven National Laboratory48, Durham University49, University of Lausanne50, Massachusetts Institute of Technology51, University of Southampton52, Nagoya University53, University of Oxford54, Northwestern University55, University of British Columbia56, Columbia University57, Lund University58, University of Sheffield59, University of California, Santa Barbara60, Iowa State University61, University of Alberta62, University of Cambridge63
TL;DR: This biennial Review summarizes much of Particle Physics using data from previous editions, plus 2205 new measurements from 667 papers, and features expanded coverage of CP violation in B mesons and of neutrino oscillations.
Abstract: This biennial Review summarizes much of Particle Physics. Using data from previous editions, plus 2205 new measurements from 667 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We also summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors, probability, and statistics. This edition features expanded coverage of CP violation in B mesons and of neutrino oscillations. For the first time we cover searches for evidence of extra dimensions (both in the particle listings and in a new review). Another new review is on Grand Unified Theories. A booklet is available containing the Summary Tables and abbreviated versions of some of the other sections of this full Review. All tables, listings, and reviews (and errata) are also available on the Particle Data Group website: http://pdg.lbl.gov.

5,143 citations


Journal ArticleDOI
TL;DR: In this paper, the authors derived analogues for the Airy kernel of the following properties of the sine kernel: the completely integrable system of P.D.E.'s, the expression of the Fredholm determinant in terms of a Painlevé transcendent, the existence of a commuting differential operator, and the fact that this operator can be used in the derivation of asymptotics, for general n, of the probability that an interval contains precisely n eigenvalues.
Abstract: Scaling level-spacing distribution functions in the “bulk of the spectrum” in random matrix models of N×N hermitian matrices and then going to the limit N→∞ leads to the Fredholm determinant of the sine kernel sin π(x−y)/π(x−y). Similarly a scaling limit at the “edge of the spectrum” leads to the Airy kernel [Ai(x)Ai′(y)−Ai′(x)Ai(y)]/(x−y). In this paper we derive analogues for this Airy kernel of the following properties of the sine kernel: the completely integrable system of P.D.E.'s found by Jimbo, Miwa, Mori, and Sato; the expression, in the case of a single interval, of the Fredholm determinant in terms of a Painlevé transcendent; the existence of a commuting differential operator; and the fact that this operator can be used in the derivation of asymptotics, for general n, of the probability that an interval contains precisely n eigenvalues.
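The Fredholm determinant of the Airy kernel on (s, ∞) is the probability that no rescaled eigenvalue exceeds s (the n = 0 case of the probability discussed above), i.e. the Tracy–Widom GUE distribution. As a minimal numeric illustration — not the paper's analytic method — it can be approximated by a Nyström-type quadrature discretization. Function names, the node count, and the upper cutoff below are illustrative choices.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import airy

def airy_kernel(x, y):
    """K(x, y) = (Ai(x)Ai'(y) - Ai'(x)Ai(y)) / (x - y), with the
    diagonal filled in by l'Hopital: K(x, x) = Ai'(x)^2 - x Ai(x)^2."""
    ax, axp, _, _ = airy(x)
    ay, ayp, _, _ = airy(y)
    with np.errstate(divide="ignore", invalid="ignore"):
        off = (ax * ayp - axp * ay) / (x - y)
    diag = axp**2 - x * ax**2
    return np.where(np.isclose(x, y), diag, off)

def fredholm_det_airy(s, n=60, cutoff=12.0):
    """det(I - K_Ai) on (s, cutoff): probability that the interval
    (s, cutoff) contains no eigenvalues, via Nystrom discretization
    with n Gauss-Legendre nodes (cutoff stands in for infinity)."""
    nodes, weights = leggauss(n)
    x = 0.5 * (cutoff - s) * (nodes + 1.0) + s   # map [-1, 1] -> (s, cutoff)
    w = 0.5 * (cutoff - s) * weights
    sw = np.sqrt(w)
    K = airy_kernel(x[:, None], x[None, :])
    return np.linalg.det(np.eye(n) - sw[:, None] * K * sw[None, :])
```

Because the kernel is smooth and decays superexponentially to the right, a few dozen quadrature nodes already give the determinant to high accuracy.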

1,923 citations


Proceedings ArticleDOI
01 Jun 1994
TL;DR: The high-resolution echelle spectrometer (HIRES) as discussed by the authors is a standard in-plane echelle spectrometer with grating post dispersion, permanently located at a Nasmyth focus.
Abstract: We describe the high resolution echelle spectrometer (HIRES) now in operation on the Keck Telescope. HIRES, which is permanently located at a Nasmyth focus, is a standard in-plane echelle spectrometer with grating post dispersion. The collimated beam diameter is 12 in, and the echelle is a 1 x 3 mosaic, 12 in by 48 in in total size, of 52.6 gr/mm, R-2.8 echelles. The cross disperser is a 2 x 1 mosaic, 24 in by 16 in in size. The camera is of a unique new design: a large (30 in aperture) f/1.0, all spherical, all fused silica, catadioptric system with superachromatic performance. It spans the entire chromatic range from 0.3 µm to beyond 1.1 µm, delivering 12.6-micron (rms) images, averaged over all colors and field angles, without refocus. The detector is a thinned, backside-illuminated, Tektronix 2048 x 2048 CCD with 24-micron pixels, which spans the spectral region from 0.3 µm to 1.1 µm with very high overall quantum efficiency. The limiting spectral resolution of HIRES is 67,000 with the present CCD pixel size. The overall 'throughput' (resolution x slit width) product achieved by HIRES is 39,000 arcseconds. Peak overall efficiency for the spectrograph (not including telescope and slit losses) is 13% at 6000 Å. Some first-light science activities, including quasar absorption line spectra, beryllium abundances in metal-poor stars, lithium abundances in brown-dwarf candidates, and asteroseismology are discussed.

1,703 citations


Journal ArticleDOI
TL;DR: In this article, detailed models for intermediate and old stellar populations are described and compared against a wide variety of available observations, including broadband magnitudes, spectral energy distributions, surface brightness fluctuation magnitudes and a suite of 21 absorption feature indices.
Abstract: The construction of detailed models for intermediate and old stellar populations is described. Input parameters include metallicity (−2 < [Fe/H] < 0.5), single-burst age (between 1.5 and 17 Gyr), and initial mass function (IMF) exponent. Quantities output include broadband magnitudes, spectral energy distributions, surface brightness fluctuation magnitudes, and a suite of 21 absorption feature indices. The models are checked against a wide variety of available observations. Examinations of model output yield the following conclusions. (1) If the percentage change Δage/ΔZ ≈ 3/2 for two populations, they will appear almost identical in most indices. A few indices break this degeneracy by being either more abundance sensitive (Fe4668, Fe5015, Fe5709, and Fe5782) or more age sensitive (G4300, Hβ, and presumably higher order Balmer lines) than usual. (2) Present uncertainties in stellar evolution are of the same magnitude as the effects of IMF and Y in the indices studied. (3) Changes in abundance ratios (like [Mg/Fe]) are predicted to be readily apparent in the spectra of old stellar populations. (4) The I-band flux of a stellar population is predicted to be nearly independent of metallicity and only modestly sensitive to age. The I band is therefore recommended for standard candle work or studies of M/L in galaxies. Other conclusions stem from this work. (1) Intercomparison of models and observations of two TiO indices seems to indicate variation of the [V/Ti] ratio among galaxies, but it is not clear how this observation ties into the standard picture of chemical enrichment. (2) Current estimates of [Fe/H] for the most metal-rich globulars that are based on integrated indices are probably slightly too high. (3) Colors of population models from different authors exhibit a substantial range. At solar metallicity and 13 Gyr, this range corresponds to an age error of roughly ±7 Gyr. Model colors from different authors applied in a differential sense have smaller uncertainties. (4) In the present models the dominant error for colors is probably the transformation from stellar atmospheric parameters to stellar colors. (5) Stellar B − V is difficult to model, and current spreads among different authors can reach 0.2 mag. (6) If known defects in the stellar flux library are corrected, the population model colors of this work in passbands redder than U would be accurate to roughly 0.03 mag in an absolute sense. These corrections are not made in the tables of model output.

1,665 citations


Journal ArticleDOI
TL;DR: It is shown that in addition to its functions during flower development, AP2 activity is also required during seed development, and this suggests that AP2 represents a new class of plant regulatory proteins that may play a general role in the control of Arabidopsis development.
Abstract: APETALA2 (AP2) plays a central role in the establishment of the floral meristem, the specification of floral organ identity, and the regulation of floral homeotic gene expression in Arabidopsis. We show here that in addition to its functions during flower development, AP2 activity is also required during seed development. We isolated the AP2 gene and found that it encodes a putative nuclear protein that is distinguished by an essential 68-amino acid repeated motif, the AP2 domain. Consistent with its genetic functions, we determined that AP2 is expressed at the RNA level in all four types of floral organs--sepals, petals, stamens, and carpels--and in developing ovules. Thus, AP2 gene transcription does not appear to be spatially restricted by the floral homeotic gene AGAMOUS as predicted by previous studies. We also found that AP2 is expressed at the RNA level in the inflorescence meristem and in nonfloral organs, including leaf and stem. Taken together, our results suggest that AP2 represents a new class of plant regulatory proteins that may play a general role in the control of Arabidopsis development.

1,008 citations


Journal ArticleDOI
TL;DR: In this article, one-dimensional, convective, vertical structure models and one-dimensional, time-dependent, radial diffusion models are combined to create a self-consistent picture in which FU Orionis outbursts occur in young stellar objects (YSOs) as the result of a large-scale, self-regulated, thermal ionization instability in the surrounding protostellar accretion disk.
Abstract: One-dimensional, convective, vertical structure models and one-dimensional, time-dependent, radial diffusion models are combined to create a self-consistent picture in which FU Orionis outbursts occur in young stellar objects (YSOs) as the result of a large-scale, self-regulated, thermal ionization instability in the surrounding protostellar accretion disk. Although active accretion disks have long been postulated to be ubiquitous among low-mass YSOs, few constraints have until now been imposed on physical conditions in these disks. By fitting the results of time-dependent disk models to observed timescales of FU Orionis events, we estimate the magnitude of the effective viscous stress in the inner disk (r ≲ 1 AU) to be, in accordance with an ad hoc 'alpha' prescription, the product of the local sound speed, pressure scale height, and an efficiency factor alpha of 10^-4 where hydrogen is neutral and 10^-3 where hydrogen is ionized. We hypothesize that all YSOs receive infall onto their outer disks which is steady (or slowly declining with time) and that FU Orionis outbursts are self-regulated disk outbursts which occur only in systems that transport matter inward at a rate sufficiently high to cause hydrogen to be ionized in the inner disk. We estimate a critical mass flux of dM_crit/dt = 5 × 10^-7 solar mass/yr, independent of the magnitude of alpha, for systems with one solar mass, three solar radius central objects. Infall accretion rates in the range dM_in/dt = (1-10) × 10^-6 solar mass/yr produce observed FU Orionis timescales consistent with estimates of spherical molecular cloud core collapse rates. Modeled ionization fronts are typically initiated near the inner edge of the disk and propagate out to a distance of several tens of stellar radii. Beyond this region, the disk transports mass steadily inward at the supplied constant infall rate. Mass flowing through the innermost disk annulus is equal to dM_in/dt only in a time-averaged sense and is regulated by the ionization of hydrogen in the inner disk such that long intervals (approximately 1000 yr) of low mass flux, (1-30) × 10^-8 solar mass/yr, are punctuated by short intervals (approximately 100 yr) of high mass flux, (1-30) × 10^-5 solar mass/yr. Timescales and mass fluxes derived for quiescent and outburst stages are consistent with estimates from observations of T Tauri and FU Orionis systems, respectively.
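The "alpha" prescription referred to above sets the effective viscosity from local disk quantities: ν = α c_s H, with sound speed c_s and pressure scale height H = c_s/Ω. A minimal sketch of the implied viscous timescale follows; the radius and temperature are illustrative example values, not numbers taken from the paper.

```python
import numpy as np

# Constants in cgs units.
G = 6.674e-8       # gravitational constant
M_SUN = 1.989e33   # solar mass, g
AU = 1.496e13      # astronomical unit, cm
K_B = 1.381e-16    # Boltzmann constant, erg/K
M_H = 1.673e-24    # hydrogen mass, g
YR = 3.156e7       # year, s

def viscous_timescale(r, T, alpha, m_star=M_SUN, mu=2.3):
    """t_visc ~ r^2 / nu, with nu = alpha * c_s * H and H = c_s / Omega."""
    omega = np.sqrt(G * m_star / r**3)   # Keplerian angular velocity
    c_s = np.sqrt(K_B * T / (mu * M_H))  # isothermal sound speed
    H = c_s / omega                      # pressure scale height
    return r**2 / (alpha * c_s * H)

# Illustrative inner-disk conditions (r = 0.1 AU, T = 2000 K):
t_neutral = viscous_timescale(0.1 * AU, 2000.0, 1e-4) / YR  # neutral H
t_ionized = viscous_timescale(0.1 * AU, 2000.0, 1e-3) / YR  # ionized H
```

Since t_visc scales as 1/α, the jump from α = 10^-4 to 10^-3 when hydrogen ionizes shortens the local evolution time by a factor of ten, which is the lever behind the outburst/quiescence cycling described in the abstract.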

832 citations


Journal ArticleDOI
TL;DR: Characteristics of higher plant terpenoids that result in mediation of numerous kinds of ecological interactions are discussed as a framework for this Symposium on Chemical Ecology of Terpenoids, and the role of terpenoid mixtures is emphasized.
Abstract: Characteristics of higher plant terpenoids that result in mediation of numerous kinds of ecological interactions are discussed as a framework for this Symposium on Chemical Ecology of Terpenoids. However, the role of terpenoid mixtures, either constitutive or induced, their intraspecific qualitative and quantitative compositional variation, and their dosage-dependent effects are emphasized in subsequent discussions. It is suggested that little previous attention to these characteristics may have contributed to terpenoids having been misrepresented in some chemical defense theories. Selected phytocentric examples of terpenoid interactions are presented: (1) defense against generalist and specialist insect and mammalian herbivores, (2) defense against insect-vectored fungi and potentially pathogenic endophytic fungi, (3) attraction of entomophages and pollinators, (4) allelopathic effects that inhibit seed germination and soil bacteria, and (5) interaction with reactive troposphere gases. The results are integrated by discussing how these terpenoids may be contributing factors in determining some properties of terrestrial plant communities and ecosystems. A terrestrial phytocentric approach is necessitated due to the magnitude and scope of terpenoid interactions. This presentation has a more broadly based ecological perspective than the several excellent recent reviews of the ecological chemistry of terpenoids.

811 citations


Journal ArticleDOI
TL;DR: In this article, a simulation of a 20 solar mass "delayed" supernova explosion is presented, where the authors follow the detailed evolution of material moving through the bubble at the late times appropriate to r-process nucleosynthesis.
Abstract: As a neutron star is formed by the collapse of the iron core of a massive star, its Kelvin-Helmholtz evolution is characterized by the release of gravitational binding energy as neutrinos. The interaction of these neutrinos with heated material above the neutron star generates a hot bubble in an atmosphere that is nearly in hydrostatic equilibrium and heated, after approximately 10 s, to an entropy of S/N_A k ≳ 400. The neutron-to-proton ratio for material moving outward through this bubble is set by the balance between neutrino and antineutrino capture on nucleons. Because the electron antineutrino spectrum at this time is hotter than the electron neutrino spectrum, the bubble is neutron-rich (0.38 ≲ Y_e ≲ 0.47). Previous work using a schematic model has shown that these conditions are well suited to the production of heavy elements by the r-process. In this paper we have advanced the numerical modeling of a 20 solar mass 'delayed' supernova explosion to the point that we can follow the detailed evolution of material moving through the bubble at the late times appropriate to r-process nucleosynthesis. The supernova model predicts a final kinetic energy for the ejecta of 1.5 × 10^51 ergs and leaves behind a remnant with a baryon mass of 1.50 solar mass (and a gravitational mass of 1.445 solar mass). We follow the thermodynamic and compositional evolution of 40 trajectories in ρ(t), T(t), Y_e(t) for a logarithmic grid of mass elements for the last approximately 0.03 solar mass to be ejected by the proto-neutron star, down to the last less than 10^-6 solar mass of material expelled up to approximately 18 s after core collapse. We find that an excellent fit to the solar r-process abundance distribution is obtained with no adjustable parameters in the nucleosynthesis calculations. Moreover, the abundances are produced in the quantities required to account for the present Galactic abundances. However, at earlier times, this one-dimensional model ejects too much material with entropies S/N_A k of approximately 50 and Y_e of approximately 0.46. This leads to an unacceptable overproduction of N = 50 nuclei, particularly Sr-88, Y-89, and Zr-90, relative to their solar abundances. We speculate on various means to avoid the early overproduction and/or ejection of N = 50 isotonic nuclei while still producing and ejecting the correct amount of r-process material.

693 citations


Journal ArticleDOI
TL;DR: In this article, the authors present an extensive study of the inception of supernova explosions by following the evolution of the cores of two massive stars (15 and 25 Solar mass) in multidimension.
Abstract: We present an extensive study of the inception of supernova explosions by following the evolution of the cores of two massive stars (15 and 25 Solar mass) in multidimension. Our calculations begin at the onset of core collapse and stop several hundred milliseconds after the bounce, at which time successful explosions of the appropriate magnitude have been obtained. Similar to the classical delayed explosion mechanism of Wilson, the explosion is powered by the heating of the envelope due to neutrinos emitted by the protoneutron star as it radiates the gravitational energy liberated by the collapse. However, as was shown by Herant, Benz, & Colgate, this heating generates strong convection outside the neutrinosphere, which we demonstrate to be critical to the explosion. By breaking a purely stratified hydrostatic equilibrium, convection moves the nascent supernova away from a delicate radiative equilibrium between neutrino emission and absorption. Thus, unlike what has been observed in one-dimensional calculations, explosions are rendered quite insensitive to the details of the physical input parameters such as neutrino cross sections or nuclear equation of state parameters. As a confirmation, our comparative one-dimensional calculations with identical microphysics, but in which convection cannot occur, lead to dramatic failures. Guided by our numerical results, we have developed a paradigm for the supernova explosion mechanism. We view a supernova as an open cycle thermodynamic engine in which a reservoir of low-entropy matter (the envelope) is thermally coupled and physically connected to a hot bath (the protoneutron star) by a neutrino flux, and by hydrodynamic instabilities. This paradigm does not invoke new or modified physics over previous treatments, but relies on compellingly straightforward thermodynamic arguments.
It provides a robust and self-regulated explosion mechanism to power supernovae that is effective under a wide range of physical parameters.

580 citations


Journal ArticleDOI
TL;DR: In this paper, the authors explore the triggering of starburst activity in disk galaxies which accrete low-mass dwarf companions, and show that the presence of a bulge component in the disk galaxy can suppress the radial gas flow and limit the strength of the associated starburst, depending on the overall mass profile.
Abstract: Using numerical simulation, we explore the triggering of starburst activity in disk galaxies which accrete low-mass dwarf companions. In response to the tidal perturbation of an infalling satellite, a disk galaxy develops a strong two-armed spiral pattern, which in turn drives large quantities of disk gas into its central regions. The global star formation rate stays constant during the early stages of an accretion, before rising rapidly by an order of magnitude when the central gas density becomes very large. The associated central starburst is quite compact. Models which include a bulge component in the disk galaxy show that the presence of a bulge can suppress the radial gas flow and limit the strength of the associated starburst, depending on the overall mass profile. The fact that such relatively common 'minor' mergers may trigger strong starburst activity suggests that many disk galaxies may have experienced starbursts at some point in their lifetime. Implications for galaxy evolution and formation are discussed.

470 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present an extensive study of the inception of supernova explosions by following the evolution of the cores of two massive stars (15 Msun and 25 Msun) in two dimensions.
Abstract: Condensed Abstract: We present an extensive study of the inception of supernova explosions by following the evolution of the cores of two massive stars (15 Msun and 25 Msun) in two dimensions. Our calculations begin at the onset of core collapse and stop several hundred milliseconds after the bounce, at which time successful explosions of the appropriate magnitude have been obtained. (...) Guided by our numerical results, we have developed a paradigm for the supernova explosion mechanism. We view a supernova as an open cycle thermodynamic engine in which a reservoir of low-entropy matter (the envelope) is thermally coupled and physically connected to a hot bath (the protoneutron star) by a neutrino flux, and by hydrodynamic instabilities. (...) In essence, a Carnot cycle is established in which convection allows out-of-equilibrium heat transfer mediated by neutrinos to drive low entropy matter to higher entropy and therefore extracts mechanical energy from the heat generated by gravitational collapse. We argue that supernova explosions are nearly guaranteed and self-regulated by the high efficiency of the thermodynamic engine. (...) Convection continues to accumulate energy exterior to the neutron star until a successful explosion has occurred. At this time, the envelope is expelled and therefore uncoupled from the heat source (the neutron star) and the energy input ceases. This paradigm does not invoke new or modified physics over previous treatments, but relies on compellingly straightforward thermodynamic arguments. It provides a robust and self-regulated explosion mechanism to power supernovae which is effective under a wide range of physical parameters.

Journal ArticleDOI
TL;DR: Results show that after having been trained on as few as 20 tRNA sequences from only two tRNA subfamilies, the model can discern general tRNA from similar-length RNA sequences of other kinds, can find secondary structure of new t RNA sequences, and can produce multiple alignments of large sets of tRNAs.
Abstract: Stochastic context-free grammars (SCFGs) are applied to the problems of folding, aligning and modeling families of tRNA sequences. SCFGs capture the sequences' common primary and secondary structure and generalize the hidden Markov models (HMMs) used in related work on protein and DNA. Results show that after having been trained on as few as 20 tRNA sequences from only two tRNA subfamilies (mitochondrial and cytoplasmic), the model can discern general tRNA from similar-length RNA sequences of other kinds, can find secondary structure of new tRNA sequences, and can produce multiple alignments of large sets of tRNA sequences. Our results suggest potential improvements in the alignments of the D- and T-domains in some mitochondrial tRNAs that cannot be fit into the canonical secondary structure.
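As a toy illustration of how an SCFG assigns probability to an RNA sequence while accounting for base-paired secondary structure, the inside algorithm below scores a minimal single-nonterminal grammar. The grammar, its rule probabilities, and the function name are all hypothetical and far simpler than the trained tRNA model described above.

```python
# Watson-Crick pairs allowed by the toy grammar's pairing rule.
PAIRS = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C")}

# Toy SCFG over one nonterminal S (hypothetical parameters):
#   S -> a S b   (p_pair: emits a Watson-Crick pair)
#   S -> a S     (p_left: emits an unpaired base)
#   S -> S S     (p_bif: bifurcation, both parts non-empty)
#   S -> eps     (p_end)
p_pair, p_left, p_bif, p_end = 0.4, 0.3, 0.1, 0.2
e_pair = 1.0 / len(PAIRS)   # uniform emission over the 4 allowed pairs
e_un = 0.25                 # uniform emission over A, C, G, U

def inside_probability(seq):
    """Inside (CYK-style) dynamic program: total probability, summed
    over all parses, that S derives seq under the toy grammar."""
    n = len(seq)
    P = [[0.0] * (n + 1) for _ in range(n + 1)]  # P[i][j] for seq[i:j]
    for i in range(n + 1):
        P[i][i] = p_end                          # empty span: S -> eps
    for span in range(1, n + 1):
        for i in range(0, n - span + 1):
            j = i + span
            total = p_left * e_un * P[i + 1][j]
            if span >= 2 and (seq[i], seq[j - 1]) in PAIRS:
                total += p_pair * e_pair * P[i + 1][j - 1]
            for k in range(i + 1, j):            # both halves non-empty
                total += p_bif * P[i][k] * P[k][j]
            P[i][j] = total
    return P[0][n]
```

Because the unpaired emissions are uniform here, any sequence collects at least the unpaired-chain probability; sequences whose ends can form Watson-Crick pairs pick up additional mass from the pairing rule, which is the mechanism that lets such models discern structured RNA from random sequences of the same length.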

Journal ArticleDOI
TL;DR: In this paper, it was shown that completely analogous properties hold for a class of kernels which arise when one rescales the Laguerre or Jacobi ensembles at the edge of the spectrum, namely the Bessel kernel of order α.
Abstract: Scaling models of random N×N hermitian matrices and passing to the limit N→∞ leads to integral operators whose Fredholm determinants describe the statistics of the spacing of the eigenvalues of hermitian matrices of large order. For the Gaussian Unitary Ensemble, and for many others as well, the kernel one obtains by scaling in the “bulk” of the spectrum is the “sine kernel” sin π(x−y)/π(x−y). Rescaling the GUE at the “edge” of the spectrum leads to the kernel (Ai(x)Ai′(y)−Ai′(x)Ai(y))/(x−y), where Ai is the Airy function. In previous work we found several analogies between properties of this “Airy kernel” and known properties of the sine kernel: a system of partial differential equations associated with the logarithmic differential of the Fredholm determinant when the underlying domain is a union of intervals; a representation of the Fredholm determinant in terms of a Painlevé transcendent in the case of a single interval; and, also in this case, asymptotic expansions for these determinants and related quantities, achieved with the help of a differential operator which commutes with the integral operator. In this paper we show that there are completely analogous properties for a class of kernels which arise when one rescales the Laguerre or Jacobi ensembles at the edge of the spectrum, namely (J_α(√x) √y J′_α(√y) − √x J′_α(√x) J_α(√y)) / (2(x−y)), where J_α(z) is the Bessel function of order α. In the cases α = ∓1/2 these become, after a variable change, the kernels which arise when taking scaling limits in the bulk of the spectrum for the Gaussian orthogonal and symplectic ensembles. In particular, an asymptotic expansion we derive will generalize ones found by Dyson for the Fredholm determinants of these kernels.

Journal ArticleDOI
23 Sep 1994-Science
TL;DR: In oceanic, coastal, and estuarine environments, traditional nitrogen-15 techniques were found to underestimate new and regenerated production by up to 74 and 50 percent, respectively.
Abstract: In oceanic, coastal, and estuarine environments, an average of 25 to 41 percent of the dissolved inorganic nitrogen (NH4+ and NO3–) taken up by phytoplankton is released as dissolved organic nitrogen (DON). Release rates for DON in oceanic systems range from 4 to 26 nanogram-atoms of nitrogen per liter per hour. Failure to account for the production of DON during nitrogen-15 uptake experiments results in an underestimate of gross nitrogen uptake rates and thus an underestimate of new and regenerated production. In these studies, traditional nitrogen-15 techniques were found to underestimate new and regenerated production by up to 74 and 50 percent, respectively. Total DON turnover times, estimated from DON release resulting from both NH4+ and NO3– uptake, were 10 ± 1, 18 ± 14, and 4 days for oceanic, coastal, and estuarine sites, respectively.

Journal ArticleDOI
TL;DR: In this article, the authors considered the case where the underlying set is the union of intervals and the determinants were thought of as functions of the end-points of the set.
Abstract: Orthogonal polynomial random matrix models of N×N hermitian matrices lead to Fredholm determinants of integral operators with kernel of the form (φ(x)ψ(y)−ψ(x)φ(y))/(x−y). This paper is concerned with the Fredholm determinants of integral operators having kernel of this form and where the underlying set is the union of intervals J = ∪_{j=1}^{m} (a_{2j−1}, a_{2j}). The emphasis is on the determinants thought of as functions of the end-points a_k.

Journal ArticleDOI
TL;DR: In this article, vertical concentration profiles of the dissolved and suspended particulate phases were determined for a suite of reactive trace metals, Al, Fe, Mn, Zn, and Cd, during summertime at a station in the center of the North Pacific gyre.

Journal ArticleDOI
TL;DR: A spectral approach to multi-way ratio-cut partitioning that provides a generalization of the ratio-cut cost metric to L-way partitioning and a lower bound on this cost metric is developed.
Abstract: Recent research on partitioning has focused on the ratio-cut cost metric, which maintains a balance between the cost of the edges cut and the sizes of the partitions without fixing the size of the partitions a priori. Iterative approaches and spectral approaches to two-way ratio-cut partitioning have yielded higher quality partitioning results. In this paper, we develop a spectral approach to multi-way ratio-cut partitioning that provides a generalization of the ratio-cut cost metric to L-way partitioning and a lower bound on this cost metric. Our approach involves finding the k smallest eigenvalue/eigenvector pairs of the Laplacian of the graph. The eigenvectors provide an embedding of the graph's n vertices into a k-dimensional subspace. We devise a time and space efficient clustering heuristic to coerce the points in the embedding into k partitions. Advancement over the current work is evidenced by the results of experiments on the standard benchmarks.
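The embed-then-cluster pipeline described above can be sketched as follows. Plain k-means with farthest-point seeding stands in for the paper's time- and space-efficient clustering heuristic, and the function name is illustrative.

```python
import numpy as np

def spectral_partition(adj, k, iters=100):
    """Multi-way spectral partitioning sketch: embed the n vertices with
    the k smallest eigenvectors of the graph Laplacian, then cluster the
    embedded points into k parts."""
    degree = adj.sum(axis=1)
    L = np.diag(degree) - adj              # combinatorial Laplacian D - A
    _, eigvecs = np.linalg.eigh(L)         # eigenvalues come back ascending
    X = eigvecs[:, :k]                     # n x k spectral embedding

    # Farthest-point seeding: deterministic, one center per separated cluster.
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)

    for _ in range(iters):                 # Lloyd (k-means) iterations
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels
```

On a graph made of dense clusters joined by a few bridge edges, the k smallest eigenvectors are close to indicator vectors of the clusters, so the embedded points form tight groups that any reasonable clustering step separates.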

Journal ArticleDOI
TL;DR: The exact behavior of a given protein at low pH is a complex interplay between a variety of stabilizing and destabilizing forces, some of which are very sensitive to the environment.
Abstract: A systematic investigation of the effect of acid on the denaturation of some 20 monomeric proteins indicates that several different types of conformational behavior occur, depending on the protein, the acid, the presence of salts or denaturant, and the temperature. Three major types of effects were observed. Type I proteins, when titrated with HCl in the absence of salts, show two transitions, initially unfolding in the vicinity of pH 3-4 and then refolding to a molten globule-like conformation, the A state, at lower pH. Two variations in this behavior were noted: some type I proteins, when titrated with HCl in the absence of salts, show only partial unfolding at pH 2 before the transition to the molten globule state; others of this class form an A state that is a less compact form of the molten globule state. In the presence of salts, these proteins transform directly from the native state to the molten globule conformation. Type II proteins, upon acid titration, do not fully unfold but directly transform to the molten globule state, typically in the vicinity of pH 3. Type III proteins show no significant unfolding at pH as low as 1, but may be caused to behave similarly to type I in the presence of urea. Thus, the exact behavior of a given protein at low pH is a complex interplay between a variety of stabilizing and destabilizing forces, some of which are very sensitive to the environment. In particular, the protein conformation is quite sensitive to salts (anions) that affect the electrostatic interactions, denaturants, and temperature, which cause additional global destabilization.

Journal ArticleDOI
TL;DR: In this article, R. W. Connell reexamines the schooling of children in poverty in several industrial countries and suggests that major rethinking is due that draws on two assets that have not been considered by policymakers in the past: the accumulated practical experience of teachers and parents with compensatory programs, and a much more sophisticated sociology of education.
Abstract: In this article, R. W. Connell reexamines the schooling of children in poverty in several industrial countries. He suggests that major rethinking is due that draws on two assets that have not been considered by policymakers in the past: the accumulated practical experience of teachers and parents with compensatory programs, and a much more sophisticated sociology of education. Connell uses these assets to question the social and educational assumptions behind the general design of compensatory programs, to propose an alternative way of thinking about children in poverty that is drawn from current practice and social research, and to explore some larger questions about the strategy of reform this rethinking implies. He goes on to demonstrate that compensatory programs may even reinforce the patterns that produce inequality, since they function within existing institutions that force children to compete although the resources on which they can draw are unequal. At the core of this process, according to Conn...

Journal ArticleDOI
TL;DR: In this article, a one-parameter family of models of stable spherical stellar systems in which the phase-space distribution function depends only on energy was described, which can be used to estimate the detectability of central black holes and the velocity-dispersion profiles of galaxies that contain central cusps, with or without a central black hole.
Abstract: We describe a one-parameter family of models of stable spherical stellar systems in which the phase-space distribution function depends only on energy. The models have similar density profiles in their outer parts (rho proportional to r(exp -4)) and central power-law density cusps, rho proportional to r(exp eta-3), 0 < eta <= 3. The family contains the Jaffe (1983) and Hernquist (1990) models as special cases. We evaluate the surface brightness profile, the line-of-sight velocity dispersion profile, and the distribution function, and discuss analogs of King's core-fitting formula for determining mass-to-light ratio. We also generalize the models to a two-parameter family, in which the galaxy contains a central black hole; the second parameter is the mass of the black hole. Our models can be used to estimate the detectability of central black holes and the velocity-dispersion profiles of galaxies that contain central cusps, with or without a central black hole.
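For reference, a one-parameter family with these limiting behaviors is the Dehnen γ-model family with γ = 3 − η; writing it with scale radius a and total mass M (a reconstruction from the stated limits, not quoted from the paper):

```latex
\rho_\eta(r) \;=\; \frac{\eta M}{4\pi}\,
  \frac{a}{r^{\,3-\eta}\,(r+a)^{\,\eta+1}},
\qquad 0 < \eta \le 3 ,
```

so that ρ ∝ r^(η−3) as r → 0 and ρ ∝ r^(−4) as r → ∞, with η = 1 recovering the Jaffe model and η = 2 the Hernquist model.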

Journal ArticleDOI
TL;DR: In this paper, a multi-method approach was used to investigate the chemical speciation of dissolved copper and nickel in South San Francisco Bay and determined dissolved copper speciation by four different analytical approaches: competitive ligand equilibration-cathodic stripping voltammetry [CLE-CSV], differential pulse anodic stripping voltammetry [DPASV], DPASV(TMF-RGCDE) and chelating resin column partitioning-graphite furnace atomic absorption spectrometry [CRCP-GFAAS].

Journal ArticleDOI
27 Oct 1994-Nature
TL;DR: In this article, the distance to the Virgo cluster of galaxies was measured using the Hubble Space Telescope and a distance of 17.1 ± 1.8 Mpc was derived.
Abstract: Accurate distances to galaxies are critical for determining the present expansion rate of the Universe or Hubble constant (H_0). An important step in resolving the current uncertainty in H_0 is the measurement of the distance to the Virgo cluster of galaxies. New observations using the Hubble Space Telescope yield a distance of 17.1 ± 1.8 Mpc to the Virgo cluster galaxy M100. This distance leads to a value of H_0 = 80 ± 17 km s^(−1) Mpc^(−1). A comparable value of H_0 is also derived from the Coma cluster using independent estimates of its distance ratio relative to the Virgo cluster.
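The arithmetic behind the quoted value is just Hubble's law, H_0 = v/d. The recession velocity is not given in the abstract, so the ~1370 km/s used below is back-solved from the quoted numbers purely for illustration:

```python
# Hubble's law: v = H0 * d, so H0 = v / d.
# The Virgo flow-corrected velocity is not stated in the abstract;
# ~1370 km/s is assumed here only to illustrate the arithmetic.
v_kms = 1370.0   # assumed cosmic recession velocity of Virgo (km/s)
d_mpc = 17.1     # Cepheid distance to M100 (Mpc), from the abstract
H0 = v_kms / d_mpc
print(round(H0))  # close to the quoted 80 km/s/Mpc
```

The quoted ±17 km/s/Mpc uncertainty combines the distance error with uncertainties in the velocity correction, which this sketch ignores.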

Journal ArticleDOI
TL;DR: The authors applied a new twist on cognitive dissonance theory to the problem of AIDS prevention among sexually active young adults and found that the induction of hypocrisy would motivate subjects to reduce dissonance by purchasing condoms at the completion of the experiment.
Abstract: This experiment applied a new twist on cognitive dissonance theory to the problem of AIDS prevention among sexually active young adults. Dissonance was created after a proattitudinal advocacy by inducing hypocrisy: having subjects publicly advocate the importance of safe sex and then systematically making the subjects mindful of their own past failures to use condoms. It was predicted that the induction of hypocrisy would motivate subjects to reduce dissonance by purchasing condoms at the completion of the experiment. The results showed that more subjects in the hypocrisy condition bought condoms and also bought more condoms, on average, than subjects in the control conditions. The implications of the hypocrisy procedure for AIDS prevention programs and for current views of dissonance theory are discussed.

Journal ArticleDOI
TL;DR: In this article, the evolution of the bright Type II supernova discovered last year in M81, SN 1993J, is consistent with that expected for the explosion of a star which on the main sequence had a mass of 13-16 Solar Mass but which, owing to mass exchange with a binary companion (a initially approximately 3-5 AU, depending upon the actual presupernova radius and the masses of the two stars) lost almost all of its hydrogen-rich envelope during late helium burning.
Abstract: The evolution of the bright Type II supernova discovered last year in M81, SN 1993J, is consistent with that expected for the explosion of a star which on the main sequence had a mass of 13-16 Solar Mass but which, owing to mass exchange with a binary companion (a initially approximately 3-5 AU, depending upon the actual presupernova radius and the masses of the two stars) lost almost all of its hydrogen-rich envelope during late helium burning. At the time of explosion, the helium core mass was 4.0 +/- 0.5 Solar Mass and the hydrogen envelope, 0.20 +/- 0.05 Solar Mass. The envelope was helium and nitrogen-rich (carbon-deficient) and the radius of the star, 4 +/- 1 x 10(exp 13) cm. The luminosity of the presupernova star was 3 +/- 1 x 10(exp 38) ergs/s, with the companion star contributing an additional approximately 10(exp 38) ergs/s. The star may have been a pulsating variable at the time of the explosion. For an explosion energy near 10(exp 51) ergs (KE at infinity) and an assumed distance of 3.3 Mpc, a mass of Ni-56 in the range 0.07 +/- 0.01 Solar Mass was produced and ejected. This prescription gives a light curve which compares favorably with the bolometric observations. Color photometry is more restrictive and requires a model in which the hydrogen-envelope mass is low and the mixing of hydrogen inward has been small, but in which appreciable Ni-56 has been mixed outward into the helium and heavy-element core. It is possible to obtain good agreement with B and V light curves during the first 50 days, but later photometry, especially in bands other than B and V, will require a non-local thermodynamic equilibrium (non-LTE) spectral calculation for comparison. Based upon our model, we predict a flux of approximately 10(exp -5)(3.3 Mpc/D)(exp 2) photons/sq cm/s in the 847 keV line of Co-56 at peak during 1993 August.
It may be easier to detect the Comptonized continuum, which peaks at a few times 10(exp -4) photons/s/sq cm/MeV at 40 keV a few months after the explosion (though neither of these signals was, or should have been, detected by the Compton Gamma Ray Observatory (CGRO)). The presupernova star was filling its Roche lobe at the time of the explosion and thus its envelope was highly deformed (about 3:2). The companion star is presently embedded in the supernova, but should become visible at age 3 yr (perhaps earlier in the ultraviolet) when the supernova has faded below 10(exp 38) ergs/s. Indeed, if 'kicks' have not played an important role, it is still bound to the neutron star.

Journal ArticleDOI
TL;DR: From a review of the physiological and psychological evidence, it is concluded that no subtraction, compensation, or evaluation need take place and the problem for which these solutions were developed turns out to be a false one.
Abstract: We identify two aspects of the problem of maintaining perceptual stability despite an observer's eye movements. The first, visual direction constancy, is the (egocentric) stability of apparent positions of objects in the visual world relative to the perceiver. The second, visual position constancy, is the (exocentric) stability of positions of objects relative to each other. We analyze the constancy of visual direction despite saccadic eye movements. Three information sources have been proposed to enable the visual system to achieve stability: the structure of the visual field, proprioceptive inflow, and a copy of neural efference or outflow to the extraocular muscles. None of these sources by itself provides adequate information to achieve visual direction constancy; present evidence indicates that all three are used. Our final question concerns how information processing operations result in a stable world. The three traditionally suggested means have been elimination, translation, or evaluation. All are rejected. From a review of the physiological and psychological evidence we conclude that no subtraction, compensation, or evaluation need take place. The problem for which these solutions were developed turns out to be a false one. We propose a "calibration" solution: correct spatiotopic positions are calculated anew for each fixation. Inflow, outflow, and retinal sources are used in this calculation: saccadic suppression of displacement bridges the errors between these sources and the actual extent of movement.

Journal ArticleDOI
01 Jun 1994-Nature
TL;DR: In this article, the authors present mantle shear-wave impedance profiles obtained from multiple-ScS reverberation mapping for corridors connecting western Pacific subduction zone earthquakes with digital seismograph stations in eastern China, imaging a ∼5.8% impedance decrease roughly 330 km beneath the Sea of Japan, Yellow Sea and easternmost Asia.
Abstract: LABORATORY results demonstrating that basic to ultrabasic melts become denser than olivine-rich mantle at pressures above 6 GPa (refs 1–3) have important implications for basalt petrogenesis, mantle differentiation and the storage of volatiles deep in the Earth. A density cross-over between melt and solid in the extensively molten Archaean mantle has been inferred from komatiitic volcanism4–6 and major-element mass balances7, but present-day evidence of dense melt below the seismic low-velocity zone is lacking. Here we present mantle shear-wave impedance profiles obtained from multiple-ScS reverberation mapping for corridors connecting western Pacific subduction zone earthquakes with digital seismograph stations in eastern China, imaging a ∼5.8% impedance decrease roughly 330 km beneath the Sea of Japan, Yellow Sea and easternmost Asia. We propose that this represents the upper surface of a layer of negatively buoyant melt lying on top of the olivine→β-phase transition (the 410-km seismic discontinuity). Volatile-rich fluids expelled from the partial melt zone as it freezes may migrate upwards, acting as metasomatic agents8,9 and perhaps as the deep 'proto-source' of kimberlites10,11. The remaining, dense, crystalline fraction would then concentrate above 410 km, producing a garnet-rich layer that may flush into the transition zone.

Journal ArticleDOI
TL;DR: The conventional distinction between economic and engineering approaches to energy analysis obscures key methodological issues concerning the measurement of the costs and benefits of policies to promote the adoption of energy-efficient technologies.

Journal ArticleDOI
TL;DR: In this paper, the power of any test of an environmental impact is simultaneously constrained by the variability of the data, the magnitude of the putative impact, and the number of independent sampling events.
Abstract: The power of any test of an environmental impact is simultaneously constrained by (1) the variability of the data, (2) the magnitude of the putative impact, and (3) the number of independent sampling events. In the context of the Before-After-Control-Impact design with Paired sampling (BACIP), the variability of interest is the temporal variation in the estimated differences in a parameter (e.g., population density) between two unperturbed sites. The challenges in designing a BACIP study are to choose appropriate parameters to measure and to determine the adequate number and timing of sampling events. Two types of studies that are commonly conducted can provide useful information in designing a BACIP study. These are (1) long-term studies that provide estimates of the natural temporal and spatial variability of environmental parameters and (2) spatial surveys around already-perturbed areas ("After-only" studies) that can suggest the magnitude of impacts. Here we use data from a long-term study and an After-only study to illustrate their potential contributions to the design of BACIP studies. The long-term study of parameters sampled at two undisturbed sites yielded estimates of natural temporal variability. Between-site differences in chemical-physical parameters (e.g., elemental concentration) and in individual-based biological parameters (e.g., body size) were quite consistent through time, while differences in population-based parameters (e.g., density) were more variable. Serial correlation in the time series of differences was relatively small and did not appear to vary among the parameter groups. The After-only study yielded estimates of the magnitude of impacts through comparison of sites near and distant from a point-source discharge. The estimated magnitude of effects was greatest for population-based parameters and least for chemical-physical parameters, which tended to balance the statistical power associated with these two parameter groups. Individual-based parameters were intermediate in estimates of effect size. Thus, the ratio of effect size to variability was greatest for individual-based parameters and least for population and chemical-physical parameters. The results suggest that relatively few of the population and chemical-physical parameters could provide adequate power given the time constraints of most studies. This indicates that greater emphasis on individual-based parameters is needed in field assessments of environmental impacts. It will be critical to develop and test predictive models that link these impacts with effects on populations.
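The trade-off among the three constraints (variability, effect size, and number of sampling events) can be sketched with a normal-approximation power calculation for a paired design. This is a generic illustration under stated assumptions, not the analysis used in the paper, and `paired_power` is a hypothetical helper:

```python
from statistics import NormalDist

def paired_power(effect, sd_diff, n, alpha=0.05):
    """Approximate power of a two-sided paired test (normal approximation).

    effect  -- magnitude of the putative impact (mean of the differences)
    sd_diff -- temporal variability of the Control-Impact differences
    n       -- number of independent sampling events
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)       # two-sided critical value
    ncp = abs(effect) / sd_diff * n ** 0.5  # standardized signal
    # The rejection probability in the opposite tail is negligible
    # for any reasonable ncp and is ignored here.
    return z.cdf(ncp - z_crit)
```

Increasing either the effect size or the number of sampling events raises power, while larger temporal variability lowers it, which is why parameters with a high ratio of effect size to variability (the individual-based parameters above) are the most powerful choices.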

Journal ArticleDOI
TL;DR: The authors use the notion of "margins" as a conceptual site from which to explore the imaginative quality and the specificity of local/global cultural formation, where contradictory discourses overlap, or where discrepant kinds of meaning-making converge.
Abstract: How can anthropology benefit from cultural studies and cultural studies benefit from anthropology? One area in which these two scholarly trajectories work best together is in theorizing the interface of local and global frames of analysis. The challenge here is to move from situated, that is "local," controversies to widely circulating or "global" issues of power and knowledge and back, as this allows us to develop understandings of the institutions and dialogues in which both local and global cultural agendas are shaped. This essay is about margins as a conceptual site from which to explore the imaginative quality and the specificity of local/global cultural formation. Margins here are not a geographical, descriptive location. Nor do I refer to margins as the sites of deviance from social norms. Instead, I use the term to indicate an analytic placement that makes evident both the constraining, oppressive quality of cultural exclusion and the creative potential of rearticulating, enlivening, and rearranging the very social categories that peripheralize a group's existence. Margins, in this use, are sites from which we see the instability of social categories. One way of thinking about this agenda is to imagine a conversation between the approaches of Michel Foucault and Antonio Gramsci. Foucault shows us the discursive construction of social categories and forms of subjectivity. Gramsci reminds us that these categories and their associated kinds of agency have no unquestioned hegemony. Where Gramsci assumes too much about self-evident political interests that produce resistance and social transformation, Foucault, in showing the convention-laden assumptions behind resistance, obscures the suspense that infects possibilities of change. My interest is in the zones of unpredictability at the edges of discursive stability, where contradictory discourses overlap, or where discrepant kinds of meaning-making converge; these are what I call margins. 
Attention to marginality highlights both the play and constraint of subordinate social positions. In the United States, for example, minorities are marginalized by exclusion from the assumption of being ordinary, and often from the jobs, housing, or political opportunities that "ordinary" (white) people expect. At the

Journal ArticleDOI
TL;DR: In this article, the authors argue that industrial restructuring debates provide an inadequate conceptual architecture for analyses of the dynamics of change in agrarian production structures and rural spatial organisation, and also interrogate the theoretical foundations of related literatures concerned with international food regimes, the new internationalisation of agriculture, the repositioning of agriculture industry relations, and the reconfiguration of rural space.
Abstract: Recent analyses of the restructuring of the agro‐food system draw uncritically on the industrial restructuring literature, notably regulation theory and Fordism/post‐Fordism debates on capitalist transition. We question the extension of this periodisation and conceptual framework to the political economy of agrarian restructuring. We also interrogate the theoretical foundations of several related literatures concerned with international food regimes, the ‘new internationalisation’ of agriculture, the repositioning of agriculture‐industry relations, and the reconfiguration of rural space. Our general argument is that the industrial restructuring debates provide an inadequate conceptual architecture for analyses of the dynamics of change in agrarian production structures and rural spatial organisation.